\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}
Spike trains recorded from nerve cells vary in their degree of
regularity. Some, like Purkinje cells, emit regular tonic pulses; others
show very irregular spike trains, such as ``stuttering cells'' or
``irregular spiking cells'' ~{[}1--5{]}. The ubiquitous irregularity of
action-potential (AP) firing in nerve cells has been noted early on
~{[}6{]}, and the functional implications are debated. In some cases,
noise has been deemed an obstacle for reliable responses ~{[}7{]}, while
other studies have conversely highlighted its beneficial role in
creating fast or information-optimal responses ~{[}8{]}. Indeed, nervous
systems may well harbour both: situations where irregularity
facilitates neuronal function ~{[}9{]}, and others where it is
detrimental. While the functional debate is ongoing, the phenomenology
of irregular spiking has not been completely characterised, let alone
its mechanisms quantitatively understood.
The spike patterns emitted by a neuron are influenced by the synaptic
and intrinsic fluctuations in conjunction with the neuron's intrinsic
dynamics. Thus, two major sources of irregularity are conceivable: Some
irregular neurons are simply subject to strong fluctuations, caused by
intrinsic ion-channel noise or by synaptic bombardment, which increase
their interspike interval (ISI) variability ~{[}10{]}. For others, the
deterministic dynamics shows a bistability with coexistence of resting
state and spiking. This leads to increased variability even at moderate
noise levels. For an exemplary voltage trace of the latter case see
\cref{fig:fIcurve_snl}(a). The goal of this article is to characterise
the spiking statistics of such a bistability that arises at a
\emph{saddle-homoclinic-orbit} (HOM) bifurcation, see
\cref{fig:fIcurve_snl}(b) and (c). The HOM bifurcation is a universal
element of the fundamental bifurcation structure of all
conductance-based neuron models ~{[}11,12{]}. HOM excitability shows, to
some extent, behaviour intermediate between the classical type I and
type II excitability ~{[}13,14{]}.
\begin{figure}
\hypertarget{fig:fIcurve_snl}{%
\centering
\includegraphics{Figure1.pdf}
\caption{Bistability in homoclinic neurons. (a) Voltage trace of a
homoclinic neuron (model definition in Sec. \ref{app:model}, with gating
time constant of \(\tau_n = 0.16\mathrm{ms}\)) driven with
\(I=4.4 \mu\mathrm{A/cm}^2\) and a noise strength of
\(\sigma=24\mathrm{mV}/\sqrt{\mathrm{s}}\) (\(\frac{\sigma^2}2\) is the
voltage diffusion constant in millivolt per second). (b) The homoclinic
regime is reached by decreasing the time scale of the gating variable,
\(\tau_n\). Going from the SNIC to the homoclinic regime, the neuron
passes the codimension-two SNL bifurcation, which is common to all
class-1 excitable neurons. The SNL bifurcation thus acts as a gate to
the bistable, homoclinic spiking regime (shaded area). (c) Phase plots
(gating variable versus voltage) of neurons in different dynamical
regimes, see panel 1(b) for parameter-space organization. (d) Horizontal
transversal of the bifurcation diagram in 1(b) below the SNL point.
Bistability of rest and spiking leads to hysteresis in the
frequency-input curve of homoclinic neurons. Within the bistable region,
noise can switch between rest and spiking.}\label{fig:fIcurve_snl}
}
\end{figure}
The bistability of HOM neurons leads to a hysteresis in the firing-rate
versus input curve, see \cref{fig:fIcurve_snl}(d): Ramping up the input
current, \(I\), the neuron stays at rest until the resting state loses
stability at \({I_{\mathrm{sn}}}\). Conversely, when ramping down the
input current, the neuron remains spiking until the limit cycle (LC)
disappears at \({I_{\mathrm{hom}}}\). The region of bistability extends
from \({I_{\mathrm{hom}}}\) to \({I_{\mathrm{sn}}}\). Consequently,
noise-free (deterministic) neurons are easily probed experimentally for
hysteresis effects by ramp currents, which may then serve as an
indicator for bistable membrane states. In experiments, many biological neuron
types, however, may evade this kind of screening due to their high
degree of stochasticity, in particular when their intrinsic noise causes
them to constantly jump between attraction domains, resulting in a
``mixed state'' that is insensitive to the direction of parameter
change. Yet, the change in interspike interval statistics and the
switching probability between the two attractors that is derived in this
article can still provide insight into the presence of bistability. As
both measures can be estimated from recordings of biological neurons,
they can be used to differentiate neurons in which the irregularity is
solely due to noise from those in which the irregularity is enhanced by
a bistability of the intrinsic dynamics. This might be particularly
interesting when relating single-cell dynamics to the up- and downstates
observed at network level. Previously, up- and downstates on the
single-cell level have been modelled by a bistability of two fixed points
in the membrane voltage ~{[}15--18{]}. The setting considered here is
different, with a bistability between a fixed point of the membrane
voltage (the resting state) and a limit cycle (spiking dynamics); yet if
the upstate shows fast spiking behaviour, it would be difficult to
distinguish from the two-fixed-points case.
In the space spanned by the three fundamental parameters of
conductance-based neuron models (\emph{i.e.}, membrane leak, capacitance
and input current), a 2D-manifold of HOM bifurcations unfolds from the
degenerate Bogdanov-Takens cusp point, which was proven to generically
occur in these models ~{[}19{]}. Starting with a model showing the
common \emph{saddle-node on invariant cycle} (SNIC) bifurcation at the
creation of the spiking limit cycle, a decrease of the separation of
timescales between voltage and gating kinetics switches the limit cycle
creation to a HOM bifurcation along with the emergence of a bistability
~{[}12{]}, see \cref{fig:fIcurve_snl}(b). The switch in the bifurcation
that creates the spiking limit cycle happens at the codimension-two
\emph{saddle-node loop} (SNL) bifurcation.\footnote{This codim-2
bifurcation is known as saddle-node separatrix loop ~{[}20{]},
saddle-node homoclinic orbit ~{[}21{]}, non-central homoclinic loop to
a saddle-node ~{[}22{]}, orbit flip ~{[}23{]} bifurcation or
saddle-node loop in some neurobiological context ~{[}24{]}.} It can be
induced by many fundamental parameters in neuronal systems ranging from
leak conductance, capacitance and temperature changes, to modifications
of extracellular potassium concentration ~{[}11{]}. Most importantly for
the present article, between the HOM and the SN branch emerging from the
SNL bifurcation there exists a region of bistability. Besides HOM neurons,
bistability between rest and spiking also occurs in neuron models that
undergo a \emph{subcritical Hopf} bifurcation, followed by a \emph{fold
of limit cycles} at their firing onset. The spike statistics of these
neuron models has previously been explored numerically ~{[}25,26{]}. The
spike statistics for SNIC neurons (upper part of the bifurcation diagram
in \cref{fig:fIcurve_snl}(b)) is well characterised both for the
excitable dynamics, \emph{i.e.}, \(I<{I_{\mathrm{sn}}}\) (fluctuation
driven ~{[}10{]}), and the limit cycle dynamics, where
\(I>{I_{\mathrm{sn}}}\) (mean driven ~{[}27{]}). The statistics in the
bistable region of HOM neurons, however, is less well studied and will
be explored in the following. The derivation of the associated interspike
interval statistics fills a gap of knowledge and provides the means to
differentiate alternative underlying bifurcation structures using spike
statistics. In particular, the following analysis focuses on the
situation where the perturbing noise is weak such that the time
evolution is still dominated by the attractors of the nonlinear
dynamical system, with noise only switching between them.
Sec. \ref{sec:mod} introduces the model for which in Sec. \ref{sec:isi}
the interspike interval density is derived. To this end, the stochastic
trajectories are projected onto the unstable manifold of the saddle, see
Sec. \ref{sec:coord}. In this coordinate system, the statistics of
intermittent silence, burst firing and switching between these regimes
are calculated in Secs. \ref{sec:inter}, \ref{sec:intra} and
\ref{sec:splitprop}, respectively. Estimation of the probability of
switching is discussed and the relation to ISI moments is presented in
Sec. \ref{sec:est}. A comparison to a second kind of bistability is
drawn in Sec. \ref{sec:hopf}, and the emergence of multimodal ISI
densities as a means of distinguishing between them is addressed in Sec.
\ref{sec:multimodal}.
\hypertarget{conductance-based-neuron-model-with-homoclinic-bistability}{%
\section{Conductance-based neuron model with homoclinic
bistability}\label{conductance-based-neuron-model-with-homoclinic-bistability}}
\label{sec:mod}
In the bistable regime, transitions between two stable attractors can be
induced by noise fluctuations. The associated transition probability
between the two attractors as well as the resulting spike statistics is
derived in the following for a generic class of conductance-based neuron
models with additive white noise and the limit cycle spike emerging from
a HOM bifurcation. The analysis focuses on HOM neurons that are close to
the SNL bifurcation, which allows for useful assumptions as introduced
later.
The present analysis considers an \(n\)-dimensional conductance-based
neuron model with one voltage dimension, the membrane voltage \(v\), and
a set of \(n-1\) ion channel gates \(a_i\). The dynamics of the state
vector \(\boldsymbol x=[v,a_1,...,a_{n-1}]^\top\in{I\!\!R}^n\) is given
by
\begin{equation}\dot{\boldsymbol x}=\boldsymbol F(\boldsymbol x)+\boldsymbol D(\boldsymbol x)\boldsymbol\xi(t).\label{eq:dynamics}\end{equation}
The additive noise \(\boldsymbol D(\boldsymbol x)\boldsymbol\xi\)
originates from a diffusion approximation of either synaptic or
intrinsic noise sources. The voltage dynamics follows a current-balance
equation
\(F_1(\boldsymbol x)=(I-I_\mathrm{ion}(\boldsymbol x))/{C_{\mathrm{m}}}\),
with membrane capacitance \({C_{\mathrm{m}}}\), and the gates have first
order kinetics, see also Appendix, Sec. \ref{app:model} for model
details. Details on the simulations are also stated in Appendix, Sec.
\ref{app:model}.
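For concreteness, the stochastic integration of \cref{eq:dynamics} can
be sketched with an Euler-Maruyama scheme. The following Python snippet
is a minimal illustration only: the ionic current, the gating function
and all parameter values are hypothetical placeholders (they do not
reproduce the appendix model), and the noise is taken as additive on the
voltage dimension only, \(\boldsymbol D=\mathrm{diag}(\sigma,0)\).
\begin{verbatim}
import numpy as np

# Hypothetical placeholder parameters (not the appendix model).
C_m, tau_n, I, sigma = 1.0, 0.16, 4.4, 24.0
g_l, E_l, g_k, E_k = 8.0, -80.0, 10.0, -90.0

def n_inf(v):
    # placeholder steady-state activation curve of the gating variable
    return 1.0 / (1.0 + np.exp(-(v + 25.0) / 5.0))

def F(x):
    # deterministic drift: current-balance equation plus first-order gating
    v, n = x
    I_ion = g_l * (v - E_l) + g_k * n * (v - E_k)
    return np.array([(I - I_ion) / C_m, (n_inf(v) - n) / tau_n])

def simulate(x0, dt=1e-3, n_steps=200_000, seed=0):
    # Euler-Maruyama integration with additive voltage noise
    rng = np.random.default_rng(seed)
    x, out = np.array(x0, float), np.empty((n_steps, 2))
    for i in range(n_steps):
        x = x + F(x) * dt + np.array([sigma, 0.0]) * rng.standard_normal(2) * np.sqrt(dt)
        out[i] = x
    return out

trace = simulate([-65.0, 0.0])
\end{verbatim}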
The analysis assumes that the model shows a HOM bifurcation from which
the limit cycle spike emerges. A large class of conductance-based neuron
models can be tuned into this regime ~{[}11{]}. In HOM neurons, the
limit cycle (corresponding to tonic firing) arises at
\(I={I_{\mathrm{hom}}}\) from a homoclinic orbit to the saddle, and at
\(I = {I_{\mathrm{sn}}}>{I_{\mathrm{hom}}}\), saddle and stable node
(corresponding to the neuron's resting state) collide in a saddle-node
bifurcation. For inputs in between, with
\({I_{\mathrm{hom}}}< I < {I_{\mathrm{sn}}}\), the stable node and the
limit cycle coexist as two stable attractors, see
\cref{fig:fIcurve_snl}. The state space is divided into the basins of
attraction of the fixpoint and the limit cycle by a separatrix
(\cref{fig:separatrix}).
The analysis furthermore assumes that the noise strength is chosen small
enough such that the spike shape is, to first order, unaffected (the
typical small-noise approximation). With this, jumping between spiking
and resting state is only possible close to the separatrix. While the
separatrix is a non-local object, the following analysis shows that the
salient properties of stochastic transitions across it are captured by
the linearised dynamics around the saddle and the stable node.
The linearised dynamics around fixpoints are given by the Jacobian of
\cref{eq:dynamics},
\(J(\boldsymbol x) = \frac{\partial \boldsymbol F(\boldsymbol x)}{\partial \boldsymbol x}\),
which has \(n\) eigenvalues \(\lambda_{1}, ...,\lambda_{n}\). For
neuronal models undergoing a HOM onset bifurcation, the Jacobian at the
saddle has one simple, positive, real eigenvalue corresponding to the
unstable direction, denoted by \(\lambda_1\in{I\!\!R}\). The other
eigenvalues correspond to stable directions, such that
\begin{equation}\lambda_1>0>\lambda_2\geqslant...\geqslant\lambda_{n}.\label{eq:evals}\end{equation}
The associated left and right eigenvectors are denoted by
\(\boldsymbol l_k\) and \(\boldsymbol r_k\), \(k \in [1,...,n]\),
respectively, normalised such that \(\boldsymbol l_j\cdot\boldsymbol r_k=\delta_{jk}\),
see \cref{fig:doublewell}(a). Analytical expressions of
\(\boldsymbol l_1\) and \(\boldsymbol r_1\) are given in the Appendix,
in Sec. \ref{app:eigenvectors} for the saddle-node fixpoint, and in Sec.
\ref{app:eigenvectorsSaddle} for the saddle fixpoint.
The statistical properties of the bistable dynamical regime are not yet
sufficiently characterised and will be explored in subsequent sections.
\hypertarget{interspike-interval}{%
\section{Interspike interval}\label{interspike-interval}}
\label{sec:isi}
The following analysis considers the spike train of a HOM neuron in the
bistable region, with \({I_{\mathrm{hom}}}< I < {I_{\mathrm{sn}}}\),
subjected to white noise sufficiently strong to induce jumps between the
two basins of attraction, \emph{e.g.}, \cref{fig:fIcurve_snl}(a).
Between two consecutive spikes, the dynamics can either remain in the
basin of attraction of the limit cycle, or it can visit the basin of
attraction of the fixpoint before eventually returning to the limit
cycle. On average, visiting the fixpoint will induce longer interspike
intervals, because the escape from the resting state requires time in
addition to the duration of the limit cycle.
Because the driving stochastic process is white, the sequence of
successive interspike intervals forms a renewal process, as will be
argued in Sec. \ref{sec:splitprop}. The total interspike interval
density is then a mixture of two kinds of trajectories: those that
remain on the limit cycle, and those with intermittent visits to the
fixpoint. The interspike interval densities of these two possibilities
are denoted as
(\emph{i}) the density \(p_\mathrm{lc}(t)\) of interspike intervals that
result from trajectories staying exclusively on the limit cycle
dynamics, and
(\emph{ii}) the density \(p_\mathrm{fp}(t)\) of interspike intervals
composed of some time spent near the resting state in addition to the
time required for the limit cycle spike following the escape from the
fixpoint.
The total interspike interval density is a mixture of both kinds of
trajectories,
\begin{equation}p_\mathrm{isi}(t)=(1-{\varpi})\,p_\mathrm{lc}(t)+{\varpi}\,p_\mathrm{fp}(t),\label{eq:isipdf}\end{equation}
where the factor \(1-{\varpi}\) determines the proportion of intervals
for which the dynamics resides entirely on the limit cycle side of the
separatrix, while \({\varpi}\) is the proportion of intervals that
include time spent on the fixpoint side of the separatrix.
In the following, \({\varpi}\) is called the \emph{mixing factor} or
\emph{splitting probability}. For increasing values of \({\varpi}\),
visits to the fixpoint become more frequent. In spike trains, this is
visible as a larger proportion of long interspike intervals. The
phenomenon of neurons showing strong ISI variability, with a mixture of
long and short interspike intervals, is sometimes termed
\emph{stochastic bursting}, \emph{stuttering}, \emph{irregularly
spiking}, or \emph{missing spikes} in the experimental literature
~{[}2,4,5{]}.
In the following, the ``ingredients'' to approximate the interspike
interval density in \cref{eq:isipdf} are provided. The mixing factor
\({\varpi}\) is derived in Sec. \ref{sec:splitprop}, and the
probabilities \(p_\mathrm{lc}(t)\) and \(p_\mathrm{fp}(t)\) in Secs.
\ref{sec:intra} and \ref{sec:inter}, respectively. To this aim, the
system is transformed into a coordinate system that facilitates the
analysis (Sec. \ref{sec:coord}).
\hypertarget{projecting-crossings-of-the-separatrix-on-a-double-well-problem}{%
\subsection{Projecting crossings of the separatrix on a double-well
problem}\label{projecting-crossings-of-the-separatrix-on-a-double-well-problem}}
\label{sec:coord}
The observation that most crossings of the separatrix happen along the
downstroke of the action potential (AP) permits the crossings of the
separatrix to be projected, in the following, onto a one-dimensional problem.
More specifically, the high-dimensional problem of stochastic
transitions through the \((n-1)\)-dimensional separatrix is reduced to a
one-dimensional double-well escape problem of which the occupancy
statistics are known ~{[}28{]}.
\begin{figure}
\hypertarget{fig:separatrix}{%
\centering
\includegraphics{FigureSeparatrix.pdf}
\caption{Depicted are the left and right eigenvectors of the saddle, as
well as the limit cycle (solid line) and the separatrix (dashed line). (a) At
the HOM bifurcation, the homoclinic orbit overlaps with the separatrix.
(b) For higher input amplitudes, the separatrix is shifted away from the
limit cycle.}\label{fig:separatrix}
}
\end{figure}
The separatrix between rest and spiking corresponds to the stable
manifold of the saddle fixpoint. At the saddle, the tangent space of the
separatrix is
\begin{equation}\mathcal T=\Big\{\sum\nolimits_{k=2}^{n}\alpha_k\boldsymbol r_k:\alpha_k\in{I\!\!R}\Big\}.\label{eq:tangentspace}\end{equation}
The orthogonal complement is given by the left eigenvector
\(\boldsymbol l_1\in\mathcal T^\perp\), see \cref{fig:doublewell}(a) for
a two-dimensional example.
\begin{figure}
\hypertarget{fig:doublewell}{%
\centering
\includegraphics{Figure2.pdf}
\caption{Equivalent double-well potential. (a) At the saddle
(\(\circ\)), \(\boldsymbol r_2\) is tangent to the stable manifold. Its
orthogonal complement is \(\boldsymbol l_1\), onto which the node
(\(\bullet\)) and the minimum distance of the limit cycle
(\(d_{\mathrm{lc}}\)) is projected. (b) Simulated particle density in
the projected coordinate \(y\).}\label{fig:doublewell}
}
\end{figure}
For spike onset at \(I={I_{\mathrm{hom}}}\), the separatrix overlaps
with the homoclinic orbit, as both align by definition with the stable
manifold of the saddle. For \(I > {I_{\mathrm{hom}}}\), the limit cycle
detaches from the saddle. The separatrix follows the limit cycle, until
it eventually diverges, see \cref{fig:separatrix}. Along the spike
downstroke, both the limit cycle and the separatrix remain parallel to
the tangent space \(\mathcal T\) for a significant part of the loop, for
details see Appendix, Sec. \ref{app:downstroke}. Most relevant crossings
of the separatrix happen in this region of the state space because
(\emph{i}) due to the slow dynamics in the state space around the
saddle, the limit cycle trajectory spends most of the time close to the
saddle fixpoint, and
(\emph{ii}) the distance between limit cycle and separatrix is minimal
along the spike downstroke, allowing even weak noise deviations to
switch the dynamics between rest and spiking.
In principle, multiple crossings back and forth across the separatrix
are possible, but the final decision is taken when closing in on the
saddle. In the vicinity of the saddle, trajectories on the limit cycle
side of the tangent space \(\mathcal T\) will follow limit cycle
dynamics, while trajectories on the other side of the tangent space
\(\mathcal T\) will visit the stable fixpoint. The decision on which
side of the separatrix a sample path is at a particular time can thus be
read from a projection onto \(\boldsymbol l_1\in\mathcal T^\perp\),
\begin{equation}y(t)=\boldsymbol l_1\cdot(\boldsymbol x(t)-{\boldsymbol x_{\mathrm{s}}}),\label{eq:proj}\end{equation}
where, for simplicity, the dynamics is recentred to the saddle at
\({\boldsymbol x_{\mathrm{s}}}\), such that the saddle is in the
projected coordinates located at \({y_{\mathrm{s}}}=0\). The position of
the stable node is
\({y_{\mathrm{n}}}=\boldsymbol l_1\cdot(\boldsymbol x_\mathrm{n}-{\boldsymbol x_{\mathrm{s}}})\),
see \cref{fig:doublewell}(a). In the following, the convention is used
that \(y>0\) corresponds to the limit cycle side, while \(y<0\) implies
the fixpoint side, corresponding to rest.
The following analysis uses the minimum distance of the deterministic
limit cycle dynamics to the separatrix,
\begin{equation}d_{\mathrm{lc}}=\min_{\boldsymbol x\in\Gamma}\{\boldsymbol l_1\cdot(\boldsymbol x-{\boldsymbol x_{\mathrm{s}}})\},\label{eq:def_dlc}\end{equation}
see \cref{fig:doublewell}(a). Here \(\Gamma\) denotes the invariant set
of the limit cycle. As mentioned above, the minimal distance is
typically reached during the downstroke of the action potential.
\(d_{\mathrm{lc}}\) is thus the distance, measured along the
\(\boldsymbol l_1\)-direction, between the separatrix and the closest
point of the limit cycle.
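Numerically, the projection of \cref{eq:proj} and the distance of
\cref{eq:def_dlc} translate directly into code. The following sketch
assumes that the saddle \({\boldsymbol x_{\mathrm{s}}}\), its Jacobian,
and a sampled limit cycle orbit \(\Gamma\) have already been obtained,
e.g., by numerical continuation; function names are illustrative.
\begin{verbatim}
import numpy as np

def saddle_eigensystem(J_s):
    # unstable eigenvalue lambda_1 with left/right eigenvectors l_1, r_1
    evals, R = np.linalg.eig(J_s)
    L = np.linalg.inv(R)            # rows are left eigenvectors, L @ R = I
    k = np.argmax(evals.real)       # the single positive real eigenvalue
    l1, r1 = L[k].real, R[:, k].real
    l1 = l1 / (l1 @ r1)             # enforce biorthonormality l_1 . r_1 = 1
    return evals[k].real, l1, r1

def project(x, x_s, l1):
    # y(t) = l_1 . (x(t) - x_s)
    return (x - x_s) @ l1

def d_lc(Gamma, x_s, l1):
    # minimum of l_1 . (x - x_s) over the sampled limit cycle set Gamma
    return np.min((Gamma - x_s) @ l1)
\end{verbatim}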
The projection aims to collapse the decision, whether or not the
fixpoint is visited, into one dimension such that the theory of
double-well potentials can be applied to calculate the occupancy
statistics. A histogram of the projected values, \(y(t)\), from a
simulation shows a bimodal density in \cref{fig:doublewell}(b). Such
bimodal densities also appear for the Brownian motion of a particle in a
double-well potential. This motivates the approach presented here of
reducing the properties of stochastic bursting in a high-dimensional
neuron model to a double-well problem:
\begin{equation}\dot y = -U'(y) + \sigma\,\xi(t).\label{eq:noiypotential}\end{equation}
\(y(t)\) here results from the projection of the dynamics onto the
normal direction to the separatrix, as introduced above.
Approximations for the potential \(U(y)\) and the noise strength
\(\sigma\) will be discussed for the different quantities that are
calculated in the following sections.
\hypertarget{splitting-probability}{%
\subsection{Splitting probability}\label{splitting-probability}}
\label{sec:splitprop}
For uncorrelated noise, the series of spike-time events is a renewal
process. After each spike, during the downstroke when the trajectory is
close to the separatrix, the noise in the system operates akin to a
(biased) coin flip that determines if the fixpoint is visited, or if
immediately another round trip on the limit cycle is taken. Hence, the
consecutive decisions from which distribution the spike times are drawn,
\emph{i.e.}, \(p_\mathrm{lc}(t)\) or \(p_\mathrm{fp}(t)\), are Bernoulli
trials (leading to a geometric distribution for the number of spikes per
burst, see \cref{eq:burstlength}). Indeed, the statistics is reduced to
calculating a single parameter: the splitting probability (or mixing
factor) in a double-well potential.
The splitting probability in a double-well potential is the probability
that a particle, starting at a given position relative to the barrier,
reaches one of the two attractors first. In the present case
the particle is initially injected \(d_{\mathrm{lc}}\) away from the
separatrix, see \cref{fig:doublewell}(a). The probability of crossing the
barrier and reaching the fixpoint is denoted as \(\varpi\). This
probability can be found by solving the backward Fokker-Planck equation
with the appropriate boundary conditions ~{[}28{]}. The solution can be expressed
in terms of the steady state density \(p_\mathrm{s}(y)\) as
\begin{equation}\varpi(d_{\mathrm{lc}})=\frac{\int_{d_{\mathrm{lc}}}^{\infty} p_\mathrm{s}^{-1}(y)\,\mathrm{d}y }{\int_{y_\mathrm{fp}}^{\infty} p_\mathrm{s}^{-1}(y)\,\mathrm{d}y}.\label{eq:mixfacdef}\end{equation}
Here, the splitting probability depends on the distance between limit
cycle and separatrix, \(d_{\mathrm{lc}}\), which will be related to the
system parameters in Sec. \ref{sec:LCdistance}.
The Fokker-Planck equation for the stochastic dynamics of the one
dimensional projected variable \(y(t)\) can always be written in
potential form corresponding to \cref{eq:noiypotential}
\[\partial_t p (y,t) = \partial_y \left[ U'(y) p(y,t) \right] + \frac{\sigma^2}{2}\partial_y^2 p(y,t).\]
The stationary solution \(p_\mathrm{s}\) to this equation can then be
expressed in terms of the potential \(U(y)\) as
\[p_\mathrm{s}(y)= \mathcal{N} \exp\Big(\frac{-U(y)}{2\sigma^2}\Big).\]
Assume that the injection point is not too far from the separatrix,
which is at \(y_\mathrm{s}=0\), and that the potential is sufficiently
symmetric around the separatrix. The latter assumption holds in the
vicinity of the saddle-node bifurcation present in the neuron models
considered here. When \(U(y)\) is smooth, one may assume that for small
\(d_{\mathrm{lc}}\),
\begin{equation}U(y)\approx U(0)+\frac12U''(0)y^2.\label{eq:quadratic_simplification}\end{equation}
Assuming \(\sigma\) is small, the lower integration limit
\(y_\mathrm{fp}\) in \cref{eq:mixfacdef} can be extended to
\(-\infty\). With \cref{eq:quadratic_simplification},
\cref{eq:mixfacdef} changes into an expression involving Gaussian
integrals. During the downstroke, the projected limit cycle dynamics
near the separatrix is approximated by \cref{eq:noiypotential}, where
the potential in the direction of \(\boldsymbol l_1\) is
\(U(y)=-\frac{\lambda_1}2y^2\), such that \(U''(0)=-\lambda_1\). The
``mixing noise'' in that dimension is approximated by
\(\sigma^2=\sigma_\mathrm{m}^2=\boldsymbol l_1\cdot{\boldsymbol D}_\mathrm{s} \boldsymbol l_1\),
with the diffusion matrix evaluated at the saddle,
\({\boldsymbol D}_\mathrm{s}={\boldsymbol D}(\boldsymbol x_\mathrm{s})\).
Together this yields
\begin{equation}\varpi=\frac12\bigg(1-\mathop{\mathrm{erf}}\Big(\frac{d_{\mathrm{lc}}\sqrt{\lambda_1}}{2\sigma_\mathrm{m}}\Big)\bigg).\label{eq:splitprob}\end{equation}
Here, \(\mathop{\mathrm{erf}}(\cdot)\) denotes the error function. If
the injection occurs at the separatrix, which corresponds to the
situation when the spiking limit cycle is born from the homoclinic orbit
(\cref{fig:separatrix}), the probability of ending up on either side of
the separatrix is \(1/2\). For increasing distance, the probability of
visiting the fixpoint decays, see inset in \cref{fig:mixfac_dlc}(a),
such that repetitive, burst-like, limit cycle excursions become more
likely.
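Evaluated numerically, \cref{eq:splitprob} is a one-line function of the
three quantities \(d_{\mathrm{lc}}\), \(\lambda_1\) and
\(\sigma_\mathrm{m}\); the sketch below assumes they are already known.
\begin{verbatim}
from math import erf, sqrt

def splitting_probability(d_lc, lambda_1, sigma_m):
    # probability of crossing to the fixpoint side when injected
    # a distance d_lc from the separatrix
    return 0.5 * (1.0 - erf(d_lc * sqrt(lambda_1) / (2.0 * sigma_m)))

assert splitting_probability(0.0, 1.0, 1.0) == 0.5  # injection on the separatrix
\end{verbatim}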
\begin{figure}
\hypertarget{fig:mixfac_dlc}{%
\centering
\includegraphics{Figure3.pdf}
\caption{Comparison of theoretical prediction (lines) and numerical
simulations (markers) for different gating time constants
\(\tau_\mathrm{n} = 0.155\), \(0.16\) and \(0.165\,\mathrm{ms}\). (a) Distance between limit
cycle and separatrix, \(d_{\mathrm{lc}}\), versus input current as given
by \cref{eq:distlc}. (b) Mixing factor \({\varpi}\) as a function of
\(d_{\mathrm{lc}}\). (c) \(1-{\varpi}\) versus input
current.}\label{fig:mixfac_dlc}
}
\end{figure}
\hypertarget{limit-cycle-distance-to-the-separatrix}{%
\subsection{Limit cycle distance to the
separatrix}\label{limit-cycle-distance-to-the-separatrix}}
\label{sec:LCdistance}
The limit cycle originates from a homoclinic orbit at
\(I={I_{\mathrm{hom}}}\). As can be seen from the quadratic dynamics in
the centre manifold of the saddle-node, the saddle, and thus the
separatrix, moves as a square-root function of the input current. The
limit cycle position, in contrast, is comparatively insensitive to the
input current, see Appendix, Sec.
\ref{app:inputDependence}. Using \cref{eq:def_dlc}, the distance of the
limit cycle to the saddle in the centre manifold, and thus to the
separatrix, is
\begin{equation}d_{\mathrm{lc}}=\sqrt{\frac{l_{11}}{a\, {C_{\mathrm{m}}}}} (\sqrt{{I_{\mathrm{sn}}}-{I_{\mathrm{hom}}}}
-\sqrt{{I_{\mathrm{sn}}}-I}),\label{eq:distlc}\end{equation} where
\(l_{11}\) is the entry of the left eigenvector \(\boldsymbol l_1\) that
corresponds to the voltage dimension ~{[}29{]}. The factor \(a\) is the
curvature term of the nullclines, and can be determined by ~{[}30,31{]}
\begin{equation}a=\frac12\cdot\boldsymbol l_1\boldsymbol H\boldsymbol r_1\boldsymbol r_1,\label{eq:curvature}\end{equation}
where \(\boldsymbol H\) is the Hessian matrix of the deterministic
dynamics.
\cref{fig:mixfac_dlc}(a) depicts the analytical \(d_{\mathrm{lc}}\) from
\cref{eq:distlc} and the simulated distance of the limit cycle to the
separatrix as a function of the input current. For values of \(I\) away
from the saddle-node, \({I_{\mathrm{hom}}}<I\ll{I_{\mathrm{sn}}}\), the
relation is rather linear. Hence, near the onset of bistability, the
limit cycle distance can be approximated by
\begin{equation}d_{\mathrm{lc}}\approx\sqrt{\frac{l_{11}}{2 a\, {C_{\mathrm{m}}}}} \frac{I-{I_{\mathrm{hom}}}}{\sqrt{{I_{\mathrm{sn}}}-{I_{\mathrm{hom}}}}}.\label{eq:dlcappr}\end{equation}
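Both the exact expression and its linearisation translate directly into
code; the constants \(l_{11}\), \(a\), \({C_{\mathrm{m}}}\),
\({I_{\mathrm{sn}}}\) and \({I_{\mathrm{hom}}}\) are assumed to be
determined as described in the text.
\begin{verbatim}
import numpy as np

def d_lc_exact(I, I_sn, I_hom, l11, a, C_m):
    # distance between limit cycle and separatrix
    return np.sqrt(l11 / (a * C_m)) * (np.sqrt(I_sn - I_hom) - np.sqrt(I_sn - I))

def d_lc_approx(I, I_sn, I_hom, l11, a, C_m):
    # linearisation near the onset of bistability
    return np.sqrt(l11 / (2 * a * C_m)) * (I - I_hom) / np.sqrt(I_sn - I_hom)
\end{verbatim}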
With these expressions for the distance \(d_{\mathrm{lc}}\), the mixing
factor \({\varpi}\) can be calculated according to \cref{eq:splitprob}.
For comparison, the mixing factor \({\varpi}\) is evaluated in
stochastic simulations. To this end, the relative time spent on the side
of the stable fixpoint and on the side of the limit cycle is detected by
recording a spike when a voltage threshold of \(-10\,\mathrm{mV}\) is
crossed from below, and recording a visit to the fixpoint when a
two-dimensional threshold is crossed (the voltage value of the saddle
crossed from above, together with the \(n\)-variable value 5\% above the
value corresponding to the node). The comparison between simulations and
the analytical results can
be inspected in \cref{fig:mixfac_dlc}(b).
Next, the probability \(p_\mathrm{lc}(t)\) for staying on the limit
cycle, the probability \(p_\mathrm{fp}(t)\) for visiting the stable
fixpoint, as well as the intra- and interburst statistics are
calculated.
\hypertarget{intraburst-statistics}{%
\subsection{Intraburst statistics}\label{intraburst-statistics}}
\label{sec:intra}
This section determines the probability \(p_\mathrm{lc}(t)\) for staying
on the limit cycle without visiting the fixpoint used in
\cref{eq:isipdf}. From this, the statistics of spikes inside a ``burst''
is derived, \emph{i.e.}, a consecutive sequence of limit cycle
excursions uninterrupted by a crossing of the separatrix into the
attraction domain of the fixpoint.
For trajectories that stay within the basin of attraction of the limit
cycle and a sufficiently small noise amplitude, a phase reduction maps
the process to a one-dimensional Brownian motion in the phase,
\(\theta\), which has constant drift,
\begin{equation}\dot\theta=1/\tau_\mathrm{lc}+\sqrt{2\bar D_\mathrm{lc}}\,\xi(t).\label{eq:phasediffusion}\end{equation}
Here, \(\tau_\mathrm{lc}\) is the intrinsic, deterministic period of the
limit cycle and \(\xi(t)\) a stochastic white-noise process with
effective diffusion coefficient \(\bar D_\mathrm{lc}\). The effective
diffusion coefficient, \(\bar D_\mathrm{lc}\), is obtained by averaging the
potentially non-stationary noise over the time scale of one interspike
interval with an appropriate weighting function, \(\boldsymbol Z_1\),
that quantifies how susceptible the spike time is to perturbations at a
given phase \(\varphi\) ~{[}32{]}:
\begin{equation}\bar D_\mathrm{lc}=\int_0^1 d\varphi\,\boldsymbol
Z_1(\varphi)\cdot\boldsymbol D(\boldsymbol x_{\mathrm{lc}}(\varphi)) \boldsymbol Z_1(\varphi).\label{eq:phasenoise}\end{equation}
The weighting function is the so-called phase-response curve,
\(\boldsymbol Z_1(\theta)=\boldsymbol\nabla\theta\big|_{\boldsymbol x(\theta)=\boldsymbol x_{\mathrm{lc}}(\theta)}\),
which can be determined numerically or calculated via centre manifold
reductions ~{[}31{]}. Provided that channel or synaptic fluctuations act
on time scales faster than the average limit cycle period, the effective
phase diffusion, \(\bar D_\mathrm{lc}\), quantifies the averaged noise
per interspike interval that causes jitter in the timing of spikes. It
disregards radial excursions due to noise, in particular those that
would cause jumps over the separatrix into the phaseless set (where no
phase is defined). Assuming the intraburst dynamics is governed by the
stochastic phase evolution in \cref{eq:phasediffusion}, the waiting-time
density follows an inverse Gaussian distribution ~{[}27,33{]}
\begin{equation}p_\mathrm{lc}(t)=
\frac{\exp\big(-\frac{(t-\tau_\mathrm{lc})^2}{\tau_\mathrm{lc}^2\,\bar D_\mathrm{lc}\, t}\big)}
{\sqrt{\pi\,\bar D_\mathrm{lc}\, t^3}}.\label{eq:invGauss}\end{equation}
The mean of the distribution, \(\tau_\mathrm{lc}\), is identical to the
deterministic period of the limit cycle. In the case of a homoclinic
neuron, close to the limit cycle onset (small \(d_{\mathrm{lc}}\)),
it scales according to ~{[}29{]}
\begin{equation}\tau_\mathrm{lc}=-\frac1\lambda_1\ln(d_{\mathrm{lc}}).\label{eq:meanintra}\end{equation}
Here, \(d_{\mathrm{lc}}\) is again the distance of the limit cycle to
the separatrix, \emph{cf.} \cref{eq:def_dlc}, which can be expressed in
terms of system parameters in \cref{eq:distlc} and is required to
fulfill \(d_{\mathrm{lc}}\ll1\).
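The intraburst density and the near-onset scaling of its mean can be
evaluated as follows; \(\bar D_\mathrm{lc}\) and \(\lambda_1\) are
assumed given.
\begin{verbatim}
import numpy as np

def p_lc(t, tau_lc, D_lc):
    # inverse Gaussian intraburst ISI density (t > 0)
    t = np.asarray(t, float)
    return np.exp(-(t - tau_lc)**2 / (tau_lc**2 * D_lc * t)) / np.sqrt(np.pi * D_lc * t**3)

def tau_lc_homoclinic(d_lc, lambda_1):
    # near-onset scaling of the deterministic period (valid for d_lc << 1)
    return -np.log(d_lc) / lambda_1
\end{verbatim}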
\hypertarget{interburst-statistics}{%
\subsection{Interburst statistics}\label{interburst-statistics}}
\label{sec:inter}
This section develops the probability \(p_\mathrm{fp}(t)\) for
interspike intervals composed of a visit to the resting state fixpoint
and a limit cycle spike used in \cref{eq:isipdf}. The interburst
intervals resulting from fixpoint visits are on average longer than the
intraburst intervals derived in the last section. The corresponding
interspike interval, \(t_\mathrm{fp}\), can be obtained by adding the
time it takes for the trajectory to escape from the fixpoint,
\(t_\mathrm{e}\), and the subsequent time, \(t_\mathrm{lc}\), for a
spike excursion around the limit cycle, to obtain
\(t_\mathrm{fp}=t_\mathrm{e}+t_\mathrm{lc}\). The escape time,
\(t_\mathrm{e}\), from the resting state is described by Poisson
statistics with a Kramers rate ~{[}10{]}. The assumption required for
Kramers theory, \emph{i.e.}, that the dynamics be equilibrated around
the resting state, is not perfectly satisfied, but appears reasonable:
the time constant of the exponential decay is correctly described by the
escape rate, as previously validated by comparisons with numerical
simulations ~{[}10{]}, although there is disagreement for very short
ISIs ~{[}10{]}. Therefore, in the present case, the escape rate is only
taken to describe the exit over the separatrix, which is then followed
by the time taken for another limit cycle spike, \(t_\mathrm{lc}\). If
the escape and limit cycle dynamics
were to be statistically independent, the waiting time of the complete
interburst statistics \(p_\mathrm{fp}(t)\) would be the convolution of
the escape statistics \(p_\mathrm{e}\) and the additional time
corresponding to the duration of the spike, \(p_\mathrm{lc}\),
\emph{i.e.},
\begin{equation}p_\mathrm{fp}(t)=(p_\mathrm{lc}*p_\mathrm{e})(t)=\int_0^tp_\mathrm{lc}(t-r)p_\mathrm{e}(r)\mathrm{d}r.\label{eq:convPDF}\end{equation}
Note that \cref{eq:convPDF} effectively describes a Poisson neuron with
a refractory period drawn from \(p_\mathrm{lc}\). The assumption of
statistical independence can be motivated by two observations. Firstly,
due to the fast contraction of the stable directions onto the
one-dimensional unstable manifold at the saddle, the trajectories that
leave the stable fixpoint are likely to penetrate the separatrix near
one point. This gives a delta-like initial condition for the limit cycle
dynamics and, to some extent, clears the memory of the preceding
trajectory. Secondly, the noise is uncorrelated.
The interval statistics of the escape, \emph{i.e.}, of the Poisson neuron
with Kramers rate \(1/\tau_\mathrm{e}\), is exponential,
\begin{equation}p_\mathrm{e}(t)=e^{-t/\tau_\mathrm{e}}/\tau_\mathrm{e}.\label{eq:kramer}\end{equation}
The mean interval \(\tau_\mathrm{e}\) is given by the inverse of the
Kramers rate ~{[}10{]}
\begin{equation}\tau_\mathrm{e}\approx\frac{2\pi}{|\lambda_1|}e^{\Delta{U_{\mathrm{sn}}}/(2\sigma^2)},\label{eq:escapetime}\end{equation}
where \(\lambda_1\) is the eigenvalue associated with the unstable
manifold of the saddle. \(\Delta{U_{\mathrm{sn}}}\) is the potential
difference between saddle and node,
\(\Delta{U_{\mathrm{sn}}}={U_{\mathrm{sn}}}({y_{\mathrm{s}}})-{U_{\mathrm{sn}}}({y_{\mathrm{n}}})\).
The latter can be approximated in the vicinity of the saddle-node
bifurcation. Saddle and node depart from the saddle-node according to a
square root function, such that locally
\({y_{\mathrm{sn}}}=({y_{\mathrm{s}}}+{y_{\mathrm{n}}})/2\). If
\({y_{\mathrm{s}}}\) and \({y_{\mathrm{n}}}\) have not departed too far
from \({y_{\mathrm{sn}}}\), the potential \(U\) is centrally symmetric
around \({y_{\mathrm{sn}}}\) and hence has no quadratic part
(\emph{i.e.}, the linear dynamics of saddle and node cancel in the
middle). Therefore, the remaining dynamics can be captured in the
following potential:
\[{U_{\mathrm{sn}}}\approx\frac{({I_{\mathrm{sn}}}-I)(y-{y_{\mathrm{sn}}})}{{C_{\mathrm{m}}}}+\frac{a(y-{y_{\mathrm{sn}}})^3}{3},\]
with the factor \(a\) from \cref{eq:curvature}.
The potential difference between saddle and node is hence
\[\Delta{U_{\mathrm{sn}}}\approx(I-{I_{\mathrm{sn}}})({y_{\mathrm{n}}}-{y_{\mathrm{s}}})/{C_{\mathrm{m}}}+ \frac{a}{12}({y_{\mathrm{n}}}-{y_{\mathrm{s}}})^3.\]
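A sketch of the mean escape time using the above approximation of the
potential difference; \(y_\mathrm{s}\), \(y_\mathrm{n}\), \(a\) and
\(\lambda_1\) are assumed given.
\begin{verbatim}
from math import pi, exp

def escape_time(I, I_sn, C_m, a, y_s, y_n, lambda_1, sigma):
    # inverse Kramers rate with the near-saddle-node potential difference
    dy = y_n - y_s
    dU = (I - I_sn) * dy / C_m + a * dy**3 / 12.0
    return 2.0 * pi / abs(lambda_1) * exp(dU / (2.0 * sigma**2))
\end{verbatim}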
\begin{figure}
\hypertarget{fig:fp_eventDuration}{%
\centering
\includegraphics{fp_eventDuration4.pdf}
\caption{Near spike onset, the analytical escape rate (red) fits the
probability density of the escape duration from fixpoint to the
separatrix
(\(\tau_\mathrm{n} = 0.165\mathrm{ms}\)).}\label{fig:fp_eventDuration}
}
\end{figure}
Using this approximation of the potential height in
\cref{eq:escapetime}, the escape time density in \cref{eq:kramer} can be
compared to the simulated neuron. The validity of the approximation can
be inspected in \cref{fig:fp_eventDuration} for different input
currents. With this, all elements of the interspike density in
\cref{eq:isipdf} have been derived.
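Numerically, the mixture density can be assembled on a regular time
grid, with the convolution of \cref{eq:convPDF} evaluated discretely;
the parameter values in the usage lines are arbitrary examples.
\begin{verbatim}
import numpy as np

def p_lc(t, tau_lc, D_lc):
    # inverse Gaussian intraburst density (t > 0)
    return np.exp(-(t - tau_lc)**2 / (tau_lc**2 * D_lc * t)) / np.sqrt(np.pi * D_lc * t**3)

def p_isi(t, varpi, tau_lc, D_lc, tau_e):
    # mixture density: (1 - varpi) p_lc + varpi (p_lc * p_e)
    dt = t[1] - t[0]
    lc = p_lc(t, tau_lc, D_lc)
    pe = np.exp(-t / tau_e) / tau_e           # exponential escape density
    fp = np.convolve(lc, pe)[:t.size] * dt    # discrete convolution
    return (1.0 - varpi) * lc + varpi * fp

t = np.arange(1, 20001) * 1e-3                # regular grid, avoiding t = 0
density = p_isi(t, varpi=0.3, tau_lc=1.0, D_lc=0.1, tau_e=10.0)
\end{verbatim}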
The full interspike interval distribution is plotted in
\cref{fig:ISIdensity} for one representative simulation, together with
the analytical prediction.
\begin{figure}
\hypertarget{fig:ISIdensity}{%
\centering
\includegraphics{Figure6.pdf}
\caption{Interspike interval density for a numerical simulation with
\(\tau_\mathrm{n} = 0.16\mathrm{ms}\), driven with
\(I =4.4\mu\mathrm{A/cm}^2\) plus current noise with
\(\sigma=0.8\sqrt{\mathrm{ms}}\mu\mathrm{A/cm}^2\). Mean ISI is 4.53 ms,
coefficient of variation is 1.69.}\label{fig:ISIdensity}
}
\end{figure}
\hypertarget{burst-length-statistics-and-estimates-of-the-splitting-probability}{%
\subsection{Burst-length statistics and estimates of the splitting
probability}\label{burst-length-statistics-and-estimates-of-the-splitting-probability}}
\label{sec:burstlength}
As argued in Sec. \ref{sec:splitprop}, the sequence of interspike
intervals generated by the present bistable neuron, driven by white
noise, is a renewal process, \emph{i.e.}, after each spike at the
downstroke, the decision from which of the mixture components the
interval is drawn happens irrespective of the previous intervals. Hence,
no serial correlations between intervals are to be expected.
Consequently, the burst length (number of consecutive limit cycle
traverses before crossing the separatrix to the fixpoint) follows a
geometric distribution which only depends on the splitting probability,
\begin{equation}p(k) = {\varpi}(1-{\varpi})^{k-1}.\label{eq:burstlength}\end{equation}
\cref{fig:burstlength} shows a comparison of numerically obtained
burst-length statistics and the theory. This supports the initial
assumption that the distribution of interspike intervals is indeed a
renewal process. The discrepancy between simulations and theory observed
for larger inputs results from the shape of the potential that separates
stable fixpoint and limit cycle. For large inputs, the potential becomes
shallow, such that repeated jumps over the separatrix become more
likely. Even if the dynamics immediately jumps back to the limit cycle,
such events are counted as fixpoint visits in the simulations, while
the theory only considers jumps that converge to the fixpoint. This
leads to shorter bursts in the simulations compared to the theoretical
expectation, as observed in \cref{fig:burstlength} in the panel with
\(I - I_{\mathrm{sn}} = -0.06\,\mu\mathrm{A/cm}^2\).
\begin{figure}
\hypertarget{fig:burstlength}{%
\centering
\includegraphics{burstLength4.pdf}
\caption{Burst-length statistics fitted using a geometric distribution
and the splitting probability from
\cref{eq:splitprob}.}\label{fig:burstlength}
}
\end{figure}
If, for longer experimentally recorded spike trains, histograms of the
burst-length distribution are available, the splitting probability
\({\varpi}\) can be inferred as the single parameter that fits \(p(k)\)
to the data.
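For the geometric distribution in \cref{eq:burstlength}, the
maximum-likelihood estimate of \({\varpi}\) is simply the inverse mean
burst length, as in the following sketch.
\begin{verbatim}
import numpy as np

def estimate_varpi(burst_lengths):
    # ML estimate of the splitting probability for a geometric
    # distribution on k = 1, 2, ...
    return 1.0 / np.mean(burst_lengths)
\end{verbatim}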
\hypertarget{moments-and-parameter-estimates}{%
\section{Moments and parameter
estimates}\label{moments-and-parameter-estimates}}
\label{sec:est}
In the presence of noise, hysteresis effects as shown in
\cref{fig:fIcurve_snl}, a distinctive signature of bistability in
deterministic systems, may be obscured. But can bistability still be
detected from stochastic properties of the spike time series? Once
bistability is established, multimodality serves as the distinguishing
feature (see Sec. \ref{sec:multimodal}) between the bistability
resulting from a saddle-homoclinic orbit bifurcation \emph{versus} a
subcritical Hopf bifurcation.
The splitting probability, \({\varpi}\), may be taken as an indicator
of the regime in which a neuron operates,
\begin{enumerate}
\def(\arabic{enumi}){(\roman{enumi})}
\item
\({\varpi}\approx0\to\) mean-driven regime
\item
\({\varpi}\approx1\to\) fluctuation-driven regime
\item
\({\varpi}\approx\frac12\to\) bistable neuron
\end{enumerate}
In Sec. \ref{sec:burstlength}, it was surmised that for long enough
spike trains, the mixing factor \({\varpi}\) could be estimated based on
the burst-length statistics in \cref{eq:burstlength}. One may explore
how the moments of the ISI distribution are related to system
parameters. The uncentred moments of the ISI distribution are obtained
from its Laplace transform in \cref{eq:isilaplace} via
\[\nu_k=(-1)^k\frac{\mathrm{d}^k}{\mathrm{d}s^k}P_\mathrm{isi}(s)\Big|_{s=0}.\]
Although the convolution in \cref{eq:convPDF} cannot be evaluated
analytically, its Laplace transform is a simple product of the transform
of the inverse Gaussian distribution of the limit cycle dynamics,
\[P_\mathrm{lc}(s)=\exp\left(\left(1-\sqrt{1+2s\sigma_\mathrm{lc}^2\tau_\mathrm{lc}^2}\right)/\sigma_\mathrm{lc}^2\tau_\mathrm{lc}\right),\]
with \(\sigma_\mathrm{lc}^2=\bar D_\mathrm{lc}/2\), and that of the
exponential distribution, which is
\(P_{\mathrm{e}}(s)=(1+ s\tau_\mathrm{e})^{-1}\). Together, the Laplace
transform of the ISI distribution is
\begin{equation}P_\mathrm{isi}(s)=(1-{\varpi})P_\mathrm{lc}(s)+\frac{{\varpi}P_\mathrm{lc}(s)}{1+s\tau_\mathrm{e}}.\label{eq:isilaplace}\end{equation}
Thus, mean and variance of \(p_\mathrm{isi}(t)\) are given by
\begin{equation}\mu_\mathrm{isi}={\varpi}\tau_\mathrm{e}+ \tau_\mathrm{lc}\label{eq:samplemean}\end{equation}
and
\begin{equation}\sigma_\mathrm{isi}^2=(2-{\varpi}){\varpi}\tau_\mathrm{e}^2+\tau_\mathrm{lc}^3\sigma_\mathrm{lc}^2.\label{eq:samplevar}\end{equation}
For the high firing rates present in HOM neurons with a small
saddle-homoclinic orbit, the mean escape time \(\tau_\mathrm{e}\) is the
longest time scale in the system and can be estimated independently by
fitting a histogram of the largest ISI samples. For low noise,
\(\tau_\mathrm{lc}\) can be estimated as the peak of the ISI histogram.
Then, using \cref{eq:samplemean}, the mixing factor \({\varpi}\) can be
estimated.
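The resulting estimator is a one-liner; it assumes \(\tau_\mathrm{e}\)
was fitted to the histogram tail and \(\tau_\mathrm{lc}\) read off the
histogram peak.
\begin{verbatim}
def varpi_from_moments(mean_isi, tau_e, tau_lc):
    # invert the mean formula: mu_isi = varpi * tau_e + tau_lc
    return (mean_isi - tau_lc) / tau_e
\end{verbatim}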
\hypertarget{sec:multimodal}{%
\section{Multimodal ISI densities in bistable
neurons}\label{sec:multimodal}}
Neuronal bistability at a separatrix connected to the stable manifold of
a saddle is not the only known bistability in single neuron dynamics.
Already in Hodgkin and Huxley's equations for the squid axon a
coexistence of resting and spiking was found for a small parameter range
~{[}34{]}. In that case, for increasing input, a stable and an unstable
limit cycle originate from a fold of limit cycle bifurcation and the
unstable limit cycle subsequently terminates in a subcritical Hopf
bifurcation, which also changes the stability of the fixpoint. ISI
histograms estimated from numerical simulations of the squid model with
noise ~{[}25,26{]}, as well as analytical calculations with simplified
resonate-and-fire type models ~{[}35,36{]}, have suggested the presence
of multimodal peaks in the ISI density. This raises the question of
whether the kind of bistability in homoclinic neurons treated here can
produce multimodal ISI densities, too, or whether this hallmark can be
used to differentiate between the two kinds of bistability.
\hypertarget{hom-case}{%
\subsection{HOM case}\label{hom-case}}
\label{sec:unimodalHOM}
To answer the question of multimodality, the modes of the components of
the mixture are examined. The inverse Gaussian, \(p_\mathrm{lc}(t)\),
has a single mode at
\[\hat t_\mathrm{lc}=\tau_\mathrm{lc}\left( \sqrt{1+\frac{9}{16}\tau_\mathrm{lc}^2\bar D_\mathrm{lc}^2}-\frac{3}{4}\tau_\mathrm{lc}\bar D_\mathrm{lc}\right),\]
as follows from setting the derivative of \cref{eq:invGauss} to zero.
The convolution with an exponential kernel does not produce additional
peaks, and hence \(p_\mathrm{fp}(t)\) as defined by the convolution in
\cref{eq:convPDF} is unimodal, too. The derivative of
\(p_\mathrm{fp}(t)\) is
\(\tau_\mathrm{e}p_\mathrm{fp}'(t)=p_\mathrm{lc}(t)-p_\mathrm{fp}(t)\).
Setting this derivative to zero, one finds a single mode
\(\hat t_\mathrm{fp}\), which satisfies
\begin{equation}p_\mathrm{lc}(\hat t_\mathrm{fp})=p_\mathrm{fp}(\hat t_\mathrm{fp}),\label{eq:modeinter}\end{equation}
\emph{i.e.}, the single mode is located at the crossing of the two
distributions.
The curvature of \(p_\mathrm{fp}\) is given by
\begin{equation}p_\mathrm{fp}''(t)=\frac{1}{\tau_\mathrm{e}}(p_\mathrm{lc}'(t) -
p_\mathrm{fp}'(t)).\label{eq:curvaturefp}\end{equation} The curvature at
the mode is thus given by
\(p_\mathrm{fp}''(\hat t_\mathrm{fp})=p_\mathrm{lc}'(\hat t_\mathrm{fp})/\tau_\mathrm{e}.\)
The curvature is negative because \(\hat t_\mathrm{fp}\) corresponds to
a maximum. Hence, the mode of \(p_\mathrm{fp}(t)\) is to be found on the
declining part of \(p_\mathrm{lc}(t)\), \emph{i.e.},
\(\hat t_\mathrm{lc}<\hat t_\mathrm{fp}\).
The modes of the mixture distribution are confined to lie in the
interval \([\hat t_\mathrm{lc},\hat t_\mathrm{fp}]\). Within this
interval between both individual peaks, \(p_\mathrm{lc}'(t)<0\) and
\(p_\mathrm{fp}'(t)>0\), such that \cref{eq:curvaturefp} implies the
concavity of \(p_\mathrm{fp}(t)\). Let \(\tilde t\) denote the
inflection point of the declining part of the inverse Gaussian
distribution, \(p_\mathrm{lc}(t)\). The distribution
\(p_\mathrm{lc}(t)\) is concave on the interval
\([\hat t_\mathrm{lc},\tilde t]\). Within the interval
\([\hat t_\mathrm{lc},\min(\tilde t,\hat t_\mathrm{fp})]\), both
distributions, \(p_\mathrm{lc}(t)\) and \(p_\mathrm{fp}(t)\), are
concave, which permits no more than a single peak for the mixing
distribution. If the inflection point lies beyond the mode of
\(p_\mathrm{fp}(t)\), \emph{i.e.}, \(\hat t_\mathrm{fp}<\tilde t\), this
implies unimodality of \(p_\mathrm{isi}(t)\). In the other case,
\(\hat t_\mathrm{fp}>\tilde t\), there can be no more than one peak on the
interval \([\hat t_\mathrm{lc},\tilde t]\). For unimodality, it remains
to be shown that the mixing distribution decays on the interval
\([\tilde t, \hat t_\mathrm{fp}]\). Within this interval, let us assume
that \(\tau_\mathrm{e}\) is the longest time scale in the system.
According to \cref{eq:curvaturefp}, the derivative \(p_\mathrm{fp}'(t)\)
can be made arbitrarily small compared to the derivative
\(p_\mathrm{lc}'(t)\) by increasing \(\tau_\mathrm{e}\). This means that
for sufficiently large \(\tau_\mathrm{e}\), the derivative
\(p_\mathrm{isi}'(t)\) is, within the interval
\([\tilde t, \hat t_\mathrm{fp}]\), dominated by
\(p_\mathrm{lc}'(t)\), and is thus negative, with no
possibility for a peak.
Coming back to the question of the modality of the bistable homoclinic
ISI density, it can be asserted that for large \(\tau_\mathrm{e}\), which
occurs close to \({I_{\mathrm{hom}}}\), and with all other assumptions
used in this article, the ISI density is unimodal. This is in contrast
to at least a large proportion of bistable Hopf neurons and could offer
a way to distinguish these regimes.
\hypertarget{subcritical-hopf-case}{%
\subsection{Subcritical Hopf case}\label{subcritical-hopf-case}}
\label{sec:hopf}
The second type of bistability in conductance-based neuron models
originates from the case where the stable limit cycle is born together
with an unstable one out of a fold of limit cycles. The unstable limit
cycle shrinks and may either be destroyed by a homoclinic bifurcation to
a saddle ~{[}11{]}, or, as in the Hodgkin-Huxley equations, in a
subcritical Hopf bifurcation; the latter is the case discussed here and
shown in \cref{fig:stat_subHopf}b. The noise-driven dynamics of this latter
case has been investigated in numerical simulations, where the ISI
density was reported as multimodal ~{[}25,26,37{]}. Furthermore, some
geometric considerations about probable exit regions across the unstable
limit cycle have been made ~{[}37{]} and will be discussed in the
following. While at present it is not fully understood how universal
this multimodality is, it can be distinguished from the homoclinic case,
which is not multimodal for sufficiently long escape times from the
fixpoint as argued in Sec. \ref{sec:unimodalHOM}.
In the Hopf case, too, interspike intervals could be categorised into
trajectories cycling around the stable limit cycle and those that visit
the fixpoint region by crossing the circular separatrix given by the
unstable limit cycle as shown in \cref{fig:stat_subHopf}a. Within the
basin of attraction of the stable focus, the dynamics can be linearised,
\begin{equation}\dot{\boldsymbol x}=\boldsymbol J\boldsymbol x+\boldsymbol
B\boldsymbol\xi,\label{eq:linsys}\end{equation} using the Jacobian
at the fixpoint, \(\boldsymbol J\), and given the diffusion matrix,
\(\boldsymbol D=\frac12\boldsymbol B\boldsymbol B^\dagger\). Assuming
there is no focus-to-node transition in the region of bistability
~{[}11{]}, \(\boldsymbol J\) has a pair of complex conjugate
eigenvalues with negative real parts. In this case the dynamics shows
noise-induced oscillations (\emph{alias} quasicycles or subthreshold
oscillations) ~{[}38{]}. This class of noisy oscillations that do not
require a deterministic limit cycle can still be described as phase
oscillators using the recently found methods of backward and forward
phases ~{[}39,40{]}. Their average period is given by
\(\tau_\mathrm{H}=2\pi/\omega_\mathrm{H}\) ~{[}40{]}, where
\(\omega_\mathrm{H}\) is the frequency given by the imaginary part of
the Jacobian's eigenvalues at the focus. What determines whether or not
these quasi oscillations are reflected in the ISI density?
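Given the Jacobian at the focus, the quasicycle period follows directly
from the imaginary part of its complex eigenvalue pair, as in this
sketch.
\begin{verbatim}
import numpy as np

def quasicycle_period(J):
    # tau_H = 2 pi / omega_H, with omega_H from the complex eigenvalue pair
    evals = np.linalg.eigvals(J)
    omega_H = np.max(np.abs(evals.imag))
    return 2.0 * np.pi / omega_H
\end{verbatim}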
For random crossings of the separatrix, one might be inclined to think
that the deterministic definition of phaseless sets ~{[}41{]} implies
that the phases of the spiralling dynamics inside the unstable orbit
will not automatically carry over to the phases of the stable limit
cycle, defined by its isochrons, and thus not influence spike timing
histograms. \emph{A fortiori}, the isochrons of the stable limit
cycle foliate around the unstable LC ~{[}41,42{]}, so that, in a
deterministic setting, all phases would be available to small
perturbations crossing the separatrix, and the subthreshold phase
could be scrambled. Yet, this deterministic view is misleading, as shown
by numerous simulation studies reporting multimodality ~{[}25,26{]}.
Motivated in part by realistic noise sources such as ion channel
fluctuations, previous considerations have focused on escape from the
fixpoint which is restricted to a region on the unstable LC as a
mechanism for multimodality and found additional peaks ``in all ISI
histograms {[}that{]} have been examined'' ~{[}37{]}. Indeed, the jump
into the unstable limit cycle is typically confined to a region in state
space, because action potentials with a required signal strength (AP
height) need to have a resting state fixpoint that is in one corner of
the AP limit cycle, away from the peak voltage. Due to the closeness of
the AP trajectory and the unstable LC, transitions are likely to occur
in this proximity, in particular if noise were to be restricted to the
voltage dimension. The idea is that, each time another subthreshold
cycle is completed, there is a probability of jumping out and
initialising a spike in a narrow region of state space. It was previously concluded
that ``the second peak arises when the trajectory {[}\ldots{]} spirals
round in the vicinity of the fixpoint for one cycle, and as the next
cycle starts, it switches back to the stable limit cycle vicinity, thus
creating an ISI roughly twice as long as the period of the stable limit
cycle'' ~{[}37{]}. Actually, the quoted premise would not lead to a peak
at double \(\tau_\mathrm{lc}\); with that reasoning, the modes would
instead be located at
\[\tau^\mathrm{mode}_\mathrm{H}=\tau_\mathrm{lc}+k\tau_\mathrm{H},\quad\text{for
$k=0,1,..$.}\]
This predicted position of the secondary peaks in the ISI density is,
however, still off, as can be seen in \cref{fig:stat_subHopf}c. A hint
as to why may be inferred from the path simulation in
\cref{fig:stat_subHopf}a: since the unstable limit cycle is not
very repellent, trajectories close to it will cycle around it for some
time before switching, in a specific region, to the large stable limit
cycle of a spike. In that case the modes should be found at multiples of
the period of the unstable limit cycle, \(\tau_\mathrm{s}\),
\[\tau^\mathrm{mode}_\mathrm{s}=\tau_\mathrm{lc}+k\tau_\mathrm{s},\quad\text{for
$k=0,1,..$,}\] which fits better to the data.
\begin{figure}
\hypertarget{fig:stat_subHopf}{%
\centering
\includegraphics{Figure9.pdf}
\caption{(a) Phase portrait and voltage trace of a stochastically
bursting subcritical Hopf neuron are shown in grey. Stable and unstable
limit cycle are shown as solid and dashed black line. Bistability region
is found for DC input parameters between a fold of limit cycles and a
subcritical Hopf bifurcation. The separatrix is an unstable limit cycle.
(b) (A)-(D) show phase portraits before, after and during the emergence
of the unstable limit cycle. (c) Multimodal ISI histogram for noise
parameters derived from the analytically approximated unstable limit
cycle, shown as the dotted ellipse in (a). Parameters of the
neuron model can be found in ~{[}29{]}, Fig.
6.16.}\label{fig:stat_subHopf}
}
\end{figure}
In order to highlight that ``regionalised exit'' ~{[}37{]} is not a
prerequisite for multimodality in the ISI, the noise sources in the
system were carefully tuned to render exit over the entire unstable
limit cycle equally likely. Though this might be an unlikely scenario in
a real nerve cell, it is a useful theoretical thought experiment to
understand the requirements for multimodality. The scenario arises for
small unstable limit cycles accompanied by correspondingly small noise
with a special structure \(\boldsymbol D\). The diffusion matrix
has to be chosen such that the covariance matrix \(\boldsymbol\Sigma\)
of the stationary solution of \cref{eq:linsys} matches with the geometry
of the surrounding unstable limit cycle. Close to the Hopf bifurcation
the emerging unstable limit cycle can be approximated as an ellipse
~{[}43{]}:
\[\Gamma(t)=\varepsilon(\boldsymbol q_\Re\cos(\omega_\mathrm{H}t)-\boldsymbol q_\Im\sin(\omega_\mathrm{H}t)),\]
where \(\boldsymbol q_\Re\) and \(\boldsymbol q_\Im\) are the real and
imaginary part of the right eigenvector corresponding to the Hopf
bifurcation. The covariance matrix \(\boldsymbol\Sigma\) matches the
geometry of the ellipse if
\(\boldsymbol\Sigma=\boldsymbol q_\Re\cdot \boldsymbol q_\Re^\top+\boldsymbol q_\Im\cdot \boldsymbol q_\Im^\top\).
The diffusion matrix is then chosen as ~{[}28,40{]}
\begin{equation}\boldsymbol D=-(\boldsymbol J\boldsymbol\Sigma+\boldsymbol\Sigma\boldsymbol J^\top).\label{eq:optnoise}\end{equation}
With this choice of \(\boldsymbol D\), exit through each segment of the
unstable limit cycle is equiprobable.
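The construction of this uniform-exit noise can be sketched directly
from the Jacobian at the focus; the Hopf eigenvector is taken here as
the one with the largest imaginary part of its eigenvalue.
\begin{verbatim}
import numpy as np

def uniform_exit_diffusion(J):
    # Sigma built from the Hopf eigenvector so that the stationary Gaussian
    # matches the elliptic unstable limit cycle; then D = -(J Sigma + Sigma J^T)
    evals, vecs = np.linalg.eig(J)
    q = vecs[:, np.argmax(evals.imag)]
    Sigma = np.outer(q.real, q.real) + np.outer(q.imag, q.imag)
    return -(J @ Sigma + Sigma @ J.T)
\end{verbatim}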
The additional example of uniform exit highlights the fact that
(\emph{i}) the subthreshold backward phase of the quasicycles with
associated period \(\tau_\mathrm{H}\), (\emph{ii}) the phase of the
unstable limit cycle with associated period \(\tau_\mathrm{s}\) and
(\emph{iii}) the phase associated with the stable limit cycle and its
isochrons, with period \(\tau_\mathrm{lc}\), are all connected. This
is different from the saddle case, where paths are contracted on the
separatrix into almost a single point, and hence the previous dynamics
is forgotten.
\hypertarget{discussion}{%
\section{Discussion}\label{discussion}}
Interspike-interval distributions are commonly investigated to
characterise spiking behaviour in neurons. Experimentally, these
distributions are easily measured by observing spike trains in response
to step currents or noise injections. Theoretical distributions have
been derived for several types of neuron models, in particular the
Poissonian distribution for fluctuation-driven integrate-and-fire-type
or conductance-based neuron models ~{[}10{]}, and the inverse Gaussian
distribution for mean-driven neurons with a SNIC bifurcation at spike
onset ~{[}27,33,44{]}. Here, the interspike-interval distribution for
neurons with a saddle-homoclinic orbit bifurcation, from which the limit
cycle spike emerges, was derived within the bistable regime. These
neurons show, close to spike onset, a region of bistability between
resting state and spiking, and if the dynamics visits the resting state
between two spikes, particularly long interspike intervals can ensue.
But can the present statistical analysis help to discern HOM/SNL, SNIC
and Hopf bifurcations in recordings? Fitting inverse Gaussian,
exponential or the bistable ISI density, as derived here, to recordings
and comparing the model likelihood can be construed as supportive
evidence for one or the other mechanism. However, for generalised
inverse Gaussian distributions, it was shown that several diffusion
processes can in principle result in the same waiting time distribution,
or, conversely, ISI distributions cannot be uniquely mapped to their
underlying diffusion processes ~{[}45{]}. Therefore, caution is
warranted not to overstress the generality of one's inference about the
mechanistic cause. Nonetheless, features of the ISI density, such as its
skewness, have been related to underlying biophysical processes such as
adaptation currents ~{[}44,46{]}. A question similar in spirit is
whether neuronal bistability is uniquely tied to the ISI distribution
derived in the present article.
In terms of the underlying bifurcation structure at least one other
scenario giving rise to bistability of spiking and resting has been
described previously and in Sec.\ \ref{sec:hopf}: The subcritical Hopf
bifurcation in association with a fold of limit cycles -- present in the
equations derived for the classical squid axon -- also leads to a region
of bistability and hysteresis ~{[}34{]}. In combination with noise,
numerical investigations ~{[}25,26{]} indicated that the ISI
distribution is multi-modal for the tested parameter combinations. At
present, no parameters have been documented for which multimodality does
not manifest in the ISI density. In the case of simplified
resonate-and-fire models, the ISI distribution has been investigated
analytically and multimodal peaks were confirmed ~{[}35,36{]}. In
contrast, the present manuscript argues for the absence of multimodality
in the homoclinic-type bistability and hence this difference may be
exploited to distinguish both kinds of bistability.
As was argued, bistability in homoclinic neurons can lead to spike-time
patterns that resemble spiking observed experimentally in neurons,
such as ``stuttering cells'' ~{[}1,2{]} or ``irregular spiking cells''
~{[}3--5{]}. Some cells show membrane-voltage bistability in the form of
distinct downstates and upstates ~{[}47{]}. The likelihood of seeing
this dynamics seems to be increased during sleep and under certain
anesthetics. The emergence of up-/downstates is associated with altered
concentration dynamics in the intra- and extracellular space. Since the
required time scale separation to induce the SNL bifurcation can also be
achieved by modifying reversal potentials ~{[}48{]}, the resulting
homoclinic bistability may be a putative, alternative mechanism
underlying some of the up- and downstate dynamics observed in neurons.
While up- and downstates have been modeled previously as bistable
fixpoints in an integrate-and-fire-like model ~{[}14,18{]}, the
bistability between resting state and spiking dynamics introduced here
is easily implemented in biologically more realistic conductance-based
neuron models.
The emergence of bistability in neurons changes their coding properties,
too. It has been noted that, in the absence of noise, rate coding in
neurons close to a SNIC bifurcation is undermined by undesirable
nonlinearities. Bursting neurons, in contrast, have been shown to
linearise the rate-tuning curve, which is more favorable for coding
~{[}49{]}. Furthermore, in a
network, bistability in the membrane voltage has been shown to increase
the power for certain frequency bands of a population transfer function
~{[}18{]}. In a similar way, the filtering associated with individual
homoclinic neurons can transfer considerably higher frequencies during
the spiking periods ~{[}50{]}. Hence, spike-timing based codes can
benefit from the high-frequency coding arising from the symmetry
breaking that is induced by the switch in spike generation from SNIC to
HOM at the SNL point ~{[}12{]}. The option to visit the fixpoint before
spiking adds to the versatile coding possibilities of these neurons when
explicitly considering their bistability. An open question is whether the
interspike-interval distribution of the bistable neuron has favourable
properties similar to the power-law interspike interval density
appearing in some theories of optimal coding ~{[}52{]}: The mutual
information between a fast stimulus and the emitted spike train is
bounded from above by the output entropy of the alphabet (\emph{i.e.},
the spike train entropy) ~{[}53{]}. The spike train entropy is increased
by more diverse spike patterns arising from the stochastic bursting
responses in the bistable regime, compared to the tonic response of a
SNIC neuron. It remains to be shown how the conditional entropy is
influenced, which also contributes to the system's information
transmission rate.
Early theories of spiking network dynamics have, for simplicity, assumed
identical neurons. Networks of such identical, yet highly stochastic,
spiking neurons can generate global rhythms ~{[}54{]}. Even with mild
heterogeneity in the network delays, these phenomena seem to persist
~{[}55{]}. Neuron-intrinsic heterogeneity has also been investigated in
networks of leaky integrate-and-fire (IF) units, using randomly
distributed thresholds. Under a rate-coding regime, an optimal level of
heterogeneity was suggested ~{[}56{]}. Yet leaky IF models lack
the rich bifurcation structure of conductance-based models.
Particularly, heterogeneity in thresholds will not produce the drastic
and critical changes described in the present article. The impact of
heterogeneity in single-neuron parameters that bring about SNL
bifurcations in networks may be surmised to be substantial, but this
awaits further study.
Integrate-and-fire models focus on capturing only the spike upstroke
dynamics while relying on a reset for the spike downstroke. These models
have been used to investigate the influence of rapid upstroke dynamics
(in part influenced by Na\textsuperscript{+} channel dynamics) on coding
~{[}57{]}, and network dynamics ~{[}58{]}. The quadratic IF model can be
derived from the centre manifold reduction of saddle-node bifurcations
~{[}30{]} and can, with an appropriately chosen reset, serve as the
``normal form'' of the bifurcation structure in
\cref{fig:fIcurve_snl}(b) ~{[}21,29{]}. Then again, this article shows
the switching dynamics to occur outside the centre manifold dynamics
during the downstroke along the strongly stable direction. Hence, the
window of opportunity for jumping the separatrix is more related to the
timescale of the K\textsuperscript{+} channel dynamics. Nonetheless, the
quadratic IF with a reset above the unstable fixpoint and noise will
produce the same ISI dynamics as derived here and can thus be taken as a
simplified form in network theories of homoclinic bistable neurons.
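To make this concrete, the following Euler--Maruyama sketch simulates
such a noisy quadratic IF neuron with the reset placed above the
unstable fixpoint; all parameter values are illustrative and not fitted
to the conductance-based model used above. Long interspike intervals
appear whenever the noise pushes the voltage below the separatrix into
the basin of the resting state, inflating the interval variability.
\begin{verbatim}
import numpy as np

# Noisy quadratic IF: dV = (V^2 + I) dt + sigma dW, with I < 0 so a
# stable rest state V- = -sqrt(-I) coexists with an unstable fixpoint
# V+ = +sqrt(-I).  Illustrative parameters, not fitted to the model.
rng = np.random.default_rng(0)
I, sigma = -1.0, 1.0               # subthreshold drive, noise strength
dt, T = 1e-3, 500.0                # time step and total duration
V_spike, V_reset = 10.0, 1.5       # threshold and reset (here V+ = 1)

V, t_last, isis = V_reset, 0.0, []
for step in range(int(T / dt)):
    V += (V * V + I) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if V >= V_spike:
        t = (step + 1) * dt
        isis.append(t - t_last)    # record the interspike interval
        t_last, V = t, V_reset     # reset above the unstable fixpoint

isis = np.asarray(isis)
print(len(isis), isis.mean(), isis.std() / isis.mean())  # count, mean, CV
\end{verbatim}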
In summary, the interspike interval distribution derived in this paper
is useful on various levels. It provides an experimental check for
bistability due to homoclinic spike generation, conveys information on
coding properties, and forms the basis for a mean-field theory for
networks with bistable single-neuron dynamics. Translated to other
oscillating systems, the analysis might even inform about homoclinic
bistability beyond the neurosciences.
\hypertarget{acknowledgments}{%
\section{Acknowledgments}\label{acknowledgments}}
Funded by the German Federal Ministry of Education and Research (No.
01GQ1403) and the Deutsche Forschungsgemeinschaft (No.~GRK1589/2).
Authors J.-H.S. and J.H. contributed equally to this work. The authors
would like to thank Paul Pfeiffer for useful comments on the manuscript.
\section{Introduction}
Far-infrared spectroscopy and transport measurements were from early on
used to investigate the electronic structure\cite{Reimann02:1283} of quantum dots of various
types. Far-infrared spectroscopy of arrays of quantum
dots\cite{Demel90:788a,Shikin91:11903a} turned out to be rather
insensitive to the exact form of the interaction between the electrons.
The reason is that most arrays of quantum dots resulted in almost
parabolic confinement of electrons to individual dots in the low-energy regime.
Soon it was realized that an exact symmetry condition, known as the
extended Kohn theorem,\cite{Kohn61:1242}
is valid for such systems as long as each
dot is much smaller than the wavelength of the dipole radiation, and results in a pure
center-of-mass motion of the electrons in each dot, independent of the number of
electrons and the nature of the interaction between them.
Signatures of deviations from the parabolic confinement were soon discovered in
experimental results and interpreted with model calculations based on various
approaches to linear response.\cite{Pfannkuche91:13132,Gudmundsson91:12098,Pfannkuche93:6}
The Coulomb blockade helped
guarantee a definite number of electrons in each quantum dot homogeneously across the large arrays
that were necessary to allow measurement of the weak FIR absorption signal.
Deviations from the parabolic confinement of electrons in quantum dots
lead to the excitation of internal collective modes that can cause splitting of the
upper plasmonic branch and make visible the classical Bernstein\cite{Bernstein58:10}
modes.\cite{Gudmundsson95:17744,Krahne01:195303}
In the lower plasmonic branch they lead to weak oscillations caused by filling-factor-dependent
screening properties.\cite{Bollweg96:2774,Darnhofer96:591}
Resonant Raman scattering has been applied to quantum dots to analyze ``single-electron''
excitations and collective modes with monopole, dipole, or quadrupole symmetry
($\Delta M=0,\pm1,\pm2$).\cite{Steinebach99:10240,Steinebach00:15600}
As the monopole collective oscillations are excitations that can be exclusively
described by internal relative coordinates, one would expect them to be more influenced by
the Coulomb interaction between the electrons than the dipole excitations that have to be
described by relative and center-of-mass coordinates, or purely by the latter ones
when the Kohn theorem holds.\cite{Kohn61:1242}
The $\Delta M=0$ collective mode among
others was measured by a very different method and calculated for a confined two-dimensional
electron system in the classical regime on the surface of liquid helium.\cite{PhysRevLett.54.1710}
In the far-infrared and the Raman measurements of arrays of dots the excitation has
always been weak and some version of linear response has been an adequate approach to
interpret the experimental results. All the same, curiosity has driven theoretical groups
into questioning how the electron system in a quantum dot would respond as the linear
regime is surpassed and a strong excitation would pump energy into the
system.\cite{Puente01:235324,Gudmundsson03:161301,Gudmundsson03:0304571}
These studies have been undertaken with some kind of a mean-field model to incorporate the
Coulomb interaction between the electrons.
Here, we will explore this nonlinear excitation regime with a model built on exact numerical
diagonalization or configuration interaction (CI)\cite{Pfannkuche93:6} and compare the results
with the predictions of three different mean-field approaches, and a time-dependent Hubbard model.
Besides the question of what happens in the nonlinear regime, we want to see how close to the
exact results the mean-field models can come for only two electrons in the dot, a regime that
is indeed challenging for mean-field approaches, which in general are more appropriate
for a higher number of electrons.
We will address issues of nonlinear behavior. What do we classify as nonlinear behavior?
Can we see it emerging in an exact model?
How and when is it inherent in a mean-field approach?
\section{Short excitation in the THz regime}
In order to describe the response to an excitation of arbitrary strength
we will follow the time-evolution of the system by methods that are
appropriate to each model.
At $t=0$ the quantum dot is irradiated by a short THz pulse
\begin{eqnarray}
W(t) &=& V_t r^{|N_p|}\cos{(N_p\phi)}\exp{(-sr^2-\Gamma t)}\nonumber\\
&{\ }&\times\sin{(\omega_1t)}\sin{(\omega t)}\theta (\pi -\omega_1t),
\label{Wt}
\end{eqnarray}
where $\theta$ is the Heaviside step function. For the purpose of making the
response strongly dependent on the Coulomb interaction between the electrons,
we select the monopole or breathing mode with $N_p=0$. It should be kept in
mind that this short excitation pulse perturbs the system in a wide frequency range.
The quantum dot will have a parabolic confinement potential
\begin{equation}
V_{\mathrm{par}}(r)=\frac{1}{2}m^*\omega_0^2r^2,
\label{Vpar}
\end{equation}
with $\hbar\omega_0 = 3.37$ meV. In addition, we will sometimes add a small
potential hill in the center of the dot
\begin{equation}
V_{\mathrm{c}}(r) = V_0\exp{(-\gamma r^2)},
\label{Vc}
\end{equation}
with $V_0 = 3.0$ meV, and $a^2\gamma = 1.0$, where $a = \sqrt{\hbar/(m^*\omega_0)}$
is the characteristic length scale for the parabolic confinement. We will be assuming
GaAs parameters here with $m^*=0.067m_e$ and a dielectric constant $\kappa = 12.4$.
If we select $sa^2 = 0.8$, $\hbar\omega_1 = 0.658$ meV, $\hbar\omega = 2.63$ meV,
and $\Gamma =2.0$ ps$^{-1}$,
then the initial pulse, of approximately $3$ ps duration, represents a spatially circular Gaussian
pulse rising from zero and vanishing after its amplitude has turned negative.
The system is perturbed by a radial compression followed by a slight radial expansion
and then left to oscillate freely about the equilibrium point. The system will be kicked
out of equilibrium and the time-evolution has to be described accordingly for each model.
The reason for adding the central hill (\ref{Vc}) to the quantum dot is to avoid
any special symmetry that could result from the parabolic confinement (\ref{Vpar}).
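As a small numerical illustration (not part of the model calculations
proper), the temporal profile of the pulse at the dot center, $N_p=0$
and $r=0$, can be evaluated directly from the parameters above; here
$\hbar=0.6582$ meV$\,$ps converts the quoted energies to angular
frequencies, and the amplitude is given in units of $V_t$.
\begin{verbatim}
import numpy as np

# Temporal profile of the pulse (in units of V_t) at the dot center.
hbar = 0.6582                        # meV ps
w1 = 0.658 / hbar                    # ~1.0 ps^-1
w = 2.63 / hbar                      # ~4.0 ps^-1
Gamma = 2.0                          # ps^-1

t = np.linspace(0.0, 4.0, 4001)      # ps
W = np.exp(-Gamma * t) * np.sin(w1 * t) * np.sin(w * t) * (w1 * t < np.pi)

print("pulse window ends at", np.pi / w1, "ps")    # ~3.1 ps
print("first extremum at", t[np.argmax(W)], "ps")
print("opposite-sign extremum at", t[np.argmin(W)], "ps")
\end{verbatim}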
\section{Time-evolution of quantum dot Helium with a DFT interaction}
The details of a density functional theoretical (DFT) approach to the model
used to describe nonadiabatic excitation of electrons in a quantum
ring or dots in an external magnetic field have been published
earlier.\cite{Gudmundsson03:161301,Gudmundsson03:0304571}
Here, we will use the model for a vanishing external magnetic field and
properly make clear the difference between the calculation of the time-evolution
in this mean-field model and in the CI model. To accomplish this we need to list a few steps.
The ``single-electron'' energy spectrum of the model is presented in
Fig.\ \ref{E-rof-dft} at temperature $T=0.1$ K and for a small hill (\ref{Vc}) placed
in the center of the system.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig01.pdf}
\caption{(Color online) The effective single-electron energy spectrum
for the DFT-version of the model of two electrons in a
parabolic quantum dot with a small central hill (\ref{Vc})
as a function of the quantum number of angular momentum $M$.
The chemical potential, $\mu$, needed to have two electrons
in the dot is indicated by a solid green horizontal line.
$V_0=3$ meV, $T=0.1$ K.}
\label{E-rof-dft}
\end{figure}
The finite but small temperature is used to stabilize the iteration process
used to solve the DFT model. The chemical potential $\mu$ needed to have
two electrons in the ground state of the system is indicated in the figure
by a horizontal green line.
The calculation is a ``grid-free'' approach utilizing the eigenstates of the
noninteracting system as a functional basis $\{|nM\rangle\}$. The interacting
states $|\alpha )$ cannot be assigned definite quantum numbers $n$ and
$M$, but as the system is circularly symmetric here, by comparing the location
in the energy spectrum and by checking the leading
contribution to the interacting states we allow ourselves to assign, for educational
purposes, the quantum numbers shown in Fig.\ \ref{E-rof-dft}. The central hill (\ref{Vc})
and the Coulomb interaction raise the energy of the states with high $M=0$ contribution.
To calculate the time-evolution of the system kicked out of equilibrium
by the perturbing pulse (\ref{Wt}) we use the Liouville-von Neumann equation for the density operator
\begin{equation}
i\hbar \frac{d}{dt}{\rho}(t) = [H + W(t),\rho (t)],
\label{L-vN-dft}
\end{equation}
represented in the noninteracting basis $\{|n,M\rangle\}$. The structure of this equation is
inconvenient for numerical evaluation, so we resort instead to the time-evolution operator $T$,
defined by $\rho (t) = T(t)\rho_0T^+(t)$, which has the simpler equation of motion
\begin{eqnarray}
i\hbar\dot T(t) &=& H(t)T(t)\nonumber\\
-i\hbar\dot T^+(t) &=& T^+(t)H(t).
\label{Teq}
\end{eqnarray}
The single-electron basis is truncated after tests for convergence of the time-evolution
with the parameters used here. We discretize time and use the Crank-Nicolson algorithm
for the time-integration, with the initial condition $T(0)=1$.
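For clarity, a minimal sketch of one such Crank-Nicolson step is given
below, applied to a toy Hermitian matrix that stands in for the
truncated effective Hamiltonian in the $\{|nM\rangle\}$ basis (an
assumption for illustration only); for a Hermitian Hamiltonian the
update is a Cayley transform and hence exactly unitary.
\begin{verbatim}
import numpy as np

def cn_step(T, H_mid, dt, hbar=1.0):
    """One Crank-Nicolson step for i*hbar dT/dt = H(t) T, with H_mid
    the Hermitian Hamiltonian at the midpoint of the step."""
    n = T.shape[0]
    A = np.eye(n) + 0.5j * dt / hbar * H_mid
    B = np.eye(n) - 0.5j * dt / hbar * H_mid
    return np.linalg.solve(A, B @ T)

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = 0.5 * (M + M.conj().T)           # toy stand-in for the Hamiltonian
T = np.eye(6, dtype=complex)         # initial condition T(0) = 1
for _ in range(1000):
    T = cn_step(T, H, dt=0.01)
print(np.allclose(T.conj().T @ T, np.eye(6)))   # unitarity kept: True
\end{verbatim}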
The circular symmetry of the confinement potential (Eqs.\ (\ref{Vpar}) and (\ref{Vc})) and of the
excitation pulse (\ref{Wt}) suggests the mean value of the radius squared as an
ideal observable to analyze. In Fig.\ \ref{r2-dft-T01-hill} we show $\langle r^2\rangle$
as function of time $t$ and the strength of the perturbing pulse $V_t$.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig02.pdf}
\caption{(Color online) The time-evolution of the expectation value
$\langle r^2\rangle$ as function of the strength of the initial
perturbation pulse, $V_t$, for the DFT-version of the model of
two electrons in a quantum dot. $V_0=3$ meV, $T=0.1$ K.}
\label{r2-dft-T01-hill}
\end{figure}
We see already in Fig.\ \ref{r2-dft-T01-hill} that the amplitude of the response to the
initial perturbation (\ref{Wt}) is nonlinear. To analyze this better we show the Fourier
transform in Fig.\ \ref{FFT-r2-dft-T01-hill}(a), where we indeed see a local minimum around
$V_t\approx 35-40$ meV.
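The same spectral analysis is applied to all models in this paper;
schematically, with a sinusoidal placeholder standing in for the
simulated time series, it amounts to the following (the Hann window is
our choice here to suppress leakage from the finite record):
\begin{verbatim}
import numpy as np

hbar = 0.6582                            # meV ps
dt, N = 0.05, 2000                       # sampling step (ps), length
t = dt * np.arange(N)
r2 = 1.0 + 0.1 * np.sin(2 * np.pi * t / 1.1)   # placeholder <r^2>(t)

window = np.hanning(N)                   # suppress spectral leakage
power = np.abs(np.fft.rfft((r2 - r2.mean()) * window))**2
E = hbar * 2.0 * np.pi * np.fft.rfftfreq(N, d=dt)   # energy axis, meV

print(E[np.argmax(power)], "meV")        # dominant peak, here ~3.8 meV
\end{verbatim}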
\begin{figure}[htbq]
\includegraphics[width=0.46\textwidth,angle=0]{Fig03a.pdf}
\includegraphics[width=0.46\textwidth,angle=0]{Fig03b.pdf}
\caption{(Color online) The Fourier power spectrum for the time-evolution of
$\langle r^2\rangle$ for the DFT-version of the model of two-electrons
in a quantum dot. The lower panel is a side view to demonstrate the
stability in frequency for different values of excitation $V_t$.
$V_0=3$ meV, $T=0.1$ K.}
\label{FFT-r2-dft-T01-hill}
\end{figure}
Curiously enough, this local minimum cannot be seen in the results if we turn off
the exchange and the correlation functionals in the DFT model, i.e.\ if we use a
Hartree approximation (HA) for the Coulomb interaction, see Fig.\ \ref{FFT-r2-hartree-T01}.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig04a.pdf}
\includegraphics[width=0.42\textwidth,angle=0]{Fig04b.pdf}
\caption{(Color online) The Fourier power spectrum as a function of energy and
perturbation strength $V_t$ for the Hartree Approximation.
$V_0=3.0$ meV, $T=0.1$ K.}
\label{FFT-r2-hartree-T01}
\end{figure}
For a later discussion we note here that the time-dependent HA calculations for the
present parameters are much more stable than the DFT version. We are thus able to go to
higher values of $V_t$ and observe the time-evolution for a longer time, resulting in more
accurate Fourier transforms. In the DFT or the HA model the part of the Hamiltonian describing
the effective Coulomb interaction remains time-dependent at all times, even after the
initial perturbing pulse has vanished, since the local effective potential depends on the
electron density, which is oscillating in time. It is thus no surprise that in these
mean-field models the occupations, i.e.\ the diagonal elements of the density matrix (\ref{L-vN-dft}),
remain time-dependent, as can be seen in Fig.\ \ref{Occupation-T}.
\begin{figure}[htbq]
\includegraphics[width=0.23\textwidth,angle=0,bb=0 0 160 245,clip]{Fig05a.pdf}
\includegraphics[width=0.23\textwidth,angle=0,bb=0 0 160 245,clip]{Fig05b.pdf}
\caption{(Color online) Time-dependent occupation of effective single-electron states (of the noninteracting
basis ${|nM\rangle}$) for the HA model with a central hill (\ref{Vc}) for
$V_t=10.0$ meV (left panel), and $V_t=200.0$ meV (right panel). $V_0=3.0$ meV, $T=0.1$ K.}
\label{Occupation-T}
\end{figure}
This time-dependence of the occupation and the effective interaction will be in
contrast to what happens in the CI calculation described below.
In a real, open system, the oscillations will be damped by interactions
with phonons\cite{PhysRevB.75.125324} or photons.\cite{PhysRevB.87.035314}
In the far-infrared regime the radiation time scale is much longer than the
100 ps during which we follow the evolution of the system here.
It is possible to construct the time-dependent induced density, $\delta n(r,t)=n(r,t)-n(r,0)$,
for the oscillations in the system, in the hope of monitoring the modes being occupied for different values of
$V_t$. In Fig.\ \ref{dn-dft-050-350} we see the induced density for the DFT model over approximately one
oscillation for $V_t=5$ and $35$ meV. It is clear that for the higher value of excitation
a second oscillation mode is superimposed on the fundamental mode visible for $V_t=5$ meV.
\begin{figure}[htbq]
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06a.pdf}
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06b.pdf}\\
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06c.pdf}
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06d.pdf}\\
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06e.pdf}
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06f.pdf}\\
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06g.pdf}
\includegraphics[width=0.21\textwidth,angle=0,bb=42 36 313 207,clip]{Fig06h.pdf}
\caption{(Color online) The induced density $\delta n(r,t)=n(r,t)-n(r,0)$ within one period
for the DFT model for $V_t=5$ meV (left), and $V_t=35$ meV (right).
$V_0=3.0$ meV, $T=0.1$ K.}
\label{dn-dft-050-350}
\end{figure}
For still higher excitation this becomes even more apparent. In Fig.\ \ref{E-rof-dft} the main
``single-electron'' contribution to this collective oscillation is indicated by an arrow between
$|00)$ and $|10)$. Higher excitation brings in a mixing from the $|10)$ to $|20)$ transition, and
higher temperature would activate transitions from $|0-1)$ to $|1-1)$, and from $|01)$ to $|11)$.
\section{Time-evolution of a quantum dot Helium described by a nonlinear
Schr{\"o}dinger-Poisson equation}
We will consider one more variant of a mean-field model for the two
Coulomb-interacting electrons in the quantum dot. This model could be considered a
version of the HA for a special case, but we investigate it here for a different
reason. It allows for the application of a nonlinear solution method to be described
at the end of this section.\cite{Reinisch11:699,Reinisch12:902}
Consider a $S=0$ electron pair located at $z_{1,2}=x_{1,2}+iy_{1,2}$ in the $x-y$
plane and confined by the 2D parabolic potential (\ref{Vpar}).
Since their spins are opposite, both electrons can stay, as fermions, in the same orbital state $\psi$.
Moreover, they obey pair orbital symmetry. Therefore the simplest
two-electron wavefunction $\Psi_{\mathrm{pair}}(z_1,z_2,t)$ is
\begin{equation}
\label{eq-orbital2el}
\Psi_{\mathrm{pair}}(z_1,z_2,t)=\psi(z_1,t) \psi(z_2,t),
\end{equation}
where $|\Psi_{\mathrm{pair}}(z_1,z_2,t)|^2$ is the probability density to find at time $t$ either electron at
$z_i$ while the other is at
$z_j$ ($i\neq j = 1,\,2$). Therefore, the normalization condition reads
\begin{equation}
\label{eq-norm}
\int d^2z_1 d^2z_2 |\Psi_{\mathrm{pair}}(z_1,z_2,t)|^2 =
\left[\int d^2z |\psi(z,t)|^2\right]^2=1.
\end{equation}
We assume that $\psi(z,t)\equiv \psi(x,y,t)$ is a
time-dependent nonlinear state defined by the following
Schr{\"o}dinger-Poisson (SP) differential system
\begin{align}
\label{eq-Schroe_vraie}
i\hbar \frac{\partial}{\partial t} \psi &= H \psi,\\
\label{eq-Poisson_vraie}
\nabla^2 \Phi &= -2\pi{\cal N}\hbar\omega |\psi|^2,
\end{align}
where ${\cal N}$ is a dimensionless
order parameter of the SP system
that defines the strength of the Coulomb repulsive interaction potential $\Phi$
between the particles in units of $\hbar\omega_0$ (in a loose sense, we call it the ``norm'';
see Eq.\ (\ref{eq-normu}) below).
\begin{equation}
\label{eq-H_vrai}
H =-\frac{\hbar^2}{2m^*} \nabla^2 +\Phi(x,y,t)
+ \frac{1}{2} m^* \omega_0^2 (x^2+y^2).
\end{equation}
Using the characteristic length $a$ of the parabolic confinement
and its frequency $\omega_0$ we perform the following
change of variables
\begin{equation}
\label{eq-change_var}
X=\frac{x}{a};\enspace Y=\frac{y}{a};\enspace \tau=\omega t
;\enspace \psi =\sqrt{\frac{2m^*\omega_0}{\hbar{\cal N}}}u(X,Y,\tau).
\end{equation}
Accordingly, Eq.\ (\ref{eq-norm}) becomes
\begin{equation}
\label{eq-normu}
\int |u(X,Y,\tau)|^2 dX dY ={\cal N},
\end{equation}
while the SP time-space differential system (\ref{eq-Schroe_vraie}-\ref{eq-H_vrai}) yields
\begin{equation}
\label{eq-Sdimless}
i\frac{\partial}{\partial \tau}u +\nabla_{X,Y}^2 u - V u =0,
\end{equation}
\begin{equation}
\label{eq-Pdimless}
\nabla_{X,Y}^2 V +|u|^2-1 =0,
\end{equation}
where $\nabla_{X,Y}$ operates on the new variables $X$ and $Y$.
The (time-dependent) effective mean-field dimensionless
potential experienced by the particles is
\begin{equation}
\label{eq-POTdimless}
V= \frac{\frac{1}{2} m^* \omega^2 (x^2+y^2)+\Phi}{\hbar\omega_0}
=\frac{1}{4}(X^2+Y^2)+\frac{\Phi}{\hbar\omega_0}.
\end{equation}
We wish to define the observable that allows comparison with
the previous sections. Labelling $\bar z=\frac{1}{2}(z_1+z_2)$,
$\bar x=\frac{1}{2}(x_1+x_2)$, and $\bar y=\frac{1}{2}(y_1+y_2)$,
we have $\bar z \bar z^*={\bar x}^2+{\bar y}^2$ and therefore
\begin{equation}
\label{eq-MeanVal}
\langle\langle \bar z \bar z^* \rangle\rangle=
\frac{1}{2}\Bigl[ \langle x^2 \rangle + \langle y^2 \rangle +
\langle x \rangle^2 + \langle y \rangle^2\Bigr],
\end{equation}
where for any observable $A$
\begin{equation}
\label{eq-MeanPair}
\langle\langle A\rangle\rangle=\int d^2z_1 d^2z_2 A |\Psi_{\mathrm{pair}}|^2 ,
\end{equation}
and
\begin{equation}
\label{eq-MeanSP}
\langle A\rangle=\int dx dy A |\psi|^2,
\end{equation}
(cf.\ Eq.\ (\ref{eq-norm})). Obviously, $\sqrt{\langle\langle \bar z \bar z^* \rangle\rangle}$
is a sound measure of the time-dependent extension of the system. In the dimensionless
variables (\ref{eq-change_var}), it reads
\begin{equation}
\label{eq-MeanSPreduced}
R(\tau)= \frac{1}{\sqrt{2}}\Bigl[ \langle X^2 \rangle_u + \langle Y^2 \rangle_u +
\langle X \rangle_u ^2 + \langle Y \rangle_u ^2\Bigr]^{\frac{1}{2}},
\end{equation}
where
\begin{equation}
\label{eq-MeanSPred}
\langle A\rangle_u=\frac{1}{{\cal N}} \int dX dY A |u|^2 ,
\end{equation}
(cf.\ Eq.\ (\ref{eq-normu})).
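Evaluated on a Cartesian grid, the observable $R(\tau)$ amounts to the
following short sketch, where a centred Gaussian placeholder stands in
for the actual solution $u$ at one instant:
\begin{verbatim}
import numpy as np

L, n = 12.0, 256
x = np.linspace(-L / 2, L / 2, n)
dX = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2) / 4.0)         # placeholder for u(X, Y)

w = np.abs(u)**2
norm = w.sum() * dX * dX                 # the "norm" N of Eq. (eq-normu)
mean = lambda A: (A * w).sum() * dX * dX / norm
R = np.sqrt(0.5 * (mean(X**2) + mean(Y**2) + mean(X)**2 + mean(Y)**2))
print(R)   # ~1.0 for this unit-variance Gaussian density
\end{verbatim}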
The solution of system (\ref{eq-Schroe_vraie}-\ref{eq-Poisson_vraie}) demands the
initial profile $\psi(x,y,0)$. To this end we use the radially symmetric ground
state of the time-independent system. The Poisson equation (\ref{eq-Poisson_vraie})
is two-dimensional here and would thus produce a logarithmic Green function for
homogeneous space instead of the three-dimensional $1/r$ form that we should be using,
since the electric field cannot be confined to 2D even though the electrons can be.
But we do accept this discrepancy for three reasons. First, the asymptotic behavior at
$r\sim 0$ is not so dissimilar, though the logarithm represents a slightly softer repulsion.
Second, the long-range behavior will not carry
much weight due to the parabolic confinement potential (\ref{Vpar}). Third, and most
important, the SP system (\ref{eq-Schroe_vraie}-\ref{eq-Poisson_vraie}) can be solved
directly to obtain a nonlinear solution.\cite{Reinisch11:699,Reinisch12:902}
Generally, for physical mean-field models, which are of course nonlinear, the traditional
method is to seek a solution by iteration. In case of the HA or the DFT model here, the
effective interaction potential is calculated after an initial guess has been made for the
wavefunctions. Then the new wavefunctions are sought by methods from linear algebra, and
the iterations are continued until convergence is reached. The wavefunctions will be
orthonormal. When the SP system is solved directly the wavefunctions are not in general
orthonormal. Besides convenience, the reason for the iteration method is the connection
of the Hartree and Hartree-Fock approximations to higher-order methods in many-body theory,
which can only be established in the case of orthonormal solutions.
The hope is that the iteration method supplies the nonlinear solution in this
sense or a solution very close to it. In fact, the nonlinear solutions are almost
orthonormal with some small discrepancy of the order of $1-5\%$.
In Fig.\ \ref{FFT-Schr-Poisson-nohill} we show the results for the time-evolution of the expectation value
$\langle\langle \bar z \bar z^* \rangle\rangle$ and the corresponding Fourier
transform for the SP model without a central hill in the quantum dot.
\begin{figure}[htbq]
\includegraphics[width=0.46\textwidth,angle=0,bb=123 45 1411 722,clip]{Fig07a.pdf}
\includegraphics[width=0.46\textwidth,angle=0,bb=123 45 1411 722,clip]{Fig07b.pdf}
\caption{(Color online) The time-dependent expectation value of
$\langle r^2 \rangle$ and the corresponding Fourier power spectrum for the
Schr{\"o}dinger-Poisson model of the quantum dot without a central hill. $V_0=0$, $T=0$ K.}
\label{FFT-Schr-Poisson-nohill}
\end{figure}
\begin{figure}[htbq]
\includegraphics[width=0.46\textwidth,angle=0,bb=123 45 1411 722,clip]{Fig08a.pdf}
\includegraphics[width=0.46\textwidth,angle=0,bb=123 45 1411 722,clip]{Fig08b.pdf}
\caption{(Color online) The time-dependent expectation value of
$\langle r^2 \rangle$ and the corresponding Fourier power spectrum for the
Schr{\"o}dinger-Poisson model of the quantum dot with a central hill. $V_0=3.0$ meV, $T=0$ K.}
\label{FFT-Schr-Poisson}
\end{figure}
In Fig.\ \ref{FFT-Schr-Poisson} we display the time-evolution of the expectation value
$\langle\langle \bar z \bar z^* \rangle\rangle$ and the corresponding Fourier
transform for the SP model with a central hill in the dot.
Below, we will compare the location of the main peak
or peaks for low $V_t$ for the different models, but here we notice that the main
peak shows a local minimum around $V_t=40$ meV, a behavior not so different from the
DFT model. After $V_t=60$ meV, however, the peak splits into a complex collection of smaller
peaks. For the system without a central hill (Fig.\ \ref{FFT-Schr-Poisson-nohill})
this disintegration of the main peaks happens earlier, and the resulting smaller
peaks are fewer than in the system with a central hill.
The time-evolution in this essentially nonlinear model is very different from
what is known for linear models. In order to appreciate this fact better we look at a
linear model before we comment further on the time-evolution of the SP model.
\section{Exact time-evolution in a truncated Fock-space}
The CI-version of the model is capable of delivering the time-evolution of a few
Coulomb-interacting electrons in a quantum dot in an external magnetic field.
Here, we will use it for two electrons in the parabolic confinement introduced
earlier (\ref{Vpar}) with the option of the small central hill (\ref{Vc}).
The ground state for a vanishing external magnetic field is calculated in a
truncated two-particle Fock-space. The truncation limits the two-electron Fock-space
to the 16836 lowest states in energy. The Fock-space is constructed from the single-electron
states of the parabolic confinement. The time evolution is again formally given by the
same Liouville-von Neumann equation (\ref{L-vN-dft}) as was used for the mean-field
version of the model, but now the density operator is a two-electron operator that
is expressed in the Fock-space for the interacting two electrons.
The main difference here is that the Hamiltonian of the system is only time-dependent
as long as the initial perturbation (\ref{Wt}) is switched on. The Coulomb part of
the Hamiltonian is always time-independent and no iterations are necessary within
each time step in order to attain convergence for the interaction, as in the case
of the DFT-model.
The penalty of this approach is instead the size of the matrices
needed for the calculation, but we have used two important technical measures in order to
attain the time-evolution up to 100 ps. First, we tested for the present parameters
how much we could reduce the Fock-space for the time-integration of the time-evolution
operator (\ref{Teq}). The states that contribute to the density matrix for $V_t=200$ meV
with a weight larger than $10^{-5}$ number fewer than 2415, so in the time-integration
we further truncate the Fock-space to that size. We recall that these 2415 interacting
two-electron states were initially calculated using 16836 noninteracting two-electron
states. Still the matrices are considerably larger
than in the DFT-case, so we then rewrote the time-integration to run on powerful
GPU's.\cite{Siro20121884} Furthermore, we tried two different
methods for the time-integration: in one we refer the time-evolution operator
to the initial time $t=0$, and in the other we refer it only to the
previous time step and accumulate the time-evolution in the density matrix.
We selected a time-step small enough for the methods to give the same
results.
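Schematically, the two bookkeeping schemes can be summarized as follows,
with a generic one-step propagator standing in for the actual
Crank-Nicolson step of the truncated two-electron Hamiltonian; for a
sufficiently small time-step both yield the same density matrix.
\begin{verbatim}
import numpy as np

def evolve_A(rho0, U_steps):             # refer T(t) to t = 0
    T = np.eye(rho0.shape[0], dtype=complex)
    for U in U_steps:
        T = U @ T                        # T(t_n) = U_n ... U_1
    return T @ rho0 @ T.conj().T

def evolve_B(rho0, U_steps):             # accumulate in rho step by step
    rho = rho0.astype(complex)
    for U in U_steps:
        rho = U @ rho @ U.conj().T       # rho(t_n) from rho(t_{n-1})
    return rho

rng = np.random.default_rng(2)
H = rng.standard_normal((4, 4)); H = 0.5 * (H + H.T)   # toy Hamiltonian
U = np.linalg.solve(np.eye(4) + 0.05j * H, np.eye(4) - 0.05j * H)
rho0 = np.diag([1.0, 0.0, 0.0, 0.0])
print(np.allclose(evolve_A(rho0, [U] * 50), evolve_B(rho0, [U] * 50)))
\end{verbatim}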
After the initial perturbation pulse (\ref{Wt}) dies out, nothing in the
Hamiltonian depends explicitly on time, and therefore the diagonal elements of the
density matrix, the occupations of the interacting two-electron states, stay constant.
In Figures \ref{FFT-exact} and \ref{FFT-exact-occ} we show the Fourier power spectrum for the collective
oscillations of the model expressed in terms of the expectation value $\langle r^2\rangle$,
together with the time-independent occupation of each interacting two-electron state participating
in the collective oscillations. Here, we have a pure
parabolic confinement without a central hill.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig09a.pdf}
\includegraphics[width=0.42\textwidth,angle=0,bb=22 13 315 189,clip]{Fig09b.pdf}
\caption{(Color online) The Fourier power spectrum for the time-dependent
expectation value of $\langle r^2\rangle$ for the CI model
without a central hill. The lower panel focuses in on the
energy axis close to resonances. $V_0=0$, $T=0$ K.}
\label{FFT-exact}
\end{figure}
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig10a.pdf}
\includegraphics[width=0.42\textwidth,angle=0]{Fig10b.pdf}
\caption{(Color online) The Fourier power spectrum for the time-dependent
expectation value of $\langle r^2\rangle$ for the CI model
without a central hill (upper panel). The time-independent occupation of the
interacting two-electron states $|\alpha )$ after the perturbation pulse has
vanished (lower panel). $V_0=0$, $T=0$ K.}
\label{FFT-exact-occ}
\end{figure}
The logarithmic scale for the occupation in the lower panel of Fig.\ \ref{FFT-exact-occ}
hides the fact that for $V_t=200$ meV the occupation of the ground state has fallen to
77\%. This is another measure of the strength of the excitation.
The results for the quantum dot with a central hill (\ref{Vc}) added are shown in
Figures \ref{FFT-exact-hill} and \ref{FFT-exact-hill-occ}.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig11a.pdf}
\includegraphics[width=0.42\textwidth,angle=0,bb=22 13 315 189,clip]{Fig11b.pdf}
\caption{(Color online) The Fourier power spectrum for the time-dependent
expectation value of $\langle r^2\rangle$ for the CI model
with a central hill. The lower panel focuses in on the
energy axis close to resonances. $V_0=3.0$ meV, $T=0$ K.}
\label{FFT-exact-hill}
\end{figure}
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig12a.pdf}
\includegraphics[width=0.42\textwidth,angle=0]{Fig12b.pdf}
\caption{(Color online) The Fourier power spectrum for the time-dependent
expectation value of $\langle r^2\rangle$ for the CI model
with a central hill (upper panel). The time-independent occupation of the
interacting two-electron states $|\alpha )$ after the perturbation pulse has
vanished (lower panel). $V_0=3.0$ meV, $T=0$ K.}
\label{FFT-exact-hill-occ}
\end{figure}
The main surprise for the exact results is that we do not find any local minimum for
$V_t\approx 35-40$ meV. Indeed, the main peak found in the exact results shows behavior
closer to the results of the HA if we consider only its height.
There are more peaks visible in the exact
results and that is reminiscent of the comparison in the linear response regime for the
exact and the Hartree-Fock approach.\cite{Pfannkuche94:1221} One might of course worry
about the possibility that the DFT-model could not predict the time-evolution properly
or could not describe the excited states correctly,
if it got stuck in some local minimum instead of a global minimum. We have tried to
exclude this possibility by performing the DFT-calculation at higher temperatures,
$T=1.0$ and $4.0$ K. In both cases a minimum around $V_t\approx 35-40$ meV is found.
In addition, we have varied the minimum-seeking procedure, but in vain; the minimum always
reappears.
The DFT-approach can be criticized for our use of a static functional instead of a
more appropriate frequency-dependent one, especially since we are using it to
describe a collective oscillation in the system. We have no good excuse for this, but
interestingly enough the DFT-model can reproduce the extended Kohn theorem valid
for parabolic confinement for $|N_p|=1$ with ease. The same test has of course been
used with success both for the exact CI-model and the Hartree-version of the DFT-model.
In contrast to the CI-model, finding the ground state of the DFT model without
a central hill is a very time-consuming and difficult affair. This behavior has to
be related to the fact that the presence of the central hill (\ref{Vc}) reduces the
importance of the Coulomb interaction. In some sense this is also true for the
nonphysical self-interaction in the Hartree-version of the DFT-model.
A corresponding reduction of the importance of the Coulomb interaction in the case of the
CI-model can possibly be seen in the lower panel of Fig.\ \ref{FFT-exact-occ} and
Fig.\ \ref{FFT-exact-hill-occ} for the occupation of the two-electron states caused
by the initial perturbation. The energy spectra for the 100 lowest interacting two-electron
states are compared in the upper panel of Fig.\ \ref{Exact-E}. Besides the general behavior
of the central hill (\ref{Vc}) to increase the energy of each state we see a partial
lifting of degeneracy.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig13a.pdf}
\includegraphics[width=0.42\textwidth,angle=0]{Fig13b.pdf}
\caption{(Color online) The interacting two-electron spectra versus the
state number $\mu$ (upper panel), and total energy
versus the excitation strength $V_t$ (lower panel) compared for the exact model for the
system with ($V_0=3.0$ meV) and without ($V_0=0$) a central hill, $T=0$ K.}
\label{Exact-E}
\end{figure}
We are here dealing with a nonlinear response of the system, as can be verified by looking at the
expectation value for the total energy of the system described by the CI-model, after the
excitation pulse has vanished, shown in Fig.\ \ref{Exact-E}. The excitation pulse pumps a
finite amount of energy into the system. This is important when interpreting
the occupation of the interacting two-electron states in the system displayed in the lower
panels of Fig.\ \ref{FFT-exact-occ} and \ref{FFT-exact-hill-occ}. If we look at the system without
a central hill, Fig.\ \ref{FFT-exact-occ}, we see that the ground state $|1)$ is occupied
with probability close to 1, and for low excitation, $V_t$, the next state is $|24)$ and for
higher $V_t$ state $|26)$ competes with $|24)$. If we check the energy differences we find
$E_{24}-E_1=6.139$ meV, and $E_{26}-E_1=6.746$ meV, which indeed fit with the main peak seen
and a side peak appearing for higher $V_t$ in Fig.\ \ref{FFT-exact}.
For the case of a central hill in the system we find that again state $|24)$ has the next highest
occupation, but now for the whole $V_t$ range. Next comes state $|33)$ for low values of $V_t$.
Indeed, we get $E_{24}-E_1=5.698$ meV, and $E_{33}-E_1=6.472$ meV, which again fits very well with
the location of the peaks in Fig.\ \ref{FFT-exact-hill}. The graphs of the occupation of the
interacting two-electron states $|\alpha )$ are thus indicating which states are being occupied
as a result of the excitation of the system. We have verified that states $|24)$ and $|26)$ for the
system without a central hill and states $|24)$ and $|33)$ for the system with one, all have
a total angular momentum $\hbar{\cal M}=\hbar (M_1+M_2)=0$, where $M_i$ is the quantum number for
angular momentum of electron $i$. As was noted earlier,\cite{Pfannkuche93:2244} the CI-model
allows for contributions to an ${\cal M}=0$ state from two single-electron states with angular
momenta $\pm\hbar M$, a combination that is not possible in a HA with circular
symmetry.\cite{Pfannkuche93:2244}
In Fig.\ \ref{FFT-MaxMin} we compare the Fourier power spectra for
$V_t=10$ meV and $V_t=200$ meV in the case of the system with a central hill
and without one, but here we have taken an extra long time-series, integrating the
equations of motion for 1000 ps instead of the 100 ps we have used for the CI-model
above.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig14.pdf}
\caption{(Color online) The Fourier power spectra compared for $V_t=10$ meV and
$V_t=200$ meV for the system without a central hill (a), and with a
central hill (b). $T=0$ K.}
\label{FFT-MaxMin}
\end{figure}
As could be expected for a linear model, the peaks visible at low excitation are still
present with unchanged frequency for strong excitation, but the strong excitation activates
several more peaks. It is also clear that the presence of the central hill (Fig.\ \ref{FFT-MaxMin}(b))
shifts the frequency of the main peaks and allows for the excitation of many more.
The central hill does break a special symmetry imposed by the parabolic confinement that
the Coulomb interaction alone does not break.
\section{Time-evolution of a Hubbard model}
Above, we have introduced mean-field theoretical models and a many-electron model
that is solved exactly in a truncated Fock-space for two electrons to describe the
strong radial excitation
of electrons in a quantum dot. These models all appear in different studies of linear
response of quantum dots. The mean-field models tend, though, due to their nature, to be
used for dots with a higher number of electrons. The nonlinear SP-model can, though, be
considered as an attempt to create a version of a mean-field approach fit for two
electrons. Out of curiosity we add the last model, the Hubbard model, a many-electron
model that has not often been applied to describe the electrons in a single parabolically
confined quantum dot.
The Hamiltonian for the electrons in a quantum dot described by the Hubbard model is
\begin{equation}
H = H_{\mathrm{int}} + H_{\mathrm{hop}} + H_{\mathrm{V}},
\label{Hub-H}
\end{equation}
where the Coulomb interaction between the electrons is described by a
spin-dependent contact interaction
\begin{equation}
H_{\mathrm{int}} = U \sum_{i=1}^N n_{i,\downarrow} n_{i,\uparrow},
\label{Hub-int}
\end{equation}
and the hopping part has the form
%
\begin{equation}
H_{\mathrm{hop}} = -t \sum_{\sigma=\downarrow,\uparrow} \sum_{\langle i,j\rangle}
c_{i,\sigma}^\dagger c_{j,\sigma} +h.c.,
\label{Hub-hop}
\end{equation}
where $\langle i,j\rangle$ denotes a summation over the neighboring sites.
The model is written in terms of the creation $c_{i,\sigma}^\dagger$, the destruction $c_{i,\sigma}$,
and the number operator $n_{i,\sigma}$ for electrons with spin $\sigma$ on site $i$.
The potential part
\begin{equation}
H_\mathrm{V} = \sum_{\sigma=\downarrow,\uparrow} \sum_{i=1}^N V(r_i) n_{i,\sigma},
\label{Hub-Vpar}
\end{equation}
includes the parabolic potential (\ref{Vpar}) and possibly the central small hill (\ref{Vc}).
We set the Hubbard model on a small square lattice with a total of $N$ sites.
We use the numbering of the states in the Fock-space suggested by Siro and Harju.\cite{Siro20121884}
The height and the width of the lattice are fixed in terms of the characteristic length scale for
the parabolic confinement to be $6a$. The lattice constant is then $a_{\mathrm{latt}}=6a/(\sqrt{N}-1)$
and the hopping constant is $t=\hbar^2/(2m^*a_\mathrm{latt}^2)$. The value for the strength of the
Coulomb interaction is not so straightforward to find, but we fix the value of $U$ such that the
energy of the ground state of the system is in accordance with the value found in the exact model.
We keep in mind that there will always be a difference in the many-body energy spectrum of these
two models, due to the different treatment of the Coulomb interaction and the finite square lattice
that is bound to break the angular symmetry of the original model, but we want to see if we can
identify some many-electron character in the excitation response.
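For orientation, the single-particle part of this setup (hopping plus
confinement, without the contact interaction $U$ and without the
two-electron Fock-space) can be written down in a few lines; the
numerical values follow from the GaAs parameters quoted earlier.
\begin{verbatim}
import numpy as np

hbar2_2m = 38.1 / 0.067                # hbar^2/(2 m*), meV nm^2
hw0 = 3.37                             # meV
a = np.sqrt(2.0 * hbar2_2m / hw0)      # oscillator length, ~18.4 nm
n = 5                                  # lattice is n x n, side 6a
a_latt = 6.0 * a / (n - 1)
t_hop = hbar2_2m / a_latt**2           # hopping constant, ~0.75 meV

x = a_latt * (np.arange(n) - (n - 1) / 2.0)
X, Y = np.meshgrid(x, x, indexing="ij")
V = 0.5 * hw0 * (X**2 + Y**2) / a**2   # parabolic confinement, meV

H = np.diag(V.ravel())
idx = np.arange(n * n).reshape(n, n)
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1)):        # nearest neighbours
            if i + di < n and j + dj < n:
                p, q = idx[i, j], idx[i + di, j + dj]
                H[p, q] = H[q, p] = -t_hop

print(t_hop, np.linalg.eigvalsh(H)[:3])        # lowest levels (meV)
\end{verbatim}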
The parabolic confinement of the electrons spreads out the energy spectrum of the Hubbard model
that otherwise is extremely dense, and thus we can use the same approach as for the exact many-body
model to solve it exactly within a truncated Fock-space. We performed this on GPU's for a
$5\times 5$ lattice. The results for the Fourier transform of the
time-dependent oscillations in $\langle r^2\rangle$
are shown in Fig.\ \ref{FFT-hubbard} for the pure parabolic confinement, and in Fig.\ \ref{FFT-hubbard-hill}
for the model with a small central hill (\ref{Vc}).
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig15a.pdf}
\includegraphics[width=0.42\textwidth,angle=0]{Fig15b.pdf}
\caption{(Color online) The Fourier power spectrum of the expectation value of
$\langle r^2\rangle$ (upper panel), and the occupation of the interacting
two-electron states (lower panel) for the Hubbard model without
a central hill. $V_0=0$, $T=0$ K.}
\label{FFT-hubbard}
\end{figure}
The time-evolution of the system is calculated in the same way as for
the CI model, using the time-evolution operators presented above (\ref{Teq})
in an interacting two-electron basis.
\begin{figure}[htbq]
\includegraphics[width=0.42\textwidth,angle=0]{Fig16a.pdf}
\includegraphics[width=0.42\textwidth,angle=0]{Fig16b.pdf}
\caption{(Color online) The Fourier power spectrum of the expectation value of
$\langle r^2\rangle$ (upper panel), and the occupation of the interacting
two-electron states (lower panel) for the Hubbard model with
a central hill. $V_0=3.0$ meV, $T=0$ K.}
\label{FFT-hubbard-hill}
\end{figure}
The square symmetry of the lattice can be expected to produce deviations
that should already be present in the excitation spectrum for low
excitation.\cite{PhysRevB.60.16591} We have tested the Hubbard model for
dipole-active excitation modes, $N_p=\pm 1$, to verify this.
The main peak (the lowest excitation) is indeed split for the Hubbard model
in Figures \ref{FFT-hubbard} and \ref{FFT-hubbard-hill},
and the modes at higher energy only appear for a stronger excitation,
i.e.\ a higher value of $V_t$. By looking at the lower panels in Figures
\ref{FFT-hubbard} and \ref{FFT-hubbard-hill} we see again that in the
system without a central hill (Fig.\ \ref{FFT-hubbard}) more modes become active as
the excitation grows. This is in accordance with our observation for the CI model.
We have to admit that on the small lattice chosen, the energy of the lowest mode
is a bit higher than all the other models predict, even though we have chosen the
interaction strength $U$ to give a ground-state energy similar to that of the
CI model. We do not use the Hubbard model for higher excitation than
$V_t=100$ meV to avoid artifacts created by the finite size of the model.
\section{Comparison of model results and discussion}
For the quantum dot with no central hill present ($V_0=0$) at weak excitation,
$V_t\sim 0$, all the models deliver one main peak that grows linearly with the
excitation strength. As we have seen, the Hubbard model, due to the square symmetry
imposed by the underlying lattice, has two peaks,\cite{PhysRevB.60.16591} and in the case of the CI model
we see a small side peak on the ``blue'' side, reminiscent of known results for the
$N_p=\pm 1$ modes.\cite{Pfannkuche94:1221}
The location of the main peak in the case of the SP model is redshifted by an amount
slightly surpassing 0.5 meV. This must be attributed to the slightly weaker repulsion
of the electrons, which has a logarithmic singularity in the case of the SP model instead
of the 3D Coulomb form used in the other mean-field approximations and in the CI model.
With the small central hill in the quantum dot, the location of the main peak in the DFT
model is blueshifted by 0.2 meV compared to the CI model, and the same analysis gives
a blueshift of 0.3 meV for the HA. Calculations of the ground state for the CI, the SP,
and the DFT model all give similar energies, but the HA gives results far off.\cite{Pfannkuche93:2244}
In the CI, the SP, and the Hubbard models the inclusion of a small central potential
hill in the confinement potential of the quantum dot causes more collective modes to
be activated with increasing excitation $V_t$. At the same time the lower panels of
Figures \ref{FFT-exact-occ}, \ref{FFT-exact-hill-occ}, \ref{FFT-hubbard}, and
\ref{FFT-hubbard-hill}, displaying the occupation of interacting two-electron states,
indicate a slight simplification caused by the central hill, at least
for some range of $V_t$. Amazingly, in the HA only one peak for the collective
oscillations is seen for the whole range of excitation strength we try.
This is probably caused by the artificial self-interaction, which is especially
large for two electrons described with the HA. On the other hand, the dependence
of the height of the main peak on $V_t$ for the Fourier transform of the expectation value
$\langle r^2\rangle$ for the HA is very close to the results for the main peak for the
CI model.
The finite occupation of higher energy states together with the
increase of the mean total energy seen in the lower panel of Fig.\ \ref{Exact-E}
shows that we have left the linear response regime with increasing $V_t$.
This fact is further demonstrated by the nonlinear growth of the height of the Fourier peak for
the expectation value $\langle r^2\rangle$ for all models with increasing $V_t$ beyond
the linear regime for low $V_t$. In addition, we notice that for low $V_t$ the
electrons in the quantum dot oscillate with $\langle r^2\rangle$ very close to the
ground-state value. As the excitation is increased, energy is pumped into the quantum
dot and it increases in size.
We have identified nonlinear behavior in all the models when observing how the
amplitude of the oscillations of $\langle r^2\rangle$ behaves as a function of
the excitation strength $V_t$ once we leave the linear response regime valid
for very low excitation. The CI model is a purely linear model. All the possible
excited states for the CI model are calculated before the time-integration of the
system is started. This is clearly demonstrated in Fig.\ \ref{FFT-MaxMin} where the
excitations are compared for weak and strong excitation. The main peak at low $V_t$ is
still visible in the excitation spectrum for large $V_t$, at exactly the same energy.
Stronger excitation activates higher-lying collective modes, and even in this simple
system there are very many of them available.
The time-evolution for the mean-field models has to be viewed in different terms.
In the case of the DFT or the Hartree model, information about the two-electron excitation
spectrum does not exist before the time-integration is started. The effective potential
changes in each time-step and the occupation of effective single-electron states becomes
time-dependent, see Fig.\ \ref{Occupation-T}.
The effective potential (or equivalently, the density, or the density operator here) has to
be found by iterations in each time-step in order to include the effects of the Coulomb
interaction. Within each iteration the problem is treated as a linear one. In the case of the
SP model, the nonlinear solution for the ground state is sought directly without iteration,
and the same is true for the time-dependent solutions. The time-evolution of the SP model
is thus nontrivial and could in principle bring forward phenomena that could be blocked by
the linear solution requirement within each iteration step for the other mean-field models,
especially in a long time series where small effects from this methodology gathered in each
time-step might sum up.
Within the range for $V_t$ considered here the HA brings results that look very stable,
one peak with no frequency shift as $V_t$ increases, but with a slight nonlinear behavior for
the amplitude of the oscillations of $\langle r^2\rangle$. The SP and the DFT models bring
similar results for $V_t\leq 60$ meV with a local minimum for the amplitude of the oscillations
of $\langle r^2\rangle$. For larger values of $V_t$ the SP model brings a plethora of collective
oscillations, and for the DFT model it becomes too difficult to stabilize a solution for a longer
time interval. It should be kept in mind that also the CI model, especially for the case of no
central hill, shows an increased number of active modes, but only for much stronger excitation and
in a more ``controlled'' way. The different characteristics, or nuances, of the nonlinear
properties of the mean-field models may be influencing their response to a strong excitation
here in a fundamentally different way than in the linear CI model.
It should be stated once more that extreme care has been taken in verifying and
testing our numerical results by comparing different numerical methods, models, and
variations of the sizes and types of functional spaces and grids.
\section{Conclusion}
The modeling of nonlinear response of confined quantum systems on the nanoscale
is in its infancy, but may bring new insight into the systems as the measuring,
processing, and growth techniques evolve, opening up the field. For systems with many
particles we most likely will have to rely on mean-field and DFT models, and
only for systems of few particles can we expect to be able to rely on CI models.
In anticipation of this we have studied here how some of these models fare
in describing the nonlinear response of a two-electron system.
We expect the CI-model to deliver numerically exact results against which
the results of the other models can be compared. The results of our implementation
of a DFT-model do not compare well when leaving the linear response regime.
This is not totally unexpected as we have not used any time-dependent functionals.
In addition, the numerical time-integration of the DFT model is difficult to guarantee
for strong excitation and long times. The Hartree model is easier to use, and its overall
qualitative nonlinear response is in accordance with the CI model, except for the fine
structure of side peaks visible in the CI model. A similar comparison has been seen
earlier in linear response.\cite{Pfannkuche94:1221} The results of the coarse-lattice Hubbard
model deviate quantitatively from the CI-results, but the qualitative behavior is similar,
side peaks and occupation of higher modes with increased excitation.
Regarding the emergence of nonlinear effects the comparison to the SP model is
valuable. In a mean-field model, or a local approximation to a DFT theory, the results are
usually obtained by iterations, and most often there is a condition that the underlying
linear basis is orthogonal. In calculations of molecules this condition is sometimes relaxed,
but most often it is used to guarantee a connection to higher order many-body methods.
This is not done in the SP model. There the nonlinear solution is found directly and
the resulting states are not orthogonal. Looking at our results we see that this
essential nonlinearity does strongly affect the solution of the SP model beyond some
excitation strength. These effects, emergence of many new excitation modes, splitting of
modes, is not seen in any of the other models. So, even if the mean-field and the DFT
models are nonlinear, then the iteration procedure in a linear functional space does
protect them from this mode splitting and multiplication. As stated in the previous
section the nonlinear behavior seen from the CI results is much more modest and probably
only results from the ``shape'' of the many-body energy spectrum that can be reached with
increasing excitation.
In the end, all these points only stress how exciting and important experimental
investigations of this nonlinear regime will be.
\begin{acknowledgments}
This work was supported by the Research Fund of the University of Iceland,
the Icelandic Research and Instruments Funds, and a Special Initiative for Students of the
Directorate of Labour. Some of the calculations
were performed on resources provided by the Nordic High Performance Computing (NHPC).
C.\ Besse and G.\ Dujardin are partially supported by the French
programme Labex CEMPI (ANR-11-LABX-0007-01).
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
The first gravitational wave (GW) detection of a binary neutron star (NS) merger, GW$\,$170817 \citep{Abbott+17-GW170817-Ligo-Detection},
was accompanied by the first electromagnetic counterpart to any GW detection -- the weak, short duration gamma-ray burst,
GRB$\,$170817A \citep{Abbott+17-GW170817-GRB170817A}, that
originated in the nearby ($D\approx40\,$Mpc) elliptical galaxy NGC 4993 \citep{Coulter+17}. An impressive observational campaign detected the
quasi-thermal kilonova emission
in the NIR-optical-UV energy
bands over the next few weeks \citep[see, e.g.,][and references therein]{Abbott+17-GW170817A-MMO}.
The non-thermal afterglow emission was detected after $8.9\,$days in X-rays \citep{Troja+17} and
after $16.4\,$days in the radio \citep{Hallinan+17}.
GW$\,$170817/GRB$\,$170817A's long-lived X-ray to radio afterglow emission was highly unusual. In contrast to the
flux decay seen in almost all GRB afterglows, it showed an exceptionally long-lasting
flux rise, as $F_\nu(t_{\rm{obs}})\propto\nu^{-0.6}t_{\rm{obs}}^{0.8}$, up to the peak at $t_{\rm{obs,pk}}\sim 150\,$days post merger
\citep[e.g.][]{Margutti+18,Mooley+18a}, followed by a sharp decay as
$F_\nu\propto{}t_{\rm{obs}}^{a}$ where $a\simeq-2.2$ \citep{Mooley+18b,vanEerten+18}.
The broadband (X-rays, radio, and
late-time optical) afterglow emission is consistent with arising from a single power-law
segment (PLS) of the afterglow synchrotron spectrum, $\nu_m\leq\nu\leq\nu_c$.\footnote{Here $\nu_m$
is the synchrotron frequency of minimal energy electrons and
$\nu_c$ of electrons that cool on the dynamical time \citep{Sari+98}.}
Almost all successful off-axis jet models for this afterglow have an angular profile that is
either a (quasi-) Gaussian or a narrow core with sharp power-law wings
\citep{Lamb-Kobayashi-18,Lazzati+18,Troja+17,DAvanzo+18,Gill-Granot-18b,Margutti+18,Resmi+18,Troja+18}.
Moreover, several works have argued that a top-hat jet can be ruled out
\citep[e.g.,][]{Margutti+18,Mooley+18a} since it would produce a very sharp initial flux rise ($F_\nu\propto{}t_{\rm{obs}}^{a}$ with
$a\gtrsim3$) compared to the observed one.
Such a sharp initial flux rise was obtained both
numerically from 2D hydrodynamic simulations \citep[e.g.,][]{vanEerten-MacFadyen-11,Granot+18b}, and analytically
assuming an idealized top-hat jet
\citep[e.g.,][]{Granot+02,Eichler-Granot-06,Nakar-Piran-18}.
Here we show that while an idealized top-hat jet would indeed produce sharply rising early lightcurves for off-axis observers, a more realistic description of the dynamics (using numerical
simulations) for an initially top-hat jet leads to a much shallower flux rise that can explain the GRB$\,$170817A afterglow observations (lightcurves, flux centroid motion, and upper limits on the image size).
The main difference arises since within the simulation's first dynamical time an
initial top-hat jet develops a bow-shock like angular structure, which produces afterglow emission resembling
that from a core-dominated structured jet,\footnote{\red{I.e. a jet in which most of the energy resides within a narrow core, outside of which the energy per solid angle sharply drops.}} with a much shallower flux rise, making the two models
practically indistinguishable \red{at $t_{\rm{}obs}\gtrsim{}t_{\rm{obs,pk}}$, and not always that easy to distinguish between even at earlier times}.
Numerical simulations have a finite lab-frame start time, $t=t_0>0$, thus missing contributions to $F_\nu$ from $t<t_0$. This is often compensated for by adding emission at $t<t_0$ from a conical wedge from the \citet[][hereafter BM76]{Blandford-McKee-76} spherical self-similar solution \citep[e.g.,][]{vanEerten+12,DeColle+12a,DeColle+12b,Bietenholz+14,Granot+18a,Granot+18b}. This still results in an unphysically sharp flux rise at early observed times, $t_{\rm{obs}}\lesssim2t_{\rm{obs},0}$, corresponding to lab-frame times $t\lesssim2t_0$.
The effects of $t_0$ including $t_{\rm{obs},0}(\theta_{\rm{obs}},t_0)$ are analytically explained in \S~\ref{sec:t_start}.
The effect of starting the simulations with a larger Lorentz factor (LF) $\Gamma_0=\Gamma(t_0)$ and
correspondingly smaller $t_0$ is shown in \S~\ref{sec:sim} through 2D relativistic hydrodynamic
simulations.
In \S~\ref{sec:scalings} model scalings and the minimal energy and circum-burst medium density estimates are provided.
In \S~\ref{sec:FC-image-size} we calculate and compare the flux centroid location and the image size and shape
with radio afterglow measurements of GW$\,$170817/GRB$\,$170817A. Our conclusions are discussed in \S~\ref{sec:dis}.
\section{The Effect of Simulation Start Time}
\label{sec:t_start}
We perform 2D relativistic hydrodynamical simulations with initial conditions of a conical wedge
of half-opening angle $\theta_0$ taken out of the BM76 solution. This initially narrow and relativistic
jet expands into a cold circum-burst medium (CBM) with a power-law rest-mass density profile with
radius $R$ from the central source, $\rho(R)=AR^{-k}$, where for uniform (wind-like) density environment
$k=0$ $(k=2)$. The BM76 spherical self-similar phase occurs after the original outflow is significantly
decelerated
and most of the energy is in the
shocked CBM behind the forward (afterglow) shock. The material just behind the shock moves with velocity $\beta c$,
with $c$ being the speed of light, and bulk LF $\Gamma=(1-\beta^2)^{-1/2}=\Gamma_{\rm{shock}}/\sqrt{2}$.
The BM76 phase reasonably holds for a top-hat jet while $\Gamma>1/\theta_0$ (assuming $\Gamma_0\theta_0\gg1$,
as typically inferred for GRBs) before significant lateral spreading can occur.
The radial width behind the forward shock containing most of the blastwave's energy is $\Delta\sim0.1\,R/\Gamma^2$.
During the BM76 self-similar phase $\Gamma^2R^{3-k}=\Gamma_0^2R_0^{3-k}=(17-4k)E_{\rm{k,iso}}/16\pi{}Ac^2={\rm{const}}$,
with $R_0=R(t_0)\approx{}ct_0$ being the initial shock radius. Thus the initial radial width $\Delta_0=\Delta(t_0)\sim0.1R_0/\Gamma_0^2\propto{}R_0^{4-k}\propto\Gamma_0^{-2(4-k)/(3-k)}$ ($\propto\Gamma_0^{-8/3}$ for $k=0$)
becomes much narrower and harder to resolve for larger $\Gamma_0$ or correspondingly smaller $t_0\approx{}R_0/c\propto\Gamma_0^{-2/(3-k)}$
($\propto\Gamma_0^{-2/3}$ for $k=0$). This practically limits $\Gamma_0$ from above and $t_0$ from below.
An on-axis observer ($\theta_{\rm{}obs}<\theta_0$) receives the first photons from the simulation after a radial time delay of
\begin{equation}\label{eq:tobs,r}
\frac{t_{\rm{obs},r}}{(1+z)}=t_0-\frac{R_0}{c}\approx\frac{R_0}{4(4-k)c\Gamma_0^2}\approx\frac{t_0}{4(4-k)\Gamma_0^2}~,
\end{equation}
$z$ being the source's cosmological redshift.
For an off-axis observer ($\Delta\theta\equiv\theta_{\rm{}obs}-\theta_0>0$), there is an additional angular time delay,
\begin{eqnarray}\nonumber
\frac{t_{\rm{obs},\theta}}{(1+z)}&=&\frac{R_0}{c}[1-\cos(\Delta\theta)]\approx\frac{\Delta\theta^2}{2}t_0\\~\label{eq:tobs0}
&\approx&\frac{\Delta\theta^2}{2}\left[\frac{(17-4k)E_{\rm{k,iso}}}{16\pi{}Ac^{5-k}\Gamma_0^2}\right]^\frac{1}{3-k}~,
\end{eqnarray}
\citep[e.g.,][]{Granot+17}, which dominates the total time delay
$t_{\rm{obs},0}=t_{\rm{obs},r}+t_{\rm{obs},\theta}\approx{}t_{\rm{obs},\theta}$ for $\Delta\theta>1/\Gamma_0$. For such off-axis viewing angles one can conveniently express $\Gamma_0\propto{}t_{\rm{obs},0}^{-(3-k)/2}$, which for $k=0$, $E_{\rm{k,iso}}\approx(2/\theta_0^2)E$ and $z\ll1$ gives
\begin{eqnarray}\nonumber
\Gamma_0&\approx&\sqrt{\frac{17E\theta_0^{-2}(\Delta\theta)^6}{64\pi{}nm_pc^{5}t_{\rm{obs},0}^{3}}}\\~\label{eq:Gamma0}
&=&149E_{50.3}^{1/2}n_{-3.6}^{-1/2}\theta_{0,-1}^{-1}\fracb{\Delta\theta}{0.21}^3\fracb{t_{\rm{obs},0}}{10\,\rm{d}}^{-3/2}~,
\end{eqnarray}
where for the numerical value we normalize by our best-fit model parameters derived in \S~\ref{sec:sim}, for which $t_{\rm{obs},0}=38.1,\,23.0,\,18.3\;$days for $\Gamma_0=20,\,40,\,60$.
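As a sanity check, the numerical coefficient in Eq.~(\ref{eq:Gamma0}) is easily verified;
the short Python sketch below evaluates the expression in CGS units for the fiducial
parameter values quoted above.
\begin{verbatim}
# Minimal check of the normalization of Gamma_0 (CGS units).
import numpy as np

c   = 2.998e10                 # speed of light [cm/s]
m_p = 1.673e-24                # proton mass [g]
E   = 10**50.3                 # true jet energy [erg]
n   = 10**-3.6                 # CBM number density [cm^-3]
theta0, dtheta = 0.1, 0.21     # jet half-opening angle, theta_obs-theta_0
t_obs0 = 10 * 86400.0          # 10 days [s]

Gamma0 = np.sqrt(17 * E * theta0**-2 * dtheta**6
                 / (64 * np.pi * n * m_p * c**5 * t_obs0**3))
print(Gamma0)                  # ~149, as quoted in the text
\end{verbatim}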
The compactness argument implies that GRB jets typically have $\Gamma_0\gtrsim100$ for the emission region to be optically thin to $\gamma\gamma$-annihilation \citep[e.g.][]{Lithwick-Sari-01}.
Such large $\Gamma_0$ are very difficult to simulate, and current numerical works usually set $\Gamma_0\sim20-25$ \citep[see, however,][]{vanEerten-MacFadyen-13}.
Simulations initialized at $t_0$
do not contribute any flux at $t_{\rm{obs}}<t_{\rm{obs},0}$ (see Fig.~\ref{fig:components}).
Over the first dynamical time ($t_0<t\lesssim2t_0$), as the simulated jet relaxes from its artificially sharp top-hat initial condition,
the flux sharply rises at times
$t_{\rm{obs},0}\leq{}t_{\rm{obs}}\lesssim2t_{\rm{obs},0}$,
after which the flux evolves smoothly with time. During this relaxation phase, the top-hat jet is slowed down due
to its interaction with the CBM and develops a bow-shock like structure
\citep[e.g.][]{Granot+01,vanEerten-MacFadyen-11,DeColle+12b}.
Its structure at this point resembles a `structured jet' with a highly energetic core, whose velocity is almost radial, surrounded by less energetic
slower-moving material whose velocity points more sideways. Therefore, an initially top-hat jet inevitably transforms
into a structured jet. The slower material at angles
$\theta>\theta_0$ has a much wider beaming cone and its emission starts dominating the off-axis flux.
As the jet gradually decelerates, its beaming cone widens and off-axis observers start to
receive flux from smaller $\theta$ closer to the jet's core, resulting in a more gradual flux rise compared
to an analytic perpetually sharp-edged jet.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{components.pdf}
\caption{
Decomposition of the simulated lightcurve into the synthetic part, obtained from the
initial condition (top-hat jet), and the part obtained from the simulated region
for $t_{\rm{obs}}>t_{\rm{obs,0}}$. A comparison is made with the lightcurve from the BOXFITv2 code ($\Gamma_0=25$)
for the same model parameters (see Fig.~\ref{fig:model-fit}). The extension of both
lightcurves to $t_{\rm{obs}}<t_{\rm{obs,0}}$ matches the analytical flux scaling
for an off-axis relativistic top-hat jet (the slightly shallower slope towards $t_{\rm{obs,0}}$ arises because of its proximity to $t_{\rm{obs,pk}}$). }
\label{fig:components}
\end{figure}
To compensate for the missing flux at $t_{\rm{obs}}<t_{\rm{obs},0}$, as shown in Fig.~\ref{fig:components},
lightcurves derived from numerical simulations
are often supplemented with synthetic lightcurves obtained for the initial conditions \citep[usually a conical wedge from the BM76 self-similar solution, e.g.,][]{vanEerten+12,DeColle+12a,DeColle+12b,Bietenholz+14,Granot+18a,Granot+18b}
over a wide range of earlier lab-frame times, $t_*<t<t_0$ with $t_*\ll{}t_0$.
We also compare the lightcurve obtained from the publicly available afterglow modeling
code BOXFITv2 \citep{vanEerten+12}, which has been widely used to fit afterglow observations of GRB$\,$170817A.
Lightcurves obtained from our numerical simulations are in excellent agreement with that obtained from BOXFITv2.
The observed flux density is given by \citep[e.g.][]{Granot-05,Granot-Ramirez-Ruiz-12}
\begin{equation}\label{eq:Fnu}
F_\nu(t_{\rm obs})=\frac{(1+z)}{4\pi d_L^2(z)}\int{}dt\,\delta_t\int\delta_D^3dL'_{\nu'}\propto\delta_D^3L'_{\nu'}~,
\end{equation}
where $d_L(z)$ is the luminosity distance, the $\delta$-function,
$\delta_t=\delta\left(t-t_{\rm{obs}}/(1+z)-R\tilde\mu/c\right)$, accounts for the photon arrival
times \citep{Granot+99}, $R\tilde\mu=\hat{n}\cdot\vec{R}$ where $\hat{n}$ is the direction to the
observer and $\vec{R}$ is the radius vector (measured from the central source) of each fluid element
having velocity $\vec v=\vec{\beta}c$ and Doppler factor $\delta_D=[\Gamma(1-\hat{n}\cdot\vec{\beta})]^{-1}$.
For radial velocities (e.g. a spherical shell), $\hat{n}\cdot\vec{\beta}=\beta\tilde\mu$ and
$\delta_D\approx2\Gamma/[1+(\Gamma\tilde\theta)^2]$ for $\Gamma\gg1$.
In Eq.~(\ref{eq:Fnu}), $F_\nu\propto\delta_D^{3}L'_{\nu'}$ holds where $L'_{\nu'}$ and $\delta_D$ are those of the part of the source that dominates the observed emission, which for a top-hat jet viewed off-axis is within an angle $\sim\max(\Gamma^{-1},\Delta\theta)$ of the point in the jet closest to the observer (where $\tilde{\theta}\approx\Delta\theta$), occupying a solid angle $\Omega_*\sim\min[\max(\Gamma^{-2},\Delta\theta^2),\theta_0^2]$. During the early flux-rising phase while the radiation is beamed away from the observer ($\Gamma>1/\Delta\theta$), $\Omega_*={\rm{const}}$ and one can use the scalings of $L'_{\nu'}$ for a spherical flow,
$L'_{\nu'}\propto{}R^a\nu'^b\propto{}R^a\delta_D^{-b}$, where the PLS-dependent power-law indices $a$ and $b$ are explicitly calculated in \citet{Granot-05}.
Therefore, $F_\nu\propto\delta_D^{3-b}R^a$ where
\citep[e.g.][]{Salmonson-03,Granot-05} $\delta_D\approx2/\Gamma\Delta\theta^2\propto{}R^{(3-k)/2}\Longrightarrow{}F_\nu\propto{}R^{[2a+(3-k)(3-b)]/2}$.
For GRB$\,$170817A, PLS~G is relevant and $a=[15-9p-2k(3-p)]/4$, $b=(1-p)/2$. From Eq.~(\ref{eq:tobs0}), $t_{\rm{obs}}\propto{}R$ which implies $F_\nu\propto{}t_{\rm{obs}}^{3(5-p)/2}$ for a
uniform CBM ($k=0$).
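For the value $p=2.16$ inferred below (\S~\ref{sec:sim}), this corresponds to
$F_\nu\propto{}t_{\rm{obs}}^{4.26}$, far steeper than the observed
$F_\nu\propto{}t_{\rm{obs}}^{0.8}$ rise of GRB$\,$170817A.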
In Fig.~\ref{fig:components}, we show the extension of the lightcurve to $t_{\rm{obs}}<t_{\rm{obs},0}$,
where we reproduce the analytic flux scaling derived above. It is clear that BOXFITv2 also supplements the lightcurve at early times ($t<t_0\Leftrightarrow{}t_{\rm{obs}}<t_{\rm{obs},0}$) with the flux from a conical wedge out of the BM76 self-similar solution (also used for the initial conditions).
Although BOXFITv2 allows the user to not include this extension in the final lightcurve, many works
indeed do include it, even when fitting to observations.
Either way, the flux at $t_{\rm{obs}}\lesssim2t_{\rm{obs},0}$ is strongly affected by the rather arbitrary simulation start time $t_0$.
Initializing the simulation at a smaller $t_0$ corresponding to a larger $\Gamma_0$ would shift this feature to earlier times and
recover the much shallower flux rise in the lightcurve.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{fit.pdf}
\caption{Comparison of simulated afterglow lightcurves for an initially top-hat jet with observations,
for different $\Gamma_0$ ({\bf\textit{top-left}}), slightly different viewing angles
$\theta_{\rm{obs}}$ ({\bf\textit{top-right}}), different $\theta_0$ ({\bf\textit{bottom-right}}),
and \red{semi-analytic models of different jet structures ({\bf\textit{bottom-left}}; see text for model parameters)}.
Observations in different energy bands (with late-time X-ray data from \citealt{Haggard+2018,Hajela+2019}) are normalized to the corresponding flux density
at $\nu_0=3$~GHz. Upper limits are marked by downward triangles. The simulated-flux deficiency at
$t_{\rm{obs}}\lesssim2t_{\rm obs,0}$ is an artefact of starting the simulation with low $\Gamma_0$ and
at a correspondingly large lab-frame time $t_0$. No simulation flux is available at
$t<t_0\Leftrightarrow t_{\rm{obs}}<t_{\rm{obs},0}$.
}
\label{fig:model-fit}
\end{figure*}
\section{Different $\Gamma_0$ fits to the Afterglow Data of GW$\,$170817/GRB$\,$170817A}
\label{sec:sim}
Here we show results of 2D hydrodynamic simulations using the special-relativistic
hydrodynamics code \emph{Mezcal}, post-processed by a complementary radiation code
\citep[see][for details]{DeColle+12a,DeColle+12b}. The simulations are initialized with a conical wedge of half-opening angle $\theta_0=0.1,\,0.2\;$rad and initial LF $\Gamma_0=20,\,40,\,60$ expanding into a uniform CBM ($k=0$) of rest-mass density
$\rho_0=nm_p$ and number density $n$, $m_p$ being the proton mass.
The outflow has an isotropic-equivalent kinetic energy $E_{\rm{k,iso}}=10^{53}\;$erg,
corresponding to a true jet energy of $E=(1-\cos\theta_0)E_{\rm{k,iso}}\approx5\times10^{50}\;$erg for $\theta_0=0.1$
and $E\approx2\times10^{51}\;$erg for $\theta_0=0.2$.
We consider synchrotron radiation from relativistic electrons that are accelerated at the
afterglow shock to a power-law energy distribution, $dN_e/d\gamma_e\propto\gamma_e^{-p}$
for $\gamma_e>\gamma_m$ with $p=2.16$, which are a fraction $\xi_e$ of all post-shock electrons, and hold a fraction $\epsilon_e=0.1$ of the post-shock internal energy density, where a fraction $\epsilon_B=0.1$ goes to the magnetic field.
The radiation is calculated numerically for a fixed set of model parameters ($E,\,n,\,\epsilon_e,\,\epsilon_B,\,p,\,\theta_0$)
and for a grid of $\theta_{\rm{obs}}$ values. \red{When including the parameter $\xi_e$, the set of model parameters become degenerate,
where the afterglow flux is invariant under the change $E\to E/\xi_e$, $n\to{}n/\xi_e$, $\epsilon_e\to\epsilon_e\xi_e$,
and $\epsilon_B\to\epsilon_B\xi_e$, for $m_e/m_p<\xi_e\leq1$.} We then use the scaling relations described in \citet{Granot-12} for arbitrary values of ($E,\,n$), as well as the scaling with the shock microphysical parameters in each PLS
\citep[Table~2 of][]{Granot-Sari-02}. See \citet{Granot+17} for further details.
\red{There are in total 8 model parameters,
i.e. $E,\,n,\,\epsilon_e,$ $\epsilon_B,\,p,\,\xi_e,\,\theta_0,\,\theta_{\rm obs}$.
There are 5 effective observational constraints: (i)
the spectral index $b\approx-0.58$
($F_\nu\propto\nu^b$; $b=[1-p]/2$ for PLS G, which determines $p=1-2b\approx2.16$), (ii)
the lightcurve peak time $t_{\rm obs,pk}\approx150\,$days, (iii) the peak flux $F_{\nu,\rm pk}$, (iv) the shape of the lightcurve near the peak (which approximately determines $\theta_{\rm{}obs}/\theta_0$), (v) the radio flux centroid's apparent velocity. These 5 constraints involve equalities and reduce the dimensionality of the allowed parameter space from 8D to 3D.
There are also 3 additional constraints that involve inequalities
and hence only reduce its volume but not its dimensionality:
the fact that all the broadband afterglow observations lie within PLS G, $\nu_m<\nu<\nu_c$, and $\theta_{\rm obs}\lesssim0.5$
from the GW detection.
}
Our afterglow lightcurve fitting is guided by the measured peak at
$t_{\rm{obs,pk}}\sim150\;$days \citep{Dobie+18} and the data points near the peak. Fig.~\ref{fig:model-fit} shows
the fit to the afterglow data
for different initial $\Gamma_0$ ({\it{}top-left panel}) and viewing angles $\theta_{\rm{obs}}$ ({\it{}top-right panel}).
We do not attempt to fit the early time data at $t_{\rm{obs}}\lesssim40$~days, before the simulated lightcurves contain the
dominant and dynamically relaxed contribution from the hydrodynamic simulation. Nevertheless,
we obtain a reasonable fit to the afterglow data for different values of $\Gamma_0$, where our lightcurves for larger
$\Gamma_0$ extend to earlier times and can adequately explain the data at $t_{\rm{obs}}\gtrsim40\,$days.
The best constrained parameters are \citep[also see][]{Granot+18a}: (i) $p\approx2.16$,
and (ii) $\theta_{\rm{obs}}/\theta_0\approx3.1\pm0.1$, since it significantly affects the shape of the
lightcurve before and around
the peak time. In the bottom-right panel of Figure~\ref{fig:model-fit}, we compare the model lightcurves for $\theta_0=0.1,\,0.2$ and show
that in both cases $\theta_{\rm{obs}}/\theta_0=3.1$ \red{provides a comparably good fit}, while fixing the same values for the shock microphysical
parameters but varying the true jet energy $E$ and CBM density $n$.
\red{We compare the simulation lightcurves with those obtained from semi-analytic models of different jet
structures, namely a top-hat (THJ), Gaussian (GJ), and a power law jet (PLJ) \citep[see][for models of structured jets]{Gill-Granot-18b}.
For the top-hat jet we prescribe the same dynamics as that for the two structured jets, i.e. every part of the jet evolves
locally as if it were part of a spherical flow, with no sideways spreading. As a result, all three semi-analytic models yield very
similar lightcurves right after the peak when the compact core of the jet becomes visible to the off-axis observer.
On the other hand, the simplified dynamics of the semi-analytic models leads to a significantly shallower post-peak flux decay rate
compared to the simulated one, which may be attributed to the combination of a shallower asymptotic decay and a smaller overshoot
just after the peak \citep[e.g.][]{Granot-07}. The post-peak flux decay behavior of different structured jets will be investigated
in more detail using 2D numerical simulations in another work (Urrutia et al. 2019, in preparation).
For the semi-analytic models one set of model parameter values that can explain the observations sufficiently well are:
$E_{\rm k,iso,\{c, jet\}}\approx10^{51.6}\,$erg, $\theta_{\rm \{c,jet\}}\approx5^\circ$,
$\theta_{\rm obs}=27^\circ$, $\epsilon_e\approx10^{-1}$, $\epsilon_B\approx10^{-2.8}$, and the only difference is in the
core Lorentz factors between the three models, with $\Gamma_{\rm c}^{\rm PLJ}=100$, $\Gamma_{\rm jet}^{\rm THJ}=\Gamma_{\rm c}^{\rm GJ}=600$.
}
\section{Flux scalings, model degeneracies, and minimum jet energy and CBM density estimates}\label{sec:scalings}
For the lightcurve fits we assume $\xi_e=1$,
and use the dependence on the shock microphysical parameters in PLS~G
from \citet{Granot-Sari-02}, now including the degeneracy due to $\xi_e$ \citep[e.g.][]{vanEerten-MacFadyen-12}, $F_{\nu,G}\propto\epsilon_e^{p-1}\epsilon_B^{(p+1)/4}\xi_e^{2-p}\nu^{(1-p)/2}$. We also use the global scaling
relations \citep{Granot-12}, which are conveniently parameterized through
length and time,
\begin{equation}\label{eq:scaling}
\alpha=\frac{\ell'}{\ell}=\frac{t'}{t}=\frac{t'_{\rm{obs}}}{t_{\rm{obs}}}=\left(\frac{E'/E}{n'/n}\right)^{1/3}~,
\end{equation}
and through mass and energy, $\zeta=m'/m=E'/E$, where the rescaled parameters
are denoted with a prime, $\mathcal F = F'_{\nu,G}(t'_{\rm{obs}},\epsilon'_e,\epsilon'_B,\xi'_e)/F_{\nu,G}(t_{\rm{obs}},\epsilon_e,\epsilon_B,\xi_e)$,
\begin{equation}\label{eq:Fscaling}
\mathcal F = \zeta^{(p+5)\over4}\alpha^{-3(p+1)\over4}\fracb{\epsilon'_e}{\epsilon_e}^{p-1}
\fracb{\epsilon'_B}{\epsilon_B}^{(p+1)\over4}\fracb{\xi'_e}{\xi_e}^{2-p}~.
\end{equation}
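As a usage illustration, the rescaling of Eqs.~(\ref{eq:scaling})--(\ref{eq:Fscaling})
can be applied to a simulated lightcurve with a few lines of Python. This is a sketch only;
the \texttt{*\_ratio} arguments are the primed-to-unprimed parameter ratios.
\begin{verbatim}
# Rescale a simulated PLS G lightcurve (t_obs, F_nu) to new parameters.
def rescale_lightcurve(t_obs, F_nu, p, E_ratio, n_ratio,
                       eps_e_ratio=1.0, eps_B_ratio=1.0, xi_e_ratio=1.0):
    alpha = (E_ratio / n_ratio) ** (1.0 / 3.0)   # length/time rescaling
    zeta  = E_ratio                              # energy/mass rescaling
    Fscale = (zeta ** ((p + 5) / 4.0)
              * alpha ** (-3 * (p + 1) / 4.0)
              * eps_e_ratio ** (p - 1)
              * eps_B_ratio ** ((p + 1) / 4.0)
              * xi_e_ratio ** (2 - p))
    return alpha * t_obs, Fscale * F_nu          # (t'_obs, F'_nu)
\end{verbatim}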
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{intersections.pdf}
\caption{Allowed 3D parameter space [$\xi_e,\epsilon_e,\epsilon_B$] shown by planes
in this space for different jet energies, $\log_{10}E = 48.3, 48.6,...,49.8$ (from red to cyan)
following Eq.~(\ref{eq:scaling2a}), which is satisfied in the region above the black plane for which
$\xi_{e,\min}\leq\xi_e\leq1$. The constraint on $\xi_{e,\min}$ from Eq.~(\ref{eq:xie-min}) is shown by
the black plane. The excluded region, for which $\xi_{e,\min}>1$, is shown by the shaded transparent
region on the top-face of the cube.}
\label{fig:intersections}
\end{figure}
Next, we constrain $E$ from below by using these scaling relations and our (partly degenerate)
best-fit parameters: $E=10^{50.4}\;{\rm{erg}}$, $n=10^{-3.6}\;{\rm{}cm^{-3}}$, $\epsilon_e=10^{-1.8}$,
$\epsilon_B=10^{-3.12}$, $\theta_{\rm{obs}}/\theta_0=3.1$ (fixing $\xi_e=1$, $p=2.16$, $\theta_0=0.1$). Matching the peak time of
the simulated lightcurve to $t_{\rm{obs,pk}}\approx150\,$days requires no significant time rescaling, and yields
$\alpha=t'_{\rm{obs}}/t_{\rm{obs}}\approx1$. Matching the peak flux to that observed requires equating Eq.~(\ref{eq:Fscaling}) to
unity. Altogether, replacing the unprimed quantities by the best-fit values, and then making the rescaled quantities unprimed,
and solving for $\zeta$, yields
\begin{eqnarray}\label{eq:scaling2a}
\zeta&&=\frac{E}{10^{50.4}\,{\rm{erg}}}=\frac{n}{10^{-3.6}\,{\rm{cm}}^{-3}} \nonumber \\
\label{eq:scaling2b}
&&\approx\fracb{\epsilon_e}{10^{-1.8}}^{4(1-p)\over(p+5)}\fracb{\epsilon_B}{10^{-3.12}}^{-(p+1)\over(p+5)}\xi_e^{4(p-2)\over(p+5)}~,
\end{eqnarray}
where the equality in Eq.~(\ref{eq:scaling2a}) results from Eq.~(\ref{eq:scaling}) when $\alpha=1$.
This leaves us with a 3D allowed parameter space since we started with 7 free model parameters ($\theta_0=0.1$ was fixed by
the simulation, leaving $E,\,n,\,\epsilon_e,$ $\epsilon_B,\,p,\,\xi_e,\,\theta_{\rm obs}$) and used 4 observational constraints.
The jet energy in Eq.~(\ref{eq:scaling2a})
decreases with increasing $\epsilon_e,\,\epsilon_B$ and increases only weakly with $\xi_e$. A minimal energy
constraint can be obtained by maximizing the values of $\epsilon_e,\,\epsilon_B$ and minimizing that of $\xi_e$. This
is demonstrated in Fig.~\ref{fig:intersections}, where we show planes in the 3D parameter space [$\xi_e, \epsilon_e, \epsilon_B$]
for different jet energies. Here we first use the fact that the broadband afterglow observations lie on a single PLS,
with $\nu>\nu_m$, where we obtain
\begin{equation}\label{eq:nu_m}
\nu_m = 8.93\times10^5~\xi_e^{-2}E_{{\rm k,iso},52.7}^{1/2}
\epsilon_{e,-1.8}^2\epsilon_{B,-3.12}^{1/2}t_{\rm obs,150d}^{-3/2}\,{\rm Hz}
\end{equation}
for $t_{\rm obs,150d}=t_{\rm obs}/(150\,{\rm days})$ and $p=2.16$ from the expression for PLS G given in \citet{Granot-Sari-02}.
This expression is only valid for a spherical flow and for an on-axis observer, for
whom the flux is dominated by emission from material along the LOS. At $t_{\rm obs}\geq t_{\rm obs,pk}\approx150\,$days, the flux is dominated
by that from the core of the jet with $E_{\rm k,iso,c}\lesssim10^{52.7}\,$erg. At $t_{\rm obs}<t_{\rm obs,pk}$, the flux is dominated by emission from
material outside of the core at $\theta>\theta_0$ with $E_{\rm k,iso}<E_{\rm k,iso,c}$. To obtain the value of $\nu_m$ for an off-axis observer, we
calibrated Eq.~(\ref{eq:nu_m}) by comparing it with the value of $\nu_m$ obtained from our numerical simulation around the time of the earliest radio observations at $t_{\rm obs}\approx16.4\,$days.
Next, we use the relation from Eq.~(\ref{eq:scaling2b}) in Eq.~(\ref{eq:nu_m}) and replace $E_{\rm k,iso}$ to obtain an expression that depends
only on shock microphysical parameters, which, for $\nu_m({\rm 16.4~days})<\nu_{\rm obs}=3\,$GHz, yields a lower limit on $\xi_e$
\begin{equation}\label{eq:xie-min}
\xi_e>\xi_{e,\min} \approx 0.84~\epsilon_{e,-1}^{6/7}\epsilon_{B,-1}^{1/7}~.
\end{equation}
This constraint is shown as a shaded black plane in Fig.~\ref{fig:intersections} above which Eq.~(\ref{eq:scaling2a}) is satisfied.
Another useful constraint here is that $\xi_{e,\min}<1$, which yields
\begin{equation}\label{eq:epsB-max}
\epsilon_e<\epsilon_{e,\max}=0.12\epsilon_{B,-1}^{-1/6}\ .
\end{equation}
We first use the constraint on $\xi_{e}$ from Eq.~(\ref{eq:xie-min}) in Eq.~(\ref{eq:scaling2a}) and remove the dependence on $\xi_e$.
Next, we use the additional constraint on $\epsilon_e$ from Eq.~(\ref{eq:epsB-max}) (which is equivalent to substituting $\xi_e=1$ and $\epsilon_e=\epsilon_{e,\max}$ in Eq.~[\ref{eq:scaling2a}]) to obtain
\begin{equation}\label{eq:Emin}
E_{\min}\approx7.7\times10^{48}\,\epsilon_{B,-1}^{-1/3}\;\rm{erg}=5.3\times10^{48}\epsilon_{B,-0.5}^{-1/3}\;\rm{erg}~,
\end{equation}
as also demonstrated in Fig.~\ref{fig:intersections} by the intersection of the black plane with planes marked by jet energies $E>E_{\min}$.
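This chain of substitutions is compact enough to verify numerically; the following
Python sketch plugs $\xi_e=1$ and $\epsilon_e=\epsilon_{e,\max}$ from
Eq.~(\ref{eq:epsB-max}) into Eq.~(\ref{eq:scaling2b}) and recovers the normalization of
Eq.~(\ref{eq:Emin}) up to rounding.
\begin{verbatim}
# Recover E_min by substituting xi_e = 1 and eps_e = eps_e_max
# into the scaling relation for the jet energy.
p = 2.16
def E_of(eps_e, eps_B, xi_e=1.0):
    zeta = ((eps_e / 10**-1.8) ** (4 * (1 - p) / (p + 5))
            * (eps_B / 10**-3.12) ** (-(p + 1) / (p + 5))
            * xi_e ** (4 * (p - 2) / (p + 5)))
    return 10**50.4 * zeta                       # [erg]

eps_B = 0.1                                      # i.e. eps_{B,-1} = 1
eps_e_max = 0.12 * (eps_B / 0.1) ** (-1.0 / 6.0)
print(E_of(eps_e_max, eps_B))    # ~7.8e48 erg, matching E_min to rounding
\end{verbatim}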
If we consider only some $\xi_e<1$, as may be expected on theoretical grounds, then Eq.~(\ref{eq:xie-min}) will lead to
$\epsilon_{e,\max}=0.12\epsilon_{B,-1}^{-1/6}\xi_e^{7/6}$ and accordingly increase $E_{\min}$ to
\begin{eqnarray}\nonumber
E_{\min}&\approx&3.6\times10^{49}\,\epsilon_{B,-1}^{-1/3}\xi_{e,-1}^{-2/3}\;\rm{erg}\\ \label{eq:Emin2}
&=&5.3\times10^{48}\epsilon_{B,-0.5}^{-1/3}\xi_e^{-2/3}\;\rm{erg}~.
\end{eqnarray}
Finally, according to Eq.~(\ref{eq:scaling2b}) $E_{\min}$ also corresponds to a minimal CBM density,
\begin{eqnarray}\nonumber
n_{\min}&\approx&3.6\times10^{-5}\,\epsilon_{B,-1}^{-1/3}\xi_{e,-1}^{-2/3}\;\rm{cm^{-3}}\\ \label{eq:n_min}
&=&5.3\times10^{-6}\epsilon_{B,-0.5}^{-1/3}\xi_e^{-2/3}\;\rm{cm^{-3}}~.
\end{eqnarray}
\section{Model comparison with afterglow image size and flux centroid motion}\label{sec:FC-image-size}
We compare the afterglow image size and flux centroid motion
on the plane of the sky as obtained
from our simulations to the GW$\,$170817/GRB$\,$170817A radio observations. VLBI observations between 75 and $230\,$days revealed an unresolved
source whose flux centroid showed apparent superluminal motion with $\langle v_{\rm{app}}\rangle/c=\langle\beta_{\rm{app}}\rangle=4.1\pm0.5$ \citep{Mooley+18b}. The flux centroid's location on the plane of the sky is defined as
\begin{equation}
\mathbf{\tilde{r}}_{\rm{fc}}=(\tilde{x}_{\rm{fc}},\,\tilde{y}_{\rm{fc}})=\frac{\int{}dF_\nu\,\mathbf{\tilde{r}}}{\int{}dF_\nu}=\frac{\int{}d\tilde{x}\,d\tilde{y}\,I_\nu\mathbf{\tilde{r}}}{\int{}d\tilde{x}\,d\tilde{y}\,I_\nu}
\end{equation}
\citep[e.g.,][]{Granot+18b}, where $dF_\nu=I_\nu d\Omega=I_\nu{}d_A^{-2}dS_\perp$, with $I_\nu$ being the specific intensity, $d_A$ the angular distance, and $dS_\perp=d\tilde{x}\,d\tilde{y}$ a transverse area element on the plane of the sky. The jet symmetry axis is in the $\tilde{x}$-$\tilde{z}$ plane, where the $\tilde{z}$-axis points to the observer.
Because of the flow's axisymmetry, the image has the reflection symmetry $I_\nu(\tilde{x},\tilde{y})=I_\nu(\tilde{x},-\tilde{y})$.
Therefore, $\mathbf{\tilde r}_{\rm fc}=(\tilde{x}_{\rm{fc}},0)$ and the flux centroid moves along the $\tilde x$-axis. Since $I_\nu=d_A^2dF_\nu/dS_\perp\propto{}F_\nu/S_\perp$ where $S_\perp\propto\ell^2$, it scales in PLS~G as
$\mathcal I = I'_{\nu,G}(t'_{\rm{obs}},\tilde{x}',\tilde{y}')/I_{\nu,G}(t_{\rm{obs}},\tilde{x},\tilde{y})$,
\begin{equation}\label{eq:Inu_scaling}
\mathcal I
=\zeta^{(p+5)\over4}\alpha^{-(3p+11)\over4}\fracb{\epsilon'_e}{\epsilon_e}^{p-1}\fracb{\epsilon'_B}{\epsilon_B}^\frac{p+1}{4}\fracb{\xi'_e}{\xi_e}^{2-p}~.
\end{equation}
The image size, flux centroid location, and observed time all scale as $\alpha=\tilde{x}'/\tilde{x}=\tilde{y}'/\tilde{y}=\tilde{x}'_{\rm{fc}}/\tilde{x}_{\rm{fc}}=t'_{\rm{obs}}/t_{\rm{obs}}$, independent of the r.h.s of Eq.~(\ref{eq:Inu_scaling}). The flux centroid's apparent velocity $\beta_{\rm{app}}$ remains unchanged, but shifts to the rescaled observer time
\citep[see, e.g. Sec. 4 of][for more details]{Granot+18b}.
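For concreteness, the image quantities entering this comparison can be extracted from a
simulated surface-brightness map as sketched below. This is a minimal Python illustration;
it computes second moments of the image, which coincide with the $(\sigma_x,\sigma_y)$ of
the elliptical Gaussian fit used below only for a truly Gaussian image.
\begin{verbatim}
# Flux centroid and image moments from a surface brightness map
# I[ix, iy] sampled on a grid (x, y) in the plane of the sky.
import numpy as np

def image_moments(x, y, I):
    X, Y = np.meshgrid(x, y, indexing="ij")
    norm = I.sum()
    x_fc = (I * X).sum() / norm        # flux centroid; y_fc = 0 by the
                                       # reflection symmetry I(x,y)=I(x,-y)
    sig_x = np.sqrt((I * (X - x_fc)**2).sum() / norm)   # ~ semi-minor axis
    sig_y = np.sqrt((I * Y**2).sum() / norm)            # ~ semi-major axis
    return x_fc, sig_x, sig_y
\end{verbatim}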
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Fig2d_alpha_GW170817.pdf}
\caption{The observed mean radio flux centroid velocity between 75 and 230 days,
$\mean{\beta_{\rm{app}}}=4.1\pm0.5$ \citep[\textit{horizontal lines};][]{Mooley+18b}, is compared to that from our best-fit simulation (\textit{thick red line}) as a function of $\alpha$. It corresponds to $\alpha=0.661^{+0.242}_{-0.141}$ (\textit{vertical lines}) or a $1\sigma$ confidence interval $0.520 <\alpha<0.903$.
}
\label{fig:alpha-fit}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Fig2d_images_GW170817.pdf}
\caption{{\bf\textit{Top}}: The evolution of the afterglow image flux-centroid location ($\tilde{x}_{\rm{fc}}$; \textit{deep purple}), and best-fit parameters to an elliptical Gaussian: semi-minor axis $\sigma_x$ (\textit{blue}), semi-major axis $\sigma_y$ (\textit{red}), and center $\tilde{x}_{\rm{el}}$ (\textit{magenta}).
Solid lines are for our fiducial model, and dotted lines of the same color are for our
best-fit length-time rescaling parameter $\alpha=0.661$.
Our model calculations are compared to observational
upper limits on the semi-major (\textit{red}) and semi-minor (\textit{blue}). The limits at $75,\,230\;$days \citep{Mooley+18b} are $\sim1\sigma$; ellipse symbols assume a 4:1 axis ratio; black-circle symbols assume a circular Gaussian and apply to both axes. At $207\;$days \citep{Ghirlanda+18} we show $68\%$~CL and $90\%$~CL limits for our calculated axis ratio ($\sigma_y/\sigma_x=2.55$).
The vertical dotted black lines indicate the two epochs (75 and 230~days) between which
$\mean{\beta_{\rm{app}}}=4.1\pm0.5$ was measured \citep{Mooley+18b}.
{\bf\textit{Bottom}}: The
evolution of the flux-centroid location (\textit{left $y$-axis}) for our fiducial model
(\textit{deep purple}) and its rescaled version to best fit the measured $\mean{\beta_{\rm{app}}}$ (shaded region of matching color for the $1\sigma$ confidence region), as well as of the flux centroid's apparent velocity (\textit{right $y$-axis}). For the latter we show both the
mean apparent velocity from $t=0$, $\mean{\beta_{\rm{app}}}_0=\vert\tilde{x}_{\rm{fc}}\vert/ct_{\rm{obs}}$
(\textit{dark green}), and for the instantaneous $\beta_{\rm{app}}=\vert{}d\tilde{x}_{\rm{fc}}/d(ct_{\rm{obs}})\vert$ (\textit{blue}).}
\label{fig:gaussian-fit}
\end{figure}
Fig.~\ref{fig:alpha-fit} shows how our best-fit simulated $\langle\beta_{\rm{app}}\rangle$ varies with $\alpha$. The measured $\langle\beta_{\rm{app}}\rangle=4.1\pm0.5$ corresponds to $\alpha=0.661^{+0.242}_{-0.141}$, and is consistent (at the 1.35$\sigma$ level) with
our fiducial model that fits the afterglow lightcurve ($\alpha=1$), which thus passes an important consistency check.
To calculate the afterglow image size and shape, we fit the surface brightness to an elliptical Gaussian, $I_\nu\propto\exp[-(\tilde{x}-\tilde{x}_{\rm{el}})^2/2\sigma_x^2-\tilde{y}^2/2\sigma_y^2]$ centered at $(\tilde{x}_{\rm{el}},0)$, where $(\sigma_x,\sigma_y)$ are the
standard deviations of the semi-minor and semi-major axes along the $\tilde{x}$-axis and $\tilde{y}$-axis, respectively \citep{Granot+18b}.
The top-panel of Fig.~\ref{fig:gaussian-fit} shows the evolution of the afterglow flux-centroid location, and the afterglow image size and shape for $\alpha=1$ and for the $\mean{\beta_{\rm{app}}}$ best-fit $\alpha=0.661$. Our image size is consistent with the upper limits from radio VLBI observations
\citep{Mooley+18b,Ghirlanda+18}. The bottom-panel of Fig.~\ref{fig:gaussian-fit} shows the flux centroid's location, $\tilde{x}_{\rm{fc}}(t_{\rm{obs}})$,
as well as its instantaneous ($\beta_{\rm{app}}=\vert{}d\tilde{x}_{\rm{fc}}/d(ct_{\rm{obs}})\vert$) and mean
($\mean{\beta_{\rm{app}}}_0=\vert\tilde{x}_{\rm{fc}}\vert/ct_{\rm{obs}}$) apparent velocities, for our fiducial model ($\alpha=1$),
and over the $1\sigma$ confidence interval of $\alpha$ derived in Fig.~\ref{fig:alpha-fit}. We find that $\beta_{\rm{app}}(t_{\rm{obs,pk}})\approx\mean{\beta_{\rm{app}}}$.
The measured $\mean{\beta_{\rm{app}}}$ favors a slightly larger $\theta_0$ compared to our $\theta_0=0.1$.
The lightcurve peak occurs when $1/\Delta\theta\approx\Gamma(t_{\rm{obs,pk}})\approx\beta_{\rm{app}}(t_{\rm{obs,pk}})\approx\mean{\beta_{\rm{app}}}$,
implying $\theta_0\approx[\mean{\beta_{\rm{app}}}(\theta_{\rm{obs}}/\theta_0-1)]^{-1}\approx0.116^{+0.016}_{-0.013}$
using the measured $\langle\beta_{\rm{app}}\rangle=4.1\pm0.5$ and our inferred $\theta_{\rm{obs}}/\theta_0=3.1\pm0.1$.
The latter implies $\Gamma(t_{\rm{obs,pk}})\propto\theta_0^{-1}$, which in turn
for the measured $t_{\rm{obs,pk}}(\theta_0)\approx150\,{\rm{days}}$, and either pre- or post-jet break simple
analytic dynamics, implies $E/n\propto\theta_0^{-6}$. This agrees with the best-fit values for our
$\theta_0=0.1,\,0.2$ to within 34\%, $(0.2/0.1)^6(10^{50.32}/10^{-2})/(10^{50.4}/10^{-3.6})\approx1.337$.
Even for $\theta_0=0.2$, a derivation of $E_{\rm{min}}$ following the one done above for $\theta_0=0.1$ gives
a result very similar to Eq.~(\ref{eq:Emin}), implying that it is quite robust.
Altogether, $\mean{\beta_{\rm{app}}}$ provides an additional observational constraint that allows us to
constrain an additional model parameter, $\theta_0$, which still leaves us with a 3D allowed parameter space.
\section{Discussion and Conclusions}
\label{sec:dis}
This work demonstrates using afterglow lightcurves and image size, shape and flux centroid motion,
all derived from 2D hydrodynamical numerical simulations, that an initially top-hat jet can fit the afterglow
observations of GW$\,$170817/GRB$\,$170817A for $\theta_0\approx0.1$ and $\theta_{\rm{obs}}/\theta_0\approx3$
at $t_{\rm obs}\gtrsim t_{\rm obs,pk}$.
We show that simulations of initially top-hat jets with a modest $\Gamma_0\sim20-25$ can only be used to fit
the late-time observations near the lightcurve's peak at $t_{\rm{obs,pk}}\approx150\,$days. Fitting earlier
observations at $t_{\rm{obs}}\lesssim60\,$days requires $\Gamma_0\gtrsim25$.
We analytically express the allowed parameter space (Eq.~[\ref{eq:scaling2b}]) showing the full degeneracies
between the model parameters, and find a robust lower limit on the jet's true energy,
$E_{\min}\approx5.3\times10^{48}\,$erg (Eq.~[\ref{eq:Emin}]), and the CBM density,
$n_{\min}\approx5.3\times10^{-6}~{\rm cm}^{-3}$ (Eq.~[\ref{eq:n_min}]).
Our numerical simulations are initialized using a conical wedge from the BM76 self-similar solution;
a similar setup is used in the BOXFITv2 code. The simulation is initialized at a finite lab-frame time
$t_0=t(\Gamma_0)$ corresponding to the modest $\Gamma_0=\Gamma(t_0)$. Therefore, no flux contributions
are obtained from the simulated region at $t<t_0\Leftrightarrow t_{\rm{obs}}<t_{\rm{obs,0}}$.
Artificially supplementing the lightcurve at those times with flux arising from the initial
condition (a top-hat jet) over a wide time-range produces an early sharply-rising flux for an off-axis
($\theta_{\rm{obs}}>\theta_0$) observer.
However, within a dynamical time ($t_0<t\lesssim2t_0\Leftrightarrow{}t_{\rm{obs,0}}<t_{\rm{obs}}\lesssim2t_{\rm{obs,0}}$), as
the outflow relaxes from the initial conditions it develops a bow-shock like angular structure that resembles a structured jet
having an energetic relativistic core surrounded by mildly (and sub-) relativistic low-energy material. Outside the
highly-relativistic core, whose emission is strongly beamed, the slower material makes the dominant contribution to the flux
for off-axis observers due to its much wider beaming cone.
As the jet's core decelerates, its beaming cone widens and the observer sees a gradual rise in flux until the entire core becomes
visible, at which point the flux peaks and starts to decline thereafter, gradually joining the on-axis lightcurve.
We demonstrate here that by using increasingly larger $\Gamma_0=20,\,40,\,60$ the initial observed time
can be shifted to correspondingly earlier times, $t_{\rm{obs},0}=38.1,\,23.0,\,18.3\;$days, thereby replacing the sharp rise in
flux with a much more gradual rise.
In GRB$\,$170817A, the shallow flux rise seen from $t_{\rm{obs,0}}\simeq10\,$days can potentially be
reproduced for $\Gamma_0\gtrsim10^{2.5}$; such values are physically plausible but computationally
challenging, and the exact shape of the early rising lightcurve in this case is still unclear.
Nevertheless, the initially top-hat jet model has some limitations. For example, the early time afterglow lightcurve
shows a power-law rise ($F_\nu\propto t_{\rm obs}^{0.8}$) to the peak, whereas the model lightcurve has some
curvature. In this work we did not carry out a detailed model fit to the data to determine the goodness of fit since
our simulations were limited to $\Gamma_0 = 60$ and could not fit observations at $t_{\rm obs}\lesssim40\,$days.
Numerical simulations of structured jets that show a greater degree of complexity, and therefore are more realistic,
also have a larger number of model parameters, which allows them to capture the subtleties of the observed afterglow data
more effectively.
Numerical simulations of a relativistic jet penetrating through the dynamical ejecta/neutrino-driven wind of
BNS merger \citep{Bromberg+18,Gottlieb+18,Xie+18,Geng+19} find that the emergent jet develops a core-dominated
angular structure similar to what we find. Moreover, our afterglow
model fit parameters are consistent with works featuring initially structured core-dominated jets. This renders
both scenarios practically indistinguishable from afterglow observations alone, particularly close to and after
the peak time of the lightcurve \citep[also see, e.g.,][]{Gottlieb+19} when emission from the core starts dominating
the observed flux, thereby validating the use of initially top-hat jet simulations as an attractive tool for afterglow
modeling of core-dominated jets.
Both the jet's dynamics and its initial angular structure outside the core, before it is decelerated by the external medium,
affect the afterglow emission before the lightcurve peak time. From the afterglow observations alone it might be difficult
to disentangle their effects; however, they may be better probed by the prompt emission.
For example, in the case of GRB$\,$170817A, its highly sub-luminous and mildly soft
prompt $\gamma$-ray emission rules out an initial top-hat jet \citep[e.g.,][]{Abbott+17-GW170817-GRB170817A,Granot+17}, favoring instead emission from sub-energetic mildly-relativistic material near our line of sight.
\acknowledgments
R.G. and J. G. are supported by the Israeli Science Foundation under grant No. 719/14.
FDC acknowledges support from the UNAM-PAPIIT grant IN117917.
We acknowledge the support from the Miztli-UNAM supercomputer (project LANCAD-UNAM-DGTIC-281)
in which the simulations were performed.
\section{Introduction\label{sec:intro}}
Being able to describe the data collected from the observations of various physical phenomena with simple analytical equations and formulas is the holy grail in theoretical physics --- the physicists who are lucky enough to find such relationships typically get those laws named after them. In the era of big data, this task is becoming increasingly difficult for a human --- the data is just too complex and/or very high-dimensional. Recent advances in computer science and theoretical modelling have allowed us to entertain the idea that the discovery process could perhaps be automated (at least as a matter of principle) and novel laws of phenomenological behavior can be constructed entirely with a machine and without any human intervention \cite{Langley1977, Langley1987,Kokar1986,Langley1989,Zembowicz1992,Todorovski1997,Bongard2007,Schmidt2009,Battaglia2016,Chang2016,Guimera2020}. A less ambitious, but still worthy, task is to simply let the machine re-derive the known classical physics laws from data \cite{Udrescu:2019mnk,Cranmer:2020wew,liu2022ai,https://doi.org/10.48550/arxiv.2206.10540}.
Spurred by the extensive recent research on symbolic learning in the machine learning community, the above program was recently successfully applied to examples in a wide range of physics areas, e.g.
in astrophysics \cite{Cranmer2019,Cranmer:2020wew,2021arXiv211102422D},
in astronomy for the study of orbital dynamics \cite{Iten2020,Lemos2022}
and exoplanet transmission spectroscopy \cite{Matchev2022ApJ},
in collider physics \cite{Choi:2010wa,Butter:2021rvz,Dersy:2022bym,Alnuqaydan:2022ncd},
in materials science \cite{wang_wagner_rondinelli_2019},
and in behavioral science \cite{Arechiga2021}.
A common ML tool used in such studies is symbolic regression --- an interpretable machine learning algorithm
which searches the space of functions until it finds an algebraic expression that approximates the dataset well. While most current applications of symbolic regression are limited to low-dimensional data, the approach can be easily extended to higher-dimensional spaces by using a neural network as a proxy, as illustrated in Ref.~\cite{Cranmer:2020wew} with the example of N-body problems.
The basic task in symbolic regression is to learn an analytical expression $f(\mathbf{x})$ given some labelled data $(\mathbf{x},y)$, where $\mathbf{x}$ are input features, typically high-dimensional, and $y$ is the output target label\footnote{In principle, $y$ can also be high-dimensional, however, for simplicity in this paper we shall focus on a single $y$.}. The learned function $f(\mathbf{x})$ can be scrutinized further in three aspects corresponding to fundamental principles of explainable AI \cite{559461}:
\begin{itemize}
\item {\bf Explanation accuracy.} The first question is how good the result is, i.e., how well $f(\mathbf{x})$ fits the training data. Typical datasets are imperfect, due to noise, experimental errors, etc., in which case the fitted function will provide only an approximate description of the data. The fit is only expected to get worse as the errors in the data increase \cite{2204.02704}. On the other hand, even if the data is perfect, the fit may be sub-optimal due to factors related to the training of the symbolic regression itself. For example, one may have started with the wrong choice of basis functions, one may have unnecessarily restricted the functional complexity, or the training may simply not converge to the right answer. The numerical examples considered in this paper will illustrate many of those situations.
\item {\bf Generalizability (knowledge limits).} A system should only operate under conditions for which it was designed. In the case of symbolic regression, extrapolating into the regions away from the training data in principle could be dangerous and should be handled with care. At the same time, physics laws are universal --- if we find the correct relationship, it should be valid over the full allowed domain of the input variables. As shown below, this principle could be used to narrow down the list of candidate analytical expressions.
\item {\bf Explainability (meaningful).} A system must provide explanations that are understandable to the intended consumers, and furthermore, these explanations must correctly reflect the reason for generating the output and/or the system’s process. A common criticism of deep learning models is that they are black boxes which provide little insight into the fundamental processes that are at work. A symbolic regression is arguably the most intuitive and meaningful approach from the point of view of a theorist --- theorists are used to working with analytical formulas and from experience can often find the physical interpretations of the various terms in an analytical expression.
\end{itemize}
In this paper we consider several applications of symbolic regression to problems in collider physics and specifically particle kinematics. These examples will be presented in order of increasing difficulty, starting from simple cases in which the exact theoretical formula is known. Nevertheless, rederiving those answers with a symbolic regression will serve as an important illustration and validation of our procedure.
Symbolic regression is a promising machine learning method that searches over a large space of functions until it finds an expression which is both a) relatively simple and b) a good fit to the training data. Because the evolutionary algorithm requires diversity in order to effectively explore the search space, the result of the symbolic regression is a collection of several high-scoring models, which need to be scrutinized by the user to identify an approximation that offers a good trade-off between accuracy and simplicity. At the same time, training a symbolic regression is a computationally expensive process, since the function space to be scanned is in principle infinite. This is why, as a proof-of-concept, in this paper we shall limit ourselves to a few simple examples, which do not require a high-performance cluster, and can be done on a personal laptop.
To train a symbolic regression, we shall make use of the \textsc{PySR}~software package \citep{pysr}, which models the data set with a graph neural network before applying symbolic regression to fit different internal parts of the learned model that operate on reduced-dimension representations \cite{Cranmer:2020wew}. We shall not attempt any hyperparameter optimization and for the most part will use the default configuration in the \textsc{PySR}~version 0.10.1 distribution.
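For orientation, a minimal \textsc{PySR}~run has the following structure. This is a sketch
only, on a toy dataset; the operator lists shown are illustrative, and the exact option
names may differ slightly between \textsc{PySR}~versions.
\begin{verbatim}
import numpy as np
from pysr import PySRRegressor

X = np.random.rand(1000, 2)              # input features
y = 2 * (X[:, 0] * X[:, 1] + X[:, 0])    # toy target label

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["sqrt"],
)
model.fit(X, y)                          # evolutionary search
print(model)                             # table of candidate equations
\end{verbatim}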
The paper is organized as follows. In Section~\ref{sec:mt2} we shall use parton-level data (in the narrow-width approximation) to re-derive some known analytical results for the Cambridge $M_{T2}$ variable. In Section~\ref{sec:F} we repeat the same exercise and try to derive the splitting function ${\cal F}(E,\theta)$ for the ISR photon at an $e^+e^-$ collider, which gives us the probability to radiate a photon with a given energy $E$ and a given polar angle $\theta$. We perform two versions of the exercise. First, in Section~\ref{sec:Fdirect} we sample the ${\cal F}$ function directly to create a perfect data sample with no statistical fluctuations. Then, in Section~\ref{sec:F_MC} we use a sample of Monte Carlo (MC) generated events to first obtain a binned estimate of ${\cal F}$ (which is subject to statistical errors) before applying the symbolic regression. In Section~\ref{sec:F_MC_detector} we perform a more realistic analysis by adding detector resolution effects. Section~\ref{sec:conclusions} is reserved for a summary and outlook.
\section{Deriving analytic expressions for algorithmically defined kinematic variables: $M_{T2}$}
\label{sec:mt2}
A standard analysis of particle physics data (such as events from collisions at the Large Hadron Collider (LHC) at CERN) involves the study of distributions of kinematic variables, which are typically defined in terms of the energies and momenta of the particles observed in the detector (for recent reviews of the kinematic variables commonly used in collider phenomenology, see \cite{Han:2005mu,Barr:2010zj,Barr:2011xt,Franceschini:2022vck}). Many of these variables, e.g., invariant mass, missing transverse momentum, etc., are defined in terms of simple analytical expressions and can be readily computed from the collections of particle 4-momenta in the event. However, there also exists another class of kinematic variables, which are defined algorithmically, i.e., through a well-defined optimization procedure which involves the minimization (or maximization) of a relevant kinematic function. In that case, the kinematic variable is a quantity which can be computed only once the algorithm has converged, and typically there is no a priori known analytical expression for it in the general case. Examples of such variables include many traditional event shape variables (thrust, sphericity, etc.) \cite{Banfi:2010xy,Franceschini:2022vck}, some modern substructure variables like N-jettiness \cite{Stewart:2010tn} and N-subjettiness \cite{Thaler:2010tr}, and many others. Another large class of algorithmic variables, which have received a lot of attention in the last 15 years, are the so-called constrained mass variables, which are computed via constrained minimization of a kinematic function of the particle 4-momenta \cite{Han:2005mu,Barr:2010zj,Barr:2011xt,Franceschini:2022vck}. The minimization is typically performed over the energy and momentum components of invisible particles in the event (neutrinos or dark matter candidates). Examples of constrained mass variables include the Oxbridge variable $M_{T2}$ \cite{Lester:1999tx,Barr:2003rg} and its 4-dimensional generalization $M_2$ \cite{Cho:2014naa,Cho:2015laa,Cho:2014yma}, the variable $M_{2C}$ \cite{Ross:2007rm}, etc. In this paper, for concreteness we shall focus on the well-known $M_{T2}$ variable \cite{Lester:1999tx,Barr:2003rg}, which is algorithmically defined and does not have a known analytical formula in the general case. The advantage of $M_{T2}$ is that there exist formulas for special cases of certain momentum configurations for the visible final state particles. As a warm-up, in this section we shall use these special $M_{T2}$ cases to validate and illustrate the use of symbolic regression for the purpose of deriving new formulas for computing kinematic variables.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{./plots/diagrams.pdf}
\caption{The generic $\slashed{E}_\genericT$ event topology applicable to the $M_{T2}$ variable. The parent particles ${\cal P}_1$ and ${\cal P}_2$ are produced in association with some visible upstream transverse momentum $\vec{p}_T^{\, ISR}$. The remaining visible final state particles are divided into two groups (solid black lines), with 4-momenta $p_1$ and $p_2$ and masses $m_1$ and $m_2$, respectively. The two invisible final state particles (red dashed lines) have 4-momenta $q_1$ and $q_2$ and are assumed to have a common mass $m_\chi$. }\label{fig:diagram}
\end{figure}
A well-motivated class of new physics models which generically predict a $\slashed{E}_\genericT$ signature is that of models with dark-matter candidates. In such models, the lifetime of the dark-matter particle is typically protected by an exact discrete symmetry, which implies that the collider signals will involve not one, but {\it two} decay chains, each terminating in a dark-matter particle invisible in the detector. The simplest $\slashed{E}_\genericT$ event topology of this type is illustrated in Figure~\ref{fig:diagram}, where two identical parent particles ${\cal P}_1$ and ${\cal P}_2$ are produced with additional objects, typically from initial state radiation (ISR). Each parent particle ${\cal P}_i$, $(i=1,2)$, decays to a visible particle system with invariant mass $m_i$ and 4-momentum $p_i = (E_i, \vec p_{i T}, p_{i z})$ and an invisible particle $\chi_i$ with 4-momentum $q_i=(\varepsilon_i, \vec q_{i T}, q_{i z})$. The masses of the invisible particles are a priori unknown. Here, we shall assume that the invisible particles $\chi_1$ and $\chi_2$ are identical and have a common mass $m_\chi$. Momentum conservation in the transverse plane implies
\begin{equation}
\vec{q}_{1T} + \vec{q}_{2T} = \slashed{\vec{p}}_\genericT\,,
\label{eq:q1Tq2TMET}
\end{equation}
where the missing transverse momentum vector is given by
\begin{equation}
\slashed{\vec{p}}_\genericT = - (\vec{p}_{1T} + \vec{p}_{2T}) - \vec{p}_{T}^{\,ISR}.
\label{eq:mptdef}
\end{equation}
The transverse momentum vectors $\vec{p}_{iT}$, $\vec{q}_{iT}$, $\slashed{\vec{p}}_\genericT$ and $\vec{p}_{T}^{\,ISR}$ are illustrated in Figure~\ref{fig:config}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./plots/transverse_momentum.pdf}
\caption{A generic configuration of the transverse momentum vectors $\vec{p}_{iT}$, $\vec{q}_{iT}$, $\slashed{\vec{p}}_\genericT$ and $\vec{p}_{T}^{\,ISR}$ entering the definition (\ref{eq:mt2def}) of $M_{T2}$.
}\label{fig:config}
\end{figure}
The two main ingredients in the $M_{T2}$ calculation are the transverse masses $M_{T{\cal P}_i}$ of the two parent particles ${\cal P}_i$:
\begin{equation}
M_{T {\cal P}_i}(\vec q_{iT},m_\chi) = \sqrt{m_i^2 + m_\chi^2
+ 2 \big ( E_{iT} \varepsilon_{iT} - \vec p_{iT} \cdot \vec q_{iT} \big )},~~~~
\label{eq:MTPdef}
\end{equation}
where the transverse energies are defined as
\begin{equation}
E_{i T} = \sqrt{ \vec p_{i T}^{\, 2} + m_i^2 } , \quad
\varepsilon_{i T} = \sqrt{ \vec q_{i T}^{\,2} + m_\chi^2 } \, .
\end{equation}
The $M_{T2}$ is defined as \cite{Lester:1999tx,Barr:2003rg}
\begin{eqnarray}
M_{T2} (\tilde m) &\equiv& \min_{\vec{q}_{1T},\vec{q}_{2T}}
\hspace*{-0.1cm}
\left\{\max\left[M_{T{\cal P}_1}(\vec{q}_{1T},\tilde m),\,M_{T{\cal P}_2} (\vec{q}_{2T},\tilde m)\right] \right\} ,\nonumber\\
\slashed{\vec{p}}_\genericT &=& \vec{q}_{1T}+\vec{q}_{2T} \;, \label{eq:mt2def}
\end{eqnarray}
where the {\it a priori} unknown invisible daughter mass $m_\chi$ has been replaced with a test mass parameter $\tilde m$. This construction guarantees that on an event-by-event basis the computed value of $M_{T2}$ does not exceed the mass of the parent ${\cal P}_i$.
In general, the minimization in (\ref{eq:mt2def}) has to be done numerically. However, for certain special cases, analytical solutions have been derived \cite{Barr:2003rg,Lester:2011nj,Lally:2012uj,Lester:2007fq,Cho:2007qv,Cho:2007dh}. In this section, we shall apply symbolic regression to rederive several of those analytical solutions.
\subsection{The case of no upstream momentum}
\label{sec:noISR}
The minimization in eq.~(\ref{eq:mt2def}) may result in one of two distinct possibilities: the transverse masses of the parents are equal, $M_{T {\cal P}_1} = M_{T {\cal P}_2}$, which is known as the balanced solution, or the transverse masses of the parents are unequal, $M_{T {\cal P}_1} \ne M_{T {\cal P}_2}$, known as the unbalanced case. The analytical expression for $M_{T2}$ in the unbalanced case is simply given by eq. (\ref{eq:MTPdef}) \cite{Barr:2003rg}, so the balanced case is the only one we need to worry about. Unfortunately, there is no known analytical formula for the balanced $M_{T2}$ solution for generic momentum configurations like the one in Figure~\ref{fig:config}. However, for the special momentum configuration shown in Figure~\ref{fig:configNoISR}, where $\vec p_T^{ISR}=0$, the analytical formula for the balanced $M_{T2}$ solution is known to be \cite{Cho:2007qv,Lester:2007fq}
\begin{eqnarray}
&&M_{T2}^{2}(\tilde{m})
= \tilde{m}^2 + A_T \label{eq:mt2oldB}\\
&&\hspace*{0.5cm}+
\sqrt{ \left ( 1 + \frac{4 \tilde{m}^2}{2A_T-m_1^2-m_2^2} \right )
\left ( A_T^2 - m_1^2 ~m_2^2 \right ) } \,,\nonumber
\end{eqnarray}
where $A_T$ is a convenient shorthand notation introduced in \cite{Cho:2007dh}
\begin{equation}
A_T = E_{1T} E_{2T} + \vec{p}_{1T}\cdot\vec{p}_{2T} \, .
\label{ATdef}
\end{equation}
In order to avoid always taking an extra square root, from now on we shall for convenience focus on the $M_{T2}$ variable {\em squared}.
In what follows, an important attribute of an analytical expression will be its so-called complexity $C$ (defined as the number of leaf nodes in the binary tree representing the analytical expression) \cite{pysr,Cranmer:2020wew}. Functions of higher complexity demand more extensive computational resources, including longer computational times. The function (\ref{eq:mt2oldB}) is of complexity 24, which is already a formidable challenge. Given a) our rather modest computational budget, and b) our goal of demonstrating the method as a proof of principle, here we shall limit ourselves to four simple, yet nontrivial, special cases of (\ref{eq:mt2oldB}) which have lower complexity, namely
\begin{itemize}
\item {\em Massless visible and massless invisible final state particles.} Setting $m_1=m_2=0$ and $\tilde m=0$ in (\ref{eq:mt2oldB}), we obtain:
\begin{equation}
M_{T2}^2(\tilde m) = 2A_T = 2(E_{1T}\, E_{2 T}+\vec{p}_{1 T} \cdot \vec{p}_{2 T} ) \, .
\label{eq:MT2_special1}
\end{equation}
\item {\em Massless visible and massive invisible final state particles.} Substituting $m_1=m_2=0$ and $\tilde m\ne 0$ into (\ref{eq:mt2oldB}), we get:
\begin{equation}
M_{T2}^2(\tilde m) = \tilde m^2 + A_T+\sqrt{A_T(A_T+2\tilde m^2) } \, .
\label{eq:MT2_special2}
\end{equation}
\item {\em Equally massive visible and massless invisible final state particles.} Alternatively, choosing $m_1=m_2 = m \ne 0$ and $\tilde m= 0$ in (\ref{eq:mt2oldB}), we find:
\begin{equation}
M_{T2}^2(\tilde m=0) = A_T + \sqrt{A_T^2- m^4} \, .
\label{eq:MT2_special3}
\end{equation}
\item {\em Equally massive visible and massive invisible final state particles.} Finally, choosing $m_1=m_2 = m \ne 0$ and $\tilde m\ne 0$ in (\ref{eq:mt2oldB}), we find:
\begin{equation}
M_{T2}^{2}(\tilde{m})
= \tilde{m}^2 + A_T + \sqrt{ \left ( A_T - m^2 + 2\tilde{m}^2 \right )
\left ( A_T + m^2 \right ) } \, .
\label{eq:MT2_special4}
\end{equation}
\end{itemize}
We shall now try to reproduce\footnote{ Note that all of these results would have been completely novel prior to 2007, i.e., only 15 years ago.} each of those expressions (\ref{eq:MT2_special1}-\ref{eq:MT2_special4}) with the symbolic regression algorithm implemented in \textsc{PySR}~\cite{pysr,Cranmer:2020wew}. For this purpose, we shall generate a large sample of events, compute the target variable $M_{T2}^2$ numerically from the defining formula (\ref{eq:mt2def}), using the python code {\sc mt2 1.2.0} \cite{Lester:2014yga}, and then ask the symbolic regression to ``discover'' the analytical results (\ref{eq:MT2_special1}-\ref{eq:MT2_special4}).
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{./plots/special_configuration_1.pdf}
\caption{The special momentum configuration with $\vec p_{T}^{\, ISR} = 0$ considered in Section~\ref{sec:noISR}. The missing transverse momentum $\slashed{\vec{p}}_\genericT$ exactly balances the total visible transverse momentum $\vec{p}_{1T}+\vec{p}_{2T}$.
\label{fig:configNoISR}}
\end{figure}
\begin{table*}[t]
\centering
\renewcommand\arraystretch{2.0}
\begin{tabular}{||c|c|c|c|c||}
\hline
Case &Complexity & Fitted function & MSE & Score \\
\hline
\hline
\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{$\mu =0$, $\tilde \mu=0$}}}
& 1 & $\vec{p}_{1T} \cdot \vec{p}_{2T}$ & $7\times10^7$& $0 $\\
& 3 &$|\vec{p}_{1T}+ \vec{p}_{2T} |^2$ & $2.2\times10^6 $&$1.74$ \\
& 5 & $2(\vec{p}_{1T} \cdot \vec{p}_{2T}+E_{1T}\, E_{2T} )$& $0$ & $\infty$\\
\hline\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu =0$, $\tilde \mu \ne 0$}}}
& 9 & $2A_T+1.8(\tilde\mu-3.91) $ & $7.345\times10^3$ & $6.73\times10^{-3}$ \\
& 11 & $2A_T+\tilde\mu/0.556-9.51 $ & $7.316\times10^3 $ & $1.985\times10^{-3}$ \\
& 13 & $2A_T+\tilde\mu+0.10\,\tilde\mu\, A_T^{1/4} $ & $6.377\times10^3 $ & $6.870\times10^{-2}$ \\
& 14 & $\tilde\mu+A_T+\sqrt{A_T^2+2\tilde\mu A_T} $ & $0$ & $\infty$ \\
\hline\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu \ne 0$, $\tilde \mu= 0$}}}
& 5 & $2.02(A_T- 133.35) $ & $6.67\times 10^4$ & $0.23$ \\
& 7 & $2.03A_T-0.44\mu $ & $3.39\times 10^4$ & $ 0.34$\\
& 9 & $2A_T -\mu^2/A_T$ & $ 2.06\times 10^4$ & $0.25$\\
& 10 & $A_T + \sqrt{(A_T-\mu)(A_T+\mu)}$ & $0$ & $\infty $\\
\hline\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu \ne 0$, $\tilde \mu\ne 0$}}}
& 12 & $ A_T + \sqrt{2}\sqrt{ \left ( A_T - \mu+\tilde{\mu} \right )
A_T }$ & $3.90\times10^4$ & $0.28$ \\
& 13 & $ A_T + \sqrt{2}\sqrt{ \left ( A_T - 0.99\mu \right )
A_T } $ & $3.34\times10^4$ & $ 0.16$\\
& 14 & $ A_T + \sqrt{ \left ( A_T - \mu + 2.74\tilde{\mu} \right )
\left ( A_T + \mu \right ) } $ & $3.488\times10^3$ & $2.26$\\
& 16 & $\tilde{\mu} + A_T + \sqrt{ \left ( A_T - \mu + 2\tilde{\mu} \right )
\left ( A_T + \mu \right ) } $ & $1.12\times10^{-6}$ & $10.93$\\
\hline\hline
\end{tabular}
\caption{Results from the $M_{T2}$ exercise with no ISR considered in Section~\ref{sec:noISR}.
In each case, we show the best fitted functions at several representative values of the complexity. The correct answers are given by eqns.~(\ref{eq:MT2_special1}-\ref{eq:MT2_special4}), with
the substitutions $\tilde m^2 \to \tilde\mu$ and $m^2\to \mu$.
\label{tab:noISR}
}
\end{table*}
In the case of no upstream momentum ($\vec p_T^{\,ISR}=0$) considered in this subsection, there are 7 input degrees of freedom, which naively can be taken to be the two transverse momentum components of each visible particle, $\vec{p}_{1T}$ and $\vec{p}_{2T}$, their masses $m_1$ and $m_2$, and the invisible test mass $\tilde m$. In principle, one can use this set of primitive variables as inputs to the symbolic regression, but the disadvantage is that the machine will need to learn the physics principles from scratch. In order to improve and speed up the performance of the symbolic regression, it is crucial to use an optimized set of input variables which reflects the underlying physics principles of the problem. One possibility is to use dimensional analysis and feed only groups of variables which have the proper physics dimensions \cite{Matchev2022ApJ}. In our case, since we are looking for a formula for a mass-squared quantity, $M_{T2}^2$, it makes sense for all of our input variables to have mass dimension 2; otherwise, the complexity of the target function increases, making it more difficult for the symbolic regression to find it. Furthermore, we know that the answer must be rotationally invariant, back-to-back boost invariant \cite{Cho:2007qv,Cho:2007dh}, and symmetric with respect to permutations among the visible particles $(1\leftrightarrow 2)$. These considerations restrict the relevant set of variables to fewer degrees of freedom, which further improves the performance of the symbolic regression. For example, in the case of the function (\ref{eq:MT2_special1}) we shall consider as inputs the set
$\{E_{1T}E_{2T}, \vec{p}_{1T}\cdot \vec{p}_{2T},|\vec{p}_{1T}+ \vec{p}_{2T}|\}$,
in terms of which the answer (\ref{eq:MT2_special1}) is only of complexity 5. Similarly, in the case of the function (\ref{eq:MT2_special2}), we shall input the values of $A_T$ and $\tilde\mu \equiv\tilde m^2$, which results in complexity 12. Then for the function (\ref{eq:MT2_special3}) we shall use the values of $A_T$ and $\mu \equiv m^2$ as inputs, and the corresponding complexity is 8. Finally, for the function (\ref{eq:MT2_special4}) we shall feed in $\{A_T, \mu, \tilde\mu\}$ and the complexity is 15.
\begin{table*}[t]
\centering
\renewcommand\arraystretch{2.0}
\begin{tabular}{||c|c|c|c|c||}
\hline
Case & Complexity & Fitted function & MSE & Score \\
\hline
\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu =0$, $\tilde \mu\ne0$}}}
& 8 & $\sqrt{\tilde\mu(\vec{p}_{1T} \cdot \vec{p}_{2T}+E_{1T}\, E_{2T})/0.468}$ & $26.9$ & $ 2.946$ \\
& 10 & $\tilde\mu+\sqrt{\tilde\mu\,(\vec{p}_{1T} \cdot \vec{p}_{2T}+E_{1T}\, E_{2T})/0.5}$ & $2.91\times10^{-5} $ & $ 6.87$ \\
& 12 & $\tilde\mu+\sqrt{2\tilde\mu(\vec{p}_{1T} \cdot \vec{p}_{2T}+E_{1T}\, E_{2T})} +0.005$ & $ 1.25\times10^{-5}$ & $4.24\times10^{-1}$ \\
& 13 & $\tilde\mu+(\sqrt{\tilde\mu} )\sqrt{\vec{p}_{1T} \cdot \vec{p}_{2T}+E_{1T}\, E_{2T}}\sqrt{2}$ & $0$ & $\infty$
\\
\hline\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu \ne 0$, $\tilde \mu\ne0$}}}
& 6 & $\sqrt{\tilde\mu\, A_T/0.22}$ & $ 5.33\times10^5$& $ 9.168\times10^{-1}$\\
& 8 &$(\mu+\sqrt{\tilde\mu\, A_T/0.296})$ & $ 1.64\times10^5$ & $5.89\times10^{-1} $ \\
& 10 & $1.29(\tilde\mu+\mu+\sqrt{\tilde\mu A_T})$ & $ 7.08\times10^{3}$ & $ 1.57$ \\
& 12 & $\tilde\mu+\mu+\sqrt{2\tilde\mu(\mu+A_T)}$ & $0$ & $\infty$
\\
\hline\hline
\end{tabular}
\caption{Results from the $M_{T2}$ exercise considered in Section~\ref{sec:noMET} for the momentum configuration with $\slashed{p}_\genericT=0$ displayed in Figure~\ref{fig:configNoMET}.
\label{tab:noMET}
}
\end{table*}
In order to train the symbolic regression, we need to create suitable training data. For the exercise in this subsection, we sample the lepton momenta $\vec{p}_{1T}$ and $\vec{p}_{2T}$, which also fixes the missing transverse momentum vector as $\slashed{\vec{p}}_\genericT = - (\vec{p}_{1T}+\vec{p}_{2T})$
(see Figure~\ref{fig:configNoISR}). From those momenta we compute the input features (of mass dimension 2) to the symbolic regression as explained above. The target variable $M_{T2}^2$ is then calculated numerically with the {\sc mt2} code \cite{Lester:2014yga}. This exercise is performed four different times, depending on the choice for the mass parameters $\mu$ and $\tilde \mu$ being zero or non-zero, leading to the four different cases in eqns.~(\ref{eq:MT2_special1}-\ref{eq:MT2_special4}).
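A simplified sketch of this data-generation pipeline is given below. The call signature of {\tt mt2()} is assumed to follow the {\sc mt2 1.2.0} package \cite{Lester:2014yga} (and should be checked against its documentation), while the sampling ranges and mass values are arbitrary choices for illustration only.
\begin{verbatim}
import numpy as np
from mt2 import mt2   # numerical M_T2 calculation

rng = np.random.default_rng(0)
n, m1, m2, mtest = 10_000, 0.0, 0.0, 100.0
p1 = rng.uniform(-200.0, 200.0, size=(n, 2))
p2 = rng.uniform(-200.0, 200.0, size=(n, 2))
ptmiss = -(p1 + p2)   # no-ISR configuration

# mass-dimension-2, permutation-symmetric features
e1t = np.sqrt((p1**2).sum(1) + m1**2)
e2t = np.sqrt((p2**2).sum(1) + m2**2)
AT  = e1t*e2t + (p1*p2).sum(1)

# target: the squared stransverse mass, event by event
y = np.array([mt2(m1, *p1[i], m2, *p2[i],
                  *ptmiss[i], mtest, mtest)**2
              for i in range(n)])
\end{verbatim}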
In each of these four cases, we train the \textsc{PySR}~symbolic regression algorithm on 10,000 events. We mostly use the default hyperparameter configuration in the \textsc{PySR}~distribution. Due to the relatively high complexity of our functions, we increased the number of iterations to 10. We allow for the simple arithmetic operators addition ($+$), subtraction ($-$), multiplication ($*$), division ($/$), and square root ($\sqrt{\cdot}$). The loss function is the mean squared error (MSE).
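In code, this setup amounts to something like the following sketch (reusing the features and target arrays from the previous snippet); apart from the number of iterations, the settings shown are the \textsc{PySR}~defaults, whose loss is already the MSE.
\begin{verbatim}
import numpy as np
from pysr import PySRRegressor

X = np.column_stack([e1t*e2t, (p1*p2).sum(1)])
model = PySRRegressor(
    niterations=10,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["sqrt"],
)
model.fit(X, y)
print(model)  # candidates with complexity, loss, score
\end{verbatim}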
The output from a typical \textsc{PySR}~run is a set of functions of increasing complexity $C$, together with their MSE and score.
The score is defined as the fractional drop in the MSE per unit increase in complexity relative to the next best model \cite{Cranmer:2020wew}:
\begin{equation}
{\rm Score} = - \frac{\Delta \log \left( \mathrm{MSE} \right )}{\Delta C} \, .
\end{equation}
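For instance, the scores quoted in our tables can be recomputed from the $({\rm complexity}, {\rm MSE})$ pairs of successive models along the \textsc{PySR}~Pareto front with a short helper (the sample values below correspond to the first block of Table~\ref{tab:noISR}, with the exactly vanishing MSE regularized to a tiny number):
\begin{verbatim}
import numpy as np

def scores(complexity, mse):
    c = np.asarray(complexity, dtype=float)
    m = np.asarray(mse, dtype=float)
    # one score per transition between successive models
    return -np.diff(np.log(m)) / np.diff(c)

print(scores([1, 3, 5], [7e7, 2.2e6, 1e-300]))
# -> [1.73..., huge]; an exact answer gives a very large score
\end{verbatim}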
The results from the four exercises in this subsection are displayed in Table~\ref{tab:noISR}. In each case, the symbolic regression was able to eventually reproduce the correct functional dependence, once the required complexity was reached. We note that Eq.~(\ref{eq:MT2_special4}) turned out to be more challenging than the others, so for that case we increased the \texttt{population\_size} parameter to 50 and used 40,000 training samples with a batch size of 5,000.
Note that sometimes we obtain an equivalent expression of slightly higher complexity. For example, in the case of eq.~(\ref{eq:MT2_special2}), the answer has expanded the parentheses under the square root, which leads to an equivalent expression, but formally increases the complexity of the function to 14. Also note that since in this exercise the data is sampled from the exact function (no noise or errors), the MSE for the right answer is zero (or very close to it) and the score is infinite (or very large). The successful replication of the known special cases (\ref{eq:MT2_special1}-\ref{eq:MT2_special4}) validates our use of symbolic regression as implemented in \textsc{PySR}~and motivates us to consider more realistic examples in the following sections.
\subsection{The case of no missing transverse momentum}
\label{sec:noMET}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./plots/special_configuration_2.pdf}
\caption{The special momentum configuration with $\slashed{\vec{p}}_\genericT = 0$ considered in Section~\ref{sec:noMET}. The invisible particles have equal and opposite momenta in the transverse plane. As a result, $\vec{p}_T^{\, ISR}$ exactly balances the total visible transverse momentum $\vec{p}_{1T}+\vec{p}_{2T}$.
\label{fig:configNoMET}}
\end{figure}
\begin{table*}[t]
\centering
\renewcommand\arraystretch{2}
\begin{tabular}{||c|c|c|c|c||}
\hline
Case & Complexity & Fitted function & MSE & Score \\
\hline
\hline
\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{$\mu =0$, $\tilde \mu = 0$}}}
& 3 & $|\slashed{\vec{p}}_\genericT||\vec{p}_{T12}|$ & $3.13\times10^{5}$ & $1.59$ \\
& 5 & $|\slashed{\vec{p}}_\genericT|(|\vec{p}_{T12}|-4.04)$ & $2.11\times10^{5}$ & $0.20$ \\
& 7 & $ 2A_T|\slashed{\vec{p}}_\genericT|/|\vec{p}_{T12}|$ & $8.65\times10^{-8}$ & $14.26$ \\
\hline\hline
\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{$\mu =0$, $\tilde \mu\ne0$}}}
& 14 & $\tilde\mu-QA_T+\sqrt{A_T(\tilde\mu+A_T)/0.50}$ & $52.83$ & $1.08$ \\
& 16 & $\tilde\mu-QA_T/0.897+\sqrt{A_T(2\tilde\mu+A_T)}$ & $44.51$ & $8.57\times10^{-2}$ \\
& 18 & $\tilde\mu-QA_T+\sqrt{A_T(2\tilde\mu+Q^2A_T)} $ & $3.86\times10^{-5}$ & $6.98$ \\
\hline\hline
\end{tabular}
\caption{
Results from the $M_{T2}$ exercise with the collinear momentum configuration in Section~\ref{sec:collinear}.
\label{tab:collinear}
}
\end{table*}
Recently, Ref.~\cite{Lester:2011nj} pointed out a new special case which also allows an analytical formula for $M_{T2}$. Its momentum configuration is shown in Figure~\ref{fig:configNoMET}, where the two invisible momenta are equal and opposite, and as a result $\slashed{\vec{p}}_\genericT=0$. This cancellation of the invisible momenta is purely accidental, which is why this case is mostly of academic interest --- there will be very few events (if any) of this type in the data. Nevertheless, for completeness we shall explore this situation as well.
For simplicity, we shall focus on the case when the masses of the visible final state particles are the same, i.e., $m_1=m_2\equiv m$. The formula for $M_{T2}$ is given by \cite{Lester:2011nj}
\begin{subequations}
\begin{eqnarray}
M_{T2}^2(\tilde \mu) &=& \tilde\mu+\mu+\sqrt{2\tilde\mu(\mu+E_{1T} E_{2T}+\vec{p}_{1T} \cdot \vec{p}_{2T})}~~~~~~ \label{eq:MT2MET0_pE} \\
&=& \tilde\mu+\mu+\sqrt{2\tilde\mu(\mu+A_T)}, \label{eq:MT2MET0_AT}
\end{eqnarray}
\label{eq:MT2MET0}
\end{subequations}
where to facilitate later comparisons to the \textsc{PySR}~output, we have used the mass squared parameters $\tilde\mu=\tilde m^2$ and $\mu=m^2$.
Once again, we may consider several special cases, depending on the masses of the visible and invisible particles. The case of massless invisible particles ($\tilde \mu=0$) leads to a trivial function $M^2_{T2}=\mu$ and will not be considered further. On the other hand, the case of massless visible particles ($\mu=0$ in (\ref{eq:MT2MET0_pE})) gives a non-trivial function
\begin{equation}
M_{T2}^2(\tilde \mu) = \tilde\mu+\sqrt{2\tilde\mu(E_{1T} E_{2T}+\vec{p}_{1T} \cdot \vec{p}_{2T})}\, .
\label{eq:MT2MET0_speical1}
\end{equation}
Keeping in mind that the answer must be symmetric with respect to interchanging $1\leftrightarrow 2$, we can use the set of mass-dimension 2 variables $\{E_{1T}E_{2T}, \vec{p}_{1T}\cdot \vec{p}_{2T}, \tilde\mu\}$, in terms of which the function (\ref{eq:MT2MET0_speical1}) is of complexity 10.
Proceeding as in Section~\ref{sec:noISR}, we train a symbolic regression with the default parameter configuration in {\sc PySR} on 10,000 events in the $\slashed{\vec{p}}_\genericT=0$ configuration of Figure~\ref{fig:configNoMET}. We repeat the exercise twice --- once for massless visible particles ($\mu=0$) and then again for massive visible particles ($\mu\ne 0$). The value of $M_{T2}$ is always computed with massive invisible particles ($\tilde\mu\ne 0$). The results are displayed in Table~\ref{tab:noMET}. In the massless case ($\mu=0$), the exact formula (\ref{eq:MT2MET0_speical1}) is reproduced, albeit in a mathematically equivalent form of slightly higher complexity. In the massive case ($\mu\ne 0$), our set of input variables was taken to be $\{\mu, \tilde\mu, A_T\}$, and the result was again successful, reproducing the correct function (\ref{eq:MT2MET0_AT}) at complexity level 12.
\subsection{The collinear momentum configuration}
\label{sec:collinear}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./plots/special_configuration_3.pdf}
\caption{The special balanced momentum configuration $\slashed{\vec{p}}_\genericT = Q ( \vec p_{1T} + \vec p_{2T})$ considered in Section~\ref{sec:collinear}. In general, the proportionality factor $Q$ can be positive or negative, while $Q=0$ reduces to the case considered in Section~\ref{sec:noISR}. Note that $\vec{p}_{T}^{\,ISR}$ is also necessarily collinear with $\slashed{\vec{p}}_\genericT$ and the total visible transverse momentum $\vec{p}_{1T}+\vec{p}_{2T}$.
\label{fig:configCollinear}}
\end{figure}
A second special case discussed in Ref.~\cite{Lester:2011nj} is that of the collinear momentum configuration in Figure~\ref{fig:configCollinear}, where the three transverse vectors $\slashed{\vec{p}}_\genericT$, $\vec{p}_T^{ISR}$ and $\vec{p}_{1T}+\vec{p}_{2T}$ all lie along the same line in the transverse plane. Ref.~\cite{Lester:2011nj} parametrized this case through a proportionality factor $Q$ defined by
\begin{equation}
\slashed{\vec{p}}_\genericT = Q\, (\vec{p}_{1T}+\vec{p}_{2T}) \equiv Q\, \vec{p}_{T12},
\label{eq:Qfactordef}
\end{equation}
where we have introduced a shorthand notation $\vec{p}_{T12}$ for the total visible transverse momentum $\vec{p}_{1T}+\vec{p}_{2T}$. Note that $Q$ is unbounded and can take both positive and negative values, i.e., $-\infty < Q < \infty$. For definiteness, in Figure~\ref{fig:configCollinear} we show the case of $Q<0$.
As before, we consider only the case $\mu=0$, for which the formula is
\begin{equation}
M_{T2}^2(\tilde \mu) = \tilde\mu-QA_T+\sqrt{A_T(2\tilde\mu+Q^2A_T)} .
\end{equation}
For completeness we also consider separately the special case $\tilde\mu=0$ when the formula simplifies to
\begin{eqnarray}
M_{T2}^2(\tilde \mu) &=& - QA_T + |QA_T| \nonumber \\
&=& \left\{
\begin{array}{ll}
0, & {\rm for~} Q>0,\\
2 A_T |Q| = 2 A_T \frac{|\slashed{\vec{p}}_\genericT|}{|\vec{p}_{T12}|}, & {\rm for~} Q<0.
\end{array}
\right.
\label{ATQ}
\end{eqnarray}
For simplicity we shall only test the non-trivial case given by the second line in (\ref{ATQ}).
Training \textsc{PySR}~as before, we find the results shown in Table~\ref{tab:collinear}. In the case of $\tilde\mu= 0$, we choose the variables $A_T$, $|\slashed{\vec{p}}_\genericT|$ and $|\vec{p}_{T12}|$ as our input features, while in the case of $\tilde\mu\ne 0$, our input features were $A_T$, $Q$, and $\tilde \mu$. We see that the correct answers are reproduced at complexities 7 and 18, respectively.
\section{Deriving analytic expressions for NLO kinematic distributions}
\label{sec:F}
As our second example, we shall apply symbolic regression to learn the shapes of kinematic distributions at next-to-leading order (NLO). For simplicity, we shall consider the simplest possible process at leading order (LO), namely, the pair-production $e^+e^-\to \chi\chi$ of two invisible particles at a lepton collider with CM energy $\sqrt{s}$. Here the $\chi$ particles can be neutrinos or stable BSM dark matter candidates that escape undetected. In order to observe such events, we have to tag the event with a photon from initial state radiation (ISR), i.e., consider the NLO process $e^+e^-\to\chi\chi+\gamma$ \cite{Birkedal:2004xn}.
In general, there is no model-independent exact theoretical prediction for the resulting kinematic distribution of the ISR photon (for model-dependent studies, see \cite{Gopalakrishna:2001iv,Oller:2004br,Mawatari:2014cja,Kalinowski:2022xjw}). However, if the emitted photon is either {\it soft} or {\it collinear} with the incoming electron or positron, soft/collinear factorization theorems provide an approximate model-independent relation between the LO and NLO differential cross-sections:
\begin{equation}
\frac{d\sigma(e^+e^-\to \chi\chi+\gamma)}{dx\, d\cos\theta} \approx
{\cal F}(x, \sin\theta)\,\hat{\sigma}(e^+e^-\to\chi\chi),
\label{collinear}
\end{equation}
where $\theta$ is the angle between the photon direction and the direction of the incoming electron beam, and the dimensionless quantity
\begin{equation}
x=\frac{2E_\gamma}{\sqrt{s}}
\label{xdef}
\end{equation}
is a measure of the photon energy $E_\gamma$, normalized by the beam energy $\sqrt{s}/2$. Further, $\hat{\sigma}$ is the LO $\chi$ pair-production cross section evaluated at the reduced center of mass energy, $\hat{s}=(1-x)s$. Finally, ${\cal F}$ denotes the splitting function
\begin{equation}
{\cal F}(x, \sin\theta) = \frac{\alpha}{\pi}\frac{1+(1-x)^2}{x}
\frac{1}{\sin^2\theta}\,, \label{llog}
\end{equation}
which, upon integration over $\theta$, reproduces the familiar Weizs\"acker--Williams distribution function. The factor ${\cal F}$ is universal: it does not depend on the nature of the (electrically neutral) particles produced in association with the photon.
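For later use in the numerical exercises of this section, the $(\alpha/\pi)$-stripped splitting function can be coded directly (a trivial sketch, with our own function name):
\begin{verbatim}
def splitting(x, sin_theta):
    # (pi/alpha) * F(x, sin(theta))
    return (1.0 + (1.0 - x)**2) / (x * sin_theta**2)
\end{verbatim}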
\begin{table}[t]
\scalebox{0.97}{
\centering
\renewcommand\arraystretch{1.7}
\begin{tabular}{||c|c|c|c||}
\hline
Complexity & Fitted function & MSE & Score \\
\hline
\hline
5 & $ (3.73)/\sin^2\theta$ & $5.56\times10^{3}$ & $0.14$ \\
7 & $ 1.60/(x\sin^2\theta)$ &$2.08\times10^{2}$ & $1.64$ \\
9 &$ (-1.15+\frac{1.89}{x})/\sin^2\theta$ & $8.63$ & $1.59$ \\
11 &$ (x-2+\frac{2}{x})/\sin^2\theta$ & $5.71\times10^{-11}$ & $12.87$\\
\hline\hline
\end{tabular}
}
\caption{Results from the warm-up symbolic regression exercise considered in Section~\ref{sec:Fdirect}.
\label{tab:Fdirect_2}
}
\end{table}
\begin{table*}[t]
\centering
\renewcommand\arraystretch{1.7}
\begin{tabular}{||c|c|c|c||}
\hline
Complexity & Fitted function & MSE & Score \\
\hline
\hline
9 & $ (-0.039+0.063/x)/\sin^2\theta$ & $8.32\times10^{-3}$ & $1.59$ \\
11 & $ [-0.030+0.063/(x-0.012)]/\sin^2\theta$ &$2.72\times10^{-3}$ & $0.558$ \\
13 &$ (-0.068+0.068/x+0.034x)/\sin^2\theta$ & $1.53\times10^{-4}$ & $1.44$ \\
15 &$ [(-0.067+0.067/x+0.034x)/\sin\theta-0.001]/\sin\theta$ & $1.51\times10^{-4}$ & $6.80\times10^{-3}$\\
\hline\hline
\end{tabular}
\caption{Results for a few representative complexities from the symbolic regression exercise performed in Section~\ref{parton-splitting}.
\label{tab:Fdirect}
}
\end{table*}
Note that the normalization of (\ref{collinear}) depends on the fine-structure constant $\alpha$ appearing in (\ref{llog}). Our main goal in this section will be to apply symbolic regression and learn {\em the shape} of the splitting function (\ref{llog}) from a sample of MC events generated either according to the soft/collinear approximation (\ref{collinear}) (see Sections~\ref{parton-splitting} and \ref{detector-splitting}) or using the full matrix element in a specific model (see Sections~\ref{parton-madgraph} and \ref{detector-madgraph}). In Section~\ref{sec:F_MC} (Section~\ref{sec:F_MC_detector}) the exercise will be performed without (with) detector effects, i.e., without (with) smearing the photon energy according to the calorimeter resolution.
\subsection{Warm-up toy exercise: learning the splitting function directly}
\label{sec:Fdirect}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{plots/figure6.png}
\caption{The MSE loss as a function of complexity for the warm-up symbolic regression exercise considered in Section~\ref{sec:Fdirect}.
}
\label{fig:lossIIIA}
\end{figure}
We begin with a toy exercise in which the training data is created by sampling the function ${\cal F}$ directly, i.e., for a given choice of $x$ and $\theta$, the target variable $y$ is computed directly from eq.~(\ref{llog}). In other words, our training dataset will be the set
\begin{equation}
\left(x, \sin\theta, \frac{{\cal F}(x, \sin\theta)}{\alpha/\pi}\right),
\label{trainingdata}
\end{equation}
where for simplicity we have factored out the constant $\alpha/\pi$. One can view this exercise as corresponding to the case of infinite MC statistics in the absence of any detector effects.
We generate training data (\ref{trainingdata}) by sampling $x\in [0.1, 1]$ and $\sin\theta\in[0.1, 1]$ on a $100\times100$ grid. Using the default parameter options in \textsc{PySR}, we obtain the results shown in Table~\ref{tab:Fdirect_2} for the target function in this case, $\frac{\pi}{\alpha}{\cal F}(x, \sin\theta)$. We see that the correct analytical expression, $(1+(1-x)^2)/(x\sin^2\theta)$, is recovered at complexity 11, as indicated by the sharp drop of the MSE loss (note also the drastic improvement in the score at complexity 11). This is pictorially illustrated in Figure \ref{fig:lossIIIA}, which shows the evolution of the MSE loss as a function of complexity.
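A minimal sketch of this warm-up setup (with illustrative variable names) reads:
\begin{verbatim}
import numpy as np
from pysr import PySRRegressor

xg, sg = np.meshgrid(np.linspace(0.1, 1.0, 100),
                     np.linspace(0.1, 1.0, 100))
X = np.column_stack([xg.ravel(), sg.ravel()])
y = (1.0 + (1.0 - X[:, 0])**2) / (X[:, 0] * X[:, 1]**2)

model = PySRRegressor(binary_operators=["+", "-", "*", "/"])
model.fit(X, y)
# recovers (x - 2 + 2/x)/sin^2(theta) at complexity 11
\end{verbatim}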
\subsection{Learning from gen-level MC data}
\label{sec:F_MC}
Having validated our symbolic regression procedure with the toy example of the previous subsection, we shall now modify this exercise, making it more realistic in several ways:
\begin{itemize}
\item Instead of considering infinite statistics, we shall now limit ourselves to a finite event sample, thereby introducing statistical errors in the target values of the function which are used for the training of the symbolic regression.
\item In the toy example of Section~\ref{sec:Fdirect}, we generated the training data by simply looking up the value of the target function at a given $x$ and $\sin\theta$ from the correct formula. In reality this will be impossible, and the target values in the training data will have to be determined from experimental or MC simulated data via some sort of density estimation, e.g., through bin counts. Therefore, from now on we shall always rely on MC simulated data to obtain the values for the (unit-normalized) target function
from event counts in suitably chosen bins. This approach is a better representation of what would be done in an actual experiment.
\item While in this subsection we shall restrict ourselves to gen-level data, in the next subsection \ref{sec:F_MC_detector} we shall account for the finite detector resolution by smearing the photon energy.
\end{itemize}
\subsubsection{Data generated using a splitting function}
\label{parton-splitting}
For this version of the symbolic regression exercise, we first generate MC data according to the approximate model-independent differential cross-section (\ref{collinear}). We avoid the soft/collinear singularity at $x=0$ and $\sin\theta=0$ by focusing on the previously considered region $x\in [0.1, 1]$ and $\sin\theta\in[0.1, 1]$ binned on a $100\times100$ grid. We then sample 100 million events and populate the bins, whose final event counts then serve as the values of the target function (after unit-normalization) to be used in the training. The input features are again $x$ and $\sin\theta$ and we use the default parameter setup in \textsc{PySR}.
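A simplified accept--reject implementation of this sampling-and-binning step is sketched below, with a reduced event count for speed; the function {\tt splitting()} is the sketch defined earlier in this section, and the density bound {\tt fmax} uses the fact that ${\cal F}$ is monotonically decreasing in both $x$ and $\sin\theta$ on the chosen box.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
fmax = splitting(0.1, 0.1)  # maximum on the box
samples = []
while len(samples) < 100_000:   # 10^8 in the actual run
    u = rng.uniform(0.1, 1.0, size=(100_000, 2))
    keep = (rng.uniform(0.0, fmax, 100_000)
            < splitting(u[:, 0], u[:, 1]))
    samples.extend(u[keep])
samples = np.asarray(samples)[:100_000]

hist, _, _ = np.histogram2d(samples[:, 0], samples[:, 1],
                            bins=100,
                            range=[[0.1, 1.0], [0.1, 1.0]])
target = hist / hist.sum()  # unit-normalized bin counts
\end{verbatim}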
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{plots/figure7.png}
\caption{The same as Figure~\ref{fig:lossIIIA}, but for the symbolic regression exercise performed in Section~\ref{parton-splitting}.
}\label{fig:loss}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.23\textwidth]{plots/figure8_1.png}
\includegraphics[width=0.23\textwidth]{plots/figure8_2.png} \\
\includegraphics[width=0.23\textwidth]{plots/figure8_3.png}
\includegraphics[width=0.23\textwidth]{plots/figure8_4.png}
\caption{Heatmaps in the $(x, \sin\theta)$ plane of the differences between the fit functions found by the symbolic regression and the true target function. The rectangular box marked with a dashed line delineates the domain of values on which the symbolic regression was trained.}\label{fig:error}
\end{figure}
Our results are shown in Table~\ref{tab:Fdirect} and Figure~\ref{fig:loss} in complete analogy to the earlier Table~\ref{tab:Fdirect_2} and Figure~\ref{fig:lossIIIA}. Once again, \textsc{PySR}~finds the correct expression, which is now of complexity 13 (the increase by 2 relative to the result in Table~\ref{tab:Fdirect_2} is due to the numerical prefactor in front of the linear $x$ term). However, the MSE this time does not go down to machine precision, and instead saturates at around $10^{-4}$, due to the statistical uncertainties on the target function values in our training data. Note that ``the knee'' in Figure~\ref{fig:loss} is a marker for the true complexity of our target function.
As mentioned in the introduction, an important principle of explainable AI is ``generalizability'', i.e., extrapolating into the region away from the training data. In order to demonstrate this, we repeat the exercise, but this time we train only on the data within the restricted domain shown with the dashed rectangle in Figure~\ref{fig:error}. We then compare the predictions from the fitted functions found by \textsc{PySR}~to the true target function, by plotting the difference as a heatmap in the $(x, \sin\theta)$ plane, as shown in Figure~\ref{fig:error} (an analogous exercise appears in Figure~6 of \cite{Matchev2022ApJ}). Note that in all four cases, the fit within the training domain is reasonably good, but the extrapolation away from it is successful only for the correct answers at complexities 13 and 15. Furthermore, a careful inspection of the plots in the lower row reveals that the extrapolation is better for complexity 13 compared to complexity 15, even though within the training domain the performance is similar. This fact favors the complexity 13 answer over its competitor.
\subsubsection{Data generated with Madgraph}
\label{parton-madgraph}
The training data used in the previous Section~\ref{parton-splitting} was generated with the approximate factorized formula (\ref{collinear}), which is valid in the soft/collinear limit. The advantage of doing so is that we knew the answer that we were supposed to get, which allowed us to judge and validate the performance of \textsc{PySR}. In this subsection, we shall instead generate our training data with a full-blown event generator, {\sc MadGraph5\textunderscore}a{\sc MC@NLO}~\cite{Alwall:2011uj}, which avoids the soft/collinear approximation. For concreteness, we use one of the low energy supersymmetry study points from Ref.~\cite{Birkedal:2004xn}, namely, the one with neutralino mass of $M_\chi=225$ GeV, and choose $\sqrt{s} = 500$ GeV at the International Linear Collider. We assumed an electromagnetic calorimeter acceptance of $\sin\theta > 0.1$, and required $p_{T\, \gamma} = E_{\gamma} \sin\theta > 7.5$ GeV, corresponding to the 1 degree acceptance of the mask calorimeter. With that setup, we generated 10 million events as our training data, and repeated the symbolic regression exercise with default \textsc{PySR}~parameters.
\begin{table}[t!]
\centering
\scalebox{0.89}{
\renewcommand\arraystretch{1.8}
\begin{tabular}{||c|c|c|c||}
\hline
Complexity & Fitted function & MSE & Score \\
\hline
\hline
5 & $17.07-98.85x$ & $1.968$ & $0.671$ \\ [1mm]
7 & $17.08-98.85x+x^2$ & $1.968$ & $1.6\times10^{-5}$ \\ [1mm]
9 & $-11.72+\dfrac{2.42-0.057/x}{x}$ & $0.115$ & $1.419$ \\ [2mm]
11 & $x-11.97+\dfrac{2.44-0.057/x}{x}$ &$0.113$ & $0.007$ \\ [2mm]
13 &$2x-12.23+\dfrac{2.46-0.058/x}{x}$ & $0.112$ & $0.007$ \\ [2mm]
15 &$3x-12.48+\dfrac{2.48+0.058/x}{x}$ & $0.111$ & $0.006$\\ [2mm]
\hline\hline
\end{tabular}
}
\caption{The same as Table~\ref{tab:Fdirect}, but for the symbolic regression exercise with {\sc MadGraph5\textunderscore}a{\sc MC@NLO}~training data performed in Section~\ref{parton-madgraph}.
\label{tab:Gen_NLO}}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{plots/figure9.png}
\caption{The same as Figure~\ref{fig:loss}, but for the symbolic regression exercise with {\sc MadGraph5\textunderscore}a{\sc MC@NLO}~ training data performed in Section~\ref{parton-madgraph}.
\label{fig:lossMG5} }
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\textwidth]{plots/figure10_1.png}
\includegraphics[width=0.45\textwidth]{plots/figure10_2.png}
\caption{
Unit-normalized distribution of the events in the training data (red) and the \textsc{PySR}~output (blue). The results in the left panel are from Section~\ref{parton-madgraph} and do not include detector effects, while the results in the right panel are from Section~\ref{detector-madgraph} and account for the detector resolution. \label{fig:distMG5} }
\end{figure*}
In analogy to the earlier Tables~\ref{tab:Fdirect_2} and \ref{tab:Fdirect} and Figures~\ref{fig:lossIIIA} and \ref{fig:loss}, we present the results in Table~\ref{tab:Gen_NLO} and Figure~\ref{fig:lossMG5}, where for simplicity we focus on the $x$-dependence only. The knee in Figure~\ref{fig:lossMG5} is observed at complexity 9, which also has the highest score in Table~\ref{tab:Gen_NLO}. The form of the function resembles that of (\ref{llog}), but the coefficients are modified. The expressions at higher complexities (11, 13 and 15), while having comparable MSE, might be disfavored using the method discussed at the end of the previous subsection, see Figure~\ref{fig:error}.
Since in this example we do not have a simple analytical answer as a point of reference, the only way to judge the quality of the answer is to numerically compare to the distribution in the training data. In Figure~\ref{fig:distMG5}, we show the unit-normalized distribution of the events in the training data (red) and the \textsc{PySR}~output (blue). The results from the current subsection are shown in the left panel, where the blue line corresponds to the fitted function at complexity 9. We see that the symbolic regression was capable of producing a simple analytical expression which describes the data quite well; the main visible discrepancy is in the low-statistics tail, which is not well represented in the training data and, furthermore, is not relevant for the experimental analysis.
\subsection{Learning from detector-level MC data}
\label{sec:F_MC_detector}
So far in this section we have been ignoring any instrumental effects, so that the observed distribution followed the theoretical formula (up to statistical errors). In this section we shall add the effects of the detector resolution which would in principle cause the result from the symbolic regression to differ slightly from the theoretical prediction at gen-level.
\subsubsection{Data generated using a splitting function}
\label{detector-splitting}
\begin{table*}[t]
\centering
\scalebox{0.86}{
\renewcommand\arraystretch{1.7}
\begin{tabular}{||c|c|c|c||}
\hline
Complexity & Fitted function & MSE & Score \\
\hline
\hline
\multicolumn{4}{||c||}{$\sigma=0.01$}\\
\hline
9 & $ (-0.039+0.064/x)/\sin^2\theta$ & $8.41\times10^{-3}$ & $1.58$ \\
11 & $ 0.056/(x\sin^2\theta(x+0.82))$ &$5.91\times10^{-4}$ & $1.33$ \\
13 &$ (-0.068+0.068/x+0.034x)/\sin^2\theta$ & $2.46\times10^{-4}$ & $0.438$ \\
17 &$ [(-0.067+0.067/x+0.034x+\sin\theta)/\sin\theta-1.00]/\sin\theta$ & $2.44\times10^{-4}$ & $2.11\times10^{-3}$\\
\hline\hline
\multicolumn{4}{||c||}{$\sigma = 0.03$}\\
\hline
9 & $ (-0.039+0.064/x)/\sin^2\theta$ & $8.71\times10^{-3}$ & $1.57$ \\
11 & $ 0.056/(x\sin^2\theta(x+0.81))$ &$7.95\times10^{-4}$ & $1.20$ \\
13 &$ (-0.068+0.068/x+0.034x)/\sin^2\theta$ & $5.33\times10^{-4}$ & $0.20$ \\
15 &$ (-0.068+0.068/x+0.034x)/\sin^2\theta-1.24\times10^{-3}$ & $5.32\times10^{-4}$ & $1.06\times10^{-3}$\\
\hline\hline
\multicolumn{4}{||c||}{$\sigma = 0.05$}\\
\hline
9 & $ (-0.039+0.064/x)/\sin^2\theta$ & $1.16\times10^{-2}$ & $1.44$ \\
11 & $ 0.056/(x\sin^2\theta(x+0.81))$ &$3.89\times10^{-3}$ & $5.46\times10^{-1}$ \\
13 &$ (-0.067+0.068/x+0.033x)/\sin^2\theta$ & $3.70\times10^{-3}$ & $2.51\times10^{-2}$ \\
15 &$ [(-0.067+0.068/x+0.033x)/\sin\theta-1.10\times10^{-3}]/\sin\theta$ & $3.70\times10^{-3}$ & $2.78\times10^{-4}$\\
\hline\hline
\multicolumn{4}{||c||}{$\sigma = 0.1$}\\
\hline
9 & $ (-0.039+0.064/x)/\sin^2\theta$ & $3.90\times10^{-2}$ & $9.00\times10^{-1}$ \\
11 & $ 0.056/(x\sin^2\theta(x+0.82))$ &$3.30\times10^{-2}$ & $8.28\times10^{-2}$ \\
13 &$ (-0.064+0.067/x+0.029x)/\sin^2\theta$ & $3.27\times10^{-2}$ & $3.60\times10^{-3}$ \\
15 &$ 0.078/[\sin^2\theta(1.62x^2+x+0.014)]$ & $3.17\times10^{-2}$ & $1.75\times10^{-2}$\\
\hline\hline
\end{tabular} }
\caption{Results from the symbolic regression exercise performed in Section~\ref{detector-splitting} for several values of the detector resolution parameter $\sigma$: 0.01, 0.03, 0.05 and 0.10.
\label{tab:F_MC}
}
\end{table*}
Here we repeat the exercise from Section~\ref{parton-splitting}, but account for the detector resolution via Gaussian smearing of the energy (but not direction) of the photon with some resolution parameter $\sigma$. By varying the value of $\sigma$, we shall investigate the impact of the detector on our results, which are collected in Table~\ref{tab:F_MC} for four different values of $\sigma$: 0.01, 0.03, 0.05 and 0.10. In all cases, we observe the expected $\sin^{-2}\theta$ dependence. We note that when the detector effects are relatively mild, $\sigma\lesssim 5\%$, the $x$ dependence is well recovered as well. This fact --- that symbolic regression appears to be robust against noise --- has been observed in other independent studies as well \cite{DBLP:journals/corr/abs-1912-04871}.
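The smearing step itself is simple; in a sketch like the one below (reusing the sampled events from the earlier snippet), the relative Gaussian smearing acts on the photon energy, and hence on $x$, while the photon direction is left untouched:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def smear(x, sigma):
    # relative smearing: E -> E * (1 + sigma*N(0,1))
    return x * (1.0 + sigma*rng.standard_normal(np.shape(x)))

x_smeared = smear(samples[:, 0], 0.03)  # sigma = 0.03 case
\end{verbatim}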
\begin{figure}[t!]
\centering
\includegraphics[width=0.235\textwidth]{plots/figure11_1.png}
\includegraphics[width=0.235\textwidth]{plots/figure11_2.png} \\
\includegraphics[width=0.235\textwidth]{plots/figure11_3.png}
\includegraphics[width=0.235\textwidth]{plots/figure11_4.png}
\caption{The same as Figure \ref{fig:loss}, but for the exercise performed in Section~\ref{detector-splitting} with the added detector smearing. Results are shown for several values of the detector resolution parameter $\sigma$ as labelled in the plots. }
\label{fig:loss_smearing}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{plots/figure12.png}
\caption{Loss as a function of the number of events in the training data, for several values of the detector resolution parameter $\sigma$. In each case, we chose to show the \textsc{PySR}~result whose complexity is at the ``knee" of the corresponding plot from Figure~\ref{fig:loss_smearing}. \label{fig:loss2}}
\end{figure}
In analogy to Figure~\ref{fig:loss}, in Figure~\ref{fig:loss_smearing} we show the evolution of the MSE loss with the complexity of the fitted function, for several different values of $\sigma$: 0.01, 0.05, 0.10 and 0.20. The ``knee" structure is again evident, and the location of the knee depends slightly on the amount of applied smearing.
Since the exercises in this subsection include both errors due to the finite statistics and due to the detector resolution, it is instructive to look at the interplay of the two types of errors as a function of the number of events $N_{\rm events}$ in the training data, see Figure~\ref{fig:loss2}. When the detector effects are absent ($\sigma=0$, black line), the average loss improves as $N_{\rm events}$ increases, since statistical errors scale as $1/\sqrt{N_{\rm events}}$. On the other hand, the detector effects are not influenced by $N_{\rm events}$, and at some point will start to dominate the error budget. As a result, as illustrated in Figure~\ref{fig:loss2}, the MSE loss will start to deviate from the benchmark case of $\sigma=0$. The exact point where this deviation occurs depends on the size of the detector smearing parameter --- the larger the smearing, the earlier the loss saturates.
\subsubsection{Data generated with MadGraph}
\label{detector-madgraph}
Finally, we repeat the exercise from Section \ref{parton-madgraph} with the addition of calorimeter detector resolution typical of the ILC, $\delta E/E = 0.17/\sqrt{E}$ \cite{CALICE:2008kht,Bambade:2019fyw,Habermehl:2020njb,ILDConceptGroup:2020sfq}. The results are displayed in Table~\ref{tab:Gen_NLO_detector} and Figure~\ref{fig:lossMG5smeared}, which are the analogues of Table~\ref{tab:Gen_NLO} and Figure~\ref{fig:lossMG5} from Section~\ref{parton-madgraph}.
The results are as expected, based on what we have observed in the previous subsections. The corresponding predicted differential distribution is shown in the right panel of Figure~\ref{fig:distMG5}.
\begin{table}[t]
\centering
\renewcommand\arraystretch{1.8}
\scalebox{0.8}{
\begin{tabular}{||c|c|c|c||}
\hline
Complexity & Fitted function & MSE & Score \\
\hline
\hline
5 & $0.724/(x+0.015)$ & $6.82$ & $0.051$ \\[2mm]
7 & $18.07+ 98.855446x$ & $1.97$ & $0.623$ \\[2mm]
9 & $-11.72+\dfrac{2.42-0.057/x}{x}$ & $0.115$ & $1.419$ \\[2mm]
11 & $x-11.97+\dfrac{2.44-0.057/x}{x}$ &$0.114$ & $0.007$ \\[2mm]
13 &$2x-12.23+\dfrac{2.46-0.058/x}{x}$ & $0.112$ & $0.007$ \\[2mm]
15 &$-x-9.90+\dfrac{2.04+(-0.03-0.0004/x)/x}{x}$ & $0.091$ & $0.102$\\[2mm]
\hline\hline
\end{tabular} }
\caption{Results from the symbolic regression exercise performed in Section \ref{detector-madgraph} including detector effects in the training data.
\label{tab:Gen_NLO_detector}}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{plots/figure13.png}
\caption{
The same as Figure \ref{fig:lossMG5}, but adding the effects of the calorimeter resolution as in Section~\ref{detector-madgraph}.}\label{fig:lossMG5smeared}
\end{figure}
\section{Conclusions and outlook}
\label{sec:conclusions}
This study adds to the already wide range of applications of modern machine learning to event generation and simulation-based inference in collider phenomenology \cite{Butter:2022rso}. We demonstrated the use of symbolic regression for two common problems in high energy particle physics. First, in the case of kinematic or event variables which are defined through some kind of an algorithm, the symbolic regression produces analytical formulas whose accuracy is limited only by the desired functional complexity. In Section~\ref{sec:mt2} we showed how to do this in the example of the stransverse mass variable $M_{T2}$ --- we were able to rederive all known analytical formulas for $M_{T2}$ in certain special transverse momentum configurations. Second, the symbolic regression can also produce analytical formulas for certain kinematic distributions of interest, for which theoretical results are unknown or difficult to obtain. In fact, parametrizing the observed distributions in the data with analytical formulas is a standard task in many analyses which attempt to measure the background from data. In Section~\ref{sec:F} we demonstrated that this fit can be done either at the gen-level (Section~\ref{sec:F_MC}) or at the detector level (Section~\ref{sec:F_MC_detector}). Note that this last exercise yields a non-trivial result, which involves the convolution of the parton-level analytical result with the transfer function describing the detector. To the best of our knowledge, such analytical expressions are rarely discussed in the literature.
The work presented here can be extended in several directions. For example, the $M_{T2}$ concept can be readily applied to more complex event topologies, where one has several choices of designating parent and daughter particles, leading to a menagerie of different ``subsystem'' $M_{T2}$ variables \cite{Kawagoe:2004rz,Burns:2008va}. It would be interesting to see whether the symbolic regression can ``derive'' the correct answer for $M_{T2}$ in the general case, for which no analytical formula is known. One could also explore other modern techniques for symbolic regression that are adaptable to high dimensional data \cite{Arechiga2021,https://doi.org/10.48550/arxiv.2201.04600,https://doi.org/10.48550/arxiv.2204.10532,https://doi.org/10.48550/arxiv.2205.11798}.
\acknowledgements
We thank A.~Roman for collaboration in the early stages of this work. K. Kong and K. Matchev would like to thank the Aspen Center for Physics for hospitality during the completion of this work, supported in part by National Science Foundation grant PHY-1607611. This work is supported in part by the US Department of Energy under grants DE-SC0019474 and DE-SC0022148.
\section{Introduction}
\label{intro}
In this paper, we are interested in second-order optimality conditions for the following constrained vector optimization problem
\begin{align*}
& \text{min}\, f(x)\label{problem} \tag{VP}
\\
&\text{subject to}\ \ x\in Q_0:=\{x\in X\,:\, g(x)\leqq 0\},
\end{align*}
where $f:=(f_i)$, $i\in I:=\{1, \ldots, p\}$, and $g:=(g_j)$, $j\in J:=\{1, \ldots, m\}$ are vector-valued functions defined on a Banach space $X$.
{As a mainstream topic in the study of vector optimization problems, optimality conditions have attracted the attention of many researchers in the field of optimization due to their important applications in many disciplines, such as variational inequalities, equilibrium problems and fixed point problems; see, for example, \cite{Lee98,Lu18,mor06,Petrusel18,Wang16,XS,SXC1,SXC2,Qin}.}
It is well-known that if $f_i$, $g_j$ are differentiable at $\bar x\in Q_0$ and $\bar x$ is a local weak efficient solution of \eqref{problem}, then
there exist Lagrange multipliers $(\lambda, \mu)\in \mathbb{R}^p\times\mathbb{R}^m$ satisfying
\begin{align}
&\sum_{i=1}^p\lambda_i\nabla f_i(\bar x)+\sum_{j=1}^m\mu_j\nabla g_j(\bar x)=0,\label{equa_intro:1}
\\
&\mu=(\mu_1, \ldots, \mu_m)\geqq 0, \mu_jg_j(\bar x)=0,\label{equa_intro:2}
\\
&\lambda=(\lambda_1, \ldots, \lambda_p)\geqq 0, (\lambda, \mu)\neq 0;\label{equa_intro:3}
\end{align}
see \cite[Theorem 7.4]{Jahn04}. Conditions \eqref{equa_intro:1}--\eqref{equa_intro:3} are called the first-order F.-John necessary optimality conditions. If $\lambda$ is nonzero, then these conditions are called the first-order Karush--Kuhn--Tucker $(KKT)$ optimality conditions. By Motzkin's theorem of the alternative \cite[p.28]{Mangasarian69}, the existence of $KKT$ multipliers is equivalent to the inconsistency of the following system
\begin{align}
\nabla f_i(\bar x)(v)&<0, \ \ \ i\in I, \label{equa_intro:4}
\\
\nabla g_j(\bar x)(v)&\leqq 0, \ \ \ j\in J(\bar x),\label{equa_intro:5}
\end{align}
with unknown $v\in X$, where $J(\bar x)$ is the active index set at $\bar x$. Conditions \eqref{equa_intro:4}--\eqref{equa_intro:5} are called
the first-order $KKT$ necessary conditions in primal form.
The first-order $KKT$ optimality conditions are needed to find optimal solutions of constrained optimization problems. In order to obtain these optimality conditions, constraint qualifications and regularity conditions are indispensable; see, for example, \cite{Andreani11,Tuyen18,Tung-Luu,Luu-Mai,TungLT,Tuyen-Xiao-Son,Movahedian,Soleimani,Gunther}. We recall here that these assumptions are called constraint qualifications $(CQ)$ when they have to be fulfilled by the constraints of the problem, and they are called regularity conditions $(RC)$ when they have to be fulfilled by both the objectives and the constraints of the problem; see \cite{Rizvi12} for more details.
Second-order necessary optimality conditions play an important role in both
the theory and practice of constrained optimization problems. These conditions are used to eliminate nonoptimal KKT points of optimization problems. Moreover, the second-order optimality condition is a key tool of numerical analysis in proving convergence and deriving error
estimates for numerical discretizations of optimization problems; see, for example, \cite{Bertsekas99,Izmailov08,Nocedal99}.
One of the first investigations to obtain second-order optimality conditions of $KKT$-type for smooth vector optimization problems was carried out by Wang \cite{Wang91}. Then, by introducing a new second-order constraint qualification in the sense of Abadie, Aghezzaf et al. \cite{Aghezzaf99} extended Wang's results to the nonconvex case. Maeda \cite{Maeda04} was the first to propose an Abadie regularity condition and established second-order $KKT$ necessary optimality conditions for $C^{1,1}$ vector optimization
problems. By using the second-order directional derivatives and introducing a new second-order constraint qualification of Zangwill-type, Ivanov \cite{Ivanov15} introduced some optimality conditions for $C^1$ vector optimization problems with inequality constraints. Very recently, by proposing some types of the second-order Abadie regularity conditions, Huy et al. \cite{Huy162,Huy163} have obtained some second-order $KKT$ necessary optimality conditions for $C^{1,1}$ vector optimization problems in terms of second-order symmetric subdifferentials. For other contributions to second-order $KKT$ optimality conditions for vector optimization, the reader is invited to see the papers \cite{Elena,Ginchev08,Giorgi09,Ivanov152,Ivanov10,Luu17,Kim-Tuyen,Huy-Tuyen} with the references therein.
Our aim is to weaken the hypotheses of the optimality conditions in \cite{Aghezzaf99,Elena,Huy163,Ivanov15,Luu17,Maeda04,Wang91}. To obtain second-order $KKT$ necessary conditions, by using second-order upper generalized directional derivatives and second-order tangent sets, we introduce some second-order constraint qualifications of Zangwill type, Abadie type and Mangasarian-Fromovitz type as well as a regularity condition of Abadie type.
Our obtained results improve and generalize the corresponding results in \cite{Aghezzaf99,Elena,Huy163,Ivanov15,Luu17,Maeda04,Wang91}, because the objective functions and the active constraint functions are only assumed to be locally Lipschitz at the reference point and the required constraint qualifications are also weaker. Moreover, the connections between these proposed conditions are established.
The organization of the paper is as follows. In Section \ref{Preliminaries}, we recall some notations, definitions and preliminary material. Section \ref{Abadie_RC_sect} is devoted to the investigation of second-order constraint qualifications and regularity conditions in a nonsmooth setting for vector optimization problems. In Sections \ref{Second_order_optim_sect} and \ref{Strong_Second_order_optim_sect}, we establish some second-order necessary optimality conditions of $KKT$-type for a local (weak, Geoffrion properly) efficient solution of \eqref{problem}. Section \ref{conclusions_sect} draws some conclusions.
\section{Preliminaries}
\label{Preliminaries}
In this section, we recall some definitions and introduce basic results, which are useful in our study.
Let $\mathbb{R}^p$ be the $p$-dimensional Euclidean space. For $a, b\in\mathbb{R}^p$, by $a\leqq b$, we mean $a_i\leqq b_i$ for all $i\in I$; by $a\leq b$, we mean $a\leqq b$ and $a\neq b$; and by $a<b$, we mean $a_i<b_i$ for all $i\in I$.
We first recall the definition of local (weak, Geoffrion properly) efficient solutions for the considered problem \eqref{problem}. Note that the concept of properly efficient solution was introduced to eliminate efficient solutions with unbounded trade-offs. This concept was introduced initially by Kuhn and Tucker \cite{Kuhn50} and refined thereafter by Geoffrion \cite{Geoffrion68}. Geoffrion's concept enjoys economic interpretations, while Kuhn and Tucker's one is useful for numerical and algorithmic purposes.
\begin{definition}{\rm Let $Q_0$ be the feasible set of \eqref{problem} and $\bar x\in Q_0$. We say that:
\begin{enumerate}[(i)]
\item $\bar x$ is {\em an efficient solution} (resp., {\em a weak efficient solution}) of \eqref{problem} iff there is no $x\in Q_0$ satisfying $f(x)\leq f(\bar x)$ (resp., $f(x)<f(\bar x)$).
\item $\bar x$ is a {\em Geoffrion properly efficient solution} of \eqref{problem} iff it is efficient and there exists $M>0$ such that, for each $i$,
$$\frac{f_i(x)-f_i(\bar x)}{f_j(\bar x)-f_j(x)}\leqq M,$$
for some $j$ such that $f_j(\bar x)<f_j(x)$, whenever $x\in Q_0$ and $f_i(\bar x)>f_i(x)$.
\item $\bar x$ is a {\em local efficient solution} (resp., {\em local weak efficient solution, local Geoffrion properly efficient solution}) of \eqref{problem} iff it is an efficient solution (resp., weak efficient solution, Geoffrion properly efficient solution) in $U\cap Q_0$, where $U$ is some neighborhood of $\bar x$.
\end{enumerate}
}
\end{definition}
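The following standard one-dimensional example illustrates the gap between efficiency and Geoffrion proper efficiency. Let $X=\mathbb{R}$, $f(x)=(x^2, (1-x)^2)$ and $g\equiv 0$, so that $Q_0=\mathbb{R}$. Then $\bar x=0$ is an efficient solution, but for $x>0$ tending to $0$ we have $f_2(\bar x)>f_2(x)$, $f_1(\bar x)<f_1(x)$ and
$$\frac{f_2(x)-f_2(\bar x)}{f_1(\bar x)-f_1(x)}=\frac{(1-x)^2-1}{-x^2}=\frac{2-x}{x}\to+\infty,$$
so the trade-off between the two objectives is unbounded and $\bar x$ is not a Geoffrion properly efficient solution.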
Hereafter, we assume that $X$ is a Banach space equipped with the norm $\|\cdot\|$. Let $\Omega$ be a nonempty subset in $X$. The {\it closure}, {\it convex hull} and {\it conic hull} of $\Omega$ are denoted by $\mbox{cl}\,\Omega$,
$\mbox{conv}\,\Omega$ and $\mbox{cone}\,\Omega$, respectively.
\begin{definition}{\rm Let $\bar x\in \Omega$ and $u\in X$.
\begin{enumerate}[(i)]
\item The {\em tangent cone} to $\Omega$ at $\bar x\in \Omega$ is defined by
$$T(\Omega; \bar x):=\{d\in X\,:\,\exists t_k\downarrow 0, \exists d^k\to d, \bar x+t_kd^k\in \Omega, \ \ \forall k\in \mathbb{N}\}.$$
\item The {\em second-order tangent set} to $\Omega$ at $\bar x$ with respect to the direction $u$ is defined by
$$T^2(\Omega; \bar x, u):=\left\{v\in X:\exists t_k\downarrow 0, \exists v^k\to v, \bar x+t_ku+\frac12t_k^2v^k\in \Omega,\ \ \forall k\in \mathbb{N}\right\}.$$
\end{enumerate}
}
\end{definition}
Clearly, $T(\,\cdot\,; \bar x)$ and $T^2(\,\cdot\,; \bar x, u)$ are isotone, i.e., if $\Omega^1\subset \Omega^2$, then
\begin{align*}
T(\Omega^1; \bar x)&\subset T(\Omega^2; \bar x),
\\
T^2(\Omega^1; \bar x, u)&\subset T^2(\Omega^2; \bar x, u).
\end{align*}
It is well-known that $T(\Omega; \bar x)$ is a nonempty closed cone. For each $u\in X$, the set $T^2(\Omega; \bar x, u)$ is closed, but may be empty. However, we see that the set $T^2(\Omega; \bar x, 0)=T(\Omega; \bar x)$ is always nonempty.
Let $F\colon X\to\mathbb{R}$ be a real-valued function defined on $X$ and $\bar x\in X$. The function $F$ is said to be {\em locally Lipschitz} at $\bar x$ iff there exist a neighborhood $U$ of $\bar x$ and $L\geqq 0$ such that
\begin{equation*}
|F(x)-F(y)|\leqq L\|x-y\|,\ \ \ \forall x, y\in U.
\end{equation*}
\begin{definition}{\rm Assume that $F\colon X\to\mathbb{R}$ is locally Lipschitz at $\bar x\in X$. Then:
\begin{enumerate}[(i)]
\item (See \cite{Clarke83}) The {\em Clarke's generalized derivative} of $F$ at $\bar x$ is defined by
\begin{equation*}
F^{\circ} (\bar x, u):= \limsup\limits_{\mathop {x \to \bar x}\limits_{t \downarrow 0} } \dfrac{F(x+tu)-F(x)}{t}, \ \ \ u\in X.
\end{equation*}
\item (See \cite{Pales94}) The {\em second-order upper generalized directional derivative} of $F$ at $\bar x$ is defined by
\begin{equation*}
F^{\circ\circ} (\bar x, u):= \limsup\limits_{\mathop {t \downarrow 0} }\dfrac{F(\bar x+tu)-F(\bar x)-tF^{\circ} (\bar x, u)}{\frac12t^2}, \ \ \ u\in X.
\end{equation*}
\end{enumerate}
}
\end{definition}
It is easily seen that $F^{\circ} (\bar x, 0)=0$ and $F^{\circ\circ} (\bar x, 0)=0$. Furthermore, the function $u\mapsto F^\circ(\bar x, u)$ is finite, positively homogeneous, and subadditive on $X$; see, for example, \cite{Clarke83,SX2,XS3}.
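For instance, for $F(x)=|x|$ on $X=\mathbb{R}$ and $\bar x=0$, a direct computation gives
\begin{align*}
F^{\circ}(0, u) &= \limsup\limits_{\mathop {x \to 0}\limits_{t \downarrow 0}}\frac{|x+tu|-|x|}{t}=|u|,
\\
F^{\circ\circ}(0, u) &= \limsup\limits_{t \downarrow 0}\frac{|tu|-t|u|}{\frac12 t^2}=0,
\end{align*}
so $F^{\circ}(0,\cdot)$ coincides with the usual directional derivative of the convex function $|\cdot|$ at its kink, while the second-order upper generalized directional derivative vanishes there.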
The following lemmas will be useful in our study.
\begin{lemma}\label{lemma1} Suppose that $F\colon X\to\mathbb{R}$ is locally Lipschitz at $\bar x\in X$. Let $u\in X$ and let $\{(t_k, u^k)\}$ be a sequence converging to $(0^+, u)$. If
\begin{equation*}
F \left(\bar x+t_ku^k\right)\geqq F(\bar x) \ \ \mbox{for all} \ \ k\in\mathbb{N},
\end{equation*}
then $F^{\circ} (\bar x, u)\geqq 0.$
\end{lemma}
\begin{proof} Since $F$ is locally Lipschitz at $\bar x$ and $$\lim\limits_{k\to\infty} (\bar x+t_ku^k)=\lim\limits_{k\to\infty} (\bar x+t_ku)=\bar x,$$
there exist $L\geqq0$ and $k_0\in\mathbb{N}$ such that
$$|F(\bar x+t_ku^k)-F(\bar x+t_ku)|\leqq L t_k\|u^k-u\|\ \ \text{for all}\ \ k\geqq k_0.$$
Thus,
\begin{align*}
0&\leqq F(\bar x+t_ku^k) -F(\bar x)
\\
&= [F(\bar x+t_ku^k)-F(\bar x+t_ku)]+[F(\bar x+t_ku)-F(\bar x)]
\\
&\leqq Lt_k\|u^k-u\| +F(\bar x+t_ku)-F(\bar x)
\end{align*}
for all $k\geqq k_0$. This implies that
\begin{align*}
0&\leqq \lim_{k\to \infty} L \|u^k-u\|+\limsup_{k\to \infty} \dfrac{F(\bar x+t_ku)-F(\bar x)}{t_k}
\\
&\leqq \limsup\limits_{\mathop {x \to \bar x}\limits_{t \downarrow 0} } \dfrac{F(x+tu)-F(x)}{t}.
\end{align*}
Therefore, $F^{\circ} (\bar x, u)\geqq 0$, as required.
\end{proof}
\begin{lemma}\label{lemma2} Suppose that $F\colon X\to\mathbb{R}$ is locally Lipschitz at $\bar x\in X$. Let $(u, v)$ be a vector in $X\times X$ and let $\{(t_k, v^k)\}$ be a sequence converging to $(0^+, v)$ satisfying
\begin{equation*}
F \left(\bar x+t_ku+\frac12 t^2_kv^k\right)\geqq F(\bar x) \ \ \mbox{for all} \ \ k\in\mathbb{N}.
\end{equation*}
If $F^{\circ} (\bar x, u)= 0$, then $F^{\circ} (\bar x, v)+F^{\circ\circ} (\bar x, u)\geqq 0.$
\end{lemma}
\begin{proof} For each $k\in \mathbb{N}$, put $x^k:= \bar x+t_ku+\frac12 t^2_kv^k$ and $y^k:=\bar x+t_ku+\frac12 t^2_kv$. Since $F$ is locally Lipschitz at $\bar x$ and
$$\lim\limits_{k\to\infty} x^k=\lim\limits_{k\to\infty} y^k=\bar x,$$
there exist $L\geqq 0$ and $k_0\in\mathbb{N}$ such that
\begin{equation*}
|F(x^k)-F\left(y^k\right)|\leqq \frac12t^2_kL\|v^k-v\|\ \ \text{for all}\ \ k\geqq k_0.
\end{equation*}
Thus,
\begin{align*}
0&\leqq F(x^k) - F(\bar x)
\\
&= [F(x^k)-F(y^k)]+[F(y^k) - F(\bar x+t_ku)]
\\
&+ [F(\bar x+t_ku)-F(\bar x)-t_kF^{\circ}(\bar x, u)]
\\
&\leqq \frac12t^2_kL\|v^k-v\|+[F(y^k) - F(\bar x+t_ku)] + [F(\bar x+t_ku)-F(\bar x)-t_kF^{\circ}(\bar x, u)]
\end{align*}
for all $k\geqq k_0$. This implies that
\begin{align*}
0&\leqq \lim\limits_{k\to\infty} L\|v^k-v\| + \limsup\limits_{k\to\infty}\dfrac{F(\bar x+t_ku+\frac12 t^2_kv) - F(\bar x+t_ku)}{\frac12t^2_k}
\\
&+\limsup\limits_{k\to\infty}\dfrac{F(\bar x+t_ku)-F(\bar x)-t_kF^{\circ}(\bar x, u)}{\frac12t^2_k}
\\
&\leqq\limsup\limits_{\mathop {x \to \bar x}\limits_{t \downarrow 0} }\dfrac{F(x+tv) - F(x)}{t} + \limsup\limits_{t\downarrow 0}\dfrac{F(\bar x+tu)-F(\bar x)-tF^{\circ}(\bar x, u)}{\frac12t^2}
\\
&= F^{\circ}(\bar x, v) + F^{\circ\circ} (\bar x, u).
\end{align*}
Therefore, $F^{\circ}(\bar x, v) + F^{\circ\circ} (\bar x, u)\geqq 0$. The proof is complete.
\end{proof}
\section{Second-order constraint qualification and regularity condition}
\label{Abadie_RC_sect}
From now on, we consider problem \eqref{problem} under the following assumptions:
\begin{equation*}
\begin{cases}
\text{The functions}\ \ f_i, i\in I, g_j, j\in J(\bar x), \ \ \text{are locally Lipschitz at} \ \ \bar x,
\\
\text{The functions}\ \ g_j, j\in J\setminus J(\bar x),\ \ \text{are continuous at}\ \ \bar x,
\end{cases}
\end{equation*}
where $\bar x$ is a feasible point of \eqref{problem} and $J(\bar x)$ is the {\em active index set} at $\bar x$, that is,
$$J(\bar x):=\{j\in J\,:\,g_j(\bar x)=0\}.$$
For any vectors $a=(a_1, a_2)$ and $b=(b_1, b_2)$ in $\mathbb{R}^2$, we denote the lexicographic order by
\begin{align*}
a&\leqq_{\rm lex} b,\ \ {\rm iff} \ \ a_1<b_1\ \ {\rm or} \ \ (a_1=b_1\ \ {\rm and }\ \ a_2\leqq b_2),
\\
a&<_{\rm lex} b,\ \ {\rm iff} \ \ a_1<b_1\ \ {\rm or} \ \ (a_1=b_1\ \ {\rm and }\ \ a_2< b_2).
\end{align*}
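For instance, $(-1, 7)<_{\rm lex} (0,0)$ and $(0, -3)<_{\rm lex} (0,0)$, while $(0, 0)\leqq_{\rm lex} (0,0)$ but $(0,0)\not<_{\rm lex}(0,0)$.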
Let us introduce some notations which are used in the sequel. For each $\bar x\in Q_0$ and $u\in X$, put
\begin{align*}
&Q:=Q_0\cap \{x\in X\,:\, f_i(x)\leqq f_i(\bar x), \ \ i\in I\},
\\
&J(\bar x; u):=\{j\in J(\bar x)\,:\, g_j^{\circ}(\bar{x}, u)=0\},
\\
&I(\bar x; u):=\{i\in I\,:\,f_i^{\circ}(\bar{x}, u)=0\}.
\end{align*}
We say that $u$ is a {\em critical direction} of \eqref{problem} at $\bar x$ iff
\begin{align*}
f_i^{\circ}(\bar{x}, u)&\leqq 0, \ \ \ \forall i\in I,
\\
f_i^{\circ}(\bar{x}, u)&= 0, \ \ \ \mbox{at least one} \ \ i\in I,
\\
g_j^{\circ}(\bar{x}, u)&\leqq 0, \ \ \ \forall j\in J(\bar x).
\end{align*}
The set of all critical directions of \eqref{problem} at $\bar x$ is denoted by $\mathcal{C}(\bar x)$. Obviously, $0\in\mathcal{C}(\bar x)$.
We now use the following second-order approximation sets for $Q$ and $Q_0$ to introduce second-order constraint qualifications and regularity condition. For each $\bar x\in Q_0$ and $u\in X$, set
\begin{eqnarray*}
L^2(Q; \bar x, u)&&:=\bigg\{v\in X\, :\, F^2_i(\bar{x}; u, v)\leqq_{\rm lex} (0, 0), \ \ i\in I
\\
&& \qquad\ \ \,\,\text{and} \ \ G^2_j(\bar{x}; u, v)\leqq_{\rm lex} (0,0),\ \ j\in J(\bar x)\bigg\},
\\
L^2(Q_0; \bar x, u)&&:=\bigg\{v\in X\, :\, G^2_j(\bar{x}; u, v)\leqq_{\rm lex} (0,0),\ \ j\in J(\bar x)\bigg\},
\\
L_0^2(Q_0; \bar x, u)&&:=\bigg\{v\in X\, :\, G^2_j(\bar{x}; u, v)<_{\rm lex} (0,0),\ \ j\in J(\bar x)\bigg\},
\end{eqnarray*}
\begin{eqnarray*}
A(\bar x; u)&&:=\bigg\{v\in X\,:\, \forall j\in J(\bar x; u)\,\,\exists \delta_j>0 \ \ \mbox{with}\ \ g_j\bigg(\bar x+tu+\frac12t^2v\bigg)\leqq 0
\\
&& \qquad \qquad\qquad \qquad\qquad \qquad\qquad \qquad \qquad \qquad\qquad \qquad\,\forall t\in (0,\delta_j)\bigg\},
\\
B(\bar x; u)&&:=\bigg\{v\in X\,:\, g_j^{\circ}(\bar{x}, v)+g_j^{\circ\circ}(\bar{x}, u) \leqq 0, \ \ \forall j\in J(\bar x; u)\bigg\},
\end{eqnarray*}
where
\begin{align*}
F^2_i(\bar{x}; u, v)&:= \left(f_i^{\circ}(\bar{x}, u), f_i^{\circ}(\bar{x}, v)+f_i^{\circ\circ}(\bar{x}, u) \right), \ \ i\in I, v\in X,
\\
G^2_j(\bar{x}; u,v)&:=\left(g_j^{\circ}(\bar{x}, u), g_j^{\circ}(\bar{x}, v)+g_j^{\circ\circ}(\bar{x}, u) \right), \ \ j\in J(\bar x), v\in X.
\end{align*}
For brevity, we denote $L(Q; \bar x):=L^2(Q; \bar x, 0)$. It is easily seen that, for each $u\in\mathcal{C}(\bar x)$, we have
\begin{equation*}
L_0^2(Q_0; \bar x, u)=\bigg\{v\in X\,:\, g_j^\circ(\bar x, v)+g_j^{\circ\circ}(\bar x, u)<0, \ \ j\in J(\bar x; u)\bigg\}.
\end{equation*}
\begin{definition}\label{def-SOCQ}{\rm
Let $\bar x\in Q_0$ and $u\in X$. We say that:
\begin{enumerate} [(i)]
\item The {\em Zangwill second-order constraint qualification} holds at $\bar x$ for the direction $u$ iff
\[
B(\bar x; u)\subset \text{cl}\, A(\bar x; u). \tag{$ZSCQ$}\label{SACQ_1}
\]
\item The {\em Abadie second-order constraint qualification} holds at $\bar x$ for the direction $u$ iff
\[
L^{2}(Q_0; \bar x, u)\subset T^2(Q_0; \bar x, u). \tag{$ASCQ$}\label{SACQ_2}
\]
\item The {\em Mangasarian--Fromovitz second-order constraint qualification} holds at $\bar x$ for the direction $u$ iff
\[ L_0^2(Q_0; \bar x, u) \neq \emptyset.\tag{$MFSCQ$} \label{MFSCQ}
\]
\item The {\em weak Abadie second-order regularity condition} holds at $\bar x$ for the direction $u$ iff
\[
L^{2}(Q; \bar x, u)\subset T^2(Q_0; \bar x, u). \tag{$WASRC$}\label{SACQ_3}
\]
\end{enumerate}
}
\end{definition}
The \eqref{SACQ_1} type was first introduced by Ivanov \cite[Definition 3.2]{Ivanov15} for $C^1$ functions. The \eqref{SACQ_2} type was proposed by Aghezzaf and Hachimi for \eqref{problem} with $C^2$ data; see \cite[p.40]{Aghezzaf99}. The \eqref{MFSCQ} type was first introduced in \cite{Ben-Tal80} for $C^2$ scalar optimization problems. The \eqref{SACQ_3} type was used for $C^{1,1}$ vector optimization problems in \cite{Huy163}. For problems with only locally Lipschitz active constraints and objective functions, these conditions are new.
\begin{definition}\label{def3.2}{\rm Let $\bar x\in Q_0$. We say that the {\em Zangwill constraint qualification $(ZCQ)$} (resp., {\em Abadie constraint qualification $(ACQ)$}, {\em Mangasarian--Fromovitz constraint qualification $(MFCQ)$}, {\em weak Abadie regularity condition} $(WARC)$) holds at $\bar x$ iff the \eqref{SACQ_1} (resp., \eqref{SACQ_2}, \eqref{MFSCQ}, \eqref{SACQ_3}) holds at $\bar x$ for the direction $0$.
}
\end{definition}
The following result shows that the \eqref{SACQ_3} is weaker than other constraint qualification conditions in Definition \ref{def-SOCQ}.
\begin{proposition}\label{relations-CQ} Let $\bar x\in Q_0$ and $u\in X$. Then the following implications hold:
\begin{enumerate}[\rm(i)]
\item $ (\, B(\bar x; u)\subset {\rm cl}\, A(\bar x; u) \, ) $ $\Rightarrow$ $ (\,L^{2}(Q_0; \bar x, u)\subset T^2(Q_0; \bar x, u)\, )$ $\Rightarrow$\\ $ (\,L^{2}(Q; \bar x, u)\subset T^2(Q_0; \bar x, u)\, ), \,\,i.e.,$ $$\eqref{SACQ_1}\Rightarrow\eqref{SACQ_2}\Rightarrow\eqref{SACQ_3}.$$
\item $ (\,L_0^2(Q_0; \bar x, u) \neq \emptyset \, ) \Rightarrow (\, L^{2}(Q_0; \bar x, u)\subset T^2(Q_0; \bar x, u)\, ),\,\,i.e.,$
$$\eqref{MFSCQ}\Rightarrow\eqref{SACQ_2}.$$
\item
$(\,L_0^2(Q_0; \bar x, 0) \neq \emptyset \, )\,\, \Rightarrow\, (\,L_0^2(Q_0; \bar x, u) \neq \emptyset,\,\,\forall\, u\,\in\mathcal{C}(\bar x)\, ).$
\end{enumerate}
\end{proposition}
\begin{proof} (i) Clearly, $L^2(Q; \bar x, u) \subset L^2(Q_0; \bar x, u)$. Thus the second implication of (i) is trivial. We now assume that the \eqref{SACQ_1} holds at $\bar x$ for the direction $u\in X$. Fix $v\in L^{2}(Q_0; \bar x, u)$. Then,
\begin{equation*}
G^2_j(\bar{x}; u, v)\leqq_{\rm lex} (0,0),\ \ \forall j\in J(\bar x).
\end{equation*}
This implies that
\begin{align*}
g_j^{\circ}(\bar{x}, u)&\leqq 0, \ \ \forall j\in J(\bar x),
\\
g_j^{\circ}(\bar{x}, v)+g_j^{\circ\circ}(\bar{x}, u) &\leqq 0, \ \ \forall j\in J(\bar x; u).
\end{align*}
Thus, $v\in B(\bar x; u)$. Since the \eqref{SACQ_1} holds at $\bar x$ for the direction $u$, we have $v\in \mathrm{cl}\, A(\bar x; u)$. Thus there exists a sequence $\{v^k\}\subset A(\bar x; u)$ converging to $v$. Let $\{t_h\}$ be an arbitrary positive sequence converging to $0$. We claim that there is a subsequence $\{t_{h_k}\}\subset \{t_h\}$ such that
\begin{equation*}
\bar x+t_{h_k}u+\frac12t^2_{h_k}v^k\in Q_0, \ \ \forall k\in \mathbb{N}.
\end{equation*}
We will prove this claim by induction on $k$.
For $k=1$, let $\{x^h\}$ be the sequence defined by
$$x^h:=\bar x+t_{h}u+\frac12t^2_{h}v^1\ \ \ \text{for all}\ \ h\in \mathbb{N}.$$
Let us consider the following possible cases for $j\in J$.
{\bf Case 1.} $j\notin J(\bar x)$. This means that $g_j(\bar x)<0$. Since $g_j$ is continuous at $\bar x$ and $\lim\limits_{h\to\infty} x^h=\bar x$, there is $H_1\in\mathbb{N}$ such that $g_j\left(x^h\right)<0$ for all $h\geqq H_1$.
{\bf Case 2.} $j\in J(\bar x)\setminus J(\bar x; u)$. This means that
$g_j(\bar x)=0$ and $g_j^{\circ} (\bar x, u)<0$. We claim that there exists $H_2\in\mathbb{N}$ such that $g_j\left(x^h\right)<0$ for all $h\geqq H_2$.
Indeed, if otherwise, there is a subsequence $\{t_{h_l}\}\subset \{t_h\}$ satisfying
$$g_j\left(\bar x+t_{h_l}u+\frac12t^2_{h_l}v^1\right)\geqq g_j(\bar x)=0, \ \ \forall l\in\mathbb{N},$$
or, equivalently,
\begin{equation*}
g_j\left(\bar x+t_{h_l}\left(u+\frac12t_{h_l}v^1\right)\right)\geqq g_j(\bar x), \ \ \forall l\in\mathbb{N}.
\end{equation*}
Clearly, $\lim\limits_{l\to\infty}\left(u+\frac12t_{h_l}v^1\right)=u$. By Lemma \ref{lemma1}, $g_j^{\circ}(\bar x, u)\geqq 0$, which contradicts the fact that $g_j^{\circ} (\bar x, u)<0$.
{\bf Case 3.} $j\in J(\bar x; u)$. Since $v^1\in A(\bar x; u)$ and $j\in J(\bar x; u)$, there exists $\delta_j>0$ such that
$$g_j\left(\bar x+tu+\frac12t^2v^1\right)\leqq 0, \ \ \forall t\in (0, \delta_j).$$
From $\lim\limits_{h\to\infty}t_h=0$ it follows that there is $H_3\in\mathbb{N}$ such that $t_h\in (0, \delta_j)$ for all $h\geqq H_3$. Thus, $g_j\left(x^h\right)\leqq 0$ for all $h\geqq H_3$.
Put $h_1:=\max\{H_1, H_2, H_3\}$ (taking the maximum of the corresponding indices over $j\in J$, which is possible since $J$ is finite). Then, we have $g_j\left(x^h\right)\leqq 0$ for all $h\geqq h_1$ and $j\in J$. This implies that
$$\bar x+t_{h}u+\frac12t^2_{h}v^1\in Q_0 \ \ \forall h\geqq h_1.$$
Thus, by induction on $k$, there exists a subsequence $\{t_{h_k}\}\subset \{t_h\}$ such that
\begin{equation*}
\bar x+t_{h_k}u+\frac12t^2_{h_k}v^k\in Q_0, \ \ \forall k\in \mathbb{N}.
\end{equation*}
From this, $\lim\limits_{k\to\infty}t_{h_k}=0$, and $\lim\limits_{k\to\infty} v^k=v$, it follows that $v\in T^2(Q_0; \bar x, u)$. Since $v$ is arbitrary in $L^2(Q_0; \bar x, u)$, we have
$$L^2(Q_0; \bar x, u)\subset T^2(Q_0; \bar x, u).$$
Thus the \eqref{SACQ_2} holds at $\bar x$ for the direction $u$.
(ii) Assume now that the \eqref{MFSCQ} holds at $\bar x$ for the direction $u\in X$, and let $v^0\in L^2_0(Q_0; \bar x, u)$. Fix $v\in L^2(Q_0; \bar x, u)$. Then,
\begin{align*}
g_j^{\circ}(\bar{x}, u)&\leqq 0, \ \ \forall j\in J(\bar x),
\\
g_j^{\circ}(\bar{x}, v)+g_j^{\circ\circ}(\bar{x}, u) &\leqq 0, \ \ \forall j\in J(\bar x; u).
\end{align*}
Let $\{s_k\}$ and $\{t_h\}$ be any positive sequences converging to zero. For each $k\in\mathbb{N}$, put $v^k:=s_kv^0+(1-s_k)v$. Then, $\lim\limits_{k\to\infty}v^k=v$. We claim that there exists a subsequence $\{t_{h_k}\}$ of $\{t_h\}$ such that
\begin{equation*}
\bar x+t_{h_k}u+\frac12t^2_{h_k}v^k\in Q_0, \ \ \forall k\in \mathbb{N}.
\end{equation*}
Granting this claim, we get $v\in T^2(Q_0; \bar x, u)$, and therefore the \eqref{SACQ_2} holds.
Indeed, for $k=1$, we have $v^1=s_1v^0+(1-s_1)v$. Fix $j\in J$. If $j\in J\setminus J(\bar x; u)$, then, arguing as in Cases 1 and 2 of the proof of assertion (i), there exists $H_1\in\mathbb{N}$ such that
\begin{equation*}
g_j\left(x^h\right)<0, \ \ \forall h\geqq H_1,
\end{equation*}
where $x^h:=\bar x+t_{h}u+\frac12t^2_{h}v^1$. If $j\in J(\bar x; u)$, then
\begin{equation*}
g_j^{\circ}(\bar{x}, v^0)+g_j^{\circ\circ}(\bar{x}, u) < 0.
\end{equation*}
Hence,
\begin{align*}
g_j^{\circ}(\bar{x}, v^1)+g_j^{\circ\circ}(\bar{x}, u) &\leqq s_1g_j^{\circ}(\bar{x}, v^0)+(1-s_1)g_j^{\circ}(\bar{x}, v)+g_j^{\circ\circ}(\bar{x}, u)
\\
&= s_1[g_j^{\circ}(\bar{x}, v^0)+g_j^{\circ\circ}(\bar{x}, u)]+(1-s_1)[g_j^{\circ}(\bar{x}, v)+g_j^{\circ\circ}(\bar{x}, u)]
\\
&<0.
\end{align*}
Thus,
\begin{align*}
\limsup_{h\to\infty}\frac{g_j(x^h)}{\frac{1}{2}t_h^2}&=\limsup_{h\to\infty}\frac{g_j(x^h)-g_j(\bar x)-t_hg_j^\circ(\bar x, u)}{\frac{1}{2}t_h^2}
\\
&\leqq \limsup_{h\to\infty}\frac{g_j((\bar x+t_hu)+\frac{1}{2}t_h^2v^1)-g_j(\bar x+t_hu)}{\frac{1}{2}t_h^2}
\\
&+\limsup_{h\to\infty}\frac{g_j(\bar x+t_hu)-g_j(\bar x)-t_hg_j^\circ(\bar x, u)}{\frac{1}{2}t_h^2}
\\
&\leqq g_j^\circ(\bar x, v^1)+g_j^{\circ\circ}(\bar x, u)
\\
&<0.
\end{align*}
This implies that there exists $H_2\in\mathbb{N}$ such that $g_j(x^h)<0$ for all $h\geqq H_2$. Put $h_1:=\max\{H_1, H_2\}$. Then we have $g_j(x^h)<0$ for all $h\geqq h_1$ and $j\in J$. Thus,
$$\bar x+t_hu+\frac{1}{2}t_h^2v^1\in Q_0\ \ \forall h\geqq h_1,$$
and the assertion follows by induction on $k$.
(iii) Assume that there exists $v^0\in L_0^2(Q_0; \bar x, 0)$. Then $g_j^\circ(\bar x, v^0)<0$ for all $j\in J(\bar x)$. Let $u\,\in\mathcal{C}(\bar x)$. For each $t>0$, put $v(t):=u+tv^0$. We claim that there exists $t>0$ such that $v(t)\in L_0^2(Q_0; \bar x, u)$. Indeed, for each $j\in J(\bar x; u)$, one has
\begin{align*}
g^\circ_j(\bar x, v(t))+g_j^{\circ\circ}(\bar x, u)&\leqq g^\circ_j(\bar x, u)+tg_j^\circ(\bar x, v^0)+g_j^{\circ\circ}(\bar x, u)
\\
&=tg_j^\circ(\bar x, v^0)+g_j^{\circ\circ}(\bar x, u)
\\
&<0
\end{align*}
for $t$ large enough. This implies that $v(t)\in L_0^2(Q_0; \bar x, u)$ for $t$ large enough, as required.
\end{proof}
The relations between second-order constraint qualifications are summarized in Figure \ref{Fig1}.
\begin{center}
\begin{figure}[htp]
\begin{center}
\includegraphics[height=4cm,width=7cm]{Fig11}
\end{center}
\caption{Relations between second-order constraint qualifications}
\label{Fig1}
\end{figure}
\end{center}
\begin{remark}
{\rm
The forthcoming Examples~\ref{ex4.1} and \ref{ex4.2} show that $\eqref{SACQ_3} \not\Rightarrow\eqref{SACQ_1}$ and $\eqref{SACQ_3} \not\Rightarrow\eqref{MFSCQ}$.
}
\end{remark}
For the remainder of this paper, we apply the \eqref{SACQ_3} to establish some second-order $KKT$ necessary optimality conditions for efficient solutions of \eqref{problem}. We point out that, by Proposition~\ref{relations-CQ}, these results remain valid when the \eqref{SACQ_3} is replaced by any of \eqref{SACQ_1}, \eqref{SACQ_2} and \eqref{MFSCQ}.
\section{Second-order optimality conditions for efficient solutions}
\label{Second_order_optim_sect}
In this section, we apply the \eqref{SACQ_3} to establish some second-order $KKT$ necessary optimality conditions in primal form for local (weak) efficient solutions of \eqref{problem}.
The following theorem gives a first-order necessary optimality condition for \eqref{problem} under the regularity condition $(WARC)$.
\begin{theorem}\label{first_order_nec}
If $\bar x\in Q_0$ is a local {\rm(}weak{\rm)} efficient solution of \eqref{problem} and $(WARC)$ holds at $\bar x$, then the system
\begin{align}
f_i^{\circ} (\bar x, u)&<0, \ \ i\in I, \label{equa:3}
\\
g_j^{\circ}(\bar x, u)&\leqq 0, \ \ j\in J(\bar x),\label{equa:4}
\end{align}
has no solution $u\in X$.
\end{theorem}
\begin{proof}
Arguing by contradiction, assume that there exists $u\in X$ satisfying conditions \eqref{equa:3} and \eqref{equa:4}. This implies that $u\in L(Q; \bar x)$. Since the $(WARC)$ holds at $\bar x$, one has
$$L(Q; \bar x)\subset T(Q_0; \bar x).$$
Consequently, $u\in T(Q_0; \bar x)$. Thus there exist $t_k\to 0^+$ and $u^k\to u$ such that
$$\bar x+t_ku^k\in Q_0$$
for all $k\in\mathbb{N}$. We claim that, for each $i\in I$, there exists $K_i\in\mathbb{N}$ satisfying
$$f_i(\bar x+t_ku^k)<f_i(\bar x), \ \ \forall k\geqq K_i.$$
Indeed, if otherwise, there exist $i\in I$ and a sequence $\{k_l\}\subset \mathbb{N}$ such that
$$f_i(\bar x+t_{k_l}u^{k_l})\geqq f_i(\bar x), \ \ \forall l\in\mathbb{N}. $$
By Lemma \ref{lemma1}, we have $f_i^{\circ}(\bar x, u)\geqq 0$, contrary to \eqref{equa:3}.
Put $K_0:=\max\{K_i\,:\, i\in I\}$. Then,
$$f_i(\bar x+t_ku^k)<f_i(\bar x)$$
for all $k\geqq K_0$ and $i\in I$, which contradicts the hypothesis of the theorem.
\end{proof}
\begin{remark}{\rm
\begin{enumerate}[(i)]
\item Recently, Gupta et al. \cite[Theorem 3.1]{Gupta2017} showed that {\em``If $\bar x$ is an efficient solution of \eqref{problem}, $X=\mathbb{R}^n$, for each $i\in I$, $f_i$ is $\partial^c$-quasiconcave at $\bar x$, and there exists $i\in I$ such that
\begin{equation}\label{Gupta-condition}
L(M^i; \bar x)\subset T(M^i; \bar x),
\end{equation}
where
\begin{align*}
M^i&:=\{x\in Q_0\;:\; f_i(x)\leqq f_i(\bar x)\},
\\
L(M^i; \bar x)&:=\{u\in X\;:\; f^{\circ}_i(\bar x; u)\leqq 0, g^{\circ}_j(\bar x; u)\leqq 0, j\in J(\bar x)\},
\end{align*}
then the system \eqref{equa:3}--\eqref{equa:4} has no solution''}.
Clearly,
\begin{align*}
T(M^i; \bar x)&\subset T(Q_0; \bar x),
\\
L(Q; \bar x)&\subset L(M^i; \bar x).
\end{align*}
This implies that if condition \eqref{Gupta-condition} holds at $\bar x$, then so does the $(WARC)$. Thus, Theorem \ref{first_order_nec} improves \cite[Theorem 3.1]{Gupta2017}. We note here that the assumption that $f_i$ is $\partial^c$-quasiconcave at $\bar x$ is not necessary in our result.
\item Theorem \ref{first_order_nec} also improves \cite[Theorem 3.3]{Gupta2017}. Theorem 3.3 in \cite{Gupta2017} is as follows: {\em ``If $\bar x$ is a weak efficient solution of \eqref{problem}, $X=\mathbb{R}^n$, $Q_0$ is convex, for each $i\in I$, $f_i$ is $\partial^c$-quasiconcave at $\bar x$, and there exists $i\in I$ such that
\begin{equation}\label{Gupta-condition-ii}
L(M^i; \bar x)\subset \mathrm{cl}\,\mathrm{conv}\, T(M^i; \bar x),
\end{equation}
then the system \eqref{equa:3}--\eqref{equa:4} has no solution''}.
Since $T(M^i; \bar x)\subset T(Q_0; \bar x)$ and $Q_0$ is a closed convex set, we have
$$\mathrm{cl}\,\mathrm{conv}\, T(M^i; \bar x) \subset T(Q_0; \bar x).$$
This implies that the $(WARC)$ is weaker than condition \eqref{Gupta-condition-ii}, and so Theorem \ref{first_order_nec} sharpens \cite[Theorem 3.3]{Gupta2017}. We would like to remark that our result does not require any convexity assumptions.
\end{enumerate}
}
\end{remark}
Now we are ready to present our second-order $KKT$ necessary optimality conditions for local (weak) efficient solutions of \eqref{problem} under the \eqref{SACQ_3}.
\begin{theorem}\label{nec_condition_weak_eff} Let $\bar x$ be a local {\rm(}weak{\rm)} efficient solution of \eqref{problem}. Suppose that the \eqref{SACQ_3} holds at $\bar x$ for any critical direction. Then, the system
\begin{align}
F^2_i(\bar{x}; u, v)&<_{\rm lex} (0, 0),\ \ \ i\in I, \label{equa:5}
\\
G^2_j(\bar{x}; u, v)&\leqq_{\rm lex} (0, 0),\ \ \ j\in J(\bar x)\label{equa:6}
\end{align}
has no solution $(u, v)\in X\times X$.
\end{theorem}
\begin{proof} Arguing by contradiction, assume that there exists $(u, v)\in X\times X$ satisfying conditions \eqref{equa:5} and \eqref{equa:6}. It follows that $v\in L^2 (Q; \bar x, u)$ and
\begin{eqnarray*}
f_i^{\circ}(\bar x, u)&\leqq 0, \ \ \ &i\in I,
\\
g_j^{\circ}(\bar x, u)&\leqq 0, \ \ \ &j\in J (\bar x).
\end{eqnarray*}
Since the \eqref{SACQ_3} holds at $\bar x$, so does the $(WARC)$. By Theorem \ref{first_order_nec}, there exists $i\in I$ such that $f_i^{\circ}(\bar x, u)=0$. This means that $u$ is a critical direction of \eqref{problem} at $\bar x$. Since the \eqref{SACQ_3} holds at $\bar x$ for the critical direction $u$, we have
$$v\in T^2(Q_0; \bar x, u).$$
Thus there exist a sequence $\{v^k\}$ converging to $v$ and a positive sequence $\{t_k\}$ converging to $0$ such that
$$x^k:=\bar x+t_ku+\frac12t_k^2v^k\in Q_0,\ \ \ \forall k\in\mathbb{N}.$$
We claim that, for each $i\in I$, there exists $K_i\in \mathbb{N}$ such that
$$f_i(x^k)<f_i(\bar x)$$
for all $k\geqq K_i$. Indeed, if otherwise, there exist $i_0\in I$ and a sequence $\{k_l\}\subset \mathbb{N}$ satisfying
\begin{equation}\label{equa:7}
f_{i_0}\left(\bar x+t_{k_l}u+\frac12t^2_{k_l}v^{k_l}\right)\geqq f_{i_0}(\bar x), \ \ \forall l\in\mathbb{N}.
\end{equation}
We consider the following possible cases for $i_0$.
{\bf Case 1.} $i_0\in I(\bar x; u)$. This means that $f_{i_0}^{\circ}(\bar x, u)=0$. From \eqref{equa:5} it follows that
\begin{equation}\label{equa:8}
f_{i_0}^{\circ}(\bar x, v)+f_{i_0}^{\circ\circ}(\bar x, u)<0.
\end{equation}
From \eqref{equa:7}, $\lim\limits_{l\to\infty} t_{k_l}=0$, $\lim\limits_{l\to\infty} v^{k_l}=v$, and Lemma \ref{lemma2}, it follows that
$$f_{i_0}^{\circ}(\bar x, v)+f_{i_0}^{\circ\circ}(\bar x, u)\geqq 0,$$
contrary to \eqref{equa:8}.
{\bf Case 2.} $i_0\notin I(\bar x; u)$. This means that $f_{i_0}^{\circ}(\bar x, u)<0$. In this case we now rewrite \eqref{equa:7} as
$$f_{i_0}\left(\bar x+t_{k_l}\left(u+\frac12t_{k_l}v^{k_l}\right)\right)\geqq f_{i_0}(\bar x), \ \ \forall l\in\mathbb{N}.$$
From $\lim\limits_{l\to\infty} t_{k_l}=0$, $\lim\limits_{l\to\infty} \left(u+\frac12t_{k_l}v^{k_l}\right)=u$, and Lemma \ref{lemma1}, it follows that $f_{i_0}^{\circ}(\bar x, u)\geqq 0$. This contradicts the fact that $f_{i_0}^{\circ}(\bar x, u)<0$.
Put $K_0:=\max\{K_i\,:\, i\in I\}$. Then, we have
$$f_i(x^k)<f_i(\bar x)$$
for all $k\geqq K_0$ and $i\in I$, which contradicts the hypothesis of the theorem.
\end{proof}
An immediate consequence of the above theorem is the following corollary.
\begin{corollary}\label{second_order_nec} Let $\bar x$ be a local {\rm(}weak{\rm)} efficient solution of \eqref{problem} and $u\in \mathcal{C}(\bar x)$. Suppose that the \eqref{SACQ_3} holds at $\bar x$ for the direction $u$. Then the following system
\begin{align}
&f_{i}^{\circ}(\bar x, v)+f_{i}^{\circ\circ}(\bar x, u)<0, \ \ i\in I(\bar x; u), \label{equa:9}
\\
&g_{j}^{\circ}(\bar x, v)+g_{j}^{\circ\circ}(\bar x, u)\leqq 0, \ \ j\in J(\bar x; u), \label{equa:10}
\end{align}
has no solution $v\in X$.
\end{corollary}
\begin{remark}{\rm Suppose that $F\colon X\to \mathbb{R}$ is of class $C^1(X)$, i.e., $F$ is Fr\'echet differentiable and its gradient mapping is continuous on $X$. If $F$ is second-order directionally differentiable at $\bar x$, i.e., there exists
$$F^{\prime\prime}(\bar x, u):= \lim\limits_{t\downarrow 0} \dfrac{F(\bar x+tu)-F(\bar x)- t\langle \nabla F(\bar x), u\rangle}{\frac12t^2},\ \ u\in X,$$
then $F^{\prime\prime}(\bar x, u) = F^{\circ\circ}(\bar x, u)$ for all $u\in X$. In \cite{Ivanov15}, Ivanov considered problem \eqref{problem} under the following conditions:
\begin{equation}\label{Ivanov_condition}\tag{$\mathfrak{C}$}
\left.
\begin{aligned}
&\text{The functions } g_j, j\notin J(\bar x) \text{ are continuous at } \bar x;
\\
&\text{The functions } f_i, i\in I, g_j, j\in J(\bar x) \text{ are of class } C^1(X);
\\
&\text{If } \langle \nabla f_i(\bar x), u\rangle=0, \text{ then there exists } f_i^{\prime\prime} (\bar x, u);
\\
&\text{If } \langle \nabla g_j(\bar x), u\rangle=0, j\in J(\bar x), \text{ then there exists } g_j^{\prime\prime} (\bar x, u).
\end{aligned}
\right \}
\end{equation}
If condition \eqref{Ivanov_condition} holds at $\bar x$ for the direction $u$, then the system \eqref{equa:9}--\eqref{equa:10} becomes
\begin{align*}
&\langle \nabla f_i(\bar x), v\rangle+f_i^{\prime\prime}(\bar x, u)<0, \ \ i\in I(\bar x; u),\\
&\langle \nabla g_j(\bar x), v\rangle+g_j^{\prime\prime}(\bar x, u)\leqq 0, \ \ j\in J(\bar x; u).
\end{align*}
Since the \eqref{SACQ_3} is weaker than the \eqref{SACQ_1}, Corollary \ref{second_order_nec} improves and extends the results of Ivanov \cite[Theorem 4.1]{Ivanov15} and of Huy et al. \cite[Theorem 3.2]{Huy163}. To illustrate, we consider the following example.
}
\end{remark}
\begin{example}\label{ex4.1}
{\rm Let $f\colon \mathbb{R}^2 \to\mathbb{R}^3$ and $g\colon \mathbb{R}^2\to \mathbb{R}$ be two maps defined by
\begin{align*}
f(x)&:= (f_1(x), f_2(x), f_3(x))=(x_2, x_1+x_2^2, -x_1-x_1|x_1|+x_2^2)\\
g(x)&:=|x_1|+x_2^3-x_1^2, \ \ \forall x=(x_1, x_2)\in\mathbb{R}^2.
\end{align*}
Then the feasible set of \eqref{problem} is
$$Q_0=\{(x_1, x_2)\in\mathbb{R}^2\,:\,|x_1|+x_2^3-x_1^2\leqq 0\}.$$
Let $\bar x=(0,0)\in Q_0$. It is easy to check that $\bar x$ is an efficient solution of \eqref{problem}. For each $u=(u_1, u_2)\in\mathbb{R}^2$, we have
\begin{align*}
&f_1^{\circ}(\bar x, u)=\langle\nabla f_1(\bar x), u\rangle=u_2, \quad f_2^{\circ}(\bar x, u)=\langle\nabla f_2(\bar x), u\rangle= u_1,
\\
&f_3^{\circ}(\bar x, u)=\langle\nabla f_3(\bar x), u\rangle=-u_1, \quad g^{\circ}(\bar x, u)=|u_1|.
\end{align*}
Thus,
$$\mathcal{C}(\bar x)=\{(u_1, u_2)\in\mathbb{R}^2\,:\, u_1=0, u_2\leqq 0\}.$$
Clearly, $0_{\mathbb{R}^2}:=(0,0)$ is a critical direction at $\bar x$. We claim that the \eqref{SACQ_3} holds at $\bar x$ for the direction $0_{\mathbb{R}^2}$. Indeed, we have
\begin{equation*}
L^2(Q; \bar x, 0_{\mathbb{R}^2})=\{(v_1, v_2)\in\mathbb{R}^2\,:\, v_1=0, v_2\leqq 0\}.
\end{equation*}
An easy computation shows that
$$T^2(Q_0; \bar x, 0_{\mathbb{R}^2})=T(Q_0; \bar x)=\{(v_1, v_2)\in\mathbb{R}^2\,:\, v_1=0, v_2\leqq 0\}.$$ This implies that the \eqref{SACQ_3} holds at $\bar x$ for the direction $0_{\mathbb{R}^2}$. By Corollary \ref{second_order_nec}, the system
\begin{align*}
&f_{i}^{\circ}(\bar x, v)+f_{i}^{\circ\circ}(\bar x, 0_{\mathbb{R}^2})<0, \ \ i\in I(\bar x; 0_{\mathbb{R}^2}),
\\
&g^{\circ}(\bar x, v)+g^{\circ\circ}(\bar x, 0_{\mathbb{R}^2})\leqq 0,
\end{align*}
has no solution $v\in\mathbb{R}^2$. The second-order necessary conditions of Huy et al. \cite[Theorem 3.2]{Huy163} and of Ivanov \cite[Theorem 4.1]{Ivanov15} are not applicable to this example as the constraint function $g$ is not Fr\'echet differentiable at $\bar x$. Furthermore, the \eqref{SACQ_1} does not hold at $\bar x$ for the direction $0_{\mathbb{R}^2}$. Indeed, we have
$$B(\bar x; 0_{\mathbb{R}^2})=\{(v_1, v_2)\in\mathbb{R}^2\,:\, v_1=0, v_2\in\mathbb{R}\}.$$
Let $v=(v_1, v_2)\in\mathbb{R}^2$. We have $v\in A(\bar x; 0_{\mathbb{R}^2})$ if and only if
there exists $\delta>0$ such that
$$g\left(\bar x+t0_{\mathbb{R}^2}+\frac12t^2v\right)\leqq 0, \ \ \forall t\in(0,\delta),$$
or, equivalently,
\begin{equation}\label{equa:11}
|v_1|-\frac12t^2v_1^2+\frac14t^4v_2^3\leqq 0, \ \ \forall t\in(0,\delta).
\end{equation}
It is easy to check that \eqref{equa:11} is true if and only if $v_1=0$ and $v_2\leqq 0$. Thus,
$$A(\bar x; 0_{\mathbb{R}^2})=\{(v_1, v_2)\in\mathbb{R}^2\,:\, v_1=0, v_2\leqq 0\}.$$
Clearly, $B(\bar x; 0_{\mathbb{R}^2}) \nsubseteq \mbox{cl}\,A(\bar x; 0_{\mathbb{R}^2})$. This means that the \eqref{SACQ_1} does not hold at $\bar x$ for the direction $0_{\mathbb{R}^2}$.
}
\end{example}
\begin{remark}{\rm Recently, by using the \eqref{MFSCQ}, Luu \cite[Corollary 5.2]{Luu17} derived some second-order KKT necessary conditions for weak efficient solutions of differentiable vector problems in terms of the second-order upper generalized directional derivatives. By Proposition \ref{relations-CQ}, the \eqref{SACQ_3} is weaker than the \eqref{MFSCQ}.
Thus, Corollary \ref{second_order_nec} improves \cite[Corollary 5.2]{Luu17}.
To see this, let us consider the following example.
}
\end{remark}
\begin{example} \label{ex4.2}
Let $f\colon \mathbb{R}^2 \to\mathbb{R}^2$ and $g\colon \mathbb{R}^2\to \mathbb{R}^2$ be two maps defined by
\begin{align*}
f(x)&:= (f_1(x), f_2(x))=(x_1+x_2^2, -x_1-x_1|x_1|+x_2^2)\\
g(x)&:=(g_1(x), g_2(x))=(x_1-x_2^2, -x_1-x_2^2), \ \ \forall x=(x_1, x_2)\in\mathbb{R}^2.
\end{align*}
Then the feasible set of \eqref{problem} is
$$Q_0=\{(x_1, x_2)\in\mathbb{R}^2\,:\, -x_2^2\leqq x_1\leqq x^2_2\}.$$
Let $\bar x=(0,0)\in Q_0$. Clearly, $\bar x$ is an efficient solution of \eqref{problem}. It is easy to check that the \eqref{SACQ_3} holds at $\bar x$ for the critical direction $0_{\mathbb{R}^2}$, while the \eqref{MFSCQ} does not. Thus Corollary \ref{second_order_nec} can be applied to this example, while \cite[Corollary 5.2]{Luu17} cannot.
\end{example}
\section{Strong second-order optimality condition for local Geoffrion properly efficiencies}\label{Strong_Second_order_optim_sect}
In this section, we apply the \eqref{SACQ_3} to establish a strong second-order $KKT$ necessary optimality condition for a local Geoffrion properly efficient solution of \eqref{problem}.
\begin{theorem}\label{Geoffrion_necessary_I} Let $\bar x\in Q_0$ be a local Geoffrion properly efficient solution of \eqref{problem}. Suppose that the \eqref{SACQ_3} holds at $\bar x$ for any critical direction. Then the system
\begin{eqnarray}
F^2_i(\bar{x}; u, v)&\leqq_{\rm lex} (0, 0),\ \ \ &i\in I, \label{equ:G1}
\\
F^2_i(\bar{x}; u, v)&<_{\rm lex} (0, 0),\ \ \ &\mbox{at least one} \ \ i\in I(\bar x; u), \label{equ:G2}
\\
G^2_j(\bar{x}; u, v)&\leqq_{\rm lex} (0, 0),\ \ \ &j\in J(\bar x) \label{equ:G3}
\end{eqnarray}
has no solution $(u, v)\in X\times X$.
\end{theorem}
\begin{proof} Arguing by contradiction, assume that the system \eqref{equ:G1}--\eqref{equ:G3} admits a solution $(u, v)\in X\times X$. Without any loss of generality we may assume that
\begin{equation*}
F^2_1(\bar{x}; u, v)<_{\rm lex} (0, 0),
\end{equation*}
where $1\in I(\bar x; u)$. This implies that
\begin{equation}\label{equ:G4}
f_{1}^{\circ}(\bar x, v)+f_{1}^{\circ\circ}(\bar x, u)<0.
\end{equation}
From \eqref{equ:G1} and \eqref{equ:G3} it follows that $v\in L^2 (Q; \bar x, u)$ and
\begin{eqnarray*}
f_{i}^{\circ}(\bar x, u)&\leqq 0, \ \ \ &i\in I,
\\
g_{j}^{\circ}(\bar x, u)&\leqq 0, \ \ \ &j\in J (\bar x).
\end{eqnarray*}
This and $1\in I(\bar x; u)$ imply that $u$ is a critical direction at $\bar x$. Since the \eqref{SACQ_3} holds at $\bar x$ for the critical direction $u$, we have $v\in T^2(Q_0; \bar x, u).$ Thus there exist a sequence $\{v^k\}$ converging to $v$ and a positive sequence $\{t_k\}$ converging to $0$ such that
$$x^k:=\bar x+t_ku+\frac12t_k^2v^k\in Q_0,\ \ \ \forall k\in\mathbb{N}.$$
Since $1\in I(\bar x; u)$ and \eqref{equ:G4} holds, arguing as in Case 1 of the proof of Theorem \ref{nec_condition_weak_eff}, there exists $K_1\in \mathbb{N}$ such that
\begin{equation*}
f_1(x^k)<f_1(\bar x)
\end{equation*}
for all $k\geqq K_1$.
For each $i\in I\setminus I(\bar x; u)$, we have $f_{i}^{\circ}(\bar x, u)<0.$ Arguing as in Case 2 of the proof of Theorem \ref{nec_condition_weak_eff}, there exists $K_i\in \mathbb{N}$ such that
\begin{equation*}
f_i(x^k)<f_i(\bar x)
\end{equation*}
for all $k\geqq K_i$. Without any loss of generality we may assume that
\begin{equation*}
f_i(x^k)<f_i(\bar x)
\end{equation*}
for all $k\in \mathbb{N}$ and $i\in \{1\}\cup [I\setminus I(\bar x; u)]$. For each $k\in \mathbb{N}$, put
$$I_k:= \{i\in I(\bar x;u)\setminus\{1\} \ : f_i(x^k)>f_i(\bar x)\}.$$
We claim that $I_k$ is nonempty for all $k\in\mathbb{N}$. Indeed, if $I_k=\emptyset$ for some $k\in \mathbb{N}$, then we have
$$f_i(x^k)\leqq f_i(\bar x)\ \ \forall i\in I(\bar x;u)\setminus\{1\}.$$
Using also the fact that $f_i(x^k)<f_i(\bar x)$ for all $i\in \{1\}\cup [I\setminus I(\bar x; u)]$, we arrive at a contradiction with the local efficiency of $\bar x$ (recall that $x^k\to\bar x$).
Since $I_k\subset I(\bar x;u)\setminus\{1\} $ for all $k\in\mathbb{N}$, without any loss of generality, we may assume that $I_k=\bar I$ is constant for all $k\in \mathbb{N}$. Thus, for each $i\in \bar I$, we have
\begin{equation*}
f_i(x^k)>f_i(\bar x), \ \ \forall k\in\mathbb{N}.
\end{equation*}
By Lemma \ref{lemma2}, we have
\begin{equation*}
f_i^{\circ}(\bar x, v)+f_i^{\circ\circ}(\bar x, u)\geqq 0, \ \ i\in \bar I.
\end{equation*}
By \eqref{equ:G1}, for each $i\in \bar I\subset I(\bar x;u)\setminus\{1\}$, we have
\begin{equation*}
f_i^{\circ}(\bar x, v)+f_i^{\circ\circ}(\bar x, u)\leqq 0.
\end{equation*}
Thus,
\begin{equation}\label{equ:G8}
f_i^{\circ}(\bar x, v)+f_i^{\circ\circ}(\bar x, u)=0, \ \ i\in \bar I.
\end{equation}
Let $\delta$ be a real number satisfying
\begin{equation*}
f_{1}^{\circ}(\bar x, v)+f_{1}^{\circ\circ}(\bar x, u)<\delta<0,
\end{equation*}
or, equivalently,
\begin{equation*}
-[f_{1}^{\circ}(\bar x, v)+f_{1}^{\circ\circ}(\bar x, u)]>-\delta>0.
\end{equation*}
It is easily seen that
$$\limsup_{k\to\infty} \dfrac{f_1(x^k)-f_1(\bar x)}{\frac12t^2_k}\leqq f_{1}^{\circ}(\bar x, v)+f_{1}^{\circ\circ}(\bar x, u).$$
Thus there exists $k_0\in\mathbb{N}$ such that
\begin{equation*}
f_1(\bar x)- f_1(x^k)>-\frac12\delta t^2_k>0
\end{equation*}
for all $k\geqq k_0$. Then, for any $i\in \bar I$ and $k\geqq k_0$, we have
\begin{equation*}
0< \dfrac{f_i(x^k)-f_i(\bar x)}{f_1(\bar x)- f_1(x^k)}\leqq \dfrac{f_i(x^k)-f_i(\bar x)}{-\frac12\delta t^2_k}.
\end{equation*}
From this and \eqref{equ:G8}, we have
\begin{align*}
0\leqq \lim_{k\to\infty}\dfrac{f_i(x^k)-f_i(\bar x)}{f_1(\bar x)- f_1(x^k)}&\leqq \limsup_{k\to\infty}\dfrac{f_i(x^k)-f_i(\bar x)}{-\frac12\delta t^2_k}
\\
&\leqq \limsup_{k\to\infty}\frac{f_i(x^k)-f_i(\bar x+t_ku)}{-\frac12\delta t^2_k}
\\
&+\limsup_{k\to\infty}\frac{f_i(\bar x+t_ku)-f_i(\bar x)-t_kf_i^\circ(\bar x; u)}{-\frac12\delta t^2_k}
\\
&\leqq-\frac{1}{\delta}[f_i^{\circ}(\bar x, v)+f_i^{\circ\circ}(\bar x, u)]
\\
&=0.
\end{align*}
Thus,
$$\lim_{k\to\infty}\dfrac{f_1(x^k)-f_1(\bar x)}{f_i(\bar x)-f_i(x^k)}=+\infty,$$
contrary to the fact that $\bar x$ is a local Geoffrion properly efficient solution of \eqref{problem}. The proof is complete.
\end{proof}
The following corollary is immediate from Theorem \ref{Geoffrion_necessary_I}.
\begin{corollary}\label{Geoffrion_necessary_II} Let $\bar x\in Q_0$ be a local Geoffrion properly efficient solution of \eqref{problem} and $u\in \mathcal{C}(\bar x)$. Suppose that the \eqref{SACQ_3} holds at $\bar x$ for the direction $u$. Then the system
\begin{align*}
&f_{i}^{\circ}(\bar x, v)+f_{i}^{\circ\circ}(\bar x, u)\leqq 0, \ \ i\in I(\bar x; u),
\\
&f_{i}^{\circ}(\bar x, v)+f_{i}^{\circ\circ}(\bar x, u)< 0, \ \ \mbox{at least one} \ \ i\in I(\bar x; u),
\\
&g_{j}^{\circ}(\bar x, v)+g_{j}^{\circ\circ}(\bar x, u)\leqq 0, \ \ j\in J(\bar x; u),
\end{align*}
has no solution $v\in X$.
\end{corollary}
The next corollary shows that if the $(WARC)$ holds at $\bar x$, then every Geoffrion properly efficient solution of \eqref{problem} is also proper in the sense of Kuhn and Tucker \cite{Kuhn50}.
\begin{corollary}\label{first_order_nec_cond} Let $\bar x\in Q_0$ be a local Geoffrion properly efficient solution of \eqref{problem}. Suppose that the $(WARC)$ holds at $\bar x$. Then the system
\begin{align}
&f_{i}^{\circ}(\bar x, u)\leqq 0, \ \ i\in I,\label{first1}
\\
&f_{i}^{\circ}(\bar x, u)< 0, \ \ \mbox{at least one} \ \ i\in I,\label{first2}
\\
&g_{j}^{\circ}(\bar x, u)\leqq 0, \ \ j\in J(\bar x),\label{first3}
\end{align}
has no solution $u\in X$.
\end{corollary}
\begin{proof} Since the $(WARC)$ holds at $\bar x$, the \eqref{SACQ_3} holds at $\bar x$ for the critical direction $0$. Clearly, $I(\bar x; 0)=I$ and $J(\bar x; 0)=J(\bar x)$. Thus, applying Corollary \ref{Geoffrion_necessary_II}, the system \eqref{first1}--\eqref{first3} has no solution $u\in X$.
\end{proof}
\begin{remark}{\rm Conditions \eqref{first1}--\eqref{first3} are also known as strong first-order $KKT$ ($SFKKT$) necessary conditions in primal form. In \cite{Rizvi12}, Burachik et al. introduced a generalized Abadie regularity condition $(GARC)$ and established $SFKKT$ necessary conditions for Geoffrion properly efficient solutions of differentiable vector optimization problems. Later on, Zhao \cite{Zhao15} proposed an extended generalized Abadie regularity condition $(EGARC)$ and then obtained $SFKKT$ necessary conditions for problems with locally Lipschitz data in terms of Clarke's directional derivatives. Recall that the $(EGARC)$ holds at $\bar x\in Q_0$ if
\begin{equation}\label{EGARC}
L(Q; \bar x)\subset \bigcap_{i\in I} T(M^i; \bar x);
\end{equation}
see \cite[Definition 3.1]{Zhao15}. If $f_i$ and $g_j$ are of class $C^1(X)$, then condition \eqref{EGARC} is called the generalized Abadie regularity condition $(GARC)$; see \cite[p.483]{Rizvi12}. By the isotony of $T(\,\cdot\,; \bar x)$ and the fact that $M^i\subset Q_0$, we have
$$T(M^i; \bar x)\subset T(Q_0; \bar x)\ \ \ \text{for all}\ \ i\in I.$$
Thus the $(WARC)$ is weaker than the $(EGARC)$ (resp., the $(GARC)$). It turns out that Corollary~\ref{first_order_nec_cond} improves and extends the results of Zhao \cite[Theorem 4.1]{Zhao15} and Burachik et al. \cite[Theorem 4.3]{Rizvi12}. The following example illustrates our results: the condition $(WARC)$ is satisfied, while the condition $(EGARC)$ (resp., the $(GARC)$) is not fulfilled.
}
\end{remark}
\begin{example}{\rm Consider the following problem:
\begin{align*}
& \text{min}\, f(x):=(f_1(x), f_2(x))
\\
&\text{subject to}\ \ x\in Q_0:=\{x\in\mathbb{R}^2\,|\, g(x)\leqq 0\},
\end{align*}
where
$$f_1(x):=|x_1|+x_2^2, \quad f_2(x):=-f_1(x), \quad g(x):=x_2 \ \ \text{for all} \ \ x=(x_1, x_2)\in\mathbb{R}^2.$$
Clearly, $\bar x=(0,0)$ is a Geoffrion properly efficient solution. The optimality conditions of Burachik et al. \cite[Theorem 4.3]{Rizvi12} cannot be used for this problem as the functions $f_1$ and $f_2$ are not differentiable at $\bar x$.
For each $u=(u_1,u_2)\in\mathbb{R}^2$, we have
$$f_1^{\circ}(\bar x,u)=|u_1|, \quad f_2^{\circ}(\bar x,u)=|u_1|, \quad g^{\circ}(\bar x,u)=\langle\nabla g(\bar x), u\rangle=u_2.$$
It is easy to check that
$$\mathcal{C}(\bar x)=L(Q; \bar x)=\{(u_1, u_2)\in\mathbb{R}^2\,:\, u_1=0, u_2\leqq 0\}.$$
We claim that the $(EGARC)$ does not hold at $\bar x$. Indeed, since
\begin{align*}
M^1&=\{(x_1, x_2)\in \mathbb{R}^2\,:\, f_1(x_1, x_2)\leqq 0, g(x_1, x_2)\leqq 0\}=\{\bar x\},
\\
M^2&=\{(x_1, x_2)\in \mathbb{R}^2\,:\, f_2(x_1, x_2)\leqq 0, g(x_1, x_2)\leqq 0\}=Q_0,
\end{align*}
we have $ T(M^1; \bar x)=\{\bar x\}$ and $T(M^2; \bar x)=Q_0$. Thus, $T(Q_0; \bar x)=Q_0$ and
$$\bigcap_{i=1}^2T(M^i; \bar x)=\{\bar x\}.$$
Consequently,
\begin{equation*}
L(Q; \bar x) \nsubseteq \bigcap_{i=1}^2T(M^i; \bar x),
\end{equation*}
as required. This shows that the result of Zhao \cite[Theorem 4.1]{Zhao15} cannot be applied to this example.
Next we check the first-order necessary optimality conditions of our Corollary \ref{first_order_nec_cond}. Since $T(Q_0; \bar x)=Q_0$, we have
$$L(Q; \bar x)\subset T(Q_0; \bar x).$$
This means that the $(WARC)$ holds at $\bar x$. By Corollary \ref{first_order_nec_cond}, the system \eqref{first1}--\eqref{first3}
has no solution $u\in \mathbb{R}^2$.
}
\end{example}
\section{Concluding remarks}
\label{conclusions_sect}
In this paper we obtain primal second-order $KKT$ necessary conditions for vector optimization problems with inequality constraints in a nonsmooth setting using second-order upper generalized directional derivatives. We suppose that the objective functions and active constraints are only locally Lipschitz. Some second-order constraint qualifications of Zangwill type, Abadie type and Mangasarian--Fromovitz type, as well as a regularity condition of Abadie type, are proposed and applied in the optimality conditions. Our results improve and generalize the corresponding results of Aghezzaf and Hachimi \cite[Theorem 3.3]{Aghezzaf99}, Gupta et al. \cite[Theorems 3.1 and 3.3]{Gupta2017}, Huy et al. \cite[Theorem 3.2]{Huy163}, Ivanov \cite[Theorem 4.1]{Ivanov15}, Constantin \cite[Theorem 2]{Elena}, Luu \cite[Corollary 5.2]{Luu17}, Zhao \cite[Theorem 4.1]{Zhao15}, and Burachik et al. \cite[Theorem 4.3]{Rizvi12}.
To obtain second-order $KKT$ necessary conditions in dual form, we need to assume that the objective functions and constraint functions are of class $C^1(X)$. Then one can follow the scheme of the proof of \cite[Theorem 3.4]{Aghezzaf99}; we leave the details to the reader.
\section*{Acknowledgments}
The authors would like to thank the anonymous referee for his valuable remarks and detailed suggestions that allowed us to improve the original version. Y. B. Xiao is supported by the National Natural Science Foundation of China (11771067) and the Applied Basic Project of Sichuan Province (2019YJ0204). N. V. Tuyen is supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 101.01-2018.306 as well as the grant from School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, P.R. China. C. F. Wen and J. C. Yao are supported by the Taiwan MOST [grant number 106-2115-M-037-001], [grant number 106-2923-E-039-001-MY3], respectively, as well as the grant from Research Center for Nonlinear Analysis and Optimization, Kaohsiung Medical University, Taiwan.
\end{document}
\section{Approach}
\label{approach}
The previous sections show that the role of the KG encoder on CSKGs is mainly to complement PTMs in the task of commonsense reasoning.
Relation features, rather than node features, are the key for the KG encoder to improve PTMs.
Based on these findings, we develop a simple commonsense KG encoder based on the statistical relation features from CSKGs, namely \textbf{SAFE}.
Figure~\ref{fig:model} presents an overview of our model.
\subsection{Capturing High-Order Relation Semantics}
\label{extracting}
Since relation features are shown useful to improve the performance of commonsense reasoning, we consider extracting relation features for better capturing the knowledge semantics from the CSKG.
Inspired by KG reasoning studies~\cite{lin2018multi,feng2020scalable}, we construct multi-hop relation paths that connect question nodes with answer candidate nodes on the CSKG to capture the higher-order semantic relatedness among them.
Formally, given the commonsense subgraph $\mathcal{G}^{q,c_{i}}$ for the question $q$ and the answer candidate $c_{i}$, we first extract a set of relation paths within $k$ hops that connect a question concept node $v_q \in \mathcal{V}_q$ and an answer concept node $v_{c_i} \in \mathcal{V}_{c_i}$, denoted as $\mathcal{P}^{q,c_{i}}$.
Specifically, a path $p \in \mathcal{P}^{q,c_{i}}$ can be represented as a sequence of nodes and relations as $p=\{v_1, r_1, \cdots, r_{k-1}, v_k\}$.
Based on the empirical findings in Section~\ref{pilot}, we consider a simplified representation for relation paths that removes node IDs but only keeps the relations on a path.
To preserve the role of each node, we replace its node ID with a three-valued type indicating whether the node is a \emph{question node} (0), an \emph{answer node} (1), or \emph{other} (2).
In this way, a path $p$ can be represented by $p=\{ t_{v_1}, r_1, t_{v_2}, r_2, \cdots, r_{k-1}, t_{v_k}\}$, where $t_{v}$ is the role type of node $v$.
Since we remove explicit node IDs, our model can concentrate on more essential relation features.
Based on the above method, for a question $q$ and an answer candidate $c_i$, we extract all the simplified relation paths and count their frequencies among all the paths.
We use $\mathcal{F}^{q,c_i}=\{ \langle p_j, f_j \rangle \}$ to denote all the paths for the question $q$ and the answer candidate $c_i$, where each entry consists of the $j$-th path $p_j$ and its frequency $f_j$.
Unlike prior approaches (\emph{e.g.,}\xspace QA-GNN), we use such very simple features of relation paths from CSKGs to improve the reasoning capacity of PTMs.
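To make this construction concrete, the following is a minimal Python sketch of the path extraction and counting step (our illustration, not the released implementation); the adjacency-list graph format, the function name, and the choice to stop a path at the first answer node reached are assumptions on our part.
\begin{verbatim}
from collections import Counter

def extract_path_features(adj, q_nodes, c_nodes, k=2):
    # adj: node -> list of (relation_id, neighbor) pairs (assumed format)
    # returns a Counter mapping simplified path tuples to frequencies f_j
    def role(v):  # 0: question node, 1: answer node, 2: other
        return 0 if v in q_nodes else (1 if v in c_nodes else 2)

    freq = Counter()

    def dfs(v, path, visited):
        if v in c_nodes and len(path) > 1:
            freq[tuple(path)] += 1  # record p = (t_1, r_1, ..., t_m)
            return                  # stop at the answer node (assumption)
        if (len(path) - 1) // 2 == k:
            return                  # hop limit reached
        for rel, w in adj.get(v, []):
            if w not in visited:    # keep paths simple (no revisits)
                dfs(w, path + [rel, role(w)], visited | {w})

    for v_q in q_nodes:
        dfs(v_q, [role(v_q)], {v_q})
    return freq
\end{verbatim}
Calling this routine once per question-candidate subgraph $\mathcal{G}^{q,c_i}$ yields the feature set $\mathcal{F}^{q,c_i}$ used below, and it can be pre-computed offline.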
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/model_v3.pdf}
\caption{The illustration of our approach. We adopt an all-MLP KG encoder to model the relation features extracted from the CSKG to enhance the PTM.
}
\label{fig:model}
\end{figure}
\subsection{An MLP-Based KG Encoder}
\label{KG encoder}
Our KG encoder is built on an all-MLP architecture over the simplified relation-path features and consists of a path encoder and a feature aggregator.
\paratitle{Path Encoder.}
The path encoder is a two-layer MLP that encodes a relation path into a scalar feature value.
As shown in Section~\ref{extracting}, we can obtain the path feature set $\mathcal{F}^{q,c_i}=\{ \langle p_j, f_j \rangle \}$ for the question $q$ and the answer candidate $c_i$.
Since CSKGs, unlike general KGs, usually contain far fewer relation types (\emph{e.g.,}\xspace 36 relations in ConceptNet), we adopt one-hot representations of these relation types.
For the node types (\emph{question}, \emph{candidate}, or \emph{other}), we adopt similar one-hot representations.
Then, we concatenate these one-hot vectors to compose the sparse representation of a relation path $p$ in order, denoted as $\mathbf{v}_{p}$.
Subsequently, the sparse path representation is encoded by a two-layer MLP~(\emph{i.e.,}\xspace the path encoder) to produce the corresponding scalar feature value $x_{p}$:
\begin{equation}
x_{p}=\text{MLP}_{2}(\text{MLP}_{1}(\mathbf{v}_{p})),
\end{equation}
where $x_{p}$ reflects the importance of such a relation path for commonsense reasoning.
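As one possible realization, the path encoder can be written in a few lines of PyTorch. The class name, the ReLU activation, the hidden size, and the zero-padding of paths shorter than the maximum hop count are our assumptions; the paper text fixes only the two-layer MLP structure.
\begin{verbatim}
import torch
import torch.nn as nn

class PathEncoder(nn.Module):
    # Two-layer MLP mapping the sparse one-hot path encoding v_p
    # to a scalar feature value x_p.
    def __init__(self, n_rel=36, n_type=3, max_hops=2, hidden=32):
        super().__init__()
        # a path with up to max_hops relations has max_hops+1 node types
        in_dim = (max_hops + 1) * n_type + max_hops * n_rel
        self.net = nn.Sequential(nn.Linear(in_dim, hidden),
                                 nn.ReLU(),           # assumed activation
                                 nn.Linear(hidden, 1))
        self.n_rel, self.n_type, self.max_hops = n_rel, n_type, max_hops

    def featurize(self, path):
        # path = [t_1, r_1, t_2, ..., r_{m-1}, t_m]; shorter paths are
        # zero-padded (a batching convenience, not specified in the text)
        types, rels = path[0::2], path[1::2]
        v = torch.zeros((self.max_hops + 1) * self.n_type
                        + self.max_hops * self.n_rel)
        for i, t in enumerate(types):
            v[i * self.n_type + t] = 1.0
        off = (self.max_hops + 1) * self.n_type
        for i, r in enumerate(rels):
            v[off + i * self.n_rel + r] = 1.0
        return v

    def forward(self, v_p):          # v_p: (batch, in_dim)
        return self.net(v_p).squeeze(-1)
\end{verbatim}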
\paratitle{Feature Aggregator.}
Based on the above path encoder, we can generate the scalar feature values for all the relation paths in the feature set $\mathcal{F}^{q,c_i}=\{ \langle p_j, f_j \rangle \}$.
The feature aggregator aims to aggregate these feature values to produce the confidence score of the answer candidate \emph{w.r.t.} the question, from the KG perspective.
Concretely, we sum the different feature values of relation paths weighted by their frequencies as follows:
\begin{equation}
x_{q,c_i}=\sum_{\langle p_j, f_j\rangle \in \mathcal{F}^{q,c_i}}x_{p_j} \cdot f_j,
\end{equation}
where $x_{p_j}$ is the mapping feature value of path $p_j$ and $f_j$ is the frequency of path $p_j$. Here, $x_{q,c_i}$ aims to capture the overall confidence score based on the subgraph $\mathcal{G}^{q,c_i}$ given the question and the answer candidate.
However, since the weighted sum is likely to cause extreme values (\emph{i.e.,}\xspace too large or too small), we add an extra two-layer MLP for scaling:
\begin{equation}
S_{KG}(q,c_i)=\text{MLP}_{4}(\text{MLP}_{3}(x_{q,c_i})),
\label{skg}
\end{equation}
where $S_{KG}$ is the prediction score indicating the confidence level that candidate $c_i$ is the right answer to question $q$ from the perspective of KG.
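Continuing the sketch above, the aggregator is again a small MLP acting on the single scalar $x_{q,c_i}$; as before, the activation function is an assumption of this illustration.
\begin{verbatim}
class SAFEEncoder(nn.Module):
    # Frequency-weighted aggregation of path features followed by a
    # two-layer scaling MLP, producing S_KG(q, c_i).
    def __init__(self, hidden=32):
        super().__init__()
        self.path_encoder = PathEncoder()
        self.scale = nn.Sequential(nn.Linear(1, hidden),
                                   nn.ReLU(),        # assumed activation
                                   nn.Linear(hidden, 1))

    def forward(self, paths, freqs):
        # paths: list of simplified paths; freqs: (n_paths,) tensor
        v = torch.stack([self.path_encoder.featurize(p) for p in paths])
        x_paths = self.path_encoder(v)               # (n_paths,)
        x_qc = (x_paths * freqs).sum().view(1, 1)    # weighted sum
        return self.scale(x_qc).squeeze()            # S_KG(q, c_i)
\end{verbatim}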
\subsection{Integrating KG Encoder with PTM}
In this part, we integrate the above KG encoder with the PTM for commonsense reasoning.
\paratitle{The PTM Encoder}. Following existing works~\cite{Yasunaga2021qagnn}, we utilize a PTM as the backbone of commonsense reasoning.
Given a question $q$ and an answer candidate $c_i$, we concatenate their text to compose the input of the PTM.
After encoding by the multiple Transformer layers, we select the output of the \texttt{[CLS]} token in the last layer as the contextual representation of the question-candidate pair, denoted by $\mathbf{h}_{cls}$.
Then, we feed $\mathbf{h}_{cls}$ into an MLP layer to produce a scalar output $S_{PTM}$,
\begin{align}
\mathbf{h}_{cls}&=\text{PTM}(q,c_i), \\
S_{PTM}(q,c_i) &= \text{MLP}(\mathbf{h}_{cls}),
\label{sptm}
\end{align}
which is the plausibility score of the answer candidate from the perspective of PTM.
\paratitle{Combining the Prediction Scores}.
We then derive the prediction score of each answer candidate for a question by leveraging both the PTM and KG encoder based on either textual or structured semantics.
For each question-candidate pair $(q, c_i)$, we combine the prediction scores of the two modules as:
\begin{equation}
S(q,c_i) = S_{PTM}(q,c_i) + S_{KG}(q,c_i),
\end{equation}
where $S_{PTM}(q,c_i)$ (Eq.~\ref{sptm}) and $S_{KG}(q,c_i)$ (Eq.~\ref{skg}) are the prediction scores of PTM and KG encoder, respectively.
Given a set of answer candidates $\{c_1,...,c_n\}$, we further normalize $S(q,c_i)$ into a conditional probability $\text{Pr}(c_i|q)$ via the softmax operation over the $n$ candidates.
During the training stage, we optimize the parameters of the whole model~(including both the PTM and the KG encoder) with the cross-entropy loss between the predictions and the ground-truth answer (based on the probability distribution $\{ \text{Pr}(c_i|q)\}_{i=1}^n$).
During inference, we first compute the probability score $\text{Pr}(c_i|q)$ for each answer candidate, and then select the highest one as the predicted answer.
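For completeness, a sketch of the score combination, training loss, and prediction step follows; it assumes the per-candidate scores $S_{PTM}$ and $S_{KG}$ have already been computed as above, and the function name is illustrative.
\begin{verbatim}
import torch.nn.functional as F

def combine_and_predict(s_ptm, s_kg, label=None):
    # s_ptm, s_kg: (n_candidates,) tensors of per-candidate scores
    scores = s_ptm + s_kg                    # S(q, c_i)
    probs = torch.softmax(scores, dim=-1)    # Pr(c_i | q)
    pred = int(torch.argmax(probs))
    loss = None
    if label is not None:                    # training: cross entropy
        loss = F.cross_entropy(scores.unsqueeze(0),
                               torch.tensor([label]))
    return pred, loss
\end{verbatim}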
\begin{table}[t]
\centering
\small
\begin{tabular}{lccc|c}
\toprule
& \textbf{RGCN} & \textbf{MHGRN} & \textbf{QA-GNN} & \textbf{SAFE} \\
\midrule
Node emb. & $\surd$ & $\surd$ & $\surd$ &$\times$ \\
Relation & $\surd$ & $\surd$ & $\surd$ & $\surd$\\
GNN & $\surd$ & $\surd$ & $\surd$ & $\times$ \\
MLP-based &$\times$ &$\times$ & $\times$ &$\surd$\\
\midrule
\# Params & 365K & 547K & 2845K & 4.7K\\
\bottomrule
\end{tabular}
\caption{Comparisons of different KG encoders for commonsense reasoning. Instead of using node embeddings and GNN structure, we adopt relation paths as the input features and incorporate a full MLP architecture.
}
\label{tab:comparsion}
\end{table}
\subsection{Comparison with Existing KG Encoders}
For the task of commonsense reasoning, it has become a common approach by integrating PTM with an external KG encoder based on CSKGs.
The major difference among these methods (including our approach) lies in the design of the KG encoder.
Next, we compare these variants for the KG encoder.
We summarize the comparison between our KG encoder and representative KG encoders in Table~\ref{tab:comparsion}.
We can see that our approach relies on neither node embeddings nor a GNN structure.
Instead, we mainly utilize relation paths as the features of the KG encoder, which is built on a simple MLP-based architecture.
Therefore, the number of model parameters involved in our KG encoder is much smaller than that of existing KG encoders.
As will be shown in Section~\ref{experiment}, our KG encoder yields better or at least comparable performance compared with existing GNN-based encoders, based on the same configuration for PTMs.
Specifically, our approach can largely reduce the computational costs for encoding the CSKG.
For our approach, we need to extract the relation paths from question nodes to all the answer candidate nodes on the CSKG, and it can be efficiently fulfilled via a $k$-hop constrained Depth-First Search~\cite{dfs}, which can be pre-computed in offline processing.
When the relation paths have been extracted, it is efficient to encode these paths with our MLP architecture.
Such a process can be easily paralleled or accelerated by optimized matrix multiplication.
In contrast, existing GNN-based encoders rely on iterative propagation and aggregation over the entire subgraph, which incurs a much larger computational cost.
\section{Conclusion} \label{conclusion}
In this work, we study how the external commonsense knowledge graphs~(CSKGs) are utilized to improve the reasoning capacity of pre-trained models~(PTMs).
Our work makes an important contribution to understanding and enhancing the commonsense reasoning capacity of PTMs.
Our results show that relation paths from the CSKG are the key to performance improvement.
Based on this finding, we design a rather simple MLP-based KG encoder with relation paths from the CSKG as features, which can be generally integrated with various PTMs for commonsense reasoning tasks.
Such a lightweight KG encoder has less than 1\% of the trainable parameters of previous GNN-based KG encoders.
Experimental results on five commonsense reasoning datasets demonstrate the effectiveness of our approach.
In future work, we will study how to effectively leverage the commonsense knowledge from large-scale unstructured data to improve PTMs.
We will also try to apply our approach to other knowledge-intensive tasks, \emph{e.g.,}\xspace knowledge graph completion and knowledge graph based question answering~\cite{lan-kbqa-ijcai}.
\section{Ethical Consideration} \label{ethical}
This work primarily investigates how external commonsense knowledge graphs (CSKGs) enhance the commonsense reasoning capacity of pre-trained models (PTMs) and proposes a simple but effective KG encoder on CSKGs to enhance PTMs.
A potential problem derives from using PTMs and CSKGs in our approach.
PTMs have been shown to capture certain biases from the data they have been pre-trained on~\cite{bender2021dangers}.
Moreover, existing works~\cite{mehrabi2021lawyers} have found that CSKGs are likely to contain biased concepts derived from human annotations.
However, a comprehensive analysis of such biases is outside of the scope of this work.
It is a compelling direction to investigate to what extent the combination of CSKGs and PTMs can help mitigate such biases.
An alternative is to filter biased concepts in the process of subgraph extraction from the CSKG.
By devising proper rules, it is promising to reduce the influence of biased concepts on our approach.
\section{Acknowledgments}
This work was partially supported by Beijing Natural Science Foundation under Grant No. 4222027, and National Natural Science Foundation of China under Grant No. 61872369, Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.
This work is also supported by Beijing Academy of Artificial Intelligence~(BAAI). Xin Zhao is the corresponding author.
\section{Experiment} \label{experiment}
\subsection{Experimental Setup}
In this part, we introduce the experimental setup.
\paratitle{Evaluation Tasks.}
We conduct experiments on five commonsense reasoning tasks, shown in Table~\ref{tab:number_static}.
$\bullet$ \textbf{CommonsenseQA}~\citep{csqa} is a 5-way multiple-choice QA dataset. It is created based on ConceptNet~\citep{conceptnet}.
$\bullet$ \textbf{OpenBookQA}~\citep{obqa} is a 4-way multiple-choice QA dataset about elementary science questions to evaluate the science commonsense knowledge.
$\bullet$ \textbf{SocialIQA}~\citep{socialiqa} is a 3-way multiple-choice QA dataset to evaluate the understanding of social commonsense knowledge.
$\bullet$ \textbf{PIQA}~\citep{piqa} is a binary-choice QA dataset about physical commonsense.
$\bullet$ \textbf{CoPA}~\citep{copa} is a commonsense causal inference dataset, where the task is to select the alternative that has the most plausible causal relation with the premise.
\begin{table}[t]
\centering
\small
\begin{tabular}{l|rrr}
\toprule
\textbf{Task} & \textbf{Train} & \textbf{Dev} & \textbf{Test} \\
\midrule
CommonsenseQA & 9,741 & 1,221 & 1,140 \\
OpenBookQA & 4,957 & 500 & 500 \\
SocialIQA & 33,410 & 1,954 & - \\
PIQA & 16,113 & 1,838 & - \\
CoPA & - & 500 & 500 \\
\bottomrule
\end{tabular}
\caption{Statistics of the datasets. ``-'' denotes the unused or not available dataset split in our experiments.}
\label{tab:number_static}
\end{table}
\paratitle{Data Preprocessing.}
For CommonsenseQA and OpenBookQA, we use their original train/dev/test split settings.
Since the test set of CommonsenseQA is not available, we follow previous work~\cite{lin2019kagnet} that extracts 1,241 examples from the original training set as the test set.
Besides, the test sets of SocialIQA and PIQA are not available.
Therefore, we report the experimental results on their development sets for a fair comparison~\cite{shwartz2020unsupervised}.
For CoPA, which only provides development and test sets, we follow \citet{niu2021sematic} to train models on the development set and evaluate the performance on the test set.
For commonsense KG, we adopt \emph{ConceptNet}~\cite{conceptnet}, a general-domain and task-agnostic CSKG, as our external knowledge source $\mathcal{G}$ for all the above models and tasks.
For each question-candidate pair $(q, c_{i})$, we follow previous works~\cite{lin2019kagnet,feng2020mhgrn} to retrieve and construct the subgraph $\mathcal{G}^{q,c_{i}}$ from the CSKG $\mathcal{G}$.
\paratitle{Baseline Methods.}
We compare our model with the following six baseline methods, including a fine-tuned PTM and five PTM+GNN models:
$\bullet$ \textbf{Fine-tuned PTM} directly fine-tunes a PTM without using any CSKG.
We use RoBERTa-large~\cite{roberta} for all tasks.
Additionally, we also use BERT-large~\citep{bert} and AristoRoBERTa~\citep{aroberta} for OpenBookQA to evaluate the generality of our KG-encoder.
$\bullet$ \textbf{PTM+GNN models} integrate PTM with additional GNN-based KG encoders. Based on the same PTM (the above baseline), we consider five variants with different KG encoders:
(1) \emph{Relation Network}~(RN)~\citep{santoro2017simple} using a relational reasoning structure over the CSKG;
(2) \emph{GconAttn}~\citep{lin2019kagnet} using a graph concept attention model to aggregate entity information from the CSKG;
(3) \emph{RGCN}~\citep{rgcn} extending the GCN with relation-specific weights;
(4) \emph{MHGRN}~\citep{feng2020mhgrn} using a GNN architecture reasoning over the CSKG that unifies both GNNs and path-based models;
(5) \emph{QA-GNN}~\citep{Yasunaga2021qagnn} using a GAT to perform jointly reasoning over the CSKG.
For all these methods, we adopt the same architecture and configuration for the PTM, so that we can examine the effect of different KG encoders.
\begin{table*}[t]
\centering
\small
\begin{tabular}{p{0.262\columnwidth}cccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Methods}}&
\multicolumn{6}{c}{CommonsenseQA} & \multicolumn{6}{c}{OpenBookQA}\\
\cmidrule(lr){2-7} \cmidrule(lr){8-13}
& 5\% & 10\% & 20\% & 50\% & 80\% & 100\% & 5\% & 10\% & 20\% & 50\% & 80\% & 100\%\\
\midrule
RoBERTa-large & 29.66 & 42.84 & 58.47 & 66.13 & 68.47 & 68.69$^{\dag}$ & 37.00 & 39.4 & 41.47 & 53.07 & 57.93 & 64.8$^{\dag}$\\
\midrule
+ RGCN & 24.41 & 43.75 & 59.44 & 66.07 & 68.33 & 68.41$^{\dag}$ & 38.67 & 37.53 & 43.67 & 56.33 & 63.73 & 62.45$^{\dag}$\\
+ GconAttn & 21.92 & 49.83 & 60.09 & 66.93 & 69.14 & 68.59$^{\dag}$ & 38.60 & 36.13 & 43.93 & 50.87 & 57.87 & 64.75$^{\dag}$\\
+ RN & 23.77 & 34.09 & 59.90 & 65.62 & 67.37 & 69.08$^{\dag}$ & 33.73 & 35.93 & 41.40 & 49.47 & 59.00 & 65.20$^{\dag}$\\
+ MHGRN & 29.01 & 32.02 & 50.23 & 68.09 & 70.83 & 71.11$^{\dag}$ & 38.00 & 36.47 & 39.73 & 55.73 & 55.00 & 66.85$^{\dag}$\\
+ QA-GNN & 32.95 & 37.77 & 50.15 & 69.33 & 70.99 & 73.41$^{\dag}$ & 33.53 & 35.07 & 42.40 & 54.53 & 52.47 & 67.80$^{\star}$\\
\midrule
+ SAFE(\textbf{Ours}) & \textbf{36.45} & \textbf{56.51} & \textbf{65.16} & \textbf{70.72} & \textbf{73.22} & \textbf{74.03} & \textbf{38.80} & \textbf{41.20} & \textbf{44.93} & \textbf{58.33} & \textbf{65.60} & \textbf{69.20}\\
\bottomrule
\end{tabular}
\caption{Performance comparison on CommonsenseQA and OpenBookQA with different proportions of training data. We report the average test performance of three runs, and the best results are highlighted in bold. $\dag$ indicates the reported results from \citet{Yasunaga2021qagnn}. $\star$ indicates the reported results from \citet{wang2021gsc}.}
\label{tab:few-shot}
\end{table*}
\begin{table}[t]
\centering
\small
\begin{tabular}{lccc}
\toprule
\textbf{Methods}& \textbf{SocialIQA} & \textbf{PIQA} & \textbf{CoPA}\\
\midrule
RoBERTa-large & 78.25 & 77.53 & 67.60\\
\midrule
+ {GconAttn} & 78.86 & 78.24 & 70.00 \\
+ {RN} & 78.45 & 76.88 & 70.20 \\
+ {MHGRN} & 78.11 & 77.15 & 71.60 \\
+ {QA-GNN} & 78.10 & 78.24 & 68.40 \\
\midrule
+ {SAFE}~(\textbf{Ours}) & \textbf{78.86} & \textbf{79.43} & \textbf{71.60} \\
\bottomrule
\end{tabular}
\caption{Performance comparison on SocialIQA, PIQA, and CoPA (Dev accuracy).
}
\label{tab:other_qa}
\end{table}
\subsection{Implementation Details}
We implement all PTMs based on HuggingFace Transformers~\cite{huggingface}.
For all the baselines, we keep the common hyper-parameters as identical as possible and set their special hyper-parameters following the suggestions from the original papers.
In our approach, we extract the relation paths with no more than 2 hops between the concept nodes from the question and the answer candidate.
We tune the hidden dimension of MLPs from the path encoder in \{32, 64, 100\}, and the batch size in \{32, 48, 60, 120\}.
The parameters of the model are optimized by RAdam~\citep{radam}, and the learning rate of the PTM and the KG encoder is also tuned in \{1$e$-4, 1$e$-5, 2$e$-5\} and \{1$e$-3, 1$e$-2\}, respectively.
To accelerate the training process, we do not incorporate Dropout regularization in our model.
All the above hyper-parameters are tuned on the development set.
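To illustrate the architecture described above, the following PyTorch sketch shows a minimal SAFE-style path encoder: each $\leq$2-hop relation path is one-hot featurized, mapped to a scalar value by a small MLP, and the values are summed into a KG-side score. The featurization and all names are assumptions for illustration, not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class PathEncoder(nn.Module):
    def __init__(self, n_relations, hidden=64, max_hops=2):
        super().__init__()
        in_dim = n_relations * max_hops   # one-hot relation per hop
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, path_feats):
        # path_feats: (n_paths, n_relations * max_hops)
        return self.mlp(path_feats).sum()  # aggregate path values into a score

enc = PathEncoder(n_relations=17)          # e.g., ConceptNet relation types
score = enc(torch.rand(5, 34))             # 5 paths for one (q, c_i) pair
\end{verbatim}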
\subsection{Results Analysis}
Following previous works~\cite{Yasunaga2021qagnn,wang2021gsc}, we take the results on CommonsenseQA and OpenBookQA as the main experiments to compare different methods.
In order to test their robustness to data sparsity, we examine the performance under six different proportions of training data, \emph{i.e.,}\xspace $\{5\%, 10\%, 20\%, 50\%, 80\%, 100\%\}$.
\paratitle{CommonsenseQA and OpenBookQA.}
The results of different methods on CommonsenseQA and OpenBookQA are presented in Table~\ref{tab:few-shot}.
Comparing the results under the full-data setting~(\emph{i.e.,}\xspace 100\% training data), we can see that all the PTM+GNN methods perform better than vanilla PTM (\emph{i.e.,}\xspace RoBERTa-large).
It indicates that the KG encoder on the CSKG is able to incorporate useful knowledge information to improve PTMs on commonsense reasoning tasks.
Additionally, among all the PTM+GNN baselines, QA-GNN performs the best. The major reason is that QA-GNN uses the PTM to estimate the importance of KG nodes and connects the QA context and the CSKG to form a joint graph, which is helpful to improve the reasoning ability on the CSKG.
Finally, our method consistently outperforms all the baselines. Our approach incorporates a lightweight MLP architecture as the KG encoder with relation paths as features.
It reduces the parameter redundancy of the KG encoder and focuses on the most essential features for reasoning, \emph{i.e.,}\xspace semantic relation paths.
Such an approach is effective to enhance the commonsense reasoning capacity of PTMs.
Comparing the results under different sparsity ratios of training data, we can see that the performance substantially drops when the size of training data is reduced.
Nevertheless, our method performs consistently better than all baselines.
This is because our KG encoder contains significantly fewer parameters than those of the baselines, which reduces the risk of overfitting and endows our approach with better robustness in data-scarce scenarios.
\begin{table}[tb]
\centering
\small
\begin{tabular}{lcc}
\toprule
\textbf{Methods} & \textbf{BERT-large} & \textbf{AristoRoBERTa} \\
\midrule
Fine-tuned PTMs & 59.00 & 78.40$^{\dag}$ \\
\midrule
+ RGCN & 45.40 & 74.60$^{\dag}$ \\
+ GconAttn & 48.20 & 71.80$^{\dag}$ \\
+ RN & 48.60 & 75.35$^{\dag}$ \\
+ MHGRN & 46.20 & 80.60$^{\dag}$ \\
+ QA-GNN & 58.47 & 82.77$^{\dag}$ \\
\midrule
+ SAFE~(\textbf{Ours}) & \textbf{59.20} & \textbf{87.13} \\
\bottomrule
\end{tabular}
\caption{Evaluation with other PTMs on OpenBookQA (average test accuracy of three runs). Methods with AristoRoBERTa use the textual evidence by \citet{science-exam} as an additional input to the QA context. \dag ~indicates reported results in \cite{Yasunaga2021qagnn}.}
\label{tab:obqa_main}
\end{table}
\paratitle{Other Commonsense Reasoning Datasets.}
To further verify the effectiveness of our method, we also compare the results of different methods on other commonsense reasoning datasets.
These datasets are from different domains or different tasks.
These results are shown in Table~\ref{tab:other_qa}.
Similarly, our approach achieves the best performance in most cases.
This indicates that our approach is generally effective across various commonsense reasoning datasets and tasks, outperforming competitive but more complicated baselines.
Among all the datasets, our approach improves the performance of the PTM on the CoPA dataset by a large margin.
The reason is that CoPA is a small dataset with only 500 training examples.
Baselines with heavy architectures are prone to overfitting on it.
In contrast, our KG encoder is lightweight and thus better resists overfitting.
\subsection{Evaluation with Other PTMs}
The major contribution of our approach lies in the lightweight KG encoder, which can be also used to enhance the commonsense reasoning capacity of various PTMs.
To validate it, we examine the performance of our KG encoder when integrated with two other PTMs, \emph{i.e.,}\xspace BERT-large and AristoRoBERTa, on OpenBookQA dataset.
As shown in Table~\ref{tab:obqa_main}, BERT-large and AristoRoBERTa enhanced by our KG encoder perform better than the original PTMs.
In particular, our KG encoder improves the performance of AristoRoBERTa by a large margin (an 8.73\% improvement).
These results show that our KG encoder is a general method to improve PTMs for commonsense reasoning.
In contrast, when adapting other KG encoders to these two PTMs, the performance decreases in most cases.
This is mainly because these KG encoders have complicated architectures, which may not be easily adapted to other PTMs.
\subsection{Hyper-parameters Analysis}
\begin{figure}[t]
\small
\centering
\includegraphics[width=0.62\columnwidth]{figures/hidden_analysis.pdf}
\caption{Analysis of different hidden dimension size of our SAFE model.}
\label{fig-hidden}
\end{figure}
For hyper-parameter analysis, we study the hidden dimension size of the MLP in the path encoder.
Concretely, we evaluate our model with varying values of the hidden dimension size on CommonsenseQA and OpenBookQA datasets using RoBERTa-large model.
The results are shown in Figure~\ref{fig-hidden}.
We can see that, as the hidden dimension size increases, the performance first improves and then drops to some extent.
The possible reason lies in two aspects.
On the one hand, a hidden dimension that is too small makes it hard for the path encoder to represent sufficient information from relation paths for commonsense reasoning.
On the other hand, a larger hidden dimension enlarges the number of parameters of our KG encoder, which increases the risk of overfitting and may cause performance degradation.
\subsection{Case Study}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/case_study.pdf}
\caption{The generated feature values of relation path examples by the path encoder. \textbf{Q} and \textbf{A} denote the concept nodes from the question and the answer candidate, respectively.}
\label{fig:case_study}
\end{figure}
We propose a rather simple KG encoder to effectively utilize the relation features from the CSKG, which first computes the feature values of the relation paths and then aggregates these values into the confidence score of the question and choice from the perspective of the KG.
In this way, we can generate a table in advance that maps each type of relation path to its feature value, which reflects its contribution to the confidence score.
Based on this table, it is convenient to directly judge the importance of a relation path and quickly assess, from the KG perspective, whether the choice is the answer to the question.
Figure~\ref{fig:case_study} shows some path-value examples on the CommonsenseQA dataset.
As we can see, paths with higher values indeed provide more persuasive evidence (\emph{e.g.,}\xspace \emph{causes} and \emph{capableof}) that the choice is more likely to be the answer to the question.
In contrast, paths with lower values usually represent an ambiguous relationship~(\emph{e.g.,}\xspace \emph{relatedto}), which contributes less to judging whether the choice is the answer.
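The lookup-table idea can be sketched as follows; the toy relation subset and the placeholder \texttt{path\_value} function stand in for the trained path encoder and are purely illustrative.
\begin{verbatim}
# A hypothetical sketch of the precomputed path-value table: enumerate all
# <=2-hop relation-path types once and cache their feature values.
import itertools

relations = ['causes', 'capableof', 'atlocation', 'relatedto']  # toy subset

def path_value(path):
    # placeholder for the trained MLP; returns one scalar per path type
    return sum(len(r) for r in path) / 10.0

path_types = [p for k in (1, 2)
              for p in itertools.product(relations, repeat=k)]
value_table = {p: path_value(p) for p in path_types}

# at inference, the KG-side confidence of (q, c_i) is a sum of lookups
paths = [('causes',), ('relatedto', 'capableof')]
kg_score = sum(value_table[p] for p in paths)
\end{verbatim}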
\section{Introduction} \label{introduction}
In the era of artificial intelligence, it is desirable that intelligent systems can be empowered by the capacity of commonsense reasoning in natural language.
For this purpose, a surge of commonsense reasoning tasks and datasets have been proposed to evaluate and improve such an ability of NLP models, \emph{e.g.,}\xspace CommonsenseQA~\cite{csqa} and SocialIQA~\cite{socialiqa}.
Although large-scale pre-trained models (PTMs)~\cite{bert,roberta} have surpassed human performance in a number of NLP benchmarks, it is still hard for PTMs to accurately capture and understand commonsense knowledge for accomplishing complex reasoning tasks~\cite{csqav2}.
In order to enhance the reasoning capacity, commonsense knowledge graphs~(CSKGs)~(\emph{e.g.,}\xspace ConceptNet~\citep{conceptnet} and ATOMIC~\citep{atomic}) have been adopted for injecting external commonsense knowledge into PTMs.
By conducting entity linking to CSKGs, existing methods~\cite{Yasunaga2021qagnn,feng2020mhgrn} aim to capture the structured knowledge semantics via knowledge graph~(KG) encoders (\emph{e.g.,}\xspace graph neural network~(GNN)~\cite{gat,gcn}), and then integrate the KG encoders for improving the commonsense reasoning capacity of PTMs~\cite{Yasunaga2021qagnn}.
Despite their effectiveness, these approaches are built on highly complicated network architectures (involving both PTMs and GNNs).
Thus, it is difficult to explain how and why external commonsense knowledge improves the commonsense reasoning capacity of PTMs.
Besides, existing CSKGs~\cite{mehrabi2021lawyers,nguyen2021refined} are mostly crowdsourced from massive selected resources~(\emph{e.g.,}\xspace books, encyclopedias, and scraped web corpus), containing a wide variety of content.
Without a clear understanding of how these external resources should be utilized, it is likely to incorporate irrelevant concepts or even knowledge biases~\cite{mehrabi2021lawyers,nguyen2021refined} into PTMs, which might hurt the reasoning performance.
Indeed, some researchers have noted this issue and questioned whether existing GNN-based modules are over-complicated for commonsense reasoning~\cite{wang2021gsc}.
Furthermore, they find that even a simple graph neural counter can outperform existing GNN modules on CommonsenseQA and OpenBookQA benchmarks.
However, existing studies cannot adequately answer the fundamental questions about knowledge utilization for commonsense reasoning: How do external knowledge resources enhance the commonsense reasoning capacity of PTMs? What is necessarily required from external knowledge resources for PTMs?
Since the simplified knowledge-aware GNN has already yielded performance improvement on the CommonsenseQA~\cite{wang2021gsc}, we speculate that there might be a simpler solution if we could identify the essential knowledge for commonsense reasoning.
Focusing on this issue, we design a solution by further simplifying the KG encoder.
Based on our empirical analysis, we observe a surprising result that it is indeed \emph{relation features} from CSKGs, but not \emph{node features}, that are the key to the task of commonsense reasoning (See more details in Section~\ref{pilot}).
According to this finding, we propose a rather simple approach to leveraging external knowledge resources for enhancing the commonsense reasoning capacity of PTMs.
Instead of using a heavy GNN architecture, we design a lightweight KG encoder fully based on the multi-layer perceptron~(MLP), which utilizes \textbf{S}tatistical relation p\textbf{A}th from CSKGs as \textbf{FE}atures, namely \textbf{SAFE}.
We find that semantic relation paths can provide useful knowledge evidence for PTMs, which is the key information for helping commonsense reasoning.
By conducting extensive experiments on five benchmark datasets, our approach achieves superior or competitive performance compared with state-of-the-art methods, especially when training data is limited.
Besides the performance improvement, our approach largely reduces the parameters for encoding CSKGs (fewer than 1\% trainable parameters compared to GNN-based KG encoders~\citep{Yasunaga2021qagnn}).
Our main contributions can be summarized as follows: (1) We empirically find that relation features from CSKGs are the key to the task of commonsense reasoning; (2) We design a simple MLP-based architecture with relation paths as features for enhancing the commonsense reasoning capacity of PTMs; (3) Extensive experiments conducted on five benchmark datasets demonstrate the effectiveness of our proposed approach, which also largely reduces the parameters of the KG encoder.
\section{Empirical Analysis on the Commonsense KG Encoder} \label{pilot}
In this section, we conduct an empirical study to investigate how the external KG encoder helps PTMs with commonsense reasoning.
\subsection{Analysis Setup}
To conduct the analysis experiments, we select QA-GNN~\citep{Yasunaga2021qagnn}, a representative approach that integrates PTM with GNN for the commonsense QA task, as the studied model.
We adopt the CommonsenseQA~\citep{csqa} and OpenBookQA~\cite{obqa}, two of the most widely used commonsense reasoning benchmarks, for evaluation, with the same data split setting in \cite{lin2019kagnet}.
We perform two analysis experiments: one examines the effect of the commonsense KG encoder, and the other one examines the effect of different features in the commonsense KG encoder.
To be specific, the two experiments focus on two key questions about commonsense reasoning:
(1) what is the effect of the commonsense KG encoder on PTMs?
(2) what is the key information within the commonsense KG encoder?
\subsection{Results and Findings}
Next, we conduct the experiments and present our findings of commonsense reasoning.
\paratitle{Effect of Commonsense KG Encoder.}
Since existing studies have widely utilized a GNN module to encode the commonsense knowledge, we examine its contribution to the improvement of reasoning performance.
We consider comparing three variants of QA-GNN: (A) \emph{PTM-Only} that directly removes the GNN module and degenerates into a pure PTM, (B) \emph{PTM-Pred} that trains the PTM and GNN simultaneously but only makes the prediction with the PTM module, and (C) \emph{GNN-Pred} that trains the PTM and GNN simultaneously but only makes the prediction with the GNN module.
The comparison results are shown in Figure~\ref{fig-kg-encoder}.
As we can see, using predictions based solely on the GNN module~(\emph{i.e.,}\xspace GNN-Pred) can only answer a relatively small proportion of the questions (no more than 60\% on CommonsenseQA).
As a comparison, when trained independently~(\emph{i.e.,}\xspace PTM-Only) or jointly with the GNN module~(\emph{i.e.,}\xspace PTM-Pred), the PTM module can answer a large proportion of the questions (at least 70\% on CommonsenseQA).
Furthermore, the incorporation of the GNN encoder is useful to improve the performance of PTMs (PTM-Only \emph{vs.} QA-GNN).
These results show that:
$\bullet$~In the joint PTM-GNN approach, the PTM contributes the most to the commonsense reasoning task and is the key to the reasoning performance.
$\bullet$~The commonsense KG encoder is incapable of performing effective reasoning independently, but can enhance the PTM in an auxiliary role.
\begin{figure}[t]
\centering
\subfigure[CommonsenseQA]{\label{fig-csqa-control}
\centering
\includegraphics[width=0.22\textwidth]{figures/csqa_control.pdf}
}
\subfigure[OpenBookQA]{\label{fig-obqa-control}
\centering
\includegraphics[width=0.22\textwidth]{figures/obqa_control.pdf}
}
\centering
\caption{Performance comparison on CommonsenseQA and OpenBookQA (Dev accuracy).
}
\label{fig-kg-encoder}
\end{figure}
\begin{figure}[t]
\centering
\subfigure{\label{fig-node-dimension}
\centering
\includegraphics[width=0.223\textwidth]{figures/node_dimension.pdf}
}
\subfigure{\label{fig-edge-drop}
\centering
\includegraphics[width=0.215\textwidth]{figures/edge_drop.pdf}
}
\centering
\caption{Performance examination for KG encoder on CommonsenseQA and OpenBookQA (Dev accuracy).
}
\label{fig-edge-node}
\end{figure}
\paratitle{Effect of Node/Relation Features from KG.}
The major aim of the KG encoder is to characterize the commonsense knowledge and provide necessary knowledge evidence for enhancing the reasoning capacity of PTMs.
Generally, a CSKG consists of concept nodes and relational links.
To identify the key knowledge information that is necessarily needed, we now examine the effect of node and relation features from CSKG.
To eliminate the effect of the PTM module, we remove it and compare the performance of the KG encoder alone under two experimental settings: (A) reducing the dimension of node embeddings to $d$ (PCA~\citep{pca} is applied to select the $d$ most informative dimensions), and (B) randomly removing $p$ percent of the relational links in the KG subgraph of a question-candidate pair.
As shown in Figure~\ref{fig-edge-node}, we surprisingly find that even after reducing the dimension of node embeddings to 1, the GNN encoder still yields improved performance.
These results show that node features are not the key information utilized by the GNN encoder.
In contrast, removing a considerable proportion of links significantly reduces the performance.
From these observations, we conclude that the relation features from the CSKG are indeed the key knowledge information actually needed by the KG encoder.
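For concreteness, a minimal sketch of the two ablation settings is given below, assuming \texttt{scikit-learn} for the PCA reduction and \texttt{networkx} for the link removal; it mirrors the procedure described above but is not the original experimental code.
\begin{verbatim}
import numpy as np
import networkx as nx
from sklearn.decomposition import PCA

def reduce_node_features(node_emb, d):
    # (A) keep only the d most informative dimensions of the node embeddings
    return PCA(n_components=d).fit_transform(node_emb)

def drop_edges(g, p, seed=0):
    # (B) randomly remove a fraction p of the relational links
    rng = np.random.default_rng(seed)
    g = g.copy()
    edges = list(g.edges(keys=True))
    for idx in rng.choice(len(edges), size=int(p * len(edges)),
                          replace=False):
        g.remove_edge(*edges[idx])
    return g

emb_1d = reduce_node_features(np.random.rand(10, 200), d=1)
g = nx.MultiDiGraph([('a', 'b'), ('b', 'c'), ('a', 'c')])
g_dropped = drop_edges(g, p=0.5)
\end{verbatim}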
\section{Task Description}
According to pioneering works~\citep{csqa,obqa}, the commonsense reasoning task can be generally described as a multi-choice question answering problem: given a natural language question $q$ and a set of $n$ choices $\{c_{1},\cdots,c_{n}\}$ as the answer candidates, the goal is to select the most proper choice $c^{\star}$ from these candidates to answer the question based on necessary commonsense knowledge.
To explicitly capture commonsense knowledge, external commonsense knowledge graphs~(CSKGs) have often been utilized in this task, \emph{e.g.,}\xspace ConceptNet~\citep{conceptnet}.
A CSKG can be formally described as a multi-relational graph $\mathcal{G} = (\mathcal{V}, \mathcal{R},\mathcal{E})$, where $\mathcal{V}$ is the set of all concept (or entity) nodes (\emph{e.g.,}\xspace \emph{hair} and \emph{water}), $\mathcal{R}$ is the set of relation types (\emph{e.g.,}\xspace \emph{relatedto} and \emph{atlocation}), and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{R} \times \mathcal{V}$ is the set of relational links that connect two concept nodes in $\mathcal{V}$.
Following prior studies~\cite{lin2019kagnet}, we solve the commonsense reasoning task in a \emph{knowledge-aware} setting, where a CSKG $\mathcal{G}$ is available as input.
We first link the mentioned concepts from the question and the answer candidates to the CSKG, so that we can leverage the rich semantic knowledge from the CSKG for commonsense reasoning.
Based on the linked concepts in the question and each answer candidate, we further extract their neighbouring nodes from $\mathcal{G}$ and the relational links that connect them, to compose a subgraph $\mathcal{G}^{q,c_{i}}$ for characterizing the commonsense knowledge about the question $q$ and the answer candidate $c_i$.
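A minimal sketch of this knowledge-aware setting is shown below; \texttt{ptm\_score} and \texttt{kg\_score} are placeholders for a text-side scorer and a KG-side scorer, and all names are ours.
\begin{verbatim}
# Score each (q, c_i) with a text score plus a score on its subgraph
# G^{q,c_i}, then return the best-scoring candidate.
def select_answer(q, candidates, ptm_score, kg_score, subgraphs):
    scores = [ptm_score(q, c) + kg_score(subgraphs[c]) for c in candidates]
    return max(zip(scores, candidates))[1]

q = "Where would you find wet hair?"
cands = ["shower", "desk", "forest"]
subgraphs = {c: None for c in cands}              # placeholder subgraphs
ans = select_answer(q, cands,
                    lambda q, c: len(c) / 10.0,   # dummy PTM scorer
                    lambda g: 0.0,                # dummy KG scorer
                    subgraphs)
\end{verbatim}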
\section{Related Work} \label{related_work}
We review the related studies in two aspects, \emph{i.e.,}\xspace commonsense reasoning and KG-enhanced pretrained models.
\paratitle{Commonsense Reasoning.}
Commonsense reasoning tasks aim to evaluate the understanding of commonsense knowledge~\cite{davis2015commonsense}, \emph{e.g.,}\xspace physical commonsense~\citep{hswag}, which are mostly formulated as a multi-choice QA problem.
Early studies either rely on explicit text features~\citep{pmi} to capture the relations between the question and answer candidates, or adopt neural networks~(\emph{e.g.,}\xspace DNN or LSTM)~\citep{dnn4qa,lstm4qa} to model the implicit correlation features.
Recently, pre-trained models (PTM)~\citep{bert, roberta} have achieved remarkable performance on commonsense reasoning tasks.
Furthermore, a surge of works incorporate external knowledge resources to further improve the reasoning performance.
Among them, CSKG~(\emph{e.g.,}\xspace~ ConceptNet~\citep{conceptnet}) has been widely studied, and existing works mainly adopt graph neural networks to learn useful commonsense knowledge from the CSKG to enhance PTMs.
Based on these works, we systematically study what is necessarily needed from CSKGs for improving PTMs. Our analysis leads to an important finding that relation features mainly contribute to the performance improvement, and we design a lightweight MLP architecture to simplify the KG encoder.
\paratitle{KG-Enhanced Pre-trained Models.}
Recently, a series of works focus on enhancing PTMs with external KGs to improve the performance on factual knowledge understanding~\cite{colake,wang2021kepler} and knowledge reasoning tasks~\cite{csqa,zhang2019ernie,he-kgc-www}.
These works inject the structured knowledge from the external KG into PTMs in either pre-training or fine-tuning stage.
The first class of works mainly focus on devising knowledge-aware pre-training tasks~\cite{wang2021kepler,zhang2019ernie} to improve the understanding of entities or triples from the KG, \emph{e.g.,}\xspace knowledge completion~\cite{wang2021kepler} and denoising entity auto-encoder~\cite{zhang2019ernie}.
Another class of works adopt task-specific KG encoders to enhance PTMs during fine-tuning, \emph{e.g.,}\xspace path-based relation network~\cite{feng2020mhgrn} and GNN~\cite{Yasunaga2021qagnn}.
Different from them, we aim to directly enhance PTMs with a KG encoder on the downstream commonsense reasoning tasks, and design a rather simple yet effective KG encoder.
\section{Introduction}
Radio-loud Active Galactic Nuclei (AGN) are the most commonly observed astrophysical sources at high energies. They emit bright non-thermal radiation across a wide range of energies and wavelengths (from radio to very high energy $\gamma$-rays) through relativistic jets, which are thought to be produced by the interplay of the magnetic field lines and the central supermassive black hole (SMBH). If the relativistic jet of an AGN is aligned at a small angle with the observer's line of sight (LoS), the source is called a blazar (\cite{urry1995unified}). Most blazars show variability over a wide range of timescales across the whole electromagnetic spectrum (\cite{aharonian2007exceptional}; \cite{raiteri2013awakening}). Historically, blazars are divided into two subclasses depending upon their optical spectra: BL Lacertae objects (BL Lacs), which typically have very weak or absent features in their optical spectra, and Flat-Spectrum Radio Quasars (FSRQs), known for their strong broad optical emission lines. \cite{berton2018radio} has shown that flat-spectrum radio-loud Narrow-Line Seyfert 1 (NLS1) galaxies can host low-power blazar-type jets aligned with our LoS. The observed broadband Spectral Energy Distribution (SED) of blazars shows a typical double-peaked structure. The low-energy peak, extending from radio to optical, is produced by the synchrotron emission of relativistic electrons accelerated in the jet magnetic field, whereas the emission processes that give rise to the high-energy peak are less clear. In the simplest leptonic emission model, the high-energy $\gamma$-ray radiation is caused by Inverse-Compton (IC) scattering of soft target photons originating in the synchrotron radiation process (SSC; \cite{sikora2009constraining}) or in external photon fields (EC; \cite{dermer1992high}; \cite{sikora1994comptonization}). More advanced models incorporate relativistic protons as well and are called lepto-hadronic emission models (e.g. \cite{bottcher2013leptonic}).
PKS 0346-27 is an FSRQ located at redshift $z = 0.991$ (\cite{white1988redshifts}), with coordinates R.A. = $57.1589354^{\circ}$, Decl. = $-27.8204344^{\circ}$ (J2000, \cite{beasley2002vlba}). It is also known as BZQ J0348-2749 (\cite{massaro2009roma}). In the Parkes catalog (\cite{bolton1964parkes}) it was first identified as a radio source. Based on its optical spectrum, it was then classified as a quasar (\cite{white1988redshifts}). It was later revealed as an X-ray source by ROSAT (\cite{voges1999rosat}, and references therein). The Energetic Gamma Ray Experiment Telescope (EGRET, \cite{thompson1993calibration}) did not detect it in the $\gamma$-ray band. In $\gamma$-rays it was first detected by Fermi-LAT and included in the Fermi-LAT First Source Catalog (1FGL, \cite{abdo2010fermi}). In the Fermi-LAT Fourth Source Catalog (4FGL, \cite{lat2019fermi}), the latest Fermi-LAT catalog, it is associated with the $\gamma$-ray source 4FGL J0348.5-2749.
A near-infrared (NIR) flare from PKS 0346-27 was first reported based on data taken on 2017 Nov 14 (MJD 58071) (\cite{2017ATel10999....1C}). A few months later, on 2018 Feb 02 (MJD 58151), strong $\gamma$-ray flaring activity was reported from the source based on Fermi-LAT data (\cite{angioni2018fermi}). PKS 0346-27 was found in an elevated state with a significantly harder spectrum with respect to the one reported in the 3FGL, reaching a daily $\gamma$-ray flux (at energies > 100 MeV) more than 100 times larger than the average flux reported in the 3FGL. Multi-wavelength follow-up observations revealed enhanced activity in the optical-NIR (\cite{nesci2018high}; \cite{vallely2018asas}), ultraviolet (UV), and X-ray (\cite{nesci2018x}) bands.
In this work, we study the flaring states between 2019 January and 2021 December (MJD 58484-59575) using $\gamma$-ray and X-ray/UVOT data. We characterize the flaring activity of PKS 0346-27 over this period, probing its temporal behaviour in $\gamma$-rays. Multi-wavelength light curves are generated to identify flaring episodes. The $\gamma$-ray flares are identified and modelled with the time-dependent leptonic emission model. In Section \ref{sec2} we discuss the multi-wavelength observations and the data analysis techniques for the different archival data sets. The sum-of-exponentials fitting of the $\gamma$-ray flares is discussed in Section \ref{sec3}. In Section \ref{sec4} the spectral energy distribution (SED) and its modelling are discussed. The power density spectrum and the auto-correlation study of the $\gamma$-ray light curve are discussed in Section \ref{sec5} and Section \ref{sec6}, respectively. Our results are discussed in Section \ref{res_dis}.
\section{Multiwavelength Observations and Data Analysis}\label{sec2}
\subsection{Fermi-LAT}
\label{subsec:fermi_lat}
Fermi is a space-based gamma-ray telescope launched in June 2008. The Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM) are the two instruments on board the telescope. With a maximum effective area (in the 1-10 GeV energy range) of $9500 \; cm^2$ at normal incidence, the LAT mainly operates in the 20 MeV-300 GeV energy range, although it is sensitive to energies outside this interval. The on-axis energy resolution is 9\%-15\% for 100 MeV-1 GeV gamma-rays. Multiple interleaved tungsten and silicon layers act as a converter-tracker system. The tungsten layers convert the gamma-rays into electron-positron pairs and the silicon-strip layers record the tracks of these charged particles to estimate the direction of the incident radiation. The energy of an incident gamma-ray is estimated from the energy deposited by the corresponding electron-positron pair on the calorimeter, made of CsI(Tl) crystals, at the base of the LAT. The instrument has a Field of View (FoV) of 2.4 sr along with an angular resolution of < 3.5\degree at 100 MeV and < 0.15\degree at 10 GeV (\cite{atwood2009large}). Cosmic rays are a major contributor to the background noise in the LAT operating energy range. The ratio of charged cosmic rays to gamma-ray detections for the LAT is in the range $10^3 - 10^5$ (\cite{ajello2021fermi}). The Anti-Coincidence Detector (ACD), which surrounds the converter-tracker layers, is responsible for differentiating between the two radiations.
Observations of PKS 0346-27 from MJD 58484 to MJD 59575 were selected for analysis, with a $10\degree$ circular Region of Interest (ROI) centered at the source (RA = 57.1589, Dec = -27.8204).
The Fermi-LAT data were analysed using the recommended fermitools\footnote{\url{https://github.com/fermi-lat/Fermitools-conda/}} package. The data in the energy range 100 MeV to 300 GeV were obtained for the period MJD 58484-59575. The photon data were filtered using the \textit{gtselect} tool with the constraints \textit{evclass=128} and \textit{evtype=3}. The zenith angle for the observations was restricted to less than 90 degrees to avoid contamination from the Earth limb. The time intervals were filtered using the constraint ‘$(DATA\_QUAL>0)\&\&(LAT\_CONFIG==1)$’ in the \textit{gtmktime} tool. The light curve was binned in 1-day intervals using the \textit{gtbin} tool and the exposure in $cm^2 s$ was computed using \textit{gtexposure}. The flux is then obtained by dividing the counts by the exposure.
Using the event file obtained in the gamma-ray light-curve analysis, a livetime cube (HEALPix table) was computed over the region of interest with the \textit{expCube} tool. An exposure map with a radius 10 degrees larger than the ROI was computed using the livetime cube and the \textit{expMap} function, with the instrument response function \textit{P8R3\_SOURCE\_V3} as input. A total of 131 point sources and 1 extended source were identified in the region of interest using the user-contributed tool \textit{make4FGLxml.py}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make4FGLxml.py}}. The background files \textit{gll\_iem\_v07.fits}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}} and \textit{iso\_P8R3\_SOURCE\_V3\_v1.txt}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}} were used for the background model. The models and the respective parameters of the nearby sources were stored in an \textit{xml} file and the diffuse response was computed. The $\gamma$-ray light curve shown in the top panel of Figure \ref{fig_broadband_lc} is obtained with a bin size of 1 day (24 hours). The gamma-ray spectral analysis is explained in Section \ref{sec:gamma_ray_SED}.
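For reference, the light-curve pipeline above can be scripted with the \texttt{gt\_apps} wrappers shipped with fermitools, as in the hedged sketch below; the file names and the MET time window are placeholders, and minor parameters are omitted.
\begin{verbatim}
from gt_apps import filter, maketime, evtbin

filter['infile'] = 'PKS0346_events.fits'       # merged photon file
filter['outfile'] = 'filtered.fits'
filter['ra'], filter['dec'], filter['rad'] = 57.1589, -27.8204, 10.0
filter['emin'], filter['emax'] = 100, 300000   # MeV
filter['zmax'] = 90
filter['evclass'], filter['evtype'] = 128, 3
filter.run()

maketime['evfile'] = 'filtered.fits'
maketime['scfile'] = 'spacecraft.fits'
maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'no'
maketime['outfile'] = 'gti.fits'
maketime.run()

evtbin['evfile'] = 'gti.fits'
evtbin['scfile'] = 'spacecraft.fits'
evtbin['outfile'] = 'lc_1day.fits'
evtbin['algorithm'] = 'LC'
evtbin['tbinalg'] = 'LIN'
evtbin['tstart'], evtbin['tstop'] = 567993600, 662256000  # MET, placeholder
evtbin['dtime'] = 86400                        # 1-day bins
evtbin.run()
# gtexposure is then run on lc_1day.fits to convert counts into fluxes
\end{verbatim}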
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/combined_lc.pdf}
\caption{Multi-wavelength light curve of PKS 0346-27 in the period MJD 58484-59575. The time period is further divided into Flares 1-5 for observing changes in spectral parameters over time. (Top panel) One-day binned $\mathrm{\gamma-ray}$ light curve obtained from Fermi-LAT data using the aperture photometry method in the 0.1-300 GeV energy range. (Middle panel) X-ray light curve generated from 30 Swift-XRT observations in the energy range 0.3-8 keV. (Bottom panel) Light curves for six Swift-UVOT bands (V, B, U, UVW1, UVM2, and UVW2) obtained from the same observations as Swift-XRT. Swift data for the period between MJD 58702 and MJD 59520 are not available.}
\label{fig_broadband_lc}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/rise_decay_histogram.pdf}
\caption{Statistical distribution of Rise ($\mathrm{R_t}$) and Decay ($\mathrm{D_t}$) times of the local peaks identified in the $\mathrm{\gamma-ray}$ light curve show in Figure \ref{fig_gamma_lc_fit}. }
\label{fig:rise_and_decay_time_distribution}
\end{figure}
\subsection{Swift-XRT}
The Neil Gehrels Swift observatory was launched in November 2004 aboard a Delta II rocket with three instruments on board: the Burst Alert Telescope (BAT), the X-Ray Telescope (XRT), and the Ultraviolet and Optical Telescope (UVOT). The Swift-XRT is a grazing-incidence Wolter-I telescope with an effective area of 110 $cm^2$ and a 0.2-10 keV energy range.
A total of 30 observations from Swift-XRT (see Table \ref{tab_observation_ids}) were available in the same time period as the Fermi observations, with exposure times spanning 1100 to 3800 seconds in the period MJD 58508 to MJD 58701. The lack of Swift observations between MJD 58702 and MJD 59520 is a limitation for the X-ray analysis. Software tools from the Heasoft package were used for the analysis.
The XRT data were analysed using the Xselect and Xspec tools available in the Heasoft software. The Level 1 data are passed to \textit{xrtpipeline}, resulting in the generation of Level 2 files used for further analysis. Using the event file and the Xselect tool, the image file is extracted, which is further used to create the source and background regions in the DS9 viewer. The source region file contains a circular 60 arcsec region and the background region file a circular 120 arcsec region adjacent to the source. Using Xselect again, both the light curves and the spectra were extracted for the source and background regions.
The auxiliary response file was generated using the \textit{xrtmkarf} tool with the image and the source spectrum files generated by Xselect as inputs. The \textit{quzcif} tool was used to find the corresponding redistribution matrix file. Using \textit{grppha}, the source and background files, the redistribution matrix file, and the auxiliary response file are combined into one spectrum file for the Xspec analysis. Xspec was used to model the X-ray spectra in the 0.3-8.0 keV energy range. An absorbed power-law model with $N_H = 8.16 \times 10^{19}\ \mathrm{cm^{-2}}$ was used to model the spectra.
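The spectral fit above can be reproduced with PyXspec along the lines of the sketch below; the grouped spectrum name is a placeholder and \texttt{tbabs} is one common choice for the Galactic absorption component.
\begin{verbatim}
from xspec import AllData, Model, Fit

AllData("source_grp.pha")          # RMF/ARF/background read from header keys
AllData.ignore("**-0.3 8.0-**")    # restrict to the 0.3-8.0 keV band

m = Model("tbabs*powerlaw")
m.TBabs.nH.values = 8.16e-3        # in 1e22 cm^-2, i.e. 8.16e19 cm^-2
m.TBabs.nH.frozen = True           # freeze at the Galactic value

Fit.perform()
print("photon index:", m.powerlaw.PhoIndex.values[0])
print("fit statistic / dof:", Fit.statistic, Fit.dof)
\end{verbatim}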
\subsection{Swift-UVOT}
In the same period as the Swift-XRT observations, a total of 166 observations were available for the Swift-UVOT analysis, with exposure times ranging from 31 to 1626 seconds. The image data files for all observations were combined using the \textit{uvotimsum} tool. Then, the source and background region files from the X-ray analysis were used as inputs to the \textit{uvotsource} tool along with the combined image file. The UVOT data are recorded in six bands, namely UVW2 (\SI{1928}{\angstrom}), UVM2 (\SI{2246}{\angstrom}), UVW1 (\SI{2600}{\angstrom}), U (\SI{3465}{\angstrom}), B (\SI{4392}{\angstrom}), and V (\SI{5468}{\angstrom}). The flux density obtained from \textit{uvotsource} is multiplied by the corresponding wavelength to obtain the energy flux in $\mathrm{erg \; cm^{-2} \; s^{-1}}$.
\begin{table}
\centering
\begin{tabular}{c c c}
\hline
\textbf{Observation ID} & \textbf{MJD} & \textbf{Exposure time (s)} \\ \hline
38373017 & 58508 & 3502.7 \\
38373018 & 58511 & 3768.5 \\
38373019 & 58533 & 1526.9 \\
38373020 & 58537 & 2630.3 \\
38373021 & 58540 & 2576.6 \\
38373022 & 58557 & 2998.4 \\
38373023 & 58565 & 2601.0 \\
38373024 & 58598 & 2877.3 \\
38373026 & 58630 & 1969.7 \\
38373027 & 58633 & 2009.7 \\
38373028 & 58640 & 1983.3 \\
38373029 & 58647 & 1108.4 \\
38373030 & 58653 & 1947.8 \\
38373031 & 58690 & 1988.3 \\
38373032 & 58693 & 1835.3 \\
38373033 & 58695 & 1785.2 \\
38373034 & 58698 & 2020.9 \\
38373035 & 58701 & 1898.0 \\
38373036 & 59522 & 1978.3 \\
38373037 & 59524 & 1842.9 \\
38373040 & 59526 & 2203.9 \\
38373043 & 59530 & 2146.2 \\
38373044 & 59533 & 569.2 \\
38373045 & 59537 & 1609.7 \\
38373046 & 59540 & 2101.1 \\
38373047 & 59543 & 2492.3 \\
38373048 & 59544 & 2173.8 \\
38373050 & 59546 & 1860.4 \\
38373051 & 59572 & 2028.4 \\
38373052 & 59574 & 2733.0 \\ \hline
\end{tabular}
\caption{Swift observations used for this analysis}
\label{tab_observation_ids}
\end{table}
\section{Multi-waveband light curve}\label{sec3}
\subsection{Fitting the light curve}
The gamma-ray light curve with 24-hr bins was divided into five flares corresponding to the largest peaks and troughs. The segments Flare 1, Flare 2, Flare 3, Flare 4, and Flare 5 span the time periods MJD 58494-58569, MJD 58570-58672, MJD 58673-58774, MJD 58775-58879, and MJD 59396-59575, respectively. Further, it is observed that each individual flare has its own substructure with multiple peaks (see Figure \ref{fig_gamma_lc_fit}). These are modelled using the sum-of-exponentials function given in Equation \ref{eqn:sum_of_exponetials}.
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/fit_lc_flare1b.pdf}}\\[-5ex]
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/fit_lc_flare2a.pdf}}
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/fit_lc_flare2b.pdf}}\\[-5ex]
\subfloat[]{\includegraphics[width = \textwidth, height=0.46\textwidth ]{plots/fit_lc_flare3.pdf}}
\caption{The local peaks in the $\mathrm{\gamma-ray}$ light curve for each flare are fitted with the Sum of Exponentials (SoE) function. The individual Flares 1, 2, and 4 are subdivided into two parts (a and b) in favor of a better fit. A total of 39 peaks were modeled and the corresponding rise and decay times were estimated. The fit for Flare 1a was not satisfactory due to data discontinuities and is excluded from the analysis.}
\label{fig_gamma_lc_fit}
\end{figure*}
\begin{figure*}\ContinuedFloat
\centering
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/fit_lc_flare4a.pdf}}
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/fit_lc_flare4b.pdf}}\\[-5ex]
\subfloat[]{\includegraphics[width = \textwidth]{plots/fit_lc_flare5.pdf}}
\caption{(Continued)}
\label{fig:my_label}
\end{figure*}
\begin{equation}
A(t) = A_0 + \sum_{i=1}^{n} 2C_i \left[ \exp\left(\frac{P_{ti}-t}{R_{ti}}\right) + \exp\left(\frac{t-P_{ti}}{D_{ti}}\right) \right]^{-1}
\label{eqn:sum_of_exponetials}
\end{equation}
where $A(t)$ is the flux, $A_0$ is the constant (continuum) flux level, the $C_i$ are the peak amplitudes, $P_{ti}$ is the peak time, and $R_{ti}$ and $D_{ti}$ are the rise and decay times, respectively, of the $i$th peak in the period considered for fitting. A total of 39 peaks were modelled, and the distribution of the obtained rise and decay times is shown in Figure \ref{fig:rise_and_decay_time_distribution}. The peaks were chosen such that the five-point average centred on the candidate point is higher than that around the immediately neighbouring points. The distributions of both rise and decay times are concentrated at small values, peaking around 2 days, with a tail extending up to 15 days.
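A minimal sketch of fitting a single peak with Equation \ref{eqn:sum_of_exponetials} is shown below, using scipy's \texttt{curve\_fit} on a synthetic daily-binned light curve; the initial guesses and the toy data are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def soe_peak(t, A0, C, Pt, Rt, Dt):
    # constant level plus one exponential rise/decay peak, Eq. (1)
    return A0 + 2.0 * C / (np.exp((Pt - t) / Rt) + np.exp((t - Pt) / Dt))

t = np.arange(58494.0, 58569.0, 1.0)                # MJD, 1-day bins
rng = np.random.default_rng(0)
flux = soe_peak(t, 2.0, 8.0, 58530.0, 3.0, 5.0) + rng.normal(0, 0.3, t.size)

p0 = [flux.min(), flux.max() - flux.min(), t[np.argmax(flux)], 2.0, 2.0]
popt, pcov = curve_fit(soe_peak, t, flux, p0=p0)
A0, C, Pt, Rt, Dt = popt
print(f"peak at MJD {Pt:.1f}: rise {Rt:.2f} d, decay {Dt:.2f} d")
\end{verbatim}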
\subsection{Multi-wavelength variability}
\label{section:variability}
The variability of the light curve provides an indirect measurement of the size and location of the emission region where the broadband emission is produced.
Fast flux variability on timescales of hours or minutes (\cite{Goyal_2017}, \cite{Goyal_2018}) suggests a very small emission region located very close to the supermassive black hole (SMBH). Many models have been proposed to explain this. The most widely accepted is the shock-in-jet model (\cite{1985ApJ...298..114M}), where a shock can be produced at the base of the jet by local fluctuations. Another possible explanation, proposed recently, is magnetic reconnection, which happens at larger distances down the jet (\cite{shukla2020gamma}).
The location of the emission region is another long-standing problem in blazar physics. In some sources, it has been found that, under the one-zone leptonic scenario, the emission region is generally located within the boundary of the broad-line region (BLR). However, many studies show that the emission region can also be located far down the jet, outside the BLR.
The variability time scale can be estimated using the following expression,
\begin{equation}
F_2 = F_1 2^{(t_2 - t_1)/t_d}
\end{equation}
where F$_2$ and F$_1$ are the fluxes measured at times t$_2$ and t$_1$, and t$_d$ is the flux doubling/halving time. We calculated the variability timescales for all the wavebands shown in Figure \ref{fig_broadband_lc}. The minimum variability time for the $\gamma$-ray light curve is 1.343 days. The corresponding value for the X-ray light curve is 0.102 days. Performing similar calculations on the Swift-UVOT data for the wavebands V (5468 \AA), B (4392 \AA), U (3465 \AA), UVW1 (2600 \AA), UVM2 (2246 \AA), and UVW2 (1928 \AA) results in minimum variability times of 0.102 days, 0.131 days, 0.160 days, 0.105 days, 0.156 days, and 0.144 days, respectively. For the distribution of $t_d$ values see Appendix \ref{appendix_sec1}.
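Inverting the expression above gives $t_d = (t_2 - t_1)\,\ln 2 / \ln(F_2/F_1)$. The sketch below scans consecutive light-curve points for the minimum doubling/halving time; the $3\sigma$ significance cut on the flux change is an illustrative choice.
\begin{verbatim}
import numpy as np

def doubling_times(t, f, df):
    td = []
    for i in range(len(t) - 1):
        # require a significant (>3 sigma) change to suppress noise
        if abs(f[i + 1] - f[i]) < 3.0 * np.hypot(df[i], df[i + 1]):
            continue
        td.append(abs((t[i + 1] - t[i]) * np.log(2.0)
                      / np.log(f[i + 1] / f[i])))
    return np.array(td)

t = np.array([0.0, 1.0, 2.0, 3.0])        # toy light curve
f = np.array([1.0, 2.5, 2.0, 0.8])
df = 0.05 * f
print(doubling_times(t, f, df).min())     # minimum variability time
\end{verbatim}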
\begin{table*}
\centering
{\scriptsize
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c c c c c c c c|}
\hline
\multicolumn{8}{|c|}{Power Law} \\ \hline
Flare & $\mathrm{F_{0.1-300GeV}}$ & Prefactor ($N_0$) & Index($\gamma$) & Scale ($E_0$)
& & TS & TS$_{curve}$\\
& ($\mathrm{10^{-7}\:ph\:cm^{-2}\:s^{-1}}$) & ($\mathrm{10^{-10}\:ph\:cm^{-2}\:s^{-1}\: MeV^{-1}}$) & & ($\mathrm{MeV}$) & & & \\ \hline
1 & 9.513 $\pm$ 0.008 & 2.022 $\pm$ 0.001 & -2.018 $\pm$ 0.0005 & \multirow{5}{*}{680.1} & & 14525.93& - \\
2 & 11.085 $\pm$ 0.049 & 2.404 $\pm$ 0.009 & -1.996 $\pm$ 0.003 & & & 25411.73& - \\
3 & 7.212 $\pm$ 0.046 & 1.501 $\pm$ 0.008 & -2.040 $\pm$ 0.004 & & & 13420.26 & -\\
4 & 6.189 $\pm$ 0.053 & 1.335 $\pm$ 0.009 & -2.002 $\pm$ 0.006 & & & 11105.37& - \\
5 & 3.300 $\pm$ 0.067 & 0.714 $\pm$ 0.012 & -2.001 $\pm$ 0.013 & & & 8909.44&- \\
\hline
\multicolumn{8}{|c|}{Broken Power-Law} \\ \hline
Flare & $\mathrm{F_{0.1-300GeV}}$ & Prefactor ($N_0$) & Index1 ($\gamma_1$) & Index2 ($\gamma_2$) & Break Value ($E_b$) & TS & TS$_{curve}$ \\
& ($\mathrm{10^{-7}\:ph\:cm^{-2}\:s^{-1}}$) & ($\mathrm{10^{-10}\:ph\:cm^{-2}\:s^{-1}\: MeV^{-1}}$) & & & ($\mathrm{MeV}$) & &\\ \hline
1 & 9.133 $\pm$ 0.022 & 0.3721 $\pm$ 0.0005 & -1.924 $\pm$ 0.001 & -2.234 $\pm$ 0.003 & 1692 $\pm$ 1.122 & 14447.65 &-156.56 \\
2 & 10.287 $\pm$ 0.024 & 1.345 $\pm$ 0.002 & -1.821 $\pm$ 0.001 & -2.207 $\pm$ 0.002 & 998.3 $\pm$ 0.671 & 23525.53 &-3772.4 \\
3 & 6.888 $\pm$ 0.036 & 1.352 $\pm$ 0.004 & -1.894 $\pm$ 0.003 & -2.181 $\pm$ 0.005 & 766.7 $\pm$ 1.143 & 13411.65 & -17.22\\
4 & 5.564 $\pm$ 0.003 & 0.7686 $\pm$ 0.003 & -1.781 $\pm$ 0.0003 & -2.235 $\pm$ 0.0006 & 998.1 $\pm$ 0.1753 & 10659.25 & -892.24\\
5 & 3.100 $\pm$ 0.085 & 0.390 $\pm$ 0.015 & -1.844 $\pm$ 0.026 & -2.189 $\pm$ 0.036 & 1005.0 $\pm$ 19.7 & 8931.38 & 43.88\\
\hline
\multicolumn{8}{|c|}{Log-Parabola} \\ \hline
Flare & $\mathrm{F_{0.1-300GeV}}$ & Norm ($N_0$) & $\alpha$ & $\beta$ & $E_b$ & TS & TS$_{curve}$ \\
& ($\mathrm{10^{-7}\:ph\:cm^{-2}\:s^{-1}}$) & ($\mathrm{10^{-10}\:ph\:cm^{-2}\:s^{-1}\: MeV^{-1}}$) & & & ($\mathrm{MeV}$) & &\\ \hline
1 & 9.048 $\pm$ 0.056 & 2.185 $\pm$ 0.008 & 1.978 $\pm$ 0.003 & 0.5177 $\pm$ 0.0159 & \multirow{5}{*}{680.1} & 14461.04 & -129.78 \\
2 & 10.450 $\pm$ 0.103 & 2.673 $\pm$ 0.019 & 1.949 $\pm$ 0.006 & 0.709 $\pm$ 0.0333 & & 23534.64& -3754.18 \\
3 & 6.798 $\pm$ 0.014 & 1.647 $\pm$ 0.002 & 1.999 $\pm$ 0.001 & 0.6499 $\pm$ 0.0062 & & 13386.68& -67.16 \\
4 & 5.269 $\pm$ 0.004 & 1.486 $\pm$ 0.001 & 1.887 $\pm$ 0.001 & -0.998 $\pm$ 0.002 & & 10425.39&-1359.96 \\
5 & 2.900 $\pm$ 0.027 & 0.789 $\pm$ 0.005 & 1.912 $\pm$ 0.006 & 0.836 $\pm$ 0.028 & & 8560.39& -698.1 \\
\hline
\multicolumn{8}{|c|}{PLExpCutoff} \\ \hline
Flare & $\mathrm{F_{0.1-300GeV}}$ & Prefactor ($N_0$) & Index ($\gamma$) & Scale ($E_0$) & Cutoff ($E_c$) & TS & TS$_{curve}$ \\
& ($\mathrm{10^{-7}\:ph\:cm^{-2}\:s^{-1}}$) & ($\mathrm{10^{-10}\:ph\:cm^{-2}\:s^{-1}\: MeV^{-1}}$) & & & ($\mathrm{10^4\:MeV}$) && \\ \hline
1 & 9.211 $\pm$ 0.208 & 2.154 $\pm$ 0.042 & -1.938 $\pm$ 0.018 & \multirow{5}{*}{680.1} & 2.992 $\pm$ 0.172 & 14486.28 & -79.3 \\
2 & 10.688 $\pm$ 0.002 & 2.665 $\pm$ 0.0004 & -1.884 $\pm$ 0.0001 & & 1.860 $\pm$ 0.002 & 25454.22&84.98 \\
3 & 6.905 $\pm$ 0.010 & 1.629 $\pm$ 0.002 & -1.937 $\pm$ 0.001 & & 2.054 $\pm$ 0.020 & 13347.70&-145.12 \\
4 & 5.843 $\pm$ 0.006 & 1.467 $\pm$ 0.001 & -1.873 $\pm$ 0.001 & & 1.806 $\pm$ 0.009 & 11010.50&-189.74 \\
5 & 3.100 $\pm$ 0.006 & 0.800 $\pm$ 0.001 & -1.852 $\pm$ 0.001 & & 1.444 $\pm$ 0.001 & 8904.12&-10.64 \\
\hline
\end{tabular}
\caption{Spectral parameters of $\mathrm{\gamma-ray}$ SED fitted with four different models (Power Law, Broken Power-Law, Log-Parabola, and PLExpCutoff). See Figure \ref{fig_gamma_sed} for the model plots.}
\label{tab:gamma_ray_sed_param}
\end{center}
}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.96\textwidth]{plots/SED_gamma-ray_combined.pdf}
\caption{$\mathrm{\gamma-ray}$ SEDs obtained from Fermi-LAT data for each flare. The data are fitted with Power Law, Log-Parabola, Broken Power-Law, and Power Law with Exponential Cutoff models. The 0.1-300 GeV energy range is divided into 28 bins using \href{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/likeSED.py}{likeSED.py} (a Fermi user-contributed tool), resulting in one spectral point for each bin. The five plots in this figure have common x and y axes. The results of the fit are given in Table \ref{tab:gamma_ray_sed_param}.}
\label{fig_gamma_sed}
\end{figure*}
\section{Gamma-ray Spectrum}\label{sec:gamma_ray_SED}
The $\gamma$-ray spectral analysis is performed following the unbinned likelihood method.
The analysis was carried out using the user-contributed tool \textit{likeSED.py}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/likeSED.py}} with the NewMinuit algorithm for Flares 1-5. During the modelling, the parameters of sources outside the 10-degree ROI were kept fixed, while those inside were allowed to vary. The spectrum of the source (4FGL J0348.5-2749) was then divided into 28 bins in the 0.1-300 GeV energy range, giving the same number of spectral points. The Power-Law, Log-Parabola, Broken Power-Law, and Power-Law with Exponential Cut-off (PLExpCutoff) models (see Equations \ref{eqn:power_law}-\ref{eqn:plexpcutoff}) were used for fitting these points. The parameters for each of the fits are given in Table \ref{tab:gamma_ray_sed_param} and the corresponding model plots are shown in Figure \ref{fig_gamma_sed}.
\begin{enumerate}
\item Power Law:
\begin{equation}
\frac{dN}{dE} = N_0 (E/E_0)^{\gamma}
\label{eqn:power_law}
\end{equation}
\item Broken Power-Law:
\begin{equation}
\frac{dN}{dE} = N_0 \times
\begin{cases}
(E/E_b)^{\gamma_1},& \text{if } E < E_b\\
(E/E_b)^{\gamma_2}, & \text{otherwise}
\end{cases}
\label{eqn:broken_power_law}
\end{equation}
\item Log-Parabola:
\begin{equation}
\frac{dN}{dE} = N_0 \left(\frac{E}{E_b}\right)^{-(\alpha + \beta \log{(E/E_b)})}
\label{eqn:log_parabola}
\end{equation}
\item PLExpCutoff:
\begin{equation}
\frac{dN}{dE} = N_0 \left(\frac{E}{E_0}\right)^{\gamma_1} \exp{(-(E/E_c))}
\label{eqn:plexpcutoff}
\end{equation}
\end{enumerate}
The Test Statistic (TS) was chosen as a measure of the quality of the model fit and is computed from the likelihood values obtained in the modelling: $\mathrm{TS = -2\ln(L_{max,0}/L_{max,1})}$, where $\mathrm{L_{max,0}}$ is the maximum likelihood value without a source at the position and $\mathrm{L_{max,1}}$ is the maximum likelihood value with the source.
We have also examined the curvature in the spectrum by estimating TS$_{curve}$ = 2 [log $\mathcal{L}(LP/BPL)$ - log $\mathcal{L}(PL)$], where $\mathcal{L}$ is the likelihood function (\citealt{2012ApJS..199...31N}). In the convention adopted here, a larger value with a negative sign suggests a better fit. On this basis, we note that Flares 1 and 2 are better fitted by the BPL, while Flares 3, 4, and 5 are best represented by the LP. Similar results were also reported for other FSRQs by \citet{Britto2016} and \citet{Prince2017}.
We also notice in Table \ref{tab:gamma_ray_sed_param} that, in the power-law case, the spectral index becomes progressively softer as the source goes from the low to the high flux state, suggesting a softer-when-brighter behaviour. This behaviour is rare in FSRQs; most FSRQs show a harder-when-brighter behaviour, as seen in \citet{Britto2016} and \citet{Prince2017}.
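For reference, the four spectral shapes of Equations \ref{eqn:power_law}-\ref{eqn:plexpcutoff} can be written as plain numpy functions, as in the sketch below; the parameter values are illustrative, and the natural logarithm is assumed in the log-parabola.
\begin{verbatim}
import numpy as np

def power_law(E, N0, gamma, E0=680.1):
    return N0 * (E / E0) ** gamma

def broken_power_law(E, N0, g1, g2, Eb):
    return N0 * np.where(E < Eb, (E / Eb) ** g1, (E / Eb) ** g2)

def log_parabola(E, N0, alpha, beta, Eb=680.1):
    # natural log assumed for the curvature term
    return N0 * (E / Eb) ** (-(alpha + beta * np.log(E / Eb)))

def pl_exp_cutoff(E, N0, gamma, Ec, E0=680.1):
    return N0 * (E / E0) ** gamma * np.exp(-E / Ec)

E = np.logspace(2, 5.5, 100)               # 0.1-300 GeV in MeV
dnde = log_parabola(E, 1.6e-10, 2.0, 0.65)
\end{verbatim}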
\section{Multi-frequency SED modelling }\label{sec4}
\begin{table*}
\setlength{\extrarowheight}{3pt}
\begin{tabular}{|cccccc|}
\hline
\multicolumn{1}{|c|}{Parameter} & \multicolumn{1}{c|}{Units} & \multicolumn{1}{c|}{Flare 1} & \multicolumn{1}{c|}{Flare 2} & \multicolumn{1}{c|}{Flare 3} & Flare 5 \\ \hline
\multicolumn{1}{|c|}{$\mathrm{\gamma_{min}}$} & \multicolumn{1}{c|}{Lorentz-factor} & \multicolumn{1}{c|}{1.000E+02} & \multicolumn{1}{c|}{1.748E+02} & \multicolumn{1}{c|}{1.000E+02} & 1.000E+02 \\
\multicolumn{1}{|c|}{$\mathrm{\gamma_{max}}$} & \multicolumn{1}{c|}{Lorentz-factor} & \multicolumn{1}{c|}{5.000E+05} & \multicolumn{1}{c|}{1.000E+06} & \multicolumn{1}{c|}{1.800E+06} & 5.000E+05 \\
\multicolumn{1}{|c|}{N} & \multicolumn{1}{c|}{$1/cm^3$} & \multicolumn{1}{c|}{4.669E+01} & \multicolumn{1}{c|}{2.053E+02} & \multicolumn{1}{c|}{7.128E+01} & 7.226E+01 \\
\multicolumn{1}{|c|}{$\mathrm{\gamma_{cut}}$} & \multicolumn{1}{c|}{Lorentz-factor} & \multicolumn{1}{c|}{1.489E+04} & \multicolumn{1}{c|}{1.179E+04} & \multicolumn{1}{c|}{1.055E+04} & 4.921E+04 \\
\multicolumn{1}{|c|}{p} & \multicolumn{1}{c|}{Low Energy Spectral Slope} & \multicolumn{1}{c|}{2.506E+00} & \multicolumn{1}{c|}{2.454E+00} & \multicolumn{1}{c|}{2.423E+00} & 2.374E+00 \\
\multicolumn{1}{|c|}{R} & \multicolumn{1}{c|}{cm} & \multicolumn{1}{c|}{1.190E+17} & \multicolumn{1}{c|}{2.273E+16} & \multicolumn{1}{c|}{8.655E+16} & 1.263E+17 \\
\multicolumn{1}{|c|}{$\mathrm{R_H}$} & \multicolumn{1}{c|}{cm} & \multicolumn{1}{c|}{1.000E+17} & \multicolumn{1}{c|}{1.000E+17} & \multicolumn{1}{c|}{1.000E+17} & 1.000E+17 \\
\multicolumn{1}{|c|}{B} & \multicolumn{1}{c|}{gauss} & \multicolumn{1}{c|}{3.402E-02} & \multicolumn{1}{c|}{1.598E-01} & \multicolumn{1}{c|}{1.048E-01} & 2.418E-02 \\
\multicolumn{1}{|c|}{$\mathrm{\delta_D}$} & \multicolumn{1}{c|}{Lorentz-factor} & \multicolumn{1}{c|}{3.903E+01} & \multicolumn{1}{c|}{4.075E+01} & \multicolumn{1}{c|}{2.404E+01} & 2.527E+01 \\
\multicolumn{1}{|c|}{$\mathrm{z_{cosm}}$} & \multicolumn{1}{c|}{Redshift} & \multicolumn{1}{c|}{9.910E-01} & \multicolumn{1}{c|}{9.910E-01} & \multicolumn{1}{c|}{9.910E-01} & 9.910E-01 \\
\multicolumn{1}{|c|}{$\mathrm{\tau_{BLR}}$} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{5.285E-01} & \multicolumn{1}{c|}{2.888E-01} & \multicolumn{1}{c|}{7.232E-05} & 1.000E+00 \\
\multicolumn{1}{|c|}{$\mathrm{R_{BLR_{in}}}$} & \multicolumn{1}{c|}{cm} & \multicolumn{1}{c|}{1.000E+18} & \multicolumn{1}{c|}{1.000E+18} & \multicolumn{1}{c|}{1.000E+18} & 1.000E+18 \\
\multicolumn{1}{|c|}{$\mathrm{R_{BLR_{out}}}$} & \multicolumn{1}{c|}{cm} & \multicolumn{1}{c|}{2.000E+18} & \multicolumn{1}{c|}{2.000E+18} & \multicolumn{1}{c|}{2.000E+18} & 2.000E+18 \\
\multicolumn{1}{|c|}{$\mathrm{L_{Disk}}$} & \multicolumn{1}{c|}{erg/s} & \multicolumn{1}{c|}{7.786E+41} & \multicolumn{1}{c|}{2.542E+43} & \multicolumn{1}{c|}{2.573E+43} & 1.360E+42 \\
\multicolumn{1}{|c|}{$\mathrm{T_{Disk}}$} & \multicolumn{1}{c|}{K} & \multicolumn{1}{c|}{2.640E+04} & \multicolumn{1}{c|}{2.533E+05} & \multicolumn{1}{c|}{1.913E+05} & 1.358E+05 \\ \hline
\multicolumn{6}{|c|}{Energy Densities} \\ \hline
\multicolumn{1}{|c|}{Parameter} & \multicolumn{1}{c|}{Units} & \multicolumn{1}{c|}{Flare 1} & \multicolumn{1}{c|}{Flare 2} & \multicolumn{1}{c|}{Flare 3} & Flare 5 \\ \hline
\multicolumn{1}{|c|}{$\mathrm{U_{BLR}}$} & \multicolumn{1}{c|}{$\mathrm{erg/cm^3}$} & \multicolumn{1}{c|}{$1.6602 \times 10^{-3}$} & \multicolumn{1}{c|}{$3.229 \times 10^{-2}$} & \multicolumn{1}{c|}{$2.847 \times 10^{-6}$} & $2.300 \times 10^{-3}$ \\
\multicolumn{1}{|c|}{$\mathrm{U_{Disk}}$} & \multicolumn{1}{c|}{$\mathrm{erg/cm^3}$} & \multicolumn{1}{c|}{$1.679 \times 10^{-3}$} & \multicolumn{1}{c|}{$5.844 \times 10^{-2}$} & \multicolumn{1}{c|}{$2.728 \times 10^{-2}$} & $1.550 \times 10^{-3}$ \\
\multicolumn{1}{|c|}{$\mathrm{U_e}$} & \multicolumn{1}{c|}{$\mathrm{erg/cm^3}$} & \multicolumn{1}{c|}{$9.9986 \times 10^{-3}$} & \multicolumn{1}{c|}{$7.554 \times 10^{-2}$} & \multicolumn{1}{c|}{$1.594 \times 10^{-2}$} & $1.880 \times 10^{-2}$ \\
\multicolumn{1}{|c|}{$\mathrm{U_B}$} & \multicolumn{1}{c|}{$\mathrm{erg/cm^3}$} & \multicolumn{1}{c|}{$4.605 \times 10^{-5}$} & \multicolumn{1}{c|}{$1.017 \times 10^{-3}$} & \multicolumn{1}{c|}{$4.370 \times 10^{-4}$} & $2.325 \times 10^{-5}$ \\ \hline
\multicolumn{6}{|c|}{Jet Power} \\ \hline
\multicolumn{1}{|c|}{Parameter} & \multicolumn{1}{c|}{Units} & \multicolumn{1}{c|}{Flare 1} & \multicolumn{1}{c|}{Flare 2} & \multicolumn{1}{c|}{Flare 3} & Flare 5 \\ \hline
\multicolumn{1}{|c|}{$\mathrm{P_{jet}}$} & \multicolumn{1}{c|}{$10^{45}$ ergs/s} & \multicolumn{1}{c|}{20.403} & \multicolumn{1}{c|}{6.185} & \multicolumn{1}{c|}{6.678} & 18.053 \\
\multicolumn{1}{|c|}{$\mathrm{P_e}$} & \multicolumn{1}{c|}{$10^{45}$ ergs/s} & \multicolumn{1}{c|}{20.309} & \multicolumn{1}{c|}{6.103} & \multicolumn{1}{c|}{6.499} & 18.030 \\
\multicolumn{1}{|c|}{$\mathrm{P_B}$} & \multicolumn{1}{c|}{$10^{45}$ ergs/s} & \multicolumn{1}{c|}{0.0935} & \multicolumn{1}{c|}{0.0821} & \multicolumn{1}{c|}{0.178} & 0.0223 \\ \hline
\end{tabular}
\caption{\label{tab:SED-model-parameters} The parameters obtained from modelling the multi-frequency SEDs of Flares 1, 2, 3, and 5 using JetSeT. The Power-Law with Cutoff (PLC) model was chosen for the radiating electrons. The energy densities and jet powers due to electrons and emission region magnetic field are subsequently computed. Additionally, the energy density of the Broad Line Region (BLR) and the accretion disk are computed in the rest frame of the emission region.}
\end{table*}
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/flare1_SEDplot_SSC-EC-best-fit-lsb-plc.pdf}}
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/flare2_SEDplot_SSC-EC-best-fit-lsb-plc.pdf}}\\ \subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/flare3_SEDplot_SSC-EC-best-fit-lsb-plc.pdf}}
\subfloat[]{\includegraphics[width = 0.5 \textwidth]{plots/flare5_SEDplot_SSC-EC-best-fit-lsb-plc.pdf}}
\caption{Multi-frequency SEDs obtained from Fermi-LAT, Swift-XRT, and Swift-UVOT for Flares 1, 2, 3, and 5 are modelled using \href{https://jetset.readthedocs.io/en/1.1.2/}{JetSeT}. The y-axis range for Flare 3 differs from the rest of the plots to accommodate the weaker EC BLR component.}
\label{fig_broadband_sed_fit}
\end{figure*}
The emission mechanisms of a blazar can be understood better from the modelling of its broadband spectral energy distribution. We obtained the broadband SED using data from Fermi-LAT, Swift-XRT, and Swift-UVOT for each of the flare periods indicated in Figure \ref{fig_broadband_lc}, except Flare 4, due to the lack of Swift data in that period. A typical broadband SED of a blazar has a two-hump structure: the lower-energy hump is associated with Synchrotron emission, while Inverse Compton scattered photons are assumed to be responsible for the higher-energy hump.
We used the JetSeT\footnote{\url{https://andreatramacere.github.io/jetsetdoc/html/index.html}} (\cite{2020ascl.soft09001T}, \cite{2011ApJ...739...66T}, \cite{2009A&A...501..879T}) code for modelling the broadband SED. The Power-Law with Cut-off (PLC) model was considered for the electron distribution, with low-energy spectral slope $p$. The non-thermal Synchrotron emission is assumed to be responsible for the lower-energy emission, i.e., the first hump in the SED. The higher-energy photons are produced via the Synchrotron Self-Compton (SSC) and External Compton (EC) emission mechanisms. These two mechanisms are special cases of the Inverse Compton mechanism and are assumed to be responsible for the second hump. The seed photons for SSC emission are the Synchrotron photons, which are produced when electrons and positrons are accelerated in the strong magnetic field of the emission region near the central black hole. For EC emission, the seed photons are mainly sourced from (1) the direct emission from the disk, (2) the reprocessed emission from the BLR in the optical-UV frequency range, and (3) the reprocessed emission in the infrared region from the dusty torus.
The model considers a spherically symmetric region with electron density $N$ as the emission region, with radius $R$, an entangled magnetic field $B$, and a bulk Lorentz factor $\Gamma$. The region subtends a small angle $\theta$ and is located at a distance $\mathrm{R_H}$ from the central black hole. The corresponding beaming factor is $\mathrm{\delta_D = 1/(\Gamma(1-\beta \cos{\theta}))}$. The synchrotron emission wavelength depends on the speed of the relativistic electrons and is bounded by the Lorentz factors $\mathrm{\gamma_{min}}$ and $\mathrm{\gamma_{max}}$. The Broad Line Region (BLR) has inner and outer radii $\mathrm{(R_{BLR_{in}}}$ \& $\mathrm{{R_{BLR_{out}})}}$. The accretion disk has a luminosity $\mathrm{L_{Disk}}$ and a temperature $\mathrm{T_{Disk}}$. The Synchrotron emission was restricted to the frequency range $10^{10}-10^{18}$ Hz for effective fitting. Similarly, we constrained the Inverse Compton frequency range to $10^{21}-10^{29}$ Hz. In the case of FSRQs, EC is generally the dominant mechanism for $\gamma$-ray emission, as is evident for this source as well. The seed photons for Inverse Compton scattering are provided by the accretion disk, as shown in Figure \ref{fig_broadband_sed_fit}.
The energy density of the accretion disk ($U'_{Disk}$) in the frame of the emission region (\cite{dermer2009high}) is calculated using
\begin{equation}
U'_{Disk} = \frac{0.207 \; R_g \; L_{Disk}}{\pi \; c \; R_H^3 \; \Gamma^3}
\label{eqn:energy_density_disk}
\end{equation}
where $R_g$ is the Schwarzschild radius of the central black hole. The energy density of the Broad Line Region ($U'_{BLR}$) in the frame of the emission region is estimated using the relation (\cite{10.1093/mnras/280.1.67}, \cite{10.1111/j.1365-2966.2008.13360.x})
\begin{equation}
U'_{BLR} \sim \frac{17}{12} \; \frac{a \; L_{Disk} \; \Gamma^2}{4\pi \; R_{BLR}^2 \; c}
\label{eqn:energy_density_BLR}
\end{equation}
where $a$ is the covering factor, i.e., the fraction of the reprocessed isotropic radiation incident on the BLR, assumed to be equal to 0.1. The total power of the jet is estimated from the energy densities of leptons ($U'_e$), cold protons ($U'_p$), and the magnetic field ($U'_B$) in the co-moving frame of the jet using the relation
\begin{equation}
P_{jet} = \pi R^2 c \Gamma^2 (U'_e + U'_p + U'_B)
\label{eqn:jet_power}
\end{equation}
while the powers due to the magnetic field ($P_B$) and the leptons ($P_e$) are given by
\begin{equation}
P_B = \frac{1}{8} \;c \; R^2 \; \Gamma^2 \; B^2
\end{equation}
\begin{equation}
P_e = \frac{3c\Gamma^2}{4 R} \int_{E_{min}}^{E_{max}} E\; PLC(E) dE
\end{equation}
where $PLC(E)$ is the Power-Law with Cutoff electron spectrum as a function of energy and $(E_{min}, E_{max})$ represents the energy range of the leptons considered for modelling. The respective values of the parameters, energy densities, and jet powers are given in Table \ref{tab:SED-model-parameters}. The modelling was performed for different values of $\gamma_{min}$ and $\gamma_{max}$, which were adjusted for the best fit. The $R_H$, $R_{BLR_{in}}$, and $R_{BLR_{out}}$ are set to the default values given in JetSeT. The jet power ($P_{jet}$) decreases for Flares 2 and 3, even though there is an increase in accretion disk luminosity ($L_{Disk}$), mostly due to a decrease in the estimated size of the emission region ($R$) by an order of magnitude (see equation \ref{eqn:jet_power}). The $\gamma_{cut}$ value remains of the same order for the entire period of analysis, indicating a similar cut-off energy. In the low-energy region, the spectral slope ($p$) fluctuates minimally around 2.4, which can be attributed to the stability of seed photon emissions. Swift-XRT and Swift-UVOT data for the period of Flare 4 are not available for broadband SED modelling.
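For reference, the energy-density and jet-power relations above are simple enough to reproduce outside JetSeT. The following is a minimal sketch in Python (CGS units; the helper functions and variable names are ours, for illustration, with input values to be read from Table \ref{tab:SED-model-parameters}):
\begin{verbatim}
import numpy as np

c = 2.998e10  # speed of light [cm/s]

def u_disk(L_disk, R_g, R_H, Gamma):
    # disk energy density in the emission-region frame
    return 0.207 * R_g * L_disk / (np.pi * c * R_H**3 * Gamma**3)

def u_blr(L_disk, R_blr, Gamma, a=0.1):
    # BLR energy density in the emission-region frame
    return (17.0 / 12.0) * a * L_disk * Gamma**2 \
        / (4.0 * np.pi * R_blr**2 * c)

def p_B(B, R, Gamma):
    # Poynting power: pi R^2 c Gamma^2 * B^2/(8 pi)
    return 0.125 * c * R**2 * Gamma**2 * B**2

def p_jet(R, Gamma, u_e, u_p, u_B):
    # total jet power from the comoving energy densities
    return np.pi * R**2 * c * Gamma**2 * (u_e + u_p + u_B)
\end{verbatim}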
The variability time scales from section \ref{section:variability} can be used to estimate the size of the emission region using $R' = ct_{var}\delta_D/(1+z)$. For Flare 1, using the minimum $\gamma$-ray variability time ($t_{var} = 1.343$ days) we get a value of $6.819 \times 10^{16}\;\mathrm{cm}$ for $R'$, and the corresponding value from the modelling is $1.190 \times 10^{17}\;\mathrm{cm}$. Similar calculations for Flares 2, 3, and 5 give the values $7.120 \times 10^{16}\;\mathrm{cm}$, $4.118 \times 10^{16}\;\mathrm{cm}$, and $4.200 \times 10^{16}\;\mathrm{cm}$ respectively, which are again compatible with the values obtained from the modelling ($\sim 10^{17}\;\mathrm{cm}$).
The distance of the emission region from the central black hole can be estimated from $d \sim c \delta_D^2 t_{var}/(1+z)$. Taking $t_{var} = 1.343$ days, the estimated $d$ value for Flare 1 is $2.661 \times 10^{18}\;\mathrm{cm}$ ($R_H = 1.0 \times 10^{17}\;\mathrm{cm}$ from the modelling). The estimates for the rest of the flares are of the same order.
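Both light-travel-time estimates are one-line formulas; a minimal sketch in Python follows. The Doppler factor $\delta_D \approx 39$ is the value implied by the Flare 1 numbers quoted above and should be replaced by the fitted value for each flare:
\begin{verbatim}
c = 2.998e10            # cm/s
day = 86400.0           # s
z = 0.991
t_var = 1.343 * day     # minimum gamma-ray variability time
delta_D = 39.0          # Doppler factor (assumed, see text)

R_size = c * t_var * delta_D / (1.0 + z)  # emission-region size
d = c * delta_D**2 * t_var / (1.0 + z)    # distance from the BH
print("R' = %.3e cm, d = %.3e cm" % (R_size, d))
\end{verbatim}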
\section{Power Spectral Density}\label{sec5}
We have derived the power spectral density (PSD) using the discrete Fourier transform (DFT). The shape of the PSD is best fitted with a power law using the ``power spectrum response'' (PSRESP) method.
For an evenly sampled light curve $f(t_{i})$, the rms-normalized PSD is the squared modulus of the DFT. Assuming the light curve is sampled at time steps $t_{i}$ and has a total of $N$ points, the rms-normalized PSD can be defined as
\begin{equation}
P(\nu_k) = \frac{2 T}{\mu^2 N^2} \Bigg\{ \Big[ \sum_{i=1}^{N} f(t_{i}) \cos(2\pi\nu_{k}t_i) \Big]^2 + \Big[ \sum_{i=1}^{N} f(t_{i}) \sin(2\pi\nu_{k}t_i) \Big]^2 \Bigg \}
\end{equation}
where $\mu$ is the mean of the light curve and $\nu_k = k/T$, $k = 1, 2, 3, \ldots, N/2$, with the maximum frequency being the Nyquist frequency, $\nu_{Nyq} = N/(2T)$.
The constant noise level is also estimated in the form of normalised Poisson noise using the relation
\begin{equation}
P_{Poisson} = \frac{\sigma^2_{err}}{\mu^2(\nu_{Nyq} - \nu_{min})}
\end{equation}
where $\sigma^2_{err}$ is the mean variance of the measurement uncertainty.
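Both relations above can be implemented directly. The following minimal sketch in Python computes the rms-normalized periodogram and the Poisson-noise floor of an evenly sampled light curve (array names are illustrative):
\begin{verbatim}
import numpy as np

def rms_psd(t, f, f_err):
    # rms-normalized periodogram of an evenly sampled light curve
    N = len(f)
    T = t[-1] - t[0]
    mu = f.mean()
    nu = np.arange(1, N // 2 + 1) / T  # nu_k = k/T up to nu_Nyq
    P = np.empty(len(nu))
    for k, nu_k in enumerate(nu):
        cs = np.sum(f * np.cos(2 * np.pi * nu_k * t))
        sn = np.sum(f * np.sin(2 * np.pi * nu_k * t))
        P[k] = 2.0 * T / (mu**2 * N**2) * (cs**2 + sn**2)
    # constant Poisson-noise level
    P_noise = np.mean(f_err**2) / (mu**2 * (nu[-1] - nu[0]))
    return nu, P, P_noise
\end{verbatim}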
Further, we have used "Power Spectral Response (PSRESP)" method to tackle the distortions in PSD caused by the Fourier transform and estimate the best fit power law slope to the PSD. This method has already been used by many authors and currently one of the best method to describe the best fit PSD (\citealt{Uttley2002, Chatterjee2008, Max2014, Meyer_2019, Bhattacharyya_2020, Goyal_2022}). The detail procedure of the PSRESP method can be seen in \citep{Bhattacharyya_2020, Goyal_2022}. In the PSRESP method, we choose the range of PSD slope starting from 0.5 to 3.0 with steps 0.05 and corresponding to each slope success fraction is defined (see \citet{Bhattacharyya_2020} for more details on success fraction). The best fit PSD slope is determined as 2.15$\pm$0.87. As it has been seen that the blazar variability is a stochastic process and can be fitted with a single power-law. The PSD slope, $\beta$ =1 corresponds to pink or flicker noise and $\beta$ = 2 represent the red-noise. In our case, $\beta$ covers the range 1.28$-$3.02 suggesting variability in the blazar is a stochastic process of correlated coloured-noise type (\citealt{Goyal2020}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/PSD-pks0346.pdf}
\caption{Best-fit PSD for the $\gamma$-ray light curve derived using the PSRESP method.}
\label{fig:PSD}
\end{figure}
The PSD is derived for the long-term gamma-ray light curve and the resultant PSD is shown in Figure \ref{fig:PSD}.
The PSD can also be used to characterize the variability in other wavebands. Earlier results suggest that the variability in the high-energy bands (X-ray and $\gamma$-ray) is characterized by pink or flicker noise (\citealt{Abdo_2010, Isobe_2014, Abdollahi_2017}) and at lower energies (radio and optical) by damped/red-noise type processes (\citealt{Max2014, Nilsson2018}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/lag_100.png}
\caption{Self-correlation plot of the $\gamma$-ray light curve. The plot shows maximum correlation at zero lag, as expected. Secondary peaks are observed at a minimum lag of around 50 days.}
\label{fig_auto_correlation}
\end{figure}
\section{Self-Correlation Study}\label{sec6}
As the data points of the $\gamma$-ray light curve are discrete, the self-correlation study of the $\gamma$-ray light curve was carried out using the Discrete Correlation Function (DCF). The DCF can compute the correlation coefficient without interpolating between the data points. The correlation coefficient calculated using the DCF is not bounded between $\pm 1$. The unbinned DCF (\cite{edelson1988discrete}) can be calculated for two time series data sets, with data point $i$ in series 1 and data point $j$ in series 2, as:
\begin{equation}
DCF (\tau) = \sum_{i,j} \frac{UDCF_{ij}}{M}
\end{equation}
where
\begin{equation}
UDCF_{ij}=\frac{(a_i - \Bar{a}) (b_j - \Bar{b})}{\sqrt{(\sigma^2_a-e^2_a)(\sigma^2_b - e^2_b)}}
\end{equation}
where $\tau$ is the DCF bin size, $\Delta t_{ij} = (t_j - t_i)$ is the lag for the pair $(a_i, b_j)$, and $M$ is the total number of pairs for which $(\tau - \Delta\tau/2) \le \Delta t_{ij} \le (\tau + \Delta\tau/2)$. $\Bar{a}$ and $\Bar{b}$ are the averages of $a_i$ and $b_j$ respectively. $\sigma$ and $e$ are the standard deviation and the measurement error associated with each series. The error in the DCF can be calculated as:
\begin{equation}
\sigma_{DCF (\tau)} = \frac{1}{M -1} \sqrt{\sum_{i,j}{\left(UDCF_{ij} - DCF(\tau)\right)^2}}
\end{equation}
A peak at a positive time lag implies that the first time series is leading with respect to the second time series, and a peak at a negative lag implies that the first time series is lagging behind the second one. For an auto-correlation study, both time series are the same, here the $\gamma$-ray light curve.
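For reference, the unbinned DCF defined above can be implemented in a few lines. The following is a minimal sketch in Python (for the auto-correlation used here, the same light curve and its errors are passed as both series; the sketch assumes more than one pair falls in each lag bin):
\begin{verbatim}
import numpy as np

def dcf(t1, a, ea, t2, b, eb, tau, dtau):
    # unbinned discrete correlation function at lag tau
    norm = np.sqrt((a.var() - np.mean(ea**2)) *
                   (b.var() - np.mean(eb**2)))
    udcf = (a[:, None] - a.mean()) * (b[None, :] - b.mean()) / norm
    lag = t2[None, :] - t1[:, None]     # Delta t_ij = t_j - t_i
    mask = np.abs(lag - tau) <= dtau / 2.0
    M = mask.sum()
    val = udcf[mask].sum() / M
    err = np.sqrt(np.sum((udcf[mask] - val)**2)) / (M - 1)
    return val, err

# auto-correlation: evaluate dcf(t, f, ferr, t, f, ferr, tau, dtau)
# on a grid of lags tau
\end{verbatim}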
The self-correlation study of the $\gamma$-ray light curve has been carried out using the DCF. As expected, the auto-correlation shows a peak at zero lag (Figure \ref{fig_auto_correlation}). However, a second significant peak is found at a lag of around 52 days. This implies that the source might have a periodic nature with a periodicity of around 52 days, or that the source is gravitationally lensed by some intermediate line-of-sight massive object(s). We do not find any previous gravitational-lensing study of the source PKS 0346-27, which restricts us from making any strong statement about the secondary peaks.
\section{Results and Discussions}\label{res_dis}
In this paper we analysed the $\gamma$-ray flaring activity of the blazar PKS 0346-27 during the period 2019 January-2021 December (MJD 58484-59575). For most of this period the source was in a high state. We also analysed the follow-up observations in the X-ray and UV bands. The archival data of Swift-XRT, Swift-UVOT, and the Fermi $\gamma$-ray space telescope allowed us to investigate the multi-frequency SED of the source. We studied the possible physical mechanisms of the source and its jet parameters through theoretical modelling of the multi-wavelength SED during four instances when the source was in a high state at $\gamma$-ray energies. A study of this source for an earlier period was reported in \cite{angioni2019large}.
Our study shows the presence of fast variability during the brightest phase of the flaring activity. We found that fast rises and decays are more frequent, which constrains the emission region to be very compact. The statistical distribution of rise and decay times for the $\gamma$-ray light curve is shown in Figure \ref{fig:rise_and_decay_time_distribution}. We identified five flaring episodes in the $\gamma$-ray light curve. We fitted each flare with a Sum of Exponentials (SoE). The individual Flares 1, 2, and 4 were further subdivided into two parts (a and b) for a better fit. A $\chi^2$ value of the order of $10^{-6}$ to $10^{-7}$ was obtained for the fit. A total of 39 peaks were modelled and the corresponding rise and decay times were estimated.
We obtained the $\gamma$-ray SEDs for each flare (Figure \ref{fig_gamma_sed}). The $\gamma$-ray SEDs were fitted with Power-Law, Log-Parabola, Broken Power-Law, and Power-Law with exponential cutoff models. The fitting parameters obtained from this study are shown in Table \ref{tab:gamma_ray_sed_param}. We modelled the multi-frequency SED of the source from the $\gamma$-ray data of Fermi-LAT, the X-ray data of Swift-XRT, and the UV data of Swift-UVOT for Flares 1, 2, 3, and 5. The SEDs of all flares show clear signatures of the synchrotron, SSC, and $EC_{Disk}$ components. A clear thermal signature from the accretion disk was also obtained in \cite{angioni2019large}.
We have also produced the power spectral density for this source; a power law provides the best fit, with slope 2.15$\pm$0.87, suggesting that the variability in this source is dominated by a stochastic process.
We have carried out a self-correlation study of the $\gamma$-ray light curve. Figure \ref{fig_auto_correlation} shows maximum correlation at zero lag, as expected for an auto-correlation. However, the plot shows secondary peaks at around 52 days, which implies that the source might have a periodic nature with a periodicity of around 52 days, or that the source is gravitationally lensed by some intermediate line-of-sight massive object(s). We cannot draw a strong conclusion about whether the source is gravitationally lensed, as we do not have sufficient supporting observations or previous studies.
The hard $\gamma$-ray spectrum of PKS 0346-27 could make it a potential target for the upcoming Cherenkov Telescope Array (CTA)\footnote{\url{https://www.cta-observatory.org/}} in the low-energy tail of the CTA energy range; however, beyond 10 GeV the SED modelling shows a steep cutoff in the high-energy peak. Incidentally, if PKS 0346-27 were detected in the TeV energy range, it would be the highest-redshift $(z = 0.991)$ Very High Energy (VHE) source, surpassing the blazar S3 0218+35 (z = 0.944, \cite{ahnen2016detection}), which is gravitationally lensed, as well as the blazar PKS 1441+25 (z = 0.940, \cite{ahnen2015very}). Because of this, a VHE detection of PKS 0346-27 would also be relevant for Extragalactic Background Light (EBL) studies.
Fermi-LAT has been an important tool for observing high-energy blazars. In the future, it will play a crucial role in observing more high-redshift $(z \ge 1)$ blazars in the GeV energy range. Continued monitoring of the GeV sky will help to establish the duty cycles of $\gamma$-ray activity in relativistic jets. Moreover, Fermi-LAT is an invaluable tool for helping ground-based $\gamma$-ray observatories, such as Cherenkov telescopes, point in the appropriate direction of a source; it will serve the same purpose for the upcoming CTA. Therefore, continued Fermi-LAT observations in the CTA era will be invaluable for understanding the physics of blazar jets, both in the local and the high-redshift $(z \ge 1)$ universe.
\section*{Acknowledgements}
S. Pramanick acknowledges the support of the \href{https://online-inspire.gov.in/}{DST-INSPIRE} Scholarship and the Prime Minister's Research Fellowship (\href{https://www.pmrf.in/}{PMRF}). R. Prince is grateful for the support of the Polish Funding Agency National Science Centre, project 2017/26/A/ST9/-00756 (MAESTRO 9), and MNiSW grant DIR/WK/2018/12. D. Bose acknowledges the support of Ramanujan Fellowship-SB/S2/RJN-038/2017. This work made use of Fermi telescope data and the Fermitools package. This work also made use of the publicly available packages JetSeT and PSRESP. We used the Fermi-user-contributed tool \href{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/likeSED.py}{likeSED.py} for Figure \ref{fig_gamma_sed}.
\section*{Data Availability}
For this work we have used data from the Fermi-LAT, Swift-XRT, and Swift-UVOT telescopes. All the data are available in the public domain. Details are given in section 2.
\bibliographystyle{mnras}
|
1,108,101,563,822 | arxiv | \section{Introduction}
Modern versions
\cite{cavities}
of the classic
Michelson-Morley
\cite{mm}
and Kennedy-Thorndike
\cite{kt}
experiments are among the most
sensitive tests of Lorentz invariance,
the symmetry behind special relativity.
Typically
these experiments search for minute
changes in the resonant frequencies of
electromagnetic cavities.
High quality factors allow for precise
tracking of the frequencies,
giving extreme sensitivities to
possible deviations from perfect
Lorentz symmetry.
However,
the effects of some forms of Lorentz violation
increase with wavelength.
As a result,
high sensitivities may be achieved
using very-low-frequency resonances,
such as those that naturally occur
in the Earth-ionosphere cavity,
despite their relatively low
quality factors.
In this work,
we consider signals of Lorentz violation
that would appear in Schumann resonances,
the lowest-frequency standing waves that
form in the atmosphere
\cite{schum1,schum2}.
We obtain conservative bounds by
comparison with observations
\cite{schumdata}.
The Earth's surface and ionosphere
form a cavity of immense size,
leading to resonances with
very long wavelengths.
The lowest-frequency resonances
have wavelengths that are comparable to
the circumference of the Earth
and have frequencies as low as 8 Hz.
Violations of Lorentz invariance
are described by the
Standard-Model Extension (SME)
\cite{ck,kost}.
The SME is a theoretical framework
that provides a basis
for many experimental and
theoretical studies of Lorentz violation
\cite{cpt,smetables},
including those involving
atoms \cite{ccexpt,spaceexpt},
hadrons \cite{hadrons,hadronth},
fermions \cite{electrons,electronth,muons,neutrinos},
the Higgs boson \cite{higgs},
gravity \cite{gravity},
and photons
\cite{cavities,photons,km_cmb,km_astro,bire,km}.
In addition to resonant-cavity experiments,
searches for Lorentz violation in photons
using the SME approach include
astrophysical searches
for vacuum birefringence
\cite{bire,km_cmb,km_astro,km}
and dispersion
\cite{km_astro,km}.
The goal of the SME is the characterization
of all violations of Lorentz symmetry that
are consistent with known physics
using effective field theory
\cite{kpo}.
While motivated in part
by the possibility of spontaneous
symmetry breaking in strings
\cite{ks,kp},
it encompassed violations
with other origins
\cite{ncqed,coup,qg,fn,bj,ss,bluhm,gm}.
Much of the work on Lorentz violation
has focused on the minimal SME,
which includes operators of renormalizable
dimension in a flat spacetime.
While nonrenormalizable operators
and curved spacetimes are of general interest
\cite{kost,coup,gravity,km_cmb,km_astro,kle},
Schumann resonances are particularly
sensitive to the dimension-three $CPT$-odd
Lorentz-violating operators of the
minimal-SME photon sector.
In cavities,
Lorentz violation can introduce
frequencies that depend on the
orientation of the cavity,
signalling rotation violations,
and dependence on velocity resulting
from boost violations
\cite{km}.
The quantity determining their sensitivity
is the dimensionless
fractional frequency shift $\delta\nu/\nu$.
To date, cavity experiments have
focused primarily on one class of violations,
namely the dimension-four $CPT$-even
Lorentz-violating operators
of the minimal SME.
The coefficients associated with
these operators $(\kfd{4})^{\alpha\beta\gamma\delta}$
are dimensionless.
Therefore,
dimensional analysis
suggests frequency shifts of the form
$\delta\nu/\nu \sim (\kfd{4})^{\alpha\beta\gamma\delta}$,
which implies little or no dependence on frequency.
Consequently,
there is little advantage to using
low-frequency resonances.
In contrast,
the coefficients associated with
the dimension-three operators
$(\kafd{3})_\kappa$ have mass-dimension one.
Therefore, we naively expect
shifts in frequency that depend on the
ratio $(\kafd{3})_\kappa/\nu$.
Given that $\nu\sim 10^{-23}$ GeV
for Schumann resonances,
we naively expect sensitivities
on the order of $10^{-23}$ GeV to $\kafd{3}$
coefficients,
assuming at least order-one
sensitivity to $\delta\nu/\nu$.
While not as sensitive as
birefringence tests
\cite{km_astro},
the bounds obtained here
represent the first terrestrial
bounds on the dimension-three operators,
providing a valuable check
on existing astrophysical constraints.
The structural outline of this paper
is as follows.
Section \ref{sec_bg} provides some
basic theory behind our calculation.
In Sec.\ \ref{sec_cal} we derive
modified wave equations and describe
a numerical method of determining
the effects of Lorentz violations
on Schumann resonances.
The results of our calculation
are discussed in Sec.\ \ref{sec_results}.
Unless otherwise stated,
we use the notation and conventions
of Refs.\ \cite{ck,km_astro}.
\section{Background}
\label{sec_bg}
In this section we discuss
the theory behind the calculation
of the Earth-ionosphere resonances
in the presence of dimension-three
Lorentz violations.
We begin by discussing the
conductivity of the atmosphere
and the Schumann resonances
in the usual case.
We then review the modified
electrodynamics including
the $CPT$-odd operators of
the minimal SME.
\subsection{Conductivity profile}
Resonances in the Earth-ionosphere cavity
are excited by a number of man-made
and natural phenomena,
lightning being a primary source.
The surface of the Earth forms the
lower boundary and,
in our calculation,
is treated as a perfect conductor.
The ionosphere forms a lossy upper boundary
with a finite conductivity profile
that increases with altitude.
The conductivity of the lower atmosphere
can be approximated by a ``knee''-model
that separates into two layers
with exponentially increasing conductivity
\cite{knee}:
\begin{align}
\sigma(r) &\simeq
\left\{\begin{array}{ll}
\infty \quad & r<R \ , \\
\sigma_0 \exp{\Frac{r-r_0}{\xi_l}} \quad & R<r<r_0 \ , \\
\sigma_0 \exp{\Frac{r-r_0}{\xi_u}} \quad & r_0<r \ ,
\end{array}\right.
\end{align}
where
$r$ is the distance from the center of the Earth,
and
$R \simeq 6400\ {\rm km }
\simeq 3.2 \times 10^{22}\ {\rm GeV}^{-1}$
is the Earth radius.
The lower layer is dominated by positive
and negative ions.
The upper layer approximates
the bottom of the ionosphere,
which is dominated by free electrons.
The knee radius $r_0$ is the boundary
between the layers,
and $\sigma_0$ is the conductivity
at this transition.
In units where $c=\hbar=\epsilon_0=\mu_0=1$,
we adopt values
$r_0=1.009 R$,
$\xi_l=0.007 R$,
$\xi_u=0.0005 R$, and
$\sigma_0=1.3 R^{-1}$
in the numerical calculations that follow.
This profile yields frequencies
and quality factors that closely
match the observed resonances in the
Lorentz-invariant limit.
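For concreteness,
the profile with the above
parameter values can be
evaluated numerically.
A minimal sketch in Python,
with $r$ in units of $R$,
is the following
(an illustration only):
\begin{verbatim}
import numpy as np

R = 1.0                      # Earth radius (working units)
r0, xi_l, xi_u = 1.009 * R, 0.007 * R, 0.0005 * R
sigma0 = 1.3 / R             # conductivity at the knee

def sigma(r):
    # two-layer "knee" conductivity profile
    if r < R:
        return np.inf        # perfectly conducting Earth
    xi = xi_l if r < r0 else xi_u
    return sigma0 * np.exp((r - r0) / xi)
\end{verbatim}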
In the conventional case,
both transverse-magnetic (TM) and
transverse-electric (TE) modes
may be excited in the cavity.
However,
TE modes oscillate in the radial direction,
implying wavelengths comparable to the
height of the ionosphere,
yielding frequencies in the kHz range.
In contrast,
the lowest-frequency TM modes
vary little in the radial direction
but form standing waves that encircle the Earth.
As a result,
the wavelengths are comparable
to the Earth's circumference,
yielding frequencies as low as 8 Hz.
Our goal is to understand the effects
of Lorentz violations on these
low-frequency Schumann resonances.
\subsection{Modified electrodynamics}
The lagrangian governing
electromagnetic waves,
including dimension-three
Lorentz-violating operators,
is given by
\cite{ck}
\begin{equation}
{\cal L} = -\Frac14 F_{\mu\nu}F^{\mu\nu}
+{\textstyle{1\over 2}}\epsilon^{\kappa\lambda\mu\nu}(\kafd{3})_\kappa A_\lambda F_{\mu\nu} \ ,
\label{lagrangian}
\end{equation}
where $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$
is the field strength.
The resulting theory preserves
the usual gauge symmetry,
but violates $CPT$ invariance.
Lorentz and $CPT$ violations are
controlled by the constant
coefficients $(\kafd{3})_\kappa$,
which include a pseudoscalar $(\kafd{3})_0$
and pseudovector $\mbf\kafd{3}$.
The $(\kafd{3})_\kappa$ coefficients
are assumed to be constant,
which leads to energy-momentum
conservation.
More generally one can consider
Lorentz-violating backgrounds
with spacetime variations.
These types of violations
are particularly important
when considering Lorentz
violations in curved spacetimes
\cite{kost,gravity}.
However,
in scenarios where violations
originate in the moments
shortly after the big bang,
it is likely that any variation
has expanded,
leading to little fluctuation
over experimentally relevant
time and length scales.
Consequently,
the idea presented here probes $(\kafd{3})_\kappa$
in our local neighborhood.
Astrophysical tests rely on light
that has propagated across much of
the visible universe.
So they test Lorentz
invariance over much larger scales
and could be drastically affected
by variations in $(\kafd{3})_\kappa$.
Therefore,
while current bounds from
astrophysical searches for birefringence
currently lie at the $10^{-42}$ GeV level
\cite{km_astro},
terrestrial tests provide an important
complementary set of constraints.
The equations of motion resulting
from Eq.\ \rf{lagrangian} provide
Lorentz-violating inhomogeneous
Maxwell equations.
We are primarily interested
in harmonic solutions
and consider electric and magnetic
fields of the form
$\mbf E(t) =\mbf E(\omega) e^{-i\omega t},
\mbf B(t) =\mbf B(\omega) e^{-i\omega t}$.
Together with the usual
homogeneous equations,
we arrive at a Lorentz-violating
electrodynamics with a
modified Amp\'ere law and a
conventional Faraday law:
\begin{align}
\hspace*{-3pt}&
\mbf\nabla\times\mbf B +i\omega\mbf E - 2\, (\kafd{3})_0\, \mbf B + 2\, \mbf\kafd{3}\times\mbf E
-\sigma\mbf E = \mbf J_s \ , \notag \\
\hspace*{-3pt}&
\mbf\nabla\times\mbf E -i\omega\mbf B = 0 \ .
\label{max1}
\end{align}
Here we have included the usual
source current and conductivity terms.
These conventional source terms result
if we assume usual coupling to matter.
The source current is not needed
in determining the resonances
but could be used in modeling the effects
of individual lightning strikes.
The conductivity term is necessary to obtain
realistic frequencies and quality factors,
which are affected by the
profile of the upper boundary
and the losses it introduces.
\section{Calculation}
\label{sec_cal}
Our goal is to
calculate the resonant frequencies
that result from the modified Maxwell equations.
We begin by expanding into spherical
harmonics and deriving modified wave equations.
A numerical calculation is used to
estimate the resulting resonant frequencies.
The results are discussed
in the following section.
\subsection{Wave equations}
We begin our search for resonances
by first expressing the modified Maxwell
equations \rf{max1}
in spherical coordinates using
the helicity-basis and identities
discussed in Appendix \ref{sec_sw}.
This involves writing the
Maxwell equations in covariant form,
then using relation \rf{J-ident2}
to express them in terms of
$\partial/\partial r$
and covariant angular-momentum
ladder operators $J_\pm$.
Dropping the source current,
the result is
\begin{align}
0 &= J_+B_-+J_-B_+ +(\omega+i\sigma)r E_r
+2ir(\kafd{3})_0B_r
\notag \\ &\quad
+ 2r(\kafd{3})_+E_- - 2r(\kafd{3})_-E_+ \ ,
\label{amp_r} \\
0 &= \pm \Frac{\prt\phantom{r}}{\prt r} r B_\pm
- J_\pm B_r +(\omega+i\sigma)r E_\pm
+2ir(\kafd{3})_0 B_\pm
\notag \\ &\quad
\pm 2r(\kafd{3})_r E_\pm \mp 2r(\kafd{3})_\pm E_r \ ,
\label{amp_pm} \\
0 &= J_+ E_- +J_- E_+ -\omega r B_r \ ,
\label{far_r} \\
0 &= \pm \Frac{\prt\phantom{r}}{\prt r} r E_\pm
- J_\pm E_r -\omega r B_\pm \ ,
\label{far_pm}
\end{align}
where $E_r$, $B_r$ are
the radial field components,
and $E_\pm$, $B_\pm$ are
negative/positive helicity components,
as discussed in the appendix.
The components of $\mbf\kafd{3}$ are defined by
$(\kafd{3})_a=\mbf{\hat e}_a\cdot \mbf\kafd{3}$,
where ${\hat e}_a$ are the helicity
basis vectors.
The components $E_\pm$ and $B_\pm$ are
referred to as spin-weighted functions
with spin-weight-$(\pm 1)$,
while $E_r$, $B_r$ have a weight of zero.
Spin weight and helicity are
equivalent up to a sign.
Eqs.\ \rf{amp_r} and \rf{far_r} provide
two scalar relations, while
\rf{amp_pm} and \rf{far_pm}
have a spin weight of $\pm 1$.
We can expand the field components in
spin-weighted spherical harmonics.
These provide the generalization of
the familiar spherical harmonics
to spin-weighted functions.
The expansion takes the form
\begin{align}
E_r &= \sum \Frac{1}{r} E^{(0E)}_{jm} \, \syjm{0}{jm} \ ,
\\
E_\pm & = \sum \sqrt{\Frac{j(j+1)}{2}}\Frac{1}{r}
(\mp E^{(1E)}_{jm} - i E^{(1B)}_{jm} ) \, \syjm{\pm1}{jm} \ ,
\\
B_r &= \sum \Frac{1}{r} B^{(0B)}_{jm} \, \syjm{0}{jm} \ ,
\\
B_\pm & = \sum \sqrt{\Frac{j(j+1)}{2}}\Frac{1}{r}
(\mp B^{(1B)}_{jm} - i B^{(1E)}_{jm} ) \, \syjm{\pm1}{jm} \ ,
\end{align}
where
$E^{(0E)}_{jm}$, $E^{(1E)}_{jm}$, $E^{(1B)}_{jm}$,
$B^{(0B)}_{jm}$, $B^{(1B)}_{jm}$, and $B^{(1E)}_{jm}$
are $r$-dependent field coefficients.
They are associated with
total-angular-momentum eigenmodes
and have $E$-type parity, $(-1)^j$, or
$B$-type parity, $(-1)^{j+1}$.
The $\sqrt{j(j+1)/2}$ and $1/r$
factors in the expansions are
for convenience.
Using these expansions,
we can express the Maxwell equations
in terms of
$E^{(0E)}_{jm}$, $E^{(1E)}_{jm}$, $E^{(1B)}_{jm}$,
$B^{(0B)}_{jm}$, $B^{(1B)}_{jm}$, and $B^{(1E)}_{jm}$.
First, using the ladder operators $J_\mp$
to lower/raise the spin-$(\pm 1)$ relations
\rf{amp_pm} and \rf{far_pm}
we arrive at six scalar Maxwell equations.
We then use the spherical-harmonic
expansions of the fields and the
orthogonality relation \rf{orth}
to get relations between
the six expansion coefficients.
Some algebra yields
three $E$-parity equations
and three $B$-parity equations:
\begin{align}
0 &= j(j+1)B^{(1E)}_{jm} - i(\omega+i\sigma)rE^{(0E)}_{jm} + r {\mathcal K}^{(0E)}_{jm} \ , \label{E2} \\
0 &= \Frac{\prt\phantom{r}}{\prt r} B^{(1E)}_{jm} - i(\omega+i\sigma) E^{(1E)}_{jm} + \Frac{1}{j(j+1)}{\mathcal K}^{(1E)}_{jm} \ , \label{E3} \\
0 &= r\Frac{\prt\phantom{r}}{\prt r} B^{(1B)}_{jm} - B^{(0B)}_{jm} + i (\omega+i\sigma)rE^{(1B)}_{jm} \notag \\ & \quad + \Frac{r}{j(j+1)}{\mathcal K}^{(1B)}_{jm} \ , \label{B3} \\
0 &= j(j+1) E^{(1B)}_{jm} + i \omega r B^{(0B)}_{jm} \ , \label{B1} \\
0 &= \Frac{\prt\phantom{r}}{\prt r} E^{(1B)}_{jm} + i\omega B^{(1B)}_{jm} \ , \label{B2} \\
0 &= r \Frac{\prt\phantom{r}}{\prt r} E^{(1E)}_{jm} - E^{(0E)}_{jm} - i\omega r B^{(1E)}_{jm} \ , \label{E1}
\end{align}
where the Lorentz- and $CPT$-violating
contributions have been collected into the
field combinations
\begin{align}
{\mathcal K}^{(0E)}_{jm} &= 2(\kafd{3})_0 B^{(0B)}_{jm} + 2i|\mbf\kafd{3}|mE^{(1E)}_{jm}
\notag \\ & \quad
+ 2|\mbf\kafd{3}|(j-1){\mathcal C}_{jm} \EoBjm{(j-1)m}
\notag \\ & \quad
- 2|\mbf\kafd{3}|(j+2){\mathcal C}_{(j+1)m} \EoBjm{(j+1)m} \ ,
\label{kr} \\
{\mathcal K}^{(1E)}_{jm} &= 2(\kafd{3})_0 j(j+1)B^{(1B)}_{jm}
\notag \\ & \quad
+2i|\mbf\kafd{3}|mE^{(1E)}_{jm}
+2i|\mbf\kafd{3}|mE^{(0E)}_{jm}
\notag \\ & \quad
+2|\mbf\kafd{3}|(j^2+2j-1){\mathcal C}_{jm}\EoBjm{(j-1)m}
\notag \\ & \quad
+2|\mbf\kafd{3}|(j^2-2){\mathcal C}_{(j+1)m}\EoBjm{(j+1)m} \ ,
\label{kE} \\
{\mathcal K}^{(1B)}_{jm} &= -2(\kafd{3})_0 j(j+1)B^{(1E)}_{jm} -2i|\mbf\kafd{3}|mE^{(1B)}_{jm}
\notag \\ & \quad
-2|\mbf\kafd{3}|(j+1){\mathcal C}_{jm}\EzEjm{(j-1)m}
\notag \\ & \quad
+2|\mbf\kafd{3}|j{\mathcal C}_{(j+1)m}\EzEjm{(j+1)m}
\notag \\ & \quad
+2|\mbf\kafd{3}|(j^2+2j-1){\mathcal C}_{jm}\EoEjm{(j-1)m}
\notag \\ & \quad
+2|\mbf\kafd{3}|(j^2-2){\mathcal C}_{(j+1)m}\EoEjm{(j+1)m} \ ,
\label{kB}
\end{align}
where
${\mathcal C}_{jm} = \sqrt{(j^2-m^2)/(4j^2-1)}$.
Here we take the
angular-momentum quantization axis along
the direction of $\mbf\kafd{3}$.
Note that Eqs.\ \rf{E2}-\rf{B3}
correspond to the modified spherical Amp\'ere law
and Eqs.\ \rf{B1}-\rf{E1} are the
usual Faraday law.
In the conventional case,
where all coefficients for Lorentz violation are zero,
rotational symmetry implies that
resonances are eigenmodes of angular momentum
with definite $j$ and $m$ values.
The symmetry also implies degeneracy in $m$.
So the different resonant frequencies
correspond to different values of
the total-angular-momentum index $j$.
Setting ${\mathcal K}^{(0E)}_{jm}={\mathcal K}^{(1E)}_{jm}={\mathcal K}^{(1B)}_{jm}=0$ in Eqs.\ \rf{E2}-\rf{E1},
we also note that the Lorentz-invariant case
splits according to parity.
The $B$-parity resonances
correspond to the high-frequency TE modes,
while low-frequency TM Schumann resonances
are the $E$-parity modes.
Allowing for Lorentz violations,
the new symmetries of the system lead
to several generic predictions.
Rotational symmetry is preserved
in the event that we have only
isotropic violations associated with
coefficient $(\kafd{3})_0$.
This implies that the indexing and
degeneracies of the modes is the same,
but the frequencies may change.
In contrast,
the vector $\mbf\kafd{3}$ breaks
the usual degeneracy.
The system remains symmetric under
rotations about $\mbf\kafd{3}$.
So we expect
resonances that are
eigenmodes of these rotations
with eigenvalues $m$, as usual.
However,
these coefficients break
spherical symmetry,
implying the index $j$ is no longer
associated with resonances.
As a result,
the usual $2j+1$ degeneracies should break,
yielding modes with definite $m$
but indefinite $j$.
Consequently,
we expect two types of
effects that would signify possible
Lorentz violation.
One is a split of degeneracies leading to
additional resonant frequencies.
This results from anisotropic violations.
The other effect is a shift in frequencies
that may result from either
anisotropic and isotropic violations.
The above first-order differential
equations can be reduced to second-order
modified wave equations.
We begin by using the Faraday law,
Eqs.\ \rf{B1}-\rf{E1},
to eliminate the magnetic field.
We also use Eq.\ \rf{E2}
to eliminate the electric field
component $E^{(0E)}_{jm}$ in favor of
$E^{(1E)}_{jm}$, $E^{(1B)}_{jm}$, and ${\mathcal K}^{(0E)}_{jm}$.
The result of this process is three
coupled equations relating the
three sets of field components
$E^{(1E)}_{jm}$, $E^{(1B)}_{jm}$, and ${\mathcal K}^{(0E)}_{jm}$:
\begin{widetext}
\begin{align}
0 &= \Frac{\prt^2\phantom{r}}{\prt r^2} E^{(1E)}_{jm}
+ \Frac{p^2_j}{\omega+i\sigma}\big(\Frac{\prt\phantom{r}}{\prt r}\Frac{\omega+i\sigma}{p^2_j}\big) \Frac{\prt\phantom{r}}{\prt r} E^{(1E)}_{jm} + p^2_jE^{(1E)}_{jm}
+\Frac{ip^2_j}{\omega+i\sigma}\Frac{\prt\phantom{r}}{\prt r} \Frac{1}{rp^2_j}{\mathcal K}^{(0E)}_{jm}
+2i|\mbf\kafd{3}|\Frac{m\omega}{(\omega+i\sigma)j(j+1)} {\mathcal K}^{(0E)}_{jm}
\notag \\ &\quad
-2(\kafd{3})_0 \Frac{p^2_j}{(\omega+i\sigma)\omega} \drE^{(1B)}_{jm}
-2|\mbf\kafd{3}|\Frac{p^2_jm}{(\omega+i\sigma)j(j+1)}E^{(1E)}_{jm}
+2|\mbf\kafd{3}|\Frac{m}{(\omega+i\sigma)r} \drE^{(1E)}_{jm}
\notag \\ &\quad
+2i|\mbf\kafd{3}|\Frac{p^2_j(j^2+2j-1){\mathcal C}_{jm}}{(\omega+i\sigma)j(j+1)} \EoBjm{(j-1)m}
+2i|\mbf\kafd{3}|\Frac{p^2_j(j^2-2){\mathcal C}_{(j+1)m}}{(\omega+i\sigma)j(j+1)} \EoBjm{(j+1)m} \ ,
\displaybreak[0]
\label{E_eq}\\
0 &= \Frac{\prt^2\phantom{r}}{\prt r^2} E^{(1B)}_{jm} + p^2_j E^{(1B)}_{jm}
-2|\mbf\kafd{3}|\Frac{m\omega}{j(j+1)}E^{(1B)}_{jm}
+2(\kafd{3})_0 \Frac{(\omega+i\sigma)\omega}{p^2_j} \Frac{\prt\phantom{r}}{\prt r} E^{(1E)}_{jm}
+2i(\kafd{3})_0 \Frac{\omega}{p^2_j r} {\mathcal K}^{(0E)}_{jm}
\notag \\ &\quad
-2i|\mbf\kafd{3}| \Frac{(j^2+2j-1){\mathcal C}_{jm}\omega}{j(j+1)} \EoEjm{(j-1)m}
-2i|\mbf\kafd{3}| \Frac{(j^2-2){\mathcal C}_{(j+1)m}\omega}{j(j+1)} \EoEjm{(j+1)m}
-2i|\mbf\kafd{3}| \Frac{(j-1){\mathcal C}_{jm}\omega}{p^2_{j-1}r} \Frac{\prt\phantom{r}}{\prt r} \EoEjm{(j-1)m}
\notag \\ &\quad
+2|\mbf\kafd{3}| \Frac{{\mathcal C}_{jm}\omega^2}{jp^2_{j-1}} \kzEjm{(j-1)m}
+2i|\mbf\kafd{3}| \Frac{(j+2){\mathcal C}_{(j+1)m}\omega}{p^2_{j+1}r} \Frac{\prt\phantom{r}}{\prt r} \EoEjm{(j+1)m}
-2|\mbf\kafd{3}| \Frac{{\mathcal C}_{(j+1)m}\omega^2}{(j+1)p^2_{j+1}} \kzEjm{(j+1)m} \ ,
\displaybreak[0]
\label{B_eq}\\
0 &= {\mathcal K}^{(0E)}_{jm} - 2i (\kafd{3})_0 \Frac{j(j+1)}{\omega r} E^{(1B)}_{jm} - 2i|\mbf\kafd{3}| mE^{(1E)}_{jm}
\notag \\ &\quad
- 2|\mbf\kafd{3}| (j-1){\mathcal C}_{jm}\EoBjm{(j-1)m}
+ 2|\mbf\kafd{3}| (j+2){\mathcal C}_{(j+1)m}\EoBjm{(j+1)m} \ ,
\label{K_eq}
\end{align}
\end{widetext}
where we define $p^2_j=\omega(\omega+i\sigma)-j(j+1)/r^2$.
Note that we could use Eq.\ \rf{K_eq}
to eliminate ${\mathcal K}^{(0E)}_{jm}$.
However, for simplicity,
we treat ${\mathcal K}^{(0E)}_{jm}$ as a dynamical
field on equal footing with $E^{(1E)}_{jm}$ and $E^{(1B)}_{jm}$.
The field components $E^{(1E)}_{jm}$ and $E^{(1B)}_{jm}$
correspond to the transverse part
of the electric field,
which vanishes at the surfaces
of a perfect conductor.
This implies
${\mathcal K}^{(0E)}_{jm}$ vanishes on the surfaces as well.
So we take
$E^{(1E)}_{jm}$, $E^{(1B)}_{jm}$, and ${\mathcal K}^{(0E)}_{jm}$
as independent fields that vanish
at the boundaries of the cavity.
\subsection{Numerical frequencies}
We calculate the resonant frequencies
that result from the modified electrodynamics
by considering discrete
radii $r_n=R+\delta r (n+{\textstyle{1\over 2}})$,
where $n$ is an integer.
Defining discrete field coefficients
at these points,
$\EoEjm{njm}$, $\EoBjm{njm}$, and $\kzEjm{njm}$,
and using discrete derivatives,
wave equations \rf{E_eq}-\rf{K_eq}
can be written in the form of an
infinite-dimensional matrix equation.
Discrete resonances correspond to frequencies
where nontrivial field configurations exist.
These can be estimated by truncating
the matrix at finite index values
and searching for $\omega$ where
the truncated matrix is singular.
These $\omega$ are complex in general.
The real parts give the
resonant frequencies $\nu={\rm Re}\, \omega/(2\pi)$,
while the ratios of the real and
imaginary parts determine
the quality factors
$Q= -{\rm Re}\, \omega/{\rm Im}\, \omega/2$
of the modes.
In the event that $(\kafd{3})_\kappa =0$,
Eqs.\ \rf{E_eq}-\rf{K_eq} reduce to two wave equations,
one for $B$ modes and one for the Schumann $E$ modes.
However, nonzero $(\kafd{3})_\kappa$ coefficients
mix the two parities,
and resonances will no longer
possess definite
$E$ or $B$ parity.
We also note that no mixing of fields with
different $m$ values occurs,
but mixing across $j$ values results from
the vector $\mbf\kafd{3}$, as expected.
As a result,
all resonances have definite $m$ values,
and $m$ may be fixed in the calculation.
To determine the resonances,
we take 100 different $r$ values,
$n=0,\ldots 99$,
uniformly spaced between
$R$ and $1.01 R$.
This corresponds to a penetration
of about $0.001 R\simeq 6.4$ km
into the highly conductive ionosphere.
For fixed $m$,
we create a matrix including terms
corresponding to these $n$ values
and the ten lowest $j$ values
that are relevant, $j \geq |m|$.
The result is a square matrix
with dimension $100\times 10 \times 3$.
We use a row-reduction method
to determine its determinant
for different values of $\omega$
and search for roots.
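Schematically,
the search can be organized
as in the following
minimal sketch in Python.
The function build\_matrix
stands for the discretization of
Eqs.\ \rf{E_eq}-\rf{K_eq}
and is not reproduced here;
the sketch also detects
singular matrices through the
smallest singular value,
which vanishes at a resonance,
rather than through the
row-reduced determinant:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def smallest_sv(w, m, build_matrix):
    # smallest singular value of the truncated system; it
    # vanishes when a nontrivial field configuration exists
    M = build_matrix(w[0] + 1j * w[1], m)  # (100*10*3)-dim
    return np.linalg.svd(M, compute_uv=False)[-1]

def find_resonance(omega_guess, m, build_matrix):
    res = minimize(smallest_sv,
                   [omega_guess.real, omega_guess.imag],
                   args=(m, build_matrix), method="Nelder-Mead")
    omega = res.x[0] + 1j * res.x[1]
    nu = omega.real / (2 * np.pi)     # resonant frequency
    Q = -omega.real / omega.imag / 2  # quality factor
    return nu, Q
\end{verbatim}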
\section{Results}
\label{sec_results}
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{\label{k0fig}
Calculated resonant frequencies
and quality factors for three different
values of $|(\kafd{3})_0|$.
Circles represent the resonances.
The Lorentz-invariant limit
is shown with $\times$ symbols
for comparison.
The shaded regions indicate the
corresponding widths at half maximum
for the three lowest resonances
in the Lorentz-invariant case.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{\label{kfig}
Resonant frequencies and $Q$ factors for
three values of $|\mbf\kafd{3}|$.
The $|\mbf\kafd{3}|=0$ case ($\times$ symbols)
is shown for comparison.
The shaded regions indicate the
corresponding widths at half maximum
for the three lowest resonances
in the Lorentz-invariant case.
Each plot includes resonances
for index values
$m=0$ (stars),
$m=\pm 1$ (filled/empty circles),
$m=\pm 2$ (filled/empty squares),
and
$m=\pm 3$ (filled/empty triangles).}
\end{figure}
While any combination of
coefficients for Lorentz violation
is possible, for simplicity,
we next consider the effects
of $(\kafd{3})_0$ and $|\mbf \kafd{3} |$ separately.
We first consider a nonzero
$(\kafd{3})_0$ coefficient.
Figure \ref{k0fig}
shows the three lowest-frequency
resonances for three values
of $|(\kafd{3})_0|$.
The three resonances shown
correspond to $j=1,2,3$
and are degenerate in $m$.
The results are independent
of the sign of $(\kafd{3})_0$ due to
the symmetry of this case.
The figure also shows the
calculated resonant frequency
and $Q$ factor for the Lorentz-invariant
limit.
These are in good agreement
with the observed resonances
of
$(\nu,Q) = (7.8\,{\rm Hz},4.0)$,
$(14.1\,{\rm Hz},4.5)$,
$(20.3\,{\rm Hz},5.0)$
\cite{schumdata}.
From the figure we see
that values of $(\kafd{3})_0$
on the order of $10^{-21}$ GeV
significantly affect
both the resonant frequencies
and $Q$ factors.
In particular,
values of $4\times 10^{-21}$ GeV
drastically alter
all three modes.
We therefore adopt a conservative limit of
$|(\kafd{3})_T| < 4\times 10^{-21}$ GeV
on the time-like part of $(\kafd{3})_\kappa$
in the standard Sun-center frame
described in Ref.\ \cite{km}.
This translates to a bound of
\begin{equation}
|\kVdjm{3}{00}| < 14\times 10^{-21} \mbox{ GeV}
\end{equation}
on the spherical coefficient
from Ref.\ \cite{km_cmb}.
Note that the above bound is
about two orders of magnitude
larger than the naive prediction.
This can be understood from
the fact that the
$(\kafd{3})_\kappa$ coefficients mix
$E$- and $B$-parity modes.
The $B$-parity resonances
occur at much higher frequencies.
This leads to a seesaw effect
in the Maxwell equations
that suppresses perturbations
in the resonances.
As a result,
relatively large mixing
in the wave equations must
be present for significant
changes to manifest
in the low-frequency modes.
Small changes to the conductivity
profile can lead to large changes
in the resonances,
implying that our confidence in the
bound on $(\kafd{3})_T$ is somewhat weakened by
our limited knowledge of the atmospheric conductivity,
which is a complicated and variable system.
Much cleaner bounds can be
placed on the pseudovector part
$\mbf\kafd{3}$ since it leads to a
breakdown of the usual
$2j+1$ degeneracy among modes
with identical $j$ eigenvalues.
These bounds too are complicated by
imperfections in our conductivity model,
including the missing day-side/night-side
and polar asymmetries
that are present in the real atmosphere
\cite{schum2}.
These conventional anisotropies
can also break the degeneracies,
but they would only add to the effects
we are bounding.
We therefore neglect them
in our analysis.
However,
these features could be significant
in more detailed studies involving
field configurations.
For example,
local fields may experience variability
that includes daily and annual fluctuations
from conventional physics.
Extracting a sidereal dependence
caused by the rotation of the Earth
with respect to the fixed $\mbf\kafd{3}$
vector might be possible
but is beyond the scope of this work.
Figure \ref{kfig} shows the resonances
for three different values of $|\mbf\kafd{3}|$.
We notice shifts in the frequencies and
$Q$ factors as well as the expected
$2j+1$ splitting of the resonances.
In particular, values of
$|\mbf\kafd{3}| = 8\times 10^{-21}$ GeV
lead to new resonances separated
by frequency intervals comparable
to the resonance widths.
These multiple resonances would
be evident in the data if they existed.
Therefore, we take
$|\mbf\kafd{3}| < 8\times 10^{-21}$ GeV
as a conservative bound.
The relation to the spherical coefficients
is given by $|\mbf{\kafd{3}}|=\frac{1}{\sqrt{4\pi}}
\big(6|\kVdjm{3}{11}|^2+3|\kVdjm{3}{10}|^2\big)^{1/2}$
\cite{km_astro}.
So our bound leads to two constraints on
spherical coefficients for Lorentz violation of
\begin{align}
|\kVdjm{3}{11}| &< 12 \times 10^{-21} \mbox{ GeV} , \notag\\
|\kVdjm{3}{10}| &< 16 \times 10^{-21} \mbox{ GeV} .
\end{align}
These completely bound the vector ($j=1$)
dimension-three Lorentz-violating operators.
Again,
these constraints are less stringent than
the naive estimate.
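The translation from
$|\mbf\kafd{3}|$
to the spherical coefficients
is elementary arithmetic,
obtained by saturating
one coefficient at a time
(a minimal check in Python):
\begin{verbatim}
import numpy as np

k_bound = 8e-21  # GeV, bound on the pseudovector magnitude
# |k| = (6|k_11|^2 + 3|k_10|^2)^(1/2) / sqrt(4 pi)
k11_max = np.sqrt(4 * np.pi / 6) * k_bound  # ~ 12e-21 GeV
k10_max = np.sqrt(4 * np.pi / 3) * k_bound  # ~ 16e-21 GeV
\end{verbatim}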
\section{Discussion}
In this work,
we used Schumann resonances
in the Earth-ionosphere cavity to
place bounds on the
order of $10^{-20}$ GeV
on $CPT$-odd $j=1$ coefficients
of the minimal SME.
Similar bounds are placed on $j=0$
scalar coefficients assuming
the actual conductivity profile of
the atmosphere is not significantly
different from our model profile.
These bounds constitute the
first terrestrial bounds
on dimension-three Lorentz-violating
operators.
While not as sensitive as
astrophysical searches
for vacuum birefringence,
the techniques used in this work
test Lorentz invariance in
our local neighborhood,
giving a bound on coefficients
over solar-system length scales.
In contrast,
astrophysical tests probe
Lorentz violation over cosmological
scales and may be obfuscated by
spacetime variations
or domains in the
Lorentz-violating backgrounds.
So local tests play
an important role in our
search for new physics.
Laboratory-based experiments
may also provide local tests
of Lorentz invariance.
Current cavity experiments
utilize high $Q$ factors that
allow for sensitivities to
$\delta\nu/\nu$ on the order
of parts in $10^{15}$ or better
\cite{cavities}.
This suggests improved bounds
may be possible in laboratory
experiments.
A rough estimate yields sensitivities to
$(\kafd{3})_\kappa$ from about
$10^{-25}$ GeV
in optical cavities to around
$10^{-30}$ GeV
in lower-frequency microwave cavities.
Future studies involving Schumann
resonances may be able to
improve on the above bounds.
Precise tracking of the resonances
may allow for sidereal searches
that would indicate rotation violations
from $\mbf\kafd{3}$.
Also, boost violations from $(\kafd{3})_\kappa$ coefficients
can lead to annual dependences that may
be discernible.
Regardless,
the constraints obtained here demonstrate
the potential of resonator experiments
as tests of dimension-three Lorentz violations
in the atmosphere and in the laboratory.
|
1,108,101,563,823 | arxiv | \section{Introduction}
The motif, defined as a small connected subgraph that recurs in a
graph, is the basic building block, or functional unit, of complex
networks \cite{motif_block}. In real-world networks (e.g., gene
regulatory networks), motifs represent the elementary interaction
patterns between small groups of nodes, and the relative frequencies
with which motifs appear represent different functions of the
network \cite{Motif, Review_motif, Superfamily}. Although it has
been found that there is a topological relationship between the
large-scale attributes (scale-free and hierarchical) and local
interaction patterns (subgraph based) \cite{motif_pnas}, it remains
unclear whether there is a relationship between small functional
units and other structural properties such as rich-club connections
of complex networks. In our previous study we find that rich-club
connections can dominate some global properties (e.g., assortativity
and transitivity) of a network \cite{Xu_unpublished}, which implies
the possible relation between the rich-club property and the
network's subgraph organization.
The rich-club property refers to the organization pattern of rich
nodes \cite{Richnode_PA}, especially whether rich nodes tend to
connect to one another, or with the remaining nodes
\cite{Richclub_origin, Colizza_richclub, NP_comment, Constraint,
APL_richclub, Zhou_richclub}. Because rich nodes often play a
central role in the static property of, and dynamic processes on,
complex networks \cite{Attack, Cascading_failure, Super_spreader},
significant attention has been paid to the prominent effects of the
richest elements \cite{AS_Xu} and the organization among them
\cite{Weight_richclub, Xu_unpublished}. A systematic framework is
needed to clearly understand the roles of rich nodes in different
real-world networks with distinct degree distributions.
In this study, we find the influences of rich nodes and their
organization pattern depend largely on the degree distributions of
complex networks. Rich nodes are important in scale-free networks
\cite{BA}, because a power-law degree distribution indicates that
the majority of nodes participate in at most one or two motifs,
while a few rich nodes take part in a very large number of small
subgraphs. Manipulating a very small number of rich-club connections
therefore can strongly affect the frequencies of the basic
functional blocks (motifs) for a heterogeneous network. In
comparison, for the network with a homogeneous degree distribution
(e.g., the network of US power grid), the links among rich nodes
show a tiny effect on the whole network. The main reason behind this
is that all nodes (including rich nodes) in a homogeneous network
are engaged in only a few interactions, and there are no hubs
linking to a significantly larger number of other nodes.
These results are helpful in understanding the origin of motifs and
motif clusters in real-world complex networks, and the mechanisms by
which small subgraphs aggregate into larger superstructures. Our
finding has an important potential application: we can build a
framework to optimize and control the functional behaviors of
complex networks. In most cases we cannot regenerate or redesign a
real-world network, but manipulating a small number of rich-club
connections gives us a chance to optimize the structure of the
network and control the relative frequencies of small functional
units in a predictable manner.
Furthermore, although pioneering studies have developed a series of
methods to judge whether a network has rich-club properties
\cite{Constraint, Colizza_richclub, Zhou_richclub}, these approaches
are based on how many links there are among rich nodes instead of
how these links affect the whole network. Based on subgraph ratio
profile, the topological structure among rich nodes can be uncovered
from the inspection of the basic functional units. In this study we
develop a novel method to judge whether a network has a rich-club or
not. The new method does not calculate how many links connect to
rich nodes compared with its randomized version; instead, it depends on
how the organization pattern of rich nodes affects the appearance of
different motifs.
Taken together, these findings indicate the strong ties between the
local subgraphs and rich-club properties of complex networks, which
complements our understanding of a network's topological and
functional organization. Because each network can be characterized
by a set of distinct types of subgraphs and rich-club connections
are a significant property, our findings are expected to provide new
insights in understanding the evolution of dynamical networks and
design new models for characterizing real-world networks. Our work
is a step in an ongoing effort to bridge the local topology of a
network and its global statistical features.
\section{Method}
\subsection{Link rewiring algorithms}
Here we select the top $0.5\%$ of the nodes with the
highest degree as rich nodes in a network and manipulate
the connections among them. We use link rewiring algorithms
to generate the network with rich-club and the network
without rich-club, respectively. The basic idea is very
similar to the random rewiring method
\cite{Correlation_Science}, while the main difference is
that our new method only switches the links among rich
nodes and a small number of low-degree nodes. First we make
rich nodes fully connected to one another, so they form a
completely connected rich-club. Secondly, we completely
eradicate the edges among rich nodes, so that the network
has no rich-club.
\begin{figure}[htbp]
\includegraphics[width=0.48\textwidth]{fig_1.eps}
\caption{(Color online) (a) and (b) are the two connection patterns
for the four end nodes of a pair of links. (a) rich-club connection,
where one link connects to the two rich nodes and the other link
connects to the other low-degree nodes; (b) non-rich-club
connection, where one link connects to one rich node and one
low-degree node, and the other link connects to the two remaining
nodes. Using the link rewiring algorithms, we can obtain (c) the
network with rich-club, or (d) the network without rich-club.}
\label{fig1}
\end{figure}
Now we specify the rewiring algorithms. First, we make all rich nodes fully connected to generate a network with a significant rich-club. If there is already a link between two rich nodes, their structure remains unchanged [Fig. \ref{fig1}(a)]. If there is no link between two rich nodes, we perform the operation from Fig. \ref{fig1}(b) to \ref{fig1}(a). That is, we select two low-degree nodes that respectively connect to the two rich nodes but do not connect to each other [Fig. \ref{fig1}(b)]. Then we cut the two links between the rich nodes and their low-degree neighbors, connect the two rich nodes to each other, and connect the two low-degree nodes to each other [Fig. \ref{fig1}(a)]. Repeating this process until all rich nodes form a completely connected rich-club, we obtain the network with a fully connected rich-club [Fig. \ref{fig1}(c)].
Secondly, we completely eradicate the edges among rich nodes, so that the network has no rich-club property. If there is no link between two rich nodes, we do nothing [Fig. \ref{fig1}(b)]. If there is a link between two rich nodes, we perform the operation from Fig. \ref{fig1}(a) to \ref{fig1}(b). We randomly select a pair of low-degree nodes that connect to each other but do not connect to either of the two rich nodes [Fig. \ref{fig1}(a)]. Then we cut the link between the two low-degree nodes and the link between the two rich nodes, and let each rich node connect to one low-degree node [Fig. \ref{fig1}(b)]. Repeating the above process until the links among the rich nodes are completely eradicated, we obtain a network without the rich-club property [Fig. \ref{fig1}(d)].
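A minimal sketch of the two rewiring operations in Python (using the networkx library; the function names and the way candidate nodes are sampled are illustrative) is as follows:
\begin{verbatim}
import itertools, random
import networkx as nx

def build_rich_club(G, rich):
    # fully connect rich nodes; every degree is preserved
    for u, v in itertools.combinations(rich, 2):
        if G.has_edge(u, v):
            continue
        cands = [(x, y) for x in G[u] for y in G[v]
                 if x not in rich and y not in rich
                 and x != y and not G.has_edge(x, y)]
        if cands:
            x, y = random.choice(cands)
            G.remove_edges_from([(u, x), (v, y)])
            G.add_edges_from([(u, v), (x, y)])

def destroy_rich_club(G, rich):
    # eradicate rich-rich links; every degree is preserved
    rich_edges = [(u, v) for u, v in G.edges()
                  if u in rich and v in rich]
    for u, v in rich_edges:
        cands = [(x, y) for x, y in G.edges()
                 if x not in rich and y not in rich
                 and not any(G.has_edge(p, q)
                             for p in (u, v) for q in (x, y))]
        if cands:
            x, y = random.choice(cands)
            G.remove_edges_from([(u, v), (x, y)])
            G.add_edges_from([(u, x), (v, y)])
\end{verbatim}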
Because we use the rewiring method, the degree of every node in the original network remains exactly unchanged. Manipulating the rich-club connections induces only a small variation in the topological structure of the original network, so we can monitor how the subgraph frequencies are affected by the rich-club property. Furthermore, we can compare the subgraph ratio profiles of the original network, the network with a rich-club, and the network without a rich-club, to make a more reliable inference of whether the original network has a rich-club property.
\subsection{Motif clusters of rich nodes in non-rich-club and rich-club networks}
Each network will be scanned for all possible $n$-node subgraphs (we
choose $n=4$). In a network with a skewed degree distribution, rich
nodes have much higher degrees than the overwhelming majority, so
whether they connect to each other to form a rich-club will strongly
affect the frequencies of subgraphs. In fact, rich nodes can absorb
a very large number of subgraphs and form a motif cluster. For
example, triangles may not be distributed uniformly within a
scale-free network but tend to aggregate around the hubs, because a
node with $k$ links can participate in up to $k(k-1)/2$, i.e., on the
order of $k^2$, triangles
\cite{motif_pnas}. The aggregation of motifs into motif clusters is
important, because it implies that the potential functional
properties of the large number of subgraphs also need to be
evaluated at the level of subgraph clusters instead of only at the
level of single subgraphs.
\begin{figure}[htbp]
\includegraphics[width=0.48\textwidth]{fig_2.eps}
\caption{(Color online) Demonstration of the aggregation of
non-rich-club motifs when a network has no rich-club, and of the
aggregation of rich-club motifs when a network has a rich-club.}
\label{fig2}
\end{figure}
Exploring rich-club connections provides a new way to evaluate the
functional properties of abundant subgraphs at the level of subgraph
clusters. A few rich nodes usually take part in a very large number
of small subgraphs and they can form motif clusters in real-world
complex networks. Indeed, the organization of rich nodes can
prominently dominate the appearance of particular motifs. In the
non-rich-club network [Fig. \ref{fig2}(b)], rich nodes do not tend
to connect to each other, so the non-rich-club subgraphs [Fig.
\ref{fig2}(d)] will be more common. On the contrary, in the
rich-club network [Fig. \ref{fig2}(c)], rich nodes tend to connect
to each other, so the network will contain a larger number of
rich-club motifs [Fig. \ref{fig2}(e)]. In the original network
[Fig. \ref{fig2}(a)], rich nodes may or may not connect to each
other. By comparing the frequencies of motifs in the above
three networks, we can determine whether the original network has the
rich-club property.
It is obvious that, considering the subnetworks of rich nodes, the
frequencies of the non-rich-club motifs and/or rich-club motifs are
remarkably higher than those of the randomized versions of the
subnetworks. The inherent existence of two distinct classes of
subgraphs (non-rich-club motifs and rich-club motifs) in a
heterogeneous network demonstrates that, in contrast to the
homogeneous case, the highly abundant motifs cannot exist in
isolation but must naturally aggregate into subgraph clusters.
Specifically, in the network with a rich-club, the neighbors of a
highly connected node are linked to each other; therefore, the chance
that low-degree nodes participate in highly connected subgraphs is
slim. In a homogeneous network, however, all nodes are engaged in
only a few interactions, and the appearance of motifs is the
statistical average over the whole network, for there are no hubs
linking to a significantly higher number of other nodes to form
motif clusters.
\section{Results}
\subsection{Motif distributions in homogeneous and heterogeneous networks}
Table \ref{table1} lists the results for six undirected networks
(three real-world networks and three model networks),
arranged in order of increasing $k_{max}/k_s$. The structural
cutoff degree $k_s$ can be regarded as a first approximation of
the maximum degree within a scale-free network \cite{Degree_cutoff}.
Here $k_{max}/k_s$ is a convenient index, applicable to complex
networks with any degree distribution, that quantifies the proportion of
links (or degrees) rich nodes possess in comparison with the
remaining nodes \cite{Xu_unpublished}.
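As an illustration, the index is straightforward to compute; a short Python
sketch using \texttt{networkx} (the variable names are ours):
\begin{verbatim}
import networkx as nx

def heterogeneity_index(G):
    # k_max / k_s with k_s = sqrt(<k> n),
    # the structural cutoff degree
    degrees = [d for _, d in G.degree()]
    n = G.number_of_nodes()
    k_mean = sum(degrees) / n
    k_s = (k_mean * n) ** 0.5
    return max(degrees) / k_s

# a BA graph with n = 5000 and <k> = 6, as in Table I
G = nx.barabasi_albert_graph(5000, 3)
print(heterogeneity_index(G))  # of order one for BA
\end{verbatim}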
\begin{table}[htbp]
\centering \caption{Statistics of six undirected networks: number of
nodes $n$, average degree $\langle k\rangle$, exponent $\alpha$ of the degree
distribution if it follows a power law (``--'' otherwise),
structural cutoff degree $k_s=\sqrt{\langle k\rangle
n}$ \cite{Degree_cutoff}, and maximal degree $k_{max}$. SW is a
network generated by the small-world model \cite{SW}, PG is the
network of the US power grid \cite{BA}, BA is a network generated by
the scale-free model \cite{BA}, EPA is the network of pages
linking to www.epa.gov \cite{Pajek_data}, PFP is a network
generated by the model for the Internet topology \cite{PFP}, and AS
is the network of the Internet topology at the level of autonomous
systems \cite{ASdata}.}
\begin{ruledtabular}
\begin{tabular}{ c c c c c c c}
Network & SW & PG & BA & EPA & PFP & AS \\
\hline
$n$ & $5000$ & $4941$ & $5000$ & $4772$ & $5000$ & $5375$ \\
$\langle k\rangle$ & $6.0$ & $2.7$ & $6.0$ & $3.7$ & $6.0$ & $3.9$ \\
$\alpha$ & $-$ & $-$ & $3.0$ & $2.0$ & $2.2$ & $2.2$ \\
$k_{max}$ & $16$ & $19$ & $219$ & $175$ & $1259$ & $1193$ \\
$k_s$ & $173.2$ & $115.4$ & $173.2$ & $132.9$ & $173.2$ & $144.8$ \\
{$k_{max}/k_s$} & $0.09$ & $0.16$ & $1.26$ & $1.32$ & $7.26$ & $8.24$ \\
type & \multicolumn{2}{c}{$k_{max}\ll k_s$} & \multicolumn{2}{c}{$k_{max}\approx k_s$} & \multicolumn{2}{c}{$k_{max}\gg k_s$} \\
\end{tabular}
\end{ruledtabular}
\label{table1}
\end{table}
The low values of $k_{max}/k_s$ for SW and PG mean that these two
networks have a homogeneous degree distribution and the degrees of
rich nodes are close to those of the majority of nodes. A high value of
$k_{max}/k_s$, by contrast, indicates that the network has a heterogeneous
degree distribution and the degrees of a few rich nodes are far larger than
the rest, as for BA and EPA. In particular, PFP and AS not only have a
power-law degree distribution, but also possess a few superrich
nodes \cite{AS_Xu}, since $k_{max}\gg k_s$ in these two networks.
Although motifs are only local interaction patterns, the
distribution of motifs largely reflects the topological
properties of a network \cite{motif_pnas, Timeseries_pnas}. In
Table \ref{table2}, we list the percentage of the heterogeneous-motif,
the percentage of the homogeneous-motif, and the percentage of the sum
of the two, for all six networks.
The heterogeneous-motif is an unequal small structure: the blue
vertex represents a rich node, and the other three red vertices
represent low-degree nodes. In this non-equilibrium structure,
the three low-degree nodes all attach to the rich node, while
the low-degree nodes do not connect to each other. Obviously, the
rich node has the highest status among the four nodes, and this
structure should appear more often in a network with a heterogeneous
degree distribution. In particular, as in the case of many real-world
networks, subgraphs with a central node are abundant in scale-free
networks. The homogeneous-motif is a chain structure, in which the
statuses of the four nodes are more nearly equal. We expect
this structure to occur frequently in a network with a
homogeneous degree distribution.
As predicted, the results in Table \ref{table2} show that
the percentages of the homogeneous-motif for SW [$61.9\%$] and PG
[$59.4\%$] are larger than those for the networks with a heterogeneous
degree distribution. The subset of $n$-node subgraphs in a heterogeneous
network often contains a central node, so the heterogeneous-motif
occurs more commonly in heterogeneous networks, as for BA
[$64.5\%$], EPA [$80.5\%$], PFP [$92.8\%$] and AS [$96.0\%$]. In
summary, as the value of $k_{max}/k_s$ increases, the ratio of
heterogeneous-motif to homogeneous-motif increases too.
\begin{table}[htbp]
\centering \caption{The first row is the percentage of
heterogeneous-motif, the second row is the percentage of
homogeneous-motif, the third row is the ratio of heterogeneous-motif
and homogeneous-motif, and the fourth row is the percentage of the
sum of the two subgraphs. SW, PG, BA, EPA, PFP and AS represent the
same networks in Table \ref{table1}.}
\begin{ruledtabular}
\begin{tabular}{ c | c c | c c | c c}
Motif & SW & PG & BA & EPA & PFP & AS \\
\hline
\includegraphics[height = 7 mm ]{table_2_a} & $9.7\%$ & $31.3\%$ & $64.5\%$ & $80.5\%$ & $92.8\%$ & $96.0\%$ \\
\hline
\includegraphics[height = 7 mm ]{table_2_b} & $61.9\%$ & $59.4\%$ & $34.6\%$ & $18.5\%$ & $4.7\%$ & $3.1\%$ \\
\hline
\includegraphics[height = 7 mm ]{table_2_c} & $0.16$ & $0.53$ & $1.86$ & $4.35$ & $19.93$ & $31.40$ \\
\hline
\includegraphics[height = 7 mm ]{table_2_d} & $71.6\%$ & $90.7\%$ & $99.1\%$ & $99.0\%$ & $97.5\%$ & $99.1\%$ \\
\end{tabular}
\end{ruledtabular}
\label{table2}
\end{table}
The above results indicate that rich nodes in homogeneous networks
(e.g., SW and PG) have only a very limited effect on the whole
network, for all nodes (including rich nodes) in such networks are
engaged in only a few interactions. Rich-club connections are more
involved in the heterogeneous-motif in a heterogeneous network,
since a node with higher degree has a greater chance of
participating in this structure. In a heterogeneous network
(e.g., BA, EPA, PFP and AS), a few rich nodes with many more links
than the overwhelming majority can absorb a very large number of
subgraphs and form motif clusters, which makes rich-club connections
more influential over the whole network.
The percentage of the sum of the heterogeneous-motif and the
homogeneous-motif is very high for all networks (up to
$99.1\%$), which means other types of motifs are relatively sparse
compared with these two. Therefore, to form a
specific functional block, the absolute frequency of a particular
subgraph need not be very large; it suffices that
the frequency of the motif in the original network is
statistically higher than in its randomized version
\cite{Motif}. Moreover, in view of the difficulty of forming
specific functional blocks in a randomized network, the sparse
distribution of the other motifs gives us a chance to control the
appearance of small functional subgraphs in real-world networks by
manipulating rich-club connections.
\subsection{Superfamilies of non-rich-club and rich-club networks}
Because undirected networks have only two types of triads (unclosed
triple and triangle), we only analyze the profile of the six types
of undirected connected tetrads ($4$-node motifs). The normalized
$Z$ scores of tetrads show a significant dependence on the network
size, so we use the abundance of each subgraph $i$ relative to
random networks \cite{Superfamily}:
\begin{equation}
\Delta_i=\frac{Nreal_i - \langle Nrand_i \rangle}{Nreal_i+ \langle
Nrand_i \rangle + \varepsilon}, \label{eq2}
\end{equation}
where $\varepsilon=4$ ensures that $\mid \Delta_i \mid $ is not
misleadingly large when the subgraph appears very few times in both
the real and random networks. The subgraph ratio profile (SRP) is
the vector of the $\Delta_i$ normalized to length 1:
\begin{equation}
SRP_i={\Delta_i}/{(\sum{{\Delta_i}^2})^{1/2}}. \label{eq3}
\end{equation}
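A direct implementation of Eqs. (\ref{eq2}) and (\ref{eq3}) is short; the
following Python sketch takes the tetrad counts of the real network and the
mean counts over its randomized versions (the example numbers are purely
illustrative, not data from this work):
\begin{verbatim}
import numpy as np

def subgraph_ratio_profile(n_real, n_rand, eps=4.0):
    n_real = np.asarray(n_real, dtype=float)
    n_rand = np.asarray(n_rand, dtype=float)
    delta = (n_real - n_rand) / (n_real + n_rand + eps)
    return delta / np.sqrt(np.sum(delta ** 2))

# six undirected connected tetrads -> length-6 vectors
srp = subgraph_ratio_profile([120, 80, 30, 12, 5, 1],
                             [100, 90, 25, 10, 8, 2])
\end{verbatim}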
Network motifs are patterns of interconnections occurring in
complex networks at numbers significantly higher than in randomized
networks \cite{Motif}. The motif pattern reflects the local
structural properties of complex networks and thus can be used to
classify networks. If different types of networks share a similar
SRP, these networks can be classified into the same
``superfamily'' \cite{Superfamily}. The networks in the same
superfamily share not only particular types of motifs, but also
very similar proportions of all types of subgraphs.
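In Ref. \cite{Superfamily}, SRP vectors are compared by their correlation
coefficient; a minimal sketch of this classification criterion (the
threshold value is our assumption, not prescribed by the method):
\begin{verbatim}
import numpy as np

def same_superfamily(srp_a, srp_b, threshold=0.9):
    # two networks are assigned to the same superfamily
    # when their SRP vectors are strongly correlated
    r = np.corrcoef(srp_a, srp_b)[0, 1]
    return r > threshold
\end{verbatim}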
Here we show the SRP results for the original network, the network
with rich-club, and the network without rich-club in Fig.
\ref{fig3}. If the three networks belong to the same
superfamily, the rich-club property has a weak effect on
the original network, and this result shows the network is
homogeneous. If the three networks belong to different
superfamilies, rich-club connections can strongly
affect the structure and function of the original network, which
indicates that the network is heterogeneous. Furthermore,
according to whether the original network belongs to the same
superfamily as the network with rich-club or the network without
rich-club, we can judge whether the original network has the rich-club
property.
\begin{figure}[htbp]
\includegraphics[width=0.48\textwidth]{fig_3.eps}
\caption{(Color online) The subgraph ratio profile (SRP) for six
undirected networks. SW, PG, BA, EPA, PFP and AS represent the same
networks in Table \ref{table1}.} \label{fig3}
\end{figure}
The networks SW and PG have a homogeneous degree distribution, so
the degrees of rich nodes in these two networks are not significantly
higher than those of the other nodes. Therefore, as shown in Figs.
\ref{fig3}(a) and \ref{fig3}(b), whether the two networks have
rich-club properties does not have any influence on the SRP.
Moreover, the original network and the networks with and without
rich-club all belong to the same superfamily. These results indicate
that whether a homogeneous network has the rich-club property is not
very important, and rich-club connections cannot control the
functions of this type of network.
Because the networks BA and EPA have a heterogeneous degree
distribution, rich nodes possess many more links than the
overwhelming majority. Therefore, whether the two networks have
rich-club properties can greatly affect the SRP. As
shown in Figs. \ref{fig3}(c) and \ref{fig3}(d), the original network
and the network with rich-club do not belong to the same
superfamily. Conversely, the original network and the network
without rich-club belong to the same superfamily, so neither BA nor
EPA has the rich-club property.
The networks PFP and AS not only have a heterogeneous
degree distribution but also possess a few superrich nodes, so whether
these two networks have a rich-club affects the SRP most
significantly. The original network and the network without
rich-club do not belong to the same superfamily. Since the original
PFP and the network with rich-club belong to the same superfamily,
PFP has the rich-club property, as shown in Fig. \ref{fig3}(e).
We can essentially say that AS has the rich-club property, for the
original AS has an SRP very similar to that of the network with
rich-club, except for motif $6$ (the $4$-node clique) in Fig. \ref{fig3}(f).
The inconsistency of motif $6$ between the original network and
the network with rich-club may be the origin of the arguments on whether
the Internet topology has a rich-club property
\cite{Richclub_origin, Colizza_richclub, NP_comment, Constraint}.
\section{Conclusions}
In conclusion, we find that the influence of rich-club connections
strongly depends on the degree distribution of a complex network. Our
findings show that in a homogeneous network, whether the network has
a rich-club is not very important for its structure and
function. Rich-club connections in a heterogeneous network, by
contrast, have crucial implications, for they can partially optimize and
control the function of the whole network.
Our new framework for measuring the subgraph ratio profile can
provide a more impartial judgement on whether a network has a
rich-club. Previous studies paid more attention to finding whether
the links among rich nodes appear more frequently in the original
network compared with its randomized counterparts
\cite{Colizza_richclub, NP_comment}, while the actual influence of
rich-clubs in networks with different degree distributions had not been
studied. Our approach, which is based on the effect of the rich-club
on network structure and function, is therefore more advanced.
We demonstrate that the strong ties between the rich-club property and
local (subgraph-based) structure underscore the importance of
understanding complex networks as fully integrated
systems. Indeed, the abundance of certain kinds of local interaction
patterns reflects the rich-club property of a network, raising
intriguing questions about the role of local events in shaping a
network's overall behavior \cite{motif_pnas}. These results indicate
that the analysis described here may have an impact on our
understanding of other types of subgraphs (e.g., cliques
\cite{Clique} and cycles \cite{Cycle}) in complex networks.
Our results show the significance of the rich-club property and
motif distributions in modeling and designing real-world networks
\cite{Network_motif_distribution}. An appropriate model should have
structure and function similar to the real-world network. To meet
this demand, the model can be designed from the basic motifs or the
subgraph ratio profile, which can be easily controlled through the
rich-club property.
Our findings also deepen our understanding of the evolution of
dynamical networks. The existence of dense rich-club motifs
and/or non-rich-club motifs in real-world networks may be a unifying
property of evolved systems, so it is interesting to understand the
rich-club concept from the perspective of network evolution. We
conjecture that the local functional blocks and
the rich-club property share a common origin, because neither the
density and topology of subgraphs nor the rich-club property can be
dissociated from the evolution of the overall network. Following the
framework of this work, we will strive to bridge the gap between the
local topologies of a network and its global statistical features in
the future.
\begin{acknowledgments}
This work was supported by PolyU Postdoctoral Fellowships Scheme
(G-YX0N \& G-YX4A). X.-K. Xu and J. Zhang also acknowledge the
National Natural Science Foundation of China under Grant No.
61004104.
\end{acknowledgments}
\section{Introduction}
There has been considerable recent interest in inverse scattering problems for nonlinear partial differential equations (PDEs)~\cite{assylbekov_1,assylbekov_2,carstea,imanuvilov,isakov_1,isakov_2,isakov_3,isakov_4,kang,kurylev,lassas}. There
are numerous applications in various applied fields ranging from optical imaging to seismology. In general terms, the problem to be considered is to reconstruct the coefficients of a nonlinear PDE from boundary measurements. As in the case of inverse problems for linear PDEs, the fundamental questions relate to the
uniqueness, stability and reconstruction of the unknown coefficients. In contrast to the linear case (which is very well studied), the study of inverse problems for nonlinear PDEs is still relatively unexplored. There are uniqueness and stability results for a variety of semilinear and quasilinear equations. Reconstruction methods for such problems are just beginning to be developed~\cite{carstea,griesmaier,kang}.
In this paper, we consider the inverse problem of recovering the coefficients of a nonlinear elliptic PDE with cubic nonlinearity. This problem appears in optical physics, where the cubic term arises in the study of the Kerr effect---a nonlinear process where self-focusing of light is observed~\cite{boyd}. We show that it is possible to reconstruct the coefficients of the linear and nonlinear terms in a Kerr medium from boundary measurements. This result holds under a smallness condition on the measurements, which also guarantees the stability of the recovery. The reconstruction is based on inversion of the Born series, which expresses the solution to the inverse problem as an explicitly computable functional of the measured data. We note that this method has been extensively studied for linear PDEs~\cite{review}. The extension to the nonlinear setting involves a substantial reworking of the theory, especially the combinatorial structure of the Born series itself. We validate our results with numerical simulations that demonstrate the convergence of the inverse Born series under the expected smallness conditions.
The remainder of this paper is organized as follows. In section 2, we introduce the forward problem and state sufficient conditions for its solvability. The Born series is studied in section 3, the combinatorial structure of the series is characterized, and sufficient conditions for convergence are established. We also derive various estimates that are later used in section 4 to obtain our main result on the convergence of the inverse Born series. In section 5, we present the results of numerical reconstructions of a two-dimensional medium. Our conclusions are presented in section 6. The Appendix contains the proof of Proposition~1.
\section{Forward problem}
We consider the Kerr effect in a bounded domain $\Omega$ in $\mathbb{R}^d$ for $d>2$ with a smooth boundary $\partial\Omega$.
The scalar field $u$ obeys the nonlinear PDE
\begin{align}
\label{baseequation}
\Delta u + k^2(1 + \alpha(x))u + k^2\beta(x) |u|^2 u &= 0 \quad \text{ in } \quad \Omega \ , \\
\frac{\partial u}{\partial \nu } &= g \quad \text{ on } \quad \partial\Omega \ ,
\end{align}
where $k$ is the wavenumber and $\nu$ is the unit outward normal to $\partial\Omega$. The coefficients $\alpha$ and $\beta$ are the linear and nonlinear susceptibilities, respectively~\cite{boyd} and are taken to be real valued, as is the boundary source $g$. It follows that $u$ is real valued so that $|u|^2u= u^3$. More generally, $u$ is complex valued, in which case our results carry over with small modifications.
We now consider the solution $u_0$ to the linear problem
\begin{align}
\label{backgroundequation}
\Delta u_0 + k^2u_0 &= 0 \quad \text{ in } \quad \Omega \ , \\
\frac{\partial u_0}{\partial \nu } &= g \quad \text{ on } \quad \partial\Omega \ .
\end{align}
Following standard procedures, we find that the field $u$ obeys the integral equation
\begin{equation}\label{integralequation}
u(x) = u_0(x) - k^2\int_\Omega G(x, y) \left(\alpha(y)u(y) + \beta(y)u^3(y)\right)dy \ .
\end{equation}
Here the Green's function $G$ obeys
\begin{align}
\Delta_x G(x, y) + k^2 G(x, y) &= \delta(x-y) \quad \text{ in } \quad \Omega \ , \\
\frac{\partial G }{\partial \nu_y} &= 0 \quad \text{ on } \quad \partial\Omega \ .
\end{align}
We define the nonlinear operator $T: C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ by
\begin{equation}\label{Tdef}
T(u) = u_0 - k^2\int_\Omega G(x, y) \left(\alpha(y)u(y) + \beta(y) u^3(y) \right) dy.
\end{equation}
We note that if $u\in C(\overline{\Omega})$ is a fixed point of $T$, that is $u = T(u)$, then $u$ satisfies equation (\ref{integralequation}).
The following result provides conditions for existence of a unique solution to (\ref{integralequation}).
\begin{proposition}
\label{bounds}
Let $T: C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ be defined by (\ref{Tdef}) and define $\mu$ by
\begin{equation}\label{mudef1}
\mu = k^2\sup_{x\in \Omega} \int_\Omega | G(x,y) | dy.
\end{equation}
If there exists $\gamma > 1/2 $ such that $$\Vert\alpha\Vert_\infty < \frac{2\gamma-1}{2\mu (1+\gamma)}$$ and $$\Vert\beta\Vert_\infty < \frac{1}{2\mu \Vert u_0\Vert^2(1+\gamma)^3},$$ then $T$ has a unique fixed point on the ball of radius $\gamma\| u_0\|_\infty$ about $u_0$ in $C(\overline{\Omega})$.
\end{proposition}
The proof is presented in Appendix \ref{fixedpointappendix}.
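The smallness conditions of Proposition \ref{bounds} are easy to evaluate
in practice; a minimal Python sketch (the function name and example values
are ours):
\begin{verbatim}
def admissible_bounds(mu, u0_norm, gamma):
    # upper bounds on ||alpha|| and ||beta|| from
    # Proposition 1, for a given gamma > 1/2
    assert gamma > 0.5
    a_max = (2 * gamma - 1) / (2 * mu * (1 + gamma))
    b_max = 1 / (2 * mu * u0_norm ** 2 * (1 + gamma) ** 3)
    return a_max, b_max

# e.g. mu = 0.5, ||u_0|| = 1, gamma = 1:
# ||alpha|| < 1/2 and ||beta|| < 1/8
print(admissible_bounds(0.5, 1.0, 1.0))
\end{verbatim}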
\section{Born series}
The forward problem is to compute the field $u$ as measured on $\partial\Omega$ when $g$ corresponds to a point source on $\partial\Omega$.
The solution to the forward problem is derived by iteration of the integral equation (\ref{integralequation}). We thus obtain
\begin{equation} \label{born_series}
\phi = K_1 (\zeta) + K_2 (\zeta, \zeta) + K_3( \zeta , \zeta, \zeta )+\cdots \ ,
\end{equation}
where $\phi=u-u_0$ and $\zeta := (\alpha, \beta)$. The forward operators $$K_n: [L^\infty(\Omega)]^{2n}\rightarrow C(\partial\Omega\times\partial\Omega)$$ are constructed below. We will refer to (\ref{born_series}) as the Born series.
We note that Proposition~\ref{bounds} guarantees convergence of the Born series.
The forward operator $K_n$ is an $n$-linear operator (multilinear of order $n$) on $[ L^\infty(\Omega)^2]^n$.
In the following, we do not denote the dependence of $u_0$ on the source explicitly. The first term in the fixed point iteration is
\begin{equation}
u_1(x) : = T(u_0)(x) = u_0(x) - k^2\int_\Omega G(x, y) \left[\alpha(y) u_0(y)+ \beta(y) u_0^3(y)\right]dy \end{equation}
and thus $K_1$ is defined by
\begin{equation} K_1( \zeta) (x) = -k^2\int_\Omega G(x, y) \left[\alpha(y) u_0(y)+ \beta(y) u_0^3(y)\right]dy .
\end{equation}
Next we observe that
\begin{equation}
u_2(x) : = T(u_1)(x) = u_0(x) - k^2\int_\Omega G(x, y) \left[\alpha(y) u_1(y) + \beta (y) u_1^3(y)\right]dy.
\end{equation}
Evidently, expansion of $u_1^3$ leads to terms which are multilinear in $\alpha$ and $\beta$. Subsequent iterates become progressively more complicated. To handle this, we introduce the operators $a,b: C(\overline{\Omega}) \times [ L^\infty(\Omega)]^2\rightarrow C(\overline{\Omega})$, defined by
\begin{equation} \label{neq} a(v, \zeta) = -k^2\int_{\Omega} G(x, y) \alpha (y) v(y) dy , \end{equation}
and
\begin{equation} \label{beq} b(v, \zeta ) = -k^2 \int_{\Omega} G(x, y) \beta (y) v(y) dy . \end{equation}
The above operators have tensor counterparts which are defined as follows.
\begin{definition} \label{def1} Given $T_l = T_l (\zeta_1,\cdots,\zeta_l )$, a multilinear operator of order $l$, define the multilinear operators $BT_l$ and $AT_l$ of order $l+1$ by
$$ BT_l(\zeta_1, \ldots , \zeta_l, \zeta_{l+1} )= b( T_l (\zeta_1, \ldots , \zeta_l ) , \zeta_{l+1} )$$
and
$$ AT_l(\zeta_1, \ldots , \zeta_l, \zeta_{l+1} )= a( T_l (\zeta_1, \ldots , \zeta_l ) , \zeta_{l+1} )$$
where $b$ and $a$ are given by (\ref{beq}) and (\ref{neq}) respectively.
\end{definition}
We will also need a tensor product of multilinear operators.
\begin{definition} \label{def2}
Given $T_j$ and $T_l$, multilinear operators of order $j$ and $l$ respectively, define the tensor product $T_l\otimes T_j$ by
$$ T_l\otimes T_j (\zeta_1, \ldots , \zeta_l, \zeta_{l+1},\dots, \zeta_{l+j} )= T_l (\zeta_1, \ldots , \zeta_l ) T_j (\zeta_{l+1},\dots, \zeta_{l+j} ),$$
and note that $T_l\otimes T_j$ is a multilinear operator of order $l+j$.
\end{definition}
Note that the tensor product of multilinear operators does not commute. Tensor products are extended to sums of multilinear operators by bilinearity of the tensor product, and the tensor product is also associative. In this notation, we see that if $v$ is a sum of multilinear operators, then
$$ Tv = u_0 + A v+ B v\otimes v \otimes v $$
yields another sum of multilinear operators ($u_0$ is an order zero operator).
\begin{lemma} Viewing the $n$th iterate $u_n$ as a sum of multilinear operators, for any $n$ we have that
\begin{equation} u_{n} = u_{n-1} + \mbox{multilinear operators of order}\ \geq n . \end{equation}
\label{termslem}\end{lemma}
\begin{proof}
We will prove this by induction. For the base case $n=1$, we have that $u_1 = u_0 + Au_0+Bu_0\otimes u_0 \otimes u_0$, so the statement holds.
Now assume that the statement holds for $u_{n-1}$. Then
\begin{eqnarray} u_n
&=& u_0 + Au_{n-1} + Bu_{n-1}\otimes u_{n-1} \otimes u_{n-1} . \end{eqnarray}
By inductive hypothesis,
$$u_{n-1} = u_{n-2}+ w $$
where $w$ is a sum of operators of order at least $n-1$. Hence we have that
\begin{multline} u_{n-1}\otimes u_{n-1} \otimes u_{n-1} = u_{n-2}\otimes u_{n-2}\otimes u_{n-2} + u_{n-2}\otimes u_{n-2} \otimes w \\+ w\otimes u_{n-2}\otimes u_{n-2} + u_{n-2}\otimes w \otimes u_{n-2} \\ + u_{n-2} \otimes w\otimes w + w\otimes u_{n-2} \otimes w + w\otimes w\otimes u_{n-2}+ w\otimes w\otimes w , \end{multline}
so that $$u_{n-1} \otimes u_{n-1} \otimes u_{n-1}= u_{n-2}\otimes u_{n-2} \otimes u_{n-2} + \mbox{multilinear operators of order} \geq n-1.$$
Applying $A$ to $u_{n-1}$ and $B$ to $u_{n-1}\otimes u_{n-1} \otimes u_{n-1}$, we increase the order of each by one. Hence we have that
\begin{eqnarray} u_n &=& u_0 + Au_{n-2} + Bu_{n-2}\otimes u_{n-2} \otimes u_{n-2} + \mbox{terms of degree} \geq n \\
&=& u_{n-1} + \mbox{terms of degree} \geq n . \end{eqnarray}
The result follows from induction. \end{proof}

Given the previous result, we can now define the forward operators.
\begin{definition} The $n$th term of the forward series, $K_n (\zeta,\ldots, \zeta) $, is defined to be the sum of all multilinear operators of order exactly $n$ in the $n$th iterate $u_n$.
\end{definition}
\subsection{General formula for the forward operators}
Using our tensor notation, the forward series is given by iterations of
$$Tv= u_0 + Av + B v\otimes v\otimes v.$$
Given $u_0$, we have
\begin{eqnarray} u_1 &=& Tu_0 = u_0 + Au_0 + B u_0 \otimes u_0 \otimes u_0 , \nonumber \\ u_2 &=& Tu_1 = u_0 + Au_1 + B u_1 \otimes u_1 \otimes u_1, \nonumber \\ u_{n+1} &=& Tu_n = u_0 + Au_n + B u_n \otimes u_n \otimes u_n. \nonumber \end{eqnarray}
Define $U_n$ to be the sum of the first $n$ forward operators, that is,
\begin{eqnarray} U_n &=& \sum_{i=0}^n K_i(\zeta_1,\ldots\zeta_i) \nonumber \\
&=& u_0 + \sum_{i=1}^n K_i(\zeta_1,\ldots,\zeta_i). \nonumber \end{eqnarray}
We know from Lemma \ref{termslem} that
$$ u_n = U_n + w,$$
where $w$ is a sum of multilinear operators, all of order $> n$. To find $U_{n+1}$, we use the iteration
$$ u_{n+1} = u_0 + A(U_n+w)+ B (U_n + w) \otimes (U_n + w) \otimes (U_n+ w) .$$
We know (also from Lemma \ref{termslem} ) that $K_{n+1}$ will be the sum of all terms here which are of order $n+1$. Since $w$ contains only terms of order $\geq n+1$, after applying $A$ or $B$, the result will be of higher order and hence will not be included in $K_{n+1}$. So any term containing $w$ after expanding out the tensor product can be dropped, and we have that all terms of $K_{n+1}$ will be contained in the sum $$AU_n + B U_n\otimes U_n \otimes U_n.$$ Since $A$ and $B$ each add one to the order, $K_{n+1}$ will consist of $AK_n$ and all terms of the form $$B K_{i_1}\otimes K_{i_2} \otimes K_{i_3} $$
where the ordered triplets $(i_1, i_2, i_3)$ are such that $i_1+i_2+i_3 = n$. Hence we have derived the following:
\begin{eqnarray} K_0 &=& u_0 ,\nonumber \\ K_1 &=& Au_0 + B u_0\otimes u_0\otimes u_0 \nonumber, \\
K_{n+1} &=& AK_n + B\sum_{ \substack{ (i_1, i_2, i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} K_{i_1}\otimes K_{i_2}\otimes K_{i_3}. \label{Kformula} \end{eqnarray}
We note that the number of such ordered triples in the above sum is $$C(n):= n(n+1)/2 + (n+1). $$
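The recursion (\ref{Kformula}) and the count $C(n)$ are simple to verify
numerically; a short Python check (ours):
\begin{verbatim}
def triples(n):
    # ordered (i1, i2, i3) with i1 + i2 + i3 = n, i_j >= 0
    return [(i1, i2, n - i1 - i2)
            for i1 in range(n + 1)
            for i2 in range(n - i1 + 1)]

for n in range(10):
    assert len(triples(n)) == n * (n + 1) // 2 + (n + 1)
\end{verbatim}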
\subsection{Bounds on the forward operators.}
In order to analyze the inverse Born series, we will need bounds for the forward operators $K_i$.
We will see that to apply existing convergence results for the inverse Born series we need boundedness of the operators as multilinear forms. We use the notation $| \cdot |_\infty $ for the bound on a multilinear operator of order $n$, as follows:
\begin{definition} For any multilinear operator $K$ of order $n$ on $[L^\infty(\Omega)]^{2n}$, we define
$$ | K |_\infty = \sup_{\substack{ \zeta_1,\ldots,\zeta_n }} { \| K(\zeta_1,\dots\zeta_n) \| \over{ \| \zeta_1\|\cdots\| \zeta_n\|}} . $$
\end{definition} Note that, for two multilinear operators $T_1$ and $T_2$ of the same order, we have the triangle inequality
$$| T_1+T_2|_\infty \leq | T_1 |_\infty + |T_2|_\infty.$$
\begin{lemma} The forward operator $K_n$ given by (\ref{Kformula}) is a bounded multilinear operator from $[L^\infty(\Omega)]^{2n}$
to $C(\partial\Omega\times\partial\Omega)$ and
\begin{equation}\label{Knbound} | K_n |_\infty \leq \nu_n \mu^{n} \end{equation}
where \begin{equation}\label{mudef} \mu = k^2\sup_{x\in \Omega} \int_\Omega | G(x,y) | dy, \end{equation}
$$ \nu_0 = \| u_0 \|_{C(\overline{\Omega}\times\partial\Omega )}, $$
and for all $n\geq 0$,
\begin{equation} \label{nudef} \nu_{n+1} = \nu_n + \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} \nu_{i_1}\nu_{i_2}\nu_{i_3} . \end{equation}
\end{lemma}
\begin{proof} We first note that for the product operators in Definitions \ref{def1} and \ref{def2}, we have $$ | BT_l |_\infty \leq \mu | T_l |_\infty , $$ $$ | AT_l |_\infty \leq \mu | T_l |_\infty , $$ and
$$ | T_l \otimes T _j |_\infty \leq | T_l |_\infty | T_j |_\infty. $$
The proof then proceeds by induction. The base case holds trivially, with $| K_0 |_\infty = \nu_0 $. Assume that for each $i\leq n$, we have
$$ | K_i |_\infty \leq \nu_i \mu^i.$$ Using (\ref{Kformula}), we obtain
\begin{eqnarray} | K_{n+1} |_\infty &\leq& | AK_n |_\infty + | B \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} K_{i_1}\otimes K_{i_2}\otimes K_{i_3} |_\infty \nonumber \\ &\leq& \mu | K_n |_\infty + \mu \sum_{ \substack{ (i_1,i_2,i_3)\\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} | K_{i_1} |_\infty | K_{i_2}|_\infty | K_{i_3} |_\infty \nonumber \end{eqnarray}
which gives, by the inductive hypothesis
\begin{eqnarray} | K_{n+1} |_\infty &\leq& \nu_n \mu^{n+1} + \mu^{n+1} \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} \nu_{i_1}\nu_{i_2}\nu_{i_3} \nonumber \\
&=& \nu_{n+1}\mu^{n+1}. \nonumber \end{eqnarray}
\end{proof}
\begin{lemma} \label{nubound} For the sequence $\{ \nu_n\} $ given by (\ref{nudef}), there exist constants $K$ and $\nu$ (both depending on $\nu_0$ but independent of $n$) such that for any $n\geq 0$, $$ {\nu_n} \leq \nu K^n .$$
\end{lemma}
\begin{proof} To prove this, we consider the generating function
$$ P(x) = \sum_{n=0}^\infty \nu_n x^n .$$
We first note that it suffices to prove that this power series has a positive radius of convergence, since if this is the case, then for some positive $x$ the terms $\nu_n x^n
\rightarrow 0 $. In particular, they are bounded by some $\nu$, which would imply that $$\nu_n \leq \nu (1/x)^n.$$
We now show that $P(x)$ is analytic in some nontrivial interval around zero. Consider, formally for now, the cube of $P$,
\begin{eqnarray} (P(x))^3 &=& \sum_{ \substack{ i_1,i_2,i_3 = 0,\ldots , \infty } } x^{i_1} x^{i_2} x^{i_3} \nu_{i_1}\nu_{i_2}\nu_{i_3} \nonumber \\
&=& \sum_{n=0}^\infty f_n x^n \nonumber \end{eqnarray}
where $$ f_n = \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} \nu_{i_1}\nu_{i_2}\nu_{i_3} ,$$
which is exactly as appears in (\ref{nudef}).
Now, we multiply (\ref{nudef}) by $x^n$ and sum to obtain
$$ \sum_{n=0}^\infty \nu_{n+1} x^n = \sum_{n=0}^\infty \nu_n x^n + \sum_{n=0}^\infty f_n x^n.$$
One checks that the left hand side is simply $(P(x) - \nu_0)/x$, and so the above yields
$$ (P(x) - \nu_0)/x = P(x) + (P(x))^3 . $$
So we have
\begin{equation}\label{polynomial} x(P(x))^3 + (x-1) P(x) + \nu_0 =0. \end{equation}
This polynomial equation for $P$ is singular at $x=0$, since the leading coefficient vanishes there, so it is not immediately clear that it admits an analytic solution at $x=0$. However, if we differentiate with respect to $x$, we obtain
\begin{equation} \label{ode} P^\prime (x) = - {(P(x))^3+P(x) \over{3x(P(x))^2 + x-1 }} \end{equation}
with $P(0) = \nu_0$. Since the right-hand side is an analytic function of $x$ and $P$ in a neighborhood of $(0,\nu_0)$, the ODE (\ref{ode}) with this initial condition has a unique analytic solution in a neighborhood of $x=0$ (see for example Theorem 4.1 of Teschl \cite{Te}). Integration of (\ref{ode}) combined with the initial condition implies that this analytic solution satisfies (\ref{polynomial}), and hence its coefficients must satisfy (\ref{nudef}).
\end{proof}
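The recursion (\ref{nudef}) is also easy to evaluate directly; the following
Python sketch (ours) computes $\nu_n$ and exhibits the geometric growth
guaranteed by Lemma \ref{nubound}:
\begin{verbatim}
def nu_sequence(nu0, n_max):
    nu = [nu0]
    for n in range(n_max):
        f_n = sum(nu[i1] * nu[i2] * nu[n - i1 - i2]
                  for i1 in range(n + 1)
                  for i2 in range(n - i1 + 1))
        nu.append(nu[n] + f_n)
    return nu

nu = nu_sequence(1.0, 20)
# successive ratios nu_{n+1} / nu_n settle toward a
# constant, consistent with nu_n <= nu * K^n
print([nu[k + 1] / nu[k] for k in range(15, 20)])
\end{verbatim}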
\begin{proposition} \label{forwardopbounds} The forward operator $K_n$ given by (\ref{Kformula}) is a bounded multilinear operator from $[L^\infty(\Omega)]^{2n}$ to $C(\overline{\Omega}\times\partial\Omega)$, and its bound satisfies
\begin{equation} | K_n |_\infty \leq \nu( K \mu)^n ,\end{equation}
where $\mu$ is given by (\ref{mudef}) and $\nu,K$, both depending on
$ \nu_0 = \| u_0 \|_{C(\overline{\Omega}\times\partial\Omega )}$, are as in Lemma \ref{nubound}.
\begin{corollary} The forward Born series
$$ u = u_0 + \sum_{n=1}^\infty K_n (\zeta,\ldots,\zeta) $$
where $K_n$ are given by (\ref{Kformula}) converges in $C(\overline{\Omega})$ for
$$ \| \zeta\|_\infty < {1\over{K\mu}} , $$
where $\mu$ is given by (\ref{mudef}) and $\nu,K$, both depending on
$ \nu_0 = \| u_0 \|_{C(\overline{\Omega})}$, are as in Lemma \ref{nubound}.
\end{corollary}
\section{Inverse Born Series}
The inverse problem is to reconstruct the coefficients $\alpha$ and $\beta$ from measurements of the scattering data $\phi$. We proceed by recalling that the inverse Born series is defined as
\begin{equation}\label{inversedefinition}
\tilde{\zeta} = \mathcal{K}_1 \phi + \mathcal{K}_2 (\phi) + \mathcal{K}_3( \phi) + \cdots \ ,
\end{equation}
where the data $\phi \in C(\partial\Omega\times\partial\Omega).$ The inverse series was analyzed in \cite{moskow_1} and later studied in \cite{HoSc}. The inverse operator $\mathcal{K}_m$ can be computed from the formulas
\begin{align}
\label{inv_operators}
\mathcal{K}_1 (\phi) &= K_1^{+} (\phi),\\
\mathcal{K}_2(\phi) &=-\mathcal{K}_1\left(K_2 (\mathcal{K}_1(\phi),\mathcal{K}_1(\phi))\right),\\
\mathcal{K}_m(\phi) &= -\sum_{n=2}^{m}\sum_{i_1+\cdots+i_n = m} \mathcal{K}_1{K}_n \left( \mathcal{K}_{i_1}(\phi), \dots, \mathcal{K}_{i_n}(\phi) \right) .
\end{align}
Here $K_1^+$ denotes a regularized pseudoinverse of $K_1$, and $\tilde{\zeta}$ is the series sum, an approximation to $\zeta$, when it exists.
Recall that this inverse series requires forward solves for the background problem only (i.e., applying the forward series operators), and requires a pseudoinverse and regularization of the first linear operator only: $ \mathcal{K}_1 = K_1^+$.
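A schematic Python implementation of the recursion (\ref{inv_operators}) is
below; here \texttt{K1\_pinv} applies the regularized pseudoinverse $K_1^+$
and \texttt{K[n]} applies the forward operator $K_n$ to a tuple of arguments,
both placeholders for problem-specific solvers (the enumeration of
compositions is exponential in $m$ and is meant only to mirror the formulas):
\begin{verbatim}
from itertools import product

def inverse_born_terms(phi, K, K1_pinv, M):
    # returns calK_1(phi), ..., calK_M(phi)
    cal = {1: K1_pinv(phi)}
    for m in range(2, M + 1):
        acc = 0
        for n in range(2, m + 1):
            # compositions i_1 + ... + i_n = m, i_j >= 1
            for idx in product(range(1, m), repeat=n):
                if sum(idx) == m:
                    acc = acc + K[n](*[cal[i] for i in idx])
        cal[m] = -K1_pinv(acc)
    return [cal[m] for m in range(1, M + 1)]
\end{verbatim}
By linearity of $\mathcal{K}_1$, applying \texttt{K1\_pinv} once to the
accumulated sum is equivalent to applying it term by term in the formulas
above.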
The bounds on the forward operators from Proposition \ref{forwardopbounds} combined with Theorem 2.2 of \cite{HoSc} yield the following result.
\begin{theorem}
Let $\mu$ be given by (\ref{mudef}), and let $\nu,K$, both depending on $ \nu_0 = \| u_0 \|_{C(\overline{\Omega}\times\partial\Omega )}$, be as in Lemma \ref{nubound}. If $\Vert \mathcal{K}_1 \phi\Vert < r $, where the radius of convergence $r$ is given by
$$
r=\frac{1}{2K\mu} \left[\sqrt{16 C^2+1}-4 C \right]
$$
with $C = \max\{2,\|\mathcal{K}_1\|\nu K\mu\}$, then the inverse Born series converges.
\end{theorem}
\section{Numerical Experiments}
In this section we present several numerical experiments using the inverse Born series (\ref{inversedefinition}) to reconstruct $\alpha$ and $\beta$ from synthetic data. In all cases, we used the FEniCS PDE solver library in Python to create the synthetic data $\phi$. To implement the inverse series we also need the forward operators; for these we use the recursive formula, implemented as in Algorithm \ref{forwardalg}.
\begin{algorithm}
\Fn{compute-K$(n, \alpha, \beta)$}{
\If{$n = 0$}{
\Return $u_0$\;
}
$v_\alpha$ := compute-K$(n-1, \alpha(1:n-1), \beta(1:n-1))$\;
$v_\beta$ := 0\;
\For{$i_1$ = 0 to $n-1$}{
\For{$i_2$ = 0 to $n-i_1-1$}{
$i_3 := n - i_2 - i_1 - 1$\;
$K_{i_1} := $compute-K$(i_1, \alpha(1:i_1), \beta(1:i_1))$\;
$K_{i_2} := $compute-K$(i_2, \alpha(i_1+1:i_1+i_2), \beta(i_1+1:i_1+i_2))$\;
$K_{i_3} := $compute-K$(i_3, \alpha(i_1+i_2+1:n-1), \beta(i_1+i_2+1:n-1))$\;
$v_\beta := v_\beta + K_{i_1} \cdot K_{i_2} \cdot K_{i_3}$\;
}
}
\Return $A(v_\alpha, \alpha(n)) + B(v_\beta, \beta(n))$\;
}
\vskip .3in
\caption{Generation of the terms in the forward series.}
\label{forwardalg} \end{algorithm}
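For reference, a direct Python transcription of Algorithm \ref{forwardalg}
reads as follows (with \texttt{A}, \texttt{B} and \texttt{u0} standing for
the operators and the background field; note that the plain recursion
recomputes lower-order terms, so a practical code would memoize them):
\begin{verbatim}
def compute_K(n, alphas, betas, A, B, u0):
    # evaluates K_n at arguments alphas[0..n-1],
    # betas[0..n-1]
    if n == 0:
        return u0
    v_alpha = compute_K(n - 1, alphas[:n-1], betas[:n-1],
                        A, B, u0)
    v_beta = 0
    for i1 in range(n):
        for i2 in range(n - i1):
            i3 = n - 1 - i1 - i2
            K1 = compute_K(i1, alphas[:i1], betas[:i1],
                           A, B, u0)
            K2 = compute_K(i2, alphas[i1:i1+i2],
                           betas[i1:i1+i2], A, B, u0)
            K3 = compute_K(i3, alphas[i1+i2:n-1],
                           betas[i1+i2:n-1], A, B, u0)
            v_beta = v_beta + K1 * K2 * K3
    return A(v_alpha, alphas[n-1]) + B(v_beta, betas[n-1])
\end{verbatim}
The pointwise products correspond to the tensor products of Definition
\ref{def2} evaluated on fields.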
We implement the application of the operators $A$ and $B$ by solving a background PDE (equivalent to integrating against the background Green's function kernel), again using the FEniCS PDE solver library in Python, taking care to choose a different FEM mesh from the one used for the generation of the synthetic data. The inverse Born series implementation is the same as in previous work, see for example \cite{}, here calling on Algorithm \ref{forwardalg} to evaluate the forward operators.
We begin with an experiment in one dimension, on the interval $\Omega = [0, 1]$.
Here, we have only two points on the boundary, meaning we can only take samples at these two points. However, thanks to the nonlinearity, a scaled source has the potential to yield more information. For example, if we scale the source and take the right linear combination of the two solutions, we eliminate $\alpha$ to leading order. While we do not implement this explicitly, we do capitalize on scaling in the one-dimensional example, where we found it improved the reconstruction.
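To leading (Born) order this can be seen explicitly: since $u_0$ scales linearly with the source, replacing $g$ by $cg$ gives
$$ \phi(cg) \approx c\, K_1^{\alpha}(\zeta) + c^3 K_1^{\beta}(\zeta) , $$
where $K_1^{\alpha}$ and $K_1^{\beta}$ denote the $\alpha$- and $\beta$-parts of $K_1$, so that
$$ c\,\phi(g) - \phi(cg) \approx (c - c^3)\, K_1^{\beta}(\zeta) $$
is independent of $\alpha$ at this order.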
In the following example, we used $12$ scaled sources on each side of the interval and three different frequencies $k=0.9,1,1.1$, for a total of $72$ sources.
We chose the reference functions $\alpha$ and $\beta$ to be
\[
\alpha, \beta = \begin{cases}
\frac{\gamma}{\sqrt{2\pi \epsilon}}e^{\frac{-x^2}{2\epsilon }} & \text{if } 0.4 \leq x \leq 0.6\\
0&\text{otherwise}
\end{cases}
\]
for $\epsilon=0.01$ and $\gamma=0.2$. We see their simultaneous reconstruction in Figure \ref{fig:1d-sources}.
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/1d_new_k.png}
\caption{One-dimensional simultaneous reconstruction of $\alpha$ and $\beta$ using varying source strengths and frequencies.}
\label{fig:1d-sources}
\end{figure}
Next we run several experiments in two dimensions, all with domain $\Omega$ the unit disk.
First, we compare the reconstruction of $\alpha$ with $\beta = 0$ (the traditional linear problem) to the reconstruction of $\beta$ with $\alpha=0$. In both cases we let the unknown be piecewise constant. For the first example, we chose the moderate-contrast medium (\ref{refmedium1}), with reconstruction results shown in Figure \ref{fig:low}. Next, we increase the contrast in (\ref{refmedium1}) by a factor of $4$, with results shown in Figures \ref{fig:medium} and \ref{fig:medium-cross}. Finally, we choose a very high contrast, the medium (\ref{refmedium1}) multiplied by a factor of $16$. In Figures \ref{fig:high} and \ref{fig:high-cross}, we see that the method does not produce good results for $\alpha$, but fares better for $\beta$.
\begin{equation} \label{refmedium1}
\beta,\alpha = \begin{cases}
1 & \text{if } (x-0.3)^2 + y^2 \leq 0.2\\
0 & \text{otherwise}
\end{cases}
\end{equation}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/eta_low.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/beta_only_low.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Independent reconstruction for low contrast}
\label{fig:low}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/eta_low_cross.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/beta_only_low_cross.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Cross-section of independent reconstruction for low contrast}
\label{fig:low-cross}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/eta_medium.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/beta_only_medium.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Independent reconstruction for medium contrast}
\label{fig:medium}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/eta_medium_cross.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/beta_only_medium_cross.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Cross-section of independent reconstruction for medium contrast}
\label{fig:medium-cross}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/eta_high.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=2in]{newimages/beta_high.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Independent reconstruction for high contrast}
\label{fig:high}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/eta_high_cross.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/beta_high_cross.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Cross-section of independent reconstruction for high contrast}
\label{fig:high-cross}
\end{figure}
In this next set of examples, we consider the simultaneous reconstruction of $\beta$ and $\alpha$. In the first example, we take
\begin{equation}\label{medium2}
\beta = \alpha = \frac{2}{\sqrt{2\pi\epsilon}}\exp{\left(\frac{-|x-x_0|^2}{2\epsilon}\right)}
\end{equation}
for $x_0=(-.3,.3)$ and $\epsilon=0.04$. The results are shown in Figures \ref{fig:both-medium} and \ref{fig:both-medium-cross}. In the second example, we raise the contrast in (\ref{medium2}) by a factor of $4$, and we see that we still obtain reasonable reconstructions in Figures \ref{fig:both-high} and \ref{fig:both-high-cross}.
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_medium.png}
\caption{Simultaneous $\alpha$ and $\beta$ reconstruction for high contrast}
\label{fig:both-medium}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_medium_cross.png}
\caption{Cross-section $\alpha$ and $\beta$ reconstruction for high contrast}
\label{fig:both-medium-cross}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_high.png}
\caption{Simultaneous $\alpha$ and $\beta$ reconstruction for very high contrast}
\label{fig:both-high}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_high_cross.png}
\caption{Cross-section $\alpha$ and $\beta$ reconstruction for very high contrast}
\label{fig:both-high-cross}
\end{figure}
\section{Discussion}
We have considered the Born and inverse Born series for scalar waves with a cubic nonlinearity of Kerr type. We found a recursive formula for the forward operators in the Born series. This result gives conditions which guarantee convergence of the Born series, and also leads to conditions which guarantee convergence of the inverse Born series. Our results are illustrated with numerical experiments.
The ideas developed here provide a framework for studying inverse problems for a wide class of nonlinear PDEs with polynomial nonlinearities. The formulas and algorithm for generating the forward operators, the use of the generating functions, and the resulting reconstruction algorithm are readily generalizable to this setting and will be explored in future work.
\section{Acknowledgments}
The authors are indebted to Jonah Blasiak and R. Andrew Hicks for discussions essential to the proof of Lemma \ref{nubound}. S. Moskow was partially supported by the NSF grant DMS-2008441. J. Schotland was supported in part by the NSF grant DMS-1912821 and the AFOSR grant FA9550-19-1-0320.
\section{Introduction}
There has been considerable recent interest in inverse scattering problems for nonlinear partial differential equations (PDEs)~\cite{assylbekov_1,assylbekov_2,carstea,imanuvilov,isakov_1,isakov_2,isakov_3,isakov_4,kang,kurylev,lassas}. There
are numerous applications in various applied fields ranging from optical imaging to seismology. In general terms, the problem to be considered is to reconstruct the coefficients of a nonlinear PDE from boundary measurements. As in the case of inverse problems for linear PDEs, the fundamental questions relate to the
uniqueness, stability and reconstruction of the unknown coefficients. In contrast to the linear case (which is very well studied), the study of inverse problems for nonlinear PDE is still relatively unexplored. There are uniqueness and stability results for a variety of semilinear and quasilinear equations. Reconstruction methods for such problems are just beginning to be developed\cite{carstea,griesmaier,kang}.
In this paper, we consider the inverse problem of recovering the coefficients of a nonlinear elliptic PDE with cubic nonlinearity. This problem appears in optical physics, where the cubic term arises in the study of the Kerr effect---a nonlinear process where self-focusing of light is observed~\cite{boyd}. We show that it is possible to reconstruct the coefficients of the linear and nonlinear terms in a Kerr medium from boundary measurements. This result holds under a smallness condition on the measurements, which also guarantees the stability of the recovery. The reconstruction is based on inversion of the Born series, which expresses the solution to the inverse problem as an explicitly computable functional of the measured data. We note that this method has been extensively studied for linear PDEs~\cite{review}. The extension to the nonlinear setting involves a substantial reworking of the theory, especially the combinatorial structure of the Born series itself. We validate our results with numerical simulations that demonstrate the convergence of the inverse Born series under the expected smallness conditions.
The remainder of this paper is organized as follows. In section 2, we introduce the forward problem and state sufficient conditions for its solvability. The Born series is studied in section 3, the combinatorial structure of the series is characterized, and sufficient conditions for convergence are established. We also derive various estimates that are later used in section 4 to obtain our main result on the convergence of the inverse Born series. In section 5, we present the results of numerical reconstructions of a two-dimensional medium. Our conclusions are presented in section 6. The Appendix contains the proof of Proposition~1.
\section{Forward problem}
We consider the Kerr effect in a bounded domain $\Omega$ in $\mathbb{R}^d$ for $d>2$ with a smooth boundary $\partial\Omega$.
The scalar field $u$ obeys the nonlinear PDE
\begin{align}
\label{baseequation}
\Delta u + k^2(1 + \alpha(x))u + k^2\beta(x) |u|^2 u &= 0 \quad \text{ in } \quad \Omega \ , \\
\frac{\partial u}{\partial \nu } &= g \quad \text{ on } \quad \partial\Omega \ ,
\end{align}
where $k$ is the wavenumber and $\nu$ is the unit outward normal to $\partial\Omega$. The coefficients $\alpha$ and $\beta$ are the linear and nonlinear susceptibilities, respectively~\cite{boyd} and are taken to be real valued, as is the boundary source $g$. It follows that $u$ is real valued so that $|u|^2u= u^3$. More generally, $u$ is complex valued, in which case our results carry over with small modifications.
We now consider the solution $u_0$ to the linear problem
\begin{align}
\label{backgroundequation}
\Delta u_0 + k^2u_0 &= 0 \quad \text{ in } \quad \Omega \ , \\
\frac{\partial u_0}{\partial \nu } &= g \quad \text{ on } \quad \partial\Omega \ .
\end{align}
Following standard procedures, we find that the field $u$ obeys the integral equation
\begin{equation}\label{integralequation}
u(x) = u_0(x) - k^2\int_\Omega G(x, y) \left(\alpha(y)u(y) + \beta(y)u^3(y)\right)dy \ .
\end{equation}
Here the Green's function $G$ obeys
\begin{align}
\Delta_x G(x, y) + k^2 G(x, y) &= \delta(x-y) \quad \text{ in } \quad \Omega \ , \\
\frac{\partial G }{\partial \nu_y} &= 0 \quad \text{ on } \quad \partial\Omega \ .
\end{align}
We define the nonlinear operator $T: C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ by
\begin{equation}\label{Tdef}
T(u) = u_0 - k^2\int_\Omega G(x, y) \left(\alpha(y)u(y) + \beta(y) u^3(y) \right) dy.
\end{equation}
We note that if $u\in C(\overline{\Omega})$ is a fixed point of $T$, that is $u = T(u)$, then $u$ satisfies equation (\ref{integralequation}).
The following result provides conditions for existence of a unique solution to (\ref{integralequation}).
\begin{proposition}
\label{bounds}
Let $T: C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ be defined by (\ref{Tdef}) and define $\mu$ by
\begin{equation}\label{mudef1}
\mu = k^2\sup_{x\in \Omega} \int_\Omega | G(x,y) | dy.
\end{equation}
If there exists $\gamma > 1/2 $ such that $$\Vert\alpha\Vert_\infty < \frac{2\gamma-1}{2\mu (1+\gamma)}$$ and $$\Vert\beta\Vert_\infty < \frac{1}{2\mu \Vert u_0\Vert^2(1+\gamma)^3},$$ then $T$ has a unique fixed point on the ball of radius $\gamma\| u_0\|_\infty$ about $u_0$ in $C(\overline{\Omega})$.
\end{proposition}
The proof is presented in Appendix \ref{fixedpointappendix}.
\section{Born series}
The forward problem is to compute the field $u$ as measured on $\partial\Omega$ when $g$ corresponds to a point source on $\partial\Omega$.
The solution to the forward problem is derived by iteration of the integral equation (\ref{integralequation}). We thus obtain
\begin{equation} \label{born_series}
\phi = K_1 (\zeta) + K_2 (\zeta, \zeta) + K_3( \zeta , \zeta, \zeta )+\cdots \ ,
\end{equation}
where $\phi=u-u_0$ and $\zeta := (\alpha, \beta)$. The forward operators $$K_n: [L^\infty(\Omega)]^{2n}\rightarrow C(\partial\Omega\times\partial\Omega)$$ are constructed below. We will refer to (\ref{born_series}) as the the Born series.
We note that Proposition~\ref{bounds} guarantees convergence of the Born series.
The forward operator $K_n$ is an $n$-linear operator (multilinear of order $n$) on $[ L^\infty(\Omega)^2]^n$.
In the following, we do not denote the dependence of $u_0$ on the source explicitly. The first term in the fixed point iteration is
\begin{equation}
u_1(x) : = T(u_0)(x) = u_0(x) + k^2\int_\Omega G(x, y) \left[\alpha(y) u_0(y)+ \beta(y) u_0^3(y)\right]dy \end{equation}
and thus $K_1$ is defined by
\begin{equation} K_1( \zeta) (x) = k^2\int_\Omega G(x, y) \left[\alpha(y) u_0(y)+ \beta(y) u_0^3(y)\right]dy .
\end{equation}
Next we observe that
\begin{equation}
u_2(x) : = T(u_1)(x) = u_0(x) + k^2\int_\Omega G(x, y) \left[\alpha(y) u_1(y) + \beta (y) u_1^3(y)\right]dy.
\end{equation}
Evidently, expansion of $u_1^3$ leads to terms which are multilinear in $\alpha$ and $\beta$. Subsequent iterates become progressively more complicated. To handle this problem, we introduce the operators: $a,b: C(\overline{\Omega}) \times [ L^\infty(\Omega)]^2\rightarrow C(\overline{\Omega})$, which are defined by
\begin{equation} \label{neq} a(v, \zeta) = k^2\int_{\Omega} G(x, y) \alpha (y) v(y) dy , \end{equation}
and
\begin{equation} \label{beq} b(v, \zeta ) = k^2 \int_{\Omega} G(x, y) \beta (y) v(y) dy . \end{equation}
The above operators have tensor counterparts which are defined as follows.
\begin{definition} \label{def1} Given $T_l = T_l (\zeta_1,\cdots,\zeta_l ),$ a multi-linear operator of order $l$, define the $l+1$ order multilinear operators $BT_l$ and $AT_l$ by
$$ BT_l(\zeta_1, \ldots , \zeta_l, \zeta_{l+1} )= b( T_l (\zeta_1, \ldots , \zeta_l ) , \zeta_{l+1} )$$
and
$$ AT_l(\zeta_1, \ldots , \zeta_l, \zeta_{l+1} )= a( T_l (\zeta_1, \ldots , \zeta_l ) , \zeta_{l+1} )$$
where $b$ and $n$ are given by (\ref{beq}) and (\ref{neq}) respectively.
\end{definition}
We will also need a tensor product of multilinear operators.
\begin{definition} \label{def2}
Given $T_j$ and $T_l$, multilinear operators of order $j$ and $l$ respectively, define the tensor product $T_l\otimes T_j$ by
$$ T_l\otimes T_j (\zeta_1, \ldots , \zeta_l, \zeta_{l+1},\dots, \zeta_{l+j} )= T_l (\zeta_1, \ldots , \zeta_l ) T_j (\zeta_{l+1},\dots, \zeta_{l+j} ).$$
and note that $T_l\otimes T_j$ is a multilinear operator of order $l+j$.
\end{definition}
Note that the tensor product of multilinear operators does not commute. Tensor products are extended to sums of multilinear operators by bilinearity of the tensor product, and the tensor product is also associative. In this notation, we see that if $v$ is a sum of multilinear operators, then
$$ Tv = u_0 + A v+ B v\otimes v \otimes v $$
yields another sum of multilinear operators ($u_0$ is an order zero operator).
\begin{claim} Viewing the $n$th iterate $u_n$ as a sum of multilinear operators, for any $n$ we have that
\begin{equation} u_{n} = u_{n-1} + \mbox{multilinear operators of order}\ \geq n . \end{equation}
\label{termslem}\end{claim}
\begin{proof}
We will prove this by induction. For the base case $n=1$, we have that $u_1 = u_0 + Au_0+Bu_0\otimes u_0 \otimes u_0$, so the statement holds.
Now assume that the statement holds for $u_{n-1}$. Then
\begin{eqnarray} u_n
&=& u_0 + Au_{n-1} + Bu_{n-1}\otimes u_{n-1} \otimes u_{n-1} . \end{eqnarray}
By inductive hypothesis,
$$u_{n-1} = u_{n-2}+ w $$
where $w$ is a sum of operators of order at least $n-1$. Hence we have that
\begin{multline} u_{n-1}\otimes u_{n-1} \otimes u_{n-1} = u_{n-2}\otimes u_{n-2}\otimes u_{n-2} + u_{n-2}\otimes u_{n-2} \otimes w \\+ w\otimes u_{n-2}\otimes u_{n-2} + u_{n-2}\otimes w \otimes u_{n-2} \\ + u_{n-2} \otimes w\otimes w + w\otimes u_{n-2} \otimes w + w\otimes w\otimes u_{n-2}+ w\otimes w\otimes w , \end{multline}
so that $$u_{n-1} \otimes u_{n-1} \otimes u_{n-1}= u_{n-2}\otimes u_{n-2} \otimes u_{n-2} + \mbox{multilinear operators of order} \geq n-1.$$
Applying $A$ to $u_{n-1}$ and $B$ to $u_{n-1}\otimes u_{n-1} \otimes u_{n-1}$, we increase the order of each by one. Hence we have that
\begin{eqnarray} u_n &=& u_0 + Au_{n-2} + Bu_{n-2}\otimes u_{n-2} \otimes u_{n-2} + \mbox{terms of order} \geq n \\
&=& u_{n-1} + \mbox{terms of order} \geq n . \end{eqnarray}
The result follows by induction. \end{proof}
Given the previous result, we can now define the forward operators.
\begin{definition} The $n$th term of the forward series, $K_n (\zeta,\ldots, \zeta)$, is defined to be the sum of all multilinear operators of order exactly $n$ in the $n$th iterate $u_n$.
\end{definition}
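For example, extracting the terms of order exactly two from $u_2 = u_0 + Au_1 + Bu_1\otimes u_1\otimes u_1$, where $u_1 = u_0 + K_1$ and $K_1 = Au_0 + Bu_0\otimes u_0\otimes u_0$, gives
\begin{equation*}
K_2(\zeta,\zeta) = AK_1 + B\left( K_1\otimes u_0\otimes u_0 + u_0\otimes K_1\otimes u_0 + u_0\otimes u_0\otimes K_1\right),
\end{equation*}
since $A$ and $B$ each raise the order by one; this agrees with the general formula (\ref{Kformula}) derived in the next subsection.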
\subsection{General formula for the forward operators}
Using our tensor notation, the forward series is given by iterations of
$$Tv= u_0 + Av + B v\otimes v\otimes v.$$
Given $u_0$, we have
\begin{eqnarray} u_1 &=& Tu_0 = u_0 + Au_0 + B u_0 \otimes u_0 \otimes u_0 , \nonumber \\ u_2 &=& Tu_1 = u_0 + Au_1 + B u_1 \otimes u_1 \otimes u_1, \nonumber \\ u_{n+1} &=& Tu_n = u_0 + Au_n + B u_n \otimes u_n \otimes u_n. \nonumber \end{eqnarray}
Define $U_n$ to be the sum of the forward operators of order at most $n$, that is,
\begin{eqnarray} U_n &=& \sum_{i=0}^n K_i(\zeta_1,\ldots,\zeta_i) \nonumber \\
&=& u_0 + \sum_{i=1}^n K_i(\zeta_1,\ldots,\zeta_i). \nonumber \end{eqnarray}
We know from Lemma \ref{termslem} that
$$ u_n = U_n + w,$$
where $w$ is a sum of multilinear operators, all of order $> n$. To find $U_{n+1}$, we use the iteration
$$ u_{n+1} = u_0 + A(U_n+w)+ B (U_n + w) \otimes (U_n + w) \otimes (U_n+ w) .$$
We know (also from Lemma \ref{termslem} ) that $K_{n+1}$ will be the sum of all terms here which are of order $n+1$. Since $w$ contains only terms of order $\geq n+1$, after applying $A$ or $B$, the result will be of higher order and hence will not be included in $K_{n+1}$. So any term containing $w$ after expanding out the tensor product can be dropped, and we have that all terms of $K_{n+1}$ will be contained in the sum $$AU_n + B U_n\otimes U_n \otimes U_n.$$ Since $A$ and $B$ each add one to the order, $K_{n+1}$ will consist of $AK_n$ and all terms of the form $$B K_{i_1}\otimes K_{i_2} \otimes K_{i_3} $$
where the ordered triples $(i_1, i_2, i_3)$ are such that $i_1+i_2+i_3 = n$. Hence we have derived the following:
\begin{eqnarray} K_0 &=& u_0 ,\nonumber \\ K_1 &=& Au_0 + B u_0\otimes u_0\otimes u_0 \nonumber, \\
K_{n+1} &=& AK_n + B\sum_{ \substack{ (i_1, i_2, i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} K_{i_1}\otimes K_{i_2}\otimes K_{i_3}. \label{Kformula} \end{eqnarray}
We note that the number of such ordered triples in the above sum is $$C(n):= n(n+1)/2 + (n+1). $$
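This count is easily confirmed; the following short Python check (a standalone sketch, independent of any PDE solves) enumerates the triples directly.
\begin{verbatim}
# Verify that the number of ordered triples (i1, i2, i3) of nonnegative
# integers with i1 + i2 + i3 = n equals C(n) = n(n+1)/2 + (n+1).
def C(n):
    return n * (n + 1) // 2 + (n + 1)

for n in range(20):
    # i3 = n - i1 - i2 is determined once i1 and i2 are chosen
    count = sum(1 for i1 in range(n + 1) for i2 in range(n + 1 - i1))
    assert count == C(n)
\end{verbatim}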
\subsection{Bounds on the forward operators.}
In order to analyze the inverse Born series, we will need bounds for the forward operators $K_n$.
We will see that to apply existing convergence results about the inverse Born series we need boundedness of the operators as multilinear forms. We use the notation $| \cdot |_\infty $ for the norm of a multilinear operator of order $n$, defined as follows:
\begin{definition} For any multilinear operator $K$ of order $n$ on $[L^\infty(\Omega)]^{2n}$, we define
$$ | K |_\infty = \sup_{\substack{ \zeta_1,\ldots,\zeta_n \neq 0 }} { \| K(\zeta_1,\dots,\zeta_n) \| \over{ \| \zeta_1\|\cdots\| \zeta_n\|}} . $$
\end{definition} Note that, for two multilinear operators $T_1$ and $T_2$ of the same order, we have the triangle inequality
$$| T_1+T_2|_\infty \leq | T_1 |_\infty + |T_2|_\infty.$$
\begin{lemma} The forward operator $K_n$ given by (\ref{Kformula}) is a bounded multilinear operator from $[L^\infty(\Omega)]^{2n}$
to $C(\partial\Omega\times\partial\Omega)$ and
\begin{equation}\label{Knbound} | K_n |_\infty \leq \nu_n \mu^{n} \end{equation}
where \begin{equation}\label{mudef} \mu = k^2\sup_{x\in \Omega} \int_\Omega | G(x,y) |\, dy, \end{equation}
$$ \nu_0 = \| u_0 \|_{C(\overline{\Omega}\times\partial\Omega )}, $$
and for all $n\geq 1$,
\begin{equation} \label{nudef} \nu_{n+1} = \nu_n + \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} \nu_{i_1}\nu_{i_2}\nu_{i_3} . \end{equation}
\end{lemma}
\begin{proof} We first note that for our product operators in Definitions \ref{def1} and \ref{def2}, we have that $$ | BT_l |_\infty \leq \mu | T_l |_\infty , $$ $$ | AT_l |_\infty \leq \mu | T_l |_\infty , $$ and
$$ | T_l \otimes T _j |_\infty \leq | T_l |_\infty | T_j |_\infty. $$
The proof then proceeds by induction. The base case holds trivially, with $| K_0 |_\infty = \nu_0 $. Assume that for each $i\leq n$, we have
$$ | K_i |_\infty \leq \nu_i \mu^i.$$ Using (\ref{Kformula}), we obtain
\begin{eqnarray} | K_{n+1} |_\infty &\leq& | AK_n |_\infty + | B \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} K_{i_1}\otimes K_{i_2}\otimes K_{i_3} |_\infty \nonumber \\ &\leq& \mu | K_n |_\infty + \mu \sum_{ \substack{ (i_1,i_2,i_3)\\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} | K_{i_1} |_\infty | K_{i_2}|_\infty | K_{i_3} |_\infty \nonumber \end{eqnarray}
which gives, by the inductive hypothesis
\begin{eqnarray} | K_{n+1} |_\infty &\leq& \nu_n \mu^{n+1} + \mu^{n+1} \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} \nu_{i_1}\nu_{i_2}\nu_{i_3} \nonumber \\
&=& \nu_{n+1}\mu^{n+1}. \nonumber \end{eqnarray}
\end{proof}
\begin{lemma} \label{nubound} For the sequence $\{ \nu_n\} $ given by (\ref{nudef}), there exist constants $K$ and $\nu$ (both depending on $\nu_0$ but independent of $n$) such that for any $n\geq 0$, $$ {\nu_n} \leq \nu K^n .$$
\end{lemma}
\begin{proof} To prove this, we consider the generating function
$$ P(x) = \sum_{n=0}^\infty \nu_n x^n .$$
We first note that it suffices to prove that this power series has a positive radius of convergence, since if this is the case, then for some positive $x$ the terms $\nu_n x^n
\rightarrow 0 $. In particular, they are bounded by some $\nu$, which would imply that $$\nu_n \leq \nu (1/x)^n.$$
We now show that $P(x)$ is analytic in some nontrivial interval around zero. Consider, formally for now, the cube of $P$,
\begin{eqnarray} (P(x))^3 &=& \sum_{ \substack{ i_1,i_2,i_3 = 0,\ldots , \infty } } x^{i_1} x^{i_2} x^{i_3} \nu_{i_1}\nu_{i_2}\nu_{i_3} \nonumber \\
&=& \sum_{n=0}^\infty f_n x^n \nonumber \end{eqnarray}
where $$ f_n = \sum_{ \substack{ (i_1,i_2,i_3) \\ i_1+i_2+i_3 =n \\ 0\leq i_1,i_2,i_3 \leq n}} \nu_{i_1}\nu_{i_2}\nu_{i_3} ,$$
which is exactly as appears in (\ref{nudef}).
Now, we multiply (\ref{nudef}) by $x^n$ and sum to obtain
$$ \sum_{n=0}^\infty \nu_{n+1} x^n = \sum_{n=0}^\infty \nu_n x^n + \sum_{n=0}^\infty f_n x^n.$$
One checks that the left hand side is simply $(P(x) - \nu_0)/x$, and so the above yields
$$ (P(x) - \nu_0)/x = P(x) + (P(x))^3 . $$
So we have
\begin{equation}\label{polynomial} x(P(x))^3 + (x-1) P(x) + \nu_0 =0. \end{equation}
This polynomial equation in $P$ is singular at $x=0$ (the coefficient of the cubic term vanishes there), so it is not immediately clear that it has an analytic solution at $x=0$. However, if we differentiate with respect to $x$, we obtain
\begin{equation} \label{ode} P^\prime (x) = - {(P(x))^3+P(x) \over{3x(P(x))^2 + x-1 }} \end{equation}
with $P(0) = \nu_0$. Since the right hand side is an analytic function of $x$ and $P$ in a neighborhood of $(0,\nu_0)$, the ODE (\ref{ode}) (with initial condition) has a unique analytic solution in a neighborhood of $x=0$ (see, for example, Theorem 4.1 of Teschl \cite{Te}). Integration of (\ref{ode}) combined with the initial condition implies that this analytic solution satisfies (\ref{polynomial}), and hence its coefficients must satisfy (\ref{nudef}).
\end{proof}
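The geometric growth of $\nu_n$ can also be observed directly from the recursion (\ref{nudef}). The following Python sketch (with the illustrative assumption $\nu_0=1$) computes the sequence and prints the ratios $\nu_{n+1}/\nu_n$, which level off numerically, consistent with the bound $\nu_n\leq \nu K^n$.
\begin{verbatim}
# nu_{n+1} = nu_n + sum over i1+i2+i3 = n of nu_i1 * nu_i2 * nu_i3,
# computed here with the illustrative assumption nu_0 = 1
nu = [1.0]
for n in range(25):
    f_n = sum(nu[i1] * nu[i2] * nu[n - i1 - i2]
              for i1 in range(n + 1)
              for i2 in range(n + 1 - i1))
    nu.append(nu[n] + f_n)
print([nu[k + 1] / nu[k] for k in range(15, 25)])  # ratios level off
\end{verbatim}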
\begin{proposition} \label{forwardopbounds} The forward operator $K_n$ given by (\ref{Kformula}) is a bounded multilinear operator from $[L^\infty(\Omega)]^{2n}$ to $C(\overline{\Omega}\times\partial\Omega)$, and its bound satisfies
\begin{equation} | K_n |_\infty \leq \nu( K \mu)^n ,\end{equation}
where $\mu$ is given by (\ref{mudef}) and $\nu,K$, both depending on
$ \nu_0 = \| u_0 \|_{C(\overline{\Omega}\times\partial\Omega )}$, are as in Lemma \ref{nubound}.
\end{proposition}
\begin{corollary} The forward Born series
$$ u = u_0 + \sum_{n=1}^\infty K_n (\zeta,\ldots,\zeta) $$
where $K_n$ are given by (\ref{Kformula}) converges in $C(\overline{\Omega})$ for
$$ \| \zeta\|_\infty < {1\over{K\mu}}, $$
where $\mu$ is given by (\ref{mudef}), and $\nu,K$, both depending on
$ \nu_0 = \| u_0 \|_{C(\overline{\Omega})}$, are as in Lemma \ref{nubound}.
\end{corollary}
\section{Inverse Born Series}
The inverse problem is to reconstruct the coefficients $\alpha$ and $\beta$ from measurements of the scattering data $\phi$. We proceed by recalling that the inverse Born series is defined as
\begin{equation}\label{inversedefinition}
\tilde{\zeta} = \mathcal{K}_1 \phi + \mathcal{K}_2 (\phi) + \mathcal{K}_3( \phi) + \cdots \ ,
\end{equation}
where the data $\phi \in C(\partial\Omega\times\partial\Omega).$ The inverse series was analyzed in \cite{moskow_1} and later studied in \cite{HoSc}. The inverse operator $\mathcal{K}_m$ can be computed from the formulas
\begin{align}
\label{inv_operators}
\mathcal{K}_1 (\phi) &= K_1^{+} (\phi),\\
\mathcal{K}_2(\phi) &=-\mathcal{K}_1\left(K_2 (\mathcal{K}_1(\phi),\mathcal{K}_1(\phi))\right),\\
\mathcal{K}_m(\phi) &= -\sum_{n=2}^{m}\sum_{\substack{i_1+\cdots+i_n = m \\ i_j \geq 1}} \mathcal{K}_1 {K}_n \left( \mathcal{K}_{i_1}(\phi), \dots, \mathcal{K}_{i_n}(\phi) \right) .
\end{align}
Here $K_1^+$ denotes a regularized pseudoinverse of $K_1$, and $\tilde{\zeta}$ is the series sum, an approximation to $\zeta$, when it exists.
Recall that this inverse series requires forward solves for the background problem only (i.e., applying the forward series operators), and requires a pseudoinverse and regularization of the first linear operator only; $ \mathcal{K}_1 = K_1^+$.
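For concreteness, the recursion (\ref{inv_operators}) can be implemented directly once routines for $K_1^+$ and the forward operators are available. The following Python sketch (with hypothetical callables \texttt{K1\_pinv} and \texttt{K}; a naive version without memoization) mirrors the formulas above.
\begin{verbatim}
def compositions(m, n):
    # ordered n-tuples of positive integers summing to m
    if n == 1:
        yield (m,)
        return
    for first in range(1, m - n + 2):
        for rest in compositions(m - first, n - 1):
            yield (first,) + rest

def inverse_born_term(m, phi, K1_pinv, K):
    # K1_pinv(psi): regularized pseudoinverse of K_1 applied to psi
    # K(n, args):   forward operator K_n applied to the list args
    if m == 1:
        return K1_pinv(phi)
    acc = 0
    for n in range(2, m + 1):
        for idx in compositions(m, n):
            args = [inverse_born_term(i, phi, K1_pinv, K) for i in idx]
            acc = acc - K1_pinv(K(n, args))
    return acc
\end{verbatim}
In practice the lower-order terms $\mathcal{K}_i(\phi)$ should be cached rather than recomputed at every recursive call.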
The bounds on the forward operators from Proposition \ref{forwardopbounds} combined with Theorem 2.2 of \cite{HoSc} yield the following result.
\begin{theorem}
Let $\mu$ be given by (\ref{mudef}), let $\nu$ and $K$, both depending on $\nu_0 = \| u_0 \|_{C(\overline{\Omega}\times\partial\Omega )}$, be as in Lemma \ref{nubound}, and set $C = \max\{2,\|\mathcal{K}_1\|\nu K\mu\}$. If
$$
\Vert \mathcal{K}_1 \phi\Vert < r, \quad \mbox{where} \quad r=\frac{1}{2K\mu} \left[\sqrt{16 C^2+1}-4 C \right],
$$
then the inverse Born series converges.
\end{theorem}
\section{Numerical Experiments}
In this section we present several numerical experiments using the inverse Born series (\ref{inversedefinition}) to reconstruct $\alpha$ and $\beta$ from synthetic data. In all cases, we used the FEniCS PDE solver library in Python to create the synthetic data $\phi$. To implement the inverse series we also need the forward operators; for these we use the recursive formula, implemented as in Algorithm \ref{forwardalg}.
\begin{algorithm}
\Fn{compute-K$(n, \alpha, \beta)$}{
\If{$n = 0$}{
\Return $u_0$\;
}
$v_\alpha$ := compute-K$(n-1, \alpha(1:n-1), \beta(1:n-1))$\;
$v_\beta$ := 0\;
\For{$i_1$ = 0 to $n-1$}{
\For{$i_2$ = 0 to $n-i_1-1$}{
$i_3 := n - i_2 - i_1 - 1$\;
$K_{i_1} := $compute-K$(i_1, \alpha(1:i_1), \beta(1:i_1))$\;
$K_{i_2} := $compute-K$(i_2, \alpha(i_1+1:i_1+i_2), \beta(i_1+1:i_1+i_2))$\;
$K_{i_3} := $compute-K$(i_3, \alpha(i_1+i_2+1:n-1), \beta(i_1+i_2+1:n-1))$\;
$v_\beta := v_\beta + K_{i_1} \cdot K_{i_2} \cdot K_{i_3}$\;
}
}
\Return $A(v_\alpha, \alpha(n)) + B(v_\beta, \beta(n))$\;
}
\vskip .3in
\caption{Generation of the terms in the forward series.}
\label{forwardalg} \end{algorithm}
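For readers who prefer executable pseudocode, the following Python transcription of Algorithm \ref{forwardalg} may be helpful (a sketch under the assumption that \texttt{A(v, alpha)} and \texttt{B(v, beta)} are callables applying the operators (\ref{neq}) and (\ref{beq}), e.g.\ via a background PDE solve, and that \texttt{u0} is the background field).
\begin{verbatim}
def compute_K(n, alphas, betas, u0, A, B):
    # Evaluate the multilinear forward operator K_n(zeta_1,...,zeta_n),
    # with zeta_i = (alphas[i], betas[i]), via the recursion (Kformula).
    if n == 0:
        return u0
    v_alpha = compute_K(n - 1, alphas[:n - 1], betas[:n - 1], u0, A, B)
    v_beta = 0
    for i1 in range(n):
        for i2 in range(n - i1):
            i3 = n - 1 - i1 - i2
            K_i1 = compute_K(i1, alphas[:i1], betas[:i1], u0, A, B)
            K_i2 = compute_K(i2, alphas[i1:i1 + i2],
                             betas[i1:i1 + i2], u0, A, B)
            K_i3 = compute_K(i3, alphas[i1 + i2:n - 1],
                             betas[i1 + i2:n - 1], u0, A, B)
            v_beta = v_beta + K_i1 * K_i2 * K_i3
    return A(v_alpha, alphas[n - 1]) + B(v_beta, betas[n - 1])
\end{verbatim}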
We implement the application of the operators $A$ and $B$ by solving a background PDE (equivalent to integrating against the background Green's function kernel), again using the FEniCS PDE solver library in Python, taking care to choose a different FEM mesh from those used to generate the synthetic data. The inverse Born series implementation is the same as in previous work, see for example \cite{}, here calling on Algorithm \ref{forwardalg} to evaluate the forward operators.
We begin with an experiment in one dimension, on the interval $\Omega = [0, 1]$.
Here, we have only two points on the boundary, meaning we can only take samples at these two points. However, thanks to the nonlinearity, a scaled source has the potential to yield more information. For example, if we scale the source and take the right linear combination of the two solutions, we eliminate $\alpha$, as the first-order calculation below illustrates. While we do not implement this elimination explicitly, we do capitalize on scaling for the one-dimensional example, where we found it improved the reconstruction.
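To make the scaling argument precise to leading order, note that scaling the source by $s$ scales the incident field to $su_0$, so that the linearized data for the scaled source is
\begin{equation*}
\phi_s \approx k^2\int_\Omega G(x,y)\left[ s\,\alpha(y) u_0(y) + s^3\beta(y) u_0^3(y)\right] dy =: s\, K_1^\alpha(\alpha) + s^3 K_1^\beta(\beta) ,
\end{equation*}
where $K_1^\alpha$ and $K_1^\beta$ denote the two terms of $K_1$. Hence, for two scalings $s_1\neq s_2$,
\begin{equation*}
s_2\,\phi_{s_1} - s_1\,\phi_{s_2} \approx s_1 s_2 \left(s_1^2 - s_2^2\right) K_1^\beta(\beta) ,
\end{equation*}
which contains no first-order contribution from $\alpha$.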
In the following example, we used $12$ scaled sources on each side of the interval and three different frequencies $k=0.9,1,1.1$, for a total of $72$ sources.
We chose the reference functions $\alpha$ and $\beta$ to be
\[
\alpha, \beta = \begin{cases}
\frac{\gamma}{\sqrt{2\pi \epsilon}}e^{\frac{-x^2}{2\epsilon }} & \text{if } 0.4 \leq x \leq 0.6\\
0&\text{otherwise}
\end{cases}
\]
for $\epsilon=0.01$ and $\gamma=0.2$. We see their simultaneous reconstruction in Figure \ref{fig:1d-sources}.
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/1d_new_k.png}
\caption{One dimensional simultaneous reconstruction of $\alpha$ and $\beta$ using varying source strength and frequency}
\label{fig:1d-sources}
\end{figure}
Next we run several experiments in two dimensions, all with domain $\Omega$ the unit disk.
First, we compare the reconstruction of $\alpha$ with $\beta = 0$ (the traditional linear problem) to the reconstruction of $\beta$ with $\alpha=0$. In both cases we let the unknown be piecewise constant. For the first example, we chose the moderate contrast medium (\ref{refmedium1}), and we see reconstruction results in Figures \ref{fig:low} and \ref{fig:low-cross}. Next, we increase the contrast by a factor of $4$ in (\ref{refmedium1}), and we see results in Figures \ref{fig:medium} and \ref{fig:medium-cross}. Finally, we choose a very high contrast, the medium (\ref{refmedium1}) multiplied by a factor of $16$. In Figures \ref{fig:high} and \ref{fig:high-cross}, we see that the method no longer produces good results for $\alpha$, while the reconstruction of $\beta$ degrades less severely.
\begin{equation} \label{refmedium1}
\beta,\alpha = \begin{cases}
1 & \text{if } (x-0.3)^2 + y^2 \leq 0.2\\
0 & \text{otherwise}
\end{cases}
\end{equation}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/eta_low.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/beta_only_low.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Independent reconstruction for low contrast}
\label{fig:low}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/eta_low_cross.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/beta_only_low_cross.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Cross-section of independent reconstruction for low contrast}
\label{fig:low-cross}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/eta_medium.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/beta_only_medium.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Independent reconstruction for medium contrast}
\label{fig:medium}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/eta_medium_cross.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/beta_only_medium_cross.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Cross-section of independent reconstruction for medium contrast}
\label{fig:medium-cross}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[height=2in]{newimages/eta_high.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=2in]{newimages/beta_high.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Independent reconstruction for high contrast}
\label{fig:high}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/eta_high_cross.png}
\caption{$\alpha$ reconstruction with $\beta = 0$}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=1.5in]{newimages/beta_high_cross.png}
\caption{$\beta$ reconstruction with $\alpha = 0$}
\label{fig:sub2}
\end{subfigure}
\caption{Cross-section of independent reconstruction for high contrast}
\label{fig:high-cross}
\end{figure}
In this next set of examples, we consider the simultaneous reconstruction of $\beta$ and $\alpha$. In the first example, we take
\begin{equation}\label{medium2}
\beta = \alpha = \frac{2}{\sqrt{2\pi\epsilon}}\exp{\left(\frac{-|x-x_0|^2}{2\epsilon}\right)}
\end{equation}
for $x_0=(-0.3,\,0.3)$ and $\epsilon=0.04$. We see results in Figures \ref{fig:both-medium} and \ref{fig:both-medium-cross}. In the second example, we raise the contrast in (\ref{medium2}) by a factor of $4$, and we see that we still get reasonable reconstructions in Figures \ref{fig:both-high} and \ref{fig:both-high-cross}.
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_medium.png}
\caption{Simultaneous $\alpha$ and $\beta$ reconstruction for high contrast}
\label{fig:both-medium}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_medium_cross.png}
\caption{Cross-section $\alpha$ and $\beta$ reconstruction for high contrast}
\label{fig:both-medium-cross}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_high.png}
\caption{Simultaneous $\alpha$ and $\beta$ reconstruction for very high contrast}
\label{fig:both-high}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=2in]{newimages/both_high_cross.png}
\caption{Cross-section $\alpha$ and $\beta$ reconstruction for very high contrast}
\label{fig:both-high-cross}
\end{figure}
\section{Discussion}
We have considered the Born and inverse Born series for scalar waves with a cubic nonlinearity of Kerr type. We found a recursive formula for the forward operators in the Born series. This result gives conditions which guarantee convergence of the Born series, and also leads to conditions which guarantee convergence of the inverse Born series. Our results are illustrated with numerical experiments.
The ideas developed here provide a framework for studying inverse problems for a wide class of nonlinear PDEs with polynomial nonlinearities. The formulas and algorithm for generating the forward operators, the use of the generating functions, and the resulting reconstruction algorithm are readily generalizable to this setting and will be explored in future work.
\section{Acknowledgments}
The authors are indebted to Jonah Blasiak and R. Andrew Hicks for discussions essential to the proof of Lemma \ref{nubound}. S. Moskow was partially supported by the NSF grant DMS-2008441. J. Schotland was supported in part by the NSF grant DMS-1912821 and the AFOSR grant FA9550-19-1-0320.
\section{Introduction}\label{sintro}
{\em Spectral graph theory} has traditionally been the study of the relation between properties of (undirected) graphs and the spectrum of the adjacency matrix, Laplacian matrix, or signless Laplacian of the graph \cite{BH}. The distance matrix of a graph was introduced in the study of a data communication problem \cite{GP71} and has attracted a lot of interest recently (see, e.g, \cite{AH14} for a survey on distance spectra of graphs). Recently the distance Laplacian and distance signless Laplacian of a graph have been studied (see, for example, \cite{AH13}). Spectral theory of digraphs is a developing area of research but so far focused primarily on the spectral radius of the adjacency matrix (see \cite{B10} for a survey on spectra of digraphs).
A graph $G = (V(G),E(G))$ consists of a finite set $V(G) = \{v_1, \dots, v_n\}$ of vertices and a set $E(G)$ of two-element subsets $\{v_i, v_j\}$ called {\em edges}. A digraph $\Gamma = (V(\Gamma),E(\Gamma))$ consists of a finite set $V(\Gamma) = \{v_1, \dots, v_n\}$ of vertices and a set $E(\Gamma)$ of ordered pairs of distinct vertices $(v_i, v_j)$ called {\em arcs}. Observe that neither a graph nor digraph can have a loop (an edge or arc with the vertices equal). For a digraph $\Gamma$ (respectively, graph $G$), a {\em dipath} (respectively, {\em path}) from $u$ to $v$ is a sequence of vertices and arcs (respectively, edges)
$u=w_1, e_1=(w_1,w_2),w_2, e_2=(w_2,w_3),\dots,w_k, e_k=(w_k,w_{k+1}),w_{k+1}=v$ (in a path, the arcs are replaced by unordered edges). A digraph (or graph) of order at least two is {\em strongly connected} (or {\em connected}) if for every pair of vertices $u, v$, there is a dipath (or path) from $u$ to $v$.
The {\em adjacency matrix} of $\Gamma$ (or $G$), denoted by $\mathcal A(\Gamma)$ (or $\mathcal A(G)$), is the $n \times n$ matrix with $(i,j)$ entry equal to $1$ if $(v_i, v_j)$ (or $\{v_i, v_j\}$) is an arc (or edge) of $\Gamma$ (or $G$), and $0$ otherwise. The {\em Laplacian matrix} of $\Gamma$ (or $G$), denoted by $L(\Gamma)$ (or $L(G)$), is defined as $D(\Gamma) - \mathcal A(\Gamma)$ (or $D(G) - \mathcal A(G)$), where $D(\Gamma)$ (or $D(G)$) is the diagonal matrix having the $i$-th diagonal entry equal to the {\em out-degree} (or {\em degree}) of the vertex $v_i$, i.e., the number of arcs (or edges) starting at $v_i$. The matrix $D(\Gamma) + \mathcal A(\Gamma)$ (or $D(G) + \mathcal A(G)$) is called the {\em signless Laplacian matrix} of $\Gamma$ (or $G$) and is denoted by $Q(\Gamma)$ (or $Q(G)$). For a strongly connected digraph $\Gamma$ (or a connected graph $G$), the {\em distance matrix}, denoted $\mathcal D(\Gamma)$ (or $\mathcal D(G)$), is the $n \times n$ matrix with $(i,j)$ entry equal to $d(v_i,v_j)$, the {\em distance} from $v_i$ to $v_j$, i.e., the length of a shortest dipath (or path) from $v_i$ to $v_j$; use of a distance matrix implies the digraph (or graph) is strongly connected (or connected).
The {\em transmission} of vertex $v_i$ is defined as $t(v_i) = \sum_{j=1}^n d(v_i,v_j)$. The transmission of a vertex in a digraph could have been called the out-transmission because it is the sum of the out-distances, i.e., the distances from $v_i$ to other vertices.
The {\em distance Laplacian} matrix and the {\em distance signless Laplacian} matrix, denoted by ${\D^L}$ and ${\D^Q}$, respectively, are defined by ${\D^L}(\Gamma) = T(\Gamma) - \mathcal D(\Gamma)$ and ${\D^Q}(\Gamma) = T(\Gamma) + \mathcal D(\Gamma)$, where $T(\Gamma)$ is the diagonal matrix with $t(v_i)$ as the $i$-th diagonal entry; ${\D^L}(G)$ and ${\D^Q}(G)$ are defined analogously. A digraph is {\em out-regular} or {\em $r$-out-regular} if every vertex has out-degree $r$. A strongly connected digraph is {\em transmission regular} or {\em $t$-transmission regular} if every vertex has transmission $t$. The terms {\em regular}, {\em $r$-regular}, {\em transmission regular}, and {\em $t$-transmission regular} are defined analogously for graphs.
For a real $n \times n$ matrix $M$, the {\em algebraic multiplicity} $\operatorname{mult}_M(z)$ of a number $z\in{\mathbb C}$ with respect to $M$ is the number of times $(x-z)$ appears as a factor in the characteristic polynomial $p(x)$ of $M$, and the {\em geometric multiplicity} $\operatorname{gmult}_M(z)$ is the dimension of the eigenspace $ES_M(z)$ of $M$ relative to $z$ ($\operatorname{mult}_M(z)=\operatorname{gmult}_M(z)=0$ if $z$ is not an eigenvalue of $M$). The {\em spectrum} of $M$, denoted by $\operatorname{spec}(M)$, is the multiset whose elements are the $n$ (complex) eigenvalues of $M$ (i.e., the number of times each eigenvalue appears in $\operatorname{spec}(M)$ is its algebraic multiplicity). The spectrum is often written as $\operatorname{spec}(M)=\{\lambda_1^{(m_1)},\dots,\lambda_q^{(m_q)}\}$ where $\lambda_1,\dots, \lambda_q$ are the distinct eigenvalues of $M$ and $m_1,\dots,m_q$ are the (algebraic) multiplicities.
There are several spectra associated with a digraph $\Gamma$, namely, $\operatorname{spec}_\mathcal A(\Gamma)=\operatorname{spec}(\mathcal A(\Gamma))$ ({\em adjacency spectrum}), $\operatorname{spec}_L(\Gamma)=\operatorname{spec}(L(\Gamma))$ ({\em Laplacian spectrum}), $\operatorname{spec}_Q(\Gamma)=\operatorname{spec}(Q(\Gamma))$ ({\em signless Laplacian spectrum}), $\operatorname{spec}_{\D}(\Gamma)=\operatorname{spec}(\mathcal D(\Gamma))$ ({\em distance spectrum}), $\operatorname{spec}_{\DL}(\Gamma)=\operatorname{spec}({\D^L}(\Gamma))$ ({\em distance Laplacian spectrum}), and $\operatorname{spec}_{\DQ}(\Gamma)=\operatorname{spec}({\D^Q}(\Gamma))$ ({\em distance signless Laplacian spectrum}). For a graph $G$, the relevant spectra are $\operatorname{spec}_\mathcal A(G)=\operatorname{spec}(\mathcal A(G))$, $\operatorname{spec}_L(G)=\operatorname{spec}(L(G))$, $\operatorname{spec}_Q(G)=\operatorname{spec}(Q(G))$, $\operatorname{spec}_{\D}(G)=\operatorname{spec}(\mathcal D(G))$, $\operatorname{spec}_{\DL}(G)=\operatorname{spec}({\D^L}(G))$, and $\operatorname{spec}_{\DQ}(G)=\operatorname{spec}({\D^Q}(G))$, with the same terminology.
This paper contributes to the study of the spectra of digraphs, particularly by presenting new results on eigenvalues and eigenvectors of the distance matrix of various products of digraphs. In Section \ref{scartprod}, we analyze constructions of matrices (sums of Kronecker products) that produce the adjacency and distance matrices. We use the Jordan canonical form to derive formulas for the spectra of these constructions in terms of the spectra of the original matrices, and apply these results to determine the adjacency spectrum of a Cartesian product of two digraphs in terms of the adjacency spectra of the digraphs, and to determine the distance spectrum of a Cartesian product of two transmission regular digraphs in terms of the distance spectra of the digraphs. These formulas show that Cartesian products provide a method for building infinite families of transmission regular digraphs with few distinct distance eigenvalues; this is discussed in Section \ref{sSRD}. In some cases we establish formulas for the Jordan canonical form, geometric multiplicities of eigenvalues, or eigenvectors of the constructed matrix. In Section \ref{slexprod}, we investigate the spectra of lexicographic products of digraphs by similar methods. Section \ref{sDirectStrongprod} gives a brief discussion on the spectra of the direct and strong products.
In the remainder of this introduction, we define various digraph products and the matrix constructions that describe the matrices associated with these digraphs, and state elementary results we will use.
\subsection{Digraph products and matrix constructions}\label{s:mtx-prod}
Let $\Gamma$ and $\Gamma'$ be digraphs of orders $n$ and $n'$, respectively. We consider the four standard associative digraph products, namely the {\em Cartesian product} $\Gamma \, \Box\, \Gamma'$, the {\em lexicographic product} $\Gamma \circ \Gamma'$, the {\em direct product} $\Gamma \times \Gamma'$ and the {\em strong product} $\Gamma \boxtimes \Gamma'$ \cite{H18}. Each has vertex set $V(\Gamma) \times V(\Gamma')$ and their arc sets are:
\[
\begin{array}{lcl}
E(\Gamma \, \Box\, \Gamma') & = & \{ ((x,x'),(y,y') )\ | \ x'=y' \mbox{ and } (x,y) \in E(\Gamma), \mbox{ \bf or } x=y \mbox{ and } (x',y') \in E(\Gamma') \},\\
E(\Gamma \circ \Gamma') & = & \{ ((x,x'),(y,y') )\ | \ (x,y) \in E(\Gamma), \mbox{ \bf or } x=y \mbox{ and } (x',y') \in E(\Gamma') \},\\
E(\Gamma\times\Gamma') & = & \{ ((x,x'),(y,y') )\ | \ (x,y) \in E(\Gamma) \mbox{ and } (x',y') \in E(\Gamma') \}, \mbox{ and }\\
E(\Gamma\boxtimes \Gamma') & = & E(\Gamma\, \Box\,\Gamma') \cup E(\Gamma\times\Gamma').
\end{array}
\]
Rather than establishing spectral results just for the matrices associated with these digraph products, we develop a general theory of the spectra of matrices constructed in a specified form as a sum of Kronecker products of matrices with the identity or with the all ones matrix.
The {\em Kronecker product} of an $n\times n$ matrix $A=[a_{ij}]$ and an $n'\times n'$ matrix $A'$, denoted by $A\otimes A'$,
is the $nn'\times nn'$ block matrix
\[A\otimes A'=
\mtx{
a_{11}A'&a_{12}A'&\cdots&a_{1n}A'\\
a_{21}A'&a_{22}A'&\cdots&a_{2n}A'\\
\vdots&\vdots&\ddots&\vdots\\
a_{n1}A'&a_{n2}A'&\cdots&a_{nn}A'}.\]
Let $M\in\C^{n\times n}$ and $M'\in{\mathbb C}^{n'\times n'}$. We use the following notation: The $n\times n$ identity matrix is denoted by $\I_n$. The $n\times n$ all ones matrix is denoted by $\mathbb{J}_n$. The all ones $n$-vector is denoted by $\bone$. The all zeros matrix is denoted by $O$. The all zeros vector is denoted by $\boldzero$. Define the matrix constructions
\[M\,{\scriptsize{\fbox{$\I$}}} \, M'=M\otimes \I_{n'}+\I_n\otimes M'\in\mathbb{C}^{(nn')\times (nn')},\]
\[M\,{\scriptsize{\fbox{$\J$}}} \, M'=M\otimes \mathbb{J}_{n'}+\mathbb{J}_n\otimes M'\in\mathbb{C}^{(nn')\times (nn')},\]
and
\[M\circ M'=M\otimes \mathbb{J}_{n'}+\I_n\otimes M'\in\mathbb{C}^{(nn')\times (nn')}.\]
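These constructions are straightforward to realize numerically; the following minimal NumPy sketch (the helper names are ours) can be used to experiment with the spectra discussed below.
\begin{verbatim}
import numpy as np

def box_I(M, Mp):   # M boxed-I M' = M (x) I + I (x) M'
    return (np.kron(M, np.eye(Mp.shape[0]))
            + np.kron(np.eye(M.shape[0]), Mp))

def box_J(M, Mp):   # M boxed-J M' = M (x) J + J (x) M'
    return (np.kron(M, np.ones(Mp.shape))
            + np.kron(np.ones(M.shape), Mp))

def circ(M, Mp):    # M o M' = M (x) J + I (x) M'
    return (np.kron(M, np.ones(Mp.shape))
            + np.kron(np.eye(M.shape[0]), Mp))
\end{verbatim}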
Then, as in the case with graphs,
\[\mathcal A(\Gamma\, \Box\,\Gamma')=\mathcal A(\Gamma) \,{\scriptsize{\fbox{$\I$}}} \, \mathcal A(\Gamma')
\mbox{ \ and \ }
\mathcal D(\Gamma\, \Box\,\Gamma')=\mathcal D(\Gamma)\,{\scriptsize{\fbox{$\J$}}} \, \mathcal D(\Gamma').\]
The matrix construction $M\circ M'$ arises naturally for the adjacency matrix of the lexicographic product, because $\mathcal A(\Gamma\circ\Gamma')=\mathcal A(\Gamma) \circ \mathcal A(\Gamma')$ (as is the case for graphs), and has some uses for the distance matrix $\mathcal D(\Gamma\circ\Gamma')$, as discussed in Section \ref{slexprod} (in particular, see Observation \ref{obs:lex-longcycle}).
For many cases, we determine the spectrum of the construction applied to $M$ and $M'$ by applying the same construction to the Jordan canonical forms of $M$ and $M'$, obtaining a triangular matrix that is similar to the original construction. In one case, $M\,{\scriptsize{\fbox{$\J$}}} \, M'$, we obtain a significantly stronger result, producing an explicit formula for the Jordan canonical form of $M\,{\scriptsize{\fbox{$\J$}}} \, M'$ in terms of the Jordan canonical forms of $M$ and $M'$. This allows the determination of the geometric multiplicities of the eigenvalues of the construction from the geometric multiplicities of the eigenvalues of $M$ and $M'$. We also show that such a determination is not possible for $M\,{\scriptsize{\fbox{$\I$}}} \, M'$ (see Example \ref{ex:geometric_mult_cartesian_prod_counterexample}).
In another case, $M\circ M'$, we determine the geometric multiplicities of the eigenvalues of the construction from the geometric multiplicities of the eigenvalues of $M$ and $M'$ and the geometry of the eigenspaces of $M$ and $M'$.
\subsection{Useful lemmas}
The following result is used throughout the paper (there are many ways it could be proved).
\begin{lemma}
\label{lem_sylvester_1645_10nov}
Consider the block matrices $E=\begin{bmatrix}
A & C\\
O & B
\end{bmatrix}
$
and
$F=\begin{bmatrix}
A & O\\
O & B
\end{bmatrix}
$
where $A\in\C^{n\times n}$, $B\in{\mathbb C}^{n'\times n'}$, and suppose that $\operatorname{spec}(A)\cap\operatorname{spec}(B)=\emptyset$. Then $E$ and $F$ are similar.
\end{lemma}
\begin{proof}
The Sylvester equation $AX-XB=C$ has a unique solution $X \in {\mathbb C}^{n \times n'}$, since $\operatorname{spec}(A)\cap\operatorname{spec}(B)=\emptyset$ \cite [Theorem 2.4.4.1]{HJ}. Then, $P^{-1}EP=F$ where $P=\begin{bmatrix}
\I & -X\\
O & \I
\end{bmatrix}
$; observe that
$P^{-1}=\begin{bmatrix}
\I & X\\
O & \I
\end{bmatrix}
$.
\end{proof}
The next lemma is well known, and follows from standard facts about Kronecker products (see, for example, \cite[Fact 11.4.16]{HLA-Ch11}).
\begin{lemma}\label{lem:kron-basis} Let ${\bf a}_1,\dots,{\bf a}_k\in{\mathbb R}^n$ be linearly independent and let ${\bf b}_1,\dots,{\bf b}_{k'}\in{\mathbb R}^{n'}$ be linearly independent. Then
${\bf a}_i\otimes{\bf b}_j$ for $i=1,\dots,k, j=1,\dots,k'$
are linearly independent in ${\mathbb R}^{nn'}$.
\end{lemma}
\section{Cartesian products}\label{scartprod}
In this section we derive formulas for the adjacency and distance spectra of a Cartesian product of two digraphs in terms of the adjacency and distance spectra of the digraphs under certain conditions. In the case of out-regular digraphs for adjacency matrices, or transmission regular digraphs for distance matrices, these results extend naturally to the (distance) Laplacian and (distance) signless Laplacian matrices.
These formulas show that Cartesian products provide a method for building infinite families of transmission regular digraphs with few distinct distance eigenvalues; this is discussed in Section \ref{sSRD}.
The formulas (and the idea of constructing digraphs with few distance eigenvalues) parallel known results for graphs; however, the proofs of the eigenvalue formulas are quite different.
In the case of graphs, each of the matrices involved is real and symmetric, so its eigenvalues are real and there is a basis of eigenvectors. Furthermore, the distance matrix of a transmission regular graph $G$ of order $n$ commutes with $\mathbb{J}_n$, allowing simultaneous diagonalization of $\mathcal D(G)$ and $\mathbb{J}_n$. Unfortunately, the eigenvalues of distance or adjacency matrices of digraphs may be non-real and there may not be a basis of eigenvectors. Examples include the directed cycle $\vec C_n$, whose adjacency matrix has eigenvalues $1,\omega,\dots,\omega^{n-1}$, where $\omega=e^{(2\pi i)/n}$. A transmission regular digraph of diameter two that lacks a basis of eigenvectors is exhibited in the next example.
\begin{figure}[h!]
\begin{center}
\scalebox{.8}{\begin{tikzpicture}
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
\definecolor{cfv0}{rgb}{1.0,1.0,1.0}
\definecolor{clv0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1}{rgb}{0.0,0.0,0.0}
\definecolor{cfv1}{rgb}{1.0,1.0,1.0}
\definecolor{clv1}{rgb}{0.0,0.0,0.0}
\definecolor{cv2}{rgb}{0.0,0.0,0.0}
\definecolor{cfv2}{rgb}{1.0,1.0,1.0}
\definecolor{clv2}{rgb}{0.0,0.0,0.0}
\definecolor{cv3}{rgb}{0.0,0.0,0.0}
\definecolor{cfv3}{rgb}{1.0,1.0,1.0}
\definecolor{clv3}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v1}{rgb}{0.0,0.0,0.0}
\definecolor{cv0v3}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v0}{rgb}{0.0,0.0,0.0}
\definecolor{cv1v2}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v0}{rgb}{0.0,0.0,0.0}
\definecolor{cv2v1}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v0}{rgb}{0.0,0.0,0.0}
\definecolor{cv3v2}{rgb}{0.0,0.0,0.0}
\Vertex[style={minimum
size=1.0cm,draw=cv0,fill=cfv0,text=clv0,shape=circle},LabelOut=false,L=\hbox{$1$},x=2.1397cm,y=0.0cm]{v0}
\Vertex[style={minimum
size=1.0cm,draw=cv1,fill=cfv1,text=clv1,shape=circle},LabelOut=false,L=\hbox{$2$},x=5.0cm,y=1.208cm]{v1}
\Vertex[style={minimum
size=1.0cm,draw=cv2,fill=cfv2,text=clv2,shape=circle},LabelOut=false,L=\hbox{$3$},x=2.7507cm,y=5.0cm]{v2}
\Vertex[style={minimum
size=1.0cm,draw=cv3,fill=cfv3,text=clv3,shape=circle},LabelOut=false,L=\hbox{$4$},x=0.0cm,y=3.7133cm]{v3}
\Edge[lw=0.1cm,style={color=cv0v1,},](v0)(v1)
\Edge[lw=0.1cm,style={color=cv0v1,},](v0)(v3)
\Edge[lw=0.1cm,style={color=cv0v1,},](v1)(v2)
\Edge[lw=0.1cm,style={post,color=cv2v0,},](v2)(v0)
\Edge[lw=0.1cm,style={post,color=cv3v2,},](v3)(v2)
\end{tikzpicture}}
\caption{\label{fig:TRegNoEvec} A transmission regular digraph with diameter two and no basis of eigenvectors. Here and elsewhere, a bold line indicates both arcs are present. }\vspace{-5pt}
\end{center}
\end{figure}
\begin{example} {\rm Let $\Gamma$ be the digraph shown in Figure \ref{fig:TRegNoEvec}. Then $\mathcal D(\Gamma)=\mtx{0 & 1 & 2 & 1 \\
1 & 0 & 1 & 2 \\
1 & 1 & 0 & 2 \\
1 & 2 & 1 & 0}$, $\operatorname{spec}_{\D}(\Gamma)=\{4, -1, -1, -2\}$, and every eigenvector for $-1$ is a multiple of $[4, -1, -1, -1]^T$.
}\end{example}
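This can be verified numerically; the following NumPy check (a small sanity computation) confirms that the eigenvalue $-1$ has algebraic multiplicity two but geometric multiplicity one, so that $\mathcal D(\Gamma)$ has no basis of eigenvectors.
\begin{verbatim}
import numpy as np

D = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [1, 1, 0, 2],
              [1, 2, 1, 0]])
print(np.round(np.linalg.eigvals(D), 6))          # 4, -1, -1, -2
# geometric multiplicity of -1 = dim null(D + I)
print(4 - np.linalg.matrix_rank(D + np.eye(4)))   # 1
\end{verbatim}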
If the matrices $M$ and $M'$ are real and symmetric, then formulas for the spectra of $M\,{\scriptsize{\fbox{$\I$}}} \, M'$ and $M\,{\scriptsize{\fbox{$\J$}}} \, M'$ in terms of those of $M$ and $M'$ are well known. The formula for $\operatorname{spec}(M\,{\scriptsize{\fbox{$\I$}}} \, M')$ is also known without any other assumptions.
\begin{remark}{\rm Let $M\in\C^{n\times n}$ with $\operatorname{spec}(M)=\{\lambda_1,\dots,\lambda_n\}$ and $M'\in{\mathbb C}^{n'\times n'}$ with $\operatorname{spec}(M')=\{\lambda'_1,\dots,\lambda'_{n'}\}$.
Then
$\operatorname{spec}(M\,{\scriptsize{\fbox{$\I$}}} \, M')=\{\lambda_i+\lambda_j' : i=1,\dots,n, \ j=1,\dots,n'\}$ \cite[Theorem 4.4.5]{HJ2}.
This implies the (known) formula for the adjacency spectra of Cartesian products of any digraphs:
Let $\Gamma$ and $\Gamma'$ be digraphs of orders $n$ and $n'$, respectively, with
$\operatorname{spec}_{\mathcal A}(\Gamma)=\{\alpha_1,\alpha_2,\dots,\alpha_n\}$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=\{\alpha_1',\alpha'_2,\dots,\alpha'_{n'}\}$ \cite[Theorem 3]{EH80}.
Then
$
\operatorname{spec}_{\mathcal A}(\Gamma{\, \Box\,}\Gamma')=\left\{\alpha_i+\alpha_j' : i=1,\dots,n, \ j=1,\dots,n' \right\}.
$
}\end{remark}
As the next example shows, the geometric multiplicity of the eigenvalues of $M\,{\scriptsize{\fbox{$\I$}}} \, M'$ is not entirely determined from the eigenvalues of $M$ and $M'$ and their geometric multiplicities.
\begin{example}
\label{ex:geometric_mult_cartesian_prod_counterexample}
{\rm Let
\[
M=
\begin{bmatrix}
0& 0 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{bmatrix},
\hspace{.4cm}
M'_1=\begin{bmatrix}
0 & 1 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 0 & 0
\end{bmatrix},
\hspace{.4cm}
M'_2=\begin{bmatrix}
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{bmatrix}.
\]
We observe that the eigenvalue $0$ has geometric multiplicity $2$ (and algebraic multiplicity 4) for both $M'_1$ and $M'_2$. Nevertheless, one can check that $\operatorname{rank}(M\,{\scriptsize{\fbox{$\I$}}} \, M'_1)=6$ while $\operatorname{rank}(M\,{\scriptsize{\fbox{$\I$}}} \, M'_2)=7$, so that the geometric multiplicity of $0$ for $M\,{\scriptsize{\fbox{$\I$}}} \, M'_1$ and for $M\,{\scriptsize{\fbox{$\I$}}} \, M'_2$ differs.
}\end{example}
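The ranks asserted in this example are easily checked numerically, e.g.:
\begin{verbatim}
import numpy as np

M = np.zeros((3, 3));  M[1, 2] = 1
M1 = np.zeros((4, 4)); M1[0, 1] = 1; M1[2, 3] = 1
M2 = np.zeros((4, 4)); M2[0, 1] = 1; M2[1, 2] = 1

def box_I(A, B):    # A (x) I + I (x) B
    return (np.kron(A, np.eye(B.shape[0]))
            + np.kron(np.eye(A.shape[0]), B))

print(np.linalg.matrix_rank(box_I(M, M1)))  # 6
print(np.linalg.matrix_rank(box_I(M, M2)))  # 7
\end{verbatim}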
The formula for $\operatorname{spec}(M\,{\scriptsize{\fbox{$\I$}}} \, M')$ can be proved by using the Jordan canonical form (as we do in other theorems).
The geometric multiplicity of the eigenvalues of $M\,{\scriptsize{\fbox{$\I$}}} \, M'$ is fully determined from the Jordan canonical forms of $M$ and $M'$; these, in turn, are fully determined from their Weyr characteristics (see for example \cite[\S $3.1$]{HJ1}). We conclude that, in addition to the geometric multiplicities, other elements of the Weyr characteristics of $M$ and $M'$ determine the geometric multiplicity of the eigenvalues of $M\,{\scriptsize{\fbox{$\I$}}} \, M'$.
Next we turn our attention to $M\,{\scriptsize{\fbox{$\J$}}} \, M'$.
\begin{proposition}\label{prop:MJ-JCF}
Suppose $M\in\R^{n\times n}$ is a nonnegative matrix that satisfies $M\bone_n=\rho\bone_n$. Then $\rho$ is the spectral radius of $M$ and there exists an invertible matrix $C\in\C^{n\times n}$ such that
\[C^{-1}\mathbb{J}_n C=\mtx{n & \boldzero^T \\ \boldzero & O} \mbox{ and } C^{-1}MC=\mtx{\rho & {\bf x}^T \\ \boldzero & R}\]
for some Jordan matrix $R$ and ${\bf x}\in{\mathbb R}^{n-1}$. If in addition $M$ is irreducible, then $\operatorname{J}_M=\mtx{\rho & \boldzero^T \\ \boldzero & R}$.
\end{proposition}
\begin{proof} Since $ M$ is a nonnegative matrix that satisfies $M\bone_n=\rho\bone_n$, its spectral radius is $\rho$. Choose a basis of (real) eigenvectors ${\bf c}_1=\bone, {\bf c}_2,\dots,{\bf c}_n$ for $\mathbb{J}_n$ and define $C_1=\mtx{{\bf c}_1 & {\bf c}_2 &\dots &{\bf c}_n}$. Then
$C_1^{-1}\mathbb{J}_n C_1=\mtx{n & \boldzero^T \\ \boldzero & O}$ and $ C_1^{-1}MC_1=\mtx{\rho & {\bf y}^T \\ \boldzero & B}$. Choose $C_2\in{\mathbb C}^{(n-1)\times(n-1)}$ such that $C_2^{-1}BC_2=\operatorname{J}_B$. Then
$C^{-1}MC=\mtx{\rho & {\bf x}^T \\ \boldzero & R}$ with $C=C_1([1]\oplus C_2)$ and $R=\operatorname{J}_B$.
If $M$ is irreducible, then $\rho$ is a simple eigenvalue, so $\rho\notin\operatorname{spec}(R)$ and Lemma \ref{lem_sylvester_1645_10nov} implies that $\operatorname{J}_M$ has the required form. \end{proof}
Observe that any Jordan matrix $R$ can be expressed as $R=D+N$ where $D$ is a diagonal matrix and $N$ is nilpotent. Then for any nonzero $c\in{\mathbb R}$, $\operatorname{J}_{cR}=cD+N$.
\begin{theorem}\label{thm:M-JCF-CartProd}
Suppose $M\in\R^{n\times n}$, $M'\in{\mathbb R}^{n'\times n'}$ are irreducible nonnegative matrices that satisfy $M\bone_n=\rho\bone_n$ and $M'\bone_{n'}=\rho'\bone_{n'}$. Let $\operatorname{J}_M=\begin{bmatrix}
\rho & \boldzero^T \\
\boldzero & D+N
\end{bmatrix}$ and $\operatorname{J}_{M'}=\begin{bmatrix}
\rho' & \boldzero^T \\
\boldzero & D'+N'
\end{bmatrix}$, where $D$ and $D'$ are diagonal and $N$ and $N'$ are nilpotent. Then
\[\operatorname{J}_{M\,{\scriptsize{\fbox{$\J$}}} \, M'}=
\begin{bmatrix}
n\rho'+n'\rho & \boldzero^T & \boldzero^T & \boldzero^T \\
\boldzero & nD'+N' & O & O \\
\boldzero & O & n'D+N & O \\
\boldzero & O & O & O
\end{bmatrix} .
\]
\end{theorem}
\begin{proof} Let $R=D+N$ and $R'=D'+N'$. Use Proposition \ref{prop:MJ-JCF} to choose $C$ and $C'$ such that $C^{-1}\mathbb{J}_n C=\mtx{n & \boldzero^T \\ \boldzero & O}=\operatorname{diag}(n,0,\dots,0)$, $ C^{-1}MC=\mtx{\rho & {\bf x}^T \\ \boldzero & R}$, $C'^{-1}\mathbb{J}_{n'} C'=\mtx{n' & \boldzero^T \\ \boldzero & O}=\operatorname{diag}(n',0,\dots,0)$, and $ C'^{-1}M'C'=\mtx{\rho' & {\bf x}'^T \\ \boldzero & R'}$.
Then \\
$(C^{-1}\otimes C'^{-1})(M\,{\scriptsize{\fbox{$\J$}}} \, M')(C\otimes C')=$\\
$\mtx{\rho & {\bf x}^T \\ \boldzero & R}\otimes \operatorname{diag}(n',0,\dots,0)+\operatorname{diag}(n,0,\dots,0)\otimes \mtx{\rho' & {\bf x}'^T \\ \boldzero & R'}=$ \\
\renewcommand{\arraystretch}{1.3}
\[{\scriptsize \left[ \begin{array}{cc|cc|cc|c|cc}
\rho n' & \boldzero^T & x_1n' & \boldzero^T& x_2n' & \boldzero^T & \cdots & x_{n-1}n' &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O& \cdots & \boldzero & O \\
\hline
0 & \boldzero^T& r_{11}n' & \boldzero^T & r_{12}n' & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
0 & \boldzero^T & 0 & \boldzero^T & r_{22}n' & \boldzero^T& \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
\vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \ddots &\vdots &\vdots\\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 & \boldzero^T& \cdots & r_{nn}n' &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\end{array}\right] +
\left[ \begin{array}{cc|cc|cc|c|cc}
n \rho' & n {\bf x}'^T & 0 & \boldzero^T& 0 & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & n R' & \boldzero & O & \boldzero & O& \cdots & \boldzero & O \\
\hline
0 & \boldzero^T& 0 & \boldzero^T& 0 & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 &\boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
\vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \ddots &\vdots &\vdots\\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 & \boldzero^T& \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O
\end{array}\right]=
}\]
\begin{equation}\label{eq:CP1} \left[ \begin{array}{cc|cc|cc|c|cc}
\rho n' +n\rho' & n{\bf x}'^T & x_1n' & \boldzero^T& x_2n' & \boldzero^T & \cdots & x_{n-1}n' &\boldzero^T \\
\boldzero & nR' & \boldzero & O & \boldzero & O& \cdots & \boldzero & O \\
\hline
0 & \boldzero^T& r_{11}n' & \boldzero^T & r_{12}n' & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
0 & \boldzero^T & 0 & \boldzero^T & r_{22}n' & \boldzero^T& \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
\vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \ddots &\vdots &\vdots\\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 & \boldzero^T& \cdots & r_{nn}n' &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\end{array}\right]\!\!.
\end{equation}
\renewcommand{\arraystretch}{1}
The matrix in \eqref{eq:CP1} is permutation similar to
\begin{equation}\label{eq:CP2} \left[ \begin{array}{cccccc}
\rho n' +n\rho' & n{\bf x}'^T & n'{\bf x}^T & \boldzero^T& \cdots & \boldzero^T \\
\boldzero & nR' & O & O& \cdots & O \\
\boldzero& O & n'R & O & \cdots & O \\
\boldzero & O & O & O & \cdots & O \\
\vdots & \vdots &\vdots &\vdots & \ddots &\vdots\\
\boldzero & O & O & O & \cdots & O
\end{array}\right]\!\!.
\end{equation}
Since $\rho n' +n\rho'$ is not an eigenvalue of $nR'$ or $n'R$, Lemma \ref{lem_sylvester_1645_10nov} implies that the Jordan canonical form of the matrix in \eqref{eq:CP2} is \[\begin{bmatrix}
n\rho'+n'\rho & \boldzero^T & \boldzero^T & \boldzero^T \\
\boldzero & \operatorname{J}_{nR'} & O & O \\
\boldzero & O & \operatorname{J}_{n'R} & O \\
\boldzero & O & O & O
\end{bmatrix}=\begin{bmatrix}
n\rho'+n'\rho & \boldzero^T & \boldzero^T & \boldzero^T \\
\boldzero & nD'+N' & O & O \\
\boldzero & O & n'D+N & O \\
\boldzero & O & O & O
\end{bmatrix}\!\!.\]
\end{proof}
\begin{corollary}\label{cor:spectrum_cart_prod_general_matrices} Suppose $M\in\R^{n\times n}$, $M'\in{\mathbb R}^{n'\times n'}$ are irreducible nonnegative matrices that satisfy $M\bone_n=\rho\bone_n$ and $M'\bone_{n'}=\rho'\bone_{n'}$. Let $\operatorname{spec}(M)=\{\rho,\lambda_2,\dots,\lambda_n\}$ and $\operatorname{spec}(M')=\{\rho',\lambda'_2,\dots,\lambda'_{n'}\}$. Then
\[
\operatorname{spec}(M\,{\scriptsize{\fbox{$\J$}}} \, M')=\{n\rho'+n'\rho,n'\lambda_2,\dots,n'\lambda_n,n\lambda'_2,\dots,n\lambda'_{n'},0^{(n-1)(n'-1)}\}.
\]
\end{corollary}
Considering $M$ and $M'$ in Corollary \ref{cor:spectrum_cart_prod_general_matrices} to be the distance matrices of two transmission regular digraphs, we immediately obtain the next result.
\begin{theorem}\label{thm:TRcartprod-dig_new_new} Let $\Gamma$ and $\Gamma'$ be transmission regular digraphs of orders $n$ and $n'$ with transmissions $t$ and $t'$, and let $\operatorname{spec}_{\mathcal D}(\Gamma)=\{t,\partial_2,\dots,\partial_n\}$, $\operatorname{spec}_{\mathcal D}(\Gamma')=\{t',\partial'_2,\dots,\partial'_{n'}\}$.
Then
\[\operatorname{spec}_{\mathcal D}(\Gamma{\, \Box\,}\Gamma')=\{nt'+n't, n'\partial_2,\dots,n'\partial_n,n\partial'_2,\dots,n\partial'_{n'},0^{(n-1)(n'-1)}\}.\]
\end{theorem}
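As an illustration, the theorem is easily checked numerically on the Cartesian product of the directed $3$-cycle and the directed $4$-cycle (transmission regular with $t=3$ and $t'=6$), using $\mathcal D(\Gamma\, \Box\,\Gamma')=\mathcal D(\Gamma)\,{\scriptsize{\fbox{$\J$}}} \,\mathcal D(\Gamma')$; the NumPy sketch below (helper names are ours) compares the computed spectrum with the formula.
\begin{verbatim}
import numpy as np

def dist_dicycle(n):
    # distance matrix of the directed n-cycle: d(i, j) = (j - i) mod n
    i, j = np.indices((n, n))
    return ((j - i) % n).astype(float)

D, Dp = dist_dicycle(3), dist_dicycle(4)
DD = np.kron(D, np.ones((4, 4))) + np.kron(np.ones((3, 3)), Dp)
lam = np.linalg.eigvals(D)    # contains t = 3
lamp = np.linalg.eigvals(Dp)  # contains t' = 6
expected = np.concatenate(([3 * 6 + 4 * 3],
                           4 * lam[~np.isclose(lam, 3)],
                           3 * lamp[~np.isclose(lamp, 6)],
                           np.zeros(2 * 3)))
got = np.linalg.eigvals(DD)
print(np.allclose(np.sort_complex(np.round(expected, 8)),
                  np.sort_complex(np.round(got, 8))))   # True
\end{verbatim}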
The formula for the distance spectrum of a Cartesian product of graphs (analogous to that in Theorem \ref{thm:TRcartprod-dig_new_new}) was originally proved by Indulal for distance regular graphs \cite[Theorem 2.1]{I2009}, and it was noted in \cite{AP15} that the proof applies to transmission regular graphs. The proof used the facts that the distance matrix of a transmission regular graph commutes with $\mathbb{J}$ and every real symmetric matrix has a basis of eigenvectors.
Having found the spectrum of $\mathcal D(\Gamma\, \Box\,\Gamma')$, we now focus on describing its eigenvectors.
\begin{theorem}
\label{prop:evectors_cart_prod_general_matrices}
Let $M\in\mathbb{R}^{n\times n}$ and $M'\in\mathbb{R}^{n'\times n'}$ be irreducible nonnegative matrices, and suppose that $M\bone_n=\rho\bone_n$, $M'\bone_{n'}=\rho'\bone_{n'}$ for some $\rho,\rho'\geq 0$. Let $\{{\bf v}_2, \dots, {\bf v}_k\}$ be a linearly independent set of eigenvectors of $M$ with $M {\bf v}_i = \lambda_i {\bf v}_i, \ \lambda_i \in \operatorname{spec}(M)$, and let $\{{\bf v}_2', \dots, {\bf v}_{k'}'\}$ be a linearly independent set of eigenvectors of $M'$ with $M' {\bf v}_{j}' = \lambda_j' {\bf v}_j', \ \lambda_j' \in \operatorname{spec}(M')$. Then
\begin{enumerate}[$(1)$]
\item \label{cartevec_1} $\bone_n\otimes\bone_{n'}$ is an eigenvector of ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'$ corresponding to the spectral radius, $n\rho'+n'\rho$.
\item \label{cartevec_2} For $i=2, \dots, k$, \
${\bf v}_i \otimes \bone_{n'} + \gamma_i\bone_n \otimes \bone_{n'}$, where $\gamma_i =\frac{{\bf v}_i^T\bone_n \rho'}{n'\lambda_i-n'\rho - n\rho'}$, is an eigenvector of ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'$ corresponding to the eigenvalue $n'\lambda_i$.
\item \label{cartevec_3} For $j=2, \dots, k'$, \
$\bone_n \otimes {\bf v}_j' +\gamma'_j\bone_n \otimes \bone_{n'}$, where $\gamma_j' =\frac{{\bf v}_j'^T\bone_{n'} \rho}{n\lambda'_j-n\rho' - n'\rho}$, is an eigenvector of $M\,{\scriptsize{\fbox{$\J$}}} \, M'$ corresponding to the eigenvalue $n\lambda'_j$.
\item \label{cartnullvec} Let $\{{\bf z}_1, \dots, {\bf z}_{n-1}\}$, respectively $\{{\bf z}'_1, \dots, {\bf z}'_{n'-1}\}$, be a linearly independent set of null vectors of $\mathbb{J}_n$, respectively $\mathbb{J}_{n'}$. Then, for $i=1, \dots, n-1, \ j=1, \dots, n'-1$, ${\bf z}_i \otimes {\bf z}_{j}'$ is a null vector of ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'$.
\end{enumerate}
Furthermore, the set of eigenvectors of ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'$ described in \eqref{cartevec_1}--\eqref{cartnullvec} is linearly independent. If $\{{\bf v}_1=\bone_n,{\bf v}_2, \dots,$ $ {\bf v}_n\}$ and $\{{\bf v}_1'=\bone_{n'},{\bf v}_2', \dots, {\bf v}_{n'}'\}$ are bases of eigenvectors for $M$ and $M'$, then the set of eigenvectors of ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'$ described in \eqref{cartevec_1}--\eqref{cartnullvec} is a basis of eigenvectors.
\end{theorem}
\begin{proof} $\null$
\begin{enumerate}[(1)]
\item ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'(\bone_n\otimes\bone_{n'}) = [{M} \otimes \mathbb{J}_{n'}+\mathbb{J}_n\otimes{M}'](\bone_n\otimes\bone_{n'}) = ({\rho}\bone_n \otimes n' \bone_{n'}) + (n \bone_n \otimes {\rho}' \bone_{n'}) = {\rho}n' (\bone_n\otimes\bone_{n'}) + n{\rho}'(\bone_n\otimes\bone_{n'}) = (n{\rho}' + n'{\rho}) (\bone_n\otimes\bone_{n'})$.
\item For simplicity, let ${\bf v} = {\bf v}_i, \lambda = \lambda_i$, and $\gamma = \gamma_i$. As $|\lambda| \leq {\rho}$, $\gamma$ is well-defined and satisfies $({\bf v}^T\bone_n){\rho}' + \gamma {\rho} n' + \gamma n {\rho}' - n'\lambda \gamma = 0$. Moreover, ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}' ({\bf v} \otimes \bone_{n'} + \gamma \bone_n \otimes \bone_{n'}) = [{M} \otimes \mathbb{J}_{n'} + \mathbb{J}_n\otimes{M}']({\bf v} \otimes \bone_{n'} + \gamma \bone_n \otimes \bone_{n'}) = ({M} \otimes \mathbb{J}_{n'})({\bf v} \otimes \bone_{n'}) + (\mathbb{J}_n\otimes{M}')({\bf v} \otimes \bone_{n'}) + ({M} \otimes \mathbb{J}_{n'})(\gamma \bone_n \otimes \bone_{n'}) + (\mathbb{J}_n\otimes{M}')(\gamma \bone_n \otimes \bone_{n'}) =
\lambda {\bf v} \otimes n'\bone_{n'} + ({\bf v}^T\bone_n) \bone_n \otimes {\rho}'\bone_{n'} + \gamma {\rho} \bone_n \otimes n' \bone_{n'} + \gamma n \bone_n \otimes {\rho}'\bone_{n'} =
n'\lambda ({\bf v} \otimes \bone_{n'} + \gamma \bone_n \otimes \bone_{n'}) + (({\bf v}^T\bone_n){\rho}' + \gamma {\rho} n' + \gamma n {\rho}' - n'\lambda \gamma)( \bone_n \otimes \bone_{n'})=n'\lambda ({\bf v} \otimes \bone_{n'} + \gamma \bone_n \otimes \bone_{n'})$.
\item The proof is analogous to that of \eqref{cartevec_2}.
\item ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'({\bf z}_i \otimes{\bf z}'_j) =
[{M} \otimes \mathbb{J}_{n'} + \mathbb{J}_n\otimes{M}']({\bf z}_i \otimes {\bf z}'_j) =
(M \otimes \mathbb{J}_{n'})({\bf z}_i \otimes {\bf z}'_j) + (\mathbb{J}_n \otimes M')({\bf z}_i \otimes {\bf z}'_j) =
M{\bf z}_i \otimes \mathbb{J}_{n'} {\bf z}'_j + \mathbb{J}_n {\bf z}_i \otimes M' {\bf z}'_j =
M{\bf z}_i \otimes \boldzero + \boldzero \otimes M' {\bf z}'_j = \boldzero$.\vspace{-5pt}
\end{enumerate}
Note that $({\bf z}_i \otimes {\bf z}_{j}')^T({\bf v}\otimes\bone_{n'}) = ({\bf z}_i^T \otimes {\bf z}_{j}'^T)({\bf v} \otimes\bone_{n'}) = {\bf z}_i^T {\bf v} \otimes {{\bf z}_j'}^T \bone_{n'} = 0$ for any vector ${\bf v}$, and similarly $({\bf z}_i \otimes {\bf z}_{j}')^T(\bone_n\otimes{\bf v}') = 0$ for any vector ${\bf v}'$. Thus, the null vectors ${\bf z}_i \otimes {\bf z}_{j}'$ are orthogonal to the eigenvectors in \eqref{cartevec_1}--\eqref{cartevec_3}. Moreover, the eigenvectors in \eqref{cartevec_1}--\eqref{cartevec_3} are linearly independent by Lemma \ref{lem:kron-basis}, hence the eigenvectors of ${M}{\,{\scriptsize{\fbox{$\J$}}} \,}{M}'$ in \eqref{cartevec_1}--\eqref{cartnullvec} are linearly independent. The statement regarding being a basis follows from the dimension.
\end{proof}
Next we apply Theorem \ref{prop:evectors_cart_prod_general_matrices} to provide a description of the eigenvectors of the Cartesian product of two transmission regular digraphs.
\begin{theorem}\label{thm:evectors_cart_prod_digraphs} Let $\Gamma$ and $\Gamma'$ be transmission regular digraphs of orders $n$ and $n'$ with transmissions $t$ and $t'$. Let $\{{\bf v}_1=\bone_n, \dots, {\bf v}_k\}$ be a linearly independent set of eigenvectors of $\mathcal D(\Gamma)$ with ${\bf v}_i $ an eigenvector corresponding to $\partial_i \in \operatorname{spec}_{\D}(\Gamma)$, and let $\{{\bf v}_1'=\bone_{n'}, \dots, {\bf v}_{k'}'\}$ be a linearly independent set of eigenvectors of $\mathcal D(\Gamma')$ with ${\bf v}_j' $ an eigenvector corresponding to $\partial_j' \in \operatorname{spec}_{\D}(\Gamma')$. Then
\begin{enumerate}
\item \label{Dcartevec_1} $\bone_n\otimes\bone_{n'}$ is an eigenvector of $\mathcal D(\Gamma\, \Box\,\Gamma')$ corresponding to the spectral radius, $nt'+n't$.
\item \label{Dcartevec_2} For $i=2, \dots, k$, \
${\bf v}_i \otimes \bone_{n'} + \gamma_i\bone_n \otimes \bone_{n'}$, where $\gamma_i =\frac{{\bf v}_i^T\bone_n t'}{n'\partial_i-n't - nt'}$, is an eigenvector of $\mathcal D(\Gamma\, \Box\,\Gamma')$ corresponding to the eigenvalue $n'\partial_i$.
\item \label{Dcartevec_3} For $j=2, \dots, k'$, \
$\bone_n \otimes {\bf v}_j' +\gamma'_j\bone_n \otimes \bone_{n'}$, where $\gamma_j' =\frac{{\bf v}_j'^T\bone_{n'} t}{n\partial'_j-nt' - n't}$, is an eigenvector of $\mathcal D(\Gamma\, \Box\,\Gamma')$ corresponding to the eigenvalue $n\partial'_j$.
\item \label{Dcartnullvec} Let $\{{\bf z}_1, \dots, {\bf z}_{n-1}\}$, respectively $\{{\bf z}'_1, \dots, {\bf z}'_{n'-1}\}$, be a linearly independent set of null vectors of $\mathbb{J}_n$, respectively $\mathbb{J}_{n'}$. Then, for $i=1, \dots, n-1, \ j=1, \dots, n'-1$, ${\bf z}_i \otimes {\bf z}_{j}'$ is a null vector of $\mathcal D(\Gamma\, \Box\,\Gamma')$.
\end{enumerate}
Furthermore, the set of eigenvectors of $\mathcal D(\Gamma\, \Box\,\Gamma')$ described in \eqref{Dcartevec_1}--\eqref{Dcartnullvec} is linearly independent. If $\{{\bf v}_1=\bone_n,{\bf v}_2, \dots, {\bf v}_n\}$ and $\{{\bf v}_1'=\bone_{n'},{\bf v}_2', \dots, {\bf v}_{n'}'\}$ are bases of eigenvectors for $\mathcal D(\Gamma)$ and $\mathcal D(\Gamma')$, then the set of eigenvectors of $\mathcal D(\Gamma\, \Box\,\Gamma')$ described in \eqref{Dcartevec_1}--\eqref{Dcartnullvec} is a basis of eigenvectors.
\end{theorem}
\begin{remark}{\rm
If $\Gamma$ and $\Gamma'$ are symmetric digraphs (which is equivalent to considering them as undirected graphs), then their distance matrices are symmetric. As a consequence, $\gamma_i$ and $\gamma'_j$ are always zero in Theorem \ref{thm:evectors_cart_prod_digraphs}, which yields the simpler expression for the eigenvectors of $\mathcal D(\Gamma\, \Box\,\Gamma')$ used in \cite{I2009}.
}\end{remark}
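\begin{example}{\rm
To illustrate Theorem \ref{thm:evectors_cart_prod_digraphs} on a small case of our choosing, let $\Gamma=\Gamma'$ be the directed $3$-cycle, so $\mathcal D(\Gamma)$ is the circulant matrix with first row $(0,1,2)$ and $t=t'=3$. Let $\omega$ be a primitive third root of unity. For $i=1,2$, the vector ${\bf v}_i=(1,\omega^{i},\omega^{2i})^T$ satisfies $\mathcal D(\Gamma){\bf v}_i=\partial_i{\bf v}_i$ with $\partial_i=\omega^{i}+2\omega^{2i}=-\omega^{i}-2$, and ${\bf v}_i^T\bone_3=0$, so $\gamma_i=0$ and ${\bf v}_i\otimes\bone_3$ is an eigenvector of $\mathcal D(\Gamma\, \Box\,\Gamma')$ for $3\partial_i$; similarly, $\bone_3\otimes{\bf v}_j$ is an eigenvector for $3\partial_j$. Taking ${\bf z}_1=(1,-1,0)^T$ and ${\bf z}_2=(0,1,-1)^T$ as null vectors of $\mathbb{J}_3$, the four vectors ${\bf z}_i\otimes{\bf z}_j$ are null vectors of $\mathcal D(\Gamma\, \Box\,\Gamma')$, and together with $\bone_3\otimes\bone_3$ these nine vectors form a basis of eigenvectors.
}\end{example}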
Next we consider the distance Laplacian and the distance signless Laplacian matrices of a Cartesian product of transmission regular digraphs.
\begin{proposition} Let $\Gamma$ and $\Gamma'$ be transmission regular digraphs of orders $n\geq 2$ and $n'\geq 2$ with transmissions $t$ and $t'$ respectively, and let $\operatorname{spec}_{\DL}(\Gamma)=\{0, \partial^L_2,\dots,\partial^L_n\}$ and $\operatorname{spec}_{\DL}(\Gamma')=\{0, {\partial^L}'_2,\dots,{\partial^L}'_{n'}\}$. Then $t(\Gamma\, \Box\, \Gamma')=nt'+n't$ and
\[\begin{aligned} \operatorname{spec}_{\DL}(\Gamma \, \Box\, \Gamma')&=\{ 0\}\cup\{ n t'+n' \partial^L_2,\dots,n t'+n' \partial^L_n\}\cup\{ n' t+n {\partial^L}'_2,\dots,n' t+n {\partial^L}'_{n'}\}\\
& \null\hspace{5mm} \cup\ \{(nt'+n't)^{((n-1)(n'-1))}\}.\end{aligned}\]
\end{proposition}
\begin{proof} Since ${\D^L}(\Gamma)=t\I_n-\mathcal D(\Gamma)$, $\operatorname{spec}_{\D}(\Gamma)=\{t,t-\partial^L_2,\dots,t-\partial^L_n\}$ and $\operatorname{spec}_{\D}(\Gamma')=\{t',t'-{\partial^L}'_2,\dots,$ $t'-{\partial^L}'_{n'}\}$. Then by Theorem \ref{thm:TRcartprod-dig_new_new},
\[\begin{aligned} \operatorname{spec}_{\D}(\Gamma \, \Box\, \Gamma')&=\{nt'+ n't\}\cup\{ n' (t-\partial^L_2),\dots,n'(t-\partial^L_n)\}\\
& \null\hspace{5mm}\cup\ \{ n (t'-{\partial^L}'_2),\dots,n(t'-{\partial^L}'_{n'})\} \cup \{0^{((n-1)(n'-1))}\}.\end{aligned}\]
For transmission regular digraphs, the distance spectral radius is the transmission, so $t(\Gamma \, \Box\, \Gamma')= nt'+n't$, and the formula for $\operatorname{spec}_{\DL}(\Gamma \, \Box\, \Gamma')$ follows from that for $\operatorname{spec}_{\D}(\Gamma \, \Box\, \Gamma')$.
\end{proof}
\begin{proposition} Let $\Gamma$ and $\Gamma'$ be transmission regular digraphs of orders $n\geq 2$ and $n'\geq 2$ with transmissions $t$ and $t'$ respectively, and let $\operatorname{spec}_{\DQ}(\Gamma)=\{2t,{\partial^Q_2},\dots,\partial^Q_{n}\}$ and $\operatorname{spec}_{\DQ}(\Gamma')=\{2t',{{\partial^Q}'_2},\dots,{\partial^Q}'_{n'}\}$. Then
\[\begin{aligned} \operatorname{spec}_{\DQ}(\Gamma \, \Box\, \Gamma')&=\{ 2nt'+2n't\}\cup\{ n t'+n' \partial^Q_2,\dots,n t'+n' \partial^Q_{n}\}\cup\{ n' t+n {\partial^Q}'_2,\dots,n' t+n {\partial^Q}'_{n'}\}\\
& \null\hspace{5mm} \cup\ \{(nt'+n't)^{((n-1)(n'-1))}\}.\end{aligned}\]
\end{proposition}
\begin{proof} Since ${\D^Q}(\Gamma)=t\I_n+\mathcal D(\Gamma)$, $\operatorname{spec}_{\D}(\Gamma)=\{t,\partial^Q_2-t,\dots,\partial^Q_{n}-t\}$ and $\operatorname{spec}_{\D}(\Gamma')=\{t',{\partial^Q}'_2-t',\dots,$ ${\partial^Q}'_{n'}-t'\}$. Then by Theorem \ref{thm:TRcartprod-dig_new_new}, \vspace{-5pt}
\[\begin{aligned} \operatorname{spec}_{\D}(\Gamma \, \Box\, \Gamma')&=\{ nt'+n't\}\cup\{ n' (\partial^Q_2-t),\dots,n'(\partial^Q_{n}-t)\}\\ \vspace{-3pt}
& \null\hspace{5mm} \cup\ \{ n ({\partial^Q}'_2-t'),\dots,n({\partial^Q}'_{n'}-t')\} \cup \{0^{((n-1)(n'-1))}\}.\end{aligned}\]
The formula for $\operatorname{spec}_{\DQ}(\Gamma \, \Box\, \Gamma')$ follows from that for $\operatorname{spec}_{\D}(\Gamma \, \Box\, \Gamma')$.
\end{proof}
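\begin{example}{\rm
As a sanity check of the two preceding propositions on an example of our choosing, let $\Gamma=\Gamma'$ be the directed $3$-cycle and let $\omega$ be a primitive third root of unity, so $n=n'=3$, $t=t'=3$, $\operatorname{spec}_{\D}(\Gamma)=\{3,-\omega-2,-\omega^2-2\}$, $\operatorname{spec}_{\DL}(\Gamma)=\{0,\omega+5,\omega^2+5\}$, and $\operatorname{spec}_{\DQ}(\Gamma)=\{6,1-\omega,1-\omega^2\}$. The propositions give
\[\operatorname{spec}_{\DL}(\Gamma\, \Box\,\Gamma')=\{0\}\cup\{(24+3\omega)^{(2)},(24+3\omega^2)^{(2)}\}\cup\{18^{(4)}\}\]
and
\[\operatorname{spec}_{\DQ}(\Gamma\, \Box\,\Gamma')=\{36\}\cup\{(12-3\omega)^{(2)},(12-3\omega^2)^{(2)}\}\cup\{18^{(4)}\},\]
where each doubled eigenvalue occurs once from each factor because the two factors coincide.
}\end{example}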
\section{Lexicographic products}\label{slexprod}
Motivated by the results in \cite{I2009}, we investigate the spectra of lexicographic products of digraphs.
Recall that for graphs $G$ and $G'$ of orders $n$ and $n'$ the {\em lexicographic product} $G \circ G'$ is the graph with vertex set $V(G \circ G')=V(G) \times V(G')$ and edge set
$E(G \circ G')=\{ \{(x,x'),(y,y')\} \ | \ \{x, y\} \in E(G), \mbox{ or } x=y \mbox{ and } \ \{x',y'\} \in E(G') \}.$
The next two results appeared in \cite{CDS} and \cite{I2009} respectively, where the authors used the notation $G[G']$ for $G \circ G'$.
\begin{theorem}
\label{thm:adjacency_lexprodgraphs}{\rm \cite[p. 72]{CDS}}
Let $G$ and $G'$ be graphs of orders $n$ and $n'$, respectively, such that $G'$ is $r'$-regular.
Let $\operatorname{spec}_{\mathcal{A}}(G)=(\alpha_1,\alpha_2,\dots,\alpha_n)$ and $\operatorname{spec}_{\mathcal{A}}(G')=(r',\alpha'_2,\dots,\alpha'_{n'})$.
Then, \vspace{-5pt}
\[
\operatorname{spec}_{\mathcal{A}}(G{\circ}G')=\left\{n'\alpha_i + r', \ i = 1, \dots, n \right\} \cup \left\{{\alpha'_j}^{(n)}, \ j = 2, \dots, n' \right\}.\vspace{-5pt}
\]
\end{theorem}
\begin{theorem}\label{thm:lexprodgraphs}{\rm \cite{I2009}}
Let $G$ and $G'$ be graphs of orders $n\geq 2$ and $n'$, respectively, such that $G$ is connected and $G'$ is $r'$-regular. Let $\operatorname{spec}_{\D}(G)=\{\partial_1, \dots, \partial_n\}$ and $\operatorname{spec}_\mathcal A(G')=\{r', \alpha'_2, \dots, \alpha'_{n'}\}$. Then, \vspace{-5pt}
\[\operatorname{spec}_{\D}(G \circ G')=\{ n'\partial_i + 2n' -2-r', \ i = 1, \dots, n\}\cup\{ {-(\alpha'_j +2)}^{(n)}, \ j = 2, \dots, n'\}.\vspace{-5pt}\]
\end{theorem}
To derive results on the spectra of lexicographic products of digraphs, we first investigate the spectra of the matrix product $M\circ M'$ as defined in Section \ref{s:mtx-prod}.
\begin{theorem}
\label{thm:spectrum_lex_prod_general_matrices} Let $M\in\mathbb{R}^{n\times n}$ and $M'\in\mathbb{R}^{n'\times n'}$ be irreducible nonnegative matrices such that $M'\bone_{n'}=\rho'\bone_{n'}$ for some $\rho'\in\mathbb{R}$. Let $\operatorname{spec}(M)=\{\rho(M)=\lambda_1,\lambda_2,\dots,\lambda_n\}$ and $\operatorname{spec}(M')=\{\rho',\lambda'_2,\dots,\lambda'_{n'}\}$. Then\vspace{-5pt}
\[
\operatorname{spec}(M\circ M')= \left\{n'\lambda_i + \rho', \ i = 1, \dots, n \right\} \cup \left\{{\lambda'_j}^{(n)}, \ j = 2, \dots, n' \right\}.
\vspace{-5pt}\]
\end{theorem}
\begin{proof} Choose $C$ such that $ C^{-1}MC=\mtx{\lambda_1 & \boldzero^T \\ \boldzero & R}=\operatorname{J}_{M}$ where the diagonal elements of $R$ are $\lambda_2,\dots,\lambda_{n}$. Use Proposition \ref{prop:MJ-JCF} to choose $C'$ such that $C'^{-1}\mathbb{J}_{n'} C'=\mtx{n' & \boldzero^T \\ \boldzero & O}=\operatorname{diag}(n',0,\dots,0)$ and $ C'^{-1}M'C'=\mtx{\rho' & {\bf x}'^T \\ \boldzero & R'}$ where ${\bf x}'\in{\mathbb R}^{n'-1}$ and $R'$ is the part of $\operatorname{J}_{M'}$ associated with eigenvalues $\lambda'_2,\dots,\lambda'_{n'}$, all of which differ from $\rho'$.
Then
$(C^{-1}\otimes C'^{-1})(M\circ M')(C\otimes C')=$
\renewcommand{\arraystretch}{1.3}
$\mtx{\lambda_1 & \boldzero^T \\ \boldzero & R}\otimes \operatorname{diag}(n',0,\dots,0)+\I_n\otimes \mtx{\rho' & {\bf x}'^T \\ \boldzero & R'}=$\\
\[{\scriptsize \left[ \begin{array}{cc|cc|cc|c|cc}
\lambda_1 n'& \boldzero^T & 0 & \boldzero^T& 0 & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O& \cdots & \boldzero & O \\
\hline
0 & \boldzero^T& \lambda_2 n' & \boldzero^T & r_{12}n' & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
0 & \boldzero^T & 0 & \boldzero^T & \lambda_3 n' & \boldzero^T& \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\hline
\vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \ddots &\vdots &\vdots\\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 & \boldzero^T& \cdots & \lambda_n n' &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & O \\
\end{array}\right] +
\left[ \begin{array}{cc|cc|cc|c|cc}
\rho' & {\bf x}'^T & 0 & \boldzero^T& 0 & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & R' & \boldzero & O & \boldzero & O& \cdots & \boldzero & O \\
\hline
0 & \boldzero^T& \rho' & {\bf x}'^T& 0 & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & R' & \boldzero & O & \cdots & \boldzero & O \\
\hline
0 & \boldzero^T & 0 & \boldzero^T & \rho' & {\bf x}'^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & R' & \cdots & \boldzero & O \\
\hline
\vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \ddots &\vdots &\vdots\\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 & \boldzero^T& \cdots & \rho' & {\bf x}'^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & R'
\end{array}\right]=
}\]
\[{\scriptsize \left[ \begin{array}{cc|cc|cc|c|cc}
\lambda_1 n' +\rho'& {\bf x}'^T & 0 & \boldzero^T& 0 & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & R' & \boldzero & O & \boldzero & O& \cdots & \boldzero & O \\
\hline
0 & \boldzero^T& \lambda_2n'+\rho' & {\bf x}'^T & r_{12}n' & \boldzero^T & \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & R' & \boldzero & O & \cdots & \boldzero & O \\
\hline
0 & \boldzero^T & 0 & \boldzero^T & \lambda_3n'+\rho' & {\bf x}'^T& \cdots & 0 &\boldzero^T \\
\boldzero & O & \boldzero & O & \boldzero & R' & \cdots & \boldzero & O \\
\hline
\vdots & \vdots &\vdots &\vdots &\vdots &\vdots & \ddots &\vdots &\vdots\\
\hline
0 & \boldzero^T & 0 & \boldzero^T & 0 & \boldzero^T& \cdots & \lambda_nn'+\rho' &{\bf x}'^T \\
\boldzero & O & \boldzero & O & \boldzero & O & \cdots & \boldzero & R' \\
\end{array}\right]}\!.
\]
\renewcommand{\arraystretch}{1}
Since $(C^{-1}\otimes C'^{-1})(M\circ M')(C\otimes C')$ is an upper triangular matrix, the multiset of its diagonal elements is $\operatorname{spec}(M\circ M')$.
The multiset of diagonal elements is $\left\{n'\lambda_i + \rho', \ i = 1, \dots, n \right\} \cup \left\{{\lambda'_j}^{(n)}, \ j = 2, \dots, n' \right\}$.
\end{proof}
Even if $M$ and $M'$ are diagonalizable, it need not be the case that $M\circ M'$ is diagonalizable, as the next example shows.
\begin{example} \label{example_lexp_non_diag}
{\rm Consider the matrices
\[
M=
\begin{bmatrix}
0 & \frac{1}{3}(28-\sqrt{7})\\
\frac{1}{3}(28-\sqrt{7}) & 0
\end{bmatrix},\quad
M'=
\begin{bmatrix}
12 & 6 & 12\\
7 & 13 & 10\\
6 & 15 & 9
\end{bmatrix}
\]
and observe that they are both irreducible nonnegative matrices, and $M'\bone_3=30\,\bone_3$. Since $\operatorname{spec}(M)=\left\{\frac{1}{3}(28-\sqrt{7}),-\frac{1}{3}(28-\sqrt{7})\right\}$ and $\operatorname{spec}(M')=\left\{30,2+\sqrt{7},2-\sqrt{7}\right\}$, we see that both $M$ and $M'$ are diagonalizable. However, one finds that
\[
\operatorname{J}_{M\circ M'}=
\begin{bmatrix}
58-\sqrt{7} & 0 & 0 & 0 & 0 & 0 \\
0 & 2+\sqrt{7} & 1 & 0 & 0 & 0 \\
0 & 0 & 2+\sqrt{7} & 0 & 0 & 0 \\
0 & 0 & 0 & 2+\sqrt{7} & 0 & 0 \\
0 & 0 & 0 & 0 & 2-\sqrt{7} & 0 \\
0 & 0 & 0 & 0 & 0 & 2-\sqrt{7}
\end{bmatrix}\!\!,
\]
which means that $M\circ M'$ is not diagonalizable.
}
\end{example}
Based on the apparent anomaly of Example \ref{example_lexp_non_diag}, we now investigate the geometric multiplicities of the eigenvalues of $M\circ M'$.
\begin{theorem}
\label{thm_geometric_mult_lex_product_matrices}
Let $M\in\mathbb{R}^{n\times n}$, $M'\in\mathbb{R}^{n'\times n'}$ be irreducible nonnegative matrices such that $M'\bone_{n'}=\rho'\bone_{n'}$ for some $\rho'\in\mathbb{R}$. Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-\rho'}{n'}$,
$g=\operatorname{gmult}_{M}(\tilde z)$
and $g'=\operatorname{gmult}_{M'}(z)$. Then
\begin{equation}
\label{eqn_geom_mult_description_1551_10_nov}
\operatorname{gmult}_{M\circ M'}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&g &\mbox{if }\;z\not\in\operatorname{spec}(M')\setminus\{\rho'\},\;\tilde z\in\operatorname{spec}(M);\\
&ng' &\mbox{if }\; z\in\operatorname{spec}(M')\setminus\{\rho'\},\;\tilde z\not\in\operatorname{spec}(M);\\
&ng'+g &\mbox{if }\;z\in\operatorname{spec}(M')\setminus\{\rho'\},\;\tilde z\in\operatorname{spec}(M),\;ES_{M'}(z)\perp \bone_{n'} ;\\
&{ng'} &\mbox{if }\;z\in\operatorname{spec}(M')\setminus\{\rho'\},\;\tilde z\in\operatorname{spec}(M),\;ES_{M'}(z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\end{equation}
\end{theorem}
\begin{proof} The eigenvalues of $M\circ M'$ take two forms:
$n'\lambda + \rho'$ for $\lambda\in\operatorname{spec}(M)$ and $n$ copies of $\lambda'$ for $ \lambda'\in\operatorname{spec}(M')$ and $\lambda'\ne\rho'$.
Observe that $z=n'\tilde z +\rho'$, so $z$ takes the first form if and only if $\tilde z\in\operatorname{spec}(M)$.
The last case in \eqref{eqn_geom_mult_description_1551_10_nov} is thus immediate. The first two cases in \eqref{eqn_geom_mult_description_1551_10_nov} concern the situation in which there is no overlap between the values of the two forms. Consider the structure of the matrix $(C^{-1}\otimes C'^{-1})(M\circ M')(C\otimes C')$ as given in the proof of Theorem \ref{thm:spectrum_lex_prod_general_matrices}. Then these two cases are a consequence of Lemma \ref{lem_sylvester_1645_10nov} after a suitable permutation of the rows and columns of $(C^{-1}\otimes C'^{-1})(M\circ M')(C\otimes C')$.
The remaining two cases happen when $z\in\operatorname{spec}(M')\setminus\{\rho'\}$ and $\tilde z\in\operatorname{spec}(M)$, so $z =\lambda'=n'\lambda+\rho'$ for $\lambda\in\operatorname{spec}(M)$, $ \lambda'\in\operatorname{spec}(M')$, and $\rho'\ne \lambda'$.
Let $V$ be a matrix of generalized eigenvectors for $M'$ corresponding to $\operatorname{J}_{M'}$, and define the vector ${\bf a}=[a_i]\in\mathbb{R}^{n'}$ by \vspace{-5pt}
\[
a_i=
\begin{array}{l}
\left\{\begin{aligned}
&0 &\mbox{if }\; V\textbf{e}_i\perp\bone_{n'};\\
&1 &\mbox{if }\; V\textbf{e}_i\not\perp\bone_{n'}.
\end{aligned}
\right.
\end{array}
\vspace{-5pt}\]
We rescale the columns of $V$ in such a way that $\bone_{n'}^TV=n'{\bf a}^T$. Notice that this implies $V\textbf{e}_1=\bone_{n'}$ (since $M'$ is irreducible, $\rho'$ is a simple eigenvalue, so the eigenvector $V\textbf{e}_1$ is a scalar multiple of $\bone_{n'}$, and the scaling forces the multiple to be $1$). Let $\mathbb{J}'=\mathbb{J}_{n'}$. We claim that $C'=V-\frac{1}{n'}\mathbb{J}' V+\bone_{n'}\textbf{e}_1^T$ satisfies the requirements for $C'$ in the proof of Theorem \ref{thm:spectrum_lex_prod_general_matrices}. Furthermore, we claim the first row of $C'^{-1}M' C'$ is $\rho'\textbf{e}_1^T+{\bf a}^T\operatorname{J}_{M'} -\rho'{\bf a}^T$. For convenience, we define $\hat{\bf x}=\operatorname{J}_{M'}^T{\bf a}-\rho'{\bf a}$. Observe that the first entry of $\hat {\bf x}$ is zero, since $\hat{\bf x}^T\textbf{e}_1={\bf a}^T\operatorname{J}_{M'}\textbf{e}_1
-\rho'{\bf a}^T\textbf{e}_1=\rho'{\bf a}^T\textbf{e}_1-\rho'{\bf a}^T\textbf{e}_1=0$. Therefore, $\hat{\bf x}^T=[0\ {\bf x}'^T]$ in the notation of the proof of Theorem \ref{thm:spectrum_lex_prod_general_matrices}.
First, we show that $C'$ is invertible: \vspace{-5pt}
\[
C'\textbf{e}_1=V\textbf{e}_1-\frac{1}{n'}\mathbb{J}' V\textbf{e}_1+\bone_{n'}\textbf{e}_1^T\textbf{e}_1=\bone_{n'}
-\bone_{n'}+\bone_{n'}=\bone_{n'}=V\textbf{e}_1
\vspace{-5pt}
\]
and, for $i=2,\dots,n'$, \vspace{-5pt}
\[
C'\textbf{e}_i=V\textbf{e}_i-\frac{1}{n'}\mathbb{J}' V\textbf{e}_i+\bone_{n'}\textbf{e}_1^T\textbf{e}_i=
V\textbf{e}_i-\frac{1}{n'}\bone_{n'}n'{\bf a}^T\textbf{e}_i=
V\textbf{e}_i-a_iV\textbf{e}_1,
\vspace{-5pt}
\]
so that $C'\textbf{e}_i$ is obtained from $V\textbf{e}_i$ by adding a scalar multiple of $V\textbf{e}_1$. Hence, $\det(C')=\det(V)\neq 0$.
Moreover,
\[
\begin{aligned}
C'(n'\textbf{e}_1\textbf{e}_1^T)&=n'V\textbf{e}_1\textbf{e}_1^T-\mathbb{J}' V\textbf{e}_1\textbf{e}_1^T+n'\bone_{n'}\textbf{e}_1^T\textbf{e}_1\textbf{e}_1^T=
n'\bone_{n'}\textbf{e}_1^T-n'\bone_{n'}\textbf{e}_1^T+n'\bone_{n'}\textbf{e}_1^T\\
&=n'\bone_{n'}\textbf{e}_1^T=\mathbb{J}' V-\mathbb{J}' V+n'\bone_{n'}\textbf{e}_{1}^T=\mathbb{J}' V-\frac{1}{n'}\mathbb{J}'^2V+\mathbb{J}'\bone_{n'}\textbf{e}_1^T=\mathbb{J}' C'
\end{aligned}
\]
so $C'^{-1}\mathbb{J}_{n'} C'=\mtx{n' & \boldzero^T \\ \boldzero & O}$. Finally,
\[
\begin{aligned}
M'C'&=M'V-\frac{1}{n'}M'\mathbb{J}'V+M'\bone_{n'}\textbf{e}_1^T=V\operatorname{J}_{M'}-\frac{\rho'}{n'}\bone_{n'}\bone_{n'}^TV+\rho'\bone_{n'}\textbf{e}_1^T\\
&=V\operatorname{J}_{M'}-\rho'\bone_{n'}{\bf a}^T+\rho'\bone_{n'}\textbf{e}_1^T.\\
C'(\operatorname{J}_{M'}+\textbf{e}_1\hat{\bf x}^T)&=\left(V-\frac{1}{n'}\mathbb{J}'V+\bone_{n'}\textbf{e}_1^T\right)(\operatorname{J}_{M'}+\textbf{e}_1\hat{\bf x}^T) \\
&=V\operatorname{J}_{M'}-\frac{1}{n'}\mathbb{J}'V\operatorname{J}_{M'}+\bone_{n'}\textbf{e}_1^T\operatorname{J}_{M'}+V\textbf{e}_1\hat{\bf x}^T-\frac{1}{n'}\mathbb{J}'V\textbf{e}_1\hat{\bf x}^T+\bone_{n'}\textbf{e}_1^T\textbf{e}_1\hat{\bf x}^T\\
&=V\operatorname{J}_{M'}- \bone_{n'}{\bf a}^T\operatorname{J}_{M'}+\rho'\bone_{n'}\textbf{e}_1^T
+\bone_{n'}\hat{\bf x}^T\\
&=M'C'+\rho'\bone_{n'}{\bf a}^T- \bone_{n'}{\bf a}^T\operatorname{J}_{M'}+\bone_{n'}\hat{\bf x}^T\\
&=M'C'+\bone_{n'}\left(\rho'{\bf a}^T-{\bf a}^T\operatorname{J}_{M'}+\hat{\bf x}^T\right)\\
&=M'C'+\bone_{n'}(-\hat{\bf x}^T+\hat{\bf x}^T)\\
&=M'C'.
\end{aligned}
\]
Therefore, $C'^{-1}M'C'=\operatorname{J}_{M'}+\textbf{e}_1\hat{\bf x}^T$, and the claim is true.
Let us now focus on the entries of $\hat{\bf x} = [\hat x_i]$. We have already noticed that $\hat{x}_1=0$. Furthermore, for $i=2,\dots,n'$, we have that $\hat{x}_{i}=\hat{\bf x}^{T}\textbf{e}_i={\bf a}^T\operatorname{J}_{M'}\textbf{e}_i-
\rho'{\bf a}^T\textbf{e}_i=\lambda' a_i+\delta_i a_{i-1}-\rho'a_i$, where
$\lambda'=(\operatorname{J}_{M'})_{ii}$, $\delta_i=0$ if $V\textbf{e}_i$ is an eigenvector of $M'$, and $\delta_i=1$ otherwise. Suppose now that $ES_{M'}(\lambda')\perp \bone_{n'}$. Then, whenever $\delta_i=0$ with $\lambda'=(\operatorname{J}_{M'})_{ii}$, $V\textbf{e}_i\perp \bone_{n'}$, so $a_i=0$ and $\hat{x}_i=0$. On the other hand, if $ES_{M'}(\lambda')\not\perp \bone_{n'}$, we can find some $i$ such that $\lambda'=(\operatorname{J}_{M'})_{ii}$, $\delta_i=0$, and $a_i=1$, which means that $\hat{x}_i=\lambda'-\rho'\neq 0$.
Take $z\in\mathbb{C}$ and suppose that $z\in\operatorname{spec}(M')\setminus\{\rho'\}$ and $\tilde z\in\operatorname{spec}(M)$. Define $u=\operatorname{mult}_M(\tilde z)$ and $ u'=\operatorname{mult}_{M'}(z)$. We can permute the rows and columns of $(C^{-1}\otimes C'^{-1})(M\circ M')(C\otimes C')$ in such a way that all the appearances of $z$ on the diagonal are grouped together in a square block $B$. By virtue of Lemma \ref{lem_sylvester_1645_10nov}, the Jordan blocks relative to the eigenvalue $z$ only depend on $B$. We observe that $B$ has order $t=nu'+u$. Hence, $\operatorname{gmult}_{M\circ M'}(z)=t-\operatorname{rank}(B-z\I_t)$. If $ES_{M'}(z)\perp \bone_{n'}$, from the discussion above we see that the entries in $\hat{\bf x}$ do not influence the rank of $B-z\I_t$, since they can be reduced to zero by subtracting suitable rows of $B-z\I_t$. As a consequence, $\operatorname{rank}(B-z\I_t)=n(u'-g')+u-g$ and hence,
\[
\operatorname{gmult}_{M\circ M'}(z)=nu'+u-nu'+ng'-u+g=ng'+g.
\]
If $ES_{M'}(z)\not\perp \bone_{n'}$, on the other hand, again using the discussion above we see that $\operatorname{rank}(B-z\I_t)=n(u'-g')+u$. Indeed, in this case, there exists $i\in\{2,\dots,n'\}$ such that $z=(\operatorname{J}_{M'})_{ii}$, $\delta_i=0$, and $\hat{x}_i\neq 0$. Therefore, every row of $B-z\I_t$ containing $\hat{\textbf{x}}^T$ is linearly independent from the remaining rows of $B-z\I_t$, and, thus, it increases the rank by $1$. This yields
\[
\operatorname{gmult}_{M\circ M'}(z)=nu'+u-nu'+ng'-u=ng'.
\]
\end{proof}
\begin{example}{\rm
We now test Theorem \ref{thm_geometric_mult_lex_product_matrices} on the matrices $M$ and $M'$ defined in Example \ref{example_lexp_non_diag}. As predicted by Theorem \ref{thm:spectrum_lex_prod_general_matrices} we have that
\[
\operatorname{spec}(M\circ M')=\{58-\sqrt{7},(2+\sqrt{7})^{(3)},(2-\sqrt{7})^{(2)}\}.
\]
\begin{itemize}
\item
If $z=58-\sqrt{7}$ then $\tilde z=\frac{1}{3}(28-\sqrt{7})$. This corresponds to the first case of \eqref{eqn_geom_mult_description_1551_10_nov}, and hence we obtain $\operatorname{gmult}_{M\circ M'}(58-\sqrt{7})=\operatorname{gmult}_M(\frac{1}{3}(28-\sqrt{7}))=1$.
\item
If $z=2-\sqrt{7}$ then $\tilde z=\frac{1}{3}(-28-\sqrt{7})$. This corresponds to the second case of \eqref{eqn_geom_mult_description_1551_10_nov}, and hence we obtain $\operatorname{gmult}_{M\circ M'}(2-\sqrt{7})=n\,\operatorname{gmult}_{M'}(2-\sqrt{7})=2$.
\item
If $z=2+\sqrt{7}$ then $\tilde z=-\frac{1}{3}(28-\sqrt{7})$. Moreover, we find that $ES_{M'}(2+\sqrt{7})=\operatorname{span}(\textbf{v})$ with $\textbf{v}^T=\begin{bmatrix}
\frac{24-4\sqrt{7}}{-25+7\sqrt{7}}
&
\frac{30-28\sqrt{7}}{-201+45\sqrt{7}}
&
1
\end{bmatrix}
$.
Since $\textbf{v}^T\bone_3=\frac{13-\sqrt{7}}{-75+21\sqrt{7}}\neq 0$, we see that $ES_{M'}(2+\sqrt{7})\not\perp\bone_3$, so that this corresponds to the fourth case of \eqref{eqn_geom_mult_description_1551_10_nov}, and hence we obtain $\operatorname{gmult}_{M\circ M'}(2+\sqrt{7})=n\,\operatorname{gmult}_{M'}(2+\sqrt{7})=2$.
\end{itemize}
Notice that $\operatorname{gmult}_{M\circ M'}(2+\sqrt{7})<\operatorname{mult}_{M\circ M'}(2+\sqrt{7})$, which implies that $M\circ M'$ is not diagonalizable (as computed in Example \ref{example_lexp_non_diag}).
}\end{example}
We can apply Theorem \ref{thm:spectrum_lex_prod_general_matrices} and Theorem \ref{thm_geometric_mult_lex_product_matrices} to derive results on the adjacency spectra of lexicographic products of digraphs, and for the Laplacian and signless Laplacian spectra of lexicographic products of digraphs with additional conditions. The first part of the following result was proved in \cite{EH80} for the case where $\Gamma'$ is a regular digraph (all row and column sums of its adjacency matrix are equal), and is an extension of results known to hold for graphs (see, e.g., \cite{BKPS}).
\begin{corollary}\label{thm:TRlexprod-dig_new_new} Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that
$\Gamma'$ is $r'$-out-regular.
Let $\operatorname{spec}_{\mathcal A}(\Gamma)=(\alpha_1,\alpha_2,\dots,\alpha_n)$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=(r',\alpha'_2,\dots,\alpha'_{n'})$.
Then,
\[
\operatorname{spec}_{\mathcal A}(\Gamma{\circ}\Gamma')=\left\{n'\alpha_i + r', \ i = 1, \dots, n \right\} \cup \left\{{\alpha'_j}^{(n)}, \ j = 2, \dots, n' \right\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-r'}{n'}$,
$g=\operatorname{gmult}_{\mathcal A(\Gamma)}(\tilde z)$, and
$g'=\operatorname{gmult}_{\mathcal A(\Gamma')}(z)$. Then
\[
\operatorname{gmult}_{\mathcal A(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&g &\mbox{if }\;z\not\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal A}(\Gamma);\\
&ng' &\mbox{if }\; z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\not\in\operatorname{spec}_{\mathcal A}(\Gamma);\\
&ng'+g &\mbox{if }\;z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal A}(\Gamma),\;ES_{\mathcal A(\Gamma')}(z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal A}(\Gamma),\;ES_{\mathcal A(\Gamma')}(z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
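\begin{example}{\rm
For a concrete instance of Corollary \ref{thm:TRlexprod-dig_new_new}, with digraphs chosen only for illustration, let $\Gamma$ be the directed $3$-cycle, so $\operatorname{spec}_{\mathcal A}(\Gamma)=\{1,\omega,\omega^2\}$ for $\omega$ a primitive third root of unity, and let $\Gamma'$ be the digraph on two vertices with both arcs, so $r'=1$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=\{1,-1\}$. Then
\[
\operatorname{spec}_{\mathcal A}(\Gamma\circ\Gamma')=\{2\alpha_i+1:\alpha_i\in\{1,\omega,\omega^2\}\}\cup\{(-1)^{(3)}\}=\{3,\,2\omega+1,\,2\omega^2+1,\,(-1)^{(3)}\}.
\]
For $z=-1$ we have $\tilde z=\frac{-1-1}{2}=-1\not\in\operatorname{spec}_{\mathcal A}(\Gamma)$, so the second case gives $\operatorname{gmult}_{\mathcal A(\Gamma\circ\Gamma')}(-1)=3\cdot 1=3$; since the remaining eigenvalues are simple, $\mathcal A(\Gamma\circ\Gamma')$ is diagonalizable in this instance.
}\end{example}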
If both $\Gamma$ and $\Gamma'$ are out-regular, then $\Gamma\circ\Gamma'$ is out-regular, too. As a consequence, both the spectrum of the Laplacian matrix and of the signless Laplacian matrix of $\Gamma\circ\Gamma'$ are obtained via shifting the spectrum of its adjacency matrix. Corollary \ref{cor_spectrum_laplacian_lexp} and Corollary \ref{cor_spectrum_SIGNLESS_laplacian_lexp} are then derived from Corollary \ref{thm:TRlexprod-dig_new_new} using basic algebraic manipulations.
\begin{corollary}\label{cor_spectrum_laplacian_lexp} Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma$ is $r$-out-regular and $\Gamma'$ is $r'$-out-regular. Let $\operatorname{spec}_L(\Gamma)=(0,\alpha^L_2,\dots,\alpha^L_n)$ and $\operatorname{spec}_L(\Gamma')=(0,\alpha^{L'}_2,\dots,\alpha^{L'}_{n'})$.
Then $\Gamma\circ\Gamma'$ is $(rn'+r')$-out-regular and
\[
\operatorname{spec}_L(\Gamma{\circ}\Gamma')=\{0\}\cup\{n'\alpha^L_i, \ i=2,\dots,n\}\cup \{(\alpha^{L'}_{j}+rn')^{(n)}, \ j=2,\dots,n'\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z}{n'}$, $\hat z=z-rn'$,
$g=\operatorname{gmult}_{L(\Gamma)}(\tilde z)$,
and $g'=\operatorname{gmult}_{L(\Gamma')}(\hat z)$. Then
\[
\operatorname{gmult}_{L(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&g &\mbox{if }\;\hat z\not\in\operatorname{spec}_{L}(\Gamma')\setminus\{0\},\;\tilde z\in\operatorname{spec}_{L}(\Gamma);\\
&ng' &\mbox{if }\; \hat z\in\operatorname{spec}_{L}(\Gamma')\setminus\{0\},\;\tilde z\not\in\operatorname{spec}_{L}(\Gamma);\\
&ng'+g &\mbox{if }\;\hat z\in\operatorname{spec}_{L}(\Gamma')\setminus\{0\},\;\tilde z\in\operatorname{spec}_{L}(\Gamma),\;ES_{L(\Gamma')}(\hat z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;\hat z\in\operatorname{spec}_{L}(\Gamma')\setminus\{0\},\;\tilde z\in\operatorname{spec}_{L}(\Gamma),\;ES_{L(\Gamma')}(\hat z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
\begin{corollary}\label{cor_spectrum_SIGNLESS_laplacian_lexp} Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma$ is $r$-out-regular and $\Gamma'$ is $r'$-out-regular. Let $\operatorname{spec}_Q(\Gamma)=(\alpha^Q_1,\alpha^Q_2,\dots,\alpha^Q_n)$ and $\operatorname{spec}_Q(\Gamma')=(\alpha^{Q'}_1,\alpha^{Q'}_2,\dots,\alpha^{Q'}_{n'})$. Then
\[
\operatorname{spec}_Q(\Gamma{\circ}\Gamma')=\{n'\alpha^Q_i+2r', \ i=1,\dots,n\}\cup\{(\alpha^{Q'}_j+rn')^{(n)}, \ j=2,\dots,n'\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-2r'}{n'}$, $\hat z=z-rn'$,
$g=\operatorname{gmult}_{Q(\Gamma)}(\tilde z)$,
and $g'=\operatorname{gmult}_{Q(\Gamma')}(\hat z)$. Then
\[
\operatorname{gmult}_{Q(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&g &\mbox{if }\;\hat z\not\in\operatorname{spec}_{Q}(\Gamma')\setminus\{2r'\},\;\tilde z\in\operatorname{spec}_{Q}(\Gamma);\\
&ng' &\mbox{if }\; \hat z\in\operatorname{spec}_{Q}(\Gamma')\setminus\{2r'\},\;\tilde z\not\in\operatorname{spec}_{Q}(\Gamma);\\
&ng'+g &\mbox{if }\;\hat z\in\operatorname{spec}_{Q}(\Gamma')\setminus\{2r'\},\;\tilde z\in\operatorname{spec}_{Q}(\Gamma),\;ES_{Q(\Gamma')}(\hat z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;\hat z\in\operatorname{spec}_{Q}(\Gamma')\setminus\{2r'\},\;\tilde z\in\operatorname{spec}_{Q}(\Gamma),\;ES_{Q(\Gamma')}(\hat z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
The lexicographic product $\Gamma\circ \Gamma'$ is strongly connected if and only if $\Gamma$ is strongly connected \cite{H18}, but $\Gamma'$ need not be. If $\Gamma'$ is not strongly connected, then $d_{\Gamma'}(x',y')=\infty$ when there is no dipath from $x'$ to $y'$. Due to this subtlety, in this section only, we state every strong connectivity requirement explicitly.
For a vertex $x$ of a strongly connected digraph $\Gamma$, $\xi_\Gamma(x)$ is the length of a shortest (nontrivial) dicycle containing $x$. If $\Gamma$ has at least one dicycle, the minimum length of a dicycle in $\Gamma$ is called the {\em girth} of $\Gamma$, denoted $g(\Gamma)$.
\begin{proposition}\label{distlex} {\rm \cite{H18}}
If $\Gamma, \Gamma'$ are digraphs such that $\Gamma$ is strongly connected, the distance formula for the lexicographic product $\Gamma \circ \Gamma'$ is
\[
d_{\Gamma \circ \Gamma'}((x,x'),(y,y')) = \left\{ \begin{array}{lr} d_{\Gamma}(x,y) & \mbox{if } x \ne y\\
\min \{\xi_\Gamma(x), \ d_{\Gamma'}(x',y')\} & \mbox{if } x = y.
\end{array} \right.
\]
\end{proposition}
\begin{obs}\label{obs:lex-longcycle}
If $\Gamma$ and $ \Gamma'$ are strongly connected digraphs such that $\operatorname{diam} \Gamma' \leq \displaystyle g(\Gamma)$,
then
the distance formula in Proposition \ref{distlex} becomes
\[
d_{\Gamma \circ \Gamma'}((x,x'),(y,y')) = \left\{ \begin{array}{lr} d_{\Gamma}(x,y) & \mbox{if } x \ne y\\
d_{\Gamma'}(x',y') & \mbox{if } x = y.
\end{array} \right.
\]
In this case, by a suitable ordering of vertices, the distance matrix $\mathcal D(\Gamma \circ \Gamma')$ can be written in the form
$\mathcal D(\Gamma \circ \Gamma') = \mathcal D(\Gamma) \otimes \mathbb{J}_{n'} + \I_n \otimes \mathcal D(\Gamma')=\mathcal D(\Gamma)\circ\mathcal D(\Gamma')$.
\end{obs}
The {\em complement} of a digraph $\Gamma=(V,E)$ is the digraph $\overline\Gamma=(V,\overline E)$ where $\overline E$ consists of all arcs not in $\Gamma$. \vspace{-5pt}
\begin{obs}\label{obs:lex-doubly-directed}
If $\Gamma$ and $ \Gamma'$ are digraphs such that $\Gamma$ is strongly connected and every vertex is incident with a doubly directed arc, then $\xi_\Gamma(x) = 2$ for any vertex $x$ of $\Gamma$. In this case, by a suitable ordering of vertices, the distance matrix $\mathcal D(\Gamma \circ \Gamma')$ can be written in the form
$\mathcal D(\Gamma \circ \Gamma') = \mathcal D(\Gamma) \otimes \mathbb{J}_{n'} + \I_n \otimes (\mathcal A(\Gamma') + 2\mathcal A(\overline{\Gamma'}))=\mathcal D(\Gamma)\circ (\mathcal A(\Gamma') + 2\mathcal A(\overline{\Gamma'}))$
as derived in \cite{I2009} for graphs.
\end{obs}
We can apply Theorem \ref{thm:spectrum_lex_prod_general_matrices} and Theorem \ref{thm_geometric_mult_lex_product_matrices} to provide results on the distance spectra of lexicographic products of digraphs which satisfy certain hypotheses.
The next result is an immediate consequence of Observation \ref{obs:lex-longcycle}, Theorem \ref{thm:spectrum_lex_prod_general_matrices}, and Theorem \ref{thm_geometric_mult_lex_product_matrices}.
\begin{corollary}\label{cor_spectrum_distance_lexp_long_cycle} Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma'$ is $t'$-transmission regular, and $\operatorname{diam} \Gamma' \leq \displaystyle g(\Gamma)$.
Let $\operatorname{spec}_{\mathcal D}(\Gamma)=(\partial_1,\partial_2,\dots,\partial_n)$ and $\operatorname{spec}_{\mathcal D}(\Gamma')=(t',\partial'_2,\dots,\partial'_{n'})$.
Then
\[
\operatorname{spec}_{\mathcal D}(\Gamma{\circ}\Gamma')=\left\{n'\partial_i + t', \ i = 1, \dots, n \right\} \cup \left\{{\partial'_j}^{(n)}, \ j = 2, \dots, n' \right\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-t'}{n'}$,
$ g=\operatorname{gmult}_{\mathcal D(\Gamma)}(\tilde z)$,
and $g'=\operatorname{gmult}_{\mathcal D(\Gamma')}(z)$. Then
\[
\operatorname{gmult}_{\mathcal D(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&g &\mbox{if }\;z\not\in\operatorname{spec}_{\mathcal D}(\Gamma')\setminus\{t'\},\;\tilde z\in\operatorname{spec}_{\mathcal D}(\Gamma);\\
&ng' &\mbox{if }\; z\in\operatorname{spec}_{\mathcal D}(\Gamma')\setminus\{t'\},\;\tilde z\not\in\operatorname{spec}_{\mathcal D}(\Gamma);\\
&ng'+g &\mbox{if }\;z\in\operatorname{spec}_{\mathcal D}(\Gamma')\setminus\{t'\},\;\tilde z\in\operatorname{spec}_{\mathcal D}(\Gamma),\;ES_{\mathcal D(\Gamma')}(z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;z\in\operatorname{spec}_{\mathcal D}(\Gamma')\setminus\{t'\},\;\tilde z\in\operatorname{spec}_{\mathcal D}(\Gamma),\;ES_{\mathcal D(\Gamma')}(z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
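\begin{example}{\rm
For a small instance of Corollary \ref{cor_spectrum_distance_lexp_long_cycle}, with digraphs chosen only for illustration, let $\Gamma$ be the directed $3$-cycle and let $\Gamma'$ be the digraph on two vertices with both arcs, so $\Gamma'$ is $1$-transmission regular and $\operatorname{diam}\Gamma'=1\leq g(\Gamma)=3$. For $\omega$ a primitive third root of unity, $\operatorname{spec}_{\mathcal D}(\Gamma)=\{3,-\omega-2,-\omega^2-2\}$ and $\operatorname{spec}_{\mathcal D}(\Gamma')=\{1,-1\}$, so the corollary yields
\[
\operatorname{spec}_{\mathcal D}(\Gamma\circ\Gamma')=\{7,\,-2\omega-3,\,-2\omega^2-3\}\cup\{(-1)^{(3)}\}.
\]
For $z=-1$, $\tilde z=\frac{-1-1}{2}=-1\not\in\operatorname{spec}_{\mathcal D}(\Gamma)$, so $\operatorname{gmult}_{\mathcal D(\Gamma\circ\Gamma')}(-1)=3$.
}\end{example}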
To establish a result about the distance matrix of a lexicographic product when every vertex of the first factor is incident with a doubly directed arc, we make use of Observation \ref{obs:lex-doubly-directed}, Theorem \ref{thm:spectrum_lex_prod_general_matrices}, Theorem \ref{thm_geometric_mult_lex_product_matrices}, and the next proposition.
\begin{proposition}\label{o:adj-comp}
Let $\Gamma$ be an $r$-out-regular digraph with $\operatorname{spec}_\mathcal A(\Gamma)=\{r,\alpha_2,\dots,\alpha_{n}\}$ and let $B= \mathcal A(\Gamma)+2\mathcal A(\overline{\Gamma})$. Then $B$ is an irreducible nonnegative matrix, $\operatorname{spec}(B)=\{2n-2-r,-(\alpha_2+2),\dots,-(\alpha_{n}+2)\}$, and $\rho(B)=2n-2-r$. Furthermore, $\operatorname{gmult}_B(-\alpha_j-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(\alpha_j)$ for $\alpha_j\ne r$ and $\operatorname{gmult}_B(-r-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(r)-1$. \vspace{-5pt}
Suppose ${\bf v}_j$ is an eigenvector of $\mathcal A(\Gamma)$ for eigenvalue $\alpha_j$ for $j=2,\dots,k$, and define $\beta_j=\frac{2{\bf v}_j^T\bone_{n}}{r-\alpha_j-2n}$. Then $\bone_{n}$ is an eigenvector of $B$ for eigenvalue $2n-2-r$, and ${\bf v}_j+\beta_j\bone_{n}$ is an eigenvector of $B$ for eigenvalue $-\alpha_j-2$ for $j=2,\dots,k$.
\end{proposition}
\begin{proof}
Observe first that every off-diagonal entry of $B= \mathcal A(\Gamma)+2\mathcal A(\overline{\Gamma})$ is nonzero, so $B$ is an irreducible nonnegative matrix. Furthermore, $\mathcal A(\overline{\Gamma})=\mathbb{J}_{n}-\I_{n}-\mathcal A(\Gamma)$, so $B= 2\mathbb{J}_{n}-2\I_{n}-\mathcal A(\Gamma)$.
Hence,
\[
\begin{aligned}
B\bone_{n}=2\mathbb{J}_{n}\bone_{n}-2\I_{n}\bone_{n}-\mathcal A(\Gamma)\bone_{n}=(2n-2-r)\bone_{n}
\end{aligned}
\]
and $2n-2-r$ is the spectral radius of $B$.
Let $\operatorname{J}_{\mathcal A(\Gamma)}=\begin{bmatrix}
r & {\bf y}^T\\
\boldzero & R
\end{bmatrix}$. Apply Proposition \ref{prop:MJ-JCF} to choose $C$ such that $C^{-1}\mathbb{J}_{n}C=\begin{bmatrix}
n & \boldzero^T\\
\boldzero & O
\end{bmatrix}$ and $C^{-1}\mathcal A(\Gamma)C=\begin{bmatrix}
r & \textbf{x}^T\\
\boldzero & R
\end{bmatrix}$
for some Jordan matrix $R$ and $\textbf{x}\in \mathbb{R}^{n-1}$. Then
\begin{eqnarray}
C^{-1}BC&=&2C^{-1}\mathbb{J}_{n}C-2C^{-1}\I_{n}C-C^{-1}\mathcal A(\Gamma)C\nonumber\\
&=&\label{e:JCF-314}
\begin{bmatrix}
2n & \boldzero^T\\
\boldzero & O
\end{bmatrix}
-2\I_{n}
-
\begin{bmatrix}
r & \textbf{x}^T\\
\boldzero & R
\end{bmatrix}
=
\begin{bmatrix}
2n-2-r & -\textbf{x}^T \\
\boldzero & -2\I_{n-1}-R
\end{bmatrix}\!,
\end{eqnarray} %
which shows that $\operatorname{spec}(B)=\{2n-2-r,-(\alpha_2+2),\dots,-(\alpha_{n}+2)\}$.
Since $B$ is irreducible, $2n-2-r$ is a simple eigenvalue of $B$. Applying Lemma \ref{lem_sylvester_1645_10nov} to \eqref{e:JCF-314}, we see that
\[
\operatorname{J}_{B}=
\begin{bmatrix}
2n-2-r & \boldzero^T \\
\boldzero & -2\I_{n-1}-R
\end{bmatrix}
\]
so that $\operatorname{gmult}_{B}(-\alpha_j-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(\alpha_j)$ for $j=2,\dots,n$ with $\alpha_j\ne r$, and $\operatorname{gmult}_B(-r-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(r)-1$.
Observe that $r-\alpha_j-2n\neq 0$ because $|\alpha_j|\le r<n$, where the second inequality is due to the fact that $r$ is the out-degree of each vertex in $\Gamma$. Hence,
$
|r-\alpha_j-2n|\geq 2n-|r|-|\alpha_j|> 0
$ and $\beta_j$ is well defined.
It is immediate that $\bone_n$ is an eigenvector for $2n-2-r$, and
\[\begin{aligned}
B({\bf v}_j+\beta_j\bone_{n})
&= 2\bone_{n}^T{\bf v}_j \bone_{n}+2n\beta_j\bone_{n} -2{\bf v}_j-2\beta_j\bone_{n}-\alpha_j{\bf v}_j-\beta_jr\bone_n\\
&= (-\alpha_j-2){\bf v}_j+\left(2\bone_{n}^T{\bf v}_j+(2n-2-r)\beta_j\right)\bone_n\\
&= (-\alpha_j-2){\bf v}_j+(-\alpha_j-2)\beta_j\bone_{n}= (-\alpha_j-2)({\bf v}_j+\beta_j\bone_{n}),
\end{aligned}\]
where the last step uses $2\bone_{n}^T{\bf v}_j=(r-\alpha_j-2n)\beta_j$, which holds by the definition of $\beta_j$.
\end{proof}
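\begin{example}{\rm
To check Proposition \ref{o:adj-comp} on a small case of our choosing, let $\Gamma$ be the directed $3$-cycle, so $r=1$ and $\operatorname{spec}_{\mathcal A}(\Gamma)=\{1,\omega,\omega^2\}$ for $\omega$ a primitive third root of unity. Here $\mathcal A(\overline{\Gamma})=\mathcal A(\Gamma)^T$, so $B=\mathcal A(\Gamma)+2\mathcal A(\Gamma)^T$ is the circulant matrix with first row $(0,1,2)$, and a direct circulant computation gives $\operatorname{spec}(B)=\{3,-\omega-2,-\omega^2-2\}=\{2n-2-r\}\cup\{-(\alpha_j+2):\alpha_j\neq r\}$, as the proposition predicts. Moreover, ${\bf v}_2=(1,\omega,\omega^2)^T$ satisfies ${\bf v}_2^T\bone_3=0$, so $\beta_2=0$ and ${\bf v}_2$ itself is an eigenvector of $B$ for $-\omega-2$.
}\end{example}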
\begin{theorem}\label{thm:TRlexprod-dig_doubly_directed}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively,
such that every vertex of $\Gamma$ is incident with a doubly directed arc,
and all vertices in $\Gamma'$ have out-degree $r'$. Let $\operatorname{spec}_{\mathcal D}(\Gamma)=(\partial_1,\partial_2,\dots,\partial_n)$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=(r',\alpha'_2,\dots,\alpha'_{n'})$.
Then
\[
\operatorname{spec}_{\mathcal D}(\Gamma{\circ}\Gamma')=\left\{n'\partial_i + 2n' - 2 - r', \ i = 1, \dots, n \right\} \cup \left\{{-(\alpha'_j+2)}^{(n)}, \ j = 2, \dots, n' \right\}\!.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-2n'+2+r'}{n'}$,
$g=\operatorname{gmult}_{\mathcal D(\Gamma)}(\tilde z)$,
and $g'=\operatorname{gmult}_{\mathcal A(\Gamma')}(-z-2)$. Then
\[
\operatorname{gmult}_{\mathcal D(\Gamma \circ \Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&g &\mbox{if }\;-z-2\not\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D}(\Gamma);\\
&ng' &\mbox{if }\; -z-2\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\not\in\operatorname{spec}_{\mathcal D}(\Gamma);\\
&ng'+g &\mbox{if }\;-z-2\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D}(\Gamma),\;ES_{\mathcal A(\Gamma')}(-z-2)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;-z-2\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D}(\Gamma),\;ES_{\mathcal A(\Gamma')}(-z-2)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{theorem}
\begin{proof}
By Observation \ref{obs:lex-doubly-directed}, $\mathcal D(\Gamma \circ \Gamma') = \mathcal D(\Gamma)\circ (\mathcal A(\Gamma') + 2\mathcal A(\overline{\Gamma'}))$.
Let $M'= \mathcal A(\Gamma') + 2\mathcal A(\overline{\Gamma'})$, so $\operatorname{spec}(M')=\{2n'-2-r',-(\alpha_2'+2),\dots,-(\alpha_{n'}'+2)\}$
by Proposition \ref{o:adj-comp}.
The first part of the theorem then follows from
Theorem \ref{thm:spectrum_lex_prod_general_matrices}.
Since $\Gamma'$ is strongly connected, $r'$ is a simple eigenvalue and $\operatorname{gmult}_{M'}(-\alpha'_j-2)=\operatorname{gmult}_{\mathcal A(\Gamma')}(\alpha'_j)$ for $j=2,\dots,n'$ by Proposition \ref{o:adj-comp}. We now claim that $ES_{M'}(-\alpha'_j-2)\perp \bone_{n'}$ exactly when $ES_{\mathcal A(\Gamma')}(\alpha'_j)\perp\bone_{n'}$. First, if $ES_{\mathcal A(\Gamma')}(\alpha'_j)\perp\bone_{n'}$, then given $\textbf{v}\in ES_{\mathcal A(\Gamma')}(\alpha'_j)$,
\[
M'\textbf{v}=2\mathbb{J}_{n'}\textbf{v}-2\I_{n'}\textbf{v}-\mathcal A(\Gamma')\textbf{v}=(-\alpha'_j-2)\textbf{v}
\]
so that $ES_{\mathcal A(\Gamma')}(\alpha'_j)\subseteq ES_{M'}(-\alpha'_j-2)$. Since $\operatorname{gmult}_{M'}(-\alpha'_j-2)=\operatorname{gmult}_{\mathcal A(\Gamma')}(\alpha'_j)$, we conclude that $ES_{\mathcal A(\Gamma')}(\alpha'_j)= ES_{M'}(-\alpha'_j-2)$ and the claim follows in this case. Suppose now that $ES_{\mathcal A(\Gamma')}(\alpha'_j)\not\perp\bone_{n'}$, and let $\textbf{w}\in ES_{\mathcal A(\Gamma')}(\alpha'_j)$, $\textbf{w}\not\perp\bone_{n'}$.
Define $\tilde{\textbf{w}}=\textbf{w}+\beta_j\bone_{n'}$ with $\beta_j=\frac{2\textbf{w}^T\bone_{n'}}{r'-\alpha'_j-2n'}$ as in Proposition \ref{o:adj-comp}, so that $\tilde{\textbf{w}}\in ES_{M'}(-\alpha'_j-2)$. The claim then follows since
\[
\begin{aligned}
\tilde{\textbf{w}}^T\bone_{n'}&=(\textbf{w}+\beta_j\bone_{n'})^T\bone_{n'}=\textbf{w}^T\bone_{n'}+n'\beta_j
=
\textbf{w}^T\bone_{n'}\left(1+\frac{2n'}{r'-\alpha'_j-2n'}\right)\\
&=
\textbf{w}^T\bone_{n'}
\left(
\frac{r'-\alpha'_j}{r'-\alpha'_j-2n'}
\right)\neq 0
\end{aligned}
\]
because $r'$ is a simple eigenvalue. The second part of the theorem is then a direct consequence of Theorem \ref{thm_geometric_mult_lex_product_matrices}.
\end{proof}
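\begin{example}{\rm
For an instance of Theorem \ref{thm:TRlexprod-dig_doubly_directed}, again with digraphs chosen only for illustration, let $\Gamma$ be the digraph on two vertices with both arcs, so every vertex of $\Gamma$ is incident with a doubly directed arc and $\operatorname{spec}_{\mathcal D}(\Gamma)=\{1,-1\}$, and let $\Gamma'$ be the directed $3$-cycle, so $r'=1$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=\{1,\omega,\omega^2\}$ for $\omega$ a primitive third root of unity. With $n=2$ and $n'=3$, the theorem gives
\[
\operatorname{spec}_{\mathcal D}(\Gamma\circ\Gamma')=\{3\partial_i+3:\partial_i\in\{1,-1\}\}\cup\{(-\omega-2)^{(2)},(-\omega^2-2)^{(2)}\}=\{6,\,0,\,(-\omega-2)^{(2)},\,(-\omega^2-2)^{(2)}\},
\]
which is consistent with the zero trace of a distance matrix, since $(-\omega-2)+(-\omega^2-2)=-3$.
}\end{example}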
Using Corollary \ref{cor_spectrum_distance_lexp_long_cycle} and Theorem \ref{thm:TRlexprod-dig_doubly_directed} we obtain a description for the spectrum of the (signless) distance Laplacian matrix of $\Gamma\circ\Gamma'$ under certain conditions. This is done in Corollary \ref{cor_spectrum_dist_lap_lex_long_cycle}, Corollary \ref{cor_spectrum_SIGNLESS_dist_lap_lex_long_cycle}, Corollary \ref{cor_spectrum_dist_lap_lex_doubly_dir} and Corollary \ref{cor_spectrum_dist_SIGNLESS_lap_lex_doubly_dir}.
\begin{corollary}
\label{cor_spectrum_dist_lap_lex_long_cycle}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma$ is $t$-transmission regular, $\Gamma'$ is $t'$-transmission regular and $\operatorname{diam} \Gamma' \leq \displaystyle g(\Gamma)$.
Let $\operatorname{spec}_{\mathcal D^L}(\Gamma)=(0,\partial^L_2,\dots,\partial^L_n)$ and $\operatorname{spec}_{\mathcal D^L}(\Gamma')=(0,{\partial^L}'_2,\dots,{\partial^L}'_{n'})$.
Then $\Gamma\circ\Gamma'$ is $(tn'+t')$-transmission regular and
\[
\operatorname{spec}_{\mathcal D^L}(\Gamma\circ\Gamma')=\{0\}\cup\{n'\partial^L_i, \ i=2,\dots,n\}\cup \{({{\partial^L}'_j}+tn')^{(n)}, \ j=2,\dots,n'\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z}{n'}$, $\hat z=z-tn'$, $ g=\operatorname{gmult}_{\mathcal D^L(\Gamma)}(\tilde z),$ and $ g'=\operatorname{gmult}_{\mathcal D^L(\Gamma')}(\hat z)$. Then
\[
\operatorname{gmult}_{\mathcal D^L(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&ng' &\mbox{if }\; \hat z\in\operatorname{spec}_{\mathcal D^L}(\Gamma')\setminus\{0\},\;\tilde z\not\in\operatorname{spec}_{\mathcal D^L}(\Gamma);\\
&g &\mbox{if }\;\hat z\not\in\operatorname{spec}_{\mathcal D^L}(\Gamma')\setminus\{0\},\;\tilde z\in\operatorname{spec}_{\mathcal D^L}(\Gamma);\\
&ng'+g &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal D^L}(\Gamma')\setminus\{0\},\;\tilde z\in\operatorname{spec}_{\mathcal D^L}(\Gamma),\;ES_{\mathcal D^L(\Gamma')}(\hat z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal D^L}(\Gamma')\setminus\{0\},\;\tilde z\in\operatorname{spec}_{\mathcal D^L}(\Gamma),\;ES_{\mathcal D^L(\Gamma')}(\hat z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
\begin{corollary}
\label{cor_spectrum_SIGNLESS_dist_lap_lex_long_cycle}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma$ is $t$-transmission regular, $\Gamma'$ is $t'$-transmission regular and $\operatorname{diam} \Gamma' \leq \displaystyle g(\Gamma)$.
Let $\operatorname{spec}_{\mathcal D^Q}(\Gamma)=(\partial^Q_1,\partial^Q_2,\dots,\partial^Q_n)$ and $\operatorname{spec}_{\mathcal D^Q}(\Gamma')=({\partial^Q}'_1,{\partial^Q}'_2,\dots,{\partial^Q}'_{n'})$.
Then
\[
\operatorname{spec}_{\mathcal D^Q}(\Gamma\circ\Gamma')=\{n'\partial^Q_i+2t', \ i=1,\dots,n\}
\cup \{({{{\partial^Q}'_j}+tn'})^{(n)}, \ j=2,\dots,n'\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-2t'}{n'}$, $\hat z=z-tn'$, $ g=\operatorname{gmult}_{\mathcal D^Q(\Gamma)}(\tilde z)$, and $ g'=\operatorname{gmult}_{\mathcal D^Q(\Gamma')}(\hat z)$. Then
\[
\operatorname{gmult}_{\mathcal D^Q(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&ng' &\mbox{if }\; \hat z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma')\setminus\{2t'\},\;\tilde z\not\in\operatorname{spec}_{\mathcal D^Q}(\Gamma);\\
&g &\mbox{if }\;\hat z\not\in\operatorname{spec}_{\mathcal D^Q}(\Gamma')\setminus\{2t'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma);\\
&ng'+g &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma')\setminus\{2t'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma),\;ES_{\mathcal D^Q(\Gamma')}(\hat z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma')\setminus\{2t'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma),\;ES_{\mathcal D^Q(\Gamma')}(\hat z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]\end{corollary}
\begin{corollary}
\label{cor_spectrum_dist_lap_lex_doubly_dir}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma$ is $t$-transmission regular, every vertex of $\Gamma$ is incident with a doubly directed arc, and all vertices in $\Gamma'$ have out-degree $r'$. Let $\operatorname{spec}_{\mathcal D^L}(\Gamma)=(0,\partial^L_2,\dots,\partial^L_n)$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=(r',\alpha'_2,\dots,\alpha'_{n'})$.
Then $\Gamma\circ\Gamma'$ is $(tn'+2n'-2-r')$-transmission regular and
\[
\operatorname{spec}_{\mathcal D^L}(\Gamma{\circ}\Gamma')=\{0\}\cup\{n'\partial^L_i, \ i=2,\dots,n\}\cup\{(tn'+2n'+\alpha'_j-r')^{(n)}, \ j=2,\dots,n'\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z}{n'}$, $\hat z=z-tn'-2n'+r'$, $ g=\operatorname{gmult}_{\mathcal D^L(\Gamma)}(\tilde z),$ and $ g'=\operatorname{gmult}_{\mathcal A(\Gamma')}(\hat z)$. Then
\[
\operatorname{gmult}_{\mathcal D^L(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&ng' &\mbox{if }\; \hat z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\not\in\operatorname{spec}_{\mathcal D^L}(\Gamma);\\
&g &\mbox{if }\;\hat z\not\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^L}(\Gamma);\\
&ng'+g &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^L}(\Gamma),\;ES_{\mathcal A(\Gamma')}(\hat z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^L}(\Gamma),\;ES_{\mathcal A(\Gamma')}(\hat z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
\begin{corollary}
\label{cor_spectrum_dist_SIGNLESS_lap_lex_doubly_dir}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that $\Gamma$ is $t$-transmission regular, every vertex of $\Gamma$ is incident with a doubly directed arc, and all vertices in $\Gamma'$ have out-degree $r'$. Let $\operatorname{spec}_{\mathcal D^Q}(\Gamma)=(\partial^Q_1,\partial^Q_2,\dots,\partial^Q_n)$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=(r',\alpha'_2,\dots,\alpha'_{n'})$.
Then
\[
\operatorname{spec}_{\mathcal D^Q}(\Gamma{\circ}\Gamma')=\{n'\partial^Q_i+4n'-4-2r', \ i=1,\dots,n\}\cup\{(tn'+2n'-r'-\alpha'_j-4)^{(n)}, \ j=2,\dots,n'\}.
\]
Given $z\in \mathbb{C}$, define $\tilde z=\frac{z-4n'+4+2r'}{n'}$, $\hat z=tn'+2n'-r'-4-z$, $ g=\operatorname{gmult}_{\mathcal D^Q(\Gamma)}(\tilde z),$ and $ g'=\operatorname{gmult}_{\mathcal A(\Gamma')}(\hat z)$. Then
\[
\operatorname{gmult}_{\mathcal D^Q(\Gamma\circ\Gamma')}(z)=
\begin{array}{l}
\left\{\begin{aligned}
&ng' &\mbox{if }\; \hat z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\not\in\operatorname{spec}_{\mathcal D^Q}(\Gamma);\\
&g &\mbox{if }\;\hat z\not\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma);\\
&ng'+g &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma),\;ES_{\mathcal A(\Gamma')}(\hat z)\perp \bone_{n'} ;\\
&ng' &\mbox{if }\;\hat z\in\operatorname{spec}_{\mathcal A}(\Gamma')\setminus\{r'\},\;\tilde z\in\operatorname{spec}_{\mathcal D^Q}(\Gamma),\;ES_{\mathcal A(\Gamma')}(\hat z)\not\perp \bone_{n'} ;\\
&0 &\mbox{otherwise}.
\end{aligned}\right.
\end{array}
\]
\end{corollary}
We next provide a description of the eigenvectors of $M \circ M'$ from the eigenvectors of $M$ and $M'$, addressing the first two cases in Theorem \ref{thm_geometric_mult_lex_product_matrices}.
\begin{theorem}
\label{prop:evectors_lex_prod_general_matrices}
Let $M\in\mathbb{R}^{n\times n}$ and $M'\in\mathbb{R}^{n'\times n'}$ be irreducible nonnegative matrices, and suppose that $M'\bone_{n'}=\rho'\bone_{n'}$ for some $\rho'\in\mathbb{R}$. Let $\{{\bf v}_1, \dots, {\bf v}_k\}$ be a linearly independent set of eigenvectors
with $M {\bf v}_i = \lambda_i {\bf v}_i$,
and let $\{\bone_{n'},{\bf v}_2', \dots, {\bf v}_{k'}'\}$ be a linearly independent set of eigenvectors
with $M' {\bf v}_{j}' = \lambda_j' {\bf v}_j'$.
Then
\begin{enumerate}[(1)]
\item \label{lexevec_1} For $i=1, \dots, k$, \, ${\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $M \circ M'$ corresponding to the eigenvalue $n'\lambda_i + \rho'$.
\item \label{lexevec_2} For $j=2, \dots, k'$, for $i=1, \dots, k$, define $\gamma_{ij} = \frac{-\lambda_i{{\bf v}_j'}^T\bone_{n'}}{\rho' + n'\lambda_i - \lambda_j'}$ when $\lambda_j' \neq n'\lambda_i + \rho'$. Then
${\bf v}_i \otimes {\bf v}_j' +\gamma_{ij}{\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $M\circ M'$ for the eigenvalue $\lambda_j'$.
\end{enumerate}
Furthermore, the set of eigenvectors of ${M}\circ{M}'$ described in \eqref{lexevec_1} and \eqref{lexevec_2} is linearly independent.
\end{theorem}
\begin{proof} First, $(M\circ M')({\bf v}_i \otimes \bone_{n'}) = (M\otimes \mathbb{J}_{n'}+\I_n\otimes M')({\bf v}_i \otimes \bone_{n'})
= (M{\bf v}_i)\otimes(\mathbb{J}_{n'} \bone_{n'}) + (\I_n{\bf v}_i) \otimes (M'\bone_{n'})
= (\lambda_i{\bf v}_i \otimes n'\bone_{n'}) + ({\bf v}_i \otimes \rho'\bone_{n'})
= (n'\lambda_i + \rho')({\bf v}_i \otimes \bone_{n'})$. For the second statement,\vspace{-5pt}
\[\begin{aligned} (M\circ M')({\bf v}_i \otimes {\bf v}_j' +\gamma_{ij}{\bf v}_i \otimes \bone_{n'}) &= \\
(M\otimes \mathbb{J}_{n'}+\I_n\otimes M')({\bf v}_i \otimes {\bf v}_j' +\gamma_{ij}{\bf v}_i \otimes \bone_{n'}) &=\\
(M{\bf v}_i)\otimes(\mathbb{J}_{n'} {\bf v}_j') + (\I_n{\bf v}_i) \otimes (M'{\bf v}_j') + \gamma_{ij} (M{\bf v}_i)\otimes(\mathbb{J}_{n'} \bone_{n'}) + \gamma_{ij}(\I_n{\bf v}_i) \otimes (M'\bone_{n'})&=\\
\lambda_i {{\bf v}_j'}^T\bone_{n'} ({\bf v}_i \otimes \bone_{n'}) + \lambda_j'({\bf v}_i \otimes {\bf v}_j') + \gamma_{ij} \lambda_i n'({\bf v}_i \otimes \bone_{n'}) + \gamma_{ij} \rho'({\bf v}_i \otimes \bone_{n'})&=\\
\lambda_j' ({\bf v}_i \otimes {\bf v}_j' +\gamma_{ij}{\bf v}_i \otimes \bone_{n'}),& \end{aligned}\]
since $ - \lambda_j' \gamma_{ij} + \lambda_i {{\bf v}_j'}^T\bone_{n'} + \gamma_{ij} \lambda_i n' + \gamma_{ij} \rho' = 0$ by the definition of $\gamma_{ij}$.
The eigenvectors are linearly independent by Lemma \ref{lem:kron-basis} and elementary linear algebra. \end{proof}
In Corollaries \ref{cor_e_vec_adjacency_lex}, \ref{cor_e_vec_distance_lex_bigcycle}, and \ref{cor_e_vec_distance_lex_doubly_directed},
Theorem \ref{prop:evectors_lex_prod_general_matrices} is applied to provide a description of the eigenvectors of the adjacency and distance matrices of the lexicographic product of two digraphs. Analogous results can be obtained for the (signless) Laplacian and for the (signless) distance Laplacian matrices with appropriate additional hypotheses by using analogous arguments.
\begin{corollary}
\label{cor_e_vec_adjacency_lex}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$, respectively, such that
$\Gamma'$ is $r'$-out-regular. Let $\{{\bf v}_1, \dots, {\bf v}_k\}$ be a linearly independent set of eigenvectors
with $\mathcal{A}(\Gamma) {\bf v}_i = \alpha_i {\bf v}_i$,
and let $\{\bone_{n'},{\bf v}_2', \dots, {\bf v}_{k'}'\}$ be a linearly independent set of eigenvectors
with $\mathcal{A}(\Gamma') {\bf v}_{j}' = \alpha_j' {\bf v}_j'.$
Then
\begin{enumerate}[(1)]
\item \label{lexevec_1_adj_lex} For $i=1, \dots, k$, \, ${\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $\mathcal{A}(\Gamma\circ\Gamma')$ corresponding to the eigenvalue $n'\alpha_i + r'$.
\item \label{lexevec_2_adj_lex} For $j=2, \dots, k'$, for $i=1, \dots, k$, define $\gamma_{ij} = \frac{-\alpha_i{{\bf v}_j'}^T\bone_{n'}}{r' + n'\alpha_i - \alpha_j'}$ when $\alpha_j' \neq n'\alpha_i + r'$. Then
${\bf v}_i \otimes {\bf v}_j' +\gamma_{ij}{\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $\mathcal{A}(\Gamma\circ\Gamma')$ for the eigenvalue $\alpha_j'$. \vspace{-5pt}
\end{enumerate}
Furthermore, the set of eigenvectors of $\mathcal{A}(\Gamma\circ\Gamma')$ described in \eqref{lexevec_1_adj_lex} and \eqref{lexevec_2_adj_lex} is linearly independent.
\end{corollary}
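\begin{example}{\rm
As an illustration of Corollary \ref{cor_e_vec_adjacency_lex} on digraphs chosen only for this purpose, let $\Gamma$ be the directed $3$-cycle with eigenvectors ${\bf v}_1=\bone_3$, ${\bf v}_2=(1,\omega,\omega^2)^T$, and ${\bf v}_3=(1,\omega^2,\omega)^T$ for $\omega$ a primitive third root of unity, and let $\Gamma'$ be the digraph on two vertices with both arcs. Taking ${\bf v}_2'=(1,-1)^T$, an eigenvector of $\mathcal A(\Gamma')$ for $\alpha_2'=-1$, we have ${{\bf v}_2'}^T\bone_2=0$, so every $\gamma_{i2}=0$ and the three vectors ${\bf v}_i\otimes(1,-1)^T$, $i=1,2,3$, are linearly independent eigenvectors of $\mathcal A(\Gamma\circ\Gamma')$ for the eigenvalue $-1$, matching $\operatorname{gmult}_{\mathcal A(\Gamma\circ\Gamma')}(-1)=3$ from Corollary \ref{thm:TRlexprod-dig_new_new}.
}\end{example}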
\begin{corollary}
\label{cor_e_vec_distance_lex_bigcycle}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$
such that $\Gamma'$ is $t'$-transmission regular and $\operatorname{diam} \Gamma' \leq \displaystyle g(\Gamma)$. Let $\{{\bf v}_1, \dots, {\bf v}_k\}$ be a linearly independent set of eigenvectors
with $\mathcal{D}(\Gamma) {\bf v}_i = \partial_i {\bf v}_i,$
and let $\{\bone_{n'},{\bf v}_2', \dots, {\bf v}_{k'}'\}$ be a linearly independent set of eigenvectors
with $\mathcal{D}(\Gamma') {\bf v}_{j}' = \partial_j' {\bf v}_j'.$
Then
\begin{enumerate}[(1)]
\item \label{lexevec_1_dist_lex_bigcycle} For $i=1, \dots, k$, \, ${\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $\mathcal{D}(\Gamma\circ\Gamma')$ corresponding to the eigenvalue $n'\partial_i + t'$.
\item \label{lexevec_2_dist_lex_bigcycle} For $j=2, \dots, k'$, \ for $i=1, \dots, k$, define $\gamma_{ij} = \frac{-\partial_i{{\bf v}_j'}^T\bone_{n'}}{t' + n'\partial_i - \partial_j'}$ when $\partial_j' \neq n'\partial_i + t'$. Then
${\bf v}_i \otimes {\bf v}_j' +\gamma_{ij}{\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $\mathcal{D}(\Gamma\circ\Gamma')$ for the eigenvalue $\partial_j'$.\vspace{-5pt}\end{enumerate}
Furthermore, the set of eigenvectors of $\mathcal{D}(\Gamma\circ\Gamma')$ described in \eqref{lexevec_1_dist_lex_bigcycle} and \eqref{lexevec_2_dist_lex_bigcycle} is linearly independent.
\end{corollary}
\begin{corollary}
\label{cor_e_vec_distance_lex_doubly_directed}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs of orders $n$ and $n'$ such that every vertex of $\Gamma$ is incident with a doubly directed arc and all vertices in $\Gamma'$ have out-degree $r'$. Let $\{{\bf v}_1, \dots, {\bf v}_k\}$ be a linearly independent set of eigenvectors
with $\mathcal{D}(\Gamma) {\bf v}_i = \partial_i {\bf v}_i$
and let $\{\bone_{n'},{\bf v}_2', \dots, {\bf v}_{k'}'\}$ be a linearly independent set of eigenvectors
with $\mathcal{A}(\Gamma') {\bf v}_{j}' = \alpha_j' {\bf v}_j'$.
Then
\begin{enumerate}[(1)]
\item \label{lexevec_1_dist_lex_doubly_directed} For $i=1, \dots, k$, \, ${\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $\mathcal{D}(\Gamma\circ\Gamma')$ corresponding to the eigenvalue $n'\partial_i + 2n'-2-r'$.
\item \label{lexevec_2_dist_lex_doubly_directed} For $j=2, \dots, k'$, \ for $i=1, \dots, k$, define $\beta_j=\frac{2{\bf v}_j'^T\bone_{n'}}{r'-\alpha'_j-2n'}$ and $\gamma_{ij} = \frac{-\partial_i({{\bf v}_j'}^T\bone_{n'}+n'\beta_j)}{2n'-r'+n'\partial_i+\alpha'_j}$ when $\alpha'_j\neq -2n'+r'-n'\partial_i$. Then
${\bf v}_i \otimes {\bf v}_j' +(\beta_j+\gamma_{ij}){\bf v}_i \otimes \bone_{n'}$ is an eigenvector of $\mathcal{D}(\Gamma\circ\Gamma')$ for the eigenvalue $-\alpha'_j-2$.\vspace{-5pt}
\end{enumerate}
Furthermore, the set of eigenvectors of $\mathcal{D}(\Gamma\circ\Gamma')$ described in \eqref{lexevec_1_dist_lex_doubly_directed} and \eqref{lexevec_2_dist_lex_doubly_directed} is linearly independent.
\end{corollary}
\section{Direct products and strong products}\label{sDirectStrongprod}
For digraphs $\Gamma$ and $\Gamma'$, $\mathcal A(\Gamma\times\Gamma')=\mathcal A(\Gamma)\otimes \mathcal A(\Gamma')$ \cite{EH80} and $\mathcal A(\Gamma\boxtimes\Gamma')=\mathcal A(\Gamma\, \Box\,\Gamma') + \mathcal A(\Gamma\times\Gamma')$; the formulas for graphs are analogous. The spectrum of the adjacency matrix of a direct product in terms of the constituents is known:
\begin{theorem}{\rm \cite{EH80}}\label{thm:direct} Let $\Gamma$ and $\Gamma'$ be digraphs of orders $n$ and $n'$, respectively, having spectra
$\operatorname{spec}_{\mathcal A}(\Gamma)=\{\alpha_1,\alpha_2,\dots,\alpha_n\}$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=\{\alpha_1',\alpha'_2,\dots,\alpha'_{n'}\}$.
Then
\[
\operatorname{spec}_{\mathcal A}(\Gamma{\times}\Gamma')=\left\{\alpha_i\alpha_j' : i=1,\dots,n, \ j=1,\dots,n' \right\}.
\]
\end{theorem}
\begin{theorem} Let $\Gamma$ and $\Gamma'$ be digraphs of orders $n$ and $n'$, with
$\operatorname{spec}_{\mathcal A}(\Gamma)=\{\alpha_1,\alpha_2,\dots,\alpha_n\}$ and $\operatorname{spec}_{\mathcal A}(\Gamma')=\{\alpha_1',\alpha'_2,\dots,\alpha'_{n'}\}$.
Then
\[
\operatorname{spec}_{\mathcal A}(\Gamma{\boxtimes}\Gamma')=\left\{\alpha_i\alpha_j'+\alpha_i+\alpha_j' : i=1,\dots,n, \ j=1,\dots,n' \right\}.
\]
\end{theorem}
\begin{proof}
Choose $C$ and $C'$ such that $C^{-1}\mathcal A(\Gamma)C=\operatorname{J}_{\mathcal A(\Gamma)}$ and $C'^{-1}\mathcal A(\Gamma')C'=\operatorname{J}_{\mathcal A(\Gamma')}$. Consider \[(C^{-1}\otimes C'^{-1})\mathcal A(\Gamma\boxtimes\Gamma')(C\otimes C')=(C^{-1}\otimes C'^{-1})\mathcal A(\Gamma\, \Box\,\Gamma')(C\otimes C')+(C^{-1}\otimes C'^{-1})\mathcal A(\Gamma\times\Gamma')(C\otimes C').\]
As in the proof of \cite[Theorem 4.4.5]{HJ2}, $(C^{-1}\otimes C'^{-1})\mathcal A(\Gamma\, \Box\,\Gamma')(C\otimes C')$ is an upper triangular matrix with diagonal entries $\left\{\alpha_i+\alpha_j' : i=1,\dots,n, \ j=1,\dots,n' \right\}.$ The proof of Theorem \ref{thm:direct}, which utilizes a result from Lancaster \cite[pp. 259--260]{L69}, shows $(C^{-1}\otimes C'^{-1})\mathcal A(\Gamma\times\Gamma')(C\otimes C')$ is an upper triangular matrix with diagonal entries $\left\{\alpha_i\alpha_j' : i=1,\dots,n, \ j=1,\dots,n' \right\}.$
Therefore $(C^{-1}\otimes C'^{-1})\mathcal A(\Gamma\boxtimes\Gamma')(C\otimes C')$ is an upper triangular matrix with diagonal entries $\left\{\alpha_i\alpha_j'+\alpha_i+\alpha_j' : i=1,\dots,n, \ j=1,\dots,n' \right\}$.
\end{proof}
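\begin{remark}{\rm
We note in passing that the eigenvalues above can be rewritten as $(\alpha_i+1)(\alpha_j'+1)-1$, reflecting the matrix identity
\[
\mathcal A(\Gamma\boxtimes\Gamma')+\I_{nn'}=(\mathcal A(\Gamma)+\I_n)\otimes(\mathcal A(\Gamma')+\I_{n'}),
\]
which follows by expanding the right-hand Kronecker product and gives an alternate route to the same spectrum.
}\end{remark}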
Since the direct product of strongly connected digraphs is not necessarily strongly connected, the distance matrix may be undefined. However, the strong product of strongly connected digraphs is strongly connected, and the following distance formula is known.
\begin{proposition}\label{diststrong} {\rm \cite[Proposition 10.2.1]{H18}}
Let $\Gamma$ and $\Gamma'$ be strongly connected digraphs. Then the distance formula for the strong product $\Gamma \boxtimes \Gamma'$ is
\[
d_{\Gamma \boxtimes \Gamma'}((x,x'),(y,y')) = \max \{d_{\Gamma} (x,y), d_{\Gamma'} (x',y')\}.
\]
\end{proposition}
Given this formula for distance, the methods developed here do not seem to be applicable to determining the spectra of distance matrices of strong products of digraphs.
\section{Directed strongly regular graphs}\label{sSRD}
In this section we discuss directed strongly regular graphs (DSRGs), a special class of digraphs all of which have diameter at most two and are {\em regular}, meaning all vertices have in-degree and out-degree equal to some common value $k$; such a digraph is also called {\em $k$-regular}. A DSRG requires additional properties, and it is noteworthy that a DSRG has exactly three distinct eigenvalues; we apply our Cartesian product formula to a DSRG to produce an infinite family of digraphs with three distinct eigenvalues.
Before defining a DSRG, we first prove a more general result about $k$-regular digraphs with diameter at most two, which is analogous to a result for graphs. Note that any such digraph of order $n$ is transmission regular with transmission $2n-2-k$, since each vertex has $k$ out-neighbors at distance one and the remaining $n-1-k$ vertices at distance two.
\begin{proposition}\label{Diam2Prop}
Let $\Gamma$ be a $k$-regular digraph of order $n$ and diameter at most $2$ with $\operatorname{spec}_\mathcal A(\Gamma)=\{k,\alpha_2,\dots,\alpha_{n}\}$. Then $\operatorname{spec}_\mathcal D(\Gamma)=\{2n-2-k,-(\alpha_2+2),\dots,-(\alpha_{n}+2)\}$, $\bone_n$ is an eigenvector of $\mathcal D(\Gamma)$ for eigenvalue $2n-2-k$, and if ${\bf v}_i$ is an eigenvector of $\mathcal A(\Gamma)$ for $\alpha_i\ne k$, then ${\bf v}_i$ is an eigenvector of $\mathcal D(\Gamma)$ for $-2-\alpha_i$. Furthermore, $\operatorname{gmult}_{\mathcal D(\Gamma)}(-\alpha_i-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(\alpha_i)$ for $\alpha_i\ne k$ and $\operatorname{gmult}_{\mathcal D(\Gamma)}(-k-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(k)-1$. \end{proposition}
\begin{proof}
Because $\mathcal D(\Gamma)=\mathcal A(\Gamma)+2\mathcal A(\overline{\Gamma})$, all the statements except the geometric multiplicity of eigenvalue $-k-2$ of $\mathcal D(\Gamma)$ will follow from Proposition \ref{o:adj-comp} once we show that $\bone^T{\bf v}_i=0$ for $\alpha_i\ne k$.
Since $\Gamma$ is $k$-regular,
$\mathcal A(\Gamma)\mathbb{J}_n=k\mathbb{J}_n=\mathbb{J}_n \mathcal A(\Gamma)$. Let $\bone_n^T{\bf v}_i=c_i$, so $\mathbb{J}_n{\bf v}_i=c_i\bone_n$. Then\vspace{-3pt}
\[c_ik\bone_n=c_i\mathcal A(\Gamma)\bone_n=\mathcal A(\Gamma)\mathbb{J}_n{\bf v}_i=\mathbb{J}_n\mathcal A(\Gamma){\bf v}_i=\mathbb{J}_n\alpha_i{\bf v}_i=c_i\alpha_i\bone_n.\vspace{-5pt}\]
Since $k\ne \alpha_i$, this implies $c_i=0$. To see that $\operatorname{gmult}_{\mathcal D(\Gamma)}(-k-2)=\operatorname{gmult}_{\mathcal A(\Gamma)}(k)-1$, choose an orthogonal basis of eigenvectors for $ES_{\mathcal A(\Gamma)}(k)$ that includes $\bone_n$.
\end{proof}
Strongly regular graphs are a well-studied family of graphs which are of particular interest because they have exactly three eigenvalues. Duval \cite{D88} defined a {\em directed strongly regular graph}, here denoted by $\Gamma(n,k,s,a,c)$, to be a digraph $\Gamma$ of order $n$ such that \vspace{-3pt}
\[\mathcal A(\Gamma)^2=s\I_n+a \mathcal A(\Gamma) + c (\mathbb{J}_n-\I_n-\mathcal A(\Gamma)) \text{ and } \mathcal A(\Gamma)\mathbb{J}_n=\mathbb{J}_n\mathcal A(\Gamma)=k\mathbb{J}_n. \vspace{-3pt}\]
Such a digraph is $k$-regular and each vertex is incident with $s$ doubly directed arcs. The number of directed paths of length two from a vertex $v$ to a vertex $u$ is $a$ if $(v,u)$ is an arc in $\Gamma$ and $c$ if $(v,u)$ is not an arc in $\Gamma$. Duval originally used the notation $\Gamma(n,k,\mu,\lambda,t)$ where $\lambda=a$, $\mu=c$, and $t=s$ in our notation. We use $s$ rather than $t$ to follow the distance matrix literature in using $t$ for transmission. Both usages $G(n,k,a,c)$ and $G(n,k,\lambda,\mu)$ appear in the literature for strongly regular graphs, and we avoid using $\lambda$ since it has been used throughout this paper as an eigenvalue. The reordering $\Gamma(n,k,t,\lambda,\mu)$ of Duval's original notation $\Gamma(n,k,\mu,\lambda,t)$ has become popular in more recent literature since it more closely follows the standard ordering for strongly regular graphs.
Duval established the following formula for the eigenvalues of $\mathcal A(\Gamma(n,k,s,a,c))$.
\begin{theorem}{\rm \cite{D88}}\label{t:DSRG-specA}
Let $\Gamma=\Gamma(n,k,s,a,c)$. The spectrum of $\mathcal A(\Gamma)$ consists of the three eigenvalues
\[\theta_1=k,\ \theta_2=\frac{1}{2}\left(a -c + \sqrt{(c-a)^2+4(s-c)} \right)\!,\mbox{ and } \theta_3=\frac{1}{2}\left( a -c - \sqrt{(c-a)^2+4(s-c)} \right)\vspace{-3pt} \]
with multiplicities \vspace{-3pt}
\[\operatorname{mult}(\theta_1)=1,\ \operatorname{mult}(\theta_2)=-\frac{k+\theta_3(n-1)}{\theta_2-\theta_3}, \text{ and } \operatorname{mult}(\theta_3)=\frac{k+\theta_2(n-1)}{\theta_2-\theta_3}.\vspace{-5pt} \]
\end{theorem}
Duval's theorem and Proposition \ref{Diam2Prop} determine the $\mathcal D$-spectrum of a directed strongly regular graph.
\begin{corollary}
Let $\Gamma=\Gamma(n,k,s,a,c)$. The spectrum of $\mathcal D(\Gamma)$ consists of the three eigenvalues
{\scriptsize\[\partial_1=2n-2-k,\, \partial_2=-2-\frac{1}{2}\left(a -c + \sqrt{(c-a)^2+4(s-c)} \right)\!,\,\mbox{and } \partial_3=-2-\frac{1}{2}\left( a -c - \sqrt{(c-a)^2+4(s-c)} \right) \]}
with multiplicities $\operatorname{mult}(\partial_i)=\operatorname{mult}(\theta_i)$ for $i=1,2,3$.
\end{corollary}
In \cite{J03}, J\o rgensen proved that the adjacency matrix of every DSRG is diagonalizable and thus has a basis of eigenvectors. By Proposition \ref{Diam2Prop}, this property is also true of the distance matrix of a DSRG. Note that this property does not hold for all transmission regular digraphs of diameter at most 2: Figure \ref{fig:TRegNoEvec} is an example of a digraph $\Gamma$ that does not have a basis of eigenvectors; note that the digraph obtained from $\Gamma$ by reversing every arc is not transmission regular, whereas reversing every arc in a DSRG produces a DSRG.
Cartesian products provide a method of forming digraphs on a large number of vertices with few distinct distance eigenvalues. Applying Theorem \ref{thm:TRcartprod-dig_new_new} to transmission regular digraphs $\Gamma$ on $n$ vertices and $\Gamma '$ on $n'$ vertices, we see that $\Gamma\, \Box\,\Gamma '$ has $nn'$ vertices but at most $n+n'$ distinct eigenvalues. The number of distinct eigenvalues can be much lower if the spectra of $\Gamma$ and $\Gamma'$ share some common values or if they contain $0$ as an eigenvalue.
\begin{proposition}\label{p:cp-DSRG} Suppose $\Gamma$ is a transmission regular digraph of order $n$ with $\operatorname{spec}_\mathcal D(\Gamma)=\{t=\partial_1, \partial_2^{(m)}, 0^{(n-1-m)}\}$.
Define $\Gamma_{\ell}=\Gamma\, \Box\,\dots\, \Box\,\Gamma$, the Cartesian product of $\ell$ copies of $\Gamma$. Then the order of $\Gamma_\ell$ is $n^\ell$ and $\operatorname{spec}_{\mathcal D}(\Gamma_{\ell})=\{\ell t\,n^{\ell-1},\left(\partial_2\,n^{\ell-1}\right)^{(m\ell)},0^{(n^\ell-1-m\ell)}\}$.
\end{proposition}
\begin{proof}
We prove the claim by induction. When $\ell=2$, Theorem \ref{thm:TRcartprod-dig_new_new} implies $\operatorname{spec}_{\mathcal D}(\Gamma_{2})=\{2nt,\left(\partial_2\,n\right)^{(2m)},$ $0^{(n^2-1-2m)}\}$.
Now assume $\operatorname{spec}_{\mathcal D}(\Gamma_{\ell})=\{\ell t\,n^{\ell-1},\left(\partial_2\,n^{\ell-1}\right)^{(m\ell)},0^{(n^\ell-1-m\ell)}\}$. Since $\Gamma_{\ell+1}=\Gamma_\ell\, \Box\, \Gamma$, applying Theorem \ref{thm:TRcartprod-dig_new_new} again we get
\[\begin{aligned}
\operatorname{spec}_{\mathcal D}(\Gamma_{\ell+1})&=\{n\,\ell tn^{\ell-1}+n^{\ell}t, \left(n\partial_2\,n^{\ell-1}\right)^{(m\ell)},0^{(n^\ell-1-m\ell)},\left(n^\ell\partial_2\right)^{(m)},0^{(n-1-m)},0^{((n^\ell-1)(n-1))}\}\\
&=\{t(\ell+1)n^{\ell}, \left(\partial_2\,n^{\ell}\right)^{(m(\ell+1))},0^{\left(n^{\ell+1}-1-m(\ell+1)\right)}\}.
\end{aligned}\]
\end{proof}
\begin{example}{\rm
The DSRG $\Gamma=\Gamma(8,4,3,1,3)$, shown in Figure \ref{fig:DSRG8433}, has spectrum $\operatorname{spec}_{\mathcal D}(\Gamma)=\{10,-2^{(5)},0^{(2)}\}$. Therefore this digraph allows us to construct examples of arbitrarily large digraphs with only three distinct distance eigenvalues. By Proposition \ref{p:cp-DSRG}, $\Gamma_\ell$ has order $8^\ell$ and $\operatorname{spec}_{\mathcal D}(\Gamma_{\ell})=\{10\ell(8^{\ell-1}),\left(-2(8^{\ell-1})\right)^{(5\ell)},0^{(8^{\ell}-1-5\ell)}\}$.}
\end{example}
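As a check, the spectrum in the preceding example can be recovered from Theorem \ref{t:DSRG-specA} and Proposition \ref{Diam2Prop}: with $n=8$, $k=4$, $s=3$, $a=1$, and $c=3$,
\[
\theta_2,\theta_3=\frac{1}{2}\left(1-3\pm\sqrt{(3-1)^2+4(3-3)}\right)=\frac{1}{2}\left(-2\pm 2\right),
\]
so $\theta_2=0$ and $\theta_3=-2$, with $\operatorname{mult}(\theta_2)=-\frac{4+(-2)(7)}{0-(-2)}=5$ and $\operatorname{mult}(\theta_3)=\frac{4+0\cdot 7}{0-(-2)}=2$. Proposition \ref{Diam2Prop} then gives $\operatorname{spec}_{\mathcal D}(\Gamma)=\{2(8)-2-4,\,(-2-0)^{(5)},\,(-2-(-2))^{(2)}\}=\{10,-2^{(5)},0^{(2)}\}$.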
\begin{figure}[h!]
\centering
\scalebox{.7}{\begin{tikzpicture}
% All vertices and edges are drawn in black; the per-element color
% definitions of the generated source have been collapsed.
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$1$},x=3cm,y=4.5cm]{v0}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$2$},x=4.5cm,y=3cm]{v1}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$3$},x=4.5cm,y=1.5cm]{v2}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$4$},x=3cm,y=0cm]{v3}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$5$},x=1.5cm,y=0cm]{v4}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$6$},x=0cm,y=1.5cm]{v5}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$7$},x=0cm,y=3cm]{v6}
\Vertex[style={minimum size=1.0cm,draw=black,fill=white,text=black,shape=circle},LabelOut=false,L=\hbox{$8$},x=1.5cm,y=4.5cm]{v7}
% Single arcs carry an arrowhead (post); doubly directed arcs are drawn
% as plain edges.
\Edge[lw=0.1cm,style={post,color=black}](v0)(v1)
\Edge[lw=0.1cm,style={color=black}](v0)(v2)
\Edge[lw=0.1cm,style={color=black}](v0)(v5)
\Edge[lw=0.1cm,style={color=black}](v0)(v6)
\Edge[lw=0.1cm,style={post,color=black}](v1)(v2)
\Edge[lw=0.1cm,style={color=black}](v1)(v3)
\Edge[lw=0.1cm,style={color=black}](v1)(v4)
\Edge[lw=0.1cm,style={color=black}](v1)(v5)
\Edge[lw=0.1cm,style={post,color=black}](v2)(v3)
\Edge[lw=0.1cm,style={color=black}](v2)(v4)
\Edge[lw=0.1cm,style={color=black}](v2)(v7)
\Edge[lw=0.1cm,style={post,color=black}](v3)(v0)
\Edge[lw=0.1cm,style={color=black}](v3)(v6)
\Edge[lw=0.1cm,style={color=black}](v3)(v7)
\Edge[lw=0.1cm,style={post,color=black}](v4)(v5)
\Edge[lw=0.1cm,style={color=black}](v4)(v6)
\Edge[lw=0.1cm,style={post,color=black}](v5)(v6)
\Edge[lw=0.1cm,style={color=black}](v5)(v7)
\Edge[lw=0.1cm,style={post,color=black}](v6)(v7)
\Edge[lw=0.1cm,style={post,color=black}](v7)(v4)
\end{tikzpicture}}
\caption{The DSRG $\Gamma(8,4,3,1,3)$; plain edges represent doubly directed arcs, and arrows indicate single arcs.}
\label{fig:DSRG8433}
\end{figure}
Because directed strongly regular graphs are transmission regular, the ${\D^L}$ and ${\D^Q}$ eigenvalues of directed strongly regular graphs are immediate.
\begin{corollary}
Let $\Gamma=\Gamma(n,k,s,a,c)$. The spectrum of ${\D^L}(\Gamma)$ consists of the three eigenvalues
{\scriptsize \[\partial^L_1=0,\ \partial^L_2=2n-k+\frac{1}{2}\left( a -c + \sqrt{(c-a)^2+4(s-c)} \right)\!, \mbox{ and }\partial^L_3=2n-k+\frac{1}{2}\left( a -c - \sqrt{(c-a)^2+4(s-c)} \right) \]}
with multiplicities $\operatorname{mult}(\partial^L_i)=\operatorname{mult}(\theta_i)$ for $i=1,2,3$.
The spectrum of ${\D^Q}(\Gamma)$ consists of the three eigenvalues
{\scriptsize\[ \partial^Q_1=4n-4-2k,\ \partial^Q_2=2n-k-4-\frac{1}{2}\left( a -c + \sqrt{(c-a)^2+4(s-c)} \right)\!, \mbox{ and }\partial^Q_3=2n-k-4-\frac{1}{2}\left( a -c - \sqrt{(c-a)^2+4(s-c)} \right) \]}
with multiplicities $\operatorname{mult}(\partial^Q_i)=\operatorname{mult}(\theta_i)$ for $i=1,2,3$.
\end{corollary}
Because directed strongly regular graphs are out-regular, the Laplacian and signless Laplacian eigenvalues of directed strongly regular graphs are also immediate from Theorem \ref{t:DSRG-specA}.
While the eigenvalues for $\mathcal A(\Gamma)$, $L(\Gamma)$, $Q(\Gamma)$, $\mathcal D(\Gamma)$, ${\D^L}(\Gamma)$, and ${\D^Q}(\Gamma)$ can be non-real, this is not true for most DSRGs. For a DSRG that is not equivalent to a graph and is not a doubly regular tournament $\Gamma(2k+1,k,0,a,a+1)$, Duval proved $(c-a)^2+4(s-c)=d^2$ for some positive integer $d$, which implies all eigenvalues of $\mathcal A(\Gamma)$, $L(\Gamma)$, $Q(\Gamma)$, $\mathcal D(\Gamma)$, ${\D^L}(\Gamma)$, and ${\D^Q}(\Gamma)$ are rational. In the case of graphs, it is well known that these spectra are real. Before we consider the only remaining case, we need the following lemma from Klin et al.
\begin{lemma}{\rm \cite{KMMZ04}}
Let $\Gamma$ be a regular non-empty digraph without doubly directed arcs. Then $\mathcal A(\Gamma)$ has at least one non-real eigenvalue.
\end{lemma}
Applying the previous lemma, we obtain the following characterization of the DSRGs whose spectra contain non-real eigenvalues.
\begin{corollary}
For the DSRG $\Gamma=\Gamma(n,k,s,a,c)$, the spectra of $\mathcal A(\Gamma)$, $L(\Gamma)$, $Q(\Gamma)$, $\mathcal D(\Gamma)$, ${\D^L}(\Gamma)$, and ${\D^Q}(\Gamma)$ contain non-real eigenvalues if and only if $\Gamma= \Gamma(2k+1,k,0,a,a+1)$.
\end{corollary}
\bigskip
{\bf Acknowledgment.} The research of Minerva Catral was supported by a Faculty Development Leave from Xavier University. The research of Carolyn Reinhart was supported by NSF DMS 1839918.
\section{Introduction}
The sparsity of a network representation is both a strength and a weakness.
Sparsity enables the design of efficient discrete algorithms, but can make it harder to generalize in statistical learning.
Machine learning applications in networks (such as network classification \cite{getoor2007introduction,sen2008collective}, content recommendation \cite{fouss2007random}, anomaly detection \cite{chandola2009anomaly}, and missing link prediction \cite{liben2007link}) must be able to deal with this sparsity in order to survive.
In this paper we introduce \emph{deep learning} (unsupervised feature learning) \cite{deepfuture} techniques, which have proven successful in natural language processing, into network analysis for the first time.
We develop an algorithm (\textsc{DeepWalk}) that learns \emph{social representations} of a graph's vertices, by modeling a stream of short random walks.
Social representations are latent features of the vertices that capture neighborhood similarity and community membership.
These latent representations encode social relations in a continuous vector space with a relatively small number of dimensions.
\textsc{DeepWalk}\ generalizes neural language models to process a special language composed of a set of randomly-generated walks.
These neural language models have been used to capture the semantic and syntactic structure of human language\cite{senna1}, and even logical analogies \cite{regularities}.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.48\columnwidth}
\includegraphics[width=\columnwidth]{figures/karate_graph.pdf}
\caption{Input: Karate Graph}
\label{fig:toy_example_graph}
\end{subfigure}
\begin{subfigure}[b]{0.48\columnwidth}
\includegraphics[width=\columnwidth]{figures/karate.pdf}
\caption{Output: Representation}
\label{fig:toy_example_embedding}
\end{subfigure}
\caption{Our proposed method \emph{learns} a latent space representation of social interactions in $\mathbb{R}^d$. The learned representation encodes community structure so it can be easily exploited by standard classification methods. Here, our method is used on Zachary's Karate network \cite{zachary1977information} to generate a latent representation in $\mathbb{R}^2$.
Note the correspondence between community structure in the input graph and the embedding. Vertex colors represent a modularity-based clustering of the input graph.
}
\label{fig:toy_example}
\end{figure}
\textsc{DeepWalk}\ takes a graph as input and produces a latent representation as an output.
The result of applying our method to the well-studied Karate network is shown in Figure \ref{fig:toy_example}.
The graph, as typically presented by force-directed layouts, is shown in Figure \ref{fig:toy_example_graph}.
Figure \ref{fig:toy_example_embedding} shows the output of our method with 2 latent dimensions.
Beyond the striking similarity, we note that linearly separable portions of (\ref{fig:toy_example_embedding}) correspond to clusters found through modularity maximization in the input graph (\ref{fig:toy_example_graph}) (shown as vertex colors).
To demonstrate \textsc{DeepWalk}'s potential in real world scenarios, we evaluate its performance on challenging multi-label network classification problems in large heterogeneous graphs.
In the relational classification problem, the links between feature vectors violate the traditional \emph{i.i.d.} assumption.
Techniques to address this problem typically use approximate inference techniques \cite{neville2000iterative,Pearl:1988:PRI:534975} to leverage the dependency information to improve classification results.
We distance ourselves from these approaches by learning label-independent representations of the graph.
Our representation quality is not influenced by the choice of labeled vertices, so they can be shared among tasks.
\textsc{DeepWalk}\ outperforms other latent representation methods for creating \emph{social dimensions} \cite{Tang:2009:RLV:1557019.1557109,Tang:2011:Leveraging}, especially when labeled nodes are scarce.
Strong performance with our representations is possible with very simple linear classifiers (e.g. logistic regression).
Our representations are general, and can be combined with any classification method (including iterative inference methods).
\textsc{DeepWalk}\ achieves all of that while being an online algorithm that is trivially parallelizable.
Our contributions are as follows:
\begin{itemize}
\item We introduce deep learning as a tool to analyze graphs, to build robust representations that are suitable for statistical modeling. \textsc{DeepWalk}\ learns structural regularities present within short random walks.
\item We extensively evaluate our representations on multi-label classification tasks on several social networks.
We show significantly increased classification performance in the presence of label sparsity, with gains of 5\%--10\% in Micro-$F_1$ on the sparsest problems we consider.
In some cases, \textsc{DeepWalk}'s representations can outperform its competitors even when given 60\% less training data.
\item We demonstrate the scalability of our algorithm by building representations of web-scale graphs, (such as YouTube) using a parallel implementation.
Moreover, we describe the minimal changes necessary to build a streaming version of our approach.
\end{itemize}
The rest of the paper is arranged as follows. In Sections \ref{sec:problem} and \ref{sec:SRL}, we discuss the problem formulation of classification in data networks, and how it relates to our work. In Section \ref{sec:method} we present \textsc{DeepWalk}, our approach for Social Representation Learning. We outline our experiments in Section \ref{sec:experimental_design}, and present their results in Section \ref{sec:experiments}. We close with a discussion of related work in Section \ref{sec:related}, and our conclusions.
\section{Problem Definition}
\label{sec:problem}
We consider the problem of classifying members of a social network into one or more categories.
More formally, let $G=(V, E)$, where $V$ is the set of members of the network and $E \subseteq (V \times V)$ is the set of edges.
Given a partially labeled social network $G_L = (V,E,X,Y)$, with attributes $X \in \mathbb{R}^{|V|\times S}$, where $S$ is the size of the feature space for each attribute vector, and labels $Y \in \mathbb{R}^{|V|\times |\mathcal{Y}|}$, where $\mathcal{Y}$ is the set of labels, the task is to predict the labels of the unlabeled members.
In a traditional machine learning classification setting, we aim to learn a hypothesis $H$ that maps elements of $X$ to the label set $\mathcal{Y}$.
In our case, we can utilize the significant information about the dependence of the examples embedded in the structure of $G$ to achieve superior performance.
In the literature, this is known as the relational classification (or the \emph{collective classification} problem \cite{sen2008collective}).
Traditional approaches to relational classification pose the problem as an inference in an undirected Markov network, and then use iterative approximate inference algorithms (such as the iterative classification algorithm \cite{neville2000iterative}, Gibbs Sampling \cite{geman1984stochastic}, or label relaxation \cite{hummel1983foundations}) to compute the posterior distribution of labels given the network structure.
We propose a different approach to capture the network topology information.
Instead of mixing the label space as part of the feature space, we propose an unsupervised method which learns features that capture the graph structure \emph{independent} of the labels' distribution.
This separation between the structural representation and the labeling task avoids cascading errors, which can occur in iterative methods \cite{neville2008bias}.
Moreover, the same representation can be used for multiple classification problems concerning that network.
Our goal is to learn $X_E \in \mathbb{R}^{|V|\times d}$, where $d$ is a small number of latent dimensions.
These low-dimensional representations are distributed, meaning each social phenomenon is expressed by a subset of the dimensions and each dimension contributes to a subset of the social concepts expressed by the space.
Using these structural features, we will augment the attributes space to help the classification decision.
These features are general, and can be used with any classification algorithm (including iterative methods).
However, we believe that the greatest utility of these features is their easy integration with simple machine learning algorithms. They scale appropriately in real-world networks, as we will show in Section \ref{sec:experiments}.
\section{Learning Social Representations}
\label{sec:SRL}
We seek to learn social representations with the following characteristics:
\begin{itemize}
\item \textbf{Adaptability} - Real social networks are constantly evolving; new social relations should not require repeating the learning process all over again.
\item \textbf{Community aware} - The distance between latent dimensions should represent a metric for evaluating social similarity between the corresponding members of the network.
This allows generalization in networks with homophily.
\item \textbf{Low dimensional} - When labeled data is scarce, low-dimensional models generalize better, and speed up convergence and inference.
\item \textbf{Continuous} -
We require latent representations to model partial community membership in continuous space.
In addition to providing a nuanced view of community membership, a continuous representation has smooth decision boundaries between communities which allows more robust classification.
\end{itemize}
Our method for satisfying these requirements learns representation for vertices from a stream of short random walks, using optimization techniques originally designed for language modeling.
Here, we review the basics of both random walks and language modeling, and describe how their combination satisfies our requirements.
\subsection{Random Walks}
We denote a random walk rooted at vertex $v_i$ as $\mathcal{W}_{v_i}$.
It is a stochastic process with random variables $\mathcal{W}^1_{v_i},\mathcal{W}^2_{v_i},\dots{},\mathcal{W}^k_{v_i}$ such that $\mathcal{W}^{k+1}_{v_i}$ is a vertex chosen uniformly at random from the neighbors of $\mathcal{W}^k_{v_i}$.
Random walks have been used as a similarity measure for a variety of problems in content recommendation \cite{fouss2007random} and community detection \cite{andersen2006local}.
They are also the foundation of a class of \emph{output sensitive} algorithms which use them to compute local community structure information in time sublinear to the size of the input graph \cite{spielman2004nearly}.
It is this connection to local structure that motivates us to use a \emph{stream} of short random walks as our basic tool for extracting information from a network.
In addition to capturing community information, using random walks as the basis for our algorithm gives us two other desirable properties. First,
local exploration is easy to parallelize. Several random walkers (in different threads, processes, or machines) can simultaneously explore different parts of the same graph.
Secondly, relying on information obtained from short random walks makes it possible to accommodate small changes in the graph structure without the need for global recomputation.
We can iteratively update the learned model with new random walks from the changed region in time sub-linear to the entire graph.
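To make this primitive concrete, the following sketch (not part of the formal algorithm statement) samples one truncated walk; representing the graph as a Python dictionary mapping each vertex to its list of neighbors is an assumption of the sketch.
\begin{verbatim}
import random

def random_walk(G, start, t):
    # G: dict mapping each vertex to a list of its neighbors (assumed input).
    # Returns a walk of at most t vertices rooted at `start`.
    walk = [start]
    while len(walk) < t:
        neighbors = G[walk[-1]]
        if not neighbors:              # sink vertex: end the walk early
            break
        walk.append(random.choice(neighbors))
    return walk
\end{verbatim}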
\subsection{Connection: Power laws}
Having chosen online random walks as our primitive for capturing graph structure, we now need a suitable method to capture this information.
If the degree distribution of a connected graph follows a power law (is \emph{scale-free}), we observe that the frequency with which vertices appear in the short random walks will also follow a power-law distribution.
Word frequency in natural language follows a similar distribution, and techniques from language modeling account for this distributional behavior.
To emphasize this similarity we show two different power-law distributions in Figure \ref{fig:power_law}.
The first comes from a series of short random walks on a scale-free graph, and the second comes from the text of 100,000 articles from the English Wikipedia.
A core contribution of our work is the idea that techniques which have been used to model natural language (where the symbol frequency follows a power law distribution (or \emph{Zipf's law})) can be re-purposed to model community structure in networks.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\columnwidth}
\includegraphics[width=\columnwidth]{figures/powerlaw/youtube-powerlaw}
\caption{YouTube Social Graph}
\label{fig:powerlaw-youtube}
\end{subfigure}
\begin{subfigure}[b]{0.48\columnwidth}
\includegraphics[width=\columnwidth]{figures/powerlaw/wiki-powerlaw}
\caption{Wikipedia Article Text}
\label{fig:powerlaw-wiki}
\end{subfigure}
\caption{The distribution of vertices appearing in short random walks (\ref{fig:powerlaw-youtube}) follows a power law, much like the distribution of words in natural language (\ref{fig:powerlaw-wiki}).
}
\label{fig:power_law}
\end{figure}
We spend the rest of this section reviewing the growing work in language modeling, and transforming it to learn representations of vertices which satisfy our criteria.
\subsection{Language Modeling}
The goal of language modeling is to estimate the likelihood of a specific sequence of words appearing in a corpus.
More formally, given a sequence of words $$W_{1}^{n} = (w_0, w_1, \cdots, w_n),$$ where $w_i \in \mathcal{V}$ ($\mathcal{V}$ is the vocabulary), we would like to maximize $\Pr(w_n\mid w_0, w_1, \cdots, w_{n-1})$ over the entire training corpus.
Recent work in representation learning has focused on using probabilistic neural networks to build general representations of words which extend the scope of language modeling beyond its original goals.
In this work, we present a generalization of language modeling to explore the graph through a stream of short random walks.
These walks can be thought of as short sentences and phrases in a special language.
The direct analog is to estimate the likelihood of observing vertex $v_i$ given all the previous vertices visited so far in the random walk.
$$\Pr\big(v_{i}\mid( v_1, v_2, \cdots, v_{i-1})\big)$$
Our goal is to learn a latent representation, not only a probability distribution of node co-occurrences, and so we introduce a mapping function $\Phi \colon v \in V \mapsto \mathbb{R}^{|V|\times d}$.
This mapping $\Phi$ represents the latent social representation associated with each vertex $v$ in the graph.
(In practice, we represent $\Phi$ by a $|V| \times d$ matrix of free parameters, which will serve later on as our $X_E$.)
The problem then, is to estimate the likelihood:
\begin{equation}
\Pr\Big(v_i \mid \big(\Phi(v_1), \Phi(v_2), \cdots, \Phi(v_{i-1})\big)\Big)
\end{equation}
However, as the walk length grows, computing this objective function becomes infeasible.
A recent relaxation in language modeling \cite{word2vec1,word2vec2} turns the prediction problem on its head.
First, instead of using the context to predict a missing word, it uses one word to predict the context.
Secondly, the context is composed of the words appearing to the right of the given word as well as those to the left.
Finally, it removes the ordering constraint on the problem.
Instead, the model is required to maximize the probability of any word appearing in the context without the knowledge of its offset from the given word.
In terms of vertex representation modeling, this yields the optimization problem:
\begin{equation}
\begin{aligned}
& \underset{\Phi}{\text{minimize}}
& & -\log \Pr\big(\{v_{i-w}, \cdots, v_{i-1}, v_{i+1}, \cdots, v_{i+w}\}\mid \Phi(v_i) \big) \\
\end{aligned}
\label{eq:objective}
\end{equation}
We find these relaxations are particularly desirable for social representation learning.
First, the order independence assumption better captures a sense of `nearness' that is provided by random walks.
Moreover, this relaxation is quite useful for speeding up the training time by building small models as one vertex is given at a time.
Solving the optimization problem from Eq. \ref{eq:objective} builds representations that capture the shared similarities in local graph structure between vertices.
Vertices which have similar neighborhoods will acquire similar representations (encoding co-citation similarity), allowing generalization on machine learning tasks.
By combining both truncated random walks and neural language models we formulate a method which satisfies all of our desired properties.
This method generates representations of social networks that are low-dimensional, and exist in a continuous vector space.
Its representations encode latent forms of community membership, and because the method outputs useful intermediate representations, it can adapt to changing network topology.
\section{Method}
\label{sec:method}
\begin{figure*}
\begin{subfigure}[b]{0.30\textwidth}
\adjustbox{trim={0.0\width} {.0\height} {0.7\width} {.05\height},clip}{\includegraphics{figures/deepwalk_overview}}
\caption{Random walk generation.}
\label{fig:graph}
\end{subfigure}
\begin{subfigure}[b]{0.30\textwidth}
\adjustbox{trim={.3\width} {.15\height} {0.4\width} {.05\height},clip}{\includegraphics{figures/deepwalk_overview}}
\caption{Representation mapping.}
\label{fig:phi}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\adjustbox{trim={.605\width} {.1\height} {0.0\width} {.04\height},clip}{\includegraphics{figures/deepwalk_overview}}
\caption{Hierarchical Softmax.}
\label{fig:hsm}
\end{subfigure}
\caption{Overview of \textsc{DeepWalk}.
We slide a window of length $2w+1$ over the random walk $\mathcal{W}_{v_4}$, mapping the central vertex $v_1$ to its representation $\Phi(v_1)$.
Hierarchical Softmax factors out $\Pr(v_3 \mid \Phi(v_1))$ and $\Pr(v_5 \mid \Phi(v_1))$ over sequences of probability distributions corresponding to the paths starting at the root and ending at $v_3$ and $v_5$.
The representation $\Phi$ is updated to maximize the probability of $v_1$ co-occurring with its context $\{v_3, v_5\}$.
}
\end{figure*}
In this section we discuss the main components of our algorithm.
We also present several variants of our approach and discuss their merits.
\subsection{Overview}
As in any language modeling algorithm, the only required inputs are a corpus and a vocabulary $\mathcal{V}$.
\textsc{DeepWalk}\ considers a set of short truncated random walks its own corpus, and the graph vertices as its own vocabulary ($\mathcal{V} = V$).
While it is beneficial to know $V$ and the frequency distribution of vertices in the random walks ahead of training, it is not necessary for the algorithm to work, as we will show in Section \ref{sec:hsm}.
\subsection{Algorithm: {\large \textsc{DeepWalk}}}
The algorithm consists of two main components: first, a random walk generator, and second, an update procedure.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE graph $G(V,E)$\\ window size $w$\\ embedding size $d$\\ walks per vertex $\mathcal{\gamma}$ \\ walk length $t$
\ENSURE matrix of vertex representations $\Phi \in \mathbb{R}^{|V| \times d}$
\STATE Initialization: Sample $\Phi$ from $\mathcal{U}^{|V| \times d}$
\STATE Build a binary tree $T$ from $V$
\FOR{$i=0$ to $\mathcal{\gamma}$}
\STATE $\mathcal{O} = \text{Shuffle}(V)$
\FOR{\textbf{each} $v_i \in \mathcal{O}$}
\STATE $\mathcal{W}_{v_i} = RandomWalk(G, v_i, t)$
\STATE SkipGram($\Phi$, $\mathcal{W}_{v_i}$, $w$)
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{\textsc{DeepWalk}($G$, $w$, $d$, $\gamma$, $t$)}
\label{alg:deepwalk}
\end{algorithm}
The random walk generator takes a graph $G$ and samples uniformly a random vertex $v_i$ as the root of the random walk $\mathcal{W}_{v_i}$.
A walk samples uniformly from the neighbors of the last vertex visited until the maximum length ($t$) is reached.
While we set the length of our random walks in the experiments to be fixed, there is no restriction for the random walks to be of the same length.
These walks could have restarts (i.e. a teleport probability of returning back to their root), but
our preliminary results did not show any advantage of using restarts.
In practice, our implementation specifies a number of random walks $\mathcal{\gamma}$ of length $t$ to start at each vertex.
Lines 3-9 in Algorithm \ref{alg:deepwalk} show the core of our approach.
The outer loop specifies the number of times, $\mathcal{\gamma}$, which we should start random walks at each vertex.
We think of each iteration as making a `pass' over the data and sample one walk per node during this pass.
At the start of each pass we generate a random ordering to traverse the vertices.
This is not strictly required, but is well-known to speed up the convergence of stochastic gradient descent.
In the inner loop, we iterate over all the vertices of the graph.
For each vertex $v_i$ we generate a random walk $|\mathcal{W}_{v_i}| = t$, and then use it to update our representations (Line 7).
We use the SkipGram algorithm \cite{word2vec1} to update these representations in accordance with our objective function in Eq. \ref{eq:objective}.
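A minimal Python sketch of lines 3-9, assuming the \texttt{random\_walk} helper above and a \texttt{skipgram} update routine in the spirit of Algorithm \ref{alg:skipgram} (sketched after Algorithm \ref{alg:skipgram}):
\begin{verbatim}
import random

def deepwalk(G, w, d, gamma, t):
    # Initialize Phi with small uniform random values (one vector per vertex).
    phi = {v: [random.uniform(-0.5, 0.5) / d for _ in range(d)] for v in G}
    for _ in range(gamma):                 # gamma passes over the data
        order = list(G)
        random.shuffle(order)              # random order speeds SGD convergence
        for v in order:
            walk = random_walk(G, v, t)    # one walk per vertex per pass
            skipgram(phi, walk, w)         # update representations
    return phi
\end{verbatim}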
\subsubsection{SkipGram}
SkipGram is a language model that maximizes the co-occurrence probability among the words that appear within a window, $w$, in a sentence \cite{word2vec1}.
Algorithm \ref{alg:skipgram} iterates over all possible collocations in a random walk that appear within the window $w$ (lines 1-2).
For each, we map each vertex $v_j$ to its current representation vector $\Phi(v_j) \in \mathbb{R}^d$ (See Figure \ref{fig:phi}).
Given the representation of $v_j$, we would like to maximize the probability of its neighbors in the walk (line 3).
We can learn such a posterior distribution using several choices of classifiers.
For example, modeling the previous problem using logistic regression would result in a huge number of labels, equal to $|V|$, which could be in the millions or billions.
Such models require large amounts of computational resources that could span a whole cluster of computers \cite{nnlm}.
To speed up the training time, Hierarchical Softmax \cite{hsm1,hsm2} can be used to approximate the probability distribution.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\FOR{\textbf{each} $v_j \in \mathcal{W}_{v_i}$}
\FOR{\textbf{each} $u_k \in \mathcal{W}_{v_i}[j-w: j+w]$}
\STATE $J(\Phi) = - \log{\Pr(u_k \mid \Phi(v_j))}$
\STATE $\Phi = \Phi - \alpha * \frac{\partial J}{\partial \Phi}$
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{SkipGram($\Phi$, $\mathcal{W}_{v_i}$, $w$)}
\label{alg:skipgram}
\end{algorithm}
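The windowing logic of Algorithm \ref{alg:skipgram} can be sketched in Python as follows; the gradient step on $J(\Phi)$ is delegated to \texttt{hs\_step}, a hypothetical helper, since it depends on the hierarchical-softmax parameters described next.
\begin{verbatim}
def skipgram(phi, walk, w, alpha=0.025):
    for j, v in enumerate(walk):
        lo, hi = max(0, j - w), min(len(walk), j + w + 1)
        for k in range(lo, hi):            # context window around position j
            if k == j:
                continue                   # skip the center position itself
            # One SGD step on J = -log Pr(walk[k] | phi[v]).
            hs_step(phi, v, walk[k], alpha)   # hypothetical gradient helper
\end{verbatim}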
\subsubsection{Hierarchical Softmax}
\label{sec:hsm}
Given that $u_k \in V$, calculating $\Pr(u_k \mid \Phi(v_j))$ in line 3 is not feasible.
Computing the partition function (normalization factor) is expensive.
If we assign the vertices to the leaves of a binary tree, the prediction problem turns into maximizing the probability of a specific path in the tree (See Figure \ref{fig:hsm}).
If the path to vertex $u_k$ is identified by a sequence of tree nodes $(b_0, b_1, \dots, b_{\ceil{\log|V|}})$, ($b_0$ = root, $b_{\ceil{\log|V|}} = u_k$) then $$\Pr(u_k \mid \Phi(v_j)) = \prod_{l=1}^{\ceil{\log|V|}} \Pr(b_l \mid \Phi(v_j)) $$
Now, $\Pr(b_l \mid \Phi(v_j))$ could be modeled by a binary classifier that is assigned to the parent of the node $b_l$.
This reduces the computational complexity of calculating $\Pr(u_k \mid \Phi(v_j))$ from $O(|V|)$ to $O(\log |V|)$.
We can speed up the training process further by assigning shorter paths to the frequent vertices in the random walks.
Huffman coding is used to reduce the access time of frequent elements in the tree.
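As a sketch, the factored probability can be evaluated as a product of logistic decisions along the root-to-leaf path; encoding the path as a list of (parameter vector, branch direction) pairs is an assumption here, since the text only requires some binary classifier at each internal node.
\begin{verbatim}
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hs_prob(phi_v, path):
    # path: one (psi, branch) pair per internal tree node on the path to u_k;
    # psi is that node's parameter vector, branch is +1 or -1 for the
    # direction taken (an assumed encoding).
    p = 1.0
    for psi, branch in path:
        score = sum(a * b for a, b in zip(phi_v, psi))
        p *= sigmoid(branch * score)   # Pr(b_l | Phi(v_j)) as a logistic unit
    return p
\end{verbatim}
Since the path has $\ceil{\log|V|}$ internal nodes, each evaluation (and each gradient step) touches only $O(\log |V|)$ parameters.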
\subsubsection{Optimization}
The model parameter set is $\{\Phi, T\}$ where the size of each is $O(d|V|)$.
Stochastic gradient descent (SGD) \cite{sgd} is used to optimize these parameters (Line 4, Algorithm \ref{alg:skipgram}).
The derivatives are estimated using the back-propagation algorithm.
The learning rate $\alpha$ for SGD is set to 2.5\% at the beginning of training and then decreased linearly with the number of vertices seen so far.
\subsection{Parallelizability}
As shown in Figure \ref{fig:power_law}, the frequency distributions of vertices in random walks on a social network and of words in a language both follow a power law.
This results in a long tail of infrequent vertices; therefore, the updates that affect $\Phi$ will be sparse in nature.
This allows us to use an asynchronous version of stochastic gradient descent (ASGD) in the multi-worker case.
Given that our updates are sparse and we do not acquire a lock to access the model shared parameters, ASGD will achieve an optimal rate of convergence \cite{hogwild}.
While we run experiments on one machine using multiple threads, it has been demonstrated that this technique is highly scalable, and can be used in very large scale machine learning \cite{largedeep}.
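The structure of such lock-free training can be sketched as follows; this is illustrative only, since CPython threads do not run update kernels in parallel unless, as in optimized implementations, the inner loops release the interpreter lock.
\begin{verbatim}
import threading

def train_worker(phi, walks_chunk, w):
    # Hogwild-style: workers update the shared phi without locking;
    # sparse updates make write conflicts rare.
    for walk in walks_chunk:
        skipgram(phi, walk, w)

def parallel_train(phi, walks, w, workers=8):
    chunks = [walks[i::workers] for i in range(workers)]
    threads = [threading.Thread(target=train_worker, args=(phi, c, w))
               for c in chunks]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
\end{verbatim}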
Figure \ref{fig:parallel} presents the effects of parallelizing \textsc{DeepWalk}. It shows the speed up in processing \textsc{BlogCatalog}\ and \textsc{Flickr}\ networks is consistent as we increase the number of workers to 8 (Figure \ref{fig:parallel_speed}).
It also shows that there is no loss of predictive performance relative to running \textsc{DeepWalk}\ serially (Figure \ref{fig:parallel_performance}).
\subsection{Algorithm Variants}
Here we discuss some variants of our proposed method, which we believe may be of interest.
\subsubsection{Streaming}
\label{variant_streaming}
One interesting variant of this method is a \emph{streaming} approach, which could be implemented without knowledge of the entire graph.
In this variant small walks from the graph are passed directly to the representation learning code, and the model is updated directly.
Some modifications to the learning process will also be necessary.
First, using a decaying learning rate will no longer be possible. Instead, we can initialize the learning rate $\alpha$ to a small constant value.
This will take longer to learn, but may be worth it in some applications.
Second, we cannot necessarily build a tree of parameters any more.
If the cardinality of $V$ is known (or can be bounded), we can build the Hierarchical Softmax tree for that maximum value.
Vertices can be assigned to one of the remaining leaves when they are first seen.
If we have the ability to estimate the vertex frequency a priori, we can also still use Huffman coding to decrease frequent element access times.
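A minimal sketch of the streaming variant, assuming a constant learning rate and an iterator of incoming walks; the initializer \texttt{new\_vector} is hypothetical.
\begin{verbatim}
def streaming_deepwalk(walk_stream, phi, w, d, alpha=0.01):
    for walk in walk_stream:              # walks arrive online; no global graph
        for v in walk:
            if v not in phi:              # first sighting of a vertex
                phi[v] = new_vector(d)    # hypothetical random initializer
        skipgram(phi, walk, w, alpha)     # constant alpha, per the text
    return phi
\end{verbatim}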
\subsubsection{Non-random walks}
Some graphs are created as a by-product of agents interacting with a sequence of elements (e.g. users' navigation of pages on a website).
When a graph is created by such a stream of \emph{non-random} walks, we can use this process to feed the modeling phase directly.
Graphs sampled in this way will not only capture information related to network structure, but also to the frequency at which paths are traversed.
In our view, this variant also encompasses language modeling. Sentences can be viewed as purposed walks through an appropriately designed language network, and
language models like SkipGram are designed to capture this behavior.
This approach can be combined with the streaming variant (Section \ref{variant_streaming}) to train features on a continually evolving network without ever explicitly constructing the entire graph.
Maintaining representations with this technique could enable web-scale classification without the hassles of dealing with a web-scale graph.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/scalability_time.pdf}
\caption{Running Time}
\label{fig:parallel_speed}
\end{subfigure}%
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/scalability_perf.pdf}
\caption{Performance}
\label{fig:parallel_performance}
\end{subfigure}
\caption{Effects of parallelizing \textsc{DeepWalk}}
\label{fig:parallel}
\end{figure}
\input{figures/graphs_overview}
\section{Experimental Design}
\label{sec:experimental_design}
In this section we provide an overview of the datasets and methods which we will use in our experiments. Code and data to reproduce our results will be available at the first author's website.
\subsection{Datasets}
An overview of the graphs we consider in our experiments is given in Figure \ref{table.graph_info}.
\begin{itemize}[itemsep=1pt, topsep=5pt, partopsep=0pt]
\item \textsc{BlogCatalog} \cite{Tang:2009:RLV:1557019.1557109} is a network of social relationships provided by blogger authors. The labels represent the topic categories provided by the authors.
\item \textsc{Flickr} \cite{Tang:2009:RLV:1557019.1557109} is a network of the contacts between users of the photo sharing website. The labels represent the interest groups of the users such as `\emph{black and white photos}'.
\item \textsc{YouTube} \cite{tang2009scalable} is a social network between users of the popular video sharing website. The labels here represent groups of viewers that enjoy common video genres (e.g. \emph{anime} and \emph{wrestling}).
\end{itemize}
\subsection{Baseline Methods}
To validate the performance of our approach we compare it against a number of baselines:
\begin{itemize}[itemsep=0pt, topsep=5pt, partopsep=0pt]
\item \text{SpectralClustering} \cite{Tang:2011:Leveraging}: This method generates a representation in $\mathbb{R}^d$ from the $d$-smallest eigenvectors of $\widetilde{\mathcal{L}}$, the normalized graph Laplacian of $G$.
Utilizing the eigenvectors of $\widetilde{\mathcal{L}}$ implicitly assumes that graph cuts will be useful for classification.
\item \text{Modularity} \cite{Tang:2009:RLV:1557019.1557109}: This method generates a representation in $\mathbb{R}^d$ from the top-$d$ eigenvectors of $B$, the Modularity matrix of $G$.
The eigenvectors of $B$ encode information about modular graph partitions of $G$\cite{newman2006modularity}. Using them as features assumes that modular graph partitions will be useful for classification.
\item \text{EdgeCluster} \cite{tang2009scalable}: This method uses $k$-means clustering to cluster the adjacency matrix of $G$. It has been shown to perform comparably to the \text{Modularity}\text{} method, with the added advantage of scaling to graphs which are too large for spectral decomposition.
\item wvRN\cite{Macskassy03asimple}: The weighted-vote Relational Neighbor is a relational classifier.
Given the neighborhood $\mathcal{N}_i$ of vertex $v_i$, wvRN estimates $\Pr(y_i|\mathcal{N}_i)$ with the (appropriately normalized) weighted mean of its neighbors (i.e $\Pr(y_i|\mathcal{N}_i) = \frac{1}{Z}\sum_{v_j \in \mathcal{N}_i}{w_{ij}\Pr(y_j \mid \mathcal{N}_j)}$); a one-sweep sketch appears after this list.
It has shown surprisingly good performance in real networks, and has been advocated as a sensible relational classification baseline \cite{macskassy2007classification}.
\item Majority: This na\"{\i}ve method simply chooses the most frequent labels in the training set.
\end{itemize}
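For concreteness, one relaxation sweep of wvRN can be sketched as follows; binary labels and uniform edge weights are assumptions of the sketch, and labeled vertices would be clamped to their known values between sweeps.
\begin{verbatim}
def wvrn_sweep(G, prob):
    # prob: dict mapping each vertex to its current estimate of Pr(y_i = 1).
    # With uniform weights the normalizer Z is simply the neighborhood size.
    new_prob = {}
    for v, neighbors in G.items():
        if neighbors:
            new_prob[v] = sum(prob[u] for u in neighbors) / len(neighbors)
        else:
            new_prob[v] = prob[v]          # isolated vertex: unchanged
    return new_prob
\end{verbatim}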
\section{Experiments}
\label{sec:experiments}
\input{figures/blog_catalog_table}
\input{figures/flickr_table}
\input{figures/youtube_table}
In this section we present an experimental analysis of our method. We thoroughly evaluate it on a number of multi-label classification tasks, and analyze its sensitivity across several parameters.
\subsection{Multi-Label Classification}
To facilitate the comparison between our method and the relevant baselines, we use the exact same datasets and experimental procedure as in \cite{Tang:2009:RLV:1557019.1557109,tang2009scalable}.
Specifically, we randomly sample a portion ($T_R$) of the labeled nodes, and use them as training data.
The rest of the nodes are used for testing.
We repeat this process 10 times, and report the average performance in terms of both Macro-$F_1$ and Micro-$F_1$.
When possible we report the original results \cite{Tang:2009:RLV:1557019.1557109,tang2009scalable} here directly.
For all models we use a one-vs-rest logistic regression implemented by LibLinear \cite{REF08a} for classification.
We present results for \textsc{DeepWalk}\ with ($\gamma=80$, $w=10$, $d=128$).
The results for (\text{SpectralClustering}, \text{Modularity}, \text{EdgeCluster}) use Tang and Liu's preferred dimensionality, $d=500$.
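In practice, once the walks have been generated, the skip-gram training itself can be delegated to an off-the-shelf implementation. A hedged usage sketch with gensim, matching this setting (parameter names follow gensim 4.x, an assumption here):
\begin{verbatim}
from gensim.models import Word2Vec

# walks: a list of walks, each a list of vertex ids converted to strings.
sentences = [[str(v) for v in walk] for walk in walks]
model = Word2Vec(sentences, vector_size=128, window=10, min_count=0,
                 sg=1, hs=1, workers=4)   # skip-gram with hierarchical softmax
embedding = model.wv[str(vertex_id)]      # vertex_id: any vertex of interest
\end{verbatim}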
\subsubsection{BlogCatalog}
In this experiment we increase the training ratio ($T_R$) on the \textsc{BlogCatalog}\text{} network from 10\% to 90\%.
Our results are presented in Table \ref{tbl:blogcatalog}. Numbers in bold represent the highest performance in each column.
\textsc{DeepWalk}\ performs consistently better than \text{EdgeCluster}, \text{Modularity}, and \text{wvRN}.
In fact, when trained with only 20\% of the nodes labeled, \textsc{DeepWalk}\ performs better than these approaches when they are given 90\% of the data.
The performance of \text{SpectralClustering}\ proves much more competitive, but \textsc{DeepWalk}\ still outperforms it when labeled data is sparse, on both Macro-$F_1$ ($T_R\leq20\%$) and Micro-$F_1$ ($T_R\leq60\%$).
This strong performance when only small fractions of the graph are labeled is a core strength of our approach.
In the following experiments, we investigate the performance of our representations on even more sparsely labeled graphs.
\input{figures/stability.tex}
\subsubsection{Flickr}
In this experiment we vary the training ratio ($T_R$) on the \textsc{Flickr}\ network from 1\% to 10\%.
This corresponds to having approximately 800 to 8,000 nodes labeled for classification in the entire network.
Table \ref{tbl:flickr} presents our results, which are consistent with the previous experiment. \textsc{DeepWalk}\ outperforms all baselines by at least 3\% with respect to Micro-$F_1$.
Additionally, its Micro-$F_1$ performance when only 3\% of the graph is labeled beats all other methods even when they have been given 10\% of the data.
In other words, \textsc{DeepWalk}\ can outperform the baselines with 60\% less training data.
It also performs quite well in Macro-$F_1$, initially performing close to \text{SpectralClustering}, but eventually distancing itself to a 1\% improvement.
\subsubsection{YouTube}
The \textsc{YouTube}\ network is considerably larger than the previous ones we have experimented on, and its size prevents two of our baseline methods (\text{SpectralClustering}\text{} and \text{Modularity}) from running on it.
It is much closer to a real world graph than those we have previously considered.
The results of varying the training ratio ($T_R$) from 1\% to 10\% are presented in Table \ref{tbl:youtube}.
They show that \textsc{DeepWalk}\ significantly outperforms the scalable baseline for creating graph representations, \text{EdgeCluster}. When 1\% of the labeled nodes are used for training, the Micro-$F_1$ improves by 14\%.
The Macro-$F_1$ shows a corresponding 10\% increase.
This lead narrows as the training data increases, but \textsc{DeepWalk}\ ends with a 3\% lead in Micro-$F_1$, and an impressive 5\% improvement in Macro-$F_1$.
This experiment showcases the performance benefits that can occur from using social representation learning for multi-label classification.
\textsc{DeepWalk}\ can scale to large graphs, and performs exceedingly well in such a sparsely labeled environment.
\subsection{Parameter Sensitivity}
In order to evaluate how changes to the parameterization of \textsc{DeepWalk}\ affect its performance on classification tasks, we conducted experiments on two multi-label classification tasks (\textsc{Flickr}\ and \textsc{BlogCatalog}).
For this test, we have fixed the window size and the walk length to sensible values $(w=10,t=40)$ which should emphasize local structure.
We then vary the number of latent dimensions ($d$), the number of walks started per vertex ($\mathcal{\gamma}$), and the amount of training data available ($T_R$) to determine their impact on the network classification performance.
\subsubsection{Effect of Dimensionality}
Figure \ref{fig:stability_dimensions} shows the effects of increasing the number of latent dimensions available to our model.
Figures \ref{fig:stability_flickr-dims_vs_training} and \ref{fig:stability_blogcatalog-dims_vs_training} examine the effects of varying the dimensionality and training rate.
The performance is quite consistent between \textsc{Flickr}\ and \textsc{BlogCatalog}, and shows that the optimal dimensionality for a model depends on the number of training examples. (Note that 1\% of \textsc{Flickr}\ has approximately as many labeled examples as 10\% of \textsc{BlogCatalog}.)
Figures \ref{fig:stability_flickr-dims_vs_passes} and \ref{fig:stability_blogcatalog-dims_vs_passes} examine the effects of varying the dimensionality and the number of walks per vertex.
The relative performance between dimensions is relatively stable across different values of $\gamma$.
These charts yield two interesting observations. The first is that most of the benefit is obtained by starting $\gamma = 30$ walks per node in both graphs.
The second is that the relative difference between different values of $\gamma$ is quite consistent between the two graphs.
\textsc{Flickr}\ has an order of magnitude more edges than \textsc{BlogCatalog}, and we find this behavior interesting.
These experiments show that our method can make useful models of various sizes. They also show that the performance of the model depends on the number of random walks it has seen, and the appropriate dimensionality of the model depends on the training examples available.
\subsubsection{Effect of sampling frequency}
Figure \ref{fig:stability_passes} shows the effects of increasing $\gamma$, the number of random walks that we start from each vertex.
The results are very consistent for different dimensions (Fig.\ \ref{fig:stability_flickr-passes_vs_dims}, Fig.\ \ref{fig:stability_blogcatalog-passes_vs_dims}) and the amount of training data (Fig.\ \ref{fig:stability_flickr-passes_vs_training}, Fig.\ \ref{fig:stability_blogcatalog-passes_vs_training}).
Initially, increasing $\gamma$ has a big effect on the results, but this effect quickly slows ($\gamma > 10$).
These results demonstrate that we are able to learn meaningful latent representations for vertices after only a small number of random walks.
\section{Related Work}
\label{sec:related}
The main differences between our proposed method and previous work can be summarized as follows:
\begin{enumerate} [itemsep=0pt, topsep=5pt, partopsep=0pt]
\item We \emph{learn} our latent social representations, instead of computing statistics related to centrality \cite{gallagher2010leveraging} or partitioning \cite{Tang:2011:Leveraging}.
\item We do not attempt to extend the classification procedure itself (through collective inference \cite{sen2008collective} or graph kernels \cite{kondor2002diffusion}).
\item We propose a scalable online method which uses only local information. Most methods require global information and are offline \cite{Tang:2009:RLV:1557019.1557109,tang2009scalable,Tang:2011:Leveraging,Henderson:2011:YKG:2020408.2020512}.
\item We apply unsupervised representation learning to graphs.
\end{enumerate}
In this section we discuss related work in network classification and unsupervised feature learning.
\subsection{Relational Learning}
Relational classification (or \emph{collective classification}) methods \cite{Macskassy03asimple,neville2000iterative,Pearl:1988:PRI:534975,geman1984stochastic} use links between data items as part of the classification process.
Exact inference in the collective classification problem is NP-hard, and solutions have focused on the use of approximate inference algorithms which may not be guaranteed to converge \cite{sen2008collective}.
The most relevant relational classification algorithms to our work incorporate community information by learning clusters \cite{Neville:2005:LRA:1090193.1090201}, by adding edges between nearby nodes \cite{Gallagher:2008:UGE:1401890.1401925}, by using PageRank \cite{cohenASOM}, or by extending relational classification to take additional features into account \cite{wang2013multi}.
Our work takes a substantially different approach.
Instead of a new approximate inference algorithm, we propose a procedure which learns representations of network structure which can then be used by existing inference procedures (including iterative ones).
A number of techniques for generating features from graphs have also been proposed \cite{gallagher2010leveraging,Tang:2009:RLV:1557019.1557109,tang2009scalable,Tang:2011:Leveraging,Henderson:2011:YKG:2020408.2020512}.
In contrast to these methods, we frame the feature creation procedure as a representation learning problem.
Graph Kernels \cite{vishwanathan2010graph} have been proposed as a way to use relational data as part of the classification process, but are quite slow unless approximated \cite{kang2012fast}.
Our approach is complementary; instead of encoding the structure as part of a kernel function, we learn a representation of the structure which can be used directly as features by any classification method.
\subsection{Unsupervised Feature Learning}
Distributed representations have been proposed to model structural relationships between concepts \cite{distributed}.
These representations are trained via back-propagation and gradient descent.
Computational costs and numerical instability led these techniques to be abandoned for almost a decade.
Recently, distributed computing has allowed larger models to be trained \cite{nnlm}, and the growth of available data has allowed unsupervised learning algorithms to emerge \cite{erhanhelps}.
Distributed representations are usually trained through neural networks; such networks have made advances in diverse fields such as computer vision \cite{vision1}, speech recognition \cite{speech1}, and natural language processing \cite{senna1}.
\section{Conclusions}
\label{sec:conclusion}
We propose \textsc{DeepWalk}, a novel approach for learning latent social representations of vertices.
Using local information from truncated random walks as input, our method learns a representation which encodes structural regularities.
Experiments on a variety of different graphs illustrate the effectiveness of our approach on challenging multi-label classification tasks.
As an online algorithm, \textsc{DeepWalk} is also scalable. Our results show that we can create meaningful representations for graphs too large to run spectral methods on.
On such large graphs, our method significantly outperforms other methods designed to handle sparsity.
We also show that our approach is parallelizable, allowing workers to update different parts of the model concurrently.
In addition to being effective and scalable, our approach is also an appealing generalization of language modeling.
This connection is mutually beneficial.
Advances in language modeling may continue to generate improved latent representations for networks.
In our view, language modeling is actually sampling from an unobservable language graph. We believe that insights obtained from modeling observable graphs may in turn yield improvements to modeling unobservable ones.
Our future work in the area will focus on investigating this duality further, using our results to improve language modeling, and strengthening the theoretical justifications of the method.
\section{Introduction}
The fifth generation (5G) of mobile networks is now rolling out globally, and the newly deployed 5G infrastructure is expected to provide commercial services in 2020. Driven by the relentless growth of wireless data traffic over the past three decades, modern wireless communication systems (from 2G to nowadays 4G) have been consistently engineered and developed towards providing better mobile broadband services, with higher and higher data rates to subscribers. The trend is envisioned to continue in 5G as it will need to carry 10,000 times more traffic \cite{Nokiawhitepaper}. Besides, with the grand ambition of offering connectivity to anything that may benefit from being connected, 5G cellular systems are envisaged to support two brand-new service categories in addition to the conventional mobile broadband one: massive machine-type communication (mMTC) \cite{Dawy_WCM_2017} and ultra-reliable low-latency communication (URLLC) \cite{Soldani_NM_2018}.
The mMTC service refers to providing wireless connectivity for a massive number (tens of thousands) of low-cost and low-energy machine-type devices (MTDs) in a relatively large area. The mMTC can find potential applications in smart metering, smart agriculture, logistics, fleet management, etc. The traffic of these applications is characterized as massive yet sporadic small-packet transmissions which require the support of high spectrum efficiency and network scalability. Furthermore, the network maintenance cost can be huge due to the large number of nodes. As such, ultra-high energy efficiency is demanded to achieve long battery lifetimes for MTDs. On the other hand, URLLC is a service category not present in today's mobile systems, which targets mission-critical applications requiring low end-to-end latency with high reliability. Examples are fault detection and isolation in power systems, detection and responses to hazardous road conditions, self-driven vehicles, remote surgery, smart factories, and augmented reality.
Nevertheless, with the continuing deployment of 5G cellular systems in practice, it gradually becomes clear that 5G is inadequate to fulfill the promised vision of being an enabler for the ``Internet of Everything", especially the most innovative URLLC part, due to its inherent limitations \cite{Saad2019}. While the enormous network capacity growth is achievable through conventional methods of moving to higher parts of the radio spectrum and network densification, realizing URLLC will involve a departure from the underlying theoretical principles of wireless communications. More specifically, the coupled and contradictory requirements of low latency and high reliability render the design of URLLC systems a challenging task, since wireless channels are highly dynamic and are susceptible to fading, interference, blockage, and high pathloss, especially when there are many moving devices, metallic reflectors and electromagnetic radiation equipment~\cite{Durisi2016pieee,Popovski18,Ji2018WCM,Bennis2018pieee,Chen2018mag}. Such design challenges will be further escalated to provide the envisioned \emph{scalable URLLC} (sURLLC) services in wireless systems beyond 5G. As defined in \cite{Saad2019}, sURLLC will scale the 5G URLLC across the device dimension by seamlessly integrating 5G URLLC with legacy mMTC. In sURLLC, the augmented triple reliability-latency-scalability tradeoff will need to be carefully dealt with, which calls for a totally new design framework.
In wireless communications, channel diversity, which refers to transmitting multiple copies of the same information through independent links along different time/frequency/spatial axes, is one of the most important techniques for boosting system reliability by effectively combating channel fading and interference~\cite{Tse05book}. As the diversity order increases, wireless channels gradually become more stable and the chance of requesting retransmissions from the receiver side correspondingly decreases~\cite{Popovski18}. However, in realizing sURLLC, time diversity is not preferred since it comes at the cost of additional delay, especially in slow fading channels. Besides, delivering the information along distinct frequency channels consumes additional bandwidth, which is costly for operation below 6\,GHz, where the spectrum is already overcrowded~\cite{ACMA15}. Thus, harnessing spatial diversity by deploying multiple antennas at the transmitter and/or receiver side becomes the most appealing solution, and there have been extensive studies on various diversity technologies for conventional MIMO systems, see e.g.,~\cite{Tse03may, Marzetta2000tit} and references therein.
Recently, massive multiple-input multiple-output (MIMO) technology~\cite{Marzetta10}, which scales up conventional MIMO by deploying a large number of antennas at the transmitter and/or receiver side, has been regarded as an indispensable building block for ensuring ultrahigh reliability~\cite{Popovski18}.
Actually, massive MIMO has already become an integral part of 5G communications for its great potential~\cite{Andrews14, Popovski14}. The main advantages of massive MIMO include high array gain, high spatial multiplexing gain and immunity to fast fading in rich scattering environments. The fluctuations of wireless channels can be averaged out in massive MIMO, and high reliability can be maintained for short packets without the need of strong channel coding. Despite the advantages mentioned above, the mechanisms of leveraging massive MIMO for realizing sURLLC are still largely unexplored.
The reliability and latency gains associated with massive MIMO systems depend critically on the acquisition of the instantaneous channel state information (CSI)~\cite{Lu14Oct}. In conventional communication systems, the estimation of instantaneous CSI is commonly achieved by transmitting known pilot symbols that are orthogonal across different users, where the channel estimation overhead is relatively low compared with the long data payload. However, the packets in sURLLC applications are typically very short and thus the overhead induced by channel estimation becomes non-negligible and will reduce the effective transmission rate significantly~\cite{Popovski16}. Moreover, sending pilots will cause significant delay in short-packet communications, especially over fast fading channels where the pilot symbols need to be dense in the time-frequency grid. As a matter of fact, obtaining instantaneous CSI is one of the most severe limiting factors to exploit the full potential of massive MIMO, where the latency introduced by the channel estimation in massive MIMO constitutes a major barrier for meeting the extreme delay requirement~\cite{Popovski18}. To reduce the latency in massive MIMO, the transmission protocol should depend on as little knowledge of the small-scale fading as possible~\cite{Popovski18}.
Nevertheless, the knowledge of channel statistics remains crucial to provide high reliability requirements, especially when the precise knowledge of instantaneous CSI is not available. Noncoherent detection, where no instantaneous CSI is required, can be the key supporting asset for low-latency applications. It was shown in~\cite{Giuseppe2017arxiv} that noncoherent transmission is more energy efficient than pilot-assisted transmission schemes, even when the number of pilot symbols and their power are optimized.
We note that there have been considerable efforts on designing single-user noncoherent massive single-input multiple-output (SIMO) systems, see e.g.,~\cite{Goldsmith16twc,Popovski16tsp,Xie2019sys,Gao18iotj}, which demonstrated that simple energy-based modulation and detection can be sufficient for reliable detection by leveraging the massive number of antennas. By considering the single-user scenario, these works implicitly assumed that an orthogonal multiple access (OMA) mechanism (e.g., time-division multiple access) has been adopted at the data link layer to support the co-existence of multiple users. However, OMA mechanisms normally have poor scalability, as the channel access latency scales up linearly with the number of end-devices; they are thus no longer suitable for the more challenging sURLLC applications with a large forecasted number of devices. One effective solution to address this scalability issue is to break the orthogonality of existing OMA protocols and empower a new non-orthogonal and noncoherent massive MIMO (nn-mMIMO) framework. It is worth mentioning here that non-orthogonal multiple access (NOMA) has recently received tremendous attention from the mobile communication research community as a promising technology for 5G cellular systems, see a recent comprehensive survey \cite{Dai2018CST}, where the primary goal of applying NOMA is to boost spectral efficiency and user fairness. Existing NOMA solutions along this research line normally require the estimation of instantaneous CSI such that the optimization of power allocation/control for different signal streams can be conducted at the transmitter side and successive interference cancellation can be implemented for detecting multiple users' signals at the receiver side. These NOMA solutions are thus no longer applicable for nn-mMIMO enabled sURLLC applications.
Enabling NOMA in massive MIMO is in fact straightforward when the instantaneous CSI is available, which can be achieved by applying space-division multiple access (SDMA). However, how to empower the non-orthogonal access of multiple users at the same time in noncoherent massive MIMO systems becomes a non-trivial task, as beamforming techniques cannot be used anymore. Very recently, a new constellation domain-based NOMA methodology towards enabling nn-mMIMO has been developed in~\cite{Goldsmith16tit,Zhang2018JSAC,Xu2019ICC,chen2019icps}, which allows the simultaneous channel access of multiple devices at the data link layer without the availability of instantaneous CSI at the physical layer. However, all the designs in~\cite{Goldsmith16tit,Zhang2018JSAC,Xu2019ICC,chen2019icps} only considered one-shot communications (i.e., the received signal is decoded in a symbol-by-symbol manner). As such, the phase information of the transmitted signals is lost at the receiver side and thus only unipolar PAM constellations can be used, which largely limits the system reliability performance as the number of devices increases.
Towards enabling sURLLC, in this paper we develop a new nn-mMIMO framework that can perform joint noncoherent detection of the uplink signals from multiple devices over more than one time slot, where the transmitted signals are allowed to use the more robust QAM constellations. \emph{The main contributions of this paper are two-fold}:
Firstly, we apply a noncoherent maximum likelihood (ML) receiver which relies only on the second-order channel statistics, such that no instantaneous CSI is needed at either the transmitter or the receiver side. For the considered ML receiver, we systematically design a uniquely-factorable multiuser space-time modulation (UF-MUSTM) scheme to enable the concurrent transmission of multiple devices to a noncoherent receiver equipped with a large number of antennas. We further identify the necessary and sufficient conditions for the receiver to recover the transmitted signals from all users. Note that our design is closely connected to conventional space-time code design.
Up to now, most of the existing space-time code designs, such as~\cite{Marzetta2000tit, Varanasi01, Aazhang03tit, Zhang11tit}, considered point-to-point MIMO systems, where all the transmitting antennas are connected to the same transmitter, and hence the transmitted information-carrying signals are accessible by all the antennas. However, in our considered UF-MUSTM based nn-mMIMO system, the signals transmitted from different users are not allowed to fully collaborate, which dramatically limits the codebook design. Particularly, the widely used unitary space-time code design is in general intractable for the considered multiuser massive MIMO system.
Secondly, we further optimize the proposed design framework by jointly designing the constellations of multiple users. We note that the performance analysis for non-unitary codewords of MUSTM is extremely challenging, if not impossible, as shown in~\cite{Varanasi01, Aazhang03tit}. Confronting such a challenge, we propose a max-min Kullback-Leibler (KL) divergence-based design criterion, based on which we jointly optimize the transmit powers of all users and the sub-constellation assignment among them. Note that the basic idea of this paper has been presented in the conference version \cite{Dong2019icps}, in which we only consider the simple scenario where all users adopt 4-QAM. In this paper, we extend the design to the more general case where users can adopt larger QAM constellations of possibly different orders, which makes the optimization problem more complex and harder to resolve. We manage to solve the formulated optimization problem in closed form. Simulations are provided to demonstrate the superiority of the proposed design over the state-of-the-art benchmarking schemes.
The remainder of this paper is organized as follows. In Sec. II, we describe the system model, the noncoherent detector, as well as the signal design. The design and optimization of the proposed UF-MUSTM framework is elaborated in Sec. III. Simulations are conducted and the corresponding results are discussed in Sec. IV. The conclusions are finally drawn in Sec. V.
\section{System Model, Noncoherent Detector, and Signal Design}
\subsection{System Model and Noncoherent ML Detector}\label{sec:systemmodel}
We consider a massive MIMO system consisting of $K$ single-antenna users transmitting simultaneously to a base station (BS) with $M$ ($M\gg K$) receiving antennas on the same time-frequency grid. Using a discrete-time complex baseband-equivalent model, the received signal at the antenna array of the BS in the $t$-th time slot\footnote{Each time slot refers to one symbol duration throughout this paper.}, defined as $\mathbf{y}_t=[y_{1,t},\ldots, y_{M,t}]^T$, can be expressed as
\begin{align*}
\mathbf{y}_t=\mathbf{H}\mathbf{x}_t +{\boldsymbol\xi}_t,
\end{align*}
where $\mathbf{x}_t=[ x_{1,t},\ldots, x_{K,t}]^T$ represents the transmitted signals from all $K$ users, and ${\boldsymbol\xi}_t$ is an additive circularly-symmetric complex Gaussian (CSCG) noise vector with covariance $\sigma^2 \mathbf{I}_M$. We let $\mathbf{H}=\mathbf{G}\mathbf{D}^{1/2}$ denote the $M\times K$ complex channel matrix between the receiver antenna array and all users, where $\mathbf{G}$ characterizes the small-scale fading caused by local scattering, while $\mathbf{D} ={\rm diag}\{\beta_1,\cdots, \beta_K\}$ with $\beta_k>0$ captures the propagation loss due to distance and shadowing. All the entries of $\mathbf{G}$ are assumed to be i.i.d. complex Gaussian distributed with zero mean and unit variance. The channel coefficients are assumed to follow block fading, i.e., they remain quasi-static within the current block and change to independent values in the next block, with a channel coherence time $T_c \ge K$. We consider a space-time block modulation (STBM)~\cite{Aazhang03tit} scheme over $T$ time slots, and the received signal vectors can be stacked into matrix form as
\begin{align}\label{eqn:vecrecsignal}
\mathbf{Y}_T=\mathbf{H}\mathbf{X}_T +{\mathbf\Xi}_T,
\end{align}
where $\mathbf{Y}_T=[\mathbf{y}_1,\ldots, \mathbf{y}_T]$, $\mathbf{X}_T=[\mathbf{x}_1, \ldots, \mathbf{x}_T]$ and ${\mathbf \Xi}_T=[\boldsymbol{\xi}_1,\cdots, \boldsymbol{\xi}_T]$.
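As an illustration of the model in~\eqref{eqn:vecrecsignal}, the following Python sketch (not part of the analysis; all numerical values are arbitrary) draws one block-fading realization, forms $\mathbf{H}=\mathbf{G}\mathbf{D}^{1/2}$, and stacks the received vectors of $T$ time slots:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K, T = 128, 4, 2                      # receive antennas, users, time slots
sigma2 = 0.1                             # noise variance
beta = np.array([0.5, 0.8, 1.0, 1.3])    # large-scale fading, D = diag(beta)

# Small-scale fading G with i.i.d. CN(0,1) entries; H = G D^{1/2}.
G = (rng.standard_normal((M, K)) + 1j*rng.standard_normal((M, K))) / np.sqrt(2)
H = G * np.sqrt(beta)

# Placeholder transmitted block X_T (QPSK symbols, purely for illustration).
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), size=(K, T))

Xi = np.sqrt(sigma2/2) * (rng.standard_normal((M, T)) + 1j*rng.standard_normal((M, T)))
Y = H @ X + Xi                           # Y_T = H X_T + Xi_T
\end{verbatim}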
\begin{assumption}\label{assumption1}
Throughout this paper, we adopt the following assumptions:
\begin{enumerate}
\item The small scale channel fading matrix $\mathbf{G}$ is completely unknown to the BS and all the users, while the large scale fading matrix $\mathbf{D}$ is available at the BS that will be leveraged to optimize the system performance;
\item The transmitted signals are subject to an instantaneous average power constraint\footnote{Note that our design can be directly extended to the case with peak power constraint.}: $\mathbb E \{|x_{k,t}|^2\} \le P_k$, $k=1,\ldots, K$, $t=1,\ldots, T$. For convenience, we assume that the users are labeled in an ascending order with $P_1\beta_1\le \ldots \le P_K \beta_K$. \hfill{\rm $\blacksquare$}
\end{enumerate}
\end{assumption}
In this work, we apply a noncoherent ML detector which is optimal for uniformly distributed discrete input signals in terms of error probability. We note that~\eqref{eqn:vecrecsignal} can be reformulated as $\mathbf{Y}_T^H=\mathbf{X}_T^H \mathbf{D}^{1/2}\mathbf{G}^H +{\boldsymbol \Xi}_T^H$. With the help of~\cite{Petersen12}, the vectorized form of the received signal can then be written as
\begin{align*}
\mathbf{y}={\rm vec}(\mathbf{Y}_T^H)=(\mathbf{I}_{M}\otimes \mathbf{X}_T^H \mathbf{D}^{1/2}){\rm vec}(\mathbf{G}^H) +{\rm vec}(\boldsymbol{\Xi}_T^H).
\end{align*}
As all the entries of $\mathbf{G}$ and $\boldsymbol \Xi$ are i.i.d. CSCG, we immediately have $\mathbb E[\mathbf{y}] =\mathbf{0}$, and the covariance matrix of $\mathbf{y}$ can be calculated as
$\mathbf{R}_{\mathbf{y}|\mathbf{X}_T}=\mathbb E[\mathbf{y}\mathbf{y}^H]
=\mathbf{I}_M \otimes (\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T +\sigma^2 \mathbf{I}_T)$.
The conditional distribution of the received signal $\mathbf{y}$ at the BS for any transmitted signal matrix $\mathbf{X}_T$ can then be given by $p({\mathbf{y}|\mathbf{X}_T})=\frac{1}{\pi^{M T}\det({\mathbf{R}_{\mathbf{y}|\mathbf{X}_T}})} \exp(-\mathbf{y}^H \mathbf{R}_{\mathbf{y}|\mathbf{X}_T}^{-1}\mathbf{y})$,
where $\mathbf{R}_{\mathbf{y}|\mathbf{X}_T}=\mathbf{I}_M\otimes (\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T +\sigma^2 \mathbf{I}_T)$.
The noncoherent ML detector can estimate the transmitted information carrying matrix from the received signal vector $\mathbf{y}$ by resolving the following optimization problem:
\begin{align}\label{eqn:MLdetector}
\widehat{\mathbf{X}}_T={\arg\min}_{\mathbf{X}_T}~\mathbf{y}^H \mathbf{R}_{\mathbf{y}|\mathbf{X}_T}^{-1}\mathbf{y}+\log \det(\mathbf{R}_{\mathbf{y}|\mathbf{X}_T}).
\end{align}
From~\eqref{eqn:MLdetector}, we can observe that the detector depends on the transmitted signal matrix only through the sufficient statistic $\mathbf{R}_{\mathbf{y}|\mathbf{X}_T}=\mathbf{I}_M\otimes (\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T +\sigma^2 \mathbf{I}_T)$. The detailed discussion regarding the signal design is given in the following subsection.
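For a finite candidate set, the detector in~\eqref{eqn:MLdetector} can be evaluated directly. The Python sketch below (our own helper names; the exhaustive search is purely illustrative) exploits the Kronecker structure of $\mathbf{R}_{\mathbf{y}|\mathbf{X}_T}$, so that only the $T\times T$ matrix $\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T +\sigma^2 \mathbf{I}_T$ needs to be inverted:
\begin{verbatim}
import numpy as np

def ml_metric(Y, Xc, beta, sigma2):
    # With R_{y|X} = I_M kron R_T and R_T = X^H D X + sigma^2 I_T, we have
    # y^H R^{-1} y = tr(Y R_T^{-1} Y^H) and log det R = M log det R_T.
    T = Xc.shape[1]
    R_T = Xc.conj().T @ (beta[:, None] * Xc) + sigma2 * np.eye(T)
    quad = np.trace(Y @ np.linalg.inv(R_T) @ Y.conj().T).real
    _, logdet = np.linalg.slogdet(R_T)
    return quad + Y.shape[0] * logdet.real

def ml_detect(Y, codebook, beta, sigma2):
    # Exhaustive search over the finite set of candidate codewords.
    return min(codebook, key=lambda Xc: ml_metric(Y, Xc, beta, sigma2))
\end{verbatim}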
\subsection{Unique Identification of the Transmitted Signal Matrix}
In this subsection, we first identify what conditions the transmitted signal matrix must satisfy to ensure the unique identification of the transmitted signal matrix $\mathbf{X}_T$.
We can observe from~\eqref{eqn:MLdetector} that, to achieve reliable communication between all users and the BS in the considered nn-mMIMO system, the BS must be able to uniquely determine each transmitted signal matrix ${\mathbf X}_T$ once $\mathbf{R}=\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T$ has been identified, which can be formally stated as follows:
\begin{proposition}\label{proposition:UFCM}
For the multiuser nn-mMIMO system described in~\eqref{eqn:vecrecsignal}, reliable communication with transmitted signal matrices drawn from ${\mathcal M}^{K \times T}\subseteq \mathbb C^{K \times T}$ is possible if and only if, for any two signal matrices $\mathbf{X}_T, \widetilde{\mathbf{X}}_T\in {\mathcal M}^{K \times T}$ satisfying $\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T= \widetilde{\mathbf{X}}_T^H \mathbf{D}\widetilde{\mathbf{X}}_{T}$, we have ${\mathbf{X}}_T=\widetilde{\mathbf{X}}_T$.\hfill{\rm $\blacksquare$}
\end{proposition}
The proof is provided in Appendix~\ref{append:prop1}.
Inspired by Proposition~\ref{proposition:UFCM}, to facilitate our system design, we introduce the concept of uniquely-factorable multiuser space-time modulation (UF-MUSTM), the formal definition of which is given as follows:
\begin{definition}\label{def:udmustm}
A multiuser space-time modulation codebook $\mathcal{S}^{K\times T} \subseteq \mathbb C^{K \times T}$ is said to form a UF-MUSTM codebook if for any pair of codewords $\mathbf{S}, \widetilde{\mathbf{S}} \in \mathcal{S}^{K\times T}$ satisfying $\mathbf{S}^H \mathbf{S} = \widetilde{\mathbf{S}}^H \widetilde{\mathbf{S}}$, we have $\mathbf{S} =\widetilde{\mathbf{S}}$. \hfill{\rm $\blacksquare$}
\end{definition}
Definition~\ref{def:udmustm} motivates us to design a UF-MUSTM codebook for the considered nn-mMIMO system. Therefore, our primary task in the rest of this paper is to develop a new framework for a systematic design of such UF-MUSTM ${\mathcal S}^{K\times T}$.
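For small codebooks, the condition in Definition~\ref{def:udmustm} can be verified by brute force; the sketch below (an illustrative helper of our own) simply compares the Gram matrices of all codeword pairs:
\begin{verbatim}
import itertools
import numpy as np

def is_uf_mustm(codebook, tol=1e-9):
    # UF property: S^H S must determine S uniquely over the codebook.
    for S, S_t in itertools.combinations(codebook, 2):
        if np.allclose(S.conj().T @ S, S_t.conj().T @ S_t, atol=tol):
            return False  # two distinct codewords share a Gram matrix
    return True
\end{verbatim}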
Before proceeding on, it is worth clarifying here that the UF-MUSTM code design is fundamentally different from existing noncoherent space-time code/modulation designs. Specifically,
\begin{itemize}
\item For the considered UF-MUSTM based nn-mMIMO system, the signals transmitted from different users cannot fully collaborate, and hence the widely used unitary space-time code design is intractable for the considered system. This is fundamentally different from most conventional space-time code designs for point-to-point MIMO systems, where unitary space-time codes are feasible since all the transmitting antennas are connected to the same transmitter~\cite{Marzetta2000tit, Aazhang03tit}. Note that the error performance analysis of non-unitary MUSTM codewords is very challenging, as shown in~\cite{Varanasi01}.
\item Our design is asymptotically optimal when the number of BS antennas goes to infinity while keeping the transmitted power fixed. This is in contrast to most previous space-time coding designs which considered the asymptotic regime with the signal-to-noise ratio (SNR) going to infinity~\cite{Varanasi01, Zhang11tit,Aazhang03tit}.
\end{itemize}
\section{Design and Optimization of UF-MUSTM Framework}\label{SecIII}
In this section, we present a UF-MUSTM framework with a slot-by-slot noncoherent ML detector. We find that when the number of receiving antennas increases, the pairwise error probability (PEP) between two codewords will be dominated by the Kullback-Leibler (KL) divergence between them. Motivated by this fact, a max-min KL divergence design criterion is proposed to optimize the transmit powers of all users and the sub-constellation assignment among them.
\subsection{KL Divergence between Transmitted Space-Time Modulation Codewords}
In practice, the computational complexity of the optimal noncoherent ML detector described in~\eqref{eqn:MLdetector} could be prohibitively high. Furthermore, the error performance analysis available for block transmission with general block size and an ML receiver is too complicated to yield insight into the input codeword design and the corresponding power allocation~\cite{Varanasi01}.
To resolve these problems as well as to reduce the receiver complexity, our main idea is to feed blocks of small size into the ML receiver. If only one time slot is involved in the ML detector given in~\eqref{eqn:MLdetector}, i.e., when $T=1$, the correlation matrix $\mathbf{R}=\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T$ degenerates into a real scalar $\mathbf{x}_1^H\mathbf{D}\mathbf{x}_1=\sum_{k=1}^K \beta_k |x_{k,1}|^2$, where the phase information of the transmitted symbols is lost and information bits from all users can only be modulated on the amplitudes of the transmitted symbols. Such a design typically has a low spectral efficiency~\cite{Goldsmith16twc, Goldsmith16tit, Popovski16tsp}. To improve the spectral efficiency by allowing all users to transmit constellations that carry phase information, we need to feed the signals received in at least two time slots into the ML decoder~\cite{Varanasi01,Madhowtit,Aazhang03tit}.
As an initial attempt, in this paper we focus on a slot-by-slot ML detection over the first and the $t$-th time slots, which is similar to differential modulation with hard-decision-based noncoherent multiuser detection. More specifically, we let the transmitted signal matrix be $\mathbf{X}_T=[\mathbf{x}_1, \ldots, \mathbf{x}_T]$. For detection purposes, we now stack the transmitted signals of the first and the $t$-th time slots as $\mathbf{X}_t=[\mathbf{x}_1, \mathbf{x}_t]$, and then make the decision on $\mathbf{Y}_t=[\mathbf{y}_1, \mathbf{y}_t]$ by using~\eqref{eqn:MLdetector}. For simplicity, we consider the transmitted signals from the first and second time slots, i.e., $\mathbf{X}_2=[\mathbf{x}_1, \mathbf{x}_2]$, hereafter; the case of general $\mathbf{X}_t$ follows similarly. We denote
$\mathbf{R}_{\mathbf{y}|\mathbf{X}_2}=\mathbf{I}\otimes \mathbf{R}_2$, in which
\begin{align}\label{eqn:correlation2by2}
&\mathbf{R}_2=\mathbf{X}_2^H \mathbf{D}\mathbf{X}_2 +\sigma^2 \mathbf{I}_2
=\begin{bmatrix}
\mathbf{x}_1^H \mathbf{D} \mathbf{x}_1 +\sigma^2 & \mathbf{x}_1^H \mathbf{D} \mathbf{x}_2 \\
\mathbf{x}_2^H \mathbf{D} \mathbf{x}_1 & \mathbf{x}_2^H \mathbf{D} \mathbf{x}_2 +\sigma^2
\end{bmatrix}.
\end{align}
By~\eqref{eqn:correlation2by2}, we have
\begin{align}
\mathbf{R}_2^{-1}
=\frac{1}{(\mathbf{x}_1^H \mathbf{D} \mathbf{x}_1+\sigma^2)(\mathbf{x}_2^H \mathbf{D} \mathbf{x}_2 +\sigma^2)-|\mathbf{x}_1^H \mathbf{D} \mathbf{x}_2|^2}
\begin{bmatrix}
\mathbf{x}_2^H \mathbf{D} \mathbf{x}_2 +\sigma^2 & -\mathbf{x}_1^H \mathbf{D} \mathbf{x}_2\\
-\mathbf{x}_2^H \mathbf{D} \mathbf{x}_1 & \mathbf{x}_1^H \mathbf{D} \mathbf{x}_1+\sigma^2
\end{bmatrix}.
\end{align}
As a consequence, the ML receiver can be reformulated as follows
\begin{align}\label{eqn:simplifiedMLreceiver}
\widehat{\mathbf{X}}_2&={\arg\min}_{\mathbf{X}_2}~\mathbf{y}^H \mathbf{R}_{\mathbf{y}|\mathbf{X}_2}^{-1}\mathbf{y}+\log \det(\mathbf{R}_{\mathbf{y}|\mathbf{X}_2})\nonumber\\
&={\arg\min}_{\mathbf{X}_2}~\frac{ (\mathbf{x}_1^H \mathbf{D} \mathbf{x}_1+\sigma^2) \|\mathbf{y}_2\|^2 + (\mathbf{x}_2^H \mathbf{D} \mathbf{x}_2 +\sigma^2) \|\mathbf{y}_1\|^2 -2 \Re(\mathbf{x}_1^H \mathbf{D} \mathbf{x}_2 \mathbf{y}_2^H \mathbf{y}_1)}{(\mathbf{x}_1^H \mathbf{D} \mathbf{x}_1+\sigma^2)(\mathbf{x}_2^H \mathbf{D} \mathbf{x}_2 +\sigma^2)-|\mathbf{x}_1^H \mathbf{D} \mathbf{x}_2|^2}\nonumber\\
&\qquad\qquad\qquad\qquad\qquad + M \ln \Big[(\mathbf{x}_1^H \mathbf{D} \mathbf{x}_1+\sigma^2)(\mathbf{x}_2^H \mathbf{D} \mathbf{x}_2 +\sigma^2)-|\mathbf{x}_1^H \mathbf{D} \mathbf{x}_2|^2\Big],
\end{align}
where $\mathbf{y}_1$ and $\mathbf{y}_2$ are the received signal vectors in the first and second time slots, respectively.
It can be observed that the diagonal entries in~\eqref{eqn:correlation2by2} are $\mathbf{x}_1^H\mathbf{D}\mathbf{x}_1=\sum_{k=1}^K \beta_k |x_{k,1}|^2$ and $\mathbf{x}_2^H\mathbf{D}\mathbf{x}_2=\sum_{k=1}^K \beta_k |x_{k,2}|^2$, in which the phase information is lost, while the off-diagonal term is $\mathbf{x}_1^H \mathbf{D} \mathbf{x}_2 =\sum_{k=1}^K \beta_k x_{k,1}^*x_{k,2}=\sum_{k=1}^K \beta_k |x_{k,1}||x_{k,2}|\exp\big (j\arg(x_{k,2}) -j\arg(x_{k,1})\big )$, indicating that we can transmit a known reference signal vector $\mathbf{x}_1$ in the first time slot and then transmit the information bearing signal vector $\mathbf{x}_2$ to imitate a ``differential-like" transmission~\cite{2001titZheng}.
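Note that the receiver in~\eqref{eqn:simplifiedMLreceiver} needs only $\|\mathbf{y}_1\|^2$, $\|\mathbf{y}_2\|^2$ and $\mathbf{y}_2^H\mathbf{y}_1$ from the observations. A minimal Python sketch (our own helper; the hypothesis enumeration is for illustration only) reads:
\begin{verbatim}
import numpy as np

def two_slot_ml(y1, y2, candidates, beta, sigma2):
    # Slot-pair noncoherent ML detection over a finite hypothesis set;
    # `candidates` is an iterable of (x1, x2) pairs, each a length-K vector.
    M = y1.size
    n1, n2 = np.vdot(y1, y1).real, np.vdot(y2, y2).real
    cross = np.vdot(y2, y1)                    # y2^H y1
    best, best_metric = None, np.inf
    for x1, x2 in candidates:
        a = np.abs(x1)**2 @ beta + sigma2      # x1^H D x1 + sigma^2
        b = np.abs(x2)**2 @ beta + sigma2      # x2^H D x2 + sigma^2
        c = (x1.conj() * x2) @ beta            # x1^H D x2
        det = a*b - np.abs(c)**2
        metric = (a*n2 + b*n1 - 2*np.real(c*cross))/det + M*np.log(det)
        if metric < best_metric:
            best, best_metric = (x1, x2), metric
    return best
\end{verbatim}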
The exact PEP is extremely hard to evaluate for the matrix $\mathbf{X}_2$ given above.
Moreover, the exact expression for the PEP does not seem to be tractable for further optimization. By the Chernoff-Stein lemma, when the number of receiver antennas $M$ goes to infinity, the PEP goes to zero exponentially, with the exponent determined by the KL divergence~\cite{Aazhang03tit}. Hence, in this paper we propose to use the KL divergence between the conditional distributions of the received signals for different inputs as the design criterion, thanks to its mathematical tractability.
We now derive the KL divergence between the received signals induced by the transmitted signal matrices $\mathbf{X}_2=[\mathbf{x}_1, \mathbf{x}_2]$ and $\widetilde{\mathbf{X}}_2=[\tilde{\mathbf{x}}_1, \tilde{\mathbf{x}}_2]$, which is also the expectation of the log-likelihood ratio between the two received-signal distributions. Essentially, the normalized (by $M$) log-likelihood ratio corresponding to the two transmitted signals converges in probability to the KL divergence as the number of receiver antennas increases~\cite{Aazhang03tit}. More specifically, the KL divergence between the received signals corresponding to the transmitted matrices $\mathbf{X}_2$ and $\widetilde{\mathbf{X}}_2$ can be calculated as
\begin{subequations}
\begin{align*}
&\mathcal{D}_{\rm KL}^{(M)}(\mathbf{X}_2 ||\widetilde{\mathbf{X}}_2)=\mathbb E_{f({\mathbf{y}|\mathbf{X}_2})} \bigg\{ \ln \Big ( \frac{f({\mathbf{y}|\mathbf{X}_2})}{f({\mathbf{y}|\widetilde{\mathbf{X}}_2})} \Big ) \bigg\}\\
&=\mathbb E_{f({\mathbf{y}|\mathbf{X}_2})} \bigg\{ \ln \Big (\frac{\det({\mathbf{R}_{\mathbf{y}|\mathbf{X}_2}}) }{\det({\mathbf{R}_{\mathbf{y}|\widetilde{\mathbf{X}}_2}})}\Big ) +\Big ( \mathbf{y}^H \mathbf{R}_{\mathbf{y}|\widetilde{\mathbf{X}}_2}^{-1}\mathbf{y} -\mathbf{y}^H \mathbf{R}_{\mathbf{y}|\mathbf{X}_2}^{-1}\mathbf{y}\Big )\bigg\}\\
&=\mathbb E_{f({\mathbf{y}|\mathbf{X}_2})} \bigg\{ {\rm tr}\Big (\big ( \mathbf{R}_{\mathbf{y}|\widetilde{\mathbf{X}}_2}^{-1} - \mathbf{R}_{\mathbf{y}|\mathbf{X}_2}^{-1}\big )\mathbf{y}\mathbf{y}^H \Big )\bigg\}+\ln \Big ( \frac{\det({\mathbf{R}_{\mathbf{y}|\mathbf{X}_2}}) }{\det({\mathbf{R}_{\mathbf{y}|\widetilde{\mathbf{X}}_2}})}\Big )\\
&= {\rm tr}\Big (\big (\mathbf{R}_{\mathbf{y}|\widetilde{\mathbf{X}}_2}^{-1} - \mathbf{R}_{\mathbf{y}|\mathbf{X}_2}^{-1}\big )\mathbf{R}_{\mathbf{y}|\mathbf{X}_2} \Big )+\ln \Big (\frac{\det({\mathbf{R}_{\mathbf{y}|\mathbf{X}_2}}) }{\det({\mathbf{R}_{\mathbf{y}|\widetilde{\mathbf{X}}_2}})}\Big )\\
&=M\, \mathcal{D}_{\rm KL}(\mathbf{X}_2 ||\widetilde{\mathbf{X}}_2),
\end{align*}
\end{subequations}
in which
\begin{align}
\mathcal{D}_{\rm KL}(\mathbf{X}_2 ||\widetilde{\mathbf{X}}_2)&= {\rm tr} \big\{ (\mathbf{X}_2^H \mathbf{D}\mathbf{X}_2+\sigma^2 \mathbf{I}_2)(\widetilde{\mathbf{X}}_2^H \mathbf{D}\widetilde{\mathbf{X}}_2 +\sigma^2 \mathbf{I}_2)^{-1} \big\} \nonumber\\
&\qquad - \ln\Big\{ \det\big ((\mathbf{X}_2^H \mathbf{D}\mathbf{X}_2+\sigma^2 \mathbf{I}_2)(\widetilde{\mathbf{X}}_2^H \mathbf{D}\widetilde{\mathbf{X}}_2 +\sigma^2 \mathbf{I}_2)^{-1}\big )\Big\} -2.\label{eqn:KLdistanece}
\end{align}
We can observe from the above expression that $\mathcal{D}_{\rm KL}(\mathbf{X}_2 ||\widetilde{\mathbf{X}}_2)$ is the KL divergence when there is only one receiving antenna. Due to the independence of the channel coefficients across antennas, the KL divergence with $M$ antennas, $\mathcal{D}_{\rm KL}^{(M)}(\mathbf{X}_2 ||\widetilde{\mathbf{X}}_2)$, is simply $M$ times $\mathcal{D}_{\rm KL}(\mathbf{X}_2 ||\widetilde{\mathbf{X}}_2)$.
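The per-antenna divergence in~\eqref{eqn:KLdistanece} is straightforward to evaluate numerically; a short sketch (illustration only) is:
\begin{verbatim}
import numpy as np

def kl_divergence(X2, X2_t, beta, sigma2):
    # Per-antenna KL divergence; multiply by M for the full antenna array.
    R   = X2.conj().T @ (beta[:, None] * X2) + sigma2 * np.eye(2)
    R_t = X2_t.conj().T @ (beta[:, None] * X2_t) + sigma2 * np.eye(2)
    A = R @ np.linalg.inv(R_t)
    _, logdet = np.linalg.slogdet(A)           # det(A) is real and positive
    return np.trace(A).real - logdet.real - 2.0
\end{verbatim}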
\subsection{QAM Division Based Multiuser Space-Time Modulation}
The main objective of this subsection is to develop a new QAM division based MUSTM design framework for the considered nn-mMIMO system. The design is built upon the uniquely decomposable constellation group (UDCG) originally proposed in~\cite{dong16jstsp,Dong2017isit} for the commonly used spectrally efficient QAM signaling.
We now introduce the definition of UDCG as follows:
\begin{definition}\label{def:audcg}
A group of constellations $\{\mathcal{X}_k\}_{k=1}^K$ forms a UDCG, denoted by $\big\{ \sum_{k=1}^K x_k: x_k \in \mathcal{X}_k\big\}=\uplus_{k=1}^K \mathcal{X}_k = \mathcal{X}_1 \uplus \ldots \uplus \mathcal{X}_K$, if whenever two groups $x_k, \tilde x_k \in \mathcal{X}_k$ for $k=1, \cdots, K$ satisfy $\sum_{k=1}^K x_k =\sum_{k=1}^K \tilde x_k$, we have $x_k =\tilde x_k$ for $k=1, \cdots, K$.~\hfill{\rm $\blacksquare$}
\end{definition}
As PAM and QAM constellations, which have simple geometric structures, are commonly used in modern digital communications, we now give the following constructions of UDCGs.
\begin{lemma}\label{lemma:UDCG} The UDCG with PAM and QAM constellations can be constructed as follows:
1) \emph{\underline{{UDCG} with PAM Constellation}}:
For two given positive integers $K$ and $N$ ($N \ge K$) and nonnegative integer sequence $\{N_k\}_{k=1}^K$ satisfying $\sum_{k=1}^K N_k=N$, a $2^N$-ary PAM constellation $\mathcal{G} = \{\pm (m-\frac{1}{2})d : m =1,\ldots, 2^{N -1}\}$, with $d$ being the minimum Euclidean distance between the constellation points, can be uniquely decomposed into the sum of $K$ sub-constellations $\{{\mathcal X}_k\}_{k=1}^K$ denoted by $\mathcal{G} = \uplus_{k=1}^K \mathcal{X}_k$, where
$\mathcal{X}_1 = \big\{\pm (m-\frac{1}{2})d\big\}_{m=1}^{2^{N_1 -1}}$,
and
$\mathcal{X}_k =
\big\{\pm (m-\frac{1}{2}) \times 2^{\sum_{\ell=1}^{k-1} N_\ell }d\big\}_{m=1}^{2^{N_k -1}}$ for $k \ge 2$.
2) \emph{\underline{{UDCG} with QAM Constellation}}:
For two positive integers $K$ and $N =N_I+N_Q$ ($N\ge K$), let $N_I$ and $N_Q$ be nonnegative integers denoting the numbers of bits assigned to the in-phase and quadrature components, respectively, and let $\{N_{I,k}\}_{k=1}^K$ and $\{N_{Q,k}\}_{k=1}^K$ denote two given nonnegative integer sequences satisfying $N_I =\sum_{k=1}^K N_{I,k}$ and $N_Q =\sum_{k=1}^K N_{Q,k}$ with $N_k=N_{I,k}+N_{Q,k}>0$. Then, there exists a PAM and QAM mixed constellation $\mathcal{Q} = \uplus_{k =1}^K \mathcal{X}_k$ such that $\mathcal{X}_k= \mathcal{X}_{I,k} \uplus j\mathcal{X}_{Q,k}$, with $j\mathcal{X}_{Q,k} =\{jx:x\in\mathcal{X}_{Q,k}\}$, where $\mathcal{Q}_I =\uplus_{k=1}^K \mathcal{X}_{I,k}$ and $\mathcal{Q}_{Q} =\uplus_{k=1}^K \mathcal{X}_{Q,k}$ are two PAM UDCGs according to the rate allocations $\{N_{I,k}\}_{k=1}^K$ and $\{N_{Q,k}\}_{k=1}^K$, respectively. ~\hfill{\rm $\blacksquare$}
\end{lemma}
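The PAM construction in Lemma~\ref{lemma:UDCG} is easy to instantiate and check numerically. In the sketch below (our own helper with toy rates), the sums of one point per sub-constellation are all distinct and reproduce the $2^N$-ary PAM grid:
\begin{verbatim}
import itertools
import numpy as np

def pam_udcg(N_list, d=1.0):
    # X_k = {+-(m - 1/2) * 2^{N_1+...+N_{k-1}} d : m = 1,...,2^{N_k - 1}}.
    subs, acc = [], 0
    for N_k in N_list:
        scale = 2**acc * d
        subs.append(np.array([s*(m - 0.5)*scale
                              for m in range(1, 2**(N_k - 1) + 1)
                              for s in (1, -1)]))
        acc += N_k
    return subs

subs = pam_udcg([2, 1, 1])               # N_1=2, N_2=1, N_3=1, so N = 4
sums = [sum(c) for c in itertools.product(*subs)]
assert len(set(np.round(sums, 9))) == len(sums)       # unique decomposition
ref = [s*(m - 0.5) for m in range(1, 2**3 + 1) for s in (1, -1)]
assert sorted(np.round(sums, 9)) == sorted(np.round(ref, 9))   # 16-PAM grid
\end{verbatim}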
With the concept of UDCG, we are now ready to propose a QAM division based UF-MUSTM for the considered nn-mMIMO system with a noncoherent ML receiver given in~\eqref{eqn:simplifiedMLreceiver}. The structure of each transmitted signal matrix is given by $\mathbf{X}_{2}=[\mathbf{x}_1, \mathbf{x}_2]=\mathbf{D}^{-1/2}\mathbf{\Pi} {\mathbf S}_2$, in which
\begin{align}\label{eqn:sigmtx}
\mathbf{S}_2&= [\mathbf{s}_1, \mathbf{s}_2]= \begin{bmatrix}
\frac{1}{\sqrt{p_1}}& \sqrt{p_1} s_{1}\\
\frac{1}{\sqrt{p_2}}& \sqrt{p_2} s_{2}\\
\vdots & \vdots\\
\frac{1}{\sqrt{p_K}} & \sqrt{p_K} s_{K}
\end{bmatrix}.
\end{align}
In our design, the diagonal matrix $\mathbf{D}^{-1/2}$ is used to compensate for the different large scale fading among various users. The vector $\mathbf{p}=[p_1, \ldots, p_K]$ is introduced to adjust the relative transmitting powers between all users, and $\mathbf{s}=[s_1, \ldots, s_K]$ is the information-carrying vector. The instantaneous power constraint can be given by $\mathbb E\{|x_{k,t}|^2\} \le P_k$, $k=1,\ldots, K$ and $t=1,2$.
We let $s_k\in \mathcal{X}_k$, where all $\mathcal{X}_k$'s constitute a UDCG with sum-QAM constellation $\mathcal{Q}$ such that $\mathcal{Q} =\uplus_{k=1}^K \mathcal{X}_k$ as defined in Lemma~\ref{lemma:UDCG}. The rate allocation among the $K$ users is based on the sum-decomposition such that $\sum_{k=1}^K N_k=N$ in which $N_k=N_{I,k}+N_{Q,k} =\log_2 (|\mathcal{X}_k|)$ denotes the bit rate of the user constellation $\mathcal{X}_k$.
The matrix $\mathbf{\Pi}=[\mathbf{e}_{\pi{(1)}}, \ldots, \mathbf{e}_{\pi(K)}]^T$ is a permutation matrix, where $\mathbf{e}_k$ denotes a standard basis column vector of length $K$ with 1 in the $k$-th position and 0 in other positions. $\pi:\{1,\ldots,K\} \to \{1,\ldots, K\}$ is a permutation over $K$ elements characterized by
$\begin{pmatrix}1 &2 &\ldots & K\\\pi(1) &\pi(2)&\ldots&\pi(K)\end{pmatrix}$. We also let $\pi^{-1}:\{1,\ldots,K\} \to \{1,\ldots, K\}$ be a permutation such that $\pi^{-1}(\pi(k))=k$ for $k=1,\ldots, K$. From the above definition, we immediately have $\mathbf{\Pi}^T\mathbf{\Pi}=\mathbf{I}_K$.
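Assembling a codeword from these ingredients is mechanical; a small sketch (our own helper, with $\pi$ given as an index array) is shown below. By construction, $\mathbf{X}_2^H\mathbf{D}\mathbf{X}_2=\mathbf{S}_2^H\mathbf{S}_2$, so the Gram matrix seen by the detector does not depend on $\mathbf{D}$ or $\mathbf{\Pi}$.
\begin{verbatim}
import numpy as np

def build_X2(s, p, beta, pi):
    # X_2 = D^{-1/2} Pi S_2, with S_2 = [1/sqrt(p), sqrt(p)*s] column-wise.
    S2 = np.column_stack([1/np.sqrt(p), np.sqrt(p)*s])
    Pi = np.eye(len(s))[pi]             # row k of Pi is e_{pi(k)}^T
    return np.diag(beta**-0.5) @ Pi @ S2
\end{verbatim}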
For the transmitted signal matrix $\mathbf{X}_{2}$, we have the following desired properties:
\begin{proposition}\label{prop:udcg}
Consider $\mathbf{X}_{2}=\mathbf{D}^{-1/2}{\mathbf \Pi}{\mathbf S}_2$ and $\widetilde{\mathbf{X}}_{2}=\mathbf{D}^{-1/2} \mathbf{\Pi}\widetilde{\mathbf S}_2$, where ${\mathbf S}_2$ and $\widetilde{\mathbf S}_2$ belong to ${\mathcal S}^{K\times 2}$ as described in Definition~\ref{def:udmustm}. If ${\mathbf X}^H_2{\mathbf D}{\mathbf X}_2=\widetilde{\mathbf X}^H_2{\mathbf D}\widetilde{\mathbf X}_2$, then we have ${\mathbf X}_2=\widetilde{\mathbf X}_2$.
\hfill{\rm $\blacksquare$}
\end{proposition}
The proof of Proposition~\ref{prop:udcg} is given in Appendix~\ref{append:prop2}.
\subsection{User-constellation Assignment and Power Allocation}
To further enhance the system reliability performance, we now optimize the user-constellation assignment $\pi$ and power allocation vector $\mathbf{p}$ for the proposed nn-mMIMO framework. For the transmitted signal matrix considered in~\eqref{eqn:sigmtx}, we have
\begin{align}\label{eqn:correlationmatrix}
&\mathbf{X}_2^H \mathbf{D}\mathbf{X}_2 +\sigma^2 \mathbf{I}_2
=\begin{bmatrix}
\mathbf{s}_1^H \mathbf{s}_1 +\sigma^2 & \mathbf{s}_1^H \mathbf{s}_2 \\
\mathbf{s}_2^H \mathbf{s}_1 & \mathbf{s}_2^H \mathbf{s}_2 +\sigma^2
\end{bmatrix}
=\begin{bmatrix}
\sum_{k=1}^K {1}/{p_k}+\sigma^2 & \sum_{k=1}^K s_k\\
\sum_{k=1}^K s_k^* & \sum_{k=1}^K p_k |s_k|^2 +\sigma^2
\end{bmatrix},\nonumber\\
&\widetilde{\mathbf{X}}_2^H \mathbf{D}\widetilde{\mathbf{X}}_2 +\sigma^2 \mathbf{I}_2
=\begin{bmatrix}
\mathbf{s}_1^H \mathbf{s}_1 +\sigma^2 & \mathbf{s}_1^H \tilde{\mathbf{s}}_2 \\
\tilde{\mathbf{s}}_2^H \mathbf{s}_1 & \tilde{\mathbf{s}}_2^H \tilde{\mathbf{s}}_2 +\sigma^2
\end{bmatrix}=\begin{bmatrix}
\sum_{k=1}^K {1}/{p_k}+\sigma^2 & \sum_{k=1}^K \tilde{s}_k\\
\sum_{k=1}^K \tilde{s}_k^* & \sum_{k=1}^K p_k |\tilde{s}_k|^2 +\sigma^2
\end{bmatrix}.
\end{align}
We can see from~\eqref{eqn:correlationmatrix} that $\mathbf{X}_2^H \mathbf{D}\mathbf{X}_2 +\sigma^2 \mathbf{I}_2$ and $\widetilde{\mathbf{X}}_2^H \mathbf{D}\widetilde{\mathbf{X}}_2 +\sigma^2 \mathbf{I}_2$ are independent of the permutation function $\pi$, but depend on the power allocation vector $\mathbf{p}=[p_1, \ldots, p_K]^T$, and the information carrying vectors $\mathbf{s}=[s_1, \ldots, s_K]^T$ and $\tilde{\mathbf{s}}=[\tilde{s}_1, \ldots, \tilde{s}_K]^T$.
In this case, the ML receiver given in~\eqref{eqn:simplifiedMLreceiver} can be further simplified as
\begin{align}
\widehat{\mathbf{X}}_2&={\arg\min}_{\mathbf{X}_2}~\frac{ a \|\mathbf{y}_2\|^2 + b \|\mathbf{y}_1\|^2 -2 \Re( c\mathbf{y}_1^H \mathbf{y}_2)}{ab-|c|^2}\nonumber + M \ln \big(ab-|c|^2\big),
\end{align}
where $a=\sum_{k=1}^K \frac{1}{p_k}+\sigma^2$, $b=\sum_{k=1}^K p_k |s_k|^2 +\sigma^2$, and $c=\sum_{k=1}^K s_k$ are the entries of the matrix in~\eqref{eqn:correlationmatrix}.
Inserting~\eqref{eqn:correlationmatrix} into~\eqref{eqn:KLdistanece}, and after some algebraic manipulations, we have
\begin{subequations}
\begin{align*}
&\mathcal{D}_{\rm KL}(\mathbf{X}_2 ||\widetilde{\mathbf X}_2)=
f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})+f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}}),
\end{align*}
\end{subequations}
where
\begin{align*}
&f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})=\frac{\big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\big ) \big (\sum_{k=1}^K p_k |s_k|^2 +\sigma^2 \big )-\big|\sum_{k=1}^K s_k\big|^2}{\big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\big )\big (\sum_{k=1}^K p_k |\tilde{s}_k|^2 +\sigma^2\big )-\big|\sum_{k=1}^K \tilde{s}_k\big|^2}\\
&\qquad\qquad\quad-\ln \left[ \frac{\big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\big )\big (\sum_{k=1}^K p_k |s_k|^2 +\sigma^2\big )-\big|\sum_{k=1}^K s_k\big|^2}{\big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\big )\big (\sum_{k=1}^K p_k |\tilde{s}_k|^2 +\sigma^2\big )-\big|\sum_{k=1}^K \tilde{s}_k\big|^2}\right]-1,\\
&f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})=\frac{\big|\sum_{k=1}^K s_k-\sum_{k=1}^K \tilde{s}_k\big|^2}{\big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\big )\big (\sum_{k=1}^K p_k |\tilde{s}_k|^2 +\sigma^2\big )-\big|\sum_{k=1}^K \tilde{s}_k\big|^2}.
\end{align*}
Recall that the power constraints are $\mathbb E\{|x_{k,t}|^2\} \le P_k$ for $k=1,\ldots, K$ and $t=1,2$. That is, for the first and second time slots, we have
$\mathbb E\{|x_{k,1}|^2\}=\frac{1}{p_{\pi(k)}\beta_{k}} \le P_k$, and $\mathbb E\{|x_{k,2}|^2\}=\frac{p_{\pi(k)} E_{\pi(k)}d^2}{\beta_k} \le P_k$, where $E_{k} =\frac{\mathbb E\{|s_{k}|^2\}}{d^2}$. The power constraints can thus be expressed as:
\begin{align}
\frac{1}{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}\le p_k \le \frac{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}{E_k d^2}, \quad k=1,\ldots, K.
\end{align}
Our design can now be formulated into the following optimization problem.
\begin{problem}\label{problem1} Find the optimal power control vector $\mathbf{p}$ and permutation $\pi$ under individual average power constraints:
\begin{subequations}\label{eqn:problem1}
\begin{align}
&\max_{\{\pi,\mathbf{p}\}}\,\min_{\{\mathbf{s}, \tilde{\mathbf{s}}:\,\mathbf{s}\neq \tilde{\mathbf{s}}\}} ~f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})+f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})\\
&{\rm ~s.t.},~\frac{1}{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}\le p_k \le \frac{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}{E_k d^2}, \quad k=1,\ldots, K.
\end{align}
\end{subequations}
\hfill{\rm $\blacksquare$}
\end{problem}
For Problem~\ref{problem1}, we first note that $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}}) \ge 0$ by applying the fundamental inequality of information theory~\cite[Lemma\,2.29]{Yeu08network}, where the equality $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})=0$ holds if and only if
\begin{align}\label{eqn:optcondition}
\Big(\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\Big)\Big(\sum_{k=1}^K p_k (|s_k|^2- |\tilde{s}_k|^2) \Big)-\Big(\big|\sum_{k=1}^K s_k\big|^2-\big|\sum_{k=1}^K \tilde{s}_k\big|^2\Big)=0.
\end{align}
Considering the fact that the joint minimization of $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$ and $f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$ over $\{\mathbf{s}, \tilde{\mathbf{s}}:\mathbf{s}\neq \tilde{\mathbf{s}}\}$ could be extremely tedious, we consider the minimization of $f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$ first, which yields a lower bound on $\mathcal{D}_{\rm KL}(\mathbf{X}_2 ||\widetilde{\mathbf X}_2)$ since $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})\ge 0$. We will later verify the condition under which the minima of $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$ and $f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$ are achieved simultaneously. Mathematically, we temporarily focus on solving the following optimization problem:
\begin{problem}\label{pbm:problem2} Find the power control coefficients $\mathbf{p}$ and permutation $\pi$, such that
\begin{subequations}\label{eqn:equalpowerdesigngeneral}
\begin{align}
&\max_{\{\pi, \mathbf{p}\}}\,\min_{\{\mathbf{s}, \tilde{\mathbf{s}}:\,\mathbf{s}\neq \tilde{\mathbf{s}}\}}~f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})=\frac{\big|\sum_{k=1}^K s_k-\sum_{k=1}^K \tilde{s}_k\big|^2}{(\sum_{k=1}^K \frac{1}{p_k}+\sigma^2)(\sum_{k=1}^K p_k |\tilde{s}_k|^2 +\sigma^2)-|\sum_{k=1}^K \tilde{s}_k|^2}\label{eqn:f2objec}\\
&~{\rm s.t.}~\frac{1}{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}\le p_k \le \frac{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}{E_k d^2}, \quad k=1,\ldots, K.\label{eqn:f2constraint}
\end{align}
\end{subequations}
\hfill{\rm $\blacksquare$}
\end{problem}
We first consider the inner optimization problem in Problem~\ref{pbm:problem2}. The denominator of~\eqref{eqn:f2objec} is independent of $\mathbf{s}$ and the numerator is minimized when the sum terms $\sum_{k=1}^K s_k$ and $\sum_{k=1}^K \tilde{s}_k$ are neighboring points on the sum-constellation, in which case the minimum value of $\big|\sum_{k=1}^K s_k-\sum_{k=1}^K \tilde{s}_k\big|^2$ is $d^2$. For notational simplicity, we define $\tilde{\mathbf{s}}=(\tilde{\mathbf{v}}+j\tilde{\mathbf{w}})d$, where $\tilde{\mathbf{v}}=[\tilde{v}_1, \ldots, \tilde{v}_K]^T$ and $\tilde{\mathbf{w}}=[\tilde{w}_1, \ldots, \tilde{w}_K]^T$. As the power constraint given in~\eqref{eqn:f2constraint} is independent of $\tilde{\mathbf{v}}$ and $\tilde{\mathbf{w}}$, Problem~\ref{pbm:problem2} can be split into two subproblems:
\begin{subequations}\label{eqn:optinphase}
\begin{align}
&\max_{\tilde{\mathbf{v}}}~f_3(\tilde{\mathbf{v}})=\Big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\Big )\Big (\sum_{k=1}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big )- \Big (\sum_{k=1}^K \tilde{v}_k\Big )^2\\
&{\rm ~s.t.}~\tilde{v}_k \in
\Big\{\pm \Big (m-\frac{1}{2}\Big ) \times 2^{\sum_{\ell=1}^{k-1} N_{I,\ell} }\Big\}_{m=1}^{2^{N_{I,k}-1}},\quad k=1,\ldots, K.
\end{align}
\end{subequations}
and
\begin{subequations}\label{eqn:optquadrature}
\begin{align}
&\max_{\tilde{\mathbf{w}}}~f_4(\tilde{\mathbf{w}})=\Big (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\Big )\Big (\sum_{k=1}^K p_k \tilde{w}_k^2 +\frac{\sigma^2}{d^2}\Big )- \Big (\sum_{k=1}^K \tilde{w}_k\big )^2\\
&{\rm ~s.t.}~\tilde{w}_k \in \Big\{\pm \Big (m-\frac{1}{2}\Big ) \times 2^{\sum_{\ell=1}^{k-1} N_{Q,\ell} }\Big\}_{m=1}^{2^{N_{Q,k}-1}},\quad k=1,\ldots, K.
\end{align}
\end{subequations}
In the following, we only present the maximization of $f_3(\tilde{\mathbf{v}})$ over $\tilde{\mathbf{v}}$ in~\eqref{eqn:optinphase}; the maximization of $f_4(\tilde{\mathbf{w}})$ over $\tilde{\mathbf{w}}$ given in~\eqref{eqn:optquadrature} follows similarly and hence is omitted for brevity. We now rewrite the objective function~(\ref{eqn:optinphase}a) as
\begin{subequations}
\begin{align*}
f_3(\tilde{\mathbf{v}})=&\bigg (\frac{1}{p_1}+\Big (\sum_{k=2}^K \frac{1}{p_k} +\sigma^2 \Big )\bigg )\Big (p_1 \tilde{v}_1^2 +\big (\sum_{k=2}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\big )\Big )- \Big (\tilde{v}_1+\sum_{k=2}^K \tilde{v}_k\Big )^2\\
=&\frac{1}{p_1}\Big (\sum_{k=2}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big )+ p_1 \tilde{v}_1^2\Big (\sum_{k=2}^K \frac{1}{p_k}+\sigma^2\Big )+\Big (\sum_{k=2}^K \frac{1}{p_k}+\sigma^2\Big )\Big (\sum_{k=2}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big )\\
& -2 \tilde{v}_1\big (\sum_{k=2}^K \tilde{v}_k\big )-\big (\sum_{k=2}^K \tilde{v}_k\big )^2\\
=&\frac{1}{p_1}\Big (\sum_{k=2}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big ) +\frac{1}{p_2}\Big (\sum_{k=3}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big )+ p_1 \tilde{v}_1^2\Big (\sum_{k=2}^K \frac{1}{p_k}+\sigma^2\Big )+ p_2 \tilde{v}_2^2\Big (\sum_{k=3}^K \frac{1}{p_k}+\sigma^2\Big )\\
&+\Big (\sum_{k=3}^K \frac{1}{p_k}+\sigma^2\Big )\Big (\sum_{k=3}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big)-\big(\sum_{k=3}^K \tilde{v}_k\big)^2-2 \tilde{v}_1\big(\sum_{k=2}^K \tilde{v}_k\big)-2 \tilde{v}_2\big(\sum_{k=3}^K \tilde{v}_k\big)\\
=&\sum_{\ell=1}^{K-1}\frac{1}{p_\ell} \Big(\sum_{k=\ell+1}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big)+\sum_{\ell=1}^{K-1}p_\ell \tilde{v}_\ell^2 \Big(\sum_{k=\ell+1}^K \frac{1}{p_k} +\sigma^2\Big) +\frac{\sigma^2}{p_K d^2} \\
&+p_K \tilde{v}_K^2\sigma^2 +\frac{\sigma^4}{d^2}-2\sum_{\ell=1}^{K-1} \tilde{v}_\ell\sum_{k=\ell+1}^K \tilde{v}_k\\
=&f_5(\tilde{\mathbf{v}})-f_6(\tilde{\mathbf{v}}),
\end{align*}
\end{subequations}
where $f_5(\tilde{\mathbf{v}})=\sum_{\ell=1}^{K-1}\frac{1}{p_\ell} \Big(\sum_{k=\ell+1}^K p_k \tilde{v}_k^2 +\frac{\sigma^2}{d^2}\Big)+\sum_{\ell=1}^{K-1}p_\ell \tilde{v}_\ell^2 \Big(\sum_{k=\ell+1}^K \frac{1}{p_k} +\sigma^2\Big) +\frac{\sigma^2}{p_K d^2}+p_K \tilde{v}_K^2\sigma^2 +\frac{\sigma^4}{d^2}$, and $f_6(\tilde{\mathbf{v}})=2\sum_{\ell=1}^{K-1} \tilde{v}_\ell\sum_{k=\ell+1}^K \tilde{v}_k$.
We then maximize $f_5(\tilde{\mathbf{v}})-f_6(\tilde{\mathbf{v}})$. In what follows, we will show that the maximization of $f_5(\tilde{\mathbf{v}})$ and the minimization of $f_6(\tilde{\mathbf{v}})$ can be achieved simultaneously. First, we can observe that the maximization of $f_5(\tilde{\mathbf{v}})$ is achieved when the magnitudes $|\tilde{v}_k|$, $k=1,\ldots, K$, are maximized for the signal transmitted by every user. We next consider the minimization of $f_6(\tilde{\mathbf{v}})$. To that end, we have,
\begin{align}
\frac{\partial f_6(\tilde{\mathbf{v}})}{\partial \tilde{v}_k} =2\sum_{\ell=1, \ell\neq k}^K \tilde{v}_\ell, \quad k=1, \ldots, K.
\end{align}
The optimal value can be attained by enumerating the two cases $\tilde{v}_K \in \Big\{\big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell }\Big\}_{m=1}^{2^{N_K-1}}$ and $\tilde{v}_K \in \Big\{-\big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell }\Big\}_{m=1}^{2^{N_K-1}}$ (for brevity, we write $N_k$ for $N_{I,k}$ throughout this enumeration).
\begin{enumerate}
\item If $\tilde{v}_K \in
\Big\{\big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell }\Big\}_{m=1}^{2^{N_K-1}}$, then for any $\tilde{v}_k \in
\Big\{\pm \big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{k-1} N_\ell }\Big\}_{m=1}^{2^{N_k-1}}$, $k=1,\ldots, K-1$, we have
\begin{align*}
&\frac{\partial f_6(\tilde{\mathbf{v}})}{\partial \tilde{v}_k}
=2 \tilde{v}_K+2\sum_{\ell=1, \ell\neq k}^{K-1} \tilde{v}_\ell\ge 2 \min \tilde{v}_K+ 2\min_{\{\tilde{v}_\ell\}_{\ell=1}^{K-1}} \sum_{\ell=1, \ell\neq k}^{K-1} \tilde{v}_\ell\\
&> 2^{\sum_{\ell=1}^{K-1} N_\ell } + 2\min_{\{ \tilde{v}_\ell\}_{\ell=1}^{K-1}} \sum_{\ell=1}^{K-1} \tilde{v}_\ell= 2^{\sum_{\ell=1}^{K-1} N_\ell} - 2 \Big(2^{\sum_{\ell=1}^{K-1} N_\ell-1}-\frac{1}{2}\Big)=1.
\end{align*}
In this case, the optimal value of $\{\tilde{v}_k\}_{k=1}^{K-1}$ to minimize $f_6(\tilde{\mathbf{v}})$ is given by
\begin{align*}
\tilde{v}_k= -\big(2^{N_k}-1\big) \times 2^{\sum_{\ell=1}^{k-1} N_\ell -1}, {\rm~for~}k=1, \ldots, K-1.
\end{align*}
Note that $\frac{\partial f_6(\tilde{\mathbf{v}})}{\partial \tilde{v}_K}=2\sum_{\ell=1}^{K-1} \tilde{v}_\ell<0$, then for $\tilde{v}_K \in
\Big\{\big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell }\Big\}_{m=1}^{2^{N_K-1}}$, the optimal value of $\tilde{v}_K$ is $\tilde{v}_K=\big(2^{N_K}-1\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell -1}$.
\item If $\tilde{v}_K \in \Big\{-\big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell }\Big\}_{m=1}^{2^{N_K-1}}$, for $\tilde{v}_k \in
\Big\{\pm \big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{k-1} N_\ell }\Big\}_{m=1}^{2^{N_k-1}}$, $k=1,\ldots, K-1$, we have
\begin{align*}
&\frac{\partial f_6(\tilde{\mathbf{v}})}{\partial \tilde{v}_k}
=2 \tilde{v}_K+2\sum_{\ell=1, \ell\neq k}^{K-1} \tilde{v}_\ell
\le 2 \tilde{v}_K+ 2\max_{\{ \tilde{v}_\ell\}_{\ell=1}^{K-1}} \sum_{\ell=1, \ell\neq k}^{K-1} \tilde{v}_\ell\\
&< -2^{\sum_{\ell=1}^{K-1} N_\ell }+ 2\max_{\{\tilde{v}_\ell\}_{\ell=1}^{K-1}} \sum_{\ell=1}^{K-1} \tilde{v}_\ell= -2^{\sum_{\ell=1}^{K-1} N_\ell }+ 2\Big( 2^{\sum_{\ell=1}^{K-1} N_\ell-1}-\frac{1}{2}\Big)=-1.
\end{align*}
In this case, the optimal value of $\{\tilde{v}_k\}_{k=1}^{K-1}$ to minimize $f_6(\tilde{\mathbf{v}})$ is given by
\begin{align*}
\tilde{v}_k= \big(2^{N_k}-1\big) \times 2^{\sum_{\ell=1}^{k-1} N_\ell -1}, {\rm~for~}k=1, \ldots, K-1.
\end{align*}
In addition, we note that $\frac{\partial f_6(\tilde{\mathbf{v}})}{\partial \tilde{v}_K}=2\sum_{\ell=1}^{K-1} \tilde{v}_\ell>0$; then for $\tilde{v}_K \in
\Big\{-\big(m-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell }\Big\}_{m=1}^{2^{N_K-1}}$, the optimal value of $\tilde{v}_K$ is $\tilde{v}_K=-\big(2^{N_K}-1\big) \times 2^{\sum_{\ell=1}^{K-1} N_\ell -1}$.
\end{enumerate}
Overall, the minimum value of $f_6(\tilde{\mathbf{v}})=2\sum_{\ell=1}^{K-1} \tilde{v}_\ell\sum_{k=\ell+1}^K \tilde{v}_k$ can be achieved by $\tilde{\mathbf{v}}^\star=[\tilde{v}_1^\star, \ldots, \tilde{v}_K^\star]^T$ where
\begin{align}\label{eqn:optcase1}
\tilde{v}_k^\star =\begin{cases}-\big(2^{N_{I,k} -1}-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{k-1} N_{I,\ell} }, &{\rm for~}k=1,\ldots, K-1;\\
\big(2^{N_{I,K} -1}-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_{I,\ell} }, &{\rm for~} k=K,
\end{cases}
\end{align}
or equivalently
\begin{align}\label{eqn:optcase2}
\tilde{v}_k^\star =\begin{cases}\big(2^{N_{I,k} -1}-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{k-1} N_{I,\ell} }, &{\rm for~}k=1,\ldots, K-1;\\
-\big(2^{N_{I,K} -1}-\frac{1}{2}\big) \times 2^{\sum_{\ell=1}^{K-1} N_{I,\ell} }, &{\rm for~} k=K.
\end{cases}
\end{align}
For both cases, we can observe that $f_5(\tilde{\mathbf{v}})$ is also maximized by $\tilde{\mathbf{v}}^\star$. Due to the symmetry of the solutions given in~\eqref{eqn:optcase1} and~\eqref{eqn:optcase2}, in what follows, we only consider the solution given in~\eqref{eqn:optcase1}. In this case, the sum-constellation for achieving the inner minimum is
\begin{align}\label{eqn:optsumcons}
&\sum_{k=1}^K \tilde{s}_k=\sum_{k=1}^K(\tilde{v}_k+j \tilde{w}_k)d\nonumber\\
&=\bigg [\frac{1+j}{2}+\big (2^{N_{I,K} -1}-1\big ) \times 2^{\sum_{\ell=1}^{K-1} N_{I,\ell} }+j\big (2^{N_{Q,K} -1}-1\big ) \times 2^{\sum_{\ell=1}^{K-1} N_{Q,\ell} }\bigg ] d.
\end{align}
We then can have the following remark:
\begin{remark}
When $N_{I,K}=N_{Q,K}=1$, the solution given in~\eqref{eqn:optcase1}, which minimizes $f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$, also minimizes $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$. \hfill$\Box$
\end{remark}
\emph{Proof:} For the solution of $\tilde{\mathbf{s}}$ given in~\eqref{eqn:optcase1}, the sum constellation is given in~\eqref{eqn:optsumcons}. When $N_{I,K}=N_{Q,K}=1$, we have $\sum_{k=1}^K \tilde{s}_k=\frac{1+j}{2}d$, and we can let $\sum_{k=1}^K s_k=\frac{1-j}{2}d$. Inserting them back into~\eqref{eqn:optcondition}, we have
$f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})=0$. That is, the values that minimize $f_2(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$ also minimize $f_1(\mathbf{p},\mathbf{s}, \tilde{\mathbf{s}})$.
\hfill$\Box$
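To make the role of the scaling factors $2^{\sum_\ell N_\ell}$ concrete, the following Python sketch (our own illustration with hypothetical helper names; it is not code from the paper) brute-forces the unique decodability of the sum-constellation for small bit loads: every tuple of sub-constellation points must map to a distinct sum.
\begin{verbatim}
import itertools

# Brute-force check (illustration only): PAM sub-constellations with points
# +/-(m - 1/2) * 2^{N_1 + ... + N_{k-1}} yield a uniquely decodable sum.
def sub_constellation(N_k, offset_bits):
    scale = 2 ** offset_bits
    half = [(m - 0.5) * scale for m in range(1, 2 ** (N_k - 1) + 1)]
    return [-x for x in half] + half

def uniquely_decodable(bit_loads):
    offsets, total = [], 0
    for N_k in bit_loads:
        offsets.append(total)
        total += N_k
    subs = [sub_constellation(N, off) for N, off in zip(bit_loads, offsets)]
    sums = [sum(t) for t in itertools.product(*subs)]
    return len(sums) == len(set(sums))

print(uniquely_decodable([2, 2]))     # True, K = 2
print(uniquely_decodable([1, 2, 2]))  # True, K = 3 with mixed bit loads
\end{verbatim}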
We now consider the outer optimization problem, whose objective function is monotonically decreasing in the term $\frac{ab}{d^2}$.
The optimization problem can be reformulated as
\begin{subequations}\label{eqn:poweropt1}
\begin{align}
&\min_{\pi, \mathbf{p}}~\bigg (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\bigg )\bigg (\sum_{k=1}^K p_k E_k +\frac{\sigma^2}{d^2}\bigg )\label{eqn18:objectivefun}\\
&\quad {\rm s.t.}~
\frac{1}{p_k}\le P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)},~ p_k E_k d^2\le P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}, \quad k=1,\ldots, K.\label{eqn18:const3}
\end{align}
\end{subequations}
The optimization problem in~\eqref{eqn:poweropt1} can be solved by first fixing $\pi$ to find the optimal value of $\mathbf{p}$, and then performing a further optimization over $\pi$. To that end, we can observe from~\eqref{eqn18:const3} that, for any given $\pi$, the feasible range of $d^2$ is given by $d^2 \le \frac{P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}}{p_k E_k} \le \frac{P_{\pi^{-1}(k)}^2 \beta_{\pi^{-1}(k)}^2}{E_k}$ for $k=1,\ldots, K$, or equivalently $d^2 \le \min\,\Big\{\frac{P_{\pi^{-1}(k)}^2 \beta_{\pi^{-1}(k)}^2}{E_k}\Big\}_{k=1}^K$.
By the Cauchy--Schwarz inequality, we have
\begin{align*}
&\bigg (\sum_{k=1}^K \frac{1}{p_k}+\sigma^2\bigg )\bigg (\sum_{k=1}^K p_k E_k +\frac{\sigma^2}{d^2}\bigg )\\
&\stackrel{(a)}{\ge} \bigg (\sum_{k=1}^K \frac{1}{\sqrt{p_k}} \sqrt{p_k E_k} +\frac{\sigma^2}{d}\bigg )^2=\bigg (\sum_{k=1}^K \sqrt{E_k} +\frac{\sigma^2}{d}\bigg )^2,
\end{align*}
where the equality in~$(a)$ holds if and only if $\frac{\sqrt{p_k E_k}}{{1}/{\sqrt{p_k}}}=\frac{1}{d}$, for $k=1,\ldots, K$, or equivalently, the optimal power allocation is $\mathbf{p}^\star=[p_1^\star, \ldots, p_K^\star]^T$ where $p_k^\star=\frac{1}{\sqrt{E_k}\, d}$ for $k=1, \ldots, K$. Our next task is to check whether the power constraints on $p_k^\star$ given in~\eqref{eqn18:const3} are violated. For $d^2 \le \min\,\Big\{\frac{P_{\pi^{-1}(k)}^2 \beta_{\pi^{-1}(k)}^2}{E_k}\Big\}_{k=1}^K$, we have
\begin{align}
&\frac{1}{p_k^\star}=\sqrt{E_k}d \le P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)},\quad p_k^\star E_k d^2=\sqrt{E_k} d \le P_{\pi^{-1}(k)} \beta_{\pi^{-1}(k)}, {\rm~for~}k=1,\ldots, K,
\end{align}
i.e., no power constraints are violated by $\mathbf{p}^\star$. Finally, the optimization problem over $\pi$ is given by
\begin{align}
&\min_{\pi}~\sum_{k=1}^{K} \sqrt{E_k}+\frac{\sigma^2}{d}
\quad {\rm s.t.}~d^2\le \frac{P_{\pi^{-1}(k)}^2 \beta_{\pi^{-1}(k)}^2}{E_k}, \quad k=1, \ldots, K.
\end{align}
Or equivalently, we aim to solve
\begin{align}
&\max_{\pi}~d
\quad {\rm s.t.}~d^2 \le \frac{P_k^2 \beta_k^2}{E_{\pi(k)}} , \quad k=1, \ldots, K.
\end{align}
Before proceeding on, we establish the following lemma.
\begin{lemma}\label{lemma:orderedseq}
Suppose that two positive sequences $\{a_n\}_{n=1}^N$ and $\{b_n\}_{n=1}^N$ are both arranged in nondecreasing order. If we let $\mathcal{U}$ denote the set containing all possible permutations of $1,2,\cdots, N$, then the solution to
the optimization problem,
$\max_{\pi \in \mathcal{U}} \min \Big\{ \frac{a_n}{b_{\pi(n)}}\Big\}_{n=1}^N$,
is given by $\pi^\star(n)=n$ for $n=1,2,\cdots, N$.~\hfill{\rm $\blacksquare$}
\end{lemma}
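Before the formal proof in the appendix, the lemma can be sanity-checked numerically; the short Python sketch below (our own illustration) exhaustively compares the identity permutation against all others on random ordered sequences.
\begin{verbatim}
import itertools, random

def min_ratio(a, b, pi):
    return min(a[n] / b[pi[n]] for n in range(len(a)))

random.seed(0)
for _ in range(200):
    N = random.randint(2, 6)
    a = sorted(random.uniform(0.1, 10) for _ in range(N))
    b = sorted(random.uniform(0.1, 10) for _ in range(N))
    best = max(min_ratio(a, b, pi)
               for pi in itertools.permutations(range(N)))
    # The identity permutation attains the maximum, as the lemma claims.
    assert min_ratio(a, b, tuple(range(N))) == best
print("Lemma verified on 200 random instances.")
\end{verbatim}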
By Lemma~\ref{lemma:orderedseq}, and noting that $P_1\beta_1 \le \ldots \le P_K\beta_K$, to maximize $d$ we should let $E_{\pi(1)} \le \ldots \le E_{\pi(K)}$, i.e., the average powers of the sub-constellations should be arranged in ascending order. The above discussion can be summarized in the following theorem:
\begin{theorem}
Suppose that the users are ordered such that $P_1\beta_1 \le P_2\beta_2 \le \ldots \le P_K\beta_K$, and define $d^\star =\min_{k} \big\{\frac{P_k\beta_k}{\sqrt{E_k}}\big\}_{k=1}^K$. Then the optimal transmit powers of all users are given by $\mathbf{p}^\star=[\frac{1}{\sqrt{E_1} d^\star}, \ldots, \frac{1}{\sqrt{E_K} d^\star}]^T$, and the optimal permutation matrix is the identity matrix, i.e., $\mathbf{\Pi}=\mathbf{I}_K$.
\hfill{\rm $\blacksquare$}
\end{theorem}
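The closed-form solution of the theorem is straightforward to implement; the following Python sketch (our own illustration, with hypothetical input values) orders the users by $P_k\beta_k$, assigns the sub-constellations in ascending order of average power, and computes $d^\star$ and $\mathbf{p}^\star$.
\begin{verbatim}
import math

def optimal_allocation(P, beta, E):
    # Sort users by P_k * beta_k and pair them with sub-constellations of
    # ascending average power E (the identity permutation after sorting).
    order = sorted(range(len(P)), key=lambda k: P[k] * beta[k])
    E_sorted = sorted(E)
    d_star = min(P[k] * beta[k] / math.sqrt(E_sorted[i])
                 for i, k in enumerate(order))
    p_star = {k: 1.0 / (math.sqrt(E_sorted[i]) * d_star)
              for i, k in enumerate(order)}
    return d_star, p_star

# Example with three hypothetical users.
d_star, p_star = optimal_allocation(P=[0.3, 0.2, 0.4],
                                    beta=[1.0, 0.5, 2.0],
                                    E=[0.5, 2.5, 10.5])
print(d_star, p_star)
\end{verbatim}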
\section{Simulation Results and Discussions}
In this section, computer simulations are performed to demonstrate the superior performance of our design in comparison with existing benchmarks. In our simulations, the small-scale fading is assumed to be normalized Rayleigh fading. The path-loss as a function of the transmission distance $d$ in the antenna far-field can be approximated by
\begin{align*}
10\log_{10} L = 20\log_{10} \Big(\frac{\lambda}{4\pi d_0}\Big ) -10\gamma \log_{10} \Big(\frac{d}{d_0}\Big) - \psi,\quad d\ge d_0,
\end{align*}
where $d_0=100$\,m is the reference distance, $\lambda=v_c/f_c$ ($f_c=3$\,GHz) is the carrier wavelength, and $\gamma=3.71$ is the path-loss exponent~\cite{goldsmith05}. In the above model, $\psi \sim \mathcal{N}(0, \sigma_{\psi}^2)$ ($\sigma_{\psi} =3.16$) is the Gaussian random shadowing attenuation resulting from the blockage of objects. For the receiver, we assume that the noise power is
$10\log_{10}\sigma^2 =10\log_{10} (N_0 B_w) = 10\log_{10} \big(3.2\times 10^{-13}\big) \approx -125\,{\rm dBW}$, where the channel bandwidth is $B_w=20$\,MHz, and $N_0= k_0 T_0 10^{F_0/10}$ is the power spectral density of the noise, with $k_0=1.38\times10^{-23}$\,J/K being Boltzmann's constant, reference temperature $T_0 =290$\,K (``room temperature''), and noise figure $F_0=6$\,dB. For clarity, all the simulation parameters are summarized in Table~\ref{tbl:simpar}.
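The noise power follows directly from the stated constants; as a quick sanity check (our own illustration):
\begin{verbatim}
import math

k0 = 1.38e-23   # Boltzmann constant [J/K]
T0 = 290.0      # reference temperature [K]
F0 = 6.0        # noise figure [dB]
Bw = 20e6       # channel bandwidth [Hz]

sigma2 = k0 * T0 * 10 ** (F0 / 10) * Bw
print(sigma2)                    # ~3.2e-13 W
print(10 * math.log10(sigma2))   # ~ -125 dBW
\end{verbatim}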
\begin{table}[ht]
\caption{Simulation Parameters}
\begin{center}
\begin{tabular}{|c|c|}
\hline
{Cell radius $d_{\rm max}$} & $ 1000$ m \\ \hline
{Reference distance $d_0$} & 100 m\\ \hline
{Carrier frequency $f_c$}& 3 GHz\\ \hline
{Channel bandwidth $B_w$}& 20 MHz\\ \hline
{Pathloss exponent $\gamma$} & 3.71 \\ \hline
{Reference temperature / Noise figure} & 290\,K / 6\,dB\\ \hline
{Standard deviation of shadow fading} $\sigma_{\psi}$ & 3.16\\ \hline
\end{tabular}
\end{center}
\label{tbl:simpar}
\end{table}
\begin{figure}
\centering
\flushleft
\resizebox{16cm}{!}{\includegraphics{figure/comparision.pdf}}
\centering
\caption{Comparison of the proposed scheme with the MED detector in terms of the average BER of all users versus $M$; 4-QAM is used by all users under the average power constraint.}
\label{fig:avrbervsdist}
\end{figure}
\begin{figure}
\centering
\flushleft
\resizebox{16cm}{!}{\includegraphics{figure/vstraining.pdf}}
\centering
\caption{Comparison between the proposed scheme and the orthogonal-training-based method with $N=3$ users and 4 time slots.}
\label{fig:trainingbsproposed}
\end{figure}
\begin{figure}[ht]
\centering
\flushleft
\resizebox{16cm}{!}{\includegraphics{figure/MLvshanzo.pdf}}
\centering
\caption{Comparison between the proposed design and the noncoherent receiver of~\cite{Hanzo15}, using 8-QAM and 8-DPSK, respectively.}
\label{fig:MLvshanzo}
\end{figure}
We first examine the error performance of the proposed design under the instantaneous average power constraint for different numbers of users, as illustrated in Fig.\,\ref{fig:avrbervsdist}. It is assumed that the average power upper bound is ${P}_k=316$\,mW (25\,dBm), $\forall k$. All $K$ users are assumed to be uniformly distributed within a cell of radius $d$. It can be observed that, as the number of users increases, the error performance deteriorates quickly due to the mutual interference among users; more BS antennas are then needed to achieve the same average BER. We also compare our design with the {max-min Euclidean distance (MED)}-based method proposed in~\cite{Goldsmith16tit,Zhang2018JSAC}. Since our scheme uses two time slots while the MED methods need only one, we assume that 2-PAM constellations are adopted by all users for the MED-based design so that the per-slot rates are equal. We can see from the figure that the proposed approach significantly outperforms the MED-based method in terms of BER in all simulated cases.
We next compare the error performance of the proposed framework with the conventional zero-forcing (ZF) receiver using orthogonal training sequences. The results are shown in Fig.~\ref{fig:trainingbsproposed}. In this simulation, we consider a system setup with $N=3$ users. For the orthogonal-training-based method, at least 4 time slots (3 for training and 1 for data transmission) are needed, and we assume that the channel coefficients are quasi-static within these consecutive time slots. As 4-QAM is adopted by each user in the proposed scheme, 64-QAM is correspondingly adopted for the training-based approach in order to make a fair comparison. For the channel training algorithm, we employ the widely-used least-squares (LS) channel estimator~\cite{Gershman06}. It can be observed from Fig.\,\ref{fig:trainingbsproposed} that, when the antenna number $M$ is small and the channel gain is large (i.e., the distance $d$ is small), the training-based method outperforms the proposed design in terms of BER. However, when the antenna number is relatively large, the proposed design achieves better error performance, especially at the cell edge. The rationale is that \emph{without reliable CSI, especially in low signal-to-noise ratio (SNR) regimes, coherent detection suffers from inferior decoding performance.}
It is finally worth mentioning that a related noncoherent multiuser massive MIMO system was designed in~\cite{Hanzo15} for differential phase shift keying (DPSK) constellations, where the transmitted information of all users is modulated onto the phase offset between successive symbols. In fact, DBPSK and DQPSK constellations with the optimal scale between the sub-constellations are special cases of our QAM division design. However, for larger constellations such as 8-DPSK, our design has a larger normalized minimum Euclidean distance. The sum-constellation resulting from two 8-DPSK sub-constellations is no longer a regular constellation, as studied in~\cite{Harshan11}. Also, in~\cite{Hanzo15}, the actual transmitted power of each user is not given explicitly, and hence the optimal power allocation under both the average and the peak power constraints is hard to evaluate. To make a comparison, especially when the constellation size is large, we compare the 8-DPSK constellation suggested in~\cite{Hanzo15}, with the optimal scale of 1.765 between the two sub-constellations, against the rectangular 8-QAM constellation in our design. The error performance of~\cite{Hanzo15} and of our proposed design with two users, using 8-DPSK and 8-QAM respectively, is studied in Fig.\,\ref{fig:MLvshanzo}. It can be observed that our scheme with 8-QAM sub-constellations achieves better error performance than~\cite{Hanzo15} with the 8-DPSK constellation, since the normalized minimum distance of our constellation is larger. It should also be pointed out that the resulting sum-constellation in~\cite{Hanzo15} is not a regular constellation, and it must be either computed or stored in advance; its detection typically requires an exhaustive search over the whole constellation. In addition, the optimal power scale for general DPSK needs to be optimized by numerical methods. In contrast, our design leads to a regular QAM sum-constellation. Furthermore, the optimal transmit powers of all users and the sub-constellation assignment among them have been provided in closed form.
\section{Conclusions}
{In this paper, a non-orthogonal and noncoherent massive MIMO (nn-mMIMO) framework towards enabling scalable URLLC applications has been developed based on a new uniquely-factorable multiuser space-time modulation (UF-MUSTM) scheme. For the MUSTM code design, a simple yet systematic construction method based on the concept of QAM division has been devised. Assuming that the large scale fading coefficients are known at the base station, the detailed transmission scheme and the corresponding noncoherent detector have been carefully designed. We further optimized the proposed design framework by jointly optimizing the constellations of multiple users. Specifically, we implemented a max-min Kullback-Leibler (KL) {divergence}-based design criterion, based on which we jointly optimize the transmitted powers of all users and the sub-constellation assignment among them. Simulations demonstrated that the optimized nn-mMIMO framework has better reliability performance compared to the state-of-the-art benchmarking schemes.}
\section*{Appendix}
\begin{appendices}
\subsection{Proof of Proposition~\ref{proposition:UFCM}}\label{append:prop1}
We first show the sufficiency of Proposition~\ref{proposition:UFCM} for the considered massive MIMO system with an unlimited number of antennas. By Assumption~\ref{assumption1} on the channel statistics and the law of large numbers, we have
$\lim_{M\to \infty} \frac{\mathbf{G}^H \mathbf{G} }{M}=\mathbf{I}_K$ and $
\lim_{M\to \infty} \frac{\mathbf{\Xi}^H \mathbf{\Xi}}{M} =\sigma^2\mathbf{I}_T$. Now, the receiver can employ a simple correlation-based detector by calculating $\mathbf{R}_M= \frac{\mathbf{Y}^H \mathbf{Y}}{M}$. When the antenna array size goes to infinity, we have $\lim_{M\to \infty} \mathbf{R}_M -\sigma^2 \mathbf{I}_T=\mathbf{R}$, where ${\mathbf R}$ is a $T\times T$ Hermitian positive semidefinite matrix such that
$\mathbf{R} = \mathbf{X}_T^H \mathbf{D}\mathbf{X}_T$. Now, $\mathbf{X}_T$ can be uniquely determined by an exhaustive search since for any $\mathbf{X}_T, \widetilde{\mathbf{X}}_T\in {\mathcal M}_{K\times T}$ with $\mathbf{X}_T^H \mathbf{D}\mathbf{X}_T= \widetilde{\mathbf{X}}_T^H \mathbf{D}\widetilde{\mathbf{X}}_T$, we have ${\mathbf{X}}_T=\widetilde{\mathbf{X}}_T$.
Next, we show the necessity of Proposition~\ref{proposition:UFCM}. Suppose that there exist $\mathbf{X}_T, \widetilde{\mathbf{X}}_T\in {\mathcal M}_{K\times T}$ such that ${\mathbf{X}}_T \neq \widetilde{\mathbf{X}}_T$ with $ \mathbf{X}_T^H \mathbf{D}\mathbf{X}_T= \widetilde{\mathbf{X}}_T^H \mathbf{D}\widetilde{\mathbf{X}}_T$. As a consequence, ${\mathbf{X}}_T$ and $\widetilde{\mathbf{X}}_T$ have exactly the same likelihood function, as shown in Eq.\,\eqref{eqn:MLdetector}, and hence they are indistinguishable by the ML detector, so that reliable recovery of the transmitted signals cannot be guaranteed. This finishes the proof of Proposition~\ref{proposition:UFCM}.\hfill$\Box$
\subsection{Proof of Proposition~\ref{prop:udcg}}\label{append:prop2}
Let ${\mathbf X}_2=\mathbf{D}^{-1/2}{\mathbf S}_2$ and $\widetilde{\mathbf X}_2=\mathbf{D}^{-1/2}\widetilde{\mathbf S}_2$. Then, if ${\mathbf X}_2^H{\mathbf D}{\mathbf X}_2=\widetilde{\mathbf X}_2^H{\mathbf D}\widetilde{\mathbf X}_2$, and noting that $\mathbf{\Pi}^H\mathbf{\Pi}=\mathbf{I}_K$, we have ${\mathbf S}_2^H{\mathbf S}_2=\widetilde{\mathbf S}_2^H\widetilde{\mathbf S}_2$. As a consequence, we have
$\sum_{k=1}^K s_k =\sum_{k=1}^K \tilde{s}_k$, where $s_k, \tilde{s}_k\in \mathcal{X}_k$. Since $\{{\mathcal X}_k\}_{k=1}^K$ form a UDCG, by Lemma~\ref{lemma:UDCG} we obtain $s_k =\tilde{s}_k$, or equivalently, $\mathbf{S}_2=\widetilde{\mathbf{S}}_2$, and hence ${\mathbf X}_2 =\widetilde{\mathbf X}_2$.
This completes the proof of Proposition~\ref{prop:udcg}. \hfill $\Box$
\subsection{Proof of Lemma~\ref{lemma:orderedseq}} \label{appendix:orderedseq}
Let $m=\arg \min_{k=1,2, \ldots, N}\big\{ \frac{a_k}{b_{\pi^*(k)}}\big\}= \arg \min_{k} \big\{\frac{a_k}{b_k}\big\}$. In other words, $m$ is the index such that $q_m=\frac{a_m}{b_m} =\min_{k=1,2,\ldots, N} \big\{\frac{a_k}{b_k}\big\}$. Now, we want to show that $q_m =\max_{(\pi(1), \pi(2), \ldots, \pi(N)) \in \mathcal{U}} \min_{k=1,2,\cdots,N} \big\{\frac{a_k}{b_{\pi(k)}}\big\}$.
To that end, we divide $\mathcal{U}$ into two mutually exclusive subsets, i.e., $\mathcal{P}=\{(\pi(1), \pi(2), \ldots, \pi(N)) |\pi(m) \neq m \}$ and $\mathcal{U} \setminus \mathcal{P} =\{(\pi(1), \pi(2), \ldots, \pi(N))|\pi(m) = m \}$. Consider the following cases:
\begin{itemize}
\item $ (\pi'(1), \pi'(2), \ldots, \pi'(N)) \in \mathcal{P}$. In this case,
there exists an $\ell \neq m$ such that $\pi'(\ell)=m$ and hence $b_{\pi'(\ell)} =b_m$.
If $\ell < m$, then, we have $\frac{a_\ell}{b_{\pi'(\ell)}}= \frac{a_\ell}{b_{m}} \le \frac{a_m}{b_m} =q_m$.
If $\ell > m$, there exists an $n \le m$ such that $\pi'(n) > m$ by the property of permutations.
Then, we have $\frac{a_n}{b_{\pi'(n)}}\le \frac{a_m}{b_{\pi'(n)}} \le \frac{a_m}{b_m}=q_m$.
Therefore, we conclude $\min_{k=1,2,\ldots, N} \big\{\frac{a_k}{b_{\pi'(k)}}\big\} \le q_m$ for any $ (\pi'(1), \pi'(2), \ldots, \pi'(N)) \in \mathcal{P}$. Or equivalently, $\max_{ (\pi(1), \pi(2), \ldots, \pi(N)) \in \mathcal{P}} \min_{k=1,2,\cdots, N} \big\{\frac{a_k}{b_{\pi(k)}}\big\} \le q_m$.
\item $(\pi'(1), \pi'(2), \ldots, \pi'(N)) \in \mathcal{U} \setminus \mathcal{P}$. In this case, $\pi'(m)=m$ and hence, we have $\min_{k=1,2,\cdots, N} \big\{\frac{a_k}{b_{\pi'(k)}}\big\} \le \frac{a_m}{b_{\pi'(m)}}=\frac{a_m}{b_m}=q_m$. Therefore, $\max_{ (\pi(1), \pi(2), \cdots, \pi(N)) \in \mathcal{U} \setminus \mathcal{P}} \min_{k=1,2,\cdots,N} \big\{\frac{a_k}{b_{\pi(k)}}\big\} \le q_m$.
\end{itemize}
In conclusion, we have $\max_{\pi \in \mathcal{U}} \min_{k=1,2,\cdots, N} \big\{\frac{a_k}{b_{\pi(k)}}\big\} \le q_m$.
In the following, we show that the equality is achieved by a certain permutation $(\pi(1), \pi(2), \ldots, \pi(N))$.
Setting $(\pi(1), \pi(2), \cdots, \pi(N)) =(\pi^*(1), \pi^*(2), \ldots, \pi^*(N))$, i.e., the identity permutation, we find that for the given ordered sequences $a_1\le a_2\le \cdots \le a_N$ and $b_1\le b_2\le \cdots \le b_N$, $\min_{k=1,2, \cdots, N} \big\{\frac{a_k}{b_{\pi^*(k)}}\big\} =\frac{a_m}{b_m}= q_m$.
Hence, the equality is achieved by $(\pi^*(1), \pi^*(2), \cdots, \pi^*(N))$.
This completes the proof.~\hfill$\Box$
\end{appendices}
\small
\bibliographystyle{ieeetr}
Information Retrieval (IR) systems have been revolutionized by neural ranking methods. A few years ago, a state-of-the-art ranking stack would have employed a first-stage sparse retriever, such as BM25, which relies on exact term matches to retrieve an initial pool of items for re-ranking by later stages \cite{lafferty2001document,jones2000probabilistic,schutze2008introduction}. Inverted indices allow the entire corpus to be scanned efficiently, with low latency. First-stage retrievers of this type have remained standard for IR since the earliest commercial and academic systems.
Recent advances provide an alternative to first-stage retrievers based on the classic inverted index. Contextualized pre-trained transformer models, such as BERT \cite{devlin2018bert}, enable a new type of first-stage retriever that maps queries and documents into low-dimensional dense vectors and exploits maximum inner product search to find the document vectors with the greatest similarity to the query vector~\cite{xiong2020approximate,karpukhin2020dense,zhan2020repbert,gao2020complementing}. While these retrievers enjoy substantial retrieval effectiveness improvements compared to traditional approaches, they still suffer from drawbacks, including increased GPU resource requirements and potentially higher latency. Although on average these dense retrievers outperform the traditional sparse retrievers on retrieval effectiveness, there remain queries for which the sparse retrievers provide superior performance. No single retrieval approach can definitively answer all queries effectively and efficiently, and selecting a suitable retrieval strategy is a crucial concern for maintaining the trade-off between a system's effectiveness and efficiency.
Traditional initial retrievers such as BM25 \cite{jones2000probabilistic} have a long history in Information Retrieval. In a modern ranking stack, BM25 is normally paired with a WAND query processing strategy to maximize the efficiency of first-stage retrieval \cite{broder2003efficient}. These first-stage rankers can be called ``sparse retrievers'', since they mostly utilize sparse, high-dimensional, bag-of-words representations in their scoring functions to rank the documents for a given query.
Since sparse retrievers operate through exact-matching between query and document terms, they do not suffice when there is a semantic gap between the query and the corpus language~--~the vocabulary mismatch problem. However, they are extremely cost-effective and scale easily, even on inexpensive hardware.
Multi-stage ranking stacks typically employ BM25 as the first-stage sparse retriever~\cite{nogueira2019multi,han2020learning,nogueira2019passage,qiao2019understanding,hofstatter2019effect}. Later stages of the ranking stack use higher-cost neural rerankers to rerank the documents retrieved by the initial low-cost sparse retriever.
While this combination of an initial sparse retrieval step followed by multiple reranking steps has shown excellent performance, it can be further improved by replacing the initial retrieval step with a dense retriever. These dense retrievers use contextualized pre-trained transformer models to map queries and documents into an embedding space~\cite{xiong2020approximate,karpukhin2020dense,zhan2020repbert,gao2020complementing,khattab2020colbert}. The association (e.g., dot product) between the query and documents in this dense, low-dimensional embedding space provides a relevance score. The improvement gained by replacing the sparse retriever with a dense retriever has been shown on many core retrieval tasks and leaderboards, such as MS MARCO. MS MARCO is a large-scale collection with a focus on enabling the application of deep learning methods to information retrieval. Providing a relatively large volume of training data makes MS MARCO a suitable testbed for comparing SOTA retrieval methods.
At the time of writing, the top run on the MS MARCO passage retrieval leaderboard is RocketQA \cite{ding2020rocketqa}, a dense retriever which uses a dual-encoder as well as a cross-encoder architecture in order to learn dense representations of queries and passages. Similarly, the top-performing model on the document retrieval leaderboard is ANCE \cite{xiong2020approximate}, a learning mechanism that learns the query and document representations in a siamese network and uses the dot product to rank the documents. Documents can be encoded offline to save time, whereas finding the documents most similar to the query is a time-consuming stage in dense retrievers that must be done on the fly when a query arrives. Recently, nearest neighbor search algorithms such as FAISS have come to support these dense retrievers, scanning millions of candidates in milliseconds \cite{johnson2019billion}. But even with the most efficient nearest-neighbour algorithm, the latency of dense retrievers may still not be comparable to that of sparse retrievers. In addition, the need for substantial GPU resources makes dense retrievers expensive to run, and may limit their use in high-volume, real-world applications, where resource limitations are always a crucial concern. Therefore, relative to sparse retrievers, dense retrievers may incur greater costs and impact latency. Also, dense retrievers are not able to produce token-level matching signals, which may be critical for named entities and other terms for which an exact match is required.
Some researchers have proposed ``hybrid retrievers'', which combine sparse and dense retrievers in the hope of gaining some of the advantages of both methods. Some recent approaches leverage sparse-retriever lexical information in the training process. For instance, \citet{gao2020complementing} train a dense retriever to supplement a sparse retriever with semantic-level information. Other researchers have combined the sparse and dense retriever rankings by interpolating the relevance scores from each retriever \cite{lin2020distilling,luan2020sparse,gao2020complementing,kuzi2020leveraging}. However, to the best of our knowledge, there is no work on selecting the appropriate retrieval strategy for individual queries.
In this paper, we investigate strategies for selecting a first-stage retrieval strategy based on the query. Selecting an appropriate strategy provides a trade-off between efficiency and effectiveness. We select from three possible strategies: 1) sparse only, 2) dense only, and 3) hybrid, where both the dense and sparse retrievers are run and their results merged into a single pool for reranking. By selecting the appropriate strategy on a per-query basis, we can leverage the benefits of both sparse and dense retrievers. For some queries, the use of a dense retriever might avoid vocabulary-mismatch problems. For other queries, the use of a sparse retriever might provide necessary exact term matches. For some queries, both may be desirable. When a query can be answered with a lower-cost, lower-latency sparse retriever alone, costs may be reduced by avoiding the use of a dense retrieval method. We utilize contextualized pre-trained embedding representations of queries to train a classifier in a cross-encoder architecture with the goal of selecting the retriever that can best answer the query. Further, we run experiments on the MS MARCO passage collection to train two classifiers, i.e., selecting `sparse vs.\ dense' or `sparse vs.\ hybrid' on a per-query basis. Our experiments provide a setting where one can select the best retrieval strategy based on query latency and resource constraints.
\section{Method}
When selecting the retrieval strategy, we aim to capitalize on the efficiency and parsimony of the sparse retriever, preferring it over the dense retriever when possible. Nonetheless, if our classifier predicts relatively poor performance for the sparse retriever on query $q$, we prefer to exploit the more powerful but resource-hungry dense retrieval strategy. For some queries, we also consider a hybrid approach, running both retrievers and combining the pools of retrieved items from the sparse and dense retrievers into a single pool for reranking. Under this hybrid strategy, we always run the sparse retriever, and then decide whether to run the dense retriever on the basis of the results.
We investigate the effect of this retrieval strategy selection process on the MS MARCO passage collection\cite{nguyen2016ms}\footnote{\url{ https://microsoft.github.io/msmarco/
}}. This collection consists of 8.8 million passages accompanied by more than 500k pairs of queries and judged relevant passages for training purposes. For each query $q$, there is a set of relevant judged passages $R_q$, where for over 90\% of queries $|R_q|=1$, i.e., there is a single judged relevant passage per query. In addition, there is a set of 6,980 queries for development and validation (``MS MARCO Development Set''). Finally, there is a test set with private relevance judgments that are not available to us. In our experiments, we work with the training and development sets.
As our sparse retriever, we employ BM25 as implemented by the open-source Anserini system from the University of Waterloo \cite{yang2017anserini}, which provides state-of-the-art performance for sparse retrievers. The Anserini\footnote{\url{ https://github.com/castorini/anserini
}} implementation of BM25 has been widely used in top-performing reranking stacks~\cite{nogueira2019multi,hofstatter2020interpretable,hofstatter2020local}. As our dense retriever, we adopt ANCE, which is currently a state-of-the-art first-stage dense retriever from the standpoint of both efficiency and effectiveness; ANCE has been shown to be more than 100 times faster than other dense retrievers~\cite{xiong2020approximate}. In addition, we repeat our experiments with other representative dense retrievers, such as RepBERT~\cite{zhan2020repbert}
and ColBERT~\cite{khattab2020colbert}. For our hybrid retriever, we simply retrieve documents with both the sparse (BM25) and dense (ANCE) retrievers and merge them into a single pool. In the following, we discuss how we train our proposed classifiers to decide between `sparse vs. dense' or `sparse vs. hybrid' as the retrieval strategy on a per-query basis.
\subsection{Sparse vs. Dense}
\label{sparsevsdense}
In selecting a retrieval strategy, we prefer the lower-cost sparse retriever to a dense retriever. For the training queries, we base labels for our classifier on the position of the first relevant passage in the results from the sparse retriever. For a given training query $q$, we call the set of relevant passages $R_q$ and the ranked list of top-$K$ documents retrieved with the sparse retriever $S_q^K$. Let the first relevant retrieved passage within $S_q^K$ be $F_q$. If $F_q$ appears at or above a rank threshold $T$ in $S_q^K$, we label $q$ as ``Sparse Retriever''. If $F_q$ appears below the threshold, or if $S_q^K$ does not contain a relevant passage, we label $q$ as ``Dense Retriever'':
\begin{equation}
S_q^K = [d_1,d_2,d_3, ... , d_K]
\label{sqk}
\end{equation}
\begin{equation}
F_q = d_x, \quad x = \min\{i \,|\, d_i \in S_q^K,\ d_i\in R_q\}
\label{Fq}
\end{equation}
\begin{equation}
Rank(F_q) = \min\{i \,|\, d_i \in S_q^K,\ d_i\in R_q\}
\label{rank}
\end{equation}
\begin{equation}
Strategy=
\begin{cases}
\text{Sparse Retriever} & \text{if $Rank(F_q) \le T$}\\
\text{Dense Retriever} & \text{otherwise}\\
\end{cases}
\label{schema1}
\end{equation}
For the results in this paper we use a threshold of $T = 50$, but similar results are obtained for other values (100, 150, and 200).
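The labelling rule of Eqs.~\eqref{sqk}--\eqref{schema1} reduces to a few lines of code; the Python sketch below (our own illustration, not the released code) assigns a label from the BM25 ranking and the judged set $R_q$.
\begin{verbatim}
T = 50  # rank threshold; 100, 150, and 200 behave similarly

def label_query(ranked_ids, relevant_ids, threshold=T):
    # ranked_ids: top-K passage ids from the sparse retriever (BM25);
    # relevant_ids: the judged relevant set R_q for the query.
    for rank, pid in enumerate(ranked_ids, start=1):
        if pid in relevant_ids:
            return "sparse" if rank <= threshold else "dense"
    return "dense"  # no relevant passage retrieved at all

# First relevant passage at rank 3 -> labelled "sparse".
print(label_query(["p9", "p4", "p7", "p1"], {"p7"}))
\end{verbatim}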
We use these labels to fine-tune BERT. In recent years, BERT has been used extensively, with enormous success, on a variety of tasks in Natural Language Processing and Information Retrieval, including first-stage retrieval, reranking, paraphrasing and text similarity~\cite{devlin2018bert,sbert,nogueira2019passage,bert-text-class}. BERT utilizes a cross-encoder framework, which jointly encodes the inputs and candidates in a single transformer. One benefit of having all the inputs in such an architecture is the high number of interactions between the input tokens. Although this model suffers from intensive and slow computation when there is a large-scale pool of candidates at inference time, it aligns with the needs of our classifier because we do not have a high number of inputs: each query comprises a small number of tokens. We utilize contextualized pre-trained embedding representations of queries, since they capture both the semantics and the context of the queries. The query tokens, followed by special tokens, are fed into the cross-encoder network. The model performs full cross self-attention over the given input and label with the aim of attaining higher accuracy. Further, a linear classification layer with a binary cross-entropy loss is placed on top of the first vector produced by the transformer to reduce dimensions and produce a scalar probability for each class.
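A minimal fine-tuning loop in the spirit of this classifier is sketched below (our own illustration using the Hugging Face transformers API; the released code may differ). For the sparse vs.\ hybrid classifier of the next section, the tokenizer is simply called with the query and the top retrieved document as a sentence pair.
\begin{verbatim}
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)      # 0 = sparse, 1 = dense
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Hypothetical (query, label) training pairs.
train = [("what is the boiling point of water", 0),
         ("long tail query with vocabulary mismatch", 1)]

model.train()
for query, label in train:   # in practice: batches of 8, one epoch
    batch = tokenizer(query, return_tensors="pt", truncation=True)
    out = model(**batch, labels=torch.tensor([label]))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
\end{verbatim}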
\subsection{Sparse vs.\ Hybrid}
\label{sparsevshybrid}
In the previous section, we describe a classifier that selects between a sparse or dense retriever given only the text of a query. In this section, we select between a sparse or hybrid strategy, where the selection is made after an initial sparse retrieval, when the results of the sparse retrieval are available. If hybrid retrieval is selected, dense retrieval is performed as a second step and its results are pooled with those from the sparse retriever. As in the previous section, labels for training are based on the presence and rank of the top relevant passage $F_q$ returned by the sparse retriever:
\begin{equation}
Strategy=
\begin{cases}
\text{Sparse Retriever} & \text{if $Rank(F_q) \le T$}\\
\text{Hybrid Retriever} & \text{otherwise}\\
\end{cases}
\label{schema2}
\end{equation}
In addition to the text of the query, the text of the top passage returned by the sparse retriever is provided as input to the classifier. We compare the architectures of the two classifiers, i.e., sparse vs.\ dense and sparse vs.\ hybrid, in Figure~\ref{fig:classifiers}.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{Fig1.PNG}
\vspace{-1em}
\caption{ Retrieval category selection classifiers: (a) sparse vs.\ dense, using only the query as input, and (b) sparse vs.\ hybrid, using both the query and the top-retrieved document as input; $q_t^i$ and $d_t^i$ represent query and document tokens.
}
\vspace{-1em}
\label{fig:classifiers}
\end{figure}
\section{Experiments}
\subsection{Experimental Setup}
To train the classifiers, we follow the labelling schemes described in Sections~\ref{sparsevsdense} and~\ref{sparsevshybrid}. We assign labels to queries in the training set. For the sparse vs.\ dense classifier, we fine-tune BERT base uncased for 1 epoch with a batch size of 8. Training on an RTX 2080 GPU took less than 1 hour. For the sparse vs.\ hybrid classifier, we fine-tune the same pretrained model using the query and the top document retrieved by the sparse retriever: we feed in the query, followed by a separator token, the first retrieved document and the assigned class, and fine-tune BERT base uncased with the same experimental setup. Our code is publicly available at \url{https://github.com/Narabzad/Retrieval-Strategy-Selection}.
\subsection{Results and Findings}
As shown in Figure 1, the classifiers produce probabilities that queries belong to each of the two classes. To specify a strategy, we pick a threshold between 0.0 and 1.0. By picking different thresholds, we can trade off between the efficiency and resource parsimony of the sparse strategy and the improved effectiveness of the dense and hybrid strategies. To measure effectiveness, we report recall@1000, but the overall results are similar at other recall levels. We use recall because the output of this first-stage retrieval will be re-ranked by additional stages. The goal of the first stage is to build a pool containing relevant passages, while the goal of later stages is to place these relevant passages at the top of the ranking.
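Operationally, a budget is met by ranking queries by the classifier's dense-class probability; the sketch below (our own illustration, with hypothetical field names) computes the recall obtained at a given budget and the random-assignment baseline.
\begin{verbatim}
def recall_at_budget(queries, budget):
    # queries: dicts holding the classifier probability and the per-query
    # recall@1000 under each strategy (hypothetical field names).
    ranked = sorted(queries, key=lambda q: q["p_dense"], reverse=True)
    n_dense = int(budget * len(ranked))
    scores = [q["recall_dense"] for q in ranked[:n_dense]] + \
             [q["recall_sparse"] for q in ranked[n_dense:]]
    return sum(scores) / len(scores)

def random_baseline(queries, budget):
    rd = sum(q["recall_dense"] for q in queries) / len(queries)
    rs = sum(q["recall_sparse"] for q in queries) / len(queries)
    return budget * rd + (1 - budget) * rs

qs = [{"p_dense": 0.9, "recall_sparse": 0.6, "recall_dense": 1.0},
      {"p_dense": 0.2, "recall_sparse": 1.0, "recall_dense": 1.0}]
print(recall_at_budget(qs, 0.5), random_baseline(qs, 0.5))
\end{verbatim}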
Figure~\ref{fig:pref} shows the trade-off between the sparse and dense retrievers. As the threshold is varied from 0.0 to 1.0, the dense retrieval strategy is selected for a larger and larger fraction of the queries. The x-axis shows the fraction of queries that are assigned to the dense retriever across all 6,980 queries in the MS MARCO development set. We can view this fraction as a ``budget'' allocated to the dense retrieval strategy, which we spend to improve effectiveness. For a budget of 0, the sparse retriever is always used. For a budget of 1, the alternative (dense or hybrid) retriever is always used. At the lower left, where the two blue curves intersect, the dense retriever budget is 0, meaning all queries were executed with the sparse retriever. The blue point at the upper right, where the blue curves intersect again, shows the performance at a dense retriever budget of 1, which means all queries were executed with the dense retriever. As a baseline, we randomly assign a retriever at the rate given by the budget, shown by the dotted lines. Both classifiers substantially outperform this baseline. For example, at a dense retriever budget of 0.5, when 50\% of queries (3,490 queries) employ the dense retriever and the remaining queries (3,490 queries) employ the sparse retriever (blue lines), the random retrieval strategy selector obtains a recall of 0.91 while our classifier obtains a recall of 0.95, a recall improvement of 0.04.
The performance of the second classifier, sparse vs.\ hybrid, is shown in Figure~\ref{fig:pref} with solid pink lines. The lower-left point shows the performance over all 6,980 queries in the MS MARCO development set when executing all of them with the sparse retriever, and the upper-right pink point illustrates the performance when executing all of them with the hybrid retriever, i.e., both the sparse and dense retrievers. We also show the performance of randomly assigning a fraction of the queries to the hybrid retriever as the dashed pink line. For instance, at a hybrid budget of 50\%, where we can only execute 50\% of the queries with the hybrid strategy, the random baseline obtains a recall of 0.91 while our classifier obtains a recall of 0.96, a recall improvement of over 0.05. At this 50\% level, the recall of the sparse vs.\ hybrid strategy exceeds the recall of the dense retrieval strategy at the 100\% level, while using the dense retriever for only half the queries.
Figure~\ref{fig:pref} also illustrates the gap between the full dense strategy and the full hybrid strategy. To reach a recall@1000 of 0.98, both retrievers must be used. While the dense retriever is superior to the traditional sparse retrieval, it still misses relevant passages found by the sparse retriever.
\begin{figure}[t]
\centering
\label{fig1}
\vspace{-1em}
\includegraphics[width=0.94\linewidth]{fig_2_2.PNG}
\vspace{-1em}
\caption{ Performance of retrieval strategy selection. The x-axis shows \% of queries retrieved by dense retrievers.
}
\label{fig:pref}
\vspace{-1em}
\end{figure}
\begin{figure}[t]
\centering
\label{fig1}
\vspace{0em}
\includegraphics[width=0.95\linewidth]{Fig3.PNG}
\vspace{-1.3em}
\caption{Performance of retrieval strategy selection with three different dense retrievers.
}
\label{fig:alt}
\end{figure}
While BM25 is well established as a state-of-the-art sparse retrieval method, there are alternatives we can use as the dense retriever. To test the robustness of our results, we repeat the experiment with other state-of-the-art dense retrievers: RepBERT (a representation-based dense retriever)~\cite{zhan2020repbert} and ColBERT (a late-interaction-based dense retriever)~\cite{khattab2020colbert}. Figure~\ref{fig:alt} shows the results of our sparse vs.\ dense retrieval strategy with these three dense retrievers. As shown, the type of dense retriever does not impact our conclusions, and our retrieval strategy selection is robust to the type of dense retriever.
Finally, we consider the effect of using our retrieval strategy selector in an end-to-end retrieval framework. In \citet{lin2020distilling}, query latency is reported as 55ms for BM25 as the sparse retriever and 103ms for ANCE as the dense retriever on the MS MARCO passage collection. As we shift the budget for dense retrieval from 0\% to 100\%, our proposed method can prioritize the queries that most need to be retrieved with the dense retriever. Figure~\ref{fig:latency} shows the trade-off between latency and effectiveness as we vary the budget. The Pareto frontier shifts between retrieval strategies as latency and recall increase. For example, the sparse vs.\ dense strategy provides a recall above 0.94 with a latency under 80ms, while the sparse vs.\ hybrid strategy provides a recall above 0.96 with a latency under 106ms.
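The latency figures above follow from a simple expected-latency model (our own back-of-the-envelope illustration using the per-query latencies reported by \citet{lin2020distilling}):
\begin{verbatim}
SPARSE_MS, DENSE_MS = 55.0, 103.0   # BM25 and ANCE per-query latency

def mean_latency(budget, strategy):
    # budget: fraction of queries sent to the dense retriever.
    if strategy == "sparse_vs_dense":    # one retriever per query
        return (1 - budget) * SPARSE_MS + budget * DENSE_MS
    if strategy == "sparse_vs_hybrid":   # hybrid queries run both
        return SPARSE_MS + budget * DENSE_MS
    raise ValueError(strategy)

print(mean_latency(0.5, "sparse_vs_dense"))    # 79.0 ms
print(mean_latency(0.5, "sparse_vs_hybrid"))   # 106.5 ms
\end{verbatim}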
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Fig4.PNG}
\vspace{-1em}
\caption{Query latency vs.\ recall trade-off for our methods.
}
\label{fig:latency}
\end{figure}
\section{Conclusion}
Dense retrievers provide substantial performance improvements when compared to traditional sparse retrievers, such as BM25. However, dense retrievers introduce performance and resource costs not associated with sparse retrievers, including the need for substantial GPU resources. In addition, dense retrievers can still miss relevant items surfaced by sparse retrievers. In view of these efficiency vs.\ effectiveness trade-offs, we present methods for selecting the appropriate retrieval strategy on a per-query basis, allocating the dense retriever to those queries most likely to benefit. Utilizing our proposed retrieval strategy selection leads to improved performance even under computational resource or time constraints. Using previously reported performance characteristics, we illustrate the trade-off between strategies, where the preferred strategy depends on latency. Other performance characteristics will produce a different trade-off.
\newpage
\bibliographystyle{ACM-Reference-Format}
\balance
The interplanetary medium that surrounds exoplanets is filled by stellar wind particles and the embedded stellar magnetic field. Due to detection biases, the large majority of exoplanets found so far are orbiting cool stars at the main-sequence phase. Although the winds of these stars have proven quite challenging to observe \citep{1992ApJ...397..225M,2001ApJ...546L..57W,2005ApJ...628L.143W}, the interaction between exoplanets and their surrounding medium (i.e., the host star's wind) may give rise to observable signatures, such as planetary radio emission \citep{2007P&SS...55..598Z}, enhancement of stellar activity \citep{2000ApJ...533L.151C,2003ApJ...597.1092S,2005ApJ...622.1075S}, bow-shock formation \citep{2010ApJ...722L.168V,2013MNRAS.436.2179L,2013ApJ...764...19B}, charge-exchange between stellar wind protons and planetary neutral hydrogen \citep{2008Natur.451..970H,2010ApJ...709..670E,2013A&A...557A.124B,2014A&A...562A.116K} and formation of comet-like tail structures \citep{2011Icar..211....1M,2012ApJ...752....1R,2013A&A...557A..72B,2014MNRAS.438.1654V}, all of which can provide invaluable insights into the system, such as the intensity of the planetary magnetic field, velocity and temperature of the local stellar wind, etc.
By studying stellar winds, we are able to make quantitative predictions about the interplanetary medium. A significant improvement on our understanding of the interaction between a planet and the wind of its host star has been achieved in the past decade. Traditionally, these works have been based on simplified treatments of the winds \citep[e.g.][]{2004ApJ...602L..53I,2005A&A...434.1191P,2005A&A...437..717G,2005MNRAS.356.1053S,2008MNRAS.389.1233L, 2011MNRAS.411L..46V, 2014A&A...570A..99S, 2014ApJ...795...86S}. For example, simplified wind approaches might assume an isothermal wind structure, or that stars are non-rotating and/or non-magnetised bodies, among others. However, stellar winds are three-dimensional (3D) in nature, where complex interactions of a rotating, magnetised plasma take place. In view of that, more recently, new generations of 3D, magnetohydrodynamical (MHD) models have started to be employed in the studies of interactions between stars/winds and their planets \citep[e.g.,][]{2009ApJ...703.1734V,2009ApJ...699..441V,2010ApJ...720.1262V,2011MNRAS.412..351V,2012MNRAS.423.3285V,2014MNRAS.438.1162V, 2009ApJ...704L..85C, 2014ApJ...790...57C, 2013MNRAS.436.2179L}.
The advantage of using simplified treatments of the wind is that these works rely on analytical and low-dimensional (1D, 2D) numerical studies, which are significantly faster and do not demand extensive computational resources as 3D models do. Because of that, 1D works can investigate a much wider range of stellar wind parameters \citep[e.g.][]{2011ApJ...741...54C,2014A&A...570A..99S} than more complex, computationally-expensive 3D models can. The disadvantage, on the other hand, is that the simplified models can not capture the 3D structure of stellar winds. Combined with modern techniques to reconstruct stellar magnetic fields, some 3D models are able to provide a more realistic account of the stellar magnetic field topology embedded in the wind, recognised to be essential to interpret and predict signatures of star-planet interactions \citep{2006MNRAS.367L...1M,2010MNRAS.406..409F,2011MNRAS.414.1573V,2012A&A...544A..23L, 2013MNRAS.436.2179L}.
\subsection{Interactions with magnetised planets}\label{sec.introBp}
As the wind outflows from the star, it interacts with any planet encountered on its way. If these planets are magnetised, their magnetic fields can act as shields, which prevent stellar wind particles from reaching all the way down to the surface or atmosphere of these objects \citep[e.g.][]{2007AsBio...7..167K,2007AsBio...7..185L,2013Icar..226.1447D,2014A&A...562A.116K}. This is in particular the case of the Earth and, more generally, of planets with dipolar field configurations. For these objects, the solar and stellar winds are deflected around the magnetospheric cavity, potentially helping the planet to retain its atmosphere. However, atmospheric escape can still occur at high magnetic latitudes through polar flows, as is the case of the Earth (e.g., \citealt{2001Sci...291.1939S,2007RvGeo..45.3002M}) and predicted for exoplanets (\citealt{2014MNRAS.444.3761O}; see also Section~\ref{sec.polarflows}). Part of this planetary outflow can return from the magnetosphere back into atmospheric regions of low-magnetic latitudes, reducing the total net loss rate of atmospheric escape, as suggested for the Earth scenario \citep{2001Sci...291.1939S}. The detailed process of atmospheric dynamics and escape is certainly complex and not examined here.
In the present work, only magnetised exoplanets are considered. This means that the cross-section of the `obstacle' is not that of the planet itself, but rather takes into account the magnetospheric size of the planet. The magnetospheric size of the planet depends both on the the characteristics of the local environment surrounding the planet (interplanetary density, velocity, magnetic field, temperature) and on its own magnetic field. On the theoretical side, some models suggest that the strength of the planetary magnetic field is dependent on the rotation rate of the planet \citep{1999JGR...10414025F}. In this situation, close-in planets that are tidally locked could have a reduced magnetic moment \citep{2004A&A...425..753G}. Other models advocate that the planetary magnetic field is related to the energy flux coming from the planetary core and does not depend on the rotation rate of the planet \citep{2009Natur.457..167C}. Recent studies indicate that the planetary field strength is independent of rotation rate, which instead plays a role in the geometry of the generated magnetic field \citep{2012Icar..217...88Z}.
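To illustrate the dependence of the magnetospheric size on both the local wind conditions and the planetary field, the sketch below (our own illustration, not part of the simulations presented here) evaluates the standard pressure-balance estimate $r_M/R_p = [B_p^2/(8\pi \rho u^2)]^{1/6}$ for a dipolar planet facing the wind ram pressure.
\begin{verbatim}
import math

def standoff_radius(B_p, rho, u):
    # B_p: equatorial dipole field [G]; rho: local wind density [g/cm^3];
    # u: relative wind speed [cm/s]. Returns r_M in planetary radii.
    ram = rho * u ** 2
    return (B_p ** 2 / (8 * math.pi * ram)) ** (1.0 / 6.0)

# Earth-like example: B_p ~ 0.3 G, n ~ 5 cm^-3, u ~ 400 km/s.
rho = 5 * 1.673e-24
print(standoff_radius(0.3, rho, 4.0e7))   # ~8 R_p; magnetopause currents
                                          # push this towards the observed ~10
\end{verbatim}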
Although planetary magnetism has been observed in several solar system planets, such as in Earth and the giant planets, the presence of exoplanetary magnetic fields are much more elusive. \citet{2010ApJ...722L.168V} suggested that the close-in giant planet WASP-12b hosts a bow-shock that surrounds its magnetosphere at a distance of about $4$ -- $5$ planetary radii. Their suggestion was motivated by transit observations of the close-in giant planet WASP-12b by \citet{2010ApJ...714L.222F}, who, based on space-borne spectroscopic observations in the near-UV, showed that the transit lightcurve of WASP-12b presents both an early ingress when compared to its optical transit, as well as excess absorption during the transit \citep[see also][]{2012ApJ...760...79H}. \citet{2010ApJ...722L.168V} attributed this signature to an absorption of the material in the bow shock (see also \citealt{2011MNRAS.416L..41L}). If confirmed, this technique should provide a useful tool for determining planetary magnetic field intensities for hot-Jupiter transiting systems. In the case of WASP-12b, \citet{2010ApJ...722L.168V} derived an upper limit of $24$~G for the planetary field. \citet{2011MNRAS.411L..46V} later proposed other targets with good prospects to hosting observable early-ingresses. Unfortunately, the near-UV ($254-258$nm) early-ingress signature of WASP-12b observed with (expensive) space-based spectroscopic observations \citep{2010ApJ...714L.222F, 2012ApJ...760...79H} does not seem to be observable with ground-based, broad-band photometry in the wavelength range $\sim 340 - 540$nm \citep{2012ApJ...760...79H}, and neither in the range of $303 - 417$nm (Turner, private comm.; for other transiting exoplanets see \citealt{2013MNRAS.428..678T,2014NewA...27..102P}). Observations from \citet{2010ApJ...714L.222F} indicate that the material surrounding WASP-12b absorbs at certain resonance lines in the near-UV (in particular in MgII lines). The lack of absorption from broad-band photometric observations of WASP-12b possibly indicates that either the material is not absorbing at the observed photometric wavelengths ($\sim 303 - 540$nm), or that the absorption occurs only at some specific spectral lines, but gets diluted over the much wider spectral region.
Another hint that close-in planets may also harbour intrinsic magnetic fields, similar to the Earth and the giant planets of the Solar System, was found by \citet{2003ApJ...597.1092S,2005ApJ...622.1075S,2008ApJ...676..628S}, who observed modulations of chromospheric spectral lines in phase with orbital periods on a few systems. Such modulations were interpreted as induced activity on the stellar surface due to magnetic interactions between star and planet. \citet{2008ApJ...676..628S} showed that there exists a correlation between the night-to-night stellar activity variation with the ratio between the planetary mass to orbital period, used as a proxy for the magnetic moment of a tidally-locked planet. Although unfortunately this correlation does not provide the intensity of the planetary magnetic field, it offers a way to measure the relative field strength among the different exoplanets in their sample. Therefore, once magnetism is assessed for one of their targets (by a different method), the magnetic field strength of their remaining targets could be derived.
These two suggestions (early ingress and activity enhancement), however, cannot be used as conclusive evidence of the presence of planetary magnetic fields, as alternative, non-magnetic explanations for the observations exist \citep{2006A&A...460..317P,2010ApJ...721..923L, 2013ApJ...764...19B, vidotto_springer}. A conclusive way to probe the presence of exoplanetary magnetic fields could be the detection of radio emission from the planet. The stellar wind that impacts on the planet produces energetic particles that are captured by the planet's magnetic field, emitting cyclotron radiation at radio wavelengths. This emission depends on the planet's magnetic field intensity and on the stellar wind power: the stronger the stellar wind, the more luminous the planet. As such radio emission is observed in the Solar System \citep{2007P&SS...55..598Z}, there are expectations that close-in exoplanets will exhibit comparable radiation (see \citealt{2012MNRAS.427L..75N} for the case of planets that are not necessarily close-in). In particular, hot-Jupiters are expected to be much more luminous than the most luminous planet in our Solar System, Jupiter \citep[e.g.,][]{1999JGR...10414025F,2005A&A...437..717G,2007P&SS...55..598Z,2008A&A...490..843J,2010ApJ...720.1262V}. This is because hot-Jupiters are located much closer to their stars, interacting with portions of the host-star's wind that have larger kinetic and magnetic energies available to power planetary radio emission. So far, radio signatures of close-in exoplanets have not yet been detected \citep[e.g.][]{2000ApJ...545.1058B,2004ApJ...612..511L,2009MNRAS.395..335S,2013ApJ...762...34H} and one possible reason for that may be the lack of instrumental sensitivity in the appropriate frequency range of the observations \citep{2000ApJ...545.1058B}. This picture, however, might be changing, as possible hints of exoplanetary radio emission have recently been reported \citep{2013A&A...552A..65L,2014A&A...562A.108S}.
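As an aside, the expected emission frequency follows from the electron-cyclotron relation $\nu_c \,[\mathrm{MHz}] \simeq 2.8\, B\,[\mathrm{G}]$, which makes the instrumental-sensitivity argument easy to quantify (a sketch of our own):
\begin{verbatim}
def cyclotron_freq_mhz(B_gauss):
    # Electron-cyclotron frequency: ~2.8 MHz per gauss of field strength.
    return 2.8 * B_gauss

print(cyclotron_freq_mhz(14.0))  # Jupiter-like ~14 G field -> ~39 MHz
print(cyclotron_freq_mhz(24.0))  # WASP-12b upper limit, 24 G -> ~67 MHz
\end{verbatim}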
The theoretical estimates of the radio flux emitted by extrasolar planets carry a large uncertainty due to the fact that the stellar wind properties are poorly constrained. In this work, we model the 3-D structure of the stellar winds of a sample of $5$ planet-hosting stars, whose 3-D winds have not yet been studied to date. We investigate the nature of the interplanetary media of these exoplanetary systems and how different they are from the environment surrounding our own Solar System planets. The stars used in this study, described in Section~\ref{sec.sample}, have had their surface magnetic fields recently reconstructed by means of tomographic techniques \citep{2012MNRAS.423.1006F,2013MNRAS.435.1451F}. These surface fields are used as boundary conditions for our data-driven simulations of stellar winds. Our model is described in Section~\ref{sec.model}. The derived global characteristics of the stellar winds are presented in Section~\ref{sec.results} and the properties of the local environment surrounding the exoplanets in our sample are described in Section~\ref{sec.planets}. We then use these computed quantities to calculate the strengths of the interactions between the stellar wind and the planetary system, making it possible to quantitatively predict planetary radio emission and bow-shock formation. Our discussion is presented in Section~\ref{sec.discussion},
and a summary and conclusions are presented in Section~\ref{sec.conclusions}.
\section{The sample of stars}\label{sec.sample}
The stars considered in this study consist of five solar-type stars of spectral types F8 to K1, namely: HD~46375, HD~73256, HD~102195, HD~130322 and HD~179949. All these stars host a gaseous planet at very close orbit (i.e., a hot-Jupiter). Table \ref{tab.sample} presents a summary of the observationally-derived characteristics of the host stars and also of their hot-Jupiters (planet `b'). The large-scale surface magnetic field maps of the planet hosts have been reconstructed by \citet{2012MNRAS.423.1006F, 2013MNRAS.435.1451F} from a series of circular polarisation spectra (acquired at CFHT/ESPaDOnS and TBL/NARVAL) using the Zeeman-Doppler Imaging (ZDI) technique \citep[e.g.,][]{1997A&A...326.1135D,2006MNRAS.370..629D}. Figure~\ref{fig.maps} presents the radial component of the reconstructed surface field of these stars. Our targets present surface magnetic fields with a variety of topologies and intensities. For instance, HD~46375 presents a magnetic field that is mostly dipolar, whose axis is slightly tilted with respect to the rotation axis. HD~73256, on the other hand, has a magnetic field topology that is less axisymmetric.
\begin{figure*}
\includegraphics[width=58mm]{figs/f1a.pdf}
\includegraphics[width=58mm]{figs/f1b.pdf}
\includegraphics[width=58mm]{figs/f1c.pdf}\\
\includegraphics[width=58mm]{figs/f1d.pdf}
\includegraphics[width=58mm]{figs/f1e.pdf}
\caption{Mollweide projection of the radial component of the stellar surface magnetic field (from \citealt{2012MNRAS.423.1006F, 2013MNRAS.435.1451F}). These observed magnetic maps are included as boundary conditions in our data-driven simulations.
\label{fig.maps}}
\end{figure*}
\begin{table*}
\centering
\caption{Observationally derived characteristics of the exoplanets and planet-host stars of our sample. The columns are: the host-star name, spectral type, mass ($M_\star$), radius ($R_\star$), effective temperature ($T_{\rm eff}$), rotation period ($P_{\rm rot}$), Rossby numbers (Ro), distance ($d$), inclination between the stellar rotation axis and the line-of-sight ($i$) estimated from ZDI, unsigned surface magnetic flux ($\Phi_{\rm 0}$), date of the spectropolarimetric observations, projected mass of the planet `b' ($M_p \sin i$) and semi-major axis of the planetary orbit ($a$). {For uncertainties in the quantities below, we refer the reader to the following literature.} All the values listed below were compiled by \citet{2013MNRAS.435.1451F}, except for $d$, whose references are listed in the footnote of the table, and Ro, which was derived by \citet{2014MNRAS.441.2361V} using the models of \citet{2010A&A...510A..46L}. \label{tab.sample}}
\begin{tabular}{cccccccccccccc}
\hline
Star & Spectral & $ M_\star $ & $ R_\star $ & $ T_{\rm eff} $ & $ P_{\rm rot} $ & Ro & $ d $ & $ i $ & $ \Phi_{\rm 0} $ & Date & $ M_{p} \sin i $ & $ a $ \\
ID & type & $ (M_\odot) $ & $ (R_\odot) $ & (K) & (d) & & (pc) & (deg) & ($10^{23}$ Mx) & & $ (M_{\rm Jup}) $ & $ (R_\star) $ \\ \hline
HD 46375 & K1IV & $ 0.97 $ & $ 0.86 $ & $ 5290 $ & $ 42 $ & $ 2.340 $ & $ 33.4 ^1 $ & $ 45 $ & $ 0.85 $ & 2008 Jan & $ 0.2272 $ & $ 10.0 $ \\
HD 73256 & G8 & $ 1.05 $ & $ 0.89 $ & $ 5636 $ & $ 14 $ & $ 0.962 $ & $ 36.5 ^2 $ & $ 75 $ & $ 2.1 $ & 2008 Jan & $ 1.869 $ & $ 9.0 $ \\
HD 102195 & K0V & $ 0.87 $ & $ 0.82 $ & $ 5290 $ & $ 12.3 $ & $ 0.473 $ & $ 29.0 ^3 $ & $ 50 $ & $ 2.1 $ & 2008 Jan & $ 0.453 $ & $ 12.6 $ \\
HD 130322 & K0V & $ 0.79 $ & $ 0.83 $ & $ 5330 $ & $ 26.1 $ & $ 0.782 $ & $ 30.0 ^4 $ & $ 80 $ & $ 0.74 $ & 2008 Jan & $ 1.043 $ & $ 23.2 $ \\
HD 179949 & F8V & $ 1.21 $ & $ 1.19 $ & $ 6168 $ & $ 7.6 $ & $ >1.726 $ & $ 27.0 ^5 $ & $ 60 $ & $ 1.3 $ & 2007 Jun & $ 0.902 $ & $ 7.9 $ \\ \hline
\end{tabular}\\
$^1$\citet{2000ApJ...536L..43M}; $^2$\citet{2003A&A...407..679U}; $^3$\citet{2006ApJ...648..683G}; $^4$\citet{2000A&A...356..590U}; $^5$\citet{2007A&A...475..519H}
\end{table*}
\section{Stellar wind model}\label{sec.model}
The stellar wind model we use here is identical to the one presented in \citet{2014MNRAS.438.1162V}. We use the 3D MHD numerical code BATS-R-US \citep{1999JCoPh.154..284P,2012JCoPh.231..870T} to simulate the stellar winds. BATS-R-US solves the set of ideal MHD equations for the mass density $\rho$, the plasma velocity ${\bf u}=\{ u_r, u_\theta, u_\varphi\}$, the magnetic field ${\bf B}=\{ B_r, B_\theta, B_\varphi\}$, and the gas pressure $P$:
\begin{equation}
\label{eq:continuity_conserve}
\frac{\partial \rho}{\partial t} + \boldsymbol\nabla\cdot \left(\rho {\bf u}\right) = 0,
\end{equation}
\begin{equation}
\label{eq:momentum_conserve}
\frac{\partial \left(\rho {\bf u}\right)}{\partial t} + \boldsymbol\nabla\cdot\left[ \rho{\bf u\,u}+ \left(P + \frac{B^2}{8\pi}\right)I - \frac{{\bf B\,B}}{4\pi}\right] = \rho {\bf g},
\end{equation}
\begin{equation}
\label{eq:bfield_conserve}
\frac{\partial {\bf B}}{\partial t} + \boldsymbol\nabla\cdot\left({\bf u\,B} - {\bf B\,u}\right) = 0,
\end{equation}
\begin{equation}
\label{eq:energy_conserve}
\frac{\partial\varepsilon}{\partial t} + \boldsymbol\nabla \cdot \left[ {\bf u} \left( \varepsilon + P + \frac{B^2}{8\pi} \right) - \frac{\left({\bf u}\cdot{\bf B}\right) {\bf B}}{4\pi}\right] = \rho {\bf g}\cdot {\bf u} ,
\end{equation}
where
\begin{equation}\label{eq:energy_density}
\varepsilon=\frac{\rho u^2}{2}+\frac{P}{\gamma-1}+\frac{B^2}{8\pi} .
\end{equation}
We assume the wind is polytropic, such that $P\propto \rho^\gamma$, where $\gamma$ is the polytropic index. To derive the temperature, we consider an ideal gas, so $P=n k_B T$, where $k_B$ is the Boltzmann constant, $T$ is the temperature, $n=\rho/(\mu m_p)$ is the particle number density of the stellar wind and $\mu m_p$ is the mean particle mass. In this work, we adopt $\gamma=1.1$, similar to the effective adiabatic index measured in the solar wind \citep{2011ApJ...727L..32V}, and $\mu=0.5$, appropriate for a fully ionised hydrogen plasma.
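This thermodynamic closure is simple enough to reproduce directly. The short Python sketch below (a minimal illustration, not part of our simulation pipeline) recovers the temperature from the density via the polytropic and ideal-gas relations, normalised at the base values $n_0=10^9~{\rm cm}^{-3}$ and $T_0=2\times10^6$~K adopted below:
\begin{verbatim}
import numpy as np

k_B = 1.380649e-16    # Boltzmann constant [erg/K]
m_p = 1.67262e-24     # proton mass [g]
gamma, mu = 1.1, 0.5  # polytropic index; fully ionised hydrogen

n0, T0 = 1.0e9, 2.0e6        # base density [cm^-3], temperature [K]
rho0 = mu * m_p * n0         # base mass density [g/cm^3]
P0 = n0 * k_B * T0           # ideal gas at the base: P = n k_B T

def pressure(rho):
    # polytropic closure: P proportional to rho^gamma
    return P0 * (rho / rho0) ** gamma

def temperature(rho):
    # T = P mu m_p / (rho k_B), i.e. T scales as rho^(gamma - 1)
    return pressure(rho) * mu * m_p / (rho * k_B)

print(temperature(rho0 / 100.0))  # ~1.26e6 K where density fell 100x
\end{verbatim}
With $\gamma=1.1$, the temperature drops by only a factor of $100^{0.1}\simeq1.6$ for a hundredfold drop in density, which is why such winds remain hot out to large radii.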
At the initial state of the simulations, we assume that the wind is thermally driven \citep{1958ApJ...128..664P}. The stellar rotation period $P_{\rm rot}$, $M_\star$ and $R_\star$ are given in Table \ref{tab.sample}. At the base of the corona ($r=R_\star$), we adopt a wind coronal temperature $T_0 = 2\times 10^6$~K and wind number density $n_0=10^{9}~$cm$^{-3}$ (Section~\ref{sec.limitations} discusses the choices of $n_0$ and $T_0$ and how they affect our results). With this numerical setting, the initial solution for the density, pressure (or temperature) and wind velocity profiles is fully specified. The radial component of the magnetic field, $B_r$, anchored at the base of the wind is reconstructed from observations (Fig.~\ref{fig.maps}). The other two components of the surface field are assumed to be potential ($\boldsymbol\nabla \times {\bf B}=0$), as it has been shown that stellar winds are largely unaffected by the non-potential part of the observed surface field \citep{2013MNRAS.431..528J}. At the initial state, we assume that the field in the simulation box is potential up to a radial distance $r=r_{\rm SS}$ (known as the source surface) and that, beyond that distance, the magnetic field lines are open and purely radial. As the simulation evolves in time, the wind particles interact with the magnetic field lines (and vice-versa), removing the field from its initial potential state. For all the cases studied here, we take $r_{\rm SS}=4~R_\star$, but we note that different values of $r_{\rm SS}$ produce similar final steady-state solutions \citep{2011MNRAS.412..351V, 2014MNRAS.438.1162V}.
Once set at the initial state of the simulation, the values of the observed $B_r$ are held fixed at the base of the wind throughout the simulation run, as are the coronal base density and thermal pressure. A zero radial gradient is imposed on the remaining components of ${\bf B}$, and ${\bf u}=0$ in the frame corotating with the star. The outer boundaries at the edges of the grid have outflow conditions.
The rotation axis of the star is aligned with the $z$-axis, and the star is assumed to rotate as a solid body. Our grid is Cartesian and the star is placed at the origin of the grid, which extends in $x$, $y$, and $z$ from $-20$ to $20~R_\star$, except for HD~102195, whose simulation box extends from $-24$ to $24~R_\star$, so as to extend out to the orbit of the planet. BATS-R-US uses block adaptive mesh refinement. The finest resolved cells are located close to the star (for $r \lesssim 2~R_\star$), where the linear size of the cubic cell is $0.0097~R_\star$ (or $0.012~R_\star$ for the simulation of HD~102195). The coarsest cell has a linear size of $0.31~R_\star$ (or $0.37~R_\star$ for HD~102195) and is located at the outer edges of the grid. The total number of cells in our simulations is around $40$ million. As the simulations evolve in time, both the wind and magnetic field lines are allowed to interact with each other. The resultant solution, obtained self-consistently, is found when the system reaches steady state in the reference frame corotating with the star.
\section{Derived properties of the stellar winds}\label{sec.results}
Table~\ref{tab.results} presents the properties of the stellar winds obtained in our simulations. The unsigned observed surface magnetic flux is
\begin{equation}\label{eq.phi0}
\Phi_0 = \oint_{S_\star} |B_r (R_\star, \theta, \varphi)| {\rm d} S_\star
\end{equation}
and the unsigned open magnetic flux is
\begin{equation} \label{eq.phiopen}
\Phi_{\rm open} = \oint_{S_{\rm sph}} |B_r (r, \theta, \varphi)| {\rm d} S_{\rm sph}.
\end{equation}
The surface flux (Table~\ref{tab.sample}) is integrated over the surface of the star $S_\star$ and the open flux (Table~\ref{tab.results}) over a spherical surface $S_{\rm sph}$ at a distance $r$ from the star, where all the magnetic field lines are open. The mass-loss rate $\dot{M}$ of the stellar wind, which outflows along open magnetic field lines, can be calculated as the flux of mass integrated across $S_{\rm sph}$
\begin{equation}\label{eq.mdot}
\dot{M} = \oint \rho u_r {\rm d} S_{\rm sph},
\end{equation}
where $\dot{M}$ is a constant of the wind. Similarly, the angular momentum-loss rate can be calculated as the angular momentum flux across $S_{\rm sph}$
\begin{equation}\label{eq.jdot}
\dot{J} = \oint_{S_{\rm sph}} \left[ - \frac{\varpi B_\varphi B_r}{4 \pi} + \varpi u_\varphi \rho u_r \right] {\rm d} S _{\rm sph}
\end{equation}
\citep{1970MNRAS.149..197M,1999stma.book.....M,2014MNRAS.438.1162V}, where $\varpi=(x^2+y^2)^{1/2}$ is the cylindrical radius. In our simulations, we find that $\dot{M}$ ranges from $\sim 2$ to $8 \times 10^{-13} ~{\rm M}_\odot ~{\rm yr}^{-1}$ and $\dot{J}$ from $\sim 0.14$ to $2.4 \times 10^{31}$~erg for the stars in our sample. The open flux ranges from $26\%$ to $69\%$ of the large-scale unsigned surface flux. In absolute terms, these open fluxes lie within the range $(4.4 - 8.4) \times 10^{22}$~Mx. For the solar wind, \citet{2006ApJ...644..638W} obtained magnetic field values at the orbit of the Earth in the range between $0.01$ and $0.05$~mG, or, in terms of open magnetic fluxes, in the range $(2.8 - 14) \times 10^{22}$~Mx, depending on the phase of the solar activity cycle. Although the range of open fluxes calculated for the simulations presented here falls within the values of the solar wind, we show in Section~\ref{sec.planets} that the values of the interplanetary magnetic field at the orbits of the hot-Jupiters are more than $100$ times larger than the interplanetary magnetic field at the Earth's orbit (compare $0.01$ to $0.05$~mG to the values presented in Table~\ref{tab.resultsp}).
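For concreteness, the sketch below shows how surface integrals such as Eqs.~(\ref{eq.phiopen}) and (\ref{eq.mdot}) can be discretised on a spherical shell. The field values are placeholders (a uniform wind and a split-monopole field, chosen so that the outputs are of the order of the tabulated values), not output of our simulations:
\begin{verbatim}
import numpy as np

Rstar = 6.2e10                       # [cm], ~0.9 R_sun
r = 10.0 * Rstar                     # radius of integration sphere
nth, nph = 180, 360
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2.0 * np.pi / nph
TH, PH = np.meshgrid(th, ph, indexing="ij")
dS = r**2 * np.sin(TH) * (np.pi / nth) * (2.0 * np.pi / nph)

rho = np.full_like(TH, 1.0e-19)      # [g/cm^3], placeholder
u_r = np.full_like(TH, 4.0e7)        # [cm/s], placeholder (400 km/s)
B_r = 1.0e-2 * np.sign(np.cos(TH))   # [G], split-monopole placeholder

Mdot = (rho * u_r * dS).sum()        # mass-loss rate [g/s]
Phi_open = (np.abs(B_r) * dS).sum()  # unsigned open flux [Mx]
print(Mdot / 6.3e25, "Msun/yr")      # 1 Msun/yr ~ 6.3e25 g/s
print(Phi_open, "Mx")                # ~5e22 Mx
\end{verbatim}
On the simulation grid, the same sums are taken over a sphere lying beyond all closed field lines, where $\dot{M}$ and $\Phi_{\rm open}$ become independent of $r$.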
\begin{table*}
\centering
\caption{Characteristics of the stellar winds. The columns are: the star name, the stellar wind mass-loss rate ($\dot{M}$), angular momentum-loss rate ($\dot{J}$), unsigned open magnetic flux ($\Phi_{\rm open}$), average radii of the Alfv\'en\ surfaces ($\langle r_A \rangle$), its minimum and maximum value ($r_A^{\rm min}$ and $r_A^{\rm max}$) and the effective radius of the source surface derived from the MHD models ($r_{\rm ss}^{\rm eff}$). In our simulations, $\dot{M}$, $\dot{J}$ and $\Phi_{\rm open}$ are conserved within $0.03\%$, $3\%$ and $4\%$, respectively. \label{tab.results}}
\begin{tabular}{cccccccccccccc}
\hline
Star & $ \dot{M} $ & $ \dot{J} $ & $ \Phi_{\rm open} $ & $ \langle r_A \rangle $ & $ [ r_A^{\rm min} , r_A^{\rm max} ] $& $ r_{\rm SS}^{\rm eff} $ \\
ID & $ (10^{-13} M_\odot~\rm{yr}^{-1}) $ & ($10^{31}$ erg) & $ (\Phi_{\rm 0}) $ & $ (R_\star) $ & $ (R_\star) $ & $ (R_\star) $ \\ \hline
HD 46375 & $ 1.9 $ & $ 0.14 $ & $ 0.52 $ & $ 5.1 $ & $ [ 3.0 , 6.1 ] $ & $ 2.7 $ \\
HD 73256 & $ 2.1 $ & $ 2.3 $ & $ 0.26 $ & $ 6.2 $ & $ [ 1.8 , 8.1 ] $ & $ 5.6 $ \\
HD 102195 & $ 3.2 $ & $ 2.0 $ & $ 0.41 $ & $ 6.4 $ & $ [ 2.3 , 7.5 ] $ & $ 5.6 $ \\
HD 130322 & $ 5.8 $ & $ 0.36 $ & $ 0.69 $ & $ 3.5 $ & $ [ 1.6 , 4.2 ] $ & $ 1.9 $ \\
HD 179949 & $ 8.0 $ & $ 2.4 $ & $ 0.34 $ & $ 2.8 $ & $ [ 1.0 , 3.7 ] $ & $ 3.0 $ \\ \hline
\end{tabular}
\end{table*}
The left panels in Figure~\ref{fig.IC-SS} show the final configuration of the magnetic field lines obtained through self-consistent interaction between magnetic and wind forces after the simulations reached steady state. Although we assume the magnetic field is current-free in the initial state of our simulations, this configuration is deformed once the interaction between the wind particles and the magnetic field lines takes place (currents are created in the system). The right panels of Figure~\ref{fig.IC-SS} show the Alfv\'en surface $S_A$ of each simulation. This surface is defined as the location where the wind velocity reaches the local Alfv\'en velocity ($v_A = B(4 \pi \rho)^{-1/2}$). Inside $S_A$, where the magnetic forces dominate over the wind inertia, the stellar wind particles are forced to follow the magnetic field lines. Beyond $S_A$, the wind inertia dominates over the magnetic forces and, as a consequence, the magnetic field lines are dragged by the stellar wind. In models of stellar winds, the Alfv\'en\ surface is important for the characterisation of angular momentum losses, as it defines the lever arm of the torque that the wind exerts on the star \citep[e.g.,][]{1967ApJ...148..217W}. Its location is also relevant in studies of magnetic interactions with planets \citep[e.g.,][]{2014ApJ...795...86S,2014ApJ...790...57C}. As shown in \citet{2014MNRAS.438.1162V}, the Alfv\'en\ surfaces of the objects investigated here have irregular, asymmetric shapes as a consequence of the irregular distribution of the observed magnetic field. To illustrate the difference in sizes of these surfaces, we show in the right panels of Figure~\ref{fig.IC-SS} the scales of the images plotted (red lines). We find that the average radius of the Alfv\'en\ surfaces ranges from $2.8~R_\star$ (for HD~179949) to $6.4~R_\star$ (for HD~102195).
\begin{figure*}
\includegraphics[width=70mm]{figs/f2a.png}
\includegraphics[width=70mm]{figs/f2b.png}\\
\includegraphics[width=70mm]{figs/f2c.png}
\includegraphics[width=70mm]{figs/f2d.png}\\
\includegraphics[width=70mm]{figs/f2e.png}
\includegraphics[width=70mm]{figs/f2f.png}
\caption{Left: The final configuration of the magnetic field lines after the wind solution has relaxed in the grid. Over-plotted at the surface of the star is the observationally reconstructed stellar magnetic field \citep{2012MNRAS.423.1006F,2013MNRAS.435.1451F}, used as boundary condition for the radial magnetic field. Right: The Alfv\'en\ surfaces are shown in grey. Note their irregular, asymmetric shapes due to the irregular distribution of the observed field. The equatorial ($xy$) planes of the star, assumed to contain the orbits of the planet, are also shown, as are the intersections between the $xy$ plane and the Alfv\'en\ surface (thin black contour) and the orbital radius of the planet (thick blue contour). \label{fig.IC-SS}}
\end{figure*}
\begin{figure*}
\includegraphics[width=70mm]{figs/f2g.png}
\includegraphics[width=70mm]{figs/f2h.png}\\
\includegraphics[width=70mm]{figs/f2i.png}
\includegraphics[width=70mm]{figs/f2j.png}\\
\contcaption{}
\end{figure*}
In order to provide constraints for analytical methods of extrapolation of magnetic field lines, we also compute here the MHD equivalent of the source surface. In particular, the potential field source surface (PFSS) method has proven to be a fast and simple way to extrapolate surface magnetic fields into the stellar coronal region \citep{1999MNRAS.305L..35J,2002MNRAS.333..339J, 2013A&A...557A..67V}. It is also used here as the initial condition for our simulations. However, the PFSS method has an unconstrained parameter: the radius $r_{\rm SS}$ of the source surface, beyond which the magnetic field lines are assumed open and purely radial, as a way to mimic the effects of a stellar wind. Because of stellar rotation and magnetic field stresses, in the MHD solutions, the surface where all magnetic field lines are purely radial does not exist -- even in the region of open field lines, there are always non-null $B_\theta$ and, especially, $B_\varphi$ components. Therefore, we define here an ``effective radius of the source surface'' $r_{\rm SS}^{\rm eff}$ as the radius of the spherical surface where $97$~percent of the average magnetic field is contained in the radial component (i.e., $\langle |B_r| \rangle / \langle |B| \rangle =0.97$, based on \citealt{2006ApJ...653.1510R}). For some of the stars in our sample (HD~73256 and HD~179949), the ratio $\langle |B_r| \rangle / \langle |B| \rangle$ does not reach the 97-percent level and, in such cases, we take $r_{\rm SS}^{\rm eff}$ to be the position where $\langle |B_r| \rangle / \langle |B| \rangle$ is maximum. Table~\ref{tab.results} shows that $r_{\rm SS}^{\rm eff}$ is in the range between $1.9~R_\star$ and $5.6~R_\star$, indicating a compact region of closed field lines. We note that this size is similar to the usually adopted size of $2.5~R_\odot$ in PFSS methods for the solar coronal magnetic field and also similar to the values obtained in other MHD simulations of winds \citep{2006ApJ...653.1510R,2011MNRAS.412..351V,2014MNRAS.438.1162V}.
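The search for $r_{\rm SS}^{\rm eff}$ can be sketched as follows; the radial profile of $\langle |B_r| \rangle / \langle |B| \rangle$ used here is a toy stand-in for the shell averages extracted from the simulation cube, and the fallback to the maximum of the ratio mirrors the procedure described above:
\begin{verbatim}
import numpy as np

radii = np.linspace(1.0, 10.0, 400)     # [R_star]

def ratio_Br_B(r):
    # toy profile: non-radial field fraction decaying as r^-3
    nonradial = 0.8 * r ** -3.0
    return 1.0 / np.sqrt(1.0 + nonradial ** 2)

ratio = ratio_Br_B(radii)
if ratio.max() >= 0.97:
    r_eff = radii[np.argmax(ratio >= 0.97)]  # first crossing of 97%
else:
    r_eff = radii[np.argmax(ratio)]          # fallback: ratio maximum
print(f"r_SS_eff ~ {r_eff:.2f} R_star")
\end{verbatim}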
\section{Characterising the local environment surrounding hot-Jupiters and resultant interactions}\label{sec.planets}
All the stars in our sample host giant planets orbiting at close distances. Mercury, the closest planet to our Sun, has a semimajor orbital axis of about 0.39~au, or equivalently, of about $83~R_\odot$. The hot-Jupiters in our sample have considerably closer orbits, with semimajor axes of about $9$ to $23~R_\star$ (i.e., about $9$ to $4$ times closer than Mercury). As a consequence, the hot-Jupiters in our sample interact with much denser winds that have larger ram pressures than those typically found around the planets in the solar system. In addition, because the hot-Jupiters are located much closer to the star, the large-scale magnetic field at the orbits of these planets also has a larger strength compared to the interplanetary magnetic field strength at the solar system planets.
The orbital planes of the planets considered in this work are not known. Here, we assume their orbits lie in the equatorial plane of the star. This seems to be a reasonable hypothesis for our targets (cf. Table~\ref{tab.sample}), as planets orbiting stars cooler than $6200~$K have been observed to have small (projected) obliquities \citep{2010ApJ...718L.145W}.
Figure~\ref{fig.ptot} shows the total pressure $p_{\rm tot}$ (i.e., the sum of thermal, magnetic and ram pressures) experienced by a planet as it orbits in the equatorial plane of the star. Note that the ram pressure term must take into account the relative motion of the planet through the interplanetary medium. Here, we assume prograde motion of the planetary orbit relative to the stellar rotation. The white circles indicate the orbits of each hot-Jupiter, taken here to be circular (note that, for the systems investigated here, the eccentricities are rather small, $< 0.06$). The colour-bar is the same for the five images, illustrating that the total pressure varies from planet to planet. The last panel in Figure~\ref{fig.ptot} shows the total {\it local} pressure at the planetary orbits as a function of subplanetary longitude (see also Table~\ref{tab.resultsp}). For the cases studied here, at these orbital distances, the dominant term in the total pressure is the ram pressure of the relative motion of the planet through the wind. The values of the local total pressure are within $(0.58 - 4.1) \times 10^{-4}$~dyn~cm$^{-2}$, which are about 4 orders of magnitude larger than the ram pressure of the solar wind at the Earth's orbit ($1.8 \times 10^{-8}$~dyn~cm$^{-2}$, \citealt{2014A&A...570A..99S}). We also note that there is some variability in the local total pressure, showing that the planets interact with the varying environment of the star along their orbits. In the case of HD~73256, the amplitude of this variability is the highest among the cases studied here and is due to the peak (which is a factor of $1.9$ above the average value of $p_{\rm tot}$ of HD~73256) at $\sim 220$~deg. This peak is caused by a fast wind stream, associated with the magnetic feature seen in the surface magnetograms at longitude $\sim 225~$deg (Fig.~\ref{fig.maps}). A similar feature appears in Figures~\ref{fig.rM} and \ref{fig.radio} that we present later.
Variability on larger timescales due to intrinsic variations of the stellar magnetic field can also alter the environment surrounding planets \citep{2011MNRAS.414.1573V, 2012MNRAS.423.3285V,2013MNRAS.436.2179L}, but it is not considered in the present work.
\begin{figure*}
\includegraphics[height=53mm]{figs/f3a.png}
\includegraphics[height=53mm]{figs/f3b.png}
\includegraphics[height=53mm]{figs/f3c.png}
\\
\includegraphics[height=53mm]{figs/f3d.png}
\includegraphics[height=53mm]{figs/f3e.png}
\includegraphics[height=53mm]{figs/f3f.png}
\caption{Distribution of the total pressure $p_{\rm tot}$ experienced by a planet as it orbits in the equatorial plane of each star in our simulations. The black dashed and solid red contours are cuts of the fast magnetosonic and Alfv\'en\ surfaces of the stellar wind, respectively, at the equatorial plane. The white lines indicate the (assumed circular) orbits of the hot-Jupiters. The last panel shows the total local pressure at these orbits as a function of subplanetary longitude. \label{fig.ptot}}
\end{figure*}
\subsection{Exoplanetary bow shocks: sizes and orientations}
If a planet is magnetised, its magnetic field can act as a shield against the stellar wind, deflecting the wind particles and potentially preventing the wind from reaching down to the planetary atmosphere. A way to estimate this stand-off distance is by pressure balance between the local total pressure of the interplanetary medium (i.e., the stellar wind) and the total pressure of the planet. Thus, at the interaction zone, we have
\begin{equation}\label{eq.equilibrium}
p_{\rm tot} = \frac{B_{{p},r_M}^2}{8\pi} ,
\end{equation}
where $B_{{p},r_M}$ is the planetary magnetic field intensity at a distance $r_M$ from the planet centre. Eq.~(\ref{eq.equilibrium}) neglects the planetary thermal pressure component on the right side. Because of the exponential decay of planetary densities, at the height of a few planetary radii, the thermal pressure is usually negligible compared to the planetary magnetic pressure. If we assume the planetary magnetic field is dipolar, we have that $B_{{p},r_M} = B_{p, {\rm eq}} (R_p/r_M)^3$, where $R_p$ is the planetary radius and $B_{p, {\rm eq}}$ its surface magnetic field at the equator (half the value of the intensity at the magnetic pole). For a planetary dipolar axis aligned with the rotation axis of the star, the magnetospheric size of the planet is given by
%
\begin{equation}\label{eq.r_M}
\frac{r_M}{R_p} = \left[ \frac{B_{p, {\rm eq}}^2}{8 \pi p_{\rm tot}} \right]^{1/6}.
\end{equation}
In the absence of observational constraints, we assume the hot-Jupiters studied here to host magnetic fields similar to Jupiter's. Figure~\ref{fig.rM}a shows the magnetospheric sizes of these hot-Jupiters assuming $B_{p, {\rm eq}}=7~$G (i.e., half of Jupiter's maximum observed field of $\sim14~$G; \citealt{1975Sci...188..451S,1992AREPS..20..289B}). The average estimated magnetospheric sizes range from about $\langle r_M\rangle=4.2~R_p$ for HD~179949b to $\langle r_M\rangle = 5.6~R_p$ for HD~130322b (see Table~\ref{tab.resultsp}). Variations in $r_M$ along the planetary orbit are of the order of $10\%$. This variation occurs because, as the planet goes along its orbit and probes regions with different $p_{\rm tot}$, its magnetospheric size reacts accordingly, becoming smaller when the external $p_{\rm tot}$ is larger and vice-versa.
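These magnetospheric sizes follow directly from Eq.~(\ref{eq.r_M}); as a check, the sketch below reproduces the extremes of Table~\ref{tab.resultsp} from the tabulated average total pressures, assuming $B_{p, {\rm eq}}=7$~G:
\begin{verbatim}
import numpy as np

B_p_eq = 7.0                                 # [G]

def r_M(p_tot):
    # standoff distance in planetary radii (pressure balance)
    return (B_p_eq**2 / (8.0 * np.pi * p_tot)) ** (1.0 / 6.0)

for name, p in [("HD 179949b", 3.8e-4), ("HD 130322b", 0.62e-4)]:
    print(name, round(r_M(p), 1), "R_p")     # ~4.2 and ~5.6 R_p
\end{verbatim}
The $1/6$ exponent is what makes $r_M$ so insensitive to the local wind conditions.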
\begin{figure}
\includegraphics[width=85mm]{figs/f4a.pdf}\\
\includegraphics[width=85mm]{figs/f4b.pdf}\\
\includegraphics[width=85mm]{figs/f4c.pdf}
\caption{As a magnetised planet orbits around its host star, it probes regions of the stellar wind with different properties. As a consequence, its magnetospheric size and shock orientation change. Upper panel: the magnetospheric stand-off distance for the hot-Jupiters studied here as a function of subplanetary longitude. Middle panel: the ratio between the relative velocity of the planet and the local fast magnetosonic velocity. Bottom panel: the angle formed between the shock normal and the tangent of a circular orbit. The top and bottom figures assume the hot-Jupiters have a dipolar field of $7~$G at their equator. \label{fig.rM}}
\end{figure}
\begin{table*}
\centering
\caption{Derived characteristics of the hot-Jupiters and of their local environments. The columns are, respectively, the planet name, the averages of the local velocity of the wind in the reference frame of the planet, wind density, magnetic field strength, wind temperature, total pressure, planetary magnetospheric radius, shock angle, auroral oval opening angle, fractional area of the polar cap and radio flux. These quantities were averaged over the subplanetary longitude. Values in brackets represent the minimum and maximum of each quantity along the orbit. \label{tab.resultsp}}
\begin{tabular}{lccccccccccccc}
\hline
Planet & $ \langle \Delta u \rangle $ & $ \langle n \rangle $ & $ \langle |B| \rangle $ & $ \langle T \rangle $ & $ \langle p_{\rm tot} \rangle $ & $ \langle r_M\rangle $ & $ \langle \theta_{\rm shock} \rangle $ & $ \langle \alpha_0 \rangle $ & $ \langle A_{\rm auroral} \rangle $ & $ \langle \phi_{\rm radio} \rangle $ \\
ID & (km s$^{-1}$) & (10$^5$ cm$^{-3}$) & (mG) & ($10^6$ K) & $ (10^{-4} \frac{\rm dyn}{\rm cm^{2}}) $ & $ (R_p) $ & (deg) & (deg) & $ (A_{\rm planet}) $ & (mJy) \\ \hline
HD46375b & $ 234 $ & $ 1.8 $ & $ 8.8 $ & $ 0.87 $ & $ 1.1 $ & $ 5.1 $ & $ 52 $ & $ 26.2 $ & $ 0.10 $ & $ 0.037 $ \\
& $ [ 228 , 242 ] $ & $ [ 1.7 , 2.0 ] $ & $ [ 0.55 , 11 ] $ & $ [ 0.86 , 0.91 ] $ & $ [ 1.0 , 1.2 ] $ & $ [ 5.0 , 5.2 ] $ & $ [ 50 , 53 ] $ & $ [ 26.0 , 26.6 ] $ & $ [ 0.10 , 0.11 ] $ & $ [ 0.036 , 0.043 ] $ \\
HD73256b & $ 263 $ & $ 2.0 $ & $ 17 $ & $ 1.1 $ & $ 1.6 $ & $ 4.8 $ & $ 57 $ & $ 27.1 $ & $ 0.11 $ & $ 0.045 $ \\
& $ [ 217 , 345 ] $ & $ [ 1.6 , 2.6 ] $ & $ [ 2.6 , 26 ] $ & $ [ 0.91 , 1.6 ] $ & $ [ 1.0 , 2.7 ] $ & $ [ 4.4 , 5.2 ] $ & $ [ 44 , 67 ] $ & $ [ 26.1 , 28.5 ] $ & $ [ 0.10 , 0.12 ] $ & $ [ 0.027 , 0.081 ] $ \\
HD102195b & $ 288 $ & $ 1.5 $ & $ 14 $ & $ 0.96 $ & $ 1.3 $ & $ 5.0 $ & $ 65 $ & $ 26.6 $ & $ 0.11 $ & $ 0.067 $ \\
& $ [ 240 , 338 ] $ & $ [ 1.1 , 2.0 ] $ & $ [ 3.6 , 18 ] $ & $ [ 0.87 , 1.2 ] $ & $ [ 1.1 , 1.6 ] $ & $ [ 4.8 , 5.1 ] $ & $ [ 61 , 69 ] $ & $ [ 26.2 , 27.1 ] $ & $ [ 0.10 , 0.11 ] $ & $ [ 0.054 , 0.086 ] $ \\
HD130322b & $ 322 $ & $ 0.6 $ & $ 2.3 $ & $ 0.78 $ & $ 0.62 $ & $ 5.6 $ & $ 74 $ & $ 25.0 $ & $ 0.09 $ & $ 0.055 $ \\
& $ [ 316 , 334 ] $ & $ [ 0.6 , 0.7 ] $ & $ [ 0.36 , 2.9 ] $ & $ [ 0.77 , 0.79 ] $ & $ [ 0.58 , 0.69 ] $ & $ [ 5.5 , 5.7 ] $ & $ [ 74 , 75 ] $ & $ [ 24.8 , 25.2 ] $ & $ [ 0.09 , 0.10 ] $ & $ [ 0.053 , 0.061 ] $ \\
HD179949b & $ 243 $ & $ 5.9 $ & $ 9.6 $ & $ 0.97 $ & $ 3.8 $ & $ 4.2 $ & $ 53 $ & $ 29.3 $ & $ 0.13 $ & $ 0.112 $ \\
& $ [ 225 , 257 ] $ & $ [ 5.5 , 6.2 ] $ & $ [ 1.4 , 15 ] $ & $ [ 0.96 , 0.99 ] $ & $ [ 3.1 , 4.1 ] $ & $ [ 4.1 , 4.3 ] $ & $ [ 51 , 55 ] $ & $ [ 28.9 , 29.6 ] $ & $ [ 0.12 , 0.13 ] $ & $ [ 0.092 , 0.127 ] $ \\ \hline
\end{tabular}
\end{table*}
Over-plotted in Figure~\ref{fig.ptot} are the contours at the equatorial plane of the Alfv\'en\ surface (red lines) and the fast magnetosonic surface (black lines) of the stellar wind. In almost all the cases studied here, the planets orbit in regions where the stellar wind is super-fast magnetosonic. The exception is HD~73256b, for which a small part of its orbit (white circle) lies within the fast magnetosonic surface of the wind. This does not necessarily mean that at these orbital positions a bow shock will not form around HD~73256b's magnetosphere. Rather, it is the relative velocity of the planet orbiting through the stellar wind
\begin{equation}
\Delta {\bf u} = {\bf u}-u_K \boldsymbol{\hat{\varphi}},
\end{equation}
where $u_K$ is the (purely azimuthal) Keplerian velocity of the planet, which should be compared to the fast magnetosonic velocity of the local plasma, $v_f = (c_s^2 + v_A^2)^{1/2}$, where $c_s$ is the local sound speed. Figure~\ref{fig.rM}b shows the fast magnetosonic Mach number ($\Delta u/v_f$) calculated at the orbital radii of the hot-Jupiters, where we see that the relative planetary velocity is always super-fast magnetosonic (i.e., $\Delta u/v_f>1$), indicating that the magnetospheres of these planets are surrounded by bow shocks.
It has been proposed that these bow shocks might absorb stellar radiation at specific wavelengths, generating asymmetric transit lightcurves \citep{2010ApJ...722L.168V}. This is particularly relevant for the case of hot-Jupiters, in which the orientation of the bow shock is shifted towards the direction of planetary motion (as opposed to the bow shocks surrounding the solar system planets, which largely form facing the Sun). These `sideways' bow shocks present the best conditions for detection during planetary transits \citep{2011MNRAS.416L..41L,2013MNRAS.436.2179L}. Although the hot-Jupiters investigated here are not transiting and do not have constrained orbital inclinations, there have been cases in the literature of non-transiting (but grazing) exoplanets whose extended atmospheres might undergo partial transit \citep{2012A&A...547A..18E}. Likewise, depending on the orbital inclination, the bow shocks of non-transiting planets might be visible if they graze the stellar disc. Here, we do not model the 3D extent of bow shocks, as done in \citet{2011MNRAS.416L..41L,2013MNRAS.436.2179L}, but we can calculate the angle between the shock normal and the tangent of a circular orbit
\begin{equation}
\theta_{\rm shock} = \arctan \left( \frac{u_r}{|u_K-u_\varphi|}\right)
\end{equation}
\citep{2010ApJ...722L.168V}. Along its orbital path, the planet probes regions of the wind with different velocities, which implies that the orientation of the bow shock surrounding the planetary magnetosphere changes along the planetary orbit. This can be seen in Figure~\ref{fig.rM}c, where we present $\theta_{\rm shock}$ as a function of the subplanetary longitude. Table~\ref{tab.resultsp} lists the average shock angle $\langle \theta_{\rm shock} \rangle$ of the bow shock of each of these hot-Jupiters; these averages range from about $52^{\rm o}$ to $74^{\rm o}$.
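The quantities entering Figures~\ref{fig.rM}b and \ref{fig.rM}c can be assembled from the local wind values as sketched below; the sample numbers are hypothetical, chosen to be of the order of the HD~46375b entries in Table~\ref{tab.resultsp}:
\begin{verbatim}
import numpy as np

k_B, m_p, mu, gamma = 1.380649e-16, 1.67262e-24, 0.5, 1.1

def shock_quantities(u_r, u_phi, u_K, rho, B, T):
    c_s = np.sqrt(gamma * k_B * T / (mu * m_p))  # sound speed
    v_A = B / np.sqrt(4.0 * np.pi * rho)         # Alfven speed
    v_f = np.hypot(c_s, v_A)                     # fast speed, as in text
    du = np.hypot(u_r, u_K - u_phi)              # |Delta u|
    theta = np.degrees(np.arctan2(u_r, abs(u_K - u_phi)))
    return du / v_f, theta

mach, theta = shock_quantities(u_r=1.9e7, u_phi=1.0e6, u_K=1.4e7,
                               rho=1.5e-19, B=8.8e-3, T=8.7e5)
print(f"fast Mach ~ {mach:.1f}, theta_shock ~ {theta:.0f} deg")
\end{verbatim}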
\subsection{Exoplanetary auroral ovals: escape channels and radio emission}\label{sec.polarflows}
As the planetary magnetosphere shrinks, the size of the `auroral oval', i.e.\ the region of the planetary surface with open magnetic field lines, increases. Along these open field lines, particles can be transported to/from the interplanetary space, affecting, for instance, the amount of atmospheric mass loss \citep{2011ApJ...730...27A}. We estimate the size of the auroral region of the planet as follows. Assuming the planet has a dipolar magnetic field, aligned with the planetary orbital spin axis, the colatitude of the largest closed field line of the planet, which defines the boundary between open- and closed-field-line regions, can be estimated as $\alpha_0=\arcsin [({R_p}/{r_M})^{1/2}]$ \citep{1975JGR....80.4675S,2010Sci...327.1238T}. This implies that the fractional area of the planetary surface with open magnetic field lines is
\begin{equation}\label{eq.area}
\frac{A_{\rm polar~cap}}{A_{\rm planet}} = (1-\cos \alpha_0),
\end{equation}
%
\citep{2013A&A...557A..67V}. Therefore, in addition to making $r_M$ smaller, a stronger external pressure of the stellar wind exposes a larger area of the polar cap of the planet. Table~\ref{tab.resultsp} shows the average, minimum and maximum angles of the auroral ovals $\langle \alpha_0 \rangle$ and fraction of open area $\langle A_{\rm polar~cap} \rangle$ as calculated by Eq.~(\ref{eq.area}). For the hot-Jupiters analysed here, $\langle \alpha_0 \rangle$ ranges between $25^{\rm o}$ and $29^{\rm o}$, and $\langle A_{\rm polar~cap} \rangle$ ranges between $9\%$ and $13\%$. For comparison, the size of the auroral oval in Saturn is $\alpha_0 \simeq 10^{\rm o}$ -- $20^{\rm o}$ \citep{2005Natur.433..717C} and at the Earth it is $\alpha_0 \simeq 17^{\rm o}$ -- $20^{\rm o}$ \citep{2009AnGeo..27.2913M}. Using Eq.~(\ref{eq.area}), a rough estimate indicates that the open-field-line region covers $\sim 1.5\%$ -- $6 \%$ of Saturn's surface and $\sim 4.5\%$ -- $6 \%$ of Earth's surface. This is a factor of $\sim 2$ smaller than the values we derive for the hot-Jupiters in our sample, but not as extreme as the cases of planets orbiting at the habitable zone of more active stars \citep{2013A&A...557A..67V}.
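The sketch below evaluates $\alpha_0$ and the open-area fraction of Eq.~(\ref{eq.area}) at the two extreme average magnetospheric sizes of Table~\ref{tab.resultsp}:
\begin{verbatim}
import numpy as np

def auroral_oval(r_M_over_Rp):
    alpha0 = np.arcsin(np.sqrt(1.0 / r_M_over_Rp))  # oval colatitude
    return np.degrees(alpha0), 1.0 - np.cos(alpha0)

for rM in (4.2, 5.6):
    a, f = auroral_oval(rM)
    print(f"r_M = {rM} R_p: alpha_0 ~ {a:.0f} deg, "
          f"open area ~ {100.0 * f:.0f}%")
# ~29 deg / 13% and ~25 deg / 9%, bracketing the tabulated values
\end{verbatim}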
Planetary radio emission takes place in a hollow cone of half-aperture angle given by the auroral oval co-latitude $\alpha_0$. It has been recognised that the radio emission of the Earth and the four giant planets of the solar system correlates with the local characteristics of the solar wind \citep[e.g.,][]{1998JGR...10320159Z}, an indication that radio emission is powered by the local solar wind. Analogously, it is expected that, when exoplanets interact with the winds of their host stars, they should also be sources of radio emission.
We use the results of our stellar wind simulations to calculate the kinetic power of the wind at the orbital radii of the hot-Jupiters studied here. Our approach closely follows the one in \citet{2012MNRAS.423.3285V}. The kinetic power $P_k$ of the wind impacting on the planet is approximated as the ram pressure of the particles, $\rho (\Delta u)^2$, swept at the relative velocity $\Delta {\bf u}$ across the effective cross-section of the planetary magnetosphere, $\pi r_M^2$:
\begin{equation}\label{eq.pK}
P_k \simeq \rho (\Delta u)^3 \pi r_M^2 .
\end{equation}
The radio flux can be written as
\begin{equation}\label{eq.radioflux}
\phi_{\rm radio} = \frac{P_{\rm radio}}{d^2 \omega \Delta f} = \frac{\eta_k P_{\rm k}}{d^2 \omega \Delta f}
\end{equation}
where $d$ is the distance to the system, $\omega = 2\times 2 \pi (1 - \cos \alpha_0)$ is the solid angle of the hollow emission cone (defined by the auroral oval), and $\Delta f$ is the bandwidth of the emission. In the last equality, we assumed a linear efficiency $\eta_k$ in converting the power released from the dissipation of kinetic wind energy to radio emission (`radiometric Bode's law'). We adopt $\eta_k = 10^{-5}$, as derived from observations of the Solar System planets \citep{2007P&SS...55..598Z}. Here, we assume that the emission bandwidth $\Delta f$ is approximately the cyclotron frequency \citep{2007P&SS...55..618G}:
\begin{equation}\label{eq.fcyc}
\Delta f = \frac{e B_p(\alpha_0) }{2 \pi m_e c} = 2.8 \left[ \frac{B_p(\alpha_0)}{1~{\rm G}}\right] ~{\rm MHz} ,
\end{equation}
where $m_e$ is the electron mass and $c$ the speed of light. $B_p(\alpha_0)$ is the planet's magnetic field strength at colatitude $\alpha_0$ of the auroral ring. For a dipolar field, $B_p(\alpha_0)= B_{p, {\rm eq}} (1 + 3 \cos^2 \alpha_0)^{1/2}$.
To compute the radio flux (Eq.~\ref{eq.radioflux}), we need to know the physical size of $r_M$.
This value, normalised to the planet's radius, is given in Figure~\ref{fig.rM}a, and we further assume a planetary radius of $1.5~R_{\rm Jup}$ for all the hot-Jupiters analysed in this work (note that they are non-transiting planets and therefore do not have observationally-determined radii). Eq.~(\ref{eq.radioflux}) is the only place where the physical size of the exoplanet is required, and different choices of $R_p$ influence the estimated radio flux as $\phi_{\rm radio} \propto r_M^2 \propto R_p^2$.
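Chaining Eqs.~(\ref{eq.pK})--(\ref{eq.fcyc}) gives the full flux estimate. As a worked example, the sketch below uses the averaged HD~179949b environment of Table~\ref{tab.resultsp} and recovers both the $\sim 36$~MHz emission frequency and a flux close to the tabulated $0.112$~mJy:
\begin{verbatim}
import numpy as np

e_esu, m_e, c = 4.8032e-10, 9.1094e-28, 2.9979e10  # cgs constants
R_jup, pc = 7.1492e9, 3.0857e18
eta_k, B_p_eq = 1.0e-5, 7.0

rho = 5.9e5 * 0.5 * 1.67262e-24  # <n> = 5.9e5 cm^-3, mu = 0.5
du = 2.43e7                      # <Delta u> = 243 km/s, in cm/s
r_M = 4.2 * 1.5 * R_jup          # <r_M> = 4.2 R_p with R_p = 1.5 R_Jup
d = 27.0 * pc                    # distance to HD 179949
alpha0 = np.radians(29.3)        # <alpha_0> from the table

B_oval = B_p_eq * np.sqrt(1.0 + 3.0 * np.cos(alpha0) ** 2)
df = e_esu * B_oval / (2.0 * np.pi * m_e * c)       # cyclotron freq [Hz]
omega = 2.0 * 2.0 * np.pi * (1.0 - np.cos(alpha0))  # hollow cone [sr]
P_k = rho * du**3 * np.pi * r_M**2                  # kinetic power [erg/s]
flux = eta_k * P_k / (d**2 * omega * df)            # [erg/s/cm^2/Hz]
print(f"P_k ~ {P_k:.1e} erg/s, f ~ {df / 1e6:.0f} MHz, "
      f"flux ~ {flux / 1e-26:.2f} mJy")             # 1 mJy = 1e-26 cgs
\end{verbatim}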
Figure~\ref{fig.radio}a shows the radio flux computed using the results of our wind simulations and Figure~\ref{fig.radio}b shows the calculated frequency of emission. We find that the predicted emission frequency occurs at $\sim 36$~MHz and the radio fluxes range between $0.02$ and $0.13$~mJy among all the cases studied here (see also Table~\ref{tab.resultsp}). Radio fluxes such as these (including the peak values that occur at favourable phases) should be challenging to observe with present-day technology, such as LOFAR, whose sensitivity at $20$ and $40$~MHz is $\gtrsim 30$ and $\gtrsim 3$~mJy, respectively, for a one-hour integration time \citep{2011RaSc...46.0F09G}. It is likely, however, that even these small radio fluxes will be detectable with future higher sensitivity arrays, such as the SKA-low array system.
\begin{figure}
\includegraphics[width=85mm]{figs/f5a.pdf}\\
\includegraphics[width=85mm]{figs/f5b.pdf}
\caption{{The predicted radio flux (Eq.~\ref{eq.radioflux}) computed using the results of our wind simulations (top) and associated frequency of emission (bottom) assuming the emission bandwidth is the cyclotron frequency (Eq.~\ref{eq.fcyc}). These results assume a dipolar exoplanetary magnetic field, whose intensity is $7~$G at the equator.} \label{fig.radio}}
\end{figure}
Among the systems studied here, HD~179949b has the highest estimated radio flux. This occurs for two reasons. First, this exoplanet has the closest orbital radius and, because of that, $\rho \Delta u^3$ is the largest among our sample; for the same reason, it also has the smallest $r_M$ (cf.~Tables \ref{tab.sample} and \ref{tab.resultsp}). Despite the smallest cross-section $\pi r_M^2$, the large $\rho \Delta u^3$ term dominates in Eq.~(\ref{eq.pK}), resulting in the largest stellar wind kinetic power impacting on a planetary magnetosphere among the exoplanets studied here. Second, HD~179949 is also the closest of these systems, which further favours a larger radio flux (Eq.~\ref{eq.radioflux}).
It is also worth comparing the emission calculated here and the values calculated for $\tau$~Boo~b and HD~189733b\footnote{Note that the stellar wind simulations presented in \citet{2012MNRAS.423.3285V} and \citet{2013MNRAS.436.2179L} have the same assumptions as the ones shown in the present work.}. Using the same radio emission model presented here, \citet{2012MNRAS.423.3285V} estimated the radio flux of $\tau$~Boo~b at different epochs of the host star's magnetic cycle. They found the radio flux of $\tau$~Boo~b to be of the order of $0.5$ -- $0.9$~mJy. We can also use the simulations presented in \citet{2013MNRAS.436.2179L} to compute the radio flux of HD~189733b. Assuming a planetary radius of $R_p = 1.15~R_{\rm Jup}$ and a distance of $19.3~$pc, we calculate the radio flux of HD~189733b to be on average $0.47$~mJy (peak at 0.98~mJy) for the case where the observed stellar magnetic map is derived from the 2007~June observations and $0.23~$mJy (peak at $0.51$~mJy) for the 2008~July map (cf.~\citealt{2010MNRAS.406..409F}).
The radio fluxes computed for $\tau$~Boo~b and HD~189733b are therefore considerably larger than the values computed for the exoplanets presented here, having better prospects for being detected. The reason why these two systems have higher radio fluxes is similar to the reasons discussed for the case of HD~179949b: a combination of closer orbital radii ($6.8~R_\star$ and $8.6~R_\star$ for $\tau$~Boo~b and HD~189733b, respectively) and closer distances to the systems ($15.6$ and $19.3$~pc). It is also expected that exoplanets orbiting young stars (with denser stellar winds) are likely to produce higher radio fluxes \citep{2005A&A...437..717G,2010ApJ...720.1262V}, presenting also better prospects for detection of exoplanetary radio emission.
Radio fluxes for the $5$ hot-Jupiters studied here have also been estimated by other authors. For instance, \citet{2011RaSc...46.0F09G} predicted radio fluxes that are larger than the values predicted here by a factor of $500$ -- $2000$ (compared to the case of their rotation-independent planetary magnetic field model). Although our radio emission model is similar to the one used in \citet{2011RaSc...46.0F09G} (i.e., both models assume a `radiometric Bode's law', in which a fraction of the dissipated wind power is converted into planetary radio emission), we attribute the difference between their work and the present one to the different models assumed for the stellar wind and stellar magnetic field, as well as to the assumed planetary magnetic field intensities. For the stellar wind, \citeauthor{2011RaSc...46.0F09G}'s work assumes a spherically symmetric, isothermal wind model \citep{1958ApJ...128..664P}. The velocity and density structures are scaled with respect to the age of the system, based on the age relations found by \citet{1980asfr.symp..293N} and \citet{2005ApJ...628L.143W}. For the planetary magnetic field, \citeauthor{2011RaSc...46.0F09G}'s work assumes either a case where the planetary dynamo is independent \citep{2010A&A...522A..13R} or dependent \citep{2004A&A...425..753G} on the planetary rotation. \citet{2011RaSc...46.0F09G} showed that the intensity of the planetary magnetic field affects the frequency of the emission (as in our model) and that the radio flux has a strong dependence on the intensity of the planetary magnetic field (contrary to our model). More recently, \citet{see2015} studied the variability of exoplanetary radio emission for a sample of planet-host stars, which includes the objects studied in the present work. Similar to our model, their model incorporates the realistic large-scale geometry of the stellar magnetic field, but their radio emission model differs from ours. Their study was based on the model developed by \citet{2008A&A...490..843J}, which computes the radio emission generated by energetic electrons released in the reconnection between stellar and exoplanetary magnetic field lines, without assuming an a priori `radiometric Bode's law'. Despite the differences between these models, the radio fluxes estimated by \citet{see2015} and by us are very similar, within a factor of $1$ -- $4$, except for HD~130322b, for which we estimate radio fluxes that are $100$ times larger than theirs. In addition to providing information on exoplanetary magnetic fields, detection of exoplanetary radio emission would clearly provide invaluable constraints for stellar wind models as well.
\section{Discussion}\label{sec.discussion}
\subsection{Limitations of the models}\label{sec.limitations}
The stellar wind models presented in this paper use as input the observationally reconstructed stellar magnetic field and are, therefore, more realistic (and provide an advance) compared to models that are non-magnetised or that assume simplified stellar magnetic field topologies. In spite of that, our wind models share the limitations of global, polytropic wind models. In particular, these types of models have three parameters that are poorly constrained by observations, namely, the wind base density and temperature and the temperature profile (i.e., the profile of energy deposition through the parameter $\gamma$). In this work, we have chosen to set all three of these parameters to be the same for all the stars in our sample. On the other hand, parameters such as the stellar mass, radius, rotation period and magnetic field differ for each object and are constrained to the values observationally derived for each star (Table~\ref{tab.sample}).
\citet{Johnstone2015} recently showed that the average temperature of X-ray coronae $\langle T_{\rm cor} \rangle$ is a weak function of X-ray flux $F_X$: $\langle T_{\rm cor} [{\rm MK}] \rangle= 0.11 (F_X/[{\rm erg~s}^{-1}{\rm cm}^{-2}])^{0.26}$ (see also \citealt{2005ApJ...622..653T}). Using their relation, the X-ray luminosities compiled in \citet{2014MNRAS.441.2361V} and the stellar radii from Table~\ref{tab.sample}, we estimate $\langle T_{\rm cor} \rangle$ to be in the range between $2$ and $3.6$~MK for the stars in our sample. Naively, one could expect the temperature at the base of the wind to be related to the temperature of the closed X-ray corona (as is the case for our Sun), but it is not clear whether this relation holds for other stars. Therefore, in the absence of a stronger constraint, in our models, we adopt a wind base temperature of $2$~MK, typical of stellar coronae of solar-type stars. We adopt the same $\gamma$ as the effective adiabatic index measured in the solar wind \citep{2011ApJ...727L..32V}. For the base density, we adopted a value of $10^9~{\rm cm}^{-3}$. Ideally, observations of mass-loss rates of cool dwarf stars would allow us to place better constraints on the densities. However, the scarcity (and difficulty) of observational signatures of these winds makes constraints on the base density (or on mass-loss rates) challenging to obtain.
To investigate how our results change with the wind base density, we performed a stellar wind simulation of HD~46375 that results in a mass-loss rate ($\dot{M}=2.9 \times 10^{-14}~{\rm M}_\odot ~{\rm yr}^{-1}$) similar to the one observed in the solar wind ($\dot{M}=2 \times 10^{-14}~{\rm M}_\odot ~{\rm yr}^{-1}$). Compared to the values of HD~46375 reported in Table~\ref{tab.results}, in this simulation, we found a mass-loss rate that is a factor of $6.5$ smaller, $\dot{J}$ that is a factor $3$ smaller and $\Phi_{\rm open}$ that is a factor $1.3$ smaller.
Locally, the hot-Jupiter HD~46375b experiences a total external pressure whose average value (averaged over the longitude of the subplanetary point) is a factor of $5.6$ smaller than the value presented in Table~\ref{tab.resultsp}. Because $r_M$ is weakly dependent on $p_{\rm tot}$ ($r_M \propto p_{\rm tot}^{-1/6}$), $r_M$ increases by a factor of only $1.3$ with respect to the value estimated before. In spite of the larger cross-section of the planetary magnetosphere, the radio flux decreases by a factor of $2.3$, caused by the decrease in the ram pressure [Eq.~(\ref{eq.pK})].
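The weak sensitivity quoted above is just the $1/6$ exponent of Eq.~(\ref{eq.r_M}) at work:
\begin{verbatim}
factor_p = 5.6                            # drop in local total pressure
print(f"{factor_p ** (1.0 / 6.0):.2f}")   # r_M grows by only ~1.33x
\end{verbatim}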
Another parameter we have assumed in our models is the planetary magnetic field intensity. As discussed in Section~\ref{sec.introBp}, this is a quantity that has not yet been measured in exoplanets. Here, we adopted a magnetic field intensity similar to that of Jupiter. We can also estimate how the magnetospheric sizes presented in Fig.~\ref{fig.rM}a would change if a different field strength were adopted. Because $r_M \propto B_p^{1/3}$, a field strength that is a factor of $2$ smaller would decrease the reported values of $r_M$ by $2^{1/3}$ (i.e., by only $\sim 20\%$). This would not significantly alter the computed radio flux (our radio flux model is weakly dependent on the planetary field strength; see discussion in \citealt{2012MNRAS.423.3285V}), but it would decrease the frequency of the emission by a factor of $2$, making it impossible to observe from the ground because of the Earth's ionospheric cut-off. Indeed, one possible reason why exoplanetary radio emission has not been detected so far is a frequency mismatch between the emission source and the search instruments \citep{2000ApJ...545.1058B}.
\subsection{Exoplanetary system conditions for detectability at radio wavelengths}
Because of the cyclotron nature of the magnetospheric radio emission, exoplanets with higher magnetic field strengths emit at higher frequencies, where the instrumental sensitivity is better. For instance, an exoplanet with a magnetic field of about $40$ -- $50$~G emits in the frequency range between $110$ and $140$~MHz. The sensitivity of LOFAR at $100$ to $200$~MHz is roughly $0.05$~mJy (cf.~Fig.~1 in \citealt{2011RaSc...46.0F09G}). This indicates that, except for HD~46375b, all the remaining exoplanets studied here could in principle be detectable with LOFAR if their magnetic field strengths were about $40$ -- $50$~G. Compared to Jupiter's maximum field intensity, these field strengths are about $3$ times higher.
We can also estimate the dissipated stellar wind power required to generate detectable radio signatures from the exoplanets studied here. In this exercise, we take the same exoplanetary magnetic field assumed in Section~\ref{sec.planets} (i.e., $B_{p, {\rm eq}}=7~$G). With such a magnetic field intensity, the frequency of emission is around $36$~MHz, where the LOFAR sensitivity for a one-hour integration time is about a few mJy. The radio power calculated in Section~\ref{sec.planets} yielded values of about $(1.6$ -- $5.6) \times 10^{25}$ erg~s$^{-1}$. For a radio flux of a few mJy, the required radio power of the exoplanets studied here should be higher, in the range of $(1.1$ -- $2.1) \times 10^{27}$ erg~s$^{-1}$. To have a radio power (or, equivalently, a wind kinetic power) that is roughly 2 orders of magnitude larger, the stellar wind characteristics need to change -- either by increasing the density of the stellar wind or its velocity or both, as demonstrated next.
From equations (\ref{eq.r_M}), (\ref{eq.pK}) and (\ref{eq.radioflux}), and assuming a ram pressure-dominated wind, one can show that
\begin{equation}
\rho^2 \Delta u^7 \sim \frac{8 P_k^3}{\pi^2 R_p^6 B_p^2},
\end{equation}
such that the ratio between the values required for a radio flux of about 3~mJy and the values calculated in Section~\ref{sec.planets} is
\begin{equation}\label{eq.estimate}
\frac{[\rho^2 \Delta u^7]_{(3 {\rm mJy})}}{[\rho^2 \Delta u^7]_{\rm (Sect.~5)} } \sim \left( \frac{P_{k(3 {\rm mJy})}}{P_{k (\rm Sect.~5)} }\right)^3 \sim (0.5 ~{\rm to~} 3.3)\times10^5.
\end{equation}
A very crude estimate\footnote{Note that this approach is not a self-consistent one, because in this scenario the structures of the wind temperature, velocity and magnetic field are not modified (i.e., we assume they remain unchanged from the structures derived in Section~\ref{sec.results}). In the self-consistent approach, if either the density of the wind or its velocity is modified, one needs to solve the coupled MHD equations to derive all the remaining quantities of the wind. However, this back-of-the-envelope calculation can give a rough estimate of how much larger the stellar wind power should be for the radio emission to reach values above the sensitivity limit of a couple of mJy.} then tells us that either the wind density needs to increase by a factor of at least $\sim 300$ to $600$ (i.e., the square root of the values derived in Eq.~(\ref{eq.estimate})) or the velocity requires an increase of a factor of at least $\sim 5$ to $7$ (i.e., the seventh root of the values in Eq.~(\ref{eq.estimate})). Alternatively, density and velocity should both change such that they obey relation (\ref{eq.estimate}).
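The density and velocity factors quoted above are simply the square root and the seventh root of Eq.~(\ref{eq.estimate}):
\begin{verbatim}
for X in (0.5e5, 3.3e5):          # range from the estimate above
    print(f"X = {X:.1e}: density x{X ** 0.5:.0f}"
          f" or velocity x{X ** (1.0 / 7.0):.1f}")
# density factors of a few hundred, velocity factors of ~5 to 6
\end{verbatim}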
From Table~\ref{tab.resultsp}, a $5$ to $7$ times increase in the wind velocity implies a relative velocity $\gtrsim 1200$~km/s, which is $50\%$ larger than the speed of the fast solar wind and 3 times larger than the slow solar wind speed. In terms of density, an increase of $\sim 300$ to $600$ roughly implies a similar increase in mass-loss rates and, from Table~\ref{tab.results}, this would result in $\dot{M}$ of at least $(2.9$ -- $24) \times10^3$ times the solar wind mass-loss rate. Such mass-loss rates are typical of very young stars, indicating that exoplanets orbiting young Suns are more likely to produce detectable levels of radio fluxes \citep{2005A&A...437..717G,2010ApJ...720.1262V}.
\section{Summary and conclusions}\label{sec.conclusions}
In this work we have investigated the interplanetary media surrounding five hot-Jupiters, namely HD~46375b, HD~73256b, HD~102195b, HD~130322b and HD~179949b. For that, we carried out 3D MHD stellar wind simulations, which incorporate as boundary conditions the stellar surface magnetic fields reconstructed by \citet{2012MNRAS.423.1006F,2013MNRAS.435.1451F} using the Zeeman-Doppler Imaging technique. The global characteristics of our wind models are presented in Table~\ref{tab.results}.
We then calculated the {\it local} characteristics of the stellar winds at the orbital radius of the hot-Jupiters, in order to characterise the interplanetary medium surrounding these exoplanets. In particular, we calculated the total pressure of the interplanetary medium and estimated what would be the size of planetary magnetospheres in case these hot-Jupiters had a magnetic field similar to Jupiter's field. We found that magnetospheric sizes range between $4.1$ and $5.6~R_p$ and that they can vary by a few percent due to variations in the external environment of the planets, as they orbit around their parent stars.
We also demonstrated that the orbital motions of these planets are super-fast magnetosonic, indicating that bow shocks should form around their magnetospheres. The bow shock orientations (i.e., the angle between the shock normal and the tangent of the circular orbit) are of intermediate type, forming at angles between those of a shock facing the motion of the planet (`ahead shock') and of one facing the star along the line connecting the star--planet centres (`dayside shock').
We also calculated the size of the auroral ovals of these planets. Inside these ovals, the planetary magnetic field lines are open, through which particles from the star and from the cosmos can penetrate, and through which planetary atmospheric particles can escape via polar flows. On average, the auroral ovals we calculated have a half-opening angle of about $25^{\rm o}$ to $29^{\rm o}$, leaving exposed about $9\%$ to $13\%$ of the planetary area, which is a factor of $\sim 2$ larger than estimates for the Earth's and Saturn's auroral caps. Finally, we estimated the radio flux of these planets, using the analogy observed in the solar system, in which the radio emission from the magnetised planets is correlated with the solar wind power. We found small radio fluxes ranging from $0.02$ to $0.13$~mJy, which should be challenging to observe with present-day technology (e.g., LOFAR; \citealt{2011RaSc...46.0F09G}), but could be detectable with higher sensitivity arrays, such as the SKA-low array system. Systems with closer-in hot-Jupiters, such as $\tau$~Boo~b (radio flux of the order of $0.5$ -- $0.9$~mJy, \citealt{2012MNRAS.423.3285V}) and HD~189733~b ($0.5$ -- $1$~mJy, calculated using the simulations from \citealt{2013MNRAS.436.2179L} and the same model as presented here), or nearby planetary systems orbiting young stars \citep{2005A&A...437..717G,2010ApJ...720.1262V}, are likely to have higher radio fluxes, thus presenting better prospects for the detection of exoplanetary radio emission.
\section*{Acknowledgements}
AAV acknowledges support from the Swiss National Science Foundation through an Ambizione Fellowship. RF acknowledges support from a STFC grant. The results of this work are based on observations acquired at
CFHT/ESPaDOnS and TBL/NARVAL. This work was carried out using the BATS-R-US tools developed at The University of Michigan Center for Space Environment Modeling (CSEM) and made available through the NASA Community Coordinated Modeling Center (CCMC). This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID s516. This work used the DiRAC Data Analytic system at the University of Cambridge, operated by the University of Cambridge High Performance Computing Service on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant (ST/K001590/1), STFC capital grants ST/H008861/1 and ST/H00887X/1, and STFC DiRAC Operations grant ST/K00333X/1. DiRAC is part of the National E-Infrastructure.
\def\aj{{AJ}}
\def\araa{{ARA\&A}}
\def\apj{{ApJ}}
\def\apjl{{ApJ}}
\def\apjs{{ApJS}}
\def\ao{{Appl.~Opt.}}
\def\apss{{Ap\&SS}}
\def\aap{{A\&A}}
\def\aapr{{A\&A~Rev.}}
\def\aaps{{A\&AS}}
\def\azh{{AZh}}
\def\baas{{BAAS}}
\def\jrasc{{JRASC}}
\def\memras{{MmRAS}}
\def\mnras{{MNRAS}}
\def\pra{{Phys.~Rev.~A}}
\def\prb{{Phys.~Rev.~B}}
\def\prc{{Phys.~Rev.~C}}
\def\prd{{Phys.~Rev.~D}}
\def\pre{{Phys.~Rev.~E}}
\def\prl{{Phys.~Rev.~Lett.}}
\def\pasp{{PASP}}
\def\pasj{{PASJ}}
\def\qjras{{QJRAS}}
\def\skytel{{S\&T}}
\def\solphys{{Sol.~Phys.}}
\def\sovast{{Soviet~Ast.}}
\def\ssr{{Space~Sci.~Rev.}}
\def\zap{{ZAp}}
\def\nat{{Nature}}
\def\iaucirc{{IAU~Circ.}}
\def\aplett{{Astrophys.~Lett.}}
\def\apspr{{Astrophys.~Space~Phys.~Res.}}
\def\bain{{Bull.~Astron.~Inst.~Netherlands}}
\def\fcp{{Fund.~Cosmic~Phys.}}
\def\gca{{Geochim.~Cosmochim.~Acta}}
\def\grl{{Geophys.~Res.~Lett.}}
\def\jcp{{J.~Chem.~Phys.}}
\def\jgr{{J.~Geophys.~Res.}}
\def\jqsrt{{J.~Quant.~Spec.~Radiat.~Transf.}}
\def\memsai{{Mem.~Soc.~Astron.~Italiana}}
\def\nphysa{{Nucl.~Phys.~A}}
\def\physrep{{Phys.~Rep.}}
\def\physscr{{Phys.~Scr}}
\def\planss{{Planet.~Space~Sci.}}
\def\procspie{{Proc.~SPIE}}
\def\actaa{{Acta~Astronomica}}
\def\pasa{{Publications of the ASA}}
\def\na{{New Astronomy}}
\def\icarus{{Icarus}}
\let\astap=\aap
\let\apjlett=\apjl
\let\apjsupp=\apjs
\let\applopt=\ao
\let\mnrasl=\mnras
Mobile robotic systems that move autonomously in complex environments are becoming more prevalent. However, the perceptual input available to a mobile robot, for example from computer vision, is uncertain. Therefore it is {\em not possible to certify that a path is collision-free}. When such robots perform complex manoeuvres among obstacles, absolute safety cannot be guaranteed~\cite{r19}. Instead, robots can operate within a controlled level of risk of collision~\cite{r1}.
Typically, the environment is perceived through sensors such as stereo vision or LIDAR. Uncertainty arises directly from sensor noise, and then indirectly through perception algorithms that detect discrete
obstacles~\cite{r22, r23} or impassable terrain. It is therefore realistic to expect each perception module to output a {\em probability distribution}~\cite{r2} over pose, for each detected object. Then, given these random variables from detector outputs, candidate paths can be assessed for the risk of collision.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-trajectory-risk.png}
\caption{\small{\bf Robot motion problem.} Data taken from an aerial view of a car-park. The obstacles here are 35 cars with shapes given by the green bounding boxes, and uncertainties in location visualised as a green halo. The robot vehicle (blue) has a set of candidate paths, evaluated here as high risk (red) to low risk (blue) according to the scale shown. Bounds on collision risk are represented by different colours as on the logarithmic scale shown. Some higher risk paths involve squeezing through a narrow gap.} \label{f:trajectory-risk}
\end{figure}
The FPR algorithm introduced here efficiently computes a bound $F_{\rm D}$ on the risk of collision. Note that a bound on risk would not be useful for determining an optimal path. However, the problem in this paper is different: to select paths whose risk falls below a certain threshold. For that, a bound is entirely usable.
\subsection{Specifying the Problem}
It is assumed that a static freespace in the plane is defined deterministically, and that a robot of known shape $A$ translates and rotates in that freespace. In addition, discrete ``tethered'' obstacles $k=1\ldots,K$ of known shape and uncertain location are perceived by the robot's sensor systems. Of course there is a great deal of work in Robotics addressing uncertain estimation of robot location. This has been so successful (e.g.\cite{Cremers13}) that here we assume that uncertainty in robot location is negligible compared with uncertainty in the locations of perceived obstacles.
The problem is then, over a (short) time-interval $t\in\,[0,\ldots,T]$ to:
\begin{enumerate}
\item{\bf Generate} $N$ candidate paths in configuration space $\Re^2 \times S$ for the robot. Each such path then sweeps out a shape $A$ in the plane $\Re^2$.
\item{\bf Bound} the risk of collision: a bound $F_{\rm D}$ on risk is computed for each candidate path.
\end{enumerate}
The main contribution of the paper relates not to 1. above, for which off-the-shelf methods are used, but to 2. where we introduce a novel, fast computation of a bound on the risk of collision for a given path. Its computational complexity is $O(N+K)$, compared with the naive $O(NK)$. Once the bound $F_{\rm D}$ has been computed for the first path, the cost of computing the bound for subsequent paths is {\em independent} of the {\em number of obstacles} $K$.
The following inputs to the risk computation are assumed:
\begin{enumerate}
\item{\bf Freespace}: it is assumed that freespace $F$ is initially defined as a subset of $\Re^2$. Then, within $F$, further, tethered obstacles are defined stochastically as below.
\item{\bf Robot}: assumed to have a deterministic spatial extent and to be manoeuvrable in translation and rotation. Over the interval $t\in\,[0,\ldots,T]$, it sweeps out the set $A$ in the plane.
\item{\bf Obstacle shape}: the $k^{\rm th}$ obstacle is assumed to have deterministic shape and mean orientation represented by the set $B_k$.
\item{\bf Tethered obstacle}: tethering here means specifying a probability distribution $p_k({\bf r})$ for the location of the $k^{\rm th}$ obstacle, where ${\bf r}=(x, y)$ are coordinates in the plane. This takes into account: i) variability arising from any possible motion over the time-interval $[0,\ldots,T]$; and ii) modelled uncertainty in object detector output. Obstacle locations are assumed to be mutually independent --- arising from independent sensor observation/detection.
\item{\bf Obstacle rotation:} treated as part of its shape so, to the extent that an obstacle may rotate over the time interval $[0,\ldots,T]$, that rotation must be absorbed in a deterministic expansion of the set $B_k$.
\end{enumerate}
This treatment of obstacle rotation is a limitation of our framework which, however, is reasonable if the time interval $[0,\ldots,T]$ is short, so rotation is limited. For longer time intervals it may be necessary to consider the full $xyt$-space, rather than more simply the $xy$-plane as in this paper. However our treatment is fully general in the rotation of the robot.
\subsection{Existing approaches}
We review some prominent approaches to computing the risk of collision under uncertainty. There are a number of recent approaches to estimating the risk of collision under uncertain robot dynamics \cite{ud1,ud2,ud3}. In our problem it is the environment that is uncertain rather than the dynamics. One approach to this problem involves casting ``shadows'' around obstacles~\cite{r13} but that does not facilitate the resolution of uncertainty from multiple different sources. Probability density functions can usefully model robot and obstacle uncertainty, as in~\cite{ab2}, which however requires Monte-Carlo computation and has $O(NK)$ complexity. Bevilacqua et al.~\cite{ab3} model obstacles stochastically, but deal with just one obstacle, and do not allow for sensor or perceptual uncertainty. Empirical probability distributions can also be useful~\cite{ab4} in the case of a single obstacle. Alternatively Althoff et al.~\cite{ab5} elegantly avoid Monte Carlo computation by compiling stochastic reachability of moving obstacles down to finite Markov Chains, but the risk computation remains $O(NK)$.
Probabilistic Occupancy Grids are an established mechanism for dealing with spatial uncertainty probabilistically~\cite{r3,r10a} and can be used to find paths. However, to calculate the risk of collision along a path with numerous obstacles, a single grid is not enough. Laugier and collaborators~\cite{r10b,r10} show that certain ``Laugier integrals'' (our term) over multiple grids, one grid per point-obstacle, can be combined nonlinearly to compute the total risk of collision. We build on this approach.
An important question is then whether the combination of the Laugier integrals can be simplified somehow, despite the nonlinearity. For example, if the set of obstacles could somehow be replaced by the union of obstacles, that could live on a single grid, it would simplify computation. However, given that obstacles are each defined here not just by their shape but also by the uncertainty in their location, it turns out that constructing a composite obstacle as a trivial union of obstacle shapes is not valid. Therefore, in the FPR algorithm, we derive and justify a non-trivial combination of shape properties and location distributions, onto just 2 grids. This ultimately leads to the qualitative improvement in computation time of the FPR algorithm.
\subsection{Main contributions}
Note that this paper is not about path-planning {\it per se}. It claims no new contribution whatsoever to the extensive science of path-planning~\cite{Latombe12}. Its novel contribution is entirely directed at the efficient computation of the risk of collision.
Our principal contributions are as follows.
\begin{enumerate}
\item A linearisation of the Laugier integrals scheme gives a close approximation and a bound on the risk, and allows the entire computation to be done over just two grids, regardless of the number of obstacles. That reduces computational complexity from $O(NK)$ to $O(N+K)$, for $K$ point obstacles and $N$ paths. So far, this applies only to point obstacles, not obstacles of finite size.
\item The Laugier integrals can however be extended by means of Minkowski sums to apply to obstacles of finite size.
That, together with a new ``convolution trick'', leads to the FPR algorithm for computing a bound on the risk of collision with finite, tethered obstacles. The computational complexity of the FPR algorithm is $O(N+K)$, as desired.
\item Simulations quantify the difference between the FPR bound on risk and the true risk, under various circumstances.
\item Simulations with simulated and real data show that the $O(N+K)$ computational complexity does indeed lead to substantial reductions in practical computation times.
\end{enumerate}
\section{EFFICIENT COMPUTATION OF BOUNDS ON COLLISION PROBABILITIES}
Given the shapes and uncertain location of obstacles in an environment, the problem is to estimate the risk of collision for a set of candidate paths. This risk computation uses the probability distributions for encroachment by obstacles on the path swept by a moving robot, during a given time interval. Our starting point is the work by Laugier and collaborators~\cite{r10b,r10} who propose a probabilistic framework of this sort. This section extends the framework and develops an efficient algorithm for computing risk.
\subsection{Probabilistic Obstacle Framework}
Consider an environment consisting of a set of $K$ point obstacles. The obstacles are typically detected by perception modules whose outputs are uncertain (by design), so the position of each obstacle is a random variable given by the density function $p_k({\bf r})$ where ${\bf r}=(x,y)\in\Re^2$.
Then the probability of collision between the robot and the $k^{\rm th}$ point obstacle can be written~\cite{r10} as
\begin{equation}
\label{e:Laugier0}
P_{\rm D}(k) = \int_A p_k({\bf r})
\end{equation}
where $A$ is the swept area of the robot along a path $\pi$ and over a time interval $t \in [0,T]$.
Now the total probability of collision $P_{\rm D}$ is computed~\cite{r10} as
\begin{equation}
\label{e:PD}
P_{\rm D} = 1 - \prod_{k=1}^K (1-P_{\rm D}(k)),
\end{equation}
which must be recomputed for {\textit{each}} swept path $A$ of $N$ candidate paths. However, we propose instead a bound $P_{\rm D}\leq \bar{P}_{\rm D}$ that can be computed as
\begin{equation}
\bar{P}_{\rm D} = \sum_{k=1}^K P_{\rm D}(k) .
\label{e:PDbar}
\end{equation}
Moreover, when $P_{\rm D}\ll 1$, as we expect in practical, relatively safe situations, the bound $\bar{P}_{\rm D}$ is tight.
The bound can be computed efficiently, exploiting the linearity of (\ref{e:PDbar}) cf. (\ref{e:PD}), to calculate
\begin{equation}
\bar{P}_{\rm D} = \int_A G({\bf r}) \mbox{ where } G = \sum_{k=1}^K p_k ({\bf r}) .
\label{e:FDG}
\end{equation}
The computation is then trivially $O(N+K)$ not $O(NK)$, since $G$ in (\ref{e:FDG}) can be precomputed and re-used for all candidate paths $A$. However, with obstacles of finite size (as opposed to point obstacles) achieving $O(N+K)$ complexity is no longer trivial, as we see next.
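As a concrete illustration of this reuse, the following Python sketch (our illustration, not code from any cited system; the grid resolution, obstacle count and Gaussian form of $p_k$ are assumptions) precomputes $G$ once and then evaluates the bound (\ref{e:FDG}) for any swept-area mask:
\begin{verbatim}
# Sketch (not the authors' code): the point-obstacle bound on a shared grid.
import numpy as np

rng = np.random.default_rng(0)
H, W = 200, 200                     # grid cells, e.g. 5 cm per cell
cell_area = 0.05 ** 2               # m^2 per cell

# G(r) = sum_k p_k(r): here one assumed isotropic Gaussian per point obstacle.
ys, xs = np.mgrid[0:H, 0:W] * 0.05  # metre coordinates
G = np.zeros((H, W))
for _ in range(30):                 # K = 30 point obstacles
    cx, cy, s = rng.uniform(1, 9), rng.uniform(1, 9), 0.3
    G += np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * s**2)) / (2*np.pi*s**2)

def risk_bound(I_A):
    # bar{P}_D: the integral of G over the swept area A, given as a 0/1 mask.
    return float(np.sum(I_A * G) * cell_area)

I_A = np.zeros((H, W)); I_A[90:110, :] = 1.0   # toy swept corridor
print(risk_bound(I_A))              # each further path reuses the same G
\end{verbatim}
Each additional candidate path costs only one masked sum over the precomputed grid, which is precisely the source of the $O(N+K)$ behaviour.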
\subsection{Finite obstacles}
For obstacles that are not just points but have finite area, (\ref{e:Laugier0}) has been extended~\cite{r10b} to the case of circular obstacles by ``adding on'' the obstacle radius to the robot shape $A$.
More generally, for an obstacle shape $B_k \subset \Re^2$, situated at the origin, Minkowski sum can be used to expand the robot shape $A$. At a general position ${\bf r}$, the displaced obstacle is
\begin{equation}
B_k({\bf r}) = B_k + {\bf r} = \{ {\bf r} + {\bf r}' : {\bf r}' \in B_k \}.
\end{equation}
So the probability of collision with the obstacle can be rewritten as
\begin{equation}
P_{\rm D}(k) = \int_{A_k} p_k({\bf r}) ,
\label{Laugier_int}
\end{equation}
where $A_k = A \oplus B_k$, the Minkowski sum of the robot shape and the obstacle shape, as in figure~\ref{fig2}. We term this equation (\ref{Laugier_int}) the {\em Laugier Integral}.
In a search-based algorithm, the Minkowski sums for $A_k$ must then be recomputed for each of $N$ candidate swept paths $A$, and for every obstacle $B_k$. We would therefore like to find a way to replace this naive $O(NK)$ computation, by an efficient $O(N+K)$ computation --- as was done above for point obstacles, but now in the finite obstacle case.
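For reference, the naive grid evaluation of (\ref{Laugier_int}) for a single obstacle can be sketched as follows (our illustration; the shapes and the Gaussian $p_k$ are assumptions, and the obstacle footprint is taken symmetric so that reflection conventions are immaterial):
\begin{verbatim}
# Sketch: the Laugier Integral for one finite obstacle via grid dilation.
import numpy as np
from scipy.ndimage import binary_dilation

H, W = 200, 200
I_A = np.zeros((H, W), bool); I_A[90:110, 20:180] = True   # swept path A
I_B = np.zeros((15, 15), bool); I_B[3:12, 1:14] = True     # footprint B_k,
                                                           # symmetric about
                                                           # its centre
# A_k = A (+) B_k: dilate the swept area by the obstacle footprint.
I_Ak = binary_dilation(I_A, structure=I_B)

# Assumed Gaussian location density p_k, normalised on the grid.
ys, xs = np.mgrid[0:H, 0:W].astype(float)
p_k = np.exp(-((xs - 100)**2 + (ys - 120)**2) / (2 * 8.0**2))
p_k /= p_k.sum()

P_Dk = float(p_k[I_Ak].sum())      # the Laugier Integral as a grid sum
print(P_Dk)
\end{verbatim}
This is the per-obstacle, per-path computation that the remainder of this section seeks to avoid repeating.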
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-expanded-swept-area.png}
\caption{\small {\bf Minkowski Sum.} Path $\pi$ with swept area $A$, dilated by Minkowski sum with obstacle $B_k$, to give the expanded swept area $A_k$.}\label{fig2}
\end{figure}
\subsection{Minkowski Sum and the ``Convolution Trick''}
Note that the integral in (\ref{Laugier_int}) can be rewritten as the mathematical convolution of two functions, evaluated at the origin:
\begin{equation}
\label{e:LaugierConvol}
P_{\rm D}(k) = [{\rm I}_{A_k} \ast \tilde{p}_k({\bf r})]({\bf 0}) ,
\end{equation}
where ${\rm I}_S$ denotes the indicator function of the set $S$, $\tilde{f}$ denotes the reflection of a function, i.e., $\tilde{f}({\bf r}) = f(-{\bf r})$, and the notation $[\ldots]({\bf r})$ means that the function defined in square brackets is evaluated at the location ${\bf r}$. (So in this instance (\ref{e:LaugierConvol}), the function in square brackets is a convolution of two functions, which is then evaluated at the origin ${\bf r}= {\bf 0}$.)
There is also a known connection~\cite{ab1} (see also~\cite{r20b, r20}) between the convolution of the indicator functions of two sets, and the Minkowski sum of the two sets, as follows:
\begin{equation}
X \oplus Y = \mbox{supp}({\rm I}_X \ast \tilde{\rm I}_Y) ,
\end{equation}
where $\mbox{supp}(f)$ is the support of the function $f$. In particular,
\begin{equation}
A_k = \mbox{supp}({\rm I}_A \ast \tilde{\rm I}_{B_k}) .
\end{equation}
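This support relation is easy to check numerically; the following sketch (ours) compares the support of the discrete convolution with a morphological dilation, using a symmetric $B$ so that the reflection $\tilde{\rm I}_B$ coincides with ${\rm I}_B$:
\begin{verbatim}
# Sketch: supp(I_A * reflected I_B) versus the Minkowski sum A (+) B on a grid.
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import binary_dilation

A = np.zeros((64, 64)); A[20:40, 10:50] = 1.0   # set A
B = np.zeros((9, 9));   B[2:7, 3:6] = 1.0       # set B, symmetric about centre

conv = fftconvolve(A, B[::-1, ::-1], mode='same')   # I_A * reflected I_B
supp = conv > 1e-9                                  # numerical support
mink = binary_dilation(A.astype(bool), structure=B.astype(bool))
print(np.array_equal(supp, mink))                   # expected: True
\end{verbatim}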
It is not generally the case that the indicator of a Minkowski sum is simply equal to the (normalised) convolution of the two indicator functions (see figure~\ref{fig3}). Nonetheless, over a restricted portion of the domain, corresponding to the case when the obstacle $B_k$ lies {\textit{inside}} the robot path $A$, equality does hold:
\begin{equation}
{\rm I}_{A_k}({\bf r}) = \lambda_k ~ [{\rm I}_A \ast \tilde{\rm I}_{B_k}]({\bf r}) ~\mbox{when}~ B_k({\bf r}) \subset A ,
\label{conv_eq_1}
\end{equation}
where $\lambda_k = \frac{1}{area(B_k)}$. The expression on the right of this equation is everywhere non-negative, being a convolution of (non-negative) indicator functions.
This gives us a formula for ${\rm I}_{A_k}$ when $B_k({\bf r}) \subset A$. Next, we need the corresponding formula for the complementary case when $B_k({\bf r}) \not\subset A$.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-Minkowski-convolution.png}
\caption{\small {\bf Convolution and the Minkowski sum.} Illustration in 1D of the fact that the indicator of a Minkowski sum is not generally equal to the convolution of the two indicators, but they do share the same support.}\label{fig3}
\end{figure}
\subsection{Contour convolution}
For the complementary component of the Minkowski sum, where $B_k({\bf r}) \not\subset A$, ${\rm I}_{A_k}$ can be bounded using a convolution of the bounding contours of the obstacle, $\partial{B_k}$, and of the robot's swept area, $\partial{A}$. This leads to an upper bound on any integral of the form $\int_{A_k} f({\bf r})$, and in particular on the collision probability (\ref{Laugier_int}).
Given the set $A$, we define the {\em delta function ridge} around its boundary $\partial{A}$ as:
\begin{equation}
\partial{A_\sigma}({\bf r}) = |\nabla g_\sigma({\bf r}) \ast {\rm I}_A ({\bf r})|
\end{equation}
where $g_\sigma({\bf r})$ is a normalised, isotropic, $2$D Gaussian function with a (small) width $\sigma$.
Similarly, we define $\partial{B}_{k,\sigma}({\bf r})$ as the delta function ridge around $\partial{B}_{k}$.
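On a grid, such a ridge can be computed with a standard Gaussian-gradient-magnitude filter; the small sketch below (ours) also illustrates that each boundary crossing contributes approximately unit mass to a transversal profile:
\begin{verbatim}
# Sketch: delta-function ridge around a set boundary.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

I_A = np.zeros((128, 128)); I_A[40:90, 30:100] = 1.0   # indicator of A
sigma = 2.0                                            # in grid cells
# |grad g_sigma * I_A|: gradient magnitude of the smoothed indicator.
dA_sigma = gaussian_gradient_magnitude(I_A, sigma=sigma)
# dA_sigma concentrates on a thin band around the boundary of A; a profile
# across the boundary integrates to ~1 per crossing, like a smoothed delta.
row = dA_sigma[65, :]            # horizontal profile crossing two edges
print(row.sum())                 # ~2, i.e. ~1 per boundary crossing
\end{verbatim}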
Now, we claim that the indicator function for the Minkowski sum is bounded in the complementary condition, and in the limit that $\sigma \rightarrow 0$, by the convolution of these two delta function ridge functions, as follows:
\begin{equation}
{\rm I}_{A_k}({\bf r}) \le \frac{1}{2} [\partial{A_\sigma} \ast \tilde{\partial{B}}_{k, \sigma}]({\bf r}) ~\mbox{when}~ B_k({\bf r}) \not\subset A
\label{conv_eq_2}
\end{equation}
This is illustrated in figure~\ref{fig4}, and proved later.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-contour-convolution.png}
\caption{\small {\bf Contour convolution.} Approximating the indicator function of the Minkowski sum of sets $A$ and $B$ via contour convolution.}
\label{fig4}\label{f:contour-convolution}
\end{figure}
\subsection{Combined bound for the FPR algorithm}
As with (\ref{conv_eq_1}), the right hand side of the inequality (\ref{conv_eq_2}) is everywhere non-negative. So, now the mutually complementary expressions (\ref{conv_eq_1}) and (\ref{conv_eq_2}) can be combined into a single bound on the indicator function of the Minkowski sum:
\begin{equation}
{\rm I}_{A_k}({\bf r}) \le \frac{1}{2} \left[\partial{A_\sigma} \ast \tilde{\partial{B}}_{k, \sigma} \right]({\bf r}) + \lambda_k ~\left[{\rm I}_A \ast \tilde{\rm I}_{B_k}\right]({\bf r}) .
\label{conv_eq_3}
\end{equation}
As this bound holds everywhere, we can simply write
\begin{equation}
{\rm I}_{A_k} \le \frac{1}{2} \partial{A_\sigma} \ast \tilde{\partial{B}}_{k, \sigma} + \lambda_k ~{\rm I}_A \ast \tilde{\rm I}_{B_k} .
\label{conv_eq_4}
\end{equation}
Returning to the earlier expression~(\ref{e:LaugierConvol}) for the collision probability, we have
\begin{equation}
P_{\rm D}(k) \le \left[ \left( \frac{1}{2} \partial{A_\sigma} \ast \tilde{\partial{B}}_{k, \sigma} + \lambda_k {\rm I}_A \ast \tilde{\rm I}_{B_k}\right) \ast \tilde{p_k} \right] ({\bf 0}) .
\label{conv_eq_5}
\end{equation}
Now using the associativity of the convolution operator, this can be rewritten as
\begin{equation}
P_{\rm D}(k) \le \frac{1}{2} \left[ \partial{A_\sigma} \ast \tilde{\partial{B}}_{k, \sigma} \ast \tilde{p_k} \right] ({\bf{0}}) + \lambda_k \left[ {\rm I}_A \ast \tilde{\rm I}_{B_k} \ast \tilde{p_k}\right]({\bf 0}) .
\label{conv_eq_6}
\end{equation}
This can equivalently be written as
\begin{equation}
P_{\rm D}(k) \le \int \partial{A_\sigma} \frac{1}{2} \left(\partial{B_{k,\sigma}} \ast {p_k}\right) + \int {\rm I}_A \left( \lambda_k {I}_{B_k} \ast {p_k} \right) .
\label{bound_eq_1}
\end{equation}
Finally, summing up over obstacles as in (\ref{e:PDbar}), the bound $F_{\rm D}$ on the total collision risk is given by:
\begin{equation}
\bar{P}_{\rm D} \leq F_{\rm D} = \int \partial{A_\sigma}({\bf r}) \partial G_\sigma({\bf r}) + \int {\rm I}_A({\bf r}) G({\bf r})
\label{bound_eq_2}
\end{equation}
where $\partial G_\sigma$ and $G$ are:
\begin{align}
\label{Gsigma_eq} \partial G_\sigma &= \frac{1}{2} \sum_k \partial{B_{k,\sigma}} \ast p_k \\
\label{G_eq} G &= \sum_k \lambda_k {\rm I}_{B_k} \ast p_k .
\end{align}
Note that $G$ and $\partial G_\sigma$ are independent of $A$ and {\textit{do not}} need to be {\em recomputed} every time $A$ changes. So the repeated computation of the bound (\ref{bound_eq_2}), for $N$ different swept paths $A$, would indeed have complexity $O(N+K)$.
This combined ``convolution trick'' gives the FPR method for calculating the bound on collision risk, which is summarised in Algorithm~1.
The two equations (\ref{Gsigma_eq}) and (\ref{G_eq}) combine the full set of obstacles (together with their location distributions) onto 2 grids or planes. This non-trivial combination of $K$ obstacle shapes and distributions onto just 2 grids is what gives the FPR algorithm its increased efficiency. However, it is important to note that this is {\em not simply a union of obstacles}. It is a complex combination of shapes, outlines and location distributions, which is by no means obvious, but is derived and justified in this paper by means of the convolution trick.
\subsection{Proof of the Contour Convolution Formula}
We show that the indicator function for the Minkowski sum is indeed bounded, in the limit $\sigma \rightarrow 0$, as in inequality (\ref{conv_eq_2}).
For values of ${\bf r}$ such that $B_k({\bf r}) \cap A = \emptyset$, both sides of the inequality in (\ref{conv_eq_2}) are $0$, in the limit. Elsewhere $B_k({\bf r}) \not\subset A$, so the contours $\partial A$ and $\partial B_k$ must intersect at least twice. In that case, the convolution
\begin{equation}
[\partial{A}_\sigma \ast \tilde{\partial{B}}_{k,\sigma}]({\bf r}) =
\int_{{\bf r}'} \partial{A_\sigma}({\bf r}') \partial{B}_{k,\sigma} ({\bf r}' -{\bf r})
\label{proof_eq_2}
\end{equation}
integrates across two or more contour intersections. The integral at each intersection of two smooth contours (crossing at an angle $\theta$) has the general form
\begin{equation}
J = \int\int g_\sigma(x) g_\sigma(x \cos\theta + y \sin\theta) \,{\rm d}x \,{\rm d}y .
\label{proof_eq_3}
\end{equation}
Now as $g_\sigma$ is a normalised Gaussian, $\int_x g_\sigma(x)\,{\rm d}x = 1$, and applying this above in $y$ (with a straightforward substitution), and in $x$, yields
\begin{equation}
J = \frac{1}{\sin\theta} \ge 1 ,
\label{proof_eq_4}
\end{equation}
so the integral (\ref{proof_eq_2}) accumulates a value of at least 1 for each contour intersection. Therefore, with 2 or more intersections, the right hand side of inequality~(\ref{conv_eq_2}) is at least $\frac{1}{2}\times 2=1$, compared with the value of 1 for the indicator function ${\rm I}_{A_k}({\bf r})$ on the left hand side, so the inequality does indeed hold.
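The crossing integral (\ref{proof_eq_3}) is easily verified numerically, e.g.\ by the following check (ours):
\begin{verbatim}
# Sketch: numerical check that the crossing integral gives 1/sin(theta).
import numpy as np

sigma, theta = 1.0, np.pi / 3
x = np.linspace(-8, 8, 801); dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
g = lambda u: np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
J = np.sum(g(X) * g(X * np.cos(theta) + Y * np.sin(theta))) * dx * dx
print(J, 1 / np.sin(theta))      # both ~1.1547
\end{verbatim}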
\RestyleAlgo{ruled}
\begin{algorithm}
\KwData{$N$~instances~of~path~$A$, $B_{1:K}$, $p_{1:K}({\bf r})$, $\sigma$}
\KwResult{$F_{\rm D}$}\;
\textit{// Compute $\partial G_\sigma$}\;
\For{k in 1 to K} {
$\partial{B_{k, \sigma}}({\bf r}) = |\nabla g_\sigma({\bf r}) \ast {\rm I}_{B_k} ({\bf r})|$
}
\;
$\partial G_\sigma({\bf r}) = \frac{1}{2}\sum_{k=1}^{K}{\partial{B_{k, \sigma}}({\bf r}) \ast p_{k}({\bf r})}$\;
\;
\textit{// Compute $G$}\;
$G({\bf r}) = \sum_{k=1}^{K}{\frac{1}{\text{area}(B_k)}{\rm I}_{B_k}({\bf r}) \ast p_{k}({\bf r})}$\;
\;
\For{{\rm each}~A} {
\textit{// Compute $\partial A_\sigma$}\;
$\partial{A_\sigma}({\bf r}) = |\nabla g_\sigma({\bf r}) \ast {\rm I}_A ({\bf r})|$\;
\;
\textit{// Integrate over a box around A with margins $4\sigma$}\;
$F_{\rm D} = \int_{\mathbb{R}^2}{\partial{A_\sigma}({\bf r})\partial G_\sigma({\bf r}) + {\rm I}_A({\bf r})G({\bf r})}$
}
\;
\caption{\small {\bf FPR algorithm} for the bound on collision risk given $N$ paths, and $K$ obstacles.}
\end{algorithm}
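A schematic Python rendering of Algorithm~1 is given below (our sketch, not the authors' implementation). It works entirely in grid-cell units, assumes each $p_k$ is a probability mass function summing to 1 on the grid, assumes symmetric obstacle footprints centred on the grid centre (the implied origin of `same'-mode convolution), and its normalisation bookkeeping is illustrative rather than production-grade:
\begin{verbatim}
# Schematic FPR rendering (ours); see assumptions in the text above.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from scipy.signal import fftconvolve

SIGMA = 2.0  # Gaussian scale in grid cells, as in the implementation details

def ridge(mask, sigma=SIGMA):
    # Delta-function ridge |grad g_sigma * I| around the boundary of a mask.
    return gaussian_gradient_magnitude(mask.astype(float), sigma=sigma)

def precompute(obstacle_masks, location_pmfs):
    # The two shared fields dG_sigma and G: computed once, reused by all paths.
    dG = np.zeros_like(location_pmfs[0])
    G = np.zeros_like(location_pmfs[0])
    for I_B, p in zip(obstacle_masks, location_pmfs):
        dG += 0.5 * fftconvolve(ridge(I_B), p, mode='same')
        G += fftconvolve(I_B.astype(float), p, mode='same') / I_B.sum()
    return dG, G

def fpr_bound(I_A, dG, G):
    # The bound F_D: two grid inner products per path, independent of K.
    return float(np.sum(ridge(I_A) * dG + I_A * G))

# Toy demo: one square footprint, two obstacle instances with Gaussian
# position uncertainty, and a straight vertical corridor as swept area A.
H = W = 160
ys, xs = np.mgrid[0:H, 0:W].astype(float)
def gauss_pmf(cx, cy, s):
    p = np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * s**2))
    return p / p.sum()
I_B = np.zeros((H, W)); I_B[70:90, 70:90] = 1.0  # footprint about the centre
dG, G = precompute([I_B, I_B], [gauss_pmf(40, 40, 6), gauss_pmf(120, 110, 6)])
I_A = np.zeros((H, W)); I_A[:, 75:85] = 1.0
print(fpr_bound(I_A, dG, G))
\end{verbatim}
As in Algorithm~1, only the final two lines are repeated per candidate path.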
\section{Results}
In this section we demonstrate that randomly generated trajectories of an SE(2) robot can be efficiently labelled by the FPR algorithm, according to the bound $F_{\rm D}$ on the risk of collision for each path. The FPR algorithm is agnostic as to the motion planner used to synthesise the candidate trajectories and is generally compatible with state of the art methods for motion planning \cite{katrakazas2015real}. We use an off-the-shelf Closed Loop variant CL-RRT~\cite{kuwata2009real} of the RRT algorithm~\cite{lavalle1998rapidly}
to generate candidate paths in the environment, drawn from the kinodynamic model for a particular robot. This has the advantage of generating typically smooth paths, that are plausible paths for that robot. Then the risk bound is calculated for each generated path.
First our results demonstrate the FPR algorithm for a simulated environment, then for a real environment taken from an aerial view, and finally a substantial dataset of 7481 birds-eye views each with several goals and multiple trajectories for each goal --- 240,498 trajectories in all. In each case, higher risk paths take tighter lines around obstacles, as would be expected. We show: i) how close the bound on risk is to the true risk; and ii) that the use in FPR of the convolution trick, which improves computational complexity from $O(NK)$ to $O(N+K)$ as explained earlier, leads to substantial reductions in practical computation times.
\subsection{Simulated Environments}
We first use a $2$D simulation in which a rectangular SE(2) robot of size $2{\rm m} \times 4{\rm m}$ navigates along continuous paths, defined as the progression of $(x,y,\theta)$ pose over time (though our visualisations only depict the centroid). A simulation scenario is defined as a collection of obstacles within the environment, each specified as a shape (a subset of $\Re^2$), and pose, together with positional uncertainty, as well as start and goal poses for the ego vehicle.
The simulated scenario shown here in figure~\ref{f:sim-results} resembles sections of a car park; 400 paths are generated at random by CL-RRT. The uncertainty over each obstacle's position is modelled as a two-dimensional Gaussian distribution with standard deviation $0.3$m, which is 15\% of each obstacle's width.
In figure~\ref{f:sim-results}, paths with lower $F_{\rm D}$ are seen to maintain a greater clearance from the obstacles, as expected.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-sim-results.png}
\caption{\small {\bf Visualisation of paths.} The figure shows paths for a simulated environment. A set of candidate paths are generated with fixed start and end poses. The dark objects are obstacles whose position is known with an uncertainty visualised here as a shaded halo. Risk bounds $F_{\rm D}$ on each path are represented by different colours, on the logarithmic scale shown. Safer paths maintain greater clearance around obstacles as expected. Some risky paths (bottom) involve squeezing through a narrow gap. }\label{f:sim-results}
\end{figure}
In figure~\ref{f:trajectory-risk} an aerial view of a car park is shown, with a set of candidate paths generated by CL-RRT, between fixed start and end points. Obstacle vehicle shapes are represented as bounding rectangles. Error in estimated position of the obstacle-cars is Gaussian with standard deviation of 0.3m.
The candidate paths are coloured according to the computed value of the bound on collision risk. This turns out to include safer paths with collision risk down to $10^{-5}$ and below, and riskier paths, above $10^{-2}$ risk of collision, that involve squeezing through a narrow gap.
Finally, for the sole purpose of researching the behaviour of the FPR algorithm, a larger dataset derived from the birds-eye view KITTI collection~\cite{KITTI13} is used. Each scene contains a number of vehicles, obstacles $B_k$ that are represented as rectangles, with positions labelled and assumed here to have Gaussian error with standard deviation of 0.7m. One example view, from the total of 7481 birdseye views, is illustrated in figure~\ref{f:KittiTrajectories}.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.48\textwidth]{figures/fig-KittiTrajectories.png}
\caption{\small {\bf KITTI birdseye data.} Example view from KITTI dataset of 7,481 birdseye views of traffic scenes. Obstacles shown in blue, each have simulated Gaussian uncertainty with standard deviation 0.7m. A goal is chosen automatically. Possible paths for an SE(2) robot (red) are shown. FPR bound on collision risk is displayed on the colour scale shown.
}\label{f:KittiTrajectories}
\end{figure}
In each scene, an SE(2) robot is given an initial position, and several goals are chosen, automatically. Then up to 10 paths per goal are generated --- a total of 240,498 paths. Shapes of obstacles are assumed known, and in practice this could be achieved by recognition of known objects such as vehicles.
\subsection{Performance Evaluation}
Simulation results given here illustrate the computational benefits of the FPR approach for evaluating bounds on risk. In figure~\ref{f:timings}, we present empirical data regarding the computational efficiency of our method compared to the exact computation of the integral in (\ref{Laugier_int}). Computing the FPR bound is, on average, significantly faster than exact computation. Even for the first path, FPR is more than 3 times as efficient on average, thanks to the use of efficiently implemented convolution in place of the Minkowski sum. For subsequent paths FPR is on average two orders of magnitude more efficient, at 10ms per path. This is consistent with $O(N+K)$ complexity cf.\ $O(NK)$ complexity (for $N$ evaluated paths), as expected from theory.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-KittiTimings.png}
\caption{\small {\bf Performance of FPR bound computation} over the KITTI birdseye data, compared with exact (\ref{Laugier_int}) computation of risk.
}\label{f:timings}
\end{figure}
\subsection{How tight is the bound on collision risk?}
It is reasonable to ask how close the FPR bound is, in practice, to the exact risk $P_{\rm D}$. The ratio of the bound to the exact risk is evaluated over all the 240,498 paths derived from the KITTI data. The ratio is about 2.7 on average, with a distribution largely between 1 and 10 (93\% of examples), as in figure~\ref{f:KittiSlack}.
\begin{figure}[htbp]
\centering
\vspace{3mm}\includegraphics[width=0.45\textwidth]{figures/fig-KittiSlackFactor.png}
\caption{\small {\bf Tightness of bounds -- KITTI data.} Bounds $F_{\rm D}$ on collision risk are computed for all 240,498 trajectories and compared with exact risk. The risk bounds have an average ratio of 2.72 times the exact risk, and that ratio is distributed as shown in the histogram.}
\label{f:KittiSlack}
\end{figure}
The effect of this ratio is that the bound will lead to conservative decisions. For example, if the ratio is $3$, then to achieve a desired collision risk of, say, $10^{-3}$ or better, setting the FPR bound to $10^{-3}$ will actually achieve a lower true risk of about $\frac{1}{3}\times 10^{-3}$. As a result, selected paths may leave more clearance from obstacles than strictly necessary. Occasionally, it is possible that no path in a certain set may have an FPR bound within the acceptable level of risk, even when a path with an acceptable level of true risk does actually exist.
\subsection{Implementation details}
We use an augmented version of the CL-RRT planner, with probabilistic sampling, similar to other approaches for heuristically biasing RRT growth~\cite{urmson2003approaches}, and choosing tree nodes for expansion according to their scores. It discretises steering and accelerator input ranges when expanding the random tree, to generate realisable trajectories, and in order to restrict abrupt steering or velocity changes. Nodes in the RRT are scored based on their proximity to the goal, and similarity to its orientation and velocity. We treat each
tethered obstacle as deterministic, just for the purposes of CL-RRT, taking the shape $B_k$ at the mean location over $p_k$.
For all simulations, space is discretized on a grid with a resolution of 5 cm/px. All convolutions
in our implementation use a Gaussian or gradient of Gaussian kernel and so we exploit the separability property in order to perform convolutions efficiently. Additionally, we approximate the final integration step in Alg. 1 with a computationally efficient Riemann sum over the discretised
grid. (If it were desired to use non-Gaussian $p_k({\bf r})$ then the convolutions with $p_k({\bf r})$ could be done by FFT or morphologically~\cite{Samaniego19}.)
The constant for Gaussian convolution, $\sigma=2$ grid squares, is just big enough for good numerical behaviour.
All of the numerical computations are implemented using the GPU-enabled Python module for linear algebra CuPy~\cite{cupy_learningsys2017}. The goal and trajectory data, used with KITTI data in simulations, will be made available on the web.
\section{CONCLUSIONS}
Our FPR algorithm bounds the risk of collision for candidate paths in a given environment.
It builds on a probabilistic framework for calculating collision risk, using the convolution trick to render these computations in linear time in $N$ and $K$. Amongst trajectories deemed safe enough, there would then be freedom to optimise for other criteria such as rider comfort and travel time.
Other sources of uncertainty would of course also need to be taken into account in an end-to-end implementation, such as missed detections of obstacles. The current state of the art~\cite{r23} suggests risk of the order of $10^{-3}$ for missed detections. It is to be hoped however that this improves with future advances in temporal and cross-modal fusion.
The effect of computing risk in $xy$-space, as opposed to the full $xyt$-space, is further to approximate (in fact bound) the computed risk. The bounding effect preserves safety, but in some circumstances is overly conservative and may overestimate risk, which could lead to `frozen robots'~\cite{r9}.
Future work looks at extending FPR from tethered obstacles to fully dynamic obstacles whose position evolves stochastically. Then $p_k({\bf r})$ in (\ref{Laugier_int}) would be extended to a spatio-temporal (stochastic) process as in~\cite{ab2,ab3,ab5}. Risk computation would be in the full $xyt$-space. The question is, to what extent could full Monte-Carlo computation of risk be avoided, and linear time-complexity in $N$ and $K$ be retained?
\addtolength{\textheight}{-13cm}
\section{Introduction}
\label{sec:introduction}
Small-world networks~\cite{Watts+Strogatz} are now an important class of complex network and have been used for many
purposes ranging from social network models~\cite{Klemm-SocialNetworks} to quantum gravity~\cite{WanOnSmallWorldLQG}.
Complex networks have many unusual properties~\cite{NewmanComplexNetworks} but one of the most valuable aspects of the
small-world model is that it allows a parameterized interpolation between those properties exhibited by a random
graph~\cite{Bollobas} and those exhibited by regular lattices. Understanding the processes that lead to small-world networks is
non-trivial~\cite{Mathias+Gopal,NewmanSmallWorldModels} and one important approach involves comparing the critical phase
transitional behaviour of small-world systems with well-studied phase transition models. The Ising model provides a
valuable test model for studying phase transitions on various networks. Study of the 1-D Ising
model~\cite{LopesExactIsing} has led to some insights and relatively recent work by Herrero~\cite{Herrero,Herrero2} and
others~\cite{HastingsSmallWorld} raises some interesting questions about the behaviour of the Ising model on small-world
rewired lattices.
The Ising model can be adapted in a number of ways such as adding long-range weak
interactions~\cite{Zhang+NovotnyOnLongRange}, however we are interested in preserving the underpinning model structure
and consider solely the effect on spatial distortions and in particular shortcuts. It is still unclear whether real
physical materials exhibit small-world magnetic properties~\cite{NovotnyEtAlOnNanomaterials} but one can speculate about
small-world ``shortcut'' effects that might arise when effectively one-dimensional structures such as protein chains are
folded. Two-dimensional crumpled sheet systems embedded in a three-dimensional space could also exhibit real-space
shortcuts. We might further speculate that higher dimensional systems such as quantum gravity models, that are not
restricted to three Euclidean dimensions, could also exhibit physically relevant small-world transitional behaviour.
The Ising model is usually formulated on a regular d-dimensional hyper-cubic lattice where each magnetic spin variable
is connected to $2 \times d$ nearest neighbouring sites. The model Hamiltonian is usually written in the form:
\begin{equation}
H = - \sum_{i \neq j} J_{ij} S_i S_j
\end{equation}
where $S_i = \pm1$ for sites $i=1,2,\ldots,N$, and $J_{ij} = J$, with $|J| = 1 / k_BT$, is the ferromagnetic coupling between neighbouring
sites $i$,$j$ on the network.
The model has a critical temperature of $T_c = \frac{1}{J_c k_B}= 0$ in one dimension, but displays finite transition
temperatures due to the spontaneous magnetization effects in higher dimensions. Specifically in two- and
three-dimensions the Ising-type phase transitions have been very well studied and the critical temperatures are known
exactly in two dimensions~\cite{Onsager} and approximately from computer simulations such as~\cite{MCRG} in three
dimensions. In systems of four dimensions and higher a finite temperature phase transition still occurs, but the nature
of the transition is well described by mean-field theory. Values for the transition temperature in these higher
dimensional systems are also known.
The regular lattice Ising model can be perturbed in a number of controlled ways: by removing bonds or ``damaging'' the
system; by rewiring the bonds; or by adding extra bonds. Herrero and others have employed the Watts-Strogatz rewiring
model which preserves the number of total bonds, but does not preserve the number of bonds connecting a given site.
Svenson and Johnson explored damage spreading in small-world Ising systems~\cite{Svenson+Johnson}, again using the
Watts-Strogatz approach.
Generally, the Ising model phase transitional temperature is systematically shifted by these damaged, rewired or added
links. In the case of the small-world rewiring, individual sites can become connected to sites very far away
physically. These long-range bonds encourage and help the long-range correlations that manifest the spontaneous
magnetization and hence give rise to the peculiarly Ising-like critical phenomena. The Ising model critical temperature
rises as long-range order is encouraged by the rewiring. Physically, it is easier for the system to maintain long-range
order against the thermal disordering effects of higher temperatures than it would be without the rewired long-range
bonds. This can be measured as a monotonic dependence of the critical temperature $T_c(p)$ on the small-wiring rewiring
probability $p$.
It has been speculated that under the Watts-Strogatz small-world rewiring this shift in $T_c$ is partially due to the
effective change in the dimensionality of the system as individual sites can have more or less than $ z = 2 \times d$
bond-connected neighbours. In this paper we explore an alternative small-world rewiring model that adjusts pairs of
bonds and is able to preserve exactly the coordination number $z$ for all spin sites.
This means that as far as individual spin-sites are concerned their environment is locally identical to the conventional
$z=2 \times d$ lattice-based Ising model. We explore the behaviour of the Ising model under such a rewiring and compare
it with the Watts-Strogatz rewiring studied by Herrero and others. We are able to show that the small-world effect is
not dependent on an effective dimension change but solely on the long-range nature of the interactions themselves.
In our model, spins still have precisely $z$ ``nearest neighbour'' sites to which they are directly connected and with
which they are topologically directly coupled via the Ising model coupling parameter $J$. However in terms of a
physical space interpretation, the rewired links mean that nearest neighbouring sites can now be at arbitrary physical
distances apart. This can be interpreted as filling physical space with ``worm-holes'' or applying an elaborate
manifold folding of physical space.
Like that of~\cite{Herrero} our study is based on Monte Carlo simulations of the Ising system on rewired lattices. We have
been able to study larger systems than prior work for both two- and three-dimensions and have also made some preliminary
studies of four- and five dimensional systems. Of particular importance is the need to study systems that are large
enough to support the so-called small-$p$ regime. It transpires that the power law dependence of important properties
of the model on the rewiring probability parameter $p$ requires an analysis across logarithmic system sizes and length
scales. Consequently we have needed to simulate large Ising systems of up to $1024^2$ and $384^3$.
Equilibration requirements for large system sizes over a large parameter space set of $p$ and $T$ values needed a faster
Monte Carlo simulation algorithm than the local Metropolis method used by prior small-world Ising work. We therefore
have explored non-local cluster-based updating methods such as that of Wolff~\cite{Wolff} for our simulations. We
certainly expected the Wolff cluster method to perform entirely correctly on a $z$-preserving rewiring model, and
indeed it does. We have also, however, verified that it produces results on the non-$z$-preserving Watts-Strogatz rewired
system that are consistent with those using the localised Metropolis updating method.
The main question of interest is how the critical phenomenological behaviour~\cite{CriticalPhenomena} of the model
changes due to the small-world links. Approximate theoretical models for the $p$-concentration dependence of $T_c$, such
as those given in~\cite{Meilikhov+Farzetdinova}, depend on arguments based on domain wall formation energies in the
Ising model. In the work reported here, we show that the critical temperature is monotonically shifted upwards from its
regular lattice value in $d = 2,3,4,5$ and that it is very-well characterised by a power law in $p$. We also consider
the critical exponents $\beta$ and $\nu$ and our data shows a gradual but monotonic change with increasing $p$ from an
Ising-like phase transition to a mean-field-like transition in both two- and three-dimensions.
Although a great deal of practical lore exists in the literature concerning the operational methods of obtaining
critical values for the regular Ising model, it is more difficult to obtain corresponding values for small-world systems
due to less well studied $p$-regimes and the need for appropriate averaging of rewired lattice configurations. We
therefore describe our computational method in detail in section~\ref{sec:method}. Our main results concern the nature
of the rewiring models and the interpretation of the parameter $p$ and in section~\ref{sec:rewiring} we describe our
rewiring algorithm. In section~\ref{sec:results} we present some results of the Monte Carlo simulations and for some
static network properties such as the maximum path length in a rewired lattice. Finally we suggest some conclusions and
areas for further simulation in section~\ref{sec:conclusions}.
\section{Computational Method}
\label{sec:method}
Herrero noted the uncertainty as to whether his simulated system sizes were sufficiently large to truly be able to
explore the ``small-$p$'' regime. We have experimented with a variety of system sizes in both two- and three-dimensions. We
believe a useful rule of thumb is that at least 100 of the system's bonds need to be rewired to have a reliable and
measurable effect. This places a practical lower limit on the p-values that can be explored, given the practical upper
limits on system sizes that can be reliably simulated.
In view of the need to simulate large systems sizes, for long measurement periods and for many different $p$-values near
criticality, we felt it was important to investigate fast algorithms such as cluster updating methods. The Wolff
cluster updating algorithm~\cite{Wolff} carries the Ising spin configuration to a new point in phase space by
constructing a cluster of {\em like} spins with an appropriate probability, based on the coupling (and hence the simulated
temperature). This algorithm is particularly effective at temperatures near the critical point as it can flip very
large spin clusters and effectively overcomes the critical slowing down of a local update method such as Metropolis. In
this work we are interested in equilibrium properties, not dynamical ones and therefore the time evolution properties of
the update algorithm need not resemble any physical process. We verified our implementation of the Wolff algorithm
against implementations of both Metropolis and Glauber~\cite{Glauber} Monte Carlo updates. For the work reported here,
we note no discernible difference in quality of equilibrium properties estimated.
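For reference, a single Wolff update on an arbitrary neighbour-list network can be sketched as follows (our illustration, with $k_B = 1$; the neighbour-list representation matches the storage scheme described below, and the same routine applies unchanged to a rewired network):
\begin{verbatim}
# Sketch: one Wolff cluster update on a neighbour-list network.
import numpy as np

def wolff_step(spins, neighbours, J, T, rng):
    # Grow and flip one cluster; neighbours[i] lists the sites bonded to i.
    p_add = 1.0 - np.exp(-2.0 * J / T)      # bond-activation probability
    seed = rng.integers(len(spins))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        for j in neighbours[i]:
            if spins[j] == s0 and j not in cluster and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                        # flip the whole cluster
        spins[i] = -s0
    return len(cluster)

# Toy use: a 1-D ring (z = 2); swap in a rewired neighbour list unchanged.
N = 1000
rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=N)
neighbours = [((i - 1) % N, (i + 1) % N) for i in range(N)]
print(wolff_step(spins, neighbours, J=1.0, T=1.5, rng=rng))
\end{verbatim}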
\subsection*{Computational Resourcing Issues}
Although the work was carried out on a mix of 32-bit and 64-bit microprocessors, we found that it is still only
tractable to simulate systems that comfortably fit within approximately 2 GBytes of memory. This is the addressable limit
attainable using a 32-bit signed integer, and although a bit-based model such as the Ising model can be implemented
using a compact storage scheme, for implementations of the Wolff cluster algorithm we require a full integer for each of
the $N$ spin sites. Furthermore, when investigating non-regular networks we need to explicitly store the neighbour
addresses for each site. For our code, we optimise for speed using a storage scheme that records the neighbour arcs for
every site -- thus storing a bond's source and destination site twice.
In the case of a rewiring model that allows different sites to have different coordination numbers, this scheme gives
rise to a storage budget of one spin-bit; one mark-bit; one coordination number (integer) and on average $2\times d$
site addresses per spin site. The practical upshot of this is that we can, in principle, simulate Ising systems of up to
$384^3$ sites. In practice however, even with the Wolff algorithm to assist with equilibration and decorrelation near
the critical temperature, we found that the processors available to us for this work would take over 1 week of wall-clock
time to attain a useful measurement for one $T-p$ pair. Consequently we found that size limits of around $N = 256^3$
(around 16 million sites) were more practical. We anticipate that larger sizes (approaching $512^3$) will be practical
with 64-bit address spaces and the next generation of processor speeds.
In applying the Binder cumulant method we found it useful to run triples of systems size, such as N=$224^3$, $240^3$,
$256^3$ in three dimensions, or $992^2$, $1024^2$ and $1056^2$ in two dimensions. Although only two cumulant curves are
needed to obtain an intersection, three combine to also give a measure of uncertainty.
\subsection*{Monte Carlo Algorithmic Issues}
The Wolff cluster algorithm is not efficient when performing an initial quench from a hot or random spin configuration,
since the clusters it builds tend to be very small. Given the unknown dependencies on the parameter $p$ we erred on the
side of caution and generated a completely independent start configuration for each experimental $T-p$ pair
investigated. Typically we quenched a hot starting spin configuration to finite $T$ using on average 1000 Metropolis
hits per site. Although the debate in the literature over the effect of sweeping artifacts seems now resolved, we
avoided any bias accidentally introduced by sweeps by performing the Metropolis equilibration site hits in a random
order. So in effect we apply $N \times 1000$ randomly chosen local Metropolis hits following the quench. Since we are
typically interested in behaviour near $T_{c}^{p}$, we typically used the Wolff cluster method predominantly during
measurement, but again to err on the side of caution, we used a hybrid update step consisting of 100 Wolff cluster hits,
followed by $N$ local Metropolis hits for each measurement.
In determining the Binder cumulant and other statistics, we typically made 12 million measurement steps, dividing them
into blocks of 1 million, and discarding the first two blocks as additional equilibration. For the smaller system
sizes, we found that 100-150 thousand measurement steps were sufficient. The data blocking approach gave us a measure of
uncertainty in each cumulant value.
For the large systems sizes used it appears that good self-averaging properties hold and the results are largely
independent of the random rewiring pattern chosen for a particular run at a particular $p$ value. For small-$p$ values
and also for smaller system sizes we found a perceptible experimental spread of results and it was necessary to repeat
runs at the same $T-p$ value. In the work reported here we typically combined measured cumulant values from 16 or 25
completely independent runs. Study of the statistics indicated a satisfactory central-limiting behaviour and a ready
calculation of uncertainties in the cumulants so obtained.
In most of the work reported here, we used the lagged-Fibonacci random number generator of Marsaglia~\cite{Marsaglia}.
For some simulations on 64-bit platforms we used an implementation of the Mersenne Twister generator
algorithm~\cite{MersenneTwistor}. We are not aware of any concerns regarding periodicity or correlations due to either
of these generators.
\section{Rewiring Algorithms}
\label{sec:rewiring}
The value of a small-world rewiring procedure is that the amount of space folding can be specified statistically by a
single parameter $p$, with the extreme value $p=0$ corresponding to a normal periodic hyper-cubic geometry lattice. The
case $p=1$ corresponds to a random network of spins.
Herrero and other authors report using a network rewiring algorithm that maintains an average
coordination number $z$ for each spin site, but that does allow individual spins to have a lesser or
greater $z$ value. This small-world rewiring algorithm is essentially that described by Watts and
Strogatz and works by randomly selecting $p \times N \times d$ of the $N \times d$ original regular-lattice bonds
and reassigning them to link random spin sites.
This algorithm can be implemented as:
\begin{enumerate}
\item choose a spin site {\bf A} at random
\item choose one of its $2 \times d$ existing neighbours, {\bf B}, at random
\item choose another (distinct) spin site {\bf C}
\item re-wire A to C, disconnecting A from B
\end{enumerate}
In this rewiring model $p$ is the probability that any of the $N \times d$ regular bonds has been rewired. It can be
verified experimentally that a requested $p$ value from the rewiring algorithm has been implemented by performing a
bond-by-bond comparison with each bond's regular lattice endpoints.
One difficulty with this model is that the system is also influenced by percolation transition effects. Even at
small-$p$ values individual spin sites have a finite probability of becoming completely disconnected from the rest of
the system. In practice the system will have a finite number of monomer and dimer spin sites, separate from the one
giant component. At large $p$ values (greater than $0.1$ for example) this is a serious effect and the spin system is
typically fragmented into more than one major component. Although this can be compensated for by only making
measurements on the largest component, it is an operational annoyance as well as seriously worsening finite-size effects
that impinge on the calculation of the critical temperature and exponents. Above the bond percolation threshold of $p =
0.5$ the system is nearly always very fragmented and consists of a number of similar-sized components.
A major aim of our work was to investigate the effect of a $z$-preserving network rewiring algorithm both in terms of
how it compared to the Watts-Strogatz edge-rewiring algorithm and also how to interpret its probabilistic rewiring
parameter $p$. To preserve coordination number $z$ exactly and for all spin sites it is necessary to re-wire the
regular lattice in terms of pairs of edges.
Our network is constructed from the starting lattice using a similar rewiring procedure. We modify the bonds of each site so
that it still has degree $z = 2 \times d$ bonds per site in dimension $d$, but that they may link, with probability $p$,
to a randomly chosen site elsewhere in the original lattice. We ensure each site links to other sites at most once, and
there are no self-bonds in our system. This is feasible below the percolation threshold. Specifically, our procedure
is:
\begin{enumerate}
\item choose a spin site {\bf A} at random
\item choose one of its $2 \times d$ existing neighbours {\bf B} at random
\item choose another (distinct) spin site {\bf C}
\item choose one of its neighbours {\bf D} at random
\item ensure {\bf A}, {\bf B}, {\bf C} and {\bf D} are all distinct to avoid self bonds and multiple bonds
\item re-wire {\bf A} to {\bf C} and {\bf B} to {\bf D}, thus exactly preserving $z$ for all of {\bf A}, {\bf B}, {\bf C}
and {\bf D}
\end{enumerate}
Repeating this procedure $p \times N \times d$ times achieves a rewiring of the regular lattice to have an effective
rewiring probability $p^{*}$ which can be subsequently measured by comparing spin-site neighbour-lists to that of the
regular lattice.
This algorithm has the additional desirable property that it is guaranteed not to fragment the lattice into multiple
components since each site still connects to $z$ distinct other sites.
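A direct rendering of this pair-rewiring procedure in Python is sketched below (ours; the per-site neighbour lists mirror the storage scheme of section~\ref{sec:method}, and the toy ring substrate is illustrative):
\begin{verbatim}
# Sketch: the z-preserving pair-rewiring step on per-site neighbour lists.
import numpy as np

def rewire_pairs(neighbours, n_steps, rng):
    # Apply n_steps = p*N*d pair rewirings; every site keeps its degree z.
    N = len(neighbours)
    done = 0
    while done < n_steps:
        A = rng.integers(N)
        B = neighbours[A][rng.integers(len(neighbours[A]))]
        C = rng.integers(N)
        D = neighbours[C][rng.integers(len(neighbours[C]))]
        # require four distinct sites and no multiple bonds
        if len({A, B, C, D}) < 4 or C in neighbours[A] or D in neighbours[B]:
            continue
        neighbours[A].remove(B); neighbours[B].remove(A)
        neighbours[C].remove(D); neighbours[D].remove(C)
        neighbours[A].append(C); neighbours[C].append(A)
        neighbours[B].append(D); neighbours[D].append(B)
        done += 1
    return neighbours

# Toy use on a ring; p* is then measured against the original bond lists.
N = 20
nbrs = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
rewire_pairs(nbrs, n_steps=3, rng=np.random.default_rng(2))
print([len(v) for v in nbrs])    # all degrees still z = 2
\end{verbatim}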
\begin{figure}[htbp]
\begin{tabular}{cp{0.4cm}c}
\includegraphics[angle=0,width=4cm]{2D-wire3} & &
\includegraphics[angle=0,width=4cm]{2D-wire1} \\
a) & & b) \\
\end{tabular}
\caption{\label{fig:rewiring} Different rewiring models: a) Watts-Strogatz non-$z$
preserving model, b) Our $z$-preserving rewiring model. }
\end{figure}
Figure~\ref{fig:rewiring} shows an example of the two rewiring models for a regular square lattice substrate.
In practice, the achieved or effective rewiring probability $p^{*} < p$ by an amount that is of order $p^2$ since our
algorithm allows a site to be ``re-rewired''. There is therefore an exclusion effect that gives rise to a correction
term in $p$. Since, $p^{*}$ is readily measured, this is only an operational inconvenience and a chosen effective
$p^{*}$ can be set with an appropriate choice of $p$ given the solvable quadratic relationship between them and
experimentally fitted coefficients.
Over and above this exclusion correction however, it is important to note that {\em pairs} of bonds have been rewired.
This pairing leads to a factor of two in the effective value of $p$ for our model when we compare it with the
Watts-Strogatz rewiring model. This is explained when we consider the actual effect of a pair of rewired links
connecting sites {\bf A}, {\bf B}, {\bf C}, {\bf D} as shown in figure~\ref{fig:rewiring}. If {\bf A} and {\bf B} were
originally neighbours, and so were {\bf C} and {\bf D}, and now {\bf A}, {\bf B} are connected via a long-range pair of
links to {\bf C}, {\bf D} then the net effect is really only as if there were one long range link. This manifests itself
in the role of $p$ for our $z$-preserving rewiring model. In comparing the two models, we need to measure the actual
$p$ implemented by comparing with the regular lattice bond end-points. Then we should treat the effective $p$ for the
$z$-preserving model as half the effective $p$ achieved by the Watts-Strogatz model. This is further evidence that it
is the long-range nature of the rewired bonds that is behind the small-world critical shifts in the Ising model -- the
local topological details (such as which particular neighbour of {\bf A}, {\bf B}, etc. was rewired) are much less
important.
\section{Results and Discussion}
\label{sec:results}
The effect of the small-world network rewiring is to add long-range interactions across the lattice. This enables the
formation of correlations across the spin sites and consequently makes it easier for the Ising system to maintain
spontaneous magnetization at a higher critical temperature than it otherwise would. This effect is not dissimilar to
the finite-size effects of simulating {\em periodic} lattice models. The periodicity means that the system is able to
support spin correlations at a higher temperature than it otherwise could and consequently the critical temperature is
shifted to a higher value than that of a real system in the thermodynamic limit. A key step to understanding the
small-world effect is to quantify the resulting shift in the critical temperature for the different network rewiring
models.
\subsection*{Binder Cumulant Analysis}
We compute the temperature shift $\Delta T_c = T_{c}^{p} - T_{c}^{p=0}$ based on the critical temperature measured for a
small-world rewired Ising model system, compared with that of an unperturbed system on a regular lattice.
\begin{figure}[hbt]
\begin{center}
\includegraphics[angle=-90,width=9cm]{binder}
\caption{\label{fig:binder} Binder Cumulants }
\end{center}
\end{figure}
The critical temperature of a Monte Carlo simulated system can be calculated using the fourth-order Binder
cumulant method~\cite{Binder+LandauCumulant,Binder+Heermann}. The cumulant:
\begin{equation}
U_N = 1 - \frac{\langle M^4 \rangle_N}{3 \langle M^2 \rangle^2_N}
\label{eq:cumulant}
\end{equation}
is defined at different network sizes $N$, where $N=L^d$ based on the edge length $L$ of our substrate lattice. The
cumulant shows a transition edge at the critical temperature. The cumulant curves for different $N$ (or $L$) values
coincide at the critical temperature and this gives a way of extrapolating to the thermodynamic limit from relatively
small sized simulations. Figure~\ref{fig:binder} shows the form of the cumulant and how it has a sharper transition
edge at larger system sizes. A practical method of obtaining a critical temperature is to simulate at least three
different system sizes and by fitting straight lines to the linear region of the cumulant curve around the critical
temperature, calculate the intercept and an uncertainty estimate. For all the work we report in this paper we used at
least three system sizes. For small systems multiple independent rewiring configurations can be sampled. For very
large systems needed for estimating low-$p$ behaviour it is impractical to run more than a few independent samples and
consequently the measurement uncertainties are very much greater. Generally we have been able to estimate the critical
temperatures for two- and three-dimensional systems to around four significant figures, to three significant figures for
four-dimensional systems. Our five-dimensional system simulations are limited to qualitatively showing that a
small-world effect takes place.
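For concreteness, the cumulant (\ref{eq:cumulant}) and its limiting values can be sketched as follows (our illustration, with synthetic magnetization samples standing in for Monte Carlo measurements):
\begin{verbatim}
# Sketch: fourth-order Binder cumulant from magnetisation samples.
import numpy as np

def binder_cumulant(M):
    # U_N = 1 - <M^4> / (3 <M^2>^2) from a 1-D array of samples.
    m2 = np.mean(M**2)
    m4 = np.mean(M**4)
    return 1.0 - m4 / (3.0 * m2**2)

rng = np.random.default_rng(3)
# Deep in the ordered phase M ~ +/-1, so U -> 2/3; at high T M is
# approximately Gaussian about zero, so U -> 0.
ordered = rng.choice([-1.0, 1.0], 10000) + 0.01 * rng.normal(size=10000)
disordered = rng.normal(scale=0.1, size=10000)
print(binder_cumulant(ordered), binder_cumulant(disordered))  # ~2/3, ~0
\end{verbatim}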
\begin{figure}[hbt]
\begin{center}
\includegraphics[angle=-90,width=9cm]{Tc-vs-P}
\caption{\label{fig:kTperJz} Variation of the Critical Temperature against small-world (bond pair) rewiring probability for
Ising model in 2, 3, 4 and 5 dimensions, scaled by the underpinning regular lattice coordination number
$z$. Error-bars are comparable to symbol sizes and lines joining points are guides to the eye only.
}
\end{center}
\end{figure}
Generally, obtaining the shifted critical temperatures for different $p$-values was an iterative procedure. We employed
small lattice sizes to home in on approximate locations then used progressively larger system sizes to refine precision
and uncertainties. Figure~\ref{fig:kTperJz} shows the qualitative behaviour of the critical temperature as it varies
with $p$ for different dimensionalities.
Analysis of the shift $\Delta T_c = T_{c}^{p} - T_{c}^{p=0}$ in the critical temperature from the regular lattice value
shows a power law of the form $\Delta T_{c}^{(p)} \approx p^s$. This is shown on a logarithmic scale for three
dimensions in figure~\ref{fig:collapse} where we have included all our three-dimensional data for both the
Watts-Strogatz rewiring model and our $z$-preserving model (after suitable corrections to the meaning of $p$), as well as
data taken from~\cite{Herrero}. We believe that within the bounds of experimental uncertainty these all agree.
This confirms that well below the percolation limit, where the system is essentially one single component, the behaviour
is independent of the rewiring model. We have been able to extend the simulation results of Herrero to much larger
system sizes and hence much smaller values of $p$.
\begin{figure}[hbt]
\begin{center}
\includegraphics[angle=-90,width=9cm]{deltaTc-vs-P}
\caption{\label{fig:collapse} Collapse of the Watts-Strogatz rewiring data onto the $z$-preserving model (actual $p$) after the factor-of-two scaling }
\end{center}
\end{figure}
We studied both the Watts-Strogatz and our $z$-preserving rewiring models in detail on small-world systems constructed
from 2-D, 3-D and 4-D hyper-cubic lattices. For each dimension we studied at least three decades of $p$ values and in
the case of 3-D systems we studied six decades of $p$ down to $10^{-6}$ on systems of up to $N \approx 384^3$. A
preliminary study of a 5-D system also indicates that the small-world shift in $T_c$ does take place, although our 5-D
data is as yet insufficient to determine a useful value of $s$.
For $p \lesssim 0.1$ we are below the percolation regime and the system consists of one large component for the W-S
model. In this regime our rewiring model and the W-S rewiring model are in agreement within the limits of experimental
error, and furthermore agree closely with the 3-D data reported for a W-S rewiring model~\cite{Herrero}. Combining all
data we find $s \approx 0.698 \pm 0.002 $. Although we have less data for 2-D systems, we investigated larger system
sizes (up to $1024^2$) than Herrero, and do not find the tail-off he reports for small-$p$. We find a very good fit for
$s \approx 0.50 \pm 0.001$ for 2-D systems.
We conclude that the $\Delta T_{c}^{(p)} \approx p^s$ power-law is a very good description of all the data and any
deviation from it is indeed due to finite-size limitations that do not properly achieve the small-$p$ regime as Herrero
correctly suggests for his data.
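The exponent $s$ can be extracted by a straight-line fit on logarithmic scales; a minimal Python sketch (with purely hypothetical placeholder numbers standing in for the measured shifts) is:
\begin{verbatim}
# Minimal sketch: extract s in Delta T_c ~ p^s from a log-log
# straight-line fit.  The arrays below are hypothetical placeholders
# for the measured rewiring probabilities and T_c shifts.
import numpy as np

p_vals   = np.array([1e-4, 1e-3, 1e-2, 1e-1])   # hypothetical
delta_tc = np.array([0.004, 0.02, 0.1, 0.5])    # hypothetical

s, log_prefactor = np.polyfit(np.log(p_vals), np.log(delta_tc), 1)
print(f"s = {s:.3f}")
\end{verbatim}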
\subsection*{Critical Exponent $\beta$}
It remains to explore the critical exponents of the simulated small-world system to determine in what manner the
system transitions between the Ising-type transitions at $p=0$ in two- and three-dimensions, and the mean-field like
transitions at high-$p$ values.
The critical exponent $\beta$ describes how the Ising model order parameter -- its magnetization $M$ -- vanishes close to
the critical temperature. It is usually defined by:
\begin{equation}
\langle M \rangle \sim |T_{c}-T|^{\beta}, \quad T < T_{c},
\end{equation}
and we define the reduced temperature $t \equiv T_{c} - T$ so that $\langle M \rangle \sim t^{\beta}, t > 0$.
We can follow Herrero and study the logarithmic derivative:
\begin{equation}
\mu(t) = \frac{d \log \langle M\rangle}{d \log t}
\end{equation}
This gives a qualitative indication of how the behaviour is changing as $p$ is varied.
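Numerically, $\mu(t)$ is conveniently approximated by finite differences of $\log \langle M \rangle$ against $\log t$; a minimal sketch (with array names of our choosing):
\begin{verbatim}
# Minimal sketch: logarithmic derivative mu(t) = d log<M> / d log t,
# approximated by centred finite differences.  t_vals and m_vals are
# assumed to hold the reduced temperatures and the corresponding mean
# magnetizations measured in the simulation.
import numpy as np

def log_derivative(t_vals, m_vals):
    return np.gradient(np.log(m_vals), np.log(t_vals))
\end{verbatim}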
The logarithmic derivative $\mu$ curves for different $p$ values appear to intercept the $t=0$ axis at values between a
3-D Ising-type transition $\beta$ value of $\approx 0.325$\cite{Gupta} and the mean-field value of $0.5$. Our 3-D data
has quite high uncertainties in it due to small numbers of samples with large system sizes, but it suggests a monotonic
increase in the limiting value of the $t=0$ intercept for $\mu$ with rising $p$. Our 2-D data also shows a steady
change between the 2-D Ising value for $\beta$ of $0.125$ and the mean-field value.
\begin{figure}[hbt]
\begin{center}
\includegraphics[angle=-90,width=9cm]{M-vs-T}
\caption{\label{fig:mag} Deriving $\beta$ from the magnetization.}
\end{center}
\end{figure}
Figure~\ref{fig:mag} shows the magnetization averaged over $25$ independent samples for a three-dimensional system at a finite
value of $p$ as we approach the critical temperature from below. We can estimate the limiting value of the slope at low
$t$ (as shown in the inset) and use this to estimate values of $\beta$.
Applying this procedure we find a good straight line fit of $\log \beta$ vs $\log p$ characterised by
$\beta \approx p^{0.10 \pm 0.005}$ in 2-D and $\beta \approx p^{0.08 \pm 0.002}$ in 3-D.
We conclude that the model transitions continuously from Ising-like behaviour to mean-field behaviour as $p$ is
increased. We find no difference in these values for the two different rewiring models. This again emphasises the
power of the small-world parameter $p$ in interpolating between two different behaviour regimes.
\subsection*{Critical Exponent $\nu$}
The correlation length near the critical temperature is known to scale as:
\begin{equation}
\xi \sim |T-T_c|^{-\nu}
\end{equation}
and the correlation length exponent $\nu$ is known to have values~\cite{KadanoffBook} of $\nu = 0.632$ for 3-D systems;
$\nu = 1$ for a 2-D system and a value of $\nu = 0.5$ from mean-field theory.
In the linearised region near the critical temperature, an expansion indicates that the Binder cumulant depends
approximately on temperature~\cite{KimOnXY} as:
\begin{equation}
U_N(T) \approx U^* + U_1 \left( 1-\frac{T}{T_c} \right) N^{\frac{1}{\nu d}}
\end{equation}
so that
\begin{equation}
\frac{\Delta U_N}{\Delta T} \propto -N^\frac{1}{\nu d}
\end{equation}
For the work we report here on small-world systems based upon regular lattices with a fixed $z$, since $N = L^d$ we can equivalently write:
\begin{equation}
\frac{\Delta U_N}{\Delta T} \propto -L^\frac{1}{\nu}
\end{equation}
Straight-line fits to the experimental Binder cumulants around the critical temperature yield values of $\nu$ that are
between the requisite Ising value and mean-field values. The experimental uncertainties in our data for $\nu$ are quite
high however, and although the values suggest a monotonic change between Ising and mean-field behaviour as $p$ is
increased, we are unable to identify a meaningful functional form to characterise the variation with $p$.
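In practice $\nu$ follows from comparing the cumulant slopes at the critical temperature for pairs of system sizes, using $|\Delta U_N / \Delta T| \propto L^{1/\nu}$; a minimal Python sketch (function name and inputs are ours, with the slopes taken from the straight-line fits described above):
\begin{verbatim}
# Minimal sketch: estimate nu from Binder-cumulant slopes at T_c for
# two linear sizes L1 < L2, using |dU/dT| ~ L^(1/nu).  slope1 and
# slope2 are assumed to come from straight-line fits near T_c.
import numpy as np

def estimate_nu(L1, slope1, L2, slope2):
    return np.log(L2 / L1) / np.log(abs(slope2) / abs(slope1))
\end{verbatim}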
\subsection*{$p$-Dependence of the Transition}
It seems that the small-world transitional behaviour of the Ising model is intimately tied with enhancements to the
system's ability to support long-range correlations. It is therefore useful to consider the length scales present in
the rewired lattice. A useful metric is the shortest path length connecting two spin sites, counted in terms of the number of
traversed bonds or ``hops'' along edges of the graph.
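A sketch of this measurement using the networkx library, with its built-in Watts-Strogatz constructor (a rewired ring rather than our hyper-cubic lattices) standing in for our own rewiring code, is:
\begin{verbatim}
# Minimal sketch: mean all-pairs shortest-path distance <L> of a
# small-world graph.  networkx's Watts-Strogatz constructor is used
# here as a stand-in for the lattice rewiring described in the text.
import networkx as nx

def mean_distance(n_sites, z, p, samples=25):
    total = 0.0
    for seed in range(samples):
        g = nx.connected_watts_strogatz_graph(n_sites, z, p, seed=seed)
        total += nx.average_shortest_path_length(g)
    return total / samples
\end{verbatim}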
\begin{figure}[hbt]
\begin{center}
\includegraphics[angle=-90,width=9cm]{dijkstra}
\caption{\label{fig:dijkstra} Mean All-pairs Dijkstra distance $\langle L \rangle$ averaged over 25 samples of $32
\times 32$ lattices perturbed with the Watts-Strogatz (W-S) and our $z$-preserving (z-p) rewiring models.}
\end{center}
\end{figure}
Figure~\ref{fig:dijkstra} shows the Mean All-pairs distance $\langle L \rangle$ calculated using the Dijkstra
algorithm~\cite{Dijkstra} for a 2-D lattice when perturbed with the Watts-Strogatz (W-S) and our $z$-preserving (z-p)
rewiring models. The calculation is shown for a $32 \times 32$ system with the two different rewiring algorithms
applied. The underpinning regular lattice is periodic, and at $p=0$ the mean all-pairs distance approaches $L / 2 =
16$ edge units. This is shown on the inset plot on a linear scale, where the curves tend towards this maximum value of $16$
at $p=0$.
There are three distinct $p$-regimes shown in the figure. At small values of $p$, for which the lattice size is big
enough to support a reasonable number of rewired bonds, the log-log plot shows that $\langle L \rangle \approx
p^{-m}$. We find $m \approx 0.20$ for this data set. At $p$-values too small for the lattice to support even a few rewired bonds, the
straight line tails off. At high values of $p$ the value of $\langle L \rangle$ falls off to the value for a random
lattice.
Note that at high $p$ values, our $z$-preserving rewiring algorithm gives a different $\langle L \rangle$ value from the
W-S rewiring algorithm. This was discussed in section~\ref{sec:rewiring} and is due to exclusion effects of our
algorithm in avoiding self-bonds and multiple bonds. At high-$p$ the Watts-Strogatz rewiring algorithm breaks up the
system into multiple components. This is reflected in the averaged Dijkstra distance calculations, and for the W-S
rewiring $\langle L \rangle$ falls to zero in the limit of a completely random lattice.
Herrero hypothesized that the order-disorder transition temperature for small $p$ becomes
\begin{equation}
T_c - T_{c}^{p=0} \sim p^\frac{1}{\nu d}
\label{eq:hypothesis}
\end{equation}
where $\nu$ is the critical exponent of the regular (unperturbed) lattice. His data supports this in a 2-D system, as
does ours. However for a 3-D system his data disagree with this hypothesis and he speculated this was due to too small
system sizes and insufficiently small $p$. This conclusion was also reached by \cite{KimOnXY} for the XY
model. Nevertheless our value of $s \approx 0.698$ for a 3-D system, for which we have quite a high degree of
confidence, and which is derived from much larger simulated system sizes, is still in disagreement
with equation~\ref{eq:hypothesis}. Furthermore, in the case of our 4-D system, which has mean-field behaviour in the regular
unperturbed lattice case as well as in the rewired case, our measured value of $s = 0.75$ also clearly disagrees with
this hypothesis.
\section{Conclusions}
\label{sec:conclusions}
Generally the effect of the rewiring is to shift the critical temperature $T_c$ upwards. We have determined this
remains the case in dimensions $2, 3, 4$ and $5$. We have also found that the precise local rewiring details are not
important to the nature of the small-world phase transition, provided the appropriate interpretation of the parameter $p$
is made. We find that the shift in $T_c$ seems to go as a power law in $p$ in all dimensions.
Since our rewiring model preserves $z$, and hence the effective dimensionality of the underpinning lattice, we believe it is the
shortcuts themselves that principally give rise to the $p$-dependence of $T_c$, and not a change in the effective
dimensionality of the system such as arises when links are added. The small-world shortcuts support long-range spin-spin
correlations above the normal $T_c$ value. It appears that it is the presence of long-range correlations that
essentially constitute the nature of the phase transition and the critical behaviour; it is not unreasonable that the
small-worlding of the lattice has this effect.
Operationally we have found that a cluster-updating method such as that of Wolff is entirely satisfactory and its use
may assist in investigating more closely the small-world behaviour in large system sizes at higher dimensionality. The
exact nature of the small-world transition and in particular its interplay with the Ising transition are still not
entirely clear.
We believe that the shift from Ising-like behaviour to mean-field behaviour is a gradual one, but it remains to
ascertain this with better statistical sampling in higher dimensional systems. A particular area worthy of further
attention is the nature of cluster break-up in the system under a Watts-Strogatz rewiring, and the associated path and
correlation lengths that arise for different cluster components.
\begin{acknowledgments}
This work was funded by Massey University. We thank P.D.Coddington for valuable discussions on use of the
Wolff cluster method.
\end{acknowledgments}
\section{Introduction}
The isolation of graphene\cite{nov04} has led to a wide range of
scientific discoveries and technological
opportunities.\cite{Geim2007,Geim2009} These include, for instance, a
half-integer quantum Hall effect\cite{nov2005,zhang2005} with
potential for a drastically improved quantum resistance
standard,\cite{tza2010} and a high electron mobility in this
atomically thin crystal for applications in radiofrequency
electronics\cite{lin2010,liao2010} with reduced short channel
effects\cite{sch2010} or as a transparent flexible
electrode.\cite{bae2010} In a wide perspective, graphene is a material
with potential as a versatile and controllable bridge between the
atomic and the micron scales, with unique opportunities for
nanoelectronics applications. Chemistry tools may alter graphene
properties either globally (for example chemical
gating\cite{lara2011}) or locally (atoms and molecules bound to
graphene\cite{schedin2007,can2011}). At the same time, modern
lithographic techniques compatible with semi-conducting technology can
be used to pattern graphene into devices and integrate them with
conventional electronics.\cite{pal2010} Building on these discoveries,
we show in this paper how graphene patterned to form a nanogap can be
used as electrodes for gate-tunable molecular electronics with single
molecules. The transistor effect is achieved by gating the graphene
electrodes while additional device functionalities can be built into
the molecule bridging the nanogap.
The idea of utilizing single molecules as active elements for
electronics applications are based on a number of
observations,\cite{Joa_review,Cuniberti_book,Bjo_review,cuevas_book}
including device minituarization, reproducibility, and functionality.
Besides being the ultimately small object, molecules can be
mass-replicated and their functionality can be tailored through
molecular synthesis. Many functional single molecule devices have been
demonstrated,\cite{Tao_review} but many problems remain before
practical applications can be realized. Traditionally, a metal such as
gold has been used to make contacts, although other configurations
such as semiconducting substrates combined with the scanning tunneling
microscope\cite{gui2004,rak2004} (STM), have been shown to work as
well. The huge size mismatch between metallic leads ($>10$ nm
thick) and molecules is unavoidable, which makes nanogap fabrication
challenging. Utilization of the STM for contacting molecules is not a
scalable technology and is mainly suited for making devices for
research. In addition to the problem of making nanogaps, an equally
important problem for molecular electronics is how to make good
molecular transistors.\cite{kub03,oso08} The difficulty is to put a
gate sufficiently close to the molecule in a metallic nanogap to
achieve a gate effect.
Here, we study theoretically the prospects of using graphene as a
platform for single molecule electronics, where graphene
nanostructures are used as contacts and interconnects instead of metal
wires. The main purpose of this paper is to show that a transistor
effect can be achieved by utilizing a back gate that changes the
electron density and the density of states at the Fermi level of the
graphene leads. This transistor effect works well when the coupling of
the gate to the molecule is {\it weak} compared with the coupling of
the gate to the leads, which is the opposite situation to a
traditional molecular transistor, including nanotube-based
devices\cite{guo2006} although sharing the same robustness as these
through a resonance tunneling mechanism.\cite{hyld} In addition to the
transistor effect, the usage of graphene, being one-atom thick, would
circumvent the size-mismatch problem experienced with metal contacts.
The current fast improvements in graphene patterning and device
fabrication,\cite{biro2010,wang10,he2010} have opened new
opportunities for making advanced devices. These include gas
sensors,\cite{schedin2007} nanopores for DNA
sequencing,\cite{postma2010,pra2011} and single-electron transistors
operated as read out devices.\cite{can2011} In the latter experiment,
magnetic molecules were deposited on top of a graphene
constriction. By utilizing an external magnetic field, the spin states
of the molecules were manipulated, as could be read off by the
graphene single-electron transistor working in its conducting
state. We conclude that with this rapid progress in graphene device
fabrication, the devices we shall study here can be made in the near
future.
\section{Model}
The geometry of the molecular transistor we are considering is shown
in Fig.~\ref{Fig-ELD}(a). The molecule, resembling a dumbbell,
consists of a central wire of 1,4-phenylenediamine with C60 end-groups
(i.e. fulleropyrrolidine terminated benzene), as depicted in
Fig.~\ref{Fig-ELD}(b). The wide graphene leads extend far from the
molecule and are electrically connected to source and drain. A back
gate can be used to change the position of the Dirac point in the
graphene bandstructure with respect to the molecular energy levels. We
describe this gate effect in more detail below.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1_ELD.jpg}
\caption{(a) Geometry of the transistor. (b) {\it Dumbbell} molecule
consisting of a 1,4-phenylenediamine bridge with C60 anchoring
groups. (c) Energy level diagram at zero source-drain voltage.
The work function $W$ of the leads can be changed by a few hundred
meV via a back gate so that the Dirac point $E_D$ in the graphene
density of states $N(E)$ is above or below (depicted) the Fermi
level $E_F$. The estimated charge transfer to the molecule,
reflected in the contact potential $CP$, is such that the Fermi
level intersects the LUMO.}
\label{Fig-ELD}
\end{figure}
The goal of this work is not to predict the functionality of a certain
molecule in great detail, or to reproduce or explain a certain
experiment. Rather, the goal is to show the most salient features of a
single-molecule transistor with graphene leads operating in the
quantum coherent regime. But we base our studies on the specific
molecule in Fig.~\ref{Fig-ELD}. This molecule has recently attracted a
lot of attention because it has promising properties for making
reproducible contacts via the large C60 anchoring
groups.\cite{BDC_zant,BDC_JOC} Thus, both leads and anchoring groups
are made of carbon.
We shall in this work use a minimal model based on a tight-binding
description. The Hamiltonian is
\begin{equation}
H = \sum_i E_i c_i^{\dagger}c_i + \sum_{i \neq j} t_{ij} c_i^{\dagger}c_j,
\label{hamiltonian}
\end{equation}
where $E_i$ are onsite energies, $t_{ij}$ are hopping amplitudes
between sites $j$ and $i$, and the operators $c_i^{\dagger}$ and $c_i$
create and destroy electrons on sites $i$. We concentrate on the
quantum coherent transport regime and leave effects of Coulomb
interaction to future studies. This corresponds to assuming a small
charging energy $U$ on the molecule compared with the molecular level
broadening $\Gamma$ due to the coupling to the leads. The graphene
nanostructured leads, as well as the molecule, are readily built up by
restricting the sites $i$ and $j$. We study both armchair and zigzag
graphene nanoribbon leads with nearest-neighbor hopping amplitude
$t$, with and without edge disorder. We include large and wide
sections of the graphene leads in the calculations and connect them to
ideal ribbons connected to reservoirs through the technique of
self-energies. This is a Landauer approach based on non-equilibrium
Green's functions.\cite{cuevas_book,datta_book} We will for simplicity
focus on the low-bias regime and present results for the transmission
function of the device, as well as spectral charge current flow
patterns inside the device. We note that it is important to have wide
ribbons, with edge disorder on a length scale smaller than the ribbon
width, otherwise weak links may form at necks of the imperfect
graphene ribbon. Two such necks define a quantum dot with
single-electron transistor properties,\cite{ihn2010} which leads to
unwanted Coulomb blockade effects in the leads.
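As a minimal illustration of the model, the following Python sketch assembles the dense tight-binding Hamiltonian of Eq.~\ref{hamiltonian} from a site and bond list; the construction of the actual graphene and molecule geometry (the lists themselves) is assumed to happen elsewhere, and the function name is ours.
\begin{verbatim}
# Minimal sketch: dense tight-binding Hamiltonian from onsite energies
# E_i and a list of bonds (i, j, t_ij).  The geometry of the graphene
# leads and of the molecule is assumed to be encoded in these lists.
import numpy as np

def tight_binding_hamiltonian(onsite, bonds):
    n = len(onsite)
    h = np.diag(np.asarray(onsite, dtype=complex))
    for i, j, t_ij in bonds:
        h[i, j] += t_ij
        h[j, i] += np.conj(t_ij)   # enforce Hermiticity
    return h
\end{verbatim}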
For the molecule, the C60 end groups and the benzene ring of the
bridge are all carbon based, while each link group in addition
contains a nitrogen atom. For our purposes, it is enough to model the
molecule within the tight-binding (H\"uckel) theory on equal footing
with the leads, and leave details to be explored in future
calculations and experiments. The parameters of the molecule are kept
to a minimum by only varying hopping between C60 end groups and the
center phenylenediamine bridge, while letting all other carbon atoms
have the same parameters in the molecule as in the graphene leads. The
molecule parameters are the onsite energy on the nitrogen atom
relative to the onsite energy of the carbon atoms,\cite{Inga}
$E_N=E_C-0.9|t|$, ($t$ is the C-C hopping amplitude), the hopping from
C60 to the nitrogen group $0.4t$, and the hopping from the benzene
ring to the nitrogen group $0.6t$, see Fig.~\ref{Fig-EIG}(b). The
energy levels and orbitals of the C60 end groups are known within the
tight-binding model.\cite{Dresselhaus_book} For the bridge, we have
compared our model with orbitals obtained with the freely available
quantum chemistry package GAMESS,\cite{QC} see
Fig.~\ref{Fig-EIG}(a). We would like to emphasize that if the
parameters of this model are varied, unimportant details of the
results may change, but the general principles of how the transistor
operates will not change.
\begin{figure}
\includegraphics[width=\columnwidth]{fig2_EIG2.jpg}
\caption{(a) Molecular eigenvalues of the phenylenediamine bridge
without C60 end-groups (black squares), the C60 molecule (red
rings), and the dumbbell molecule (blue stars) within the
tight-binding model with the indicated parameters. The dumbbell
HOMO originates from the phenylenediamine bridge, while the LUMO
originates from the end-groups. The insets show the HOMO (left)
and LUMO (right) of the phenylenediamine bridge computed with the
quantum chemistry package GAMESS.\cite{QC} The symmetries of these
orbitals are also obtained within the tight-binding model. (b)
Geometry of the molecular bridge. Carbon atoms number 1, 2, 7, and
8 are in $sp^3$ orbital hybridization and do not participate in
current transport, but we include next-nearest neighbor coupling
$t_{CN}=0.4t$ between atoms 3, 4, 5, and 6 to the nitrogen atom
(number 9). The hopping from nitrogen to the nearest neighbor in
the benzene ring (atom number 10) is $t_{NC}=0.6t$. (c) Bernal
stacking of a hexagon in C60 (red) and graphene (black);
orientation O2 in Fig.~\ref{Fig-vdW}. (d) Stacking for a
$30^{\circ}$ rotation of the C60; orientation O1 in
Fig.~\ref{Fig-vdW}. The nearest neighbor hopping [vertical in (c)
and dotted in (d)] and next-nearest neighbor hopping (dashed
lines) between the two layers are $t_1=0.28t$ and $t_2=0.22t$,
respectively.}
\label{Fig-EIG}
\end{figure}
Finally, to determine the C$_{60}$-on-graphene adsorption geometry and
estimate both the magnitude of effective C$_{60}$-graphene hopping
constants and the C$_{60}$-graphene charge transfer, we have performed
a study using the van der Waals density functional (vdW-DF)
method.\cite{dion2004,thon2007} The results have been obtained in a
non-selfconsistent evaluation of the most recent version,
vdW-DF2,\cite{lee2010} a version which some of us have previously
found gives an accurate description of the binding in both a C$_{60}$
crystal and of graphene layers.\cite{ber2011} We have used a
plane-wave code and a standard semi-local density functional
approximation, to obtain underlying results for the electron-density
variation (as a function of adsorption geometries and distances). Our
choice of non-selfconsistent vdW-DF evaluations is motivated by a
recent analysis.\cite{thon2007}
We present in Fig.~\ref{Fig-vdW} a summary of our vdW-DF study. Panel
(a) shows vdW-DF2 results for the variation in binding energy of
C$_{60}$ on graphene with center-of-mass separation for three typical
low-energy adsorption geometries that we have investigated in an
extended search. We find that there is a systematic preference for
adsorption with the C$_{60}$ hexagon facing down and situated on top
of a graphene atom. The panel also provides a comparison of this type
of adsorption geometries (identified by the insets, which show
C$_{60}$ atoms in purple, graphene atoms in black) and we find that
the most favorable configuration corresponds to a 30$^{\circ}$
rotation of what would amount to a Bernal stacking of the hexagon on
the graphene. Important for the transport modeling, we predict that
the optimal adsorption separation is smaller than the value (vertical
dashed line) which would correspond to the predicted layer separation
in graphite. We conclude that the effective C$_{60}$-to-graphene
hopping constants must be chosen larger than the choice which is made
in a tight-binding modeling of graphite; in the qualitative transport
modeling below, we simply set the enhancement at a factor of two. The
two lowest energy configurations (solid and dashed lines in
Fig.~\ref{Fig-vdW}) correspond, in the transport calculation below,
to anchoring of the dumbbell molecule for zigzag (orientation O1) and
armchair (orientation O2) leads, respectively [see
Fig.~\ref{Fig-EIG}(c)-(d)].
In Fig.~\ref{Fig-vdW}(b) we show the details of the C$_{60}$
adsorption and the complex charge transfer which we have calculated
for the optimal adsorption geometry. The blue colors show regions of
electron accumulation whereas the red regions identify an electron
depletion (relative to a superposition of the graphene and C$_{60}$
electron densities). The panel shows that the binding is characterized
not only by van der Waals forces but also by a pronounced dipole (and
even multipole) formation. In addition, we find\cite{neat2006} a net
charge transfer from the graphene to the C$_{60}$. We find that
the C$_{60}$-on-graphene binding is beyond simple physisorption as the
charge rearrangement causes a work function
shift.\cite{neat2006,kaas2008}
\begin{figure}
\includegraphics[width=\columnwidth]{fig3_vdW.jpg}
\caption{Results of a density functional theory calculation
including the van der Waals interaction. (a) Binding energy of C60
on graphene for three high-symmetry configurations. The solid
black line (orientation O1) and dashed black line (orientation O2)
correspond to dumbbell molecule anchoring for zigzag and armchair
leads, respectively. In both cases, a hexagon of the C60 faces
graphene. The configuration with a pentagon facing graphene (red
line) is less favorable. (b) The charge density distribution at
the C60-graphene contact for orientation O1. The blue (red) area
is negative (positive) charge, implying a visible charge transfer
to C60 from graphene.}
\label{Fig-vdW}
\end{figure}
The first step in the transport characterization is to obtain the
retarded Green's function $G^R(E)$ of the system, which is a matrix in
the site indices. We utilize our own implementation of a recently
developed recursive algorithm\cite{waintal} within which the sites are
added one by one, which is ideal for our devices with complicated
geometries. The advanced Green's function is obtained by hermitian
conjugation $G^A(E)=\left[G^R(E)\right]^{\dagger}$. Observables are
related to the lesser Green's function $G^<$. In the absence of
electron correlations, the expression for the lesser Green's function
is reduced to the form
\begin{eqnarray}
G_{ij}^<(E) &=& \sum_{\ell} f_{\ell}(E) \nonumber\\
&&\times \sum_{c\tilde c} G_{ic}^R(E)
\left\{
\left[\Sigma_{\ell}^R(E)\right]^{\dagger}-\Sigma_{\ell}^R(E)
\right\}_{c\tilde c}
G_{\tilde cj}^A(E).\nonumber
\end{eqnarray}
It involves the distribution functions of the leads $f_{\ell}(E)$ and
self-energies $\Sigma_{\ell}^R(E)$ at the surfaces of the leads that
remain after eliminating the leads in favor of the system Green's
function. The leads are enumerated by the index $\ell$ (here $\ell=1$
and $2$ for source and drain) and surface sites of the leads are
labeled by $c$ and $\tilde c$. Local charge current flow in the device
(bond current between sites $i$ and $j$) is written as
\begin{equation}
I_{ij} = e\int_{-\infty}^{\infty}
\left[ t_{ij}G_{ji}^<(E)-t_{ji}G_{ij}^<(E) \right]dE.
\label{bondcurrent}
\end{equation}
The transmission function can also be written in terms of the retarded
Green's function and self-energies of the leads,
\begin{equation}
T(E) = \mbox{Tr}\left[ \Gamma_1(E) G^R_{12}(E) \Gamma_2(E) G^A_{21}(E) \right]
\end{equation}
where $\Gamma_{\ell}=i[\Sigma_{\ell}^R -
(\Sigma_{\ell}^R)^{\dagger}]$, $G^R_{12}$ is the propagator between
leads $1$ and $2$, and the trace is over the surface sites. Or we may
compute $T(E)$ by integrating the bond-currents
[Eq.~\ref{bondcurrent}] flowing through an interface of the device.
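For orientation, a minimal numerical sketch of these formulas for a toy tight-binding chain is given below; wide-band-limit lead self-energies (a constant imaginary part on the first and last site) stand in for the exact surface self-energies of the graphene leads used in our calculations.
\begin{verbatim}
# Minimal sketch: Landauer transmission T(E) = Tr[G1 G^R G2 G^A] for a
# toy chain.  Wide-band-limit self-energies (strength gamma on the
# first and last site) replace the exact lead surface self-energies.
import numpy as np

def transmission(h, gamma, energies):
    n = h.shape[0]
    sigma1 = np.zeros((n, n), complex); sigma1[0, 0]   = -1j * gamma / 2
    sigma2 = np.zeros((n, n), complex); sigma2[-1, -1] = -1j * gamma / 2
    g1 = 1j * (sigma1 - sigma1.conj().T)   # Gamma_1
    g2 = 1j * (sigma2 - sigma2.conj().T)   # Gamma_2
    t_vals = []
    for e in energies:
        gr = np.linalg.inv(e * np.eye(n) - h - sigma1 - sigma2)
        t_vals.append(np.real(np.trace(g1 @ gr @ g2 @ gr.conj().T)))
    return np.array(t_vals)
\end{verbatim}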
\section{Transistor effect}
In Fig.~\ref{Fig-transistor}(a) we show an example of a transmission
function for one molecule in the center of the graphene nanogap in a
symmetric position, here for armchair leads. The transmission displays
typical resonance features near the molecular levels of the isolated
molecule. The levels are shifted and broadened by the coupling to the
leads. The amount of broadening depends on the exact coupling of the
molecule to the graphene, but also on the nature of the molecular
orbitals. For this particular molecule, the LUMO is mainly centered on
the C60 anchoring groups that act as effective extensions of the
leads, while the bridge acts as the weak link in the
system. Functionality can be added to the device by choosing a
different bridge by exchanging the benzene ring during molecular
synthesis.\cite{BDC_JOC} But here we shall continue working with the
benzene bridge and focus on the transistor effect.
In Fig.~\ref{Fig-transistor}(b) we present a contour plot of the
transmission function for energies (vertical axis) near the LUMO as we
rigidly shift the band structure of the graphene leads relative the
molecular level by a back gate voltage (horizontal axis with a
transfer function $\alpha$ between the gate voltage and the shift of
the Dirac point). The gate effect we have in mind can be visualized
starting from Fig.~\ref{Fig-ELD} as moving the graphene bands
vertically keeping molecular levels and the Fermi level fixed. When
the Dirac point is far from the molecular level, the transmission
resonance corresponds to the transistor in the on-state. As the Dirac
point in the bandstructure approaches the molecular level, either from
below or above in energy, the transmission resonance is shifted due to
hybridization with zigzag nanogap edge states. The possibility of such
hybridization was also noted recently in a DFT
calculation.\cite{mot2011} When the Dirac point passes through the
level, the transmission is suppressed and the transistor is in the
off-state, see Fig.~\ref{Fig-transistor}(b) at $\alpha
eV_g=E_{\mbox{\footnotesize LUMO}}\approx 0.133t$. The on and off
states will be well separated when the Dirac point can be shifted by
$\delta E_D>\Gamma$, larger than the molecular level broadening
$\Gamma$.
\begin{figure}
\includegraphics[width=\columnwidth]{fig4_Transmission1.jpg}
\caption{Electron transport through a single dumbbell molecule
connected to armchair graphene leads with perfect edges. Anchoring
of the dumbbell molecule in orientation O2. (a) Transmission as
function of energy when the Dirac point (here $E=0$) is below the
LUMO, $E_{\mbox{\footnotesize LUMO}}\approx 0.133t$. (b)
Transmission as function of energy and back-gate voltage for
energies close to the LUMO. As the Dirac point is tuned by the
back-gate through the molecular level, the transmission is quenched
which leads to a transistor effect. (c) A sketch of the movement
of the Dirac point through the molecular level as the graphene lead
work function is tuned by the back gate voltage. We estimate (see
text) that the charge transfer to the molecule increases as the
Dirac point $E_D$ in the graphene band structure is tuned to be
below the Fermi level.}
\label{Fig-transistor}
\end{figure}
The transistor on-off ratio will be large when the Fermi level $E_F$
is aligned close to a molecular level. This is the case here, with
$E_F$ in the broadened LUMO. We can estimate the molecular level
alignment\cite{Datta_Review} with respect to the Fermi level by
estimating the charge transfer between graphene and C60. We do that by
comparing the work function of graphene with metals for which the
charge transfer to C60 has been measured. It has been shown by
scanning Kelvin probe microscopy\cite{Yu_NL2009} that the application
of a back gate voltage results in a change of the graphene work
function between 4.5~eV (electron doped) and 4.8~eV (hole
doped). Scanning tunneling experiments and DFT calculations of C60 on
gold and silver surfaces show\cite{Lu_PRB2004} that the charge
transfer to C60 from gold is vanishingly small, while it is of order
$0.2e$ from silver. The work function of silver is 4.6~eV, while that
of gold is 5.3~eV. This picture is corroborated by our vdW-DF study
which identified a net charge transfer and dipole formation. In
summary, we draw the conclusion that there is considerable
charge-transfer effects already for hole-doped leads ($W=4.8$~eV),
which results in the Fermi level energy in our system aligned inside
the broadened LUMO. As the gate voltage is changed, the charge
transfer to the molecule will increase as we go through the Dirac
(neutrality) point of graphene to the electron doped side (eventually
reaching $W=4.5$~eV), which results in a Fermi level deeper inside the
LUMO. Based on these estimates, where the Dirac point can be shifted
by at least $0.1t$ through the LUMO, see Fig.~\ref{Fig-transistor}(c),
the on-off ratio will be large but the precise value will ultimately
have to be determined by experiment. The back gate is straightforward
to use compared with the traditional direct gating of the molecule
itself. In fact, the transistor effect is most effective when the gate
is decoupled from the molecule and only affects the graphene leads.
In Fig.~\ref{Fig-transistor2}(a) we show the gate effect for the case
of wide zigzag graphene leads, with the anchoring of the C60 end
groups in orientation O1. For this orientation, there are no nanogap
edge states, since the nanogap has armchair orientation. The strong
hybridization of the molecular level with lead states is therefore
absent and we predict a simple weak shift of the molecular level with
gate voltage. As the Dirac point is tuned through the molecular level,
the transmission is quenched, and we have a transistor effect
analogous to the case with armchair leads discussed above.
\begin{figure}
\includegraphics[width=\columnwidth]{fig5_Transmission2.jpg}
\caption{Electron transport through a single dumbbell molecule
connected to graphene leads. (a) Zigzag leads with perfect edges;
anchoring of the dumbbell molecule in orientation O1. (b) Armchair
leads with random rough edges; compare Fig.~\ref{Fig-transistor}(b)
for perfect edges.}
\label{Fig-transistor2}
\end{figure}
We note that graphene itself (without nanogap and molecules as weak
links) works as a transistor via the back gate. However, the graphene
transistor cannot be set in the off-state, as the minimal conductivity
is of order $e^2/h$, in contrast to the molecular transistor with
graphene leads that we study here. A nanoribbon has an energy gap
related to its width, and would work as a transistor with an
off-state. However, the required ribbon width is small (a few nm) and
it is very difficult to control the nanopatterning with the required
atomic resolution. In contrast, as we show below, the molecular
graphene nano-gap transistor is more robust against edge disorder in
the graphene leads.
\section{Charge flow patterns}
Since graphene is 100~\% surface, it is an ideal material to study by
scanning techniques.\cite{con2010} Previously, atoms and molecules on
metal surfaces\cite{Lu_PRB2004} or large-area graphene\cite{bra2011}
have been studied by scanning tunneling microscopy (STM) and
spectroscopy (STS) and valuable details about e.g. orbitals and charge
transfer have been obtained. Scanning techniques have been utilized to
reveal local information about Coulomb interactions in graphene
nanostructures.\cite{schnez10} Quantum transport through quantum point
contacts in 2DEGs has been mapped out\cite{Topinka_review} by
scanning gate spectroscopy (SGS) and revealed so-called branched flow
originating from a background random potential due to doping
impurities in the layers forming the 2DEG. SGS has also been used to
reveal coherent electron transport in large-area graphene
flakes.\cite{ber2010a,ber2010b} Clearly, transport in a molecular
device with metal electrodes is hidden by the bulky nature of the
metallic contacts. Graphene on the other hand, being two-dimensional,
would be a uniquely suitable electrode enabling information about
quantum transport in a molecular device to be revealed by scanning
techniques.
\begin{figure}
\includegraphics[width=\columnwidth]{fig6_flow.jpg}
\caption{(a) Spectral current flow pattern through a transistor
with one molecule in the nanogap. Note that the molecule is
much smaller than the lobe structured pattern in the charge flow
in and out of the molecule visible in the leads. (b) Spectral
current flow pattern for the case of two molecules in the nanogap.
(c) Spectral current flow patterns disturbed by random edges of
the leads. In all cases, we assume zero temperature and study the
linear low-voltage regime.}
\label{Fig-flow}
\end{figure}
In Fig.~\ref{Fig-flow}(a) we present the spectral current flow pattern
[the integrand in Eq.~\ref{bondcurrent}] through the device for an
energy corresponding to the top of the LUMO peak in the transmission
shown in Fig.~\ref{Fig-transistor}(a). The position of the molecule is
clearly visible in the current flow pattern. The molecule forms the
weak link where all current is channeled through. The current is
flowing in and out of the molecule in a characteristic lobe pattern,
that is due to the specific anchoring of the C60 on graphene. Deep
inside the electrodes, the current is carried throughout the width of
the ribbon.
In Fig.~\ref{Fig-flow}(a) the molecule is in a symmetric position in
the nanogap. If the molecule is in an asymmetric position, the charge
flow pattern is simply displaced vertically and changes in an
intuitive way (not shown), while the transmission function remains
unchanged unless the molecule is very close to the upper or lower
edges of the nanogap, within a few rings in the graphene
leads. Similarly, if two molecules are bridging the nanogap, to a good
approximation two flow patterns are simply superimposed, as we show in
Fig.~\ref{Fig-flow}(b). We may expect quantum interference between
the two pathways (two molecules). The effect on the transmission
function is weak in this example, however, unless the two molecules
are very close to each other, with the anchoring groups only separated
by a few rings in the graphene leads.
In a real device, perfect armchair or zigzag edges are hard to
fabricate. In Fig.~\ref{Fig-flow}(c) we show an example of the effect
of edge disorder, consisting of randomly removed carbon atoms within
one ring from the original perfect edges. The interference patterns in
the leads are now affected, but the molecular weak link remains
clearly visible in the patterns of current flowing in to and out of
the molecule. Also the transmission function $T(E)$ is in its main
features unaffected by defects in the leads, as we show in
Fig.~\ref{Fig-transistor2}(b) [compare
Fig.~\ref{Fig-transistor}(b)]. Ideal graphene leads are not crucial
for the transistor to function, since the weak link is the molecule.
\section{Summary and Conclusions}
In conclusion, we have studied a single molecule device with graphene
leads working in the quantum coherent transport regime. We predict a
transistor effect that is pronounced when the gate coupling to the
leads is much stronger than to the molecule. This opens new avenues
for research of gate tunable quantum coherent molecular electronics
with single molecules.
\section*{Acknowledgement}
We would like to thank V. Geskin and G. Wendin for valuable
discussions at various stages of this work. This work has been
supported by the European Union through the projects SINGLE and
ConceptGraphene, as well as SSF, the Swedish foundation for strategic
research.
\section{Introduction}
The quantum linear systems (QLS) problem asks for a state that encodes the solution of a linear system $\mat A\vec{x}=\vec{b}$ where $\mat A \in \mathbb R^{n\times n}$ and $\vec{b} \in \mathbb R^n$.\footnote{Without loss of generality, one may assume that $\mat A$ is Hermitian.}
In a seminal work, Harrow, Hassidim, and Lloyd showed how to solve the QLS-problem using only $\polylog(n)$ queries to the input~\cite{HHL09}. Their algorithm has a polynomial dependence on the condition number $\kappa$ of $\mat A$ and the desired precision $\varepsilon>0$. Subsequent work has improved the $\kappa$-dependence to near linear~\cite{Ambainis2012VTAA},\footnote{Here by a near linear runtime in terms of $\kappa$ we mean a runtime that scales as $\kappa \polylog(\kappa)$.} and the error-dependence to $\polylog(1/\varepsilon)$~\cite{CKS17}. The algorithms in \cites{HHL09,Ambainis2012VTAA,CKS17} can all be viewed as implementing a polynomial transformation of $\mat A $ that approximates the inverse. They are based on various combinations of Hamiltonian simulation, quantum walks, linear combinations of unitaries, and most recently the quantum singular value transformation framework~\cite{Low2019hamiltonian,Gilyen2019SVT}.
More generally, an efficient QLS algorithm is a key building block for many downstream applications in optimization and in machine learning. Some examples include least-squares regression \cite{Chakraborty2019BE}, support-vector machines \cite{Rebentrost2014SVM,Kerenidis2021QSVM}, as well as differential equations \cite{linden2020quantum,tong2020fast}. Thus, optimizing the resources (depth in particular) required by the QLS algorithm would bring us closer to running these algorithms on near-term quantum hardware.
Currently, the best QLS algorithm is based on the polynomial by Childs, Kothari and Somma \cite{CKS17} (abbreviated as CKS from now on), which is evaluated by the quantum singular value transformation (QSVT) framework \cite{Gilyen2019SVT} by Gily\'{e}n, Su, Low and Wiebe, and sped up using the variable-time amplitude amplification technique due to Ambainis \cite{Ambainis2012VTAA}. In a nutshell, the CKS polynomial is obtained by starting from the polynomial $p_t(x):= \frac{1-(1-x^2)^t}{x}$ for $t = \widetilde{O}(\kappa^2)$, expressing it in the Chebyshev basis, and truncating the sum after $\widetilde{O}(\kappa)$ terms. To obtain a quantum algorithm, the resulting polynomial is then combined either with the linear combination of unitaries (LCU) lemma \cite{Berry2015Hamiltonian} or with the QSVT framework. The LCU approach is simpler but yields circuits with an extra multiplicative logarithmic factor in depth and an extra additive logarithmic number of ancillary qubits. On the other hand, the QSVT framework requires the computation of certain angles (see \cref{sec: QSVT} for details), and doing so efficiently in a numerically stable way is the subject of ongoing research \cites{Haah2019product, chao2020finding, dong2021efficient}.
In this work, we revisit quantum linear system solvers by conceptually linking both previous techniques and our own to classical gradient descent methods, and by providing the optimal quantum circuits within this framework. In particular, we point out a connection between classical iterative methods for solving $\mat A\vec{x}=\vec{b}$ and the polynomials used in the quantum algorithms, namely that they correspond to the ones used in the (basic) gradient descent on the convex function $\|\mat A\vec{x}-\vec{b}\|_2^2$.
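This correspondence is easy to verify numerically: running plain gradient descent with step size $1/2$ on $\|\mat A\vec{x}-\vec{b}\|_2^2$ from $\vec x_0 = \vec 0$ produces, after $t$ steps, the iterate $\vec x_t = p_t(\mat A)\vec b$. A minimal Python sketch of this check (with a random positive definite test instance of our choosing):
\begin{verbatim}
# Minimal sketch: gradient descent with step size 1/2 on ||Ax - b||^2
# reproduces x_t = p_t(A) b with p_t(x) = (1 - (1 - x^2)^t) / x.
import numpy as np

rng = np.random.default_rng(0)
n, t = 8, 50
q = np.linalg.qr(rng.standard_normal((n, n)))[0]
a = q @ np.diag(rng.uniform(0.2, 1.0, n)) @ q.T  # Hermitian, ||A|| <= 1
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(t):
    x = x - a @ (a @ x - b)                      # gradient step, eta = 1/2

evals, evecs = np.linalg.eigh(a)
p_t = (1 - (1 - evals**2)**t) / evals            # p_t on the spectrum
x_poly = evecs @ (p_t * (evecs.T @ b))
print(np.linalg.norm(x - x_poly))                # agrees to machine precision
\end{verbatim}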
Our main contribution is to show that the optimal classical iterative method (known as \emph{Chebyshev iteration}) also leads to polynomials that can be implemented on a quantum computer, see \cref{thm:main}. This leads to a considerable constant-factor improvement in the runtime of QLS-solvers (or, conversely, an improved error with the same runtime/circuit depth).
In more detail, our approach is as follows. First recall that the Chebyshev iteration corresponds to the polynomial
\[
q_t(x) := \frac{1 - \left. \mathcal{T}_t\left(\frac{1+1/\kappa^2-2x^2}{1-1/\kappa^2}\right) \middle/ \mathcal{T}_t\left(\frac{1+1/\kappa^2}{1-1/\kappa^2}\right) \right.}{x},
\]
where $\mathcal{T}_t$ is the $t$-th Chebyshev polynomial of the first kind. These Chebyshev polynomials are defined as $\mathcal{T}_0(x) = 1$, $\mathcal{T}_1(x) = x$, and $\mathcal{T}_{t+1}(x) = 2x \mathcal{T}_t(x) -\mathcal{T}_{t-1}(x)$ for $t \geq 1$. They have the property that $|\mathcal{T}_t(x)|\leq 1$ for all $x \in [-1,1]$ and $t \geq 0$. One can show that the polynomial $q_t$ is an $\varepsilon$-approximation of the inverse on the domain $x \in [-1, -1/\kappa] \cup [1/\kappa, 1]$, whenever $t \geq \frac12 \kappa \log(2\kappa^2 / \varepsilon)$. To bound the maximum absolute value of $q_t$ on $[-1,1]$, we express $q_t(x)$ as $\sum_{i=0}^{t-1} c_i \mathcal{T}_{2i+1}(x)$ and bound the $1$-norm of the vector $\vec c$. The vector of coefficients can be used to implement $q_t(\mat A)/\|\vec c\|_1$ either directly via the linear combinations of unitaries approach, or via the quantum singular value transformation approach. In \cref{app:Good example functions for LCU} we show that this approach of bounding the $1$-norm of the vector of coefficients in the Chebyshev basis more generally leads to near optimal quantum algorithms via the LCU framework for a variety of continuous functions (powers of monomials, exponentials, logarithms) and discontinuous functions (the error function and by extension the sign and rectangle functions).
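As an illustration, the following minimal Python sketch constructs $q_t$, checks its approximation error on $D_\kappa$, and computes the $1$-norm of its Chebyshev coefficient vector. It relies on SciPy's \texttt{eval\_chebyt} (valid also for arguments outside $[-1,1]$) and NumPy's \texttt{chebinterpolate}; the parameter values are illustrative.
\begin{verbatim}
# Minimal sketch: the Chebyshev-iteration polynomial q_t, its maximum
# deviation from 1/x on D_kappa, and the 1-norm of its coefficient
# vector in the Chebyshev basis.
import numpy as np
from scipy.special import eval_chebyt
from numpy.polynomial.chebyshev import chebinterpolate

kappa, eps = 10, 1e-4                       # illustrative values
t = int(np.ceil(0.5 * kappa * np.log(2 * kappa**2 / eps)))

def q_t(x):
    y0 = (1 + 1/kappa**2) / (1 - 1/kappa**2)
    y = (1 + 1/kappa**2 - 2*x**2) / (1 - 1/kappa**2)
    return (1 - eval_chebyt(t, y) / eval_chebyt(t, y0)) / x

xs = np.linspace(1/kappa, 1, 2000)
print(np.max(np.abs(q_t(xs) - 1/xs)))       # error on D_kappa, below eps

c = chebinterpolate(q_t, 2*t - 1)           # exact: q_t has degree 2t-1
print(np.sum(np.abs(c)))                    # bounds max |q_t| on [-1,1]
\end{verbatim}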
The state-of-the-art quantum linear systems solvers have a complexity that grows linearly in the condition number $\kappa$. In the small-$\kappa$ regime ($\kappa = O(n)$), it has long been known that $\Omega(\kappa)$ queries to the entries of the matrix are also needed for general linear systems~\cite{HHL09} and recently this bound has (surprisingly) been extended to the case of positive definite systems \cite{orsucci2021solving}. For larger $\kappa$ less is known. For example, we do not know if quantum algorithms can outperform classical algorithms if $\kappa$ is large (i.e., can we beat matrix multiplication time?). We do not even have a linear lower bound: are $\Omega(n^2)$ queries needed when $\kappa = \Omega(n^2)$? In \cite{Dorn2009QueryComplexity} this question was answered positively when one wants to obtain a \emph{classical description} of $\mat A^{-1}\vec b$ and here we present a simplified proof of this result.
\paragraph{Organization.} In \cref{sec:Convex optimization} we recall the gradient descent algorithm and elaborate on its connection to the algorithm of~\cite{CKS17}. In \cref{sec:Chebyshev iteration} we discuss Chebyshev iteration, the optimal\footnote{We will make clear in what sense it is optimal in \cref{sec:Chebyshev iteration}.} iterative method for solving linear systems. We show in \cref{sec:Quantum algorithm} that Chebyshev iteration leads to polynomials that can be efficiently implemented using, for example, the QSVT framework.
Finally, in \cref{sec:Query lower bound}, we give an overview of known lower bounds on the complexity of quantum linear system solvers both in the small $\kappa$ regime and in the large $\kappa$ regime.
\section{Preliminaries}
\subsection{Polynomials and approximations}
\paragraph{Problem definition.} We consider linear systems that are defined by a Hermitian $n$-by-$n$ matrix $\mat A \in \mathbb C^{n \times n}$ and a unit vector $\vec{b} \in \mathbb C^n$. We use $\kappa$ to denote the condition number of $\mat A $, that is, we assume that all non-zero eigenvalues of $\mat A $ lie in the set $D_\kappa := [-1,-1/\kappa] \cup [1/\kappa,1]$. Our goal is to \emph{approximately} solve the linear system
\[
\mat A\vec{x} =\vec{b}.
\]
One can consider different notions of approximate solutions. Two natural ones are the following:
\begin{enumerate}
\item[1)] return $\vec{\tilde x}$ such that $\|\vec{\tilde x} - \mat A^{-1}\vec{b}\| \leq \varepsilon$.
\item[2)] return $\vec{\tilde x}$ such that $\|\mat A \vec{\tilde x} - \vec{b}\| \leq \varepsilon$.
\end{enumerate}
Up to a change in $\varepsilon$, the two notions are equivalent. Indeed, we have the chain of inequalities
\begin{equation}\label{eq:equivalence of QLSP definitions}
\|\mat A\vec{x}-\vec{b}\| \leq \|\vec{x}-\mat A^{-1}\vec{b}\| \leq \kappa \|\mat A\vec{x}-\vec{b}\|.
\end{equation}
where the first inequality uses $\|\mat A\| \leq 1$ and the second uses $\|\mat A^{-1}\| \leq \kappa$.
We will focus on algorithms that achieve a polylogarithmic dependence in $\varepsilon$. In \cref{sec:Chebyshev iteration} we construct the optimal degree-$t$ polynomial for approximation in the second notion, see \cref{def:QLSP}. Prior work \cite{CKS17, Chakraborty2019BE, Gilyen2019SVT} focused on the first notion of approximation, which is equivalent up to polylogarithmic factors in the complexity. In \cref{sec: comparison} we show (numerically) that our polynomials also improve over prior work with respect to approximation in the first notion.
\paragraph{Polynomials.}
Given a polynomial $p(x) = \sum_{t=0}^T c_t x^t$ with coefficients $c_t \in \mathbb C$, and a Hermitian matrix $\mat A $, we define $p(\mat A) = \sum_{t=0}^T c_t \mat A^t$. If we let $\mat A = \sum_{i=1}^n \lambda_i \vec{u}_i \vec{u}_i^*$ be the eigendecomposition of $\mat A $, then $p(\mat A) = \sum_{i=1}^n p(\lambda_i) \vec{u}_i \vec{u}_i^*$.
\paragraph{Chebyshev decomposition.}
It is also useful to consider the \emph{Chebyshev decomposition} of $p(x)$, i.e., the decomposition
\[
p(x) = \sum_{i=0}^t c_i \mathcal{T}_i(x)
\]
in the basis $\{ \mathcal{T}_0(x), \mathcal{T}_1(x), \dots, \mathcal{T}_{t}(x)\}$, for some vector $\vec c = (c_i)_{i \in \{0,\ldots,t\}}$ of coefficients. One can give an analytic expression for the coefficients $c_i$ using the fact that the Chebyshev polynomials are orthogonal with respect to the \emph{Chebyshev measure} which is defined in terms of the Lebesgue measure as $\dif \mu(x) = (1-x^2)^{-1/2}\, \dif x$. In other words, $c_i = \int_{-1}^1 \frac{p(x) \mathcal{T}_i(x)}{\sqrt{1-x^2}}\dif x$. Note that in practice this integral is rarely computed explicitly, as there exist efficient interpolation-based methods for computing the coefficient-vector $\vec{c}$ \cite{Gentleman1972DCT}.
\paragraph{Approximating the inverse.} We focus on methods to obtain a vector $\vec{\tilde x}$ that approximates $\mat A ^{-1}\vec{b}$ that are based on polynomials that approximate the inverse function $\lambda \mapsto \lambda^{-1}$ on the domain $[1/\kappa,1]$ (in the case of positive definite matrices) or $D_\kappa$ (in the general case). For example, let $\mat A$ be a Hermitian matrix with eigenvalues in $[1/\kappa,1]$ and let $p:\mathbb R \to \mathbb R$ be a polynomial such that $|p(\lambda)-\lambda^{-1}|\leq \varepsilon$ for $\lambda \in [1/\kappa,1]$. Then $\vec{\tilde x} := p(\mat A) \vec{b}$ satisfies
\[
\|\vec{\tilde x} - \mat A^{-1}\vec{b}\| = \|\sum_i (p(\lambda_i) -\lambda_i^{-1}) \vec{u}_i \vec{u}_i^* \vec{b}\| \leq \|\sum_i (p(\lambda_i) -\lambda_i^{-1}) \vec{u}_i \vec{u}_i^*\| \|\vec{b}\| \leq \varepsilon \|\vec{b}\|.
\]
\subsection{Quantum preliminaries}
There exist different input models that one might consider when solving the linear system problem.
In the standard case of a dense matrix $\mat A $, one might assume that all entries of $\mat A $ are already stored in memory. Alternatively, if $\mat A $ is sparse, sometimes it is more efficient to consider oracle access to its nonzero entries. In the quantum setting, this \emph{sparse-access} model is particularly amenable to speedups. In the sparse-access model we assume that access to $\mat A $ is provided through two oracles
\[
\mathcal O_\text{nz}: \ket{j, \ell} \mapsto \ket{j, \nu(j, \ell)} \text{ and } \mathcal O_A: \ket{j, k, z} \mapsto \ket{j, k, z \oplus A_{jk}},
\]
where $\nu(j, \ell)$ is the row index of the $\ell$th nonzero entry of the $j$th column. Many quantum algorithms can be phrased naturally in terms of a different input model called the \emph{block-encoding} model \cite{Low2019hamiltonian, Chakraborty2019BE}. (One can efficiently construct a block-encoding, given sparse access.)
\begin{definition}[Block encoding]
Let $\mat A \in \mathbb R^{n \times n}$ be a Hermitian matrix, and let $N\in \mathbb N$ be such that $n = 2^N$, and let $\mu \geq 1$. The $(N + a)$-qubit operator $U_{\mat A}$ is a $(\mu, a)$-block-encoding of $\mat A $ if it satisfies $\mat A = \mu(\bra{0}^{\otimes a}\otimes I) U_{\mat A} (\ket{0}^{\otimes a}\otimes I)$.
\end{definition}
For convenience, if we are not interested in the number of ancillary qubits $a$, we simply call $U_{\mat A}$ a $\mu$-block-encoding.
In what follows, we assume that we have access to $U_{\mat A}$, an (exact\footnote{Constructing exact block-encodings of arbitrary matrices $\mat A $ that are given in the sparse-access input model is a priori not possible with a finite gate set. Instead, one can construct a block-encoding of an approximation $\tilde{\mat A}$, by allowing an overhead in the circuit depth that is proportional to $\log(1/\|\mat A - \tilde{\mat A}\|)$.}) $(1, a)$-block-encoding of $\mat A $. The case of $\mu$-block-encodings with $\mu > 1$ can be reduced to the former by replacing our starting matrix with $\mat A /\mu$, which has eigenvalues in $D_{\mu\kappa}$.
Finally, we assume that we have access to $U_{\vec b}$, a unitary that (exactly) prepares the state $\ket{\vec{b}} = \vec{b}/\norm{\vec{b}}$ on input $\ket{\vec 0}$: $U_{\vec b} \ket{\vec 0} = \ket{\vec b}$.
We define the quantum linear system problem (QLSP) as follows:
\begin{definition}[Quantum linear systems]\label{def:QLSP}
Let $\mat A \in \mathbb R^{n\times n}$ be a Hermitian matrix with eigenvalues in $D_\kappa$, let $\vec{b} \in \mathbb R^n$, and let $\varepsilon > 0$. Given a block-encoding $U_{\mat A}$ of $\mat A $ and a state preparation oracle $U_{\vec b}$, output a state
\[
\ket{\phi} = \alpha \ket{0}\ket{\vec x} + \beta \ket{1}\ket{\psi}
\]
where $\|\ket{\mat A\vec x} - \ket{\vec b}\| \leq \varepsilon$, $\ket{\psi}$ is an arbitrary state, and $\alpha, \beta \in \mathbb C$ are such that $|\alpha|^2 + |\beta|^2 = 1$ and $|\alpha|^2 \geq 2/3$.
\end{definition}
As mentioned before, the widely-used definition from the literature \cites{CKS17, Chakraborty2019BE, Gilyen2019SVT} is equivalent to \cref{def:QLSP} up to a change in $\varepsilon$. In this paper we use \cref{def:QLSP}, as our algorithm is optimal in this sense. In \cref{sec: comparison} we (numerically) show that our algorithm also improves over prior work with respect to the more widely used definition.
Recent approaches for solving the QLS problem are based on applying a block-encoding of $p(\mat A)$ to $\ket{\vec{b}}$. In the next two sections we describe two ways of computing a block-encoding of $p(\mat A)$: through the QSVT framework, or by decomposing $p$ in the Chebyshev basis, computing each term individually, and combining the results using the linear combination of unitaries lemma (the LCU approach).
\subsubsection{QSVT approach} \label{sec: QSVT}
The most straightforward way for evaluating a polynomial quantumly is through the quantum singular value transformation framework \cite{Gilyen2019SVT}. Using QSVT, one can directly evaluate any polynomial $p$ as long as its sup-norm is suitably bounded. Here the sup-norm of $p$ is defined as
\[
\norm{p}_\infty := \max_{x \in [-1,1]} |p(x)|.
\]
This is achieved by performing a series of rotations by angles $\vec{\Phi}=(\phi_1, \dots, \phi_t)$ on a single qubit, which induces a degree-$t$ polynomial transformation of the singular values of $\mat A $. Determining these angles efficiently in a numerically stable way is the subject of ongoing research \cites{Haah2019product, chao2020finding, dong2021efficient}.
Below, we state a version of QSVT suitable for evaluating even and odd polynomials, since this is the case we are most interested in.
\begin{thm}[\cite{Gilyen2019SVT}*{Corollary 18}, for block-encodings]\label{thm:QSVT}
Let $\mat A \in \mathbb R^{n \times n}$ be Hermitian, and let $U_{\mat A}$ be a 1-block-encoding of $\mat A $.
Let $\Pi = (\ketbra{0}{0})^{\otimes a} \otimes I$, and suppose that $p \in \mathbb R[x]$ is a degree-$t$ polynomial of parity-$(t \bmod 2)$ satisfying $\norm{p}_\infty \leq 1$. Then there exists a $\vec \Phi \in \mathbb R^t$ such that
\begin{equation*}
p(\mat A) = \left( \bra{+}\otimes\Pi \right) \left( \ketbra{0}{0} \otimes U_{\vec \Phi} + \ketbra{1}{1} \otimes U_{-\vec \Phi} \right) \left( \ket{+}\otimes \Pi \right),
\end{equation*}
where $U_\Phi$ is defined as the phased alternating sequence
\begin{equation*}
U_{\vec \Phi} := \begin{cases}
e^{\i\phi_1 (2 \Pi - I)}U_{\mat A} \prod_{j=1}^{(t-1)/2} \left( e^{\i\phi_{2j} (2\Pi - I)} U_{\mat A}^\dag e^{\i \phi_{2j+1}(2\Pi -I)}U_{\mat A} \right) & \text{ if } t \text{ is odd, and}\\
\prod_{j=1}^{t/2} \left( e^{\i\phi_{2j-1} (2\Pi - I)} U_{\mat A}^\dag e^{\i \phi_{2j}(2\Pi -I)}U_{\mat A} \right) & \text{ if } t \text{ is even.}
\end{cases}
\end{equation*}
\end{thm}
Note that QSVT is fundamentally limited to evaluating polynomials that are bounded by $1$ in absolute value on $[-1, 1]$ (since the output is a unitary matrix). Approximations $p$ of $x^{-1}$ on $D_\kappa$ are inherently not bounded by $1$ on the interval $[-1,1]$: they are around $\kappa$ for $x=1/\kappa$. The QSVT framework allows us to evaluate $p(x)/M$ on $\mat A $ where $M$ is an upper bound on $\norm{p}_\infty$. This subnormalization reduces, for example, the success probability of a QSVT-based QLS-solver. It is thus important to obtain polynomial approximations $p$ that moreover admit a good bound $M$.
\subsubsection{LCU approach}
An alternative approach is based on the Linear Combinations of Unitaries (LCU) lemma \cite{Berry2015Hamiltonian}. It uses the fact that Chebyshev polynomials have a particularly nice vector of angles, which permits an efficient implementation of the LCU circuit.
\begin{lem}[{\cite[Lem.~9]{Gilyen2019SVT}}] \label{lem: cheb angles}
Let $\vec \Phi \in \mathbb R^t$ be such that $\phi_1 = (1-t)\frac\pi2$ and $\phi_i = \frac\pi2$ for $2 \leq i \leq t$. For this choice of $\vec \Phi$, the polynomial $p$ from \cref{thm:QSVT} is $\mathcal{T}_t$, the $t$-th Chebyshev polynomial of the first kind.
\end{lem}
\paragraph{Computing a single Chebyshev polynomial.}
We consider in more detail the above circuit for computing $\mathcal{T}_{2t+1}(\mat A)$ for a matrix $\mat A $ with a 1-block-encoding $U_{\mat A}$. Let $\Pi = \ketbra{0}{0} \otimes I$ be the same projector as in \cref{thm:QSVT} (we drop the exponent ${\otimes a}$ for convenience, or equivalently, we assume that the block-encoding $U_{\mat A}$ has a single auxiliary qubit). By \cref{lem: cheb angles} the unitary
\[
U_{2t+1} = e^{-\pi \i t (2\Pi - I)}U_{\mat A} \prod_{j=1}^{t} \left( e^{\i \frac\pi2 (2\Pi - I)} U_{\mat A}^\dag e^{\i \frac\pi2(2\Pi -I)}U_{\mat A} \right)
\]
satisfies $(\bra{0}\otimes I) U_{2t+1} (\ket{0}\otimes I) = \mathcal{T}_{2t+1}(\mat A)$.
We first simplify the above. Note that $2\Pi - I$ has eigenvalues $\pm 1$ and therefore $e^{-\pi \i t (2\Pi-I)} = (-1)^t I$ and $e^{\i \frac{\pi}{2}(2\Pi-I)} = \i(2\Pi-I)$. This means that
\begin{align*}
U_{2t+1} &= (-1)^t U_{\mat A} \prod_{j=1}^{t} \left( \i(2\Pi-I) U_{\mat A}^\dag \i(2\Pi-I) U_{\mat A} \right) \\
&= (-1)^t (\i)^{2t} U_{\mat A} \prod_{j=1}^{t} \left( (2\Pi-I)
U_{\mat A}^\dag (2\Pi-I) U_{\mat A} \right) \\
&= U_{\mat A} \prod_{j=1}^{t} \Big(\underbrace{ (2\Pi-I) U_{\mat A}^\dag (2\Pi-I) U_{\mat A} }_{=:W}\Big) = U_{\mat A} W^t.
\end{align*}
In other words, $U_{2t+1}$ can be viewed as $t$ applications of the unitary $W$, followed by a single application of $U_{\mat A}$. The circuit for even Chebyshev polynomials $U_{2t}$ is very similar, and can be obtained from $U_{2t+1}$ by removing the final application of (left multiplication by) $U_{\mat A}$ -- however, since we are ultimately interested in implementing the inverse, an odd function, we do not describe the circuit in more detail.
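As a sanity check, this identity is easy to verify numerically for small matrices. The following Python sketch (illustrative only, not part of the analysis above) uses the standard reflection-type block-encoding of a Hermitian contraction $\mat A$, with blocks $\mat A$ and $\sqrt{I-\mat A^2}$, and checks that the top-left block of $U_{\mat A} W^t$ equals $\mathcal{T}_{2t+1}(\mat A)$:
\begin{verbatim}
# Numerical sanity check: the top-left block of U_A W^t equals
# T_{2t+1}(A) for a reflection-type 1-block-encoding of A.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
A /= 2 * np.linalg.norm(A, 2)                  # Hermitian, ||A|| <= 1/2

S = np.real(sqrtm(np.eye(n) - A @ A))          # sqrt(I - A^2), commutes with A
U = np.block([[A, S], [S, -A]])                # unitary 1-block-encoding of A

R = np.kron(np.diag([1.0, -1.0]), np.eye(n))   # the reflection 2*Pi - I
W = R @ U.conj().T @ R @ U

t = 3
top_left = (U @ np.linalg.matrix_power(W, t))[:n, :n]

evals, evecs = np.linalg.eigh(A)
T_A = evecs @ np.diag(np.cos((2*t + 1) * np.arccos(evals))) @ evecs.T
print(np.allclose(top_left, T_A))              # True
\end{verbatim}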
\paragraph{Computing a linear combination of Chebyshev polynomials.}
Given the above circuit that computes block-encodings of $\mathcal{T}_{2k+1}(\mat A)$ for $k \geq 0$, the next step is to compute a block-encoding of linear combinations of the form
\begin{equation} \label{eq: linear comb}
p(\mat A) = \sum_{i=0}^{t-1} c_{i} \mathcal{T}_{2i+1}(\mat A).
\end{equation}
This can be achieved using a version of the LCU algorithm due to~\cite{CKS17}. In particular, the key to an efficient implementation of the linear combination $\sum_{i=0}^{t-1} c_i U_{2i+1}$ is the efficient implementation of the operator $\sum_{i=0}^{t-1} \ketbra ii \otimes U_{2i+1}$, which we achieve by introducing an $l = (\lceil \log_2t \rceil + 1)$-qubit \emph{counter} register, and successively applying $W, W^2, W^4, \dots, W^{2^{l-1}}$ controlled on qubits $0, 1, \dots, l-1$ of the counter, followed by a single application of $U_{\mat A}$ at the end. In~\cite{CKS17}*{Theorem~4}, this circuit is analyzed for a specific polynomial approximation of the inverse. The analysis naturally extends to arbitrary polynomials of the form~\eqref{eq: linear comb}.
\begin{thm}[based on \cite{CKS17}] \label{thm: LCU + Cheb}
Let $\mat A $ be a Hermitian matrix with eigenvalues in $D_\kappa$, let $U_{\mat A}$ be its block-encoding, and let $U_{\sqrt{\vec{c}}}$ be a unitary that prepares the state $\frac{1}{\sqrt{\norm{\vec{c}}_1}} \sum_{i=0}^{t-1} \sqrt{c_i} \ket{i}$. Then, there exists an algorithm that computes a $\norm{\vec{c}}_1$-block-encoding of $p(\mat A)$ using $t+1$ calls to controlled versions of $U_{\mat A}$ and $U_{\mat A}^\dag$, and a single call to each of $U_{\sqrt{\vec{c}}}$ and $U_{\sqrt{\vec{c}}}^\dag$. This circuit uses a logarithmic number of additional qubits, and has a gate complexity of $O(t\polylog(nt\kappa/\varepsilon))$.
\end{thm}
Compared to the QSVT approach, for this circuit we only need to compute the Chebyshev coefficients $\vec{c}$, as opposed to the vector of angles $\vec{\Phi}$ -- this comes, however, at the cost of using $O(\log t)$ additional qubits. Moreover, the coefficient 1-norm $\norm{\vec{c}}_1$ represents an upper bound for $\norm{p}_\infty$, since
\begin{equation}\label{eq:coeff norm is an upper bound for p on the interval}
|p(x)| = \left| \sum_{i=0}^t c_i \mathcal{T}_i(x) \right| \leq \sum_{i=0}^t |c_i| \cdot |\mathcal{T}_i(x)| \leq \norm{\vec{c}}_1,\; \text{ for } |x| \leq 1.
\end{equation}
A natural question is how tight this bound is for general degree-$t$ polynomials $p$ with $\norm{p}_\infty \leq 1$. By norm conversion (\cref{eq:cheb ineq 1-norm} in particular), the ratio $\norm{\vec{c}}_1 / \norm{p}_\infty$ is provably upper bounded by $O(\sqrt{t})$, but in \cref{app:Good example functions for LCU} we observe that for many ``interesting'' functions the ratio $\norm{\vec{c}}_1 / \norm{p}_\infty$ is in fact only $O(\log(t))$.
A notable exception is the complex exponential $e^{\i \kappa x}$ (and thus $\sin(\kappa x)$ and $\cos(\kappa x)$) for which numerical experiments suggest that it attains the $O(\sqrt{t})$ upper bound.
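This behaviour can be probed directly: by the Jacobi--Anger expansion, $e^{\i \kappa x} = J_0(\kappa)\,\mathcal{T}_0(x) + 2\sum_{j\geq1} \i^j J_j(\kappa)\,\mathcal{T}_j(x)$, where $J_j$ denotes the Bessel function of the first kind, so the coefficient $1$-norm equals $|J_0(\kappa)| + 2\sum_{j\geq1}|J_j(\kappa)|$ while the sup-norm of the function is $1$. A short Python sketch (illustrative; the truncation degree $t \approx 2\kappa$ is a heuristic choice):
\begin{verbatim}
# Coefficient 1-norm of e^{i kappa x} in the Chebyshev basis,
# via the Jacobi-Anger expansion; the sup-norm of the function is 1.
import numpy as np
from scipy.special import jv

for kappa in [10.0, 100.0, 1000.0]:
    t = int(2 * kappa) + 50            # degree ~ kappa captures the function
    j = np.arange(t)
    c = (2 - (j == 0)) * np.abs(jv(j, kappa))
    print(kappa, c.sum(), np.sqrt(t))  # the 1-norm tracks sqrt(t)
\end{verbatim}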
\section{Convex optimization perspective}\label{sec:Convex optimization}
In this section we introduce the convex optimization approach to linear system solving, and reinterpret the CKS polynomial in this framework. Let us first assume that $\mat A$ is positive definite (PD). We start by defining the convex function $f:\mathbb R^n \to \mathbb R$
as
\[
f(\vec{x}) := \frac{\vec{x}^\top \mat A \vec{x}}{2} - \vec{b}^\top \vec{x}.
\]
Note that $\nabla f(\vec{x}) = \mat A\vec{x} - \vec{b}$, so the minimizer of $f$ satisfies $\mat A \vec{x} =\vec{b}$. This observation forms the basis of the convex optimization approach to linear system solving. We refer the reader to, for example,~\cite{Polyak1987Optimization,Vishnoi2013Laplacian} for an overview of gradient descent type algorithms for solving linear systems.
\subsection{Gradient descent}\label{sec:Gradient descent}
One of the most well-known algorithms for minimizing a convex function $f$ is the family of gradient descent algorithms. Starting from an initial point $\vec{x}_1$ (we use 1-based indexing on purpose), such an algorithm performs the iterations
\begin{equation} \label{eq:GD}
\vec{x}_t = \vec{x}_{t-1} - \eta_t \nabla f(\vec{x}_{t-1}), \qquad \text{where }t = 2,3,\ldots
\end{equation}
where $\eta_t \in [0,\infty)$ is the `step size' in the $t$-th iteration. For the most basic version of gradient descent we take a constant step size, i.e., $\eta_t$ is independent of $t$.
For our quadratic function $f$ we can unpack this recurrence. As we have seen before $\nabla f(\vec{x}_{t-1}) = \mat A\vec{x}_{t-1} - \vec{b}$ and hence
\[
\vec{x}_{t} = (I-\eta_t \mat A) \vec{x}_{t-1} + \eta_t \vec{b}.
\]
If we set $\eta_t := 1$ for all $t \in \mathbb N$ and use the initial point $\vec{x}_1 = \vec{b}$ (or even $\vec{x}_0=\vec{0}$, if we allow empty sums), then we obtain
\[
\vec{x}_t = \sum_{k=0}^{t-1} (I-\mat A)^k \vec{b}.
\]
Let us define the polynomial $p_t^+(\lambda) = \sum_{k=0}^{t-1} (1-\lambda)^k$ so that $\vec{x}_t = p_t^+(\mat A)\vec{b}$. Observe that this is the degree-$(t-1)$ Taylor expansion of the function $1/\lambda$ around $1$.
\begin{lem} \label{lem:pt}
We have $|p_t^+(\lambda) - 1/\lambda| \leq \varepsilon$ for all $\lambda \in [1/\kappa,1]$ whenever $t \geq \kappa \log(\kappa/\varepsilon)$.
\end{lem}
\begin{proof}
Indeed, substituting $\delta = 1-\lambda$ we have $(1-\delta) \cdot p_t^+(1-\delta) = (1-\delta) \sum_{k=0}^{t-1} \delta^k = 1-\delta^t$ which shows that
\[
p_t^+(\lambda) = \frac{1-(1-\lambda)^t}\lambda.
\]
Therefore, for $\lambda \in [1/\kappa,1]$ and $t \geq \kappa \log(\kappa/\varepsilon)$ we have
\[
|p_t^+(\lambda) - 1/\lambda| = |1/\lambda|\cdot|1-\lambda|^t \leq \kappa (1-1/\kappa)^t \leq \kappa e^{-\log(\kappa/\varepsilon)} = \varepsilon. \qedhere
\]
\end{proof}
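The statement is also easy to confirm numerically; the following Python sketch (illustrative) evaluates the closed form of $p_t^+$ on a fine grid:
\begin{verbatim}
# Check: |p_t^+(lam) - 1/lam| <= eps on [1/kappa, 1]
# for t = ceil(kappa * log(kappa / eps)).
import numpy as np

kappa, eps = 10.0, 1e-3
t = int(np.ceil(kappa * np.log(kappa / eps)))
lam = np.linspace(1/kappa, 1, 5000)
p = (1 - (1 - lam)**t) / lam
print(t, np.max(np.abs(p - 1/lam)) <= eps)   # 93 True
\end{verbatim}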
\subsection{Chebyshev iteration} \label{sec:Chebyshev iteration}
In the previous section we saw that $t$-step iterative methods are roughly equivalent to degree-$(t-1)$ polynomials that approximate the function $1/x$ on $[1/\kappa, 1]$. Thus, the natural question to ask is what is the best such polynomial $q_t^+$? Here we use the notion of optimality that comes from \cref{def:QLSP}. In other words, what is the degree-$(t-1)$ polynomial $q_t^+$ that minimizes
\begin{equation}\label{eq:sense of optimality}
\max_{x \in [1/\kappa, 1]} |xq_t^+(x) - 1|.
\end{equation}
First observe that all such polynomials can be expressed in the form $q_t^+(x) = \frac{1 - r_t^+(x)}{x}$ where $r_t^+$ is a degree-$t$ polynomial that satisfies $r_t^+(0) = 1$.\footnote{For example, in the case of gradient descent we have $r_t^+(x) = (1-x)^t$.} Thus, our goal is to find a degree-$t$ polynomial $r_t^+(x)$ that has the smallest absolute value on the interval $[1/\kappa, 1]$ and satisfies the normalization constraint $r_t^+(0)=1$. It turns out that we can use extremal properties of the Chebyshev polynomials $\mathcal{T}_t(x)$ to determine an optimal $r_t^+(x)$. We use the following well-known result (cf.~\cite[Prop.~2.4]{Vishnoi2013Approximation}).
\begin{lem} \label{thm: Chebyshev extremality}
For any degree-$t$ polynomial $p(x)$ such that $|p(x)| \leq 1$ for all $x \in [-1, 1]$, and any $y$ such that $|y| > 1$, we have $|p(y)| \leq |\mathcal{T}_t(y)|$.
\end{lem}
Using the affine transformation $x \mapsto \frac{1+1/\kappa-2x}{1-1/\kappa}$ this gives the following corollary:
\begin{cor} \label{cor: cheb functions}
Let $\kappa > 1$ be real, and let $t > 0$ be an integer. Then, the polynomial
\[
r_t^+(x) = \left. \mathcal{T}_t\left(\frac{1+1/\kappa-2x}{1-1/\kappa}\right) \middle/ \mathcal{T}_t\left(\frac{1+1/\kappa}{1-1/\kappa}\right) \right.
\]
is a degree-$t$ polynomial that satisfies $r_t^+(0) = 1$, and minimizes the quantity $\max_{x \in [1/\kappa, 1]} |r_t^+(x)|$ among all such polynomials.
\end{cor}
Note that the polynomials $r_t^+$ satisfy a Chebyshev-like $3$-term recurrence. As a consequence, the polynomials $q_t^+(x) = (1-r_t^+(x))/x$ also satisfy such a recurrence. The corresponding iterative method is known as the Chebyshev iteration.
\begin{rem}[Chebyshev iteration] The polynomial $q_t^+(x)$ satisfies the recurrence
\begin{equation} \label{eq:qt recurrence}
q_{t+1}^+(x) = 2 \frac{\mathcal{T}_t(\gamma)}{\mathcal{T}_{t+1}(\gamma)} \frac{\kappa + 1 - 2\kappa x}{\kappa - 1} q_t^+(x) - \frac{\mathcal{T}_{t-1}(\gamma)}{\mathcal{T}_{t+1}(\gamma)}q_{t-1}^+(x) + \frac{4\kappa}{\kappa - 1}\frac{\mathcal{T}_t(\gamma)}{\mathcal{T}_{t+1}(\gamma)},
\end{equation}
where $\gamma = \frac{1+1/\kappa}{1-1/\kappa}$.
This recurrence corresponds to the iterative method $\vec{x}_0 = \vec{0}$, $\vec{x}_1 = \frac{2\kappa}{\kappa+1}\,\vec{b}$, and
\[
\vec{x}_{t+1} = 2 \frac{\mathcal{T}_t(\gamma)}{\mathcal{T}_{t+1}(\gamma)} \frac{(\kappa + 1)I - 2\kappa \mat A}{\kappa - 1} \vec{x}_t - \frac{\mathcal{T}_{t-1}(\gamma)}{\mathcal{T}_{t+1}(\gamma)}\vec{x}_{t-1} + \frac{4\kappa}{\kappa - 1}\frac{\mathcal{T}_t(\gamma)}{\mathcal{T}_{t+1}(\gamma)}\vec{b}.
\]
\]
\end{rem}
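The following Python sketch (illustrative) checks the recurrence, with the sign conventions above, against the closed-form expression for $q_t^+$:
\begin{verbatim}
# Verify the 3-term recurrence for q_t^+ against its closed form.
import numpy as np
from numpy.polynomial.chebyshev import chebval

kappa = 5.0
gamma = (kappa + 1) / (kappa - 1)               # = s(0)
T = lambda t, y: chebval(y, [0]*t + [1])        # Chebyshev T_t(y)

x = np.linspace(1/kappa, 1, 9)
s = (kappa + 1 - 2*kappa*x) / (kappa - 1)
q_prev = np.zeros_like(x)                       # q_0^+ = 0
q_cur = np.full_like(x, 2*kappa/(kappa + 1))    # q_1^+ = 2*kappa/(kappa+1)
for t in range(1, 10):
    a, b, c = T(t, gamma), T(t+1, gamma), T(t-1, gamma)
    q_next = 2*(a/b)*s*q_cur - (c/b)*q_prev + 4*kappa/(kappa-1)*(a/b)
    q_prev, q_cur = q_cur, q_next
closed = (1 - T(10, s) / T(10, gamma)) / x      # q_10^+ in closed form
print(np.allclose(q_cur, closed))               # True
\end{verbatim}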
The convergence rate of this method is summarized by the following theorem:
\begin{thm} \label{thm: cheb approx pos}
Let $\kappa > 1$ and $\epsilon > 0$. Then, for $t \geq \frac12 \sqrt{\kappa} \log(2\kappa / \epsilon)$ we have
\[
\left\lvert q_t^+(x) - 1/x \right \rvert \leq \epsilon \text{ for all } x \in [1/\kappa, 1].
\]
\end{thm}
\begin{proof}
First, we define $s(x) = \frac{1+1/\kappa-2x}{1-1/\kappa}$, so that we have $r_t^+(x) = \mathcal{T}_t(s(x))/\mathcal{T}_t(s(0))$. Thus, for all $x \in [1/\kappa, 1]$, we have
\[
|q_t^+(x) - 1/x| = |r_t^+(x) / x| \leq \kappa |r_t^+(x)| = \kappa \left\lvert \mathcal{T}_t(s(x))/\mathcal{T}_t(s(0)) \right \rvert.
\]
Additionally, since $|s(x)| \leq 1$ on this interval, we also have $|\mathcal{T}_t(s(x))|\leq 1$. Thus, it suffices to find $t$ for which $\mathcal{T}_t(s(0)) = \mathcal{T}_t(1+ \frac{2}{\kappa - 1}) \geq \frac{\kappa}{\epsilon}$. Since the Chebyshev polynomial $\mathcal{T}_t(\cdot)$ can be computed as
\begin{equation}\label{eq:explicit expression for Tn for big x}
\mathcal{T}_t(x) = \frac12 \left( \left( x - \sqrt{x^2 - 1} \right)^t + \left( x + \sqrt{x^2 - 1} \right)^t \right) \text{ for } |x| \geq 1,
\end{equation}
we can conclude that $\mathcal{T}_t(s(0)) = \frac12 \left( \left(\frac{\sqrt\kappa - 1}{\sqrt\kappa+1}\right)^t + \left(\frac{\sqrt\kappa+1}{\sqrt\kappa-1}\right)^t \right) \geq \frac12 \left(\frac{\sqrt\kappa+1}{\sqrt\kappa-1}\right)^t$. Using the inequality $(1+\frac{x}{n})^{n+x/2} \geq e^x$ for $x,n \geq 0$, after substituting $t = \frac12 \sqrt{\kappa} \log(2\kappa / \epsilon)$ we have
\begin{align*}
\mathcal{T}_t(s(0)) &\geq \frac12\left(\frac{\sqrt\kappa+1}{\sqrt\kappa-1}\right)^t = \frac12\left( 1 + \frac{2}{\sqrt{\kappa} - 1} \right)^{(\sqrt{\kappa}-1 + 2/2)\frac{\log(2\kappa/\epsilon)}{2}} \\
&\geq \frac12 \exp(\log(2\kappa/\epsilon)) = \frac{\kappa}{\epsilon}. \qedhere
\end{align*}
\end{proof}
\subsection{The general case}
We now return to the setting where $\mat A$ is a Hermitian matrix and has eigenvalues in the domain $D_\kappa = [-1,-1/\kappa] \cup [1/\kappa,1]$. One can still solve such systems using gradient descent methods by reducing to the convex case, that is, by considering the equivalent linear system $\mat A ^2 \vec{x} = \mat A \vec{b}$ and the corresponding convex function $f(\vec{x}) = \frac12 \vec{x}^\top \mat A^2 \vec{x} - \vec{b}^\top \mat A \vec{x}$. In particular, this allows us to solve $\mat A\vec x = \vec b$ by using a method for solving PD systems applied to the system $\mat A ^2 \vec{x} = \mat A \vec{b}$.
\begin{cor}\label{cor:reducing general linear systems to psd}
Let $\varepsilon > 0$, $\kappa > 1$, and let $P_t$ be any degree-$(t-1)$ polynomial such that $|P_t(\lambda) - 1/\lambda| \leq \varepsilon$ for all $\lambda \in [1/\kappa^2, 1]$. Then, $|P_t(\mu^2)\mu - 1/\mu| \leq \epsilon$ for all $\mu \in D_\kappa$.
\end{cor}
\begin{proof}
Since $|\mu| \leq 1$ on $D_\kappa$, we have
$|P_t(\mu^2)\mu - 1/\mu| = |\mu|\cdot|P_t(\mu^2) - 1/\mu^2| \leq \varepsilon$.
\end{proof}
We define the following two polynomials as the respective analogs of $p_t^+$ and $q_t^+$ for $D_\kappa$:
\begin{align}
p_t(x) &= x p_t^+(x^2) = \frac{1 - (1-x^2)^t}{x}, \text{ and}\\
q_t(x) &= x q_t^+(x^2) = \frac{1 - \mathcal{T}_t(\frac{1+1/\kappa^2 - 2x^2}{1-1/\kappa^2}) / \mathcal{T}_t(\frac{1+1/\kappa^2}{1-1/\kappa^2})}{x}. \label{eq: def qt}
\end{align}
Both $p_t$ and $q_t$ are degree-$(2t-1)$ polynomials, but different values of $t$ are required in order to achieve an $\varepsilon$-approximation of $1/x$ on $D_\kappa$. In particular, the following degrees are required:
\begin{cor}\label{cor:degrees of pt and qt}
Let $\kappa > 1$ and $\varepsilon > 0$. Then,
\begin{enumerate}
\item $|p_t(x) - 1/x| \leq \varepsilon$ for all $x \in D_\kappa$ whenever $t \geq \kappa^2 \log(\kappa^2 / \varepsilon)$,
\item $|q_t(x) - 1/x| \leq \varepsilon$ for all $x \in D_\kappa$ whenever $t \geq \frac12 \kappa \log(2\kappa^2 / \varepsilon)$.
\end{enumerate}
\end{cor}
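Both bounds are straightforward to check numerically. The sketch below (illustrative) verifies the second item; since $q_t$ is odd, it suffices to check the error on $[1/\kappa, 1]$:
\begin{verbatim}
# Check: |q_t(x) - 1/x| <= eps on D_kappa
# for t = ceil((kappa/2) * log(2 kappa^2 / eps)).
import numpy as np
from numpy.polynomial.chebyshev import chebval

T = lambda t, y: chebval(y, [0]*t + [1])
kappa, eps = 10.0, 1e-4
t = int(np.ceil(0.5 * kappa * np.log(2 * kappa**2 / eps)))
x = np.linspace(1/kappa, 1, 5000)
s = (1 + 1/kappa**2 - 2*x**2) / (1 - 1/kappa**2)
s0 = (1 + 1/kappa**2) / (1 - 1/kappa**2)
qt = (1 - T(t, s) / T(t, s0)) / x
print(t, np.max(np.abs(qt - 1/x)) <= eps)    # 73 True
\end{verbatim}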
\begin{lem}\label{cor:chebyshev iteration is optimal for general matrices}
Let $t \in \mathbb N$ and $\kappa > 1$. The polynomial $q_t$ is a degree-$(2t-1)$ polynomial that minimizes the quantity $\max_{x \in D_\kappa} |x P(x) - 1|$ among all degree-$(2t-1)$ polynomials $P \in \mathbb R[x]$.
\end{lem}
\begin{proof}
For a given $t$, we define
\[
\varepsilon^+ := \min_{\substack{P^+ \in \mathbb R[y]\\\operatorname{deg} P^+ = t-1}}\max_{y \in [1/\kappa^2, 1]} |y P^+(y) - 1|, \qquad \varepsilon := \min_{\substack{P \in \mathbb R[x]\\\operatorname{deg} P = 2t-1}}\max_{x \in D_\kappa} |x P(x) - 1|.
\]
We first show that $q_t$ certifies that $\varepsilon \leq \varepsilon^+$, and then we show $\varepsilon=\varepsilon^+$. From \cref{sec:Chebyshev iteration}, we know that $\varepsilon^+$ is achieved by the degree-$(t-1)$ polynomial $q_t^+(x) := \frac{1 - \mathcal{T}_t(s(x)) / \mathcal{T}_t(s(0))}{x}$, where $s(x) := \frac{1+1/\kappa^2 - 2x}{1-1/\kappa^2}$. Then, for $q_t(x) := \frac{1 - \mathcal{T}_t(s(x^2)) / \mathcal{T}_t(s(0))}{x}$ we have
\[
\max_{x \in D_\kappa} |x q_t(x) - 1| = \max_{x \in D_{\kappa}} |x^2 q_t^+(x^2) - 1| = \max_{y \in [1/\kappa^2,1]} |y q_t^+(y) - 1| = \varepsilon^+,
\]
where in the first equality we use \cref{eq: def qt}. We now show that $\varepsilon = \varepsilon^+$. Let $P(x)$ be a degree-$(2t-1)$ polynomial that satisfies $\max_{x \in D_\kappa} |xP(x)-1| = \varepsilon$. We first show that $P$ is odd. To do this, decompose $P$ as $P(x) = P_{\mathrm{even}}(x) + P_{\mathrm{odd}}(x)$ where $P_{\mathrm{even}}$ is even and $P_{\mathrm{odd}}$ is odd. Then
\begin{align*}
\max_{x \in D_\kappa} |x P(x) - 1| &= \max_{x \in [1/\kappa, 1]} \max\{ |xP(x) - 1|, |-xP(-x) - 1| \} \\
&= \max_{x \in [1/\kappa, 1]} \max\{ |xP_\mathrm{odd}(x) + xP_\mathrm{even}(x) - 1|, |xP_\mathrm{odd}(x) - xP_\mathrm{even}(x) - 1| \} \\
&\geq \max_{x \in [1/\kappa, 1]} |xP_\mathrm{odd}(x) - 1| = \max_{x \in D_\kappa} |xP_\mathrm{odd}(x) - 1|.
\end{align*}
Hence replacing $P$ by $P_\mathrm{odd}$ does not increase the objective, so we may assume that $P(x)$ is odd. Then $P(x)/x$ is a degree-$(2t-2)$ even polynomial. Let $P^+(y)$ be the degree-$(t-1)$ polynomial for which $P(x)/x = P^+(x^2)$. Then we have
\[
\max_{y \in [1/\kappa^2,1]} |yP^+(y) - 1| = \max_{x \in [1/\kappa,1]} |x^2 P^+(x^2) - 1| = \max_{x \in D_\kappa} |x P(x) - 1| = \varepsilon.
\]
This shows that $\varepsilon^+ \leq \varepsilon$ which concludes the proof: $q_t$ is the degree-$(2t-1)$ polynomial that minimizes $\max_{x \in D_\kappa} |xP(x)-1|$ over polynomials of degree $2t-1$.
\end{proof}
\subsection{Relation to the Chebyshev approach of Childs-Kothari-Somma} \label{sec: comparison}
In~\cite{CKS17}, Childs, Kothari, and Somma approached the quantum linear system problem by approximating the function $1/x$ on the domain $D_\kappa$ by (low-degree) polynomials. To start, they approximate $1/x$ by a function that is bounded near the origin: they multiply $1/x$ by a function that is small at the origin and close to 1 on $D_\kappa$. A natural choice for such a function is $1 - (1-x^2)^t$, so the function they end up with turns out to be exactly $p_t(x)$, the polynomial corresponding to $t$ steps of gradient descent applied to the quadratic $\frac12 \vec{x}^\top \mat A^2 \vec{x} - \vec{b}^\top \mat A \vec{x}$! So indeed, this is a good approximation of $1/x$ whenever $t \geq \kappa^2 \log(\kappa^2/\varepsilon)$.
The polynomial $p_t$ can be written in the Chebyshev basis as follows:
\begin{equation}\label{eq:CKS polynomial expanded}
p_t(x) = 4 \sum_{j=0}^{t-1} (-1)^j \left(\frac{\sum_{i=j+1}^t \binom{2t}{t+i}}{2^{2t}}\right) \mathcal{T}_{2j+1}(x).
\end{equation}
The key insight of~\cite{CKS17} is that this expansion can be truncated at $\tilde O(\kappa)$ terms, since the Chebyshev coefficients decay exponentially. This can be shown by relating the absolute value of the $j$-th coefficient (for $j=0,1,\dots$) to the probability of more than $t+j$ heads appearing in $2t$ tosses of a fair coin. This probability decreases as $e^{-j^2/t}$ which can be seen by applying the Chernoff bound. Thus, starting from $p_t$, an $\varepsilon$-approximation of the inverse, we obtain an $\varepsilon$-approximation of $p_t$ by truncating the summation at $j = \sqrt{t \log(4t / \varepsilon)} = \tilde O(\kappa)$ -- so, for these parameters, the CKS polynomial is a $2\varepsilon$-approximation of the inverse on $D_\kappa$.
Although this truncated polynomial is asymptotically optimal, it is not an optimum of \eqref{eq:sense of optimality}. Hence, the Chebyshev iteration polynomial provides a better approximation for a fixed degree, or conversely requires a lower degree to reach the same error on $D_\kappa$. In \cref{fig:degree comparison table}, we use \cref{cor:degrees of pt and qt} to compute the degree required to achieve error $\varepsilon$ on $D_\kappa$, and observe that the degree of the CKS polynomial is roughly twice the degree of the corresponding Chebyshev iteration polynomial.
\begin{table}[h]
\centering
\subfloat[][CKS polynomial]{
\begin{tabular}{c|c|c|c|c}
\diagbox{$\kappa$}{$\varepsilon$} & 0.5 & $10^{-2}$ & $10^{-4}$ & $10^{-6}$ \\\hline
2 & 15 & 33 & 53 & 71 \\ \hline
10 & 115 & 203 & 301 & 399 \\ \hline
100 & 1819 & 2687 & 3669 & 4633 \\ \hline
1000 & 24913 & 33515 & 43337 & 52989
\end{tabular}
}
\subfloat[][Chebyshev iteration]{
\begin{tabular}{c|c|c|c|c}
\diagbox{$\kappa$}{$\varepsilon$} & 0.5 & $10^{-2}$ & $10^{-4}$ & $10^{-6}$ \\\hline
2 & 7 & 15 & 25 & 33 \\ \hline
10 & 61 & 101 & 147 & 193 \\ \hline
100 & 1061 & 1453 & 1913 & 2373 \\ \hline
1000 & 15203 & 19115 & 23721 & 28327
\end{tabular}
}
\caption{Degrees of approximation polynomials for a given condition number $\kappa$ and error $\varepsilon$, computed according to \cref{cor:degrees of pt and qt}.}
\label{fig:degree comparison table}
\end{table}
\begin{figure}[h]
\centering
\begingroup
\pgfplotsset{every axis/.style={scale=0.4}}
\begin{subfigure}[h]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{error_kappa=16_degree=var.tikz}
\caption{$\kappa = 16$}
\end{subfigure}~
\begin{subfigure}[h]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{error_kappa=32_degree=var.tikz}
\caption{$\kappa = 32$}
\end{subfigure}~
\begin{subfigure}[h]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{error_kappa=64_degree=var.tikz}
\caption{$\kappa = 64$}
\end{subfigure}
\endgroup
\caption{Error comparison between Chebyshev iteration and the truncated gradient descent polynomial, for a fixed condition number $\kappa$ and varying degrees.}
\label{fig:error wrt degree}
\end{figure}
\begin{figure}[h]
\centering
\begingroup
\pgfplotsset{every axis/.style={scale=0.4}}
\begin{subfigure}[h]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{error_kappa=var_degree=127.tikz}
\caption{degree $=127$}
\end{subfigure}~
\begin{subfigure}[h]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{error_kappa=var_degree=255.tikz}
\caption{degree $=255$}
\end{subfigure}~
\begin{subfigure}[h]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{error_kappa=var_degree=511.tikz}
\caption{degree $=511$}
\end{subfigure}
\endgroup
\caption{Error comparison between Chebyshev iteration and the truncated gradient descent polynomial, for a fixed degree and varying condition numbers.}
\label{fig:error wrt kappa}
\end{figure}
In Figures \ref{fig:error wrt degree} and \ref{fig:error wrt kappa} we compute the actual errors achieved by the two polynomials, for a given degree and condition number. In particular, in \cref{fig:error wrt degree} we see that for a fixed condition number, the convergence is linear for both polynomials, with a faster rate of convergence in the case of Chebyshev iteration (so, for the same degree, the difference in errors is a few orders of magnitude). Further numerical experiments indicate that the ratio of the convergence rates (i.e. the slopes of the lines in \cref{fig:error wrt degree}) is roughly 2, independently of the choice of $\kappa$. Conversely, in \cref{fig:error wrt kappa} we see that with circuits of fixed depth, the error of Chebyshev iteration is an order of magnitude lower, no matter the condition number (in the figure we only consider polynomials that achieve an error $\varepsilon \leq 1$).
\section{A quantum algorithm}\label{sec:Quantum algorithm}
As mentioned before, our algorithm can be described as applying the polynomial $q_t$ to a 1-block-encoding of the input matrix $\mat A $. This yields an $O(t)$-block-encoding of $q_t(\mat A)$, which can then be applied to the input state $\ket{\vec{b}}$. Formally, we show the following.
\begin{thm}[Main result]\label{thm:main}
Let $\mat A $ be a Hermitian matrix with eigenvalues in $D_\kappa$, let $U_{\mat A}$ be a 1-block-encoding of $\mat A $, and let $\varepsilon > 0$. Then, for $t \geq \frac12 \kappa \log(2\kappa^2 / \varepsilon)$, a $2(1+\varepsilon/\kappa^2)t$-block-encoding of $q_t(\mat A)$ can be constructed using $2t-1$ calls to $U_{\mat A}$ and $U_{\mat A}^\dag$.
\end{thm}
\begin{proof}
The algorithm consists of applying QSVT (\cref{thm:QSVT}) to the polynomial $q_t(x)/\norm{q_t}_\infty$. This allows us to construct a $\norm{q_t}_\infty$-block-encoding of $q_t(\mat A)$ with the desired complexity. It remains to upper bound $\norm{q_t}_\infty$ by $2(1+\varepsilon/\kappa^2)t$. Motivated by \cref{eq:coeff norm is an upper bound for p on the interval}, it suffices to upper bound the $1$-norm of the vector $\vec c$ of coefficients of $q_t$ in the Chebyshev basis (again by $2(1+\varepsilon/\kappa^2)t$). In \cref{cor:bound for ct norm} we show that $\|\vec c_t\|_1 \leq 2(1+\varepsilon/\kappa^2)t$.
\end{proof}
The block-encoding of $q_t(\mat A)$ can now be used as a black-box replacement for the block-encoding of the corresponding CKS polynomial evaluated at $\mat A $. For example, using variable-time amplitude amplification, an $\widetilde{O}(\kappa)$-query (to $U_{\mat A}$) complexity QLS algorithm can be derived. We refer the reader to \cite{CKS17,Gilyen2019SVT,martyn2021grand} for an overview of these techniques.
As an alternative approach, one could use the fact that $\|\vec c_t\|_1$ is bounded in order to evaluate $q_t$ via LCU (\cref{thm: LCU + Cheb}). At the cost of using $O(\log t)$ additional qubits, an LCU-based approach would yield a more ``natural'' quantum algorithm, that does away with the classical angle computation preprocessing step required by QSVT -- computing these angles efficiently in a numerically stable way is the subject of ongoing research \cite{chao2020finding, dong2021efficient, Haah2019product}. Moreover, in \cref{app:Good example functions for LCU} we consider some other commonly-used functions, and bound their coefficient norms using similar techniques. For these functions, the coefficient norm is only a logarithmic factor away from the maximum absolute value on the interval $[-1, 1]$, meaning that they can be approximately evaluated with LCU in addition to QSVT, with slightly deeper circuits (multiplicative logarithmic overhead) and slightly more qubits (additive logarithmic overhead).
\subsection{Bounding the Chebyshev coefficients}\label{sec:Bounding the coeffs}
As discussed above, in order to apply (a normalized version of) $q_t$ to a block-encoding of a Hermitian matrix with eigenvalues in $D_\kappa$, we need a bound on the sup-norm of $q_t$ on the interval~$[-1,1]$. In order to derive such a bound, we express $q_t$ in the basis of Chebyshev polynomials. Each of the Chebyshev polynomials has sup-norm equal to $1$ and therefore a bound on the $1$-norm of the coefficient vector provides a bound on the sup-norm of $q_t$. Recall that since $q_t$ is an odd polynomial, its expansion in the Chebyshev basis only involves the odd-degree Chebyshev polynomials. That is, we can write
\begin{equation} \label{eq:def ct}
q_t(x) = \sum_{i=0}^{t-1} c_{t,i} \mathcal{T}_{2i+1}(x)
\end{equation}
for some vector $\vec c_t = (c_{t,i})_{i \in \{0,\ldots,t-1\}}$ of coefficients. One can give an analytic expression for $c_{t,i}$ using the fact that the Chebyshev polynomials are orthogonal with respect to the \emph{Chebyshev measure}.
Here we take a different approach and use the following discrete orthogonality relations. Fix a degree $m \in \mathbb N$ and let $\{x_1,\ldots,x_{m}\}$ be the roots of $\mathcal{T}_m(x)$. The $x_k$'s are called the \emph{Chebyshev nodes} and they admit an analytic formula:
\begin{equation}
x_k = \cos\left(\frac{(k-\tfrac12)\pi}{m}\right) \qquad \text{for } k=1,\ldots,m.
\end{equation}
The discrete orthogonality relation that we will use is the following. For $0 \leq i,j < m$, we have
\begin{align}\label{eq:chebyshev discrete orthogonality}
\sum_{k=1}^m \mathcal{T}_i(x_k) \mathcal{T}_j(x_k) = \begin{cases}
m & \text{ if } i=j=0, \\
\frac{m}{2} & \text{ if } i=j<m, \\
0 & \text{ if } i \neq j.
\end{cases}
\end{align}
Since $q_t$ is a polynomial of degree $2t-1$, we will use the discrete orthogonality conditions corresponding to $m = 2t$ to recover the coefficient of $\mathcal{T}_{2i+1}$ in $q_t$. We have
\begin{equation}
c_{t,i} = \frac{1}{t} \sum_{k=1}^{2t} q_t(x_k) \mathcal{T}_{2i+1}(x_k)
\end{equation}
for all $i \in \{0,1,\ldots,t-1\}$. We can equivalently write this in matrix form, $\vec{c}_t = \frac1t \bm{\mathcal{T}}_t \vec{q}_t$, where
\[
\bm{\mathcal{T}}_t = \begin{bmatrix}
\mathcal{T}_1(x_1) & \mathcal{T}_1(x_2) & \dots & \mathcal{T}_1(x_{2t}) \\
\mathcal{T}_3(x_1) & \mathcal{T}_3(x_2) & \dots & \mathcal{T}_3(x_{2t}) \\
\vdots & \vdots & \ddots & \vdots \\
\mathcal{T}_{2t-1}(x_1) & \mathcal{T}_{2t-1}(x_2) & \dots & \mathcal{T}_{2t-1}(x_{2t})
\end{bmatrix} \text{ and } \vec{q}_t = \begin{bmatrix}
q_t(x_1) \\
q_t(x_2) \\
\vdots \\
q_t(x_{2t})
\end{bmatrix}.
\]
Our goal is to show that $\|\vec c_t\|_1 \leq C\cdot t$ for a small constant $C$. To do so, we first use the Cauchy-Schwarz inequality to obtain
\begin{equation} \label{eq:cheb ineq 1-norm}
\|\vec c_t\|_1 \leq \sqrt{t} \|\vec c_t\|_2 = \frac{1}{\sqrt{t}} \|\bm{\mathcal{T}}_t \vec q_t\|_2 \leq \frac{\|\bm{\mathcal{T}}_t\|}{\sqrt{t}} \|\vec q_t\|_2 = \|\vec q_t\|_2,
\end{equation}
where the last equality follows from the discrete orthogonality relations \cref{eq:chebyshev discrete orthogonality}: we see that $\bm{\mathcal{T}}_t \bm{\mathcal{T}}_t^* = t I_t$ and therefore $\|\bm{\mathcal{T}}_t\| = \sqrt{t}$. We are thus left to bound $\|\vec q_t\|_2$.
\begin{lem}
We have $\|\vec{q}_t\|_2 \leq 2(1+\frac{1}{\mathcal{T}_t(s(0))}) t$ for all $t \in \mathbb N$. In particular, for $t \geq \frac12 \kappa \log(2\kappa^2 / \varepsilon)$ we have $\|\vec q_t\|_2 \leq 2(1+\varepsilon/\kappa^2)t$.
\end{lem}
\begin{proof}
We start by bounding $\abs{q_t(x)}$ on $[-1, 1]$, and we recall that
\[
q_t(x) = \frac{1-\mathcal{T}_t(s(x))/\mathcal{T}_t(s(0))}{x}, \quad\text{where}\quad s(x) = \frac{1+1/\kappa^2 - 2x^2}{1-1/\kappa^2}.
\]
On one hand, when $x \in D_\kappa$ we have $s(x) \in [-1,1]$ and thus $\abs{1-\mathcal{T}_t(s(x))/\mathcal{T}_t(s(0))} \leq 1 + 1/\mathcal{T}_t(s(0))$. On the other hand, when $\abs{x} \leq 1/\kappa$ we have $1 \leq s(x) \leq s(0) = \frac{1+1/\kappa^2}{1-1/\kappa^2}$.
Since $\mathcal{T}_t(x)$ is increasing for $x \geq 1$, it follows that $0 \leq 1-\mathcal{T}_t(s(x))/\mathcal{T}_t(s(0)) \leq 1$ for all $\abs{x} \leq 1/\kappa$. Together this shows that
\[
\abs{q_t(x)} = \abs{\frac{1-\mathcal{T}_t(s(x))/\mathcal{T}_t(s(0))}{x}} \leq \frac{1 + 1/\mathcal{T}_t(s(0))}{\abs{x}} \qquad \text{for all } x \in [-1,1] \setminus \{0\}.
\]
We now bound the norm of $\vec q_t$. We have
\begin{align*}
\norm{\vec{q}_t}^2 &= \sum_{k=1}^{2t} q_t(x_k)^2 \leq \left(1 + \frac{1}{\mathcal{T}_t(s(0))}\right)^2 \sum_{k=1}^{2t} \frac{1}{x_k^2} = \left(1 + \frac{1}{\mathcal{T}_t(s(0))}\right)^2 \sum_{k=1}^{2t} \frac{1}{\cos^2 \left( \frac{2k-1}{4t}\pi \right)},
\end{align*}
where we substituted the exact expression for the Chebyshev nodes $x_k = \cos \left( \frac{2k-1}{4t}\pi \right)$. Moreover, we have
\[
\cos^2\left( \frac{2(2t-k+1)-1}{4t}\pi \right) = \cos^2 \left( \frac{2k-1}{4t}\pi \right) = \frac{1+\cos\left(\frac{2k-1}{2t}\pi\right)}{2} \quad\text{for all}\quad 1 \leq k \leq t,
\]
where the first equality comes from $x_{2t - k + 1} = -x_k$. Since the values $\cos(\frac{2k-1}{2t}\pi)$ for $1 \leq k \leq t$ are exactly the roots of $\mathcal{T}_t$, a set that is symmetric under negation, we also have $\sum_{k=1}^t \frac{1}{1+\cos(\frac{2k-1}{2t}\pi)} = \sum_{k=1}^t \frac{1}{1-\cos(\frac{2k-1}{2t}\pi)}$. Therefore, we have
\[
\norm{\vec{q}_t}^2 \leq 4 \left(1 + \frac{1}{\mathcal{T}_t(s(0))}\right)^2 \sum_{k=1}^t \frac{1}{1-\cos(\frac{2k-1}{2t}\pi)}.
\]
We note that the roots of $\mathcal{T}_t(x)$ are exactly $\cos(\frac{2k-1}{2t}\pi)$. For any polynomial $P(x)=C\prod_{k=1}^t (x-r_k)$ the following identity holds for all $x$ for which $P(x) \neq 0$:
\[
\sum_{k=1}^t \frac{1}{x-r_k} = \frac{P'(x)}{P(x)}.
\]
Applying the above to $P(x)=\mathcal{T}_t(x)$ and $x=1$ (which is not a root of $\mathcal{T}_t$), we get
\[
\sum_{k=1}^t \frac{1}{1-\cos(\frac{2k-1}{2t}\pi)} = \frac{\mathcal{T}_t'(1)}{\mathcal{T}_t(1)} = \frac{t \cdot \mathcal U_{t-1}(1)}{1} = t^2.
\]
This concludes the main part of the proof: we have shown that $\|\vec q_t\| \leq 2 (1+\frac{1}{\mathcal{T}_t(s(0))}) t$.
Finally, for $t \geq \frac12 \kappa \log(2\kappa^2 / \varepsilon)$, we bound $1/\mathcal{T}_t(s(0))$ as in the proof of \cref{thm: cheb approx pos}. Namely, using the same inequalities, we have
\[
\mathcal{T}_t(s(0)) \geq \frac12 \left( \frac{\kappa + 1}{\kappa - 1} \right)^t \geq \frac12 \left( 1 + \frac{2}{\kappa - 1} \right)^{\frac12 \kappa \log(2\kappa^2 / \varepsilon)} \geq \frac{\kappa^2}{\varepsilon}. \qedhere
\]
\end{proof}
Combining this lemma with \cref{eq:cheb ineq 1-norm}, we derive the same bound for $\norm{\vec{c}_t}_1$:
\begin{cor}\label{cor:bound for ct norm}
For all $t \in \mathbb N$, $\norm{\vec{c}_t}_1 \leq 2(1+\frac{1}{\mathcal{T}_t(s(0))}) t$. In particular, for $t \geq \frac12 \kappa \log(2\kappa^2 / \varepsilon)$ we have $\|\vec c_t\|_1 \leq 2(1+\varepsilon/\kappa^2)t$.
\end{cor}
\subsection{Efficiently computing the coefficients}
In the case of evaluating $q_t$ via LCU, one question of practical relevance is how to compute the coefficients $\vec{c}_t$. Naively using the recurrence \eqref{eq:qt recurrence} to compute $\vec{c}_t$ gives rise to an algorithm with $O(t^2)$ arithmetic operations with real numbers. Alternatively, one can use FFT-based Chebyshev interpolation algorithms that can compute $\vec{c}_t$ with $O(t \log t)$ operations given the vector $\vec{q}_t$ of the values of $q_t(x)$ at the $2t$ Chebyshev nodes (the roots of $\mathcal{T}_{2t}$)~\cite{Gentleman1972DCT}. Thus, in order to get an $O(t \log t)$-operation algorithm for computing $\vec{c}_t$, it suffices to show that $q_t(x)$ can be evaluated at a single Chebyshev node $x_k$ with $O(\log t)$ operations. Given the form of $q_t$, this means that we need to compute $\mathcal{T}_t(s(x_k))$ with $O(\log t)$ operations. One way to do this is via the degree-halving identities
\[
\mathcal{T}_{2t}(x) = 2\mathcal{T}_t(x)^2 - 1 \quad\text{and}\quad \mathcal{T}_{2t+1}(x) = 2\mathcal{T}_{t+1}(x)\mathcal{T}_t(x) - x.
\]
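A minimal Python sketch of this pipeline (illustrative only; it uses SciPy's type-II discrete cosine transform for the interpolation step, and the values of $\kappa$ and $t$ are arbitrary):
\begin{verbatim}
# Compute the Chebyshev coefficients of q_t in O(t log t) operations:
# evaluate q_t at the 2t Chebyshev nodes via degree-halving, then
# interpolate with a DCT.
import numpy as np
from scipy.fft import dct

def cheb_pair(t, y):
    # returns (T_t(y), T_{t+1}(y)) via the degree-halving identities
    if t == 0:
        return np.ones_like(y), y
    a, b = cheb_pair(t // 2, y)                  # (T_h, T_{h+1})
    if t % 2 == 0:                               # t = 2h
        return 2*a*a - 1, 2*a*b - y
    else:                                        # t = 2h + 1
        return 2*a*b - y, 2*b*b - 1

kappa, t = 8.0, 40
m = 2 * t
x = np.cos((np.arange(m) + 0.5) * np.pi / m)     # roots of T_{2t}
s = (1 + 1/kappa**2 - 2*x**2) / (1 - 1/kappa**2)
s0 = (1 + 1/kappa**2) / (1 - 1/kappa**2)
q = (1 - cheb_pair(t, s)[0] / cheb_pair(t, np.array([s0]))[0][0]) / x

c = dct(q, type=2) / m                           # coefficients c_0..c_{2t-1}
c[0] /= 2
# only odd-degree coefficients are (numerically) nonzero, since q_t is odd
print(np.abs(c[1::2]).sum(), 2 * t)              # 1-norm vs. the ~2t bound
\end{verbatim}
Here \texttt{cheb\_pair} returns the pair $(\mathcal{T}_h(y), \mathcal{T}_{h+1}(y))$ at each level, so that a single halving step handles both parities; the printed $1$-norm can be compared against the bound of \cref{cor:bound for ct norm}.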
\subsection{A more natural quantum algorithm?}
Given the reduction of the general linear system problem to the PD case (\cref{cor:reducing general linear systems to psd}), one might be tempted to mirror this reduction when designing a quantum algorithm, with the goal of achieving $\tilde O(\sqrt{\kappa})$ complexity for solving PD systems. The input of such an algorithm would be a (block-encoding of a) Hermitian matrix $\mat A $ with eigenvalues in $[1/\kappa, 1]$, and the output would be a block-encoding of $q_t^+(\mat A)$. To evaluate this polynomial using QSVT, we first need to normalize it by dividing it by $\max_{x\in [-1, 1]} |q_t^+(x)|$. It turns out that this maximum grows exponentially with $t$: one can lower bound it by $|q_t^+(-1)|$ and we have
\begin{align*}
|q_t^+(-1)| &\geq \frac{\mathcal{T}_t\left(\frac{1+1/\kappa+2}{1-1/\kappa}\right)}{\mathcal{T}_t\left(\frac{1+1/\kappa}{1-1/\kappa}\right)} - 1 = \frac{\mathcal{T}_t( 3+4/(\kappa - 1) )}{\mathcal{T}_t(1 + 2/(\kappa - 1))} - 1 \\
&\geq \frac{\mathcal{T}_t(3)}{\mathcal{T}_t(2)} - 1 \geq \frac12 \left(\frac{3 + 2\sqrt{2}}{2+\sqrt{3}}\right)^t - 1 \geq \frac12 \left(\frac32\right)^t - 1
\end{align*}
for $\kappa \geq 3$, where the second inequality uses that $\mathcal{T}_t$ is increasing on $[1,\infty)$.
Therefore, amplifying the output of QSVT would take exponential time. In the case of LCU, the coefficient 1-norm is lower bounded by $|q_t^+(-1)|$ (by \cref{eq:coeff norm is an upper bound for p on the interval}), so the output of a LCU-based algorithm would also need to be amplified exponentially. Alternative approaches of multiplying $q_t^+(x)$ by a rectangle function that is close to $1$ on $[1/\kappa, 1]$ and close to $0$ elsewhere are similarly fruitless as the degree of the resulting approximation polynomial would become linear in $\kappa$. It should be noted, however, that these issues can be avoided if we assume that the mapping $x \mapsto \frac{1+1/\kappa - 2x}{1-1/\kappa}$ has already been performed ``ahead of time'': in \cite{orsucci2021solving}, Orsucci and Dunjko have shown that PD matrices can indeed be inverted in $\widetilde{O}(\sqrt{\kappa})$, provided that a block-encoding of $I - \alpha \mat A$ is given as input (for suitable $\alpha$).
Another natural alternative approach would be to quantize a method such as momentum gradient descent, which also converges in $\widetilde{O}(\sqrt{\kappa})$ for PD matrices \cite{Polyak1987Optimization}. One way to achieve this would be using the approach of Kerenidis and Prakash \cite{KP20Gradient}, who quantized the basic gradient descent algorithm by implementing the recurrence $\vec{r}_{t+1} = (I - \eta \mat A)\vec{r}_t$ satisfied by the differences $\vec{r}_t := \vec{x}_t - \vec{x}_{t-1}$ of successive iterates. Applying this idea to momentum gradient descent, one gets a recurrence involving two successive differences:
\[
\begin{bmatrix}
\vec{r}_{t+1} \\
\vec{r}_t
\end{bmatrix} = \underbrace{\begin{bmatrix}
(1 + \beta) I - \eta \mat A & -\beta I \\
I & 0
\end{bmatrix}}_{\mat M} \begin{bmatrix}
\vec{r}_t \\
\vec{r}_{t-1}
\end{bmatrix},
\]
for suitable choices of $\eta$ and $\beta$. For example, following \cite[Chapter 3]{Polyak1987Optimization}, one can set $\eta = 4/(1+\sqrt{1/\kappa})^2$ and $\beta = \left( 1-2/(1+\sqrt{\kappa}) \right)^2$.
Implementing a similar approach as in \cite{KP20Gradient} would require the construction of $O(1)$-block-encodings of powers of $\mat M$. In particular, this would require $\mat M$ to have a small norm. Unfortunately, for large enough $\kappa \geq 9$ and the above choice of $\eta,\beta$, one has $\|\mat M\| \geq \sqrt{2}$ which means that a block-encoding of $\mat M^t$ needs to have sub-normalization at least $2^{t/2}$.
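This norm claim can be checked numerically: since the blocks of $\mat M$ are polynomials in $\mat A$, the norm $\|\mat M\|$ is the maximum, over the eigenvalues $a \in [1/\kappa, 1]$ of $\mat A$, of the norm of the corresponding $2\times2$ matrix. A sketch (illustrative; the value of $\kappa$ is arbitrary):
\begin{verbatim}
# Check ||M|| >= sqrt(2) for the momentum (heavy-ball) iteration matrix.
import numpy as np

kappa = 25.0
eta = 4 / (1 + np.sqrt(1/kappa))**2
beta = (1 - 2/(1 + np.sqrt(kappa)))**2
norms = [np.linalg.norm(np.array([[(1 + beta) - eta*a, -beta],
                                  [1.0, 0.0]]), 2)
         for a in np.linspace(1/kappa, 1, 500)]
print(max(norms) >= np.sqrt(2))              # True
\end{verbatim}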
\section{Query lower bounds}\label{sec:Query lower bound}
So far, we have been considering algorithms (i.e. upper bounds) for the QLS problem. The complexity of the best algorithm for the QLS problem
depends linearly on $\kappa$ (we ignore the polylogarithmic factors in this section), so a natural question is whether this dependence is optimal. In \cite{HHL09} it has been shown that this is indeed the case: in the sparse access input model (the setting in which such lower bounds are usually proven), the complexity of QLS for general systems is $\Omega(\min(\kappa,n))$. Recently, it has been shown \cite{orsucci2021solving} that the same $\Omega(\min(\kappa,n))$ lower bound even holds for the restriction of QLS to PD matrices -- this is surprising since in the classical setting a $\sqrt{\kappa}$-separation exists between the general and the PD case. We note that both of these lower bounds apply when the output of the QLS solver is the quantum state $\ket{\mat A^{-1} \vec{b}}$. As a consequence, one can show that computing a classical description of $\mat A ^{-1}\vec{b}$ is just as hard.
Both of the above results apply to the small-$\kappa$ regime. In particular, they leave open the possibility of a $o(n^\omega)$-time quantum algorithm for solving linear systems (with classical output). The existence of such an algorithm would speed up many classical optimization algorithms (e.g., interior point methods) in a black-box way.
In \cite{Dorn2009QueryComplexity} it was shown that one cannot obtain a large quantum speedup when the output is required to be classical: $\Omega(n^2)$ quantum queries to the entries of $\mat A $ are needed to obtain a classical description of a single coordinate of $\mat A ^{-1} e_n$, where $e_n$ is the $n$-th standard basis vector in $\mathbb R^n$. The statement is robust in the following sense: after normalizing $\mat A ^{-1}e_n$, it suffices to obtain a $\delta$-additive approximation of the first coordinate for some $\delta = O(1/n^2)$. We present a simplified proof of this result of \cite{Dorn2009QueryComplexity} at the end of this section.
Note that this high precision prevents one from lifting the bound to the quantum-output setting: to obtain a $\delta$-additive approximation of a single coordinate of $\ket{\mat A^{-1}\vec b}$ one can use roughly $1/\delta$ rounds of amplitude estimation on a QLS-solver~$\mathcal A$. With $\delta=O(1/n^2)$ this only implies that $n^2 \cdot \text{cost}(\mathcal A) = \Omega(n^2)$. A second type of quantum lower bound is described in \cite{Gilyen2019SVT}: roughly speaking, if a (smooth) function $f:I \to [-1,1]$ has a derivative whose absolute value is at least $d$ somewhere on $I$, then $\Omega(d)$ uses of a $1$-block-encoding $U_{\mat A}$ of $\mat A$ are needed to create a block-encoding of $f(\mat A)$. Here $I$ is a subset of $[-1,1]$ that contains the eigenvalues of the Hermitian matrix $\mat A$. Applied to $f(x) = 1/(\kappa x)$, this shows that indeed $\Omega(\kappa)$ applications of $U_{\mat A}$ are needed to create a block-encoding of $\mat A^{-1}$. As mentioned before, a block-encoding of $\mat A^{-1}$ can be combined with a state preparation oracle for $\vec b$ to solve the QLS problem. Such a strategy however naturally incurs a $\kappa$-dependence in the runtime, and it remains an interesting open question whether one could solve the QLS problem (with quantum output!) without such a dependence on $\kappa$ and in time $o(n^\omega)$.
\subsection{Lower bound for matrix inversion with classical output} \label{app: Lower bound}
We present a simplified proof of a matrix-inversion lower bound result of~\cite{Dorn2009QueryComplexity}. It is based on the quantum query complexity of the majority function $\mathrm{MAJ}_n:\{0,1\}^n \to \{0,1\}$, which takes value $1$ on input $\vec x$ if and only if $\sum_{i \in [n]} x_i > n/2$. It is well known that the quantum query complexity of $\mathrm{MAJ}_n$ is $\Theta(n)$~\cite{Beals01PolynomialMethod}.
\begin{lem}\label{lem:Matrix power lower bound}
Let $\mat X \in \{0, 1\}^{n\times n}$. Then, the matrix $\mat A \in \{0, 1\}^{(2n+2) \times (2n+2)}$ defined as
\[
\mat A = \left\lbrack\begin{array}{@{}c|c@{}}
\begin{matrix}
0 & 1_n^* \\
1_n & 0
\end{matrix}
& \begin{matrix}
0 & 0 \\
\mat X & 0
\end{matrix} \\ \hline
\begin{matrix}
0 & \mat X^* \\
0 & 0
\end{matrix} &
\begin{matrix}
0 & 1_n \\
1_n^* & 0
\end{matrix}
\end{array}\right\rbrack
\]
satisfies $(\mat A^3)_{1, 2n+2} = \sum_{i=1}^n \sum_{j=1}^n X_{i,j}$.
\end{lem}
\begin{proof}
$\mat A $ is the adjacency matrix of an undirected graph that can be described as follows. We start with a bipartite graph between two sets of $n$ vertices whose edge set is described by $\mat X$, then we add two vertices labeled $1$ and $2n+2$ that we connect respectively to the first set of vertices and the second set of vertices.
The entry $(1, 2n+2)$ of $\mat A ^3$ counts the number of paths of length $3$ from $1$ to $2n+2$ in this graph. This equals the number of edges between the sets $\{2, \dots, n+1\}$ and $\{n+2, \dots, 2n+1\}$, that is, $(\mat A^3)_{1,2n+2} = \sum_{i=1}^n \sum_{j=1}^n X_{i,j}$.
\end{proof}
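The counting identity can be verified numerically; the sketch below (illustrative, with $0$-based indexing) builds the adjacency matrix of the graph described in the proof and checks the claim for a random $\mat X$:
\begin{verbatim}
# Check: (A^3)_{1, 2n+2} equals the number of X-edges
# (indices are 0-based here, so the entry is A^3[0, 2n+1]).
import numpy as np

rng = np.random.default_rng(2)
n = 5
X = rng.integers(0, 2, (n, n)).astype(float)
A = np.zeros((2*n + 2, 2*n + 2))
A[0, 1:n+1] = A[1:n+1, 0] = 1                   # vertex 1 <-> first set
A[1:n+1, n+1:2*n+1] = X                         # bipartite edges from X
A[n+1:2*n+1, 1:n+1] = X.T
A[n+1:2*n+1, 2*n+1] = A[2*n+1, n+1:2*n+1] = 1   # second set <-> vertex 2n+2
print(np.linalg.matrix_power(A, 3)[0, 2*n+1] == X.sum())   # True
\end{verbatim}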
\begin{cor}
Let $\mat A \in \{0,1\}^{n \times n}$. Determining a single off-diagonal entry of $\mat A ^{3}$, with success probability $\geq 2/3$, takes $\Theta(n^2)$ quantum queries to $\mat A $.
\end{cor}
\begin{lem}
Let $\mat A \in \{0, 1\}^{n \times n}$. Then, for $N = 4n$, the matrix $\mat B \in \{ 0,1 \}^{N \times N}$ defined by
\[
\mat B = \begin{bmatrix}
I & \mat A && \\
& I & \mat A & \\
&& I & \mat A \\
&&& I
\end{bmatrix},
\]
satisfies $(\mat B^{-1})_{1,N} = -(\mat A^3)_{1,n}$.
\end{lem}
\begin{proof}
It is straightforward to verify that the inverse of $\mat B$ is
\[
\mat B^{-1} = \begin{bmatrix}
I & -\mat A & \mat A^2 & -\mat A^3 \\
& I & -\mat A & \mat A^2 \\
& & I & -\mat A \\
& & & I \\
\end{bmatrix}. \qedhere
\]
\end{proof}
If $\mat A $ is the adjacency matrix of the directed version of the graph described in Lemma~\ref{lem:Matrix power lower bound} (so that now $N = 4(2n+2)$), we can also compute the norm of the last column as follows:
\[
\norm{\mat B^{-1} \vec{e}_{N}}^2 = \left( \sum_{i,j} X_{i,j} \right)^2 + \sum_i \left(\sum_j X_{i,j} \right)^2 + n+1.
\]
In particular, for the hard instances (where $|n^2/2 - \sum_{i,j} X_{i,j}| \leq 1$), we have that $\norm{\mat B^{-1} \vec{e}_{N}} = \Theta(n^2)$.
\begin{cor}
Let $\mat A \in \{0,1\}^{n \times n}$. Determining a single off-diagonal entry of $\mat A ^{-1}$ up to precision $<1/2$, with success probability $\geq 2/3$, takes $\Theta(n^2)$ quantum queries to $\mat A $.
\end{cor}
\bibliographystyle{plain}
\section{Introduction}
Systems with a finite chemical potential for fermions are a challenging and timely subject. The main reason for this interest is that in many physical systems one has to take into account a fermionic density, as in heavy-ion collisions, neutron stars, and condensed matter theories, among others. For a review, see, {\sl e. g.}, Refs. \cite{Rajagopal:2000wf, Boyanovsky:2001pt}.
In particular, nuclear matter can be treated within a very well-established theory, based on first principles and a nonperturbative approach, called lattice QCD (LQCD). The calculations in LQCD are numerical, usually via Monte Carlo simulations. However, at finite chemical potential LQCD seems to break down due to the sign problem, meaning that the action of the theory becomes complex \cite{Goy:2016egl, Troyer:2004ge}.
Some proposals have been made to overcome this difficulty. For instance, in Refs. \cite{Alford:1998sd, Giudice:2004se}, a complex chemical potential was used. In Refs. \cite{Barbour:1997ej, Fodor:2001au}, the authors have used reweighting approaches; in Ref. \cite{Blum:1995cb}, nonrelativistic expansions were used; and in Ref. \cite{Roberge:1986mm}, the authors have dealt with the reconstruction of the partition function. Very recently, in Ref. \cite{Borsanyi:2020fev},
the authors proposed a solution to this problem and provided extremely accurate results for the QCD transition, extrapolating from imaginary chemical potential up to real baryonic potential $\mu_{B}=300$ MeV.
There is an alternative approach to nonperturbative QCD, or even LQCD, based on the AdS/CFT correspondence \cite{Maldacena:1997re, Witten:1998zw}. Presented in 1997, this correspondence, generically referred to as holography, relates a strong coupling theory, without gravity, in a four-dimensional space to a weak coupling theory, including gravity, in a curved higher-dimensional space. The theoretical framework to deal with nuclear matter in the presence of a finite chemical potential within the AdS/CFT correspondence was put forward in many important works. See for instance, Refs. \cite{Lee:2009bya, Jo:2009xr, Colangelo:2010pe, Colangelo:2011sr} and, more recently, Refs. \cite{Bohra:2019ebj, Ghoroku:2020fkv, Evans:2020whc, Cao:2020ske, He:2020fdi, Zhou:2020ssi, Cao:2020ryx, Ballon-Bayona:2020xls, Mamani:2020pks, Braga:2019xwl, Braga:2020myi}.
Although real QCD is a $(3+1)$-dimensional gauge theory, numerical calculations in such a background (in the presence of a magnetic field) are extremely hard and reliable only for low values of the magnetic field, as can be seen in Refs. \cite{Dudal:2015wfn, Mamo:2015dea, Li:2016gfn, Evans:2016jzo, Chelabi:2015cwn, Chelabi:2015gpc, Bohra:2020qom} in the holographic context as well as in the nonholographic approach \cite{Bali:2011qj, Endrodi:2015oba}.
Here, in this work, our focus is to study the finite density effects on chiral symmetry breaking in the presence of a background constant magnetic field $B$ in $ 2+1 $ dimensions based on holographic studies done at zero density in Refs.
\cite{Rodrigues:2017iqi,Rodrigues:2017cha,Rodrigues:2018pep, Rodrigues:2018chh}.
Our choice of a dimensional reduction comes from the fact that, in $2+1$ dimensions, our model is computationally much more tractable than real QCD, even when considering both nonzero chemical potentials and magnetic fields. This approach is very useful since we can learn from this model and try to extrapolate the lessons to real QCD. Some previous nonholographic works in 2+1 dimensions with magnetic fields can be seen, for instance, in \cite{Klimenko:1990rh, Klimenko:1991he, Gusynin:1994re, Shovkovy:2012zn, Miransky:2015ava, Miransky:2002rp}.
This work is organized as follows. In section \ref{holomodel} we describe our holographic model. In subsection \ref{geo} we detail the background geometry, which is an AdS$_4$-Reissner-Nordstrom black hole in the presence of a background magnetic field, and present all quantities relevant for our further calculations. In subsection \ref{effec} we present the holographic description of our effective action for a complex scalar field interacting with a non-Abelian gauge field. Such a complex scalar field will be related to the chiral condensate at the boundary theory. In section \ref{results} we present our numerical results, where we observe inverse magnetic catalysis (IMC) at finite density as well as the decrease of the chiral condensate with increasing density. In section \ref{conclusions} we present our conclusions.
\section{Holographic Model}\label{holomodel}
\subsection{Background geometry} \label{geo}
In this section we establish the holographic description of our background geometry. We consider the Einstein-Maxwell action on $AdS_4$ (for more details see \cite{Rodrigues:2017cha,Rodrigues:2017iqi} and references therein):
\begin{equation} \label{AdS4Action}
S = \dfrac{1}{2\kappa^2_4}\int d^{4}x \sqrt{-g}\left(-\dfrac{6}{L^2} - L^{2}F_{MN}F^{MN}\right),
\end{equation}
where $ \kappa^2_4 $ is the four-dimensional gravitational coupling constant (related to the 4D Newton's constant $G_4$ through $2\kappa^2_4\equiv16\pi G_4 $), $x^{M} = (t,z,x,y) $, with $ z $ being the holographic coordinate, and $F_{MN} = \partial_{M} A_N - \partial_{N} A_M$ is the field strength for the $U(1)$ gauge field $ A_{M} $. Throughout the text we will work in units such that $2\kappa^2_4 = L = 1$.
The field equations from Eq. \eqref{AdS4Action} are
\begin{eqnarray}
R_{MN} &=& 2\left(F_{M}^{P}F_{NP} - \dfrac{1}{4}g_{MN}F^2\right) - {3}g_{MN}, \label{FieldEquations}\\
\nabla_{M}F^{MN} &=& 0 \label{BianchiIdentity},
\end{eqnarray}
where $R_{MN}$ is the Ricci tensor and $g_{MN}$ is the metric tensor. We want to include a nonzero chemical potential $\mu$ and a constant magnetic field $B$; it is worthwhile to mention that, throughout this work, the magnetic field $B$ we consider is always a background field.
The ansatz we are going to consider is the dyonic AdS/Reissner-Nordstrom black hole solution \cite{Hartnoll:2007ai,Hartnoll:2009sz}, with both electric and magnetic charge, given by:
\begin{eqnarray}
ds^2 &=& \dfrac{1}{z^2}\left(-f(z)dt^2 + \dfrac{dz^2}{f(z)} + dx^2 + dy^2\right), \label{AnsatzMetrica} \\
f(z) &=& 1-(1+\mu^2\,z_h^2+B^2\,z_h^4)\left(\dfrac{z}{z_h}\right)^3+(\mu^2\,z_h^2+B^2\,z_h^4)\left(\dfrac{z}{z_h}\right)^4 \label{horfunction},\\
A &=& \mu\,\left(1-\dfrac{z}{z_h}\right) \,dt + \dfrac{B}{2}(x\,dy-y\,dx). \label{ansatzA}
\end{eqnarray}
The temperature of the black hole solution can be obtained through the Hawking formula,
\begin{equation}\label{eq-T-1}
T = -\dfrac{f'(z_h)}{4\pi},
\end{equation}
and is given by
\begin{equation}\label{T}
T(z_h,\mu,B) = \dfrac{1}{4\pi\,z_{h}}\left(3 - B^2\,z_h^4 - \mu^2\,z_h^2\right).
\end{equation}
Note that, for fixed $T, \mu, B$, this equation has four roots in $z_h$, but only one of them is real and positive, hence physically acceptable (note that by taking Eq.~\eqref{eq-T-1}, we have chosen the outer horizon). So, from now on, we consider only the branch $z_h >0$.
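In practice, for fixed $(T,\mu,B)$ one obtains $z_h$ by solving the quartic equation $B^2 z_h^4 + \mu^2 z_h^2 + 4\pi T\, z_h - 3 = 0$ that follows from Eq. \eqref{T} and keeping the positive real root (unique by Descartes' rule of signs). A minimal Python sketch (illustrative; the parameter values are arbitrary):
\begin{verbatim}
# Horizon position z_h from the Hawking temperature: keep the
# real positive root of B^2 z^4 + mu^2 z^2 + 4 pi T z - 3 = 0.
import numpy as np

def horizon(T, mu, B):
    roots = np.roots([B**2, 0.0, mu**2, 4*np.pi*T, -3.0])
    zh = [r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0]
    assert len(zh) == 1
    return zh[0]

print(horizon(T=0.3, mu=0.5, B=1.0))
\end{verbatim}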
\subsubsection{IR near-horizon geometry: Emergence of AdS$_2$$\times\mathbb{R}^2$}
Here, we briefly discuss an important feature of the near-horizon ($z\to z_h$) geometry of extremal ($T=0$) charged black holes in asymptotically AdS spacetime, which is the emergence of the AdS$_2 \times\mathbb{R}^2$ space \cite{Edalati:2009bi,Faulkner:2009wj,Sachdev:2019bjn}. An extremal black hole is characterized by the fact that its temperature $ T $, Eq. \eqref{T}, vanishes, which happens when $\mu^2 z_h^2 + B^2 z_h^4 = 3$. In this case the horizon function, Eq. \eqref{horfunction}, becomes
\begin{equation}
f(z)\Big|_{T=0} = 1-4\,\left(\frac{z}{z_h}\right)^3 + 3\,\left(\frac{z}{z_h}\right)^4,
\end{equation}
where it has a double zero at the horizon and can be Taylor expanded as
\begin{equation}
\setlength{\jot}{10pt}
\begin{aligned}
f(z)&\approx \frac{6}{z_h^2}\,(z-z_h)^2.
\end{aligned}
\end{equation}
Now, defining a dimensionless coordinate $ w $ through the rescaling $w := z/z_h$ and changing variables according to
$ w = 1+z_h\,\eta$, we find that the geometry in the near-horizon region ($z\to z_h$) becomes
\begin{equation}
ds^2 \approx \left(- \frac{\eta^2}{L_{\mathrm{eff}}^2}\,dt^2 + \frac{L_{\mathrm{eff}}^2}{\eta^2}\,d\eta^2 \right) + \frac{1}{z_h^2}\,\left(dx^2+dy^2 \right),
\end{equation}
which is the AdS$_2 \times\mathbb{R}^2$, with AdS$_2$ curvature radius $L_{\mathrm{eff}}\equiv 1/\sqrt{6}$, in units of $L=1$. Thus, in the near-horizon regime (IR), the AdS$_2 \times\mathbb{R}^2$ space controls the low-energy physics of the dual gauge theory on the boundary.
Therefore, it seems that supergravity on AdS$_4$ flows in the IR to a gravity theory on AdS$_2$, which, in turn, is dual to a ($0+1$)-dimensional effective conformal quantum theory, referred to as an IR CFT$_1$. For a more extensive discussion on this topic, we refer the reader to Refs. \cite{Faulkner:2009wj,Sachdev:2019bjn}.
\subsection{Effective action for chiral symmetry breaking} \label{effec}
The effective action we consider to describe the chiral symmetry breaking is given by (we refer the reader to Refs. \cite{Rodrigues:2018chh, Rodrigues:2018pep} and references therein)
\begin{equation}\label{ChiralAction}
S = \dfrac{1}{2\kappa^2_4}\int d^{3}x \, dz \sqrt{-g}\,e^{-\Phi(z)}\mathrm{Tr}\left(D_{M}X^{\dagger}\,D^{M}X - V(X) - G^2 \right),
\end{equation}
where $ X $ is a complex scalar field with mass squared $ M_4^2 = -2 $ dual to the chiral condensate $ \sigma\equiv\left\langle \bar{\psi}\psi\right\rangle $ in three spacetime dimensions, whose conformal dimension is $ \Delta=2 $. $ D_{M} $ is the covariant derivative defined as $ D_{M} \equiv \partial_{M} + i\mathcal{A}_{M} $, with $ \mathcal{A}_{M} $ being a non-Abelian gauge field, and its field strength $ G_{MN} $ is defined as $ G_{MN} \equiv \partial_{M}\mathcal{A}_{N} - \partial_{N}\mathcal{A}_{M} - i[\mathcal{A}_{M},\mathcal{A}_{N}] $. $ V(X)$ is the potential for the complex scalar field $X$ given by $ V(X) = -2X^{2} + \lambda X^{4}$, where $ \lambda $ is the quartic coupling, which we will fix as $ \lambda = 1 $ from now on. This coupling allows the spontaneous and explicit chiral symmetry breakings to occur independently, as pointed out in Ref. \cite{Gherghetta:2009ac}. One should note that the magnetic field enters only as a background field on the AdS$_{4}$/RN geometry, and its contribution is encapsulated in the determinant of the spacetime metric $g$ in the effective action; it does not couple directly to the complex scalar field or the non-Abelian gauge field (there is no backreaction of the probe B-sensitive quark matter degrees of freedom).
Concerning the dilaton profile $ \Phi(z) $ appearing in Eq. \eqref{ChiralAction} we will consider \cite{Chelabi:2015cwn,Chelabi:2015gpc,Li:2016gfn}
\begin{equation}\label{dilatonprofilez}
\Phi(z) = - \phi_{0}z^2 + (\phi_{0} + \phi_{\infty})z^2\tanh(\phi_2\,z^2),
\end{equation}
having three parameters, which captures both IR and UV behaviors. It interpolates between the positive quadratic dilaton profile in the IR, $ \Phi(z\to\infty) = \phi_{\infty}\,z^2 $, $\phi_{\infty} > 0$, and the negative quadratic dilaton profile in the UV, $ \Phi(z\to0) = - \phi_{0}\,z^2 $, with $\phi_{0}>0$.
This dilaton field plays the role of a soft IR cutoff promoting the breaking of the conformal invariance.
Note that in Ref. \cite{Karch:2006pv} the authors proposed the soft wall model with a quadratic dilaton profile $\Phi(z)=a z^2$ with a positive constant $a$, reproducing the spectrum of vector mesons with linear Regge trajectories. In Ref. \cite{Karch:2010eg} the authors discuss the sign of the dilaton in soft wall models and claim that the constant $a$ should be positive; otherwise there will be unphysical massless vector mesons. We should emphasize that in our case there will be no such unphysical mode, since our dilaton in the IR regime is positive, i.e., $\Phi(z\to\infty) = \phi_{\infty}\,z^2$, with $\phi_{\infty}>0$.
On the other side, it was shown in Refs. \cite{Chelabi:2015cwn,Chelabi:2015gpc,Ballon-Bayona:2020qpq} for the positive sign of the quadratic dilaton $\Phi(z)=k z^2$, with positive $k$, that spontaneous chiral symmetry breaking cannot be reproduced. In our case, with the dilaton profile Eq.\eqref{dilatonprofilez}, we do not have this problem since it has a positive sign on the IR and a negative sign on the UV.
Assuming that the expectation value of the complex scalar field $X$ takes a diagonal form $ \left\langle X \right\rangle = \frac 12 \, \chi(z)\, I_2 $ for the $ SU(2) $ case \cite{Dudal:2015wfn,Li:2016gfn}, where $ I_2 $ is the $ 2\times2 $ identity matrix, the field equations for $\chi(z)$, derived from \eqref{ChiralAction}, are given by
\begin{equation}\label{ChiralFieldEquations2}
\chi''(z) + \left(- \frac{2}{z} - \Phi'(z) + \frac{f'(z)}{f(z)}\right) \chi'(z) - \dfrac{1}{z^2 f(z)}\partial_{\chi}V(\chi) = 0,
\end{equation}
where $ ' $ means derivative with respect to $ z $, $f(z)$ is given by \eqref{horfunction}, and the potential becomes
$ V(\chi)\equiv\mathrm{Tr}\,V(X) = -\chi^2 + \chi^4$.
The boundary conditions used to solve \eqref{ChiralFieldEquations2} are \cite{Rodrigues:2018chh,Rodrigues:2018pep}:
\begin{eqnarray}
\chi(z) &=& m_{f}\,z +\sigma z^2+O(z^3),\quad z\to 0, \\
\chi(z) &=& c_0 +\frac{c_0\,(4\,c_0^2 -2)}{z_h \left(B^2\, z_h^4+\mu^2\, z_h^2-3\right)}(z-z_h)+O\left((z-z_h)^2\right),\quad z\to z_h,
\end{eqnarray}
where $m_f$ is the source (fermion mass), and $\sigma$ is the chiral condensate. Moreover, $c_0$ is a coefficient obtained by evaluating Eq. \eqref{ChiralFieldEquations2} as a series expansion around the horizon. Since we want to study spontaneous symmetry breaking, most of the results in this work will be derived with the source turned off, i.e., $m_f=0$. On the UV and IR sides we are then left with one undetermined coefficient each, $\sigma$ and $c_0$ respectively. For given trial values of these two coefficients, one integrates Eq. \eqref{ChiralFieldEquations2} from both sides to obtain the solutions $\chi_{\text{UV}}$ and $\chi_{\text{IR}}$. Requiring $\chi_{\text{UV}}=\chi_{\text{IR}}$ and $\chi^\prime_{\text{UV}}=\chi^\prime_{\text{IR}}$ at an intermediate point then yields two equations that determine $\sigma$ and $c_0$; a numerical sketch of this procedure is given below.
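As a rough numerical illustration of this matching, the Python sketch below integrates Eq. \eqref{ChiralFieldEquations2} from the horizon towards the boundary and scans the IR coefficient $c_0$ for the zero of the source $m_f$ (at the root, $\sigma$ follows from the near-boundary expansion). All numerical values are illustrative; moreover, since Eq. \eqref{horfunction} is not repeated here, we assume the standard AdS$_4$/RN blackening factor $f(z)=1-(1+Q)(z/z_h)^3+Q\,(z/z_h)^4$ with $Q=\mu^2 z_h^2+B^2 z_h^4$, which reproduces the horizon coefficient of the IR expansion quoted above:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

zh, mu, B = 1.0, 0.5, 0.0                  # horizon radius, mu, magnetic field
phi_inf, phi0, phi2 = 1.0, 4.675, 0.0375   # dilaton parameters (phi_inf = 1)
Q = mu**2 * zh**2 + B**2 * zh**4

def f(z):       # assumed blackening factor, f(zh) = 0
    return 1 - (1 + Q) * (z / zh)**3 + Q * (z / zh)**4

def df(z):
    return -3 * (1 + Q) * z**2 / zh**3 + 4 * Q * z**3 / zh**4

def dPhi(z):    # Phi'(z) for the profile of Eq. (dilatonprofilez)
    t = np.tanh(phi2 * z**2)
    return -2*phi0*z + (phi0 + phi_inf) * (2*z*t + 2*phi2*z**3*(1 - t*t))

def rhs(z, y):
    chi, chip = y
    dV = -2*chi + 4*chi**3
    return [chip, (2/z + dPhi(z) - df(z)/f(z)) * chip + dV/(z**2 * f(z))]

def mf(c0, eps=1e-4, z_uv=1e-3):
    # start slightly off the horizon using the regular IR expansion
    chip_h = c0*(4*c0**2 - 2) / (zh * (B**2*zh**4 + mu**2*zh**2 - 3))
    sol = solve_ivp(rhs, [zh - eps, z_uv], [c0 - eps*chip_h, chip_h],
                    rtol=1e-10, atol=1e-12)
    chi, chip = sol.y[0, -1], sol.y[1, -1]
    # chi ~ mf z + sigma z^2 near z = 0  =>  mf = 2 chi/z - chi'
    return 2*chi/z_uv - chip

for c0 in (0.1, 0.3, 0.5):   # scan; a root finder (e.g. scipy.optimize.brentq)
    print(c0, mf(c0))        # then locates the chiral-limit solution mf = 0
\end{verbatim}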
In the next section, we will present our results concerning the chiral symmetry breaking at finite density and magnetic field, as well as the phase diagram in the $\mu-T$ plane. For convenience, we will express the dilaton parameters, as well as all our results, in units of the mass scale $\sqrt{\phi_{\infty}}$. To make this explicit, note that one can define a dimensionless variable by rescaling the $z$ coordinate as $u:=\sqrt{\phi_{\infty}}\,z$, so that the dilaton profile \eqref{dilatonprofilez} takes the form
\begin{equation}
\Phi(u) = -\tilde{\phi_{0}}\,u^2+(1+\tilde{\phi_{0}})\,u^2\,\tanh(\tilde{\phi_2}\,u^2),
\end{equation}
where $\tilde{\phi_{0}}:=\frac{\phi_{0}}{\phi_{\infty}}$ and $\tilde{\phi_{2}}:=\frac{\phi_{2}}{\phi_{\infty}}$ are the dimensionless parameters.
Furthermore, one can check that the tachyon equation \eqref{ChiralFieldEquations2} can also be put in dimensionless form by expressing all the dimensional quantities, for instance $(z_h,\mu, B)$, in units of the appropriate powers of $\sqrt{\phi_{\infty}}$. In this way, we have a two-parameter dilaton profile controlled by the dimensionless parameters $(\tilde{\phi_{0}},\tilde{\phi_{2}})$. Also, note that $ \tilde{\phi_{0}} $ is just the ratio between the dilaton parameter in the UV, $\phi_0$, and the dilaton parameter in the IR, $\phi_{\infty}$. Finally, for reference in the next section, we fix our parameters as $\tilde{\phi_{0}}=4.675$ and $\tilde{\phi_{2}} = 0.0375$.
\section{Results}
\label{results}
In this section, we present our results concerning the chiral phase transition in $2+1$ dimensions at finite temperature and density, in the presence of a background magnetic field.
It is important to note that, in our model, we consider only the deconfined phase of the dual gauge theory, since we work with the finite-temperature black-hole ansatz. In a more realistic model, one should also consider a confined phase (thermal AdS), which appears at low temperatures; the two phases are separated by a Hawking-Page phase transition. Nevertheless, we were able to extrapolate some of our results to very low temperatures within the deconfined phase in order to show that our model realizes spontaneous breaking of chiral symmetry.
All the physical quantities presented in this section carry a tilde, meaning that they are expressed in units of the appropriate powers of the mass scale $\sqrt{\phi_{\infty}}$. To be more precise:
\begin{equation}
\setlength{\jot}{12pt}
\begin{aligned}
(\tilde T,\tilde\mu) \equiv \left(\frac{T}{\sqrt{\phi_{\infty}}},\frac{\mu}{\sqrt{\phi_{\infty}}}\right)\,\,\, {\rm and} \,\,\,(\tilde B, \tilde \sigma) \equiv \left(\frac{B}{\phi_{\infty}},\frac{\sigma}{\phi_{\infty}}\right).
\end{aligned}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[scale = 0.35]{svsT1}
\hfill
\includegraphics[scale = 0.35]{svsT3}
\caption{The chiral condensate $\tilde\sigma$ versus the temperature $\tilde T$. {\sl Left panel}: $\tilde\sigma$ versus $\tilde T$ for $\tilde\mu=0$ and $\tilde B=0$. In the chiral limit, i.e., $\tilde m_f=0$, one sees a second-order phase transition, while for $\tilde m_f\not=0$ there is a crossover. {\sl Right panel}: $\tilde\sigma$ versus $\tilde T$ in the chiral limit $\tilde m_f=0$ for different values of magnetic field and chemical potential. All quantities are in units of $\sqrt{\phi_{\infty}}$, in both panels.}
\label{fig:svsTa}
\end{figure}
In Figure \ref{fig:svsTa} the behavior of the chiral condensate $\tilde{\sigma}$ as a function of temperature $\tilde{T}$ is presented. In the {\sl Left panel}, for zero magnetic field and density, one can see that the chiral phase transition is second-order in the chiral limit ($\tilde m_f=0$), while for finite fermion mass ($\tilde m_f\neq0$) the phase transition turns into a crossover. We have checked numerically that there is always spontaneous chiral symmetry breaking whenever the parameter $\tilde{\phi_{0}}\neq 0$. However, in the limit $\tilde{\phi_{0}}\to 0$ the chiral condensate vanishes. This limit corresponds exactly to the situation where the positive quadratic dilaton profile ($\Phi(z) \sim \phi_{\infty}\,z^2$) dominates, and in this case it is known that chiral symmetry breaking cannot be reproduced \cite{Chelabi:2015cwn,Chelabi:2015gpc,Ballon-Bayona:2020qpq}.
In the {\sl Right panel} of Figure \ref{fig:svsTa}, the chiral condensate $\tilde{\sigma}$ as a function of the temperature $\tilde{T}$ for different values of magnetic field and density is presented. At zero magnetic field and finite density, the chiral condensate is reduced relative to its value at zero magnetic field and density. At zero density and finite magnetic field, a similar reduction is observed, in agreement with previous works \cite{Rodrigues:2018chh,Rodrigues:2018pep}, signaling an inverse magnetic catalysis (IMC) effect.
At finite density, with or without magnetic field, we observe a reduction of the condensate with respect to its value at zero density and magnetic field. This is expected to happen, since the introduction of a chemical potential generates an asymmetry \cite{Preis:2010cq, Preis:2012fh} between the fermions ($\Psi$) and antifermions ($\bar{\Psi}$) which hinders the pairing $\bar{\Psi}\Psi$, i.e., the formation of a chiral condensate. At finite density and magnetic field there is also an IMC; the two effects add up, such that the net result is an even stronger reduction of the chiral condensate.\footnote{This reduction in the chiral condensate at finite density also appears in more sophisticated holographic approaches in higher dimensions, see for instance \cite{Gursoy:2017wzz,Ballon-Bayona:2017dvv}. Furthermore, for an alternative interpretation of IMC at vanishing chemical potential, based on the anisotropy caused by a magnetic field, see \cite{Gursoy:2018ydr}.}
\begin{figure}[H]
\centering
\includegraphics[scale = 0.35]{svsmuT0.pdf}
\hfill
\includegraphics[scale = 0.35]{svsmuTfinite.pdf}
\caption{The chiral condensate $\tilde \sigma$ versus the chemical potential $\tilde\mu$ in the chiral limit $\tilde m_f=0$. {\sl Left panel}: $\tilde\sigma$ versus $\tilde\mu$ at zero temperature and different values of magnetic field. {\sl Right panel}: $\tilde\sigma$ versus $\tilde \mu$ at small finite temperatures and different values of magnetic field. All quantities are in units of $\sqrt{\phi_{\infty}}$, in both panels.}
\label{fig:svsTb}
\end{figure}
In Figure \ref{fig:svsTb}, the chiral condensate as a function of the chemical potential in the chiral limit $\tilde m_f=0$ is shown. The {\sl Left panel} shows this behavior at zero temperature. One can see that finite density affects the chiral condensate destructively, i.e., the condensate decreases with increasing chemical potential up to a critical value, beyond which it starts to increase slowly again at large chemical potential.
Two comments are in order in the context of canonical (nonholographic) QCD. First, at large chemical potentials chiral symmetry is expected to be restored. Second, at extremely high densities ($\mu\gg T$) chiral symmetry can be broken through the formation of a condensate of quark Cooper pairs in the color-flavor-locked (CFL) phase, via a different mechanism \cite{Alford:1998mk, Alford:2007xm}. Note that this very mechanism was discussed within the AdS/CFT program, for instance, in Ref. \cite{Chen:2009kx}.
In the {\sl Right panel} of Figure \ref{fig:svsTb}, the chiral condensate as a function of the chemical potential in the chiral limit $\tilde m_f=0$, for zero and finite magnetic fields, is shown at small finite temperatures. One can observe that thermal effects affect the chiral condensate substantially, as expected, since thermal fluctuations have a strong impact on chiral condensation, especially in $2+1$ dimensions \cite{Das:1995bn}. Moreover, with a finite magnetic field turned on, the decrease of the chiral condensate is much more pronounced, even at low temperatures, characterizing an IMC. Note, however, that at low temperatures IMC is not expected to happen in QCD, since it is known that a magnetic field is a strong catalyst of chiral symmetry breaking; magnetic catalysis is therefore expected to dominate in this low-temperature regime and, in particular, is a universal behavior at zero temperature \cite{Gusynin:1994re,Miransky:2002rp,Shovkovy:2012zn,Miransky:2015ava}.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.35]{Tcvsmu.pdf}
\hfill
\includegraphics[scale = 0.34]{mucvsB.pdf}
\caption{{\sl Left panel}: critical temperature $\tilde T_{c} $ versus the chemical potential $\tilde \mu$ in the chiral limit for different values of the magnetic field. The critical temperature is defined as the temperature where the chiral condensate vanishes for fixed $\mu$ and $B$. {\sl Right panel}: critical chemical potential $\tilde \mu_{c} $ versus $\tilde B$ in the chiral limit for different values of temperature. As for the critical temperature, the critical chemical potential is defined as the chemical potential where the chiral condensate vanishes for fixed $T$ and $B$. All quantities are in units of $\sqrt{\phi_{\infty}}$, in both panels.}
\label{fig:svsTc}
\end{figure}
Finally, Figure \ref{fig:svsTc} presents the critical temperature $\tilde T_{c} $ as a function of the chemical potential $\tilde \mu$ in the chiral limit for different values of magnetic field ({\sl Left panel}), and the critical chemical potential $\tilde \mu_{c} $ versus $\tilde B$ in the chiral limit for different values of the temperature ({\sl Right panel}). These critical quantities ($T_c$,\,$\mu_c$) are defined as follows: the critical temperature $T_c$ is the temperature at which the chiral condensate vanishes for fixed $\mu$ and $B$; analogously, the critical chemical potential $\mu_c$ is the chemical potential at which the chiral condensate vanishes for fixed $T$ and $B$. A minimal sketch of this extraction is given below.
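For illustration, the extraction of such a critical point from a condensate curve can be sketched as follows; here $\sigma(T)$ is a hypothetical mean-field-like placeholder with $T_c=0.1$, standing in for the condensate obtained from the numerical matching described above:
\begin{verbatim}
# Locate the temperature where the condensate vanishes at fixed mu and B.
from scipy.optimize import brentq

def sigma_of_T(T, Tc=0.1):          # placeholder: sigma ~ (Tc - T)^(1/2)
    return (max(Tc - T, 0.0) / Tc) ** 0.5

T_c_found = brentq(lambda T: sigma_of_T(T) - 1e-6, 1e-4, 0.5)
print(T_c_found)                    # ~ 0.1, as built into the placeholder
\end{verbatim}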
These findings give additional support to the fact that our holographic model captures the IMC effect at zero and finite densities, as well as a decrease of the chiral condensate with increasing chemical potential, with or without magnetic fields. We also find that the effects of the chemical potential and of the magnetic field on the chiral condensate add up, decreasing the chiral condensate even further.
\section{Conclusions}\label{conclusions}
In this work we have described holographically finite density effects on the spontaneous chiral symmetry breaking and chiral phase transition of a system in $ 2+1 $ dimensions in the presence of magnetic fields. We observe inverse magnetic catalysis (IMC), i.e., the reduction of the chiral condensate with an increasing magnetic field, at both zero and finite density. We also observe a decrease of the chiral condensate with increasing chemical potential, with or without magnetic fields. Furthermore, the reduction of the chiral condensate is even more pronounced when one takes both finite densities and magnetic fields simultaneously, as shown in Figures \ref{fig:svsTa} and \ref{fig:svsTb}.
Moreover, we have also found that the critical temperature $\tilde T_c$ diminishes with increasing chemical potential and that the critical chemical potential $\tilde \mu_c$ decreases with increasing magnetic field, as pictured in Figure \ref{fig:svsTc}. These results are in good agreement with other higher-dimensional holographic studies, such as those presented in Refs.
\cite{Gursoy:2017wzz, Ballon-Bayona:2017dvv, Gursoy:2018ydr}.
As a possible extension to our holographic model, it would be interesting to include a Dirac-Born-Infeld (DBI) action in which the magnetic field and the tachyon are coupled. In this setup one might reproduce magnetic catalysis (MC) in our holographic model along with the standard inverse magnetic catalysis (IMC) which comes from the contribution of the magnetic field introduced via the metric. A possible clue in this direction is given by the recent higher-dimensional holographic analysis presented in Ref. \cite{Ballon-Bayona:2020xtf} at zero density where there is MC, as would be expected from QCD.
Another possible extension of our work is to consider the confined phase at low temperatures besides the deconfined phase at high temperatures, the two being separated by a Hawking-Page phase transition. In this case we could study the deconfinement phase transition together with the chiral phase transition, which are expected to happen at approximately the same temperature in QCD.
\section*{Acknowledgments}
The authors thank Alfonso Ballon Bayona and Luis Mamani for useful conversations. DMR is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under grant No. 152447/2019-9. D.L. is supported by the National Natural Science Foundation of China (11805084), the PhD Start-up Fund of Natural Science Foundation of Guangdong Province (2018030310457) and Guangdong Pearl River Talents Plan (2017GC010480). H.B.-F. is partially supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under Grant No. 311079/2019-9.
\section{Introduction}
The violation of Fermi-liquid behavior in the normal state of the
high-$T_c$ superconductors has turned the study of
{\em non-Fermi liquid} phenomena into a major theme in condensed matter
physics \cite{nfl}. Additional motivation
comes from a growing number of experimental realizations of
low-dimensional electron structures - such as quasi one-dimensional (1D)
organic conductors \cite{jerome}, or point contact tunneling in fractional
quantum Hall devices \cite{milliken} - where electron correlations are
seen to produce manifest non-Fermi liquid behavior.
Two prominent model problems, serving as
paradigms for the study of non-Fermi liquids, are the {\em Luttinger liquid}
\cite{haldane,voit} and the (overscreened) {\em multichannel Kondo
effect}
\cite{nozieres,andrei-wiegmann,affleck-ludwig,schlottmann}. ``Luttinger
liquid'' is the code name for
the universal low-energy behavior of interacting electrons in 1D,
whereas the multichannel Kondo model describes a magnetic impurity
coupled via a spin exchange to
several degenerate bands of non-interacting electrons in 3D. Both problems have yielded
to exact solutions, exhibiting a wealth of properties not contained in
the standard Fermi liquid picture of the metallic state.
Here we consider a spin-$S$ magnetic impurity coupled to $m \ge 2$
degenerate bands of {\em interacting} electrons in 1D, thus extending the
ordinary multichannel Kondo model to the case of interacting electrons.
Specifically, we address the question about the influence of
electron-electron interaction on the low-temperature thermal and magnetic
response of the impurity-electron (screening-cloud) composite. This
problem is particularly interesting as it involves the interplay
between two kinds of electron correlations; one induced by the
spin exchange interaction with the impurity, the other coming from the
direct electron-electron interaction.
Moreover, the study of a magnetic impurity in the presence of
interacting electrons may shed new light on possible experimental
realizations of the multichannel Kondo effect, such as certain Uranium-containing
heavy-fermion materials \cite{cox}, or the coupling of conduction electrons
to structural defects in metal point contacts \cite{ralph}.
We study the problem using boundary conformal
field theory (BCFT) \cite{cardy}, assuming that at low temperatures the impurity-electron
interaction renormalizes onto a scale-invariant boundary condition on the
bulk theory. This approach, first suggested by Affleck and Ludwig
for the ordinary multichannel Kondo problem \cite{affleck-ludwig,review}, has
successfully been employed for a single channel of interacting electrons
coupled to a spin-1/2 impurity \cite{frojdh,durg}.
In the present case, with an arbitrary number of electron channels $m \ge 2$, a
BCFT analysis allows for a complete classification of all {\em possible}
critical behaviors of the impurity-electron composite. Being exact, this
information should prove useful as a guide to - and a test of the validity of - other,
more direct approaches to the problem (yet to be carried out).
Specifically, we shall show that
conformal invariance together with the internal symmetries of the problem
restrict the possible types of critical behavior to only two: {\em
Either} the theory is the same as for noninteracting electrons {\em or}
the electron interaction generates a specific boundary operator that produces a
new leading behavior of the impurity thermal response.
In both cases the leading impurity magnetic
response is insensitive to the electron-electron interaction, implying
that the screening of the impurity is realized in the same way as in the
noninteracting problem, with over-, exact, or underscreening depending on
the number of channels and the magnitude of the impurity spin.
While our method cannot pinpoint which of the two scenarios is actually realized, we conjecture -
guided by analogous results for the one-channel problem
\cite{furusaki,egger} - that electron interactions in the bulk {\em do} induce an anomalous term
in the
impurity specific heat in the case of exact screening $(m = 2S)$.
\section{The Model}
As microscopic bulk model we take a multiband Hubbard chain with repulsive
on-site interaction $U>0$,
\begin{equation}
\label{Hlattice}
H_{el} = -t\! \sum_{n, i, \sigma}( \cdop{n,i\sigma} \cop{n+1,i\sigma} +
h.c.)
+ U\!\sum_{n,ij,\mu \sigma} \! n_{n,i\sigma}n_{n,j\mu},
\end{equation}
where $\cop{n,i\sigma}$ is the electron operator at site $n$, with $i= 1,....,m$ and
$\sigma= \uparrow, \downarrow$ band- and spin
indices, respectively, and $n_{n,i\sigma}=\cdop{n,i\sigma} \cop{n,i\sigma}$ is
the number operator. This model - and its variants - has been extensively studied in the
literature, most recently in \cite{balatsky} where it was argued that
enhanced superconducting fluctuations may result when the degeneracy of
the on-site interband coupling $U$ is properly lifted.
At large wavelengths we can perform a continuum limit $\cop{n,i\sigma} \rightarrow
\sqrt{\frac{a}{2\pi}} \Psi_{i\sigma}(na)$ (with $a$ the lattice spacing),
which, for small $U$ and away from half-filling, takes (\ref{Hlattice}) onto
\begin{eqnarray}
\label{bulkHamiltonian}
H_{el} = &\frac{1}{2\pi}&\int dx \biggl\{ v_F
\biggl[\no{ \psidop{L,i\sigma}(x) i \frac{d}{dx} \psiop{L,i\sigma}(x) }
- \no{ \psidop{R,i\sigma}(x) i \frac{d}{dx} \psiop{R,i\sigma}(x) } \biggr]
\nonumber \\
&+&\frac{g}{2}\no{ \psidop{r,i\sigma}(x) \psiop{r,i\sigma}(x)
\psidop{s, j\mu}(x) \psiop{s, j\mu}(x) }
+ g\no{\psidop{L,i\sigma}(x) \psiop{R,i\sigma}(x)
\psidop{R, j\mu}(x) \psiop{L, j\mu}(x) } \biggr\}.
\end{eqnarray}
Here $\psi_{L/R,i\sigma}(x)$ are the left/right moving components
of the electron field $\Psi_{i\sigma}(x)$, expanded about the
Fermi points $\pm k_F$: $\Psi_{i\sigma}(x)=e^{-ik_Fx}\psiop{L,i\sigma}(x)+
e^{ik_Fx}\psiop{R,i\sigma}(x)$. Summation over repeated indices for band,
spin, and chirality $r,s=R,L$ is implied.
The normal ordering is defined w.r.t. the filled Dirac sea, and
$v_F$ and $g$ are given by $v_F=2at\sin(ak_F)$ and $g=Ua/\pi$ respectively.
For the purpose of implementing BCFT techniques in the presence of a Kondo impurity
(yet to be added), we
decouple charge-, spin-, and band- {\em (flavor-)} degrees of freedom in
(\ref{bulkHamiltonian}) by a Sugawara construction
\cite{fuchs}, using the
$U(1)$ (charge), $SU(2)_m$ (level $m$, spin), and $SU(m)_2$ (level $2$,
flavor) Kac-Moody currents:
\begin{eqnarray}
J_{r}(x)&=&\no{ \psidop{r,i\sigma}(x) \psiop{r,i\sigma}(x) }
\label{U1charge} \\
\vJ_r(x)&=&\no{\psidop{r,i\sigma}(x)\frac{1}{2}
\bsigma{\sigma\mu} \psiop{r,i\mu}(x)} \label{SU2spin} \\
J_r^A(x)&=&\no{\psidop{r,i\sigma}T_{ij}^A\psiop{r,j\sigma}},
\label{SU2flavor}
\end{eqnarray}
where $\bsigma{}$ are the Pauli matrices, and $T^A, A\!\in\{1,\ldots,m^2-1\}$,
are generators of the defining representation of $SU(m)$ with normalization
$\mbox{tr}T^AT^B = 1/2\delta^{AB}$.
Diagonalizing the charge sector by a Bogoliubov transformation
$J_{L/R}=\cosh(\theta) j_{L/R}-\sinh(\theta) j_{R/L}$, with
$\coth({2\theta})=1+v_F/((2m-1)g)$, we obtain the critical bulk Hamiltonian
\begin{equation}
\label{sugawara-hamiltonian}
H^*_{el} =\!\frac{1}{2\pi}\int\! dx \biggl\{ \frac{v_c}{4m}
\no{j^i_L(x) j^i_L(x)}
\!+\frac{v_s}{m+2}\no{{\vJ}^i_L(x)\!\cdot\!\vJ^i_L(x)}
+\frac{v_f}{m+2}\no{J^{iA}_L(x) J^{iA}_L(x)}\biggr\},
\end{equation}
where $v_c\!=\!v_F(1+2(2m-1)g/v_F)^{1/2}, v_s\!=\!v_f\!=\!v_F\!-\!g$.
We have here retained only exactly marginal terms in the interaction by removing two
(marginally) irrelevant terms in the spin and flavor sectors \cite{footnote0}.
We have also replaced the right-moving currents with a second species
(labeled by ``2'') of left-moving currents: $j_{L/R}^2(x)\equiv
j_{R/L}(-x)$ for $x>0$ (with $j_{L/R}^1(x)\equiv j_{L/R}(x)$), and
analogously for spin and flavor currents. This amounts to
folding the system onto the positive $x$-axis, with a boundary condition
\begin{equation}
j_L^{1/2}(0) = j_R^{2/1}(0)
\label{trivialBC}
\end{equation}
at the origin, and then analytically continuing the currents back to the full $x-$axis.
Here (\ref{trivialBC}) simulates the continuity at $x=0$ of the
original bulk theory (with the analogous boundary conditions in
spin- and flavor sectors). The Sugawara form of $H^*_{el}$ in
(\ref{sugawara-hamiltonian}) implies invariance
under independent $U(1)^i$, $SU(2)^i_m$, and $SU(m)^i_2$ transformations (with
$i=1,2$ labeling the two species), reflecting the {\em chiral} symmetry of the
critical bulk theory.
We now insert a local spin ${\vS}$ at $x=0$, and couple it to the
electrons by an antiferromagnetic $(\lambda > 0)$ spin exchange interaction
\begin{equation}
\label{HKondo}
H_{\!K}\!=\!\lambda\!\no{(\psidop{L,i\sigma}(0)\! +\! \psidop{R, i\sigma}(0))
\frac{\bsigma{\sigma\mu}}{2}
(\psiop{L,i\mu}(0)\! +\! \psiop{R, i\mu}(0))\! \cdot\! {\vS}}.
\end{equation}
In the low-energy limit the impurity is expected \cite{review} to renormalize
to a conformally invariant boundary condition on the bulk theory, changing
(\ref{trivialBC}) into a new nontrivial boundary condition on $H^*_{el}$.
By using BCFT to extract the set of {\em boundary operators} present for
this boundary condition, finite-temperature effects due to the impurity
can be accessed via standard finite-size scaling by treating (Euclidean) time as
an inverse temperature.
\section{Impurity Critical Behavior}
The set of boundary operators ${\cal O}_j$ and the corresponding {\em boundary
scaling dimensions} $\Delta_j$ can be derived from the finite-size spectrum
of $H^*_{el}$ through a conformal mapping from the half-plane to a
semi-infinite strip. The mapping is such that the boundary condition
corresponding to the impurity on the half-plane is mapped to both sides of
the strip, with the boundary operators in the plane in one-to-one
correspondence to the eigenstates on the strip. In particular, the boundary
dimensions are related to the energy spectrum through the relation
$E=E_0+\pi v \Delta/\ell$, with $E_0$ the groundstate
energy, and $\ell$ the width of the strip \cite{cardy}.
A complete set of eigenstates of $H^*_{el}$ are given by the charge-, spin- and flavor
{\it conformal towers} , each tower defined by
a {\it Kac-Moody primary state} and its descendants \cite{fuchs}.
The primaries are labeled by $U(1)$ quantum numbers $q^i$ in the
charge sector, $SU(2)$ quantum numbers $j^i$ in the spin sector
and $SU(m)$ quantum numbers {\em (Dynkin labels)}
$(m^i_1,\ldots,m^i_{m-1})$ in the flavor
sector:
\begin{equation}
q^{{}^1_2} = C^{{}^1_2}\frac{e^{\theta}}{2} \pm
D^{{}^1_2}\frac{e^{-\theta}}{2}, \ \ \
j^i=0,\frac{1}{2},...,\frac{m}{2}, \ \ \
\sum_{k=1}^{m-1}m^i_k=0,1,2 \ ,
\label{QuantumNumbers}
\end{equation}
where $C^i, D^i \in \Z$ and $m^i_k\in\N$, with $i=1,2$ labeling the two species as above.
We can express the complete energy spectrum,
and consequently the
complete set of possible boundary scaling dimensions $\Delta=\Delta^1+\Delta^2$ in terms of the
quantum numbers in (\ref{QuantumNumbers}):
\begin{equation}
\Delta^i=\frac{(q^i)^2}{4m}+\frac{j^i(j^i+1)}{m+2}+
\frac{1}{2(m+2)}
\sum_{k=1}^{m-1} \!m^i_k\!\left(\!f_m(k,k)\!+\!\sum_{l=1}^{m-1} m^i_l
\frac{f_m(k,l)}{m}\right)+{\cal N},
\label{Dimensions}
\end{equation}
where $f_m(k,l)\!=\!\mbox{min}(k,l)(m-\mbox{max}(k,l))$ and ${\cal N}\in\N$.
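As an illustration of how (\ref{Dimensions}) is used in practice, the following Python sketch (the notation is ours; the tuple {\tt dynkin} holds the labels $(m_1,\ldots,m_{m-1})$) evaluates $\Delta^i$ for one species and reproduces, for $m=2$, the dimensions of the fields $\varphi^s$, $\varphi^f$ and of the operator ${\cal O}_2$ introduced below:
\begin{verbatim}
def f(m, k, l):
    return min(k, l) * (m - max(k, l))

def delta(m, q, j, dynkin, N=0):
    charge = q ** 2 / (4 * m)
    spin = j * (j + 1) / (m + 2)
    flavor = 0.0
    for k in range(1, m):
        inner = f(m, k, k) + sum(dynkin[l - 1] * f(m, k, l)
                                 for l in range(1, m)) / m
        flavor += dynkin[k - 1] * inner
    return charge + spin + flavor / (2 * (m + 2)) + N

m = 2
print(delta(m, 0, 1, (0,), N=1))   # (m+4)/(m+2) = 1.5: dimension of O_2
print(2 * delta(m, 0, 0.5, (0,)))  # 3/(2(m+2))  = 0.375: Delta_{phi^s}
print(2 * delta(m, 0, 0, (1,)))    # (m^2-1)/(m(m+2)) = 0.375: Delta_{phi^f}
\end{verbatim}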
Each conformal boundary condition corresponds to a {\em selection rule} which
specifies that only certain combinations of conformal towers are allowed.
Since the trivial boundary condition (\ref{trivialBC}) simply defines the
bulk theory in terms of a boundary theory, the associated selection rule
reproduces the {\em bulk} scaling dimensions of
$H^*_{el}$ \cite{frojdh}. It is less obvious how to identify the correct selection rule
for the nontrivial boundary condition representing (\ref{HKondo}).
Fortunately, we do not need the full selection rule to extract the
leading impurity critical behavior. For this purpose
it is sufficient to identify the {\em leading
irrelevant boundary operator} \cite{affleck-ludwig} (LIBO) that can appear in
the scaling
Hamiltonian,
as this is the operator that drives
the dominant response of the impurity.
As the possible correction-to-scaling-operators are
boundary operators constrained by the symmetries of the Hamiltonian,
this sets our strategy: We
consider {\em all} selection rules for combining conformal towers in
(\ref{QuantumNumbers}) (thus exhausting all
conceivable boundary fixed points), for each selecting the corresponding
LIBO, using (\ref{Dimensions}). We then extract the {\em possible} impurity critical
behaviors by identifying those LIBOs that (i) {\em produce a
noninteracting limit $g \rightarrow 0$ consistent with known results, and} (ii)
{\em respect
the symmetries of $H^*_{el} + H_K$.}
\subsection{Overscreening: $m > 2S$}
Let us first focus on the case $m > 2S$. Here the noninteracting
$(g\!=\!0)$ problem renormalizes to a nontrivial fixed point, as
can be seen by passing to a basis of definite-parity $(P= \pm)$ fields
$\psiop{\pm,i\sigma}(x) = (1/\sqrt{2})(\psiop{L,i\sigma}(x) \pm
\psiop{R,i\sigma}(-x) )$ \cite{footnote2}. In this basis
$H_{el}^*[g\!=\!0] + H_K$
becomes identical to the Hamiltonian representing 3D noninteracting
electrons in $2m$ channels $(P=\pm,
i=1,...m)$, coupled to a local spin in the $m$ positive parity channels only.
At low temperatures this system flows
to the overscreened $m-$channel Kondo fixed point with a LIBO of
dimension $\Delta = (4+m)/(2+m)$
\cite{pang}. Consider first the case that this fixed point is stable
against perturbations in $g$ (or connected to a line of $g>0$ boundary fixed points via an
exactly marginal operator).
To search for a novel {\em leading} scaling
behavior for $g \neq 0$ it is then sufficient to search
for boundary operators with dimensions in the interval $1 \le \Delta \le (4+m)/(2+m)$
which produce an impurity response analytically connected to that of the
non-interacting theory.
An operator with $1 \le \Delta < 3/2$ contributes an impurity specific heat
scaling as $(\Delta-1)^2 T^{2\Delta-2}$ \cite{affleck-ludwig},
and condition (i) then
requires that, as $g \rightarrow 0$, $\Delta_{LIBO} \rightarrow (4+m)/(2+m)$ {\em or}
that $\Delta_{LIBO} \rightarrow 1$ (with in this case next-leading dimension
$\Delta \rightarrow (4+m)/(2+m)$). On the other hand, if the $g=0$
fixed point gets destabilized as $g$ is switched on, condition
(i) enforces the LIBO at the new $g > 0$ boundary fixed point
to become marginally relevant for $g=0$ (so as to produce the necessary
flow back to the known $g=0$ overscreened $m-$channel fixed point) \cite{footnote1}. Thus,
for this case
condition (i) unambiguously requires that $\Delta_{LIBO} \rightarrow 1$ as $g \rightarrow 0$.
Turning to condition (ii), we note that the Kondo interaction
(\ref{HKondo}) couples $L$ and $R$ fields and thus breaks the chiral
gauge invariance in all three sectors. This implies that the Kac-Moody
symmetries get broken down to their diagonal subgroups, i.e.
$U(1)^1\times U(1)^2 \rightarrow U(1)$ in the charge sector,
$SU(2)_m^1 \times SU(2)_m^2 \rightarrow SU(2)_{2m}$ in the spin sector, and
$SU(m)_2^1 \times SU(m)_2^2 \rightarrow SU(m)_{4}$ in the flavor sector.
Operators with
non-zero values of $q_1$ and $q_2$ may thus appear in the charge sector,
provided that $q_1 = -
q_2$ as required by conservation of total charge. Similarly, operators in spin- and
flavor sectors with non-zero quantum numbers $j^i$ and $m^i_j$
are now allowed, provided that they
transform as singlets under the diagonal subgroups.
Remarkably, {\em there exists precisely one generic class of $g$-dependent
operators which satisfy conditions (i) and (ii).} It is
given by
\begin{equation}
{\cal O}_1 \sim \, \no{\mbox{exp}(\frac{i\sqrt{\pi}}{2mK_{\rho,m}}
\, \phi^1_L)} \times \no{\mbox{exp}(\frac{i\sqrt{\pi}}{2mK_{\rho,m}}
\, \phi^2_L)} \times \, \varphi^s \times \varphi^f
\label{LIBO}
\end{equation}
of dimension $\Delta_{{\cal O}_1} = 1+(K^{-1}_{\rho,m}-1)/2m
\rightarrow 1$
as $g \rightarrow 0$.
Here $\phi^i_L(x)$ is a chiral charge boson of
species $i$ (i.e. $\phi^i_L(x) = \int dx \ j_L^i(x)$), while
$\varphi^s$ and $\varphi^f$ are the singlet fields (under the diagonal
subgroups)
in the decomposition of the product of primary fields,
$(j^1\!=\!1/2) \times (j^2\!=\!1/2)$ and
$(1,0,...,0) \times (0,0,...,1)$
in spin- and flavor sectors, respectively. These carry dimensions
$\Delta_{\varphi^s} = 3/(2(m+2))$ and $\Delta_{\varphi^f} =
(m^2-1)/(m(m+2))$.
The parameter $K_{\rho,m}$ is given by
\begin{equation}
K_{\rho,m}
= (1+2(2m-1)\frac{g}{v_F})^{-\frac{1}{2}} = \frac{v_F}{v_c} \le 1
\label{chargeparameter}
\end{equation}
and plays the role of a generalized (channel-dependent) Luttinger liquid
parameter \cite{voit}.
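As a small numerical illustration (the value of $g/v_F$ is arbitrary), the Python sketch below computes the Bogoliubov angle from $\coth(2\theta)=1+v_F/((2m-1)g)$, checks the identity $K_{\rho,m}=e^{-2\theta}$ that follows from (\ref{chargeparameter}), and evaluates the anomalous exponent $(K^{-1}_{\rho,m}-1)/m$ appearing in (\ref{scaling}) below:
\begin{verbatim}
import math

def exponent(m, g_over_vF):
    x = 1 + 1 / ((2 * m - 1) * g_over_vF)          # coth(2 theta)
    theta = 0.5 * math.atanh(1 / x)
    K = (1 + 2 * (2 * m - 1) * g_over_vF) ** -0.5  # Eq. (chargeparameter)
    assert abs(K - math.exp(-2 * theta)) < 1e-12   # K_{rho,m} = e^{-2 theta}
    return (1 / K - 1) / m

for m in (2, 3, 4):
    print(m, exponent(m, 0.1))   # vanishes smoothly as g -> 0
\end{verbatim}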
The next-leading generic irrelevant boundary operator satisfying (i) and (ii)
is independent of $g$, and given by
\begin{equation}
{\cal O}_2
\sim \vJ^1_{-1} \cdot
\vphi^1\times\1^2 + \1^1\times\vJ^2_{-1} \cdot \vphi^2 ,
\label{NextLIBO}
\end{equation}
with $\vJ^i_{-1} \cdot \vphi^i$
the first Kac-Moody descendant of the spin-1 primary field
$\vphi^i$, obtained by contraction with the Fourier mode $\vJ^i_{-1}$ of the
$SU(2)_m$ currents. It
carries dimension $\Delta_{{\cal O}_2} = (m+4)/(m+2)$, and is the same
operator that drives the leading impurity response in the non-interacting
problem. For certain special values of $m$ additional $g$-dependent boundary operators
satisfying (i) and (ii) appear, but as these are non-generic and, for
given $m$, of higher dimensions than $\Delta_{{\cal O}_1}$, we do not consider
them here \cite{NonGeneric}.
Piecing together the results, it follows
that either ${\cal O}_1$ in (\ref{LIBO}) {\em
or} ${\cal O}_2$ in (\ref{NextLIBO}) plays the role of a LIBO at the $g>0$ boundary fixed
point.
We may thus define a scaling Hamiltonian
\begin{equation}
H_{scaling} = H^*_{el} + \mu_1 {\cal O}_1(0) + \mu_2 {\cal
O}_2(0) + \mu_3 {\cal O}_3(0)...,
\label{ScalingHamiltonian}
\end{equation}
with $\mu_j$ conjugate scaling fields, and ${\cal O}_{j>2}$ less relevant
operators. In the case that ${\cal O}_1$ in (\ref{LIBO}) does {\em not}
appear, $\mu_1 \equiv 0$. Using
(\ref{ScalingHamiltonian}), the thermal response may now
be calculated perturbatively
in the scaling fields $\mu_j$, using standard techniques
\cite{affleck-ludwig}. We thus obtain for the impurity specific heat:
\begin{eqnarray}
C_{imp} & = & c_1(1-K_{\rho,m}^{-1})^2T^{(K^{-1}_{\rho,m} - 1)/m}
+ \left\{ \begin{array}{ll}
c_2 T \mbox{ln}(\frac{T_K}{T}) + ... & m=2, S= \frac{1}{2} \nonumber \\
c_2' T^{\frac{4}{m+2}} + ... & m > 2, m > 2S \nonumber \\
\end{array} \right . \ \ T \rightarrow 0.
\label{scaling}
\end{eqnarray}
Here $c_1, c_2$ and $c_2'$ are amplitudes of second order in the
scaling fields with corresponding indices, $T_K$ plays the role of a Kondo temperature,
and ``...'' denotes subleading terms.
Note that the amplitude of the leading term in (\ref{scaling}) always vanishes when $g=0$,
thus making the second $g-$independent term dominant.
The result in (\ref{scaling}) is exact and independent
of the precise nature of the boundary fixed point. In the case that $\mu_1 \equiv 0$,
and hence $c_1 \equiv 0$, the
$g > 0$ fixed point is the {\em same} as for the ordinary (noninteracting) overscreened
problem, although the content of subleading irrelevant operators ${\cal O}_{j>2}$ may
differ. In the alternative case, with $\mu_1 \neq 0 \ (c_1 \neq 0)$, the situation is more intricate,
with, in principle, three possibilities: (a) the $g=0$ (ordinary overscreened Kondo) and
$g \neq 0$ fixed points are the same, but with different contents of irrelevant
operators, (b) the $g=0$ and $g > 0$ fixed points are continuously
connected via a critical line by an exactly
marginal boundary operator with a scaling field parameterized by $g$, or (c) the $g=0$
and $g>0$ fixed points are distinct and the flow between them is governed by a marginally
relevant operator \cite{footnote1}.
We postpone a discussion of the various possibilities to the next section.
Turning to the impurity magnetic susceptibility $\chi_{imp}$, its leading scaling
behavior is produced by the lowest-dimension boundary operator which
contains a {\em nontrivial} singlet $SU(2)_{2m}$ factor that couples to the
total spin density \cite{frojdh}.
Since ${\cal O}_1$ in (\ref{LIBO}) only contains the
identity in the $SU(2)_{2m}$ sector the desired operator is identified as ${\cal O}_2$ in
(\ref{NextLIBO}). Thus, the {\em leading} term in the magnetic susceptibility
$\chi_{imp}$ due to the impurity
is independent of the electron-electron interaction, and remains the
{\em same} as for the noninteracting overscreened $m-$channel Kondo problem
\cite{andrei-wiegmann,affleck-ludwig}:
\begin{eqnarray}
\chi_{imp}& = & \left\{ \begin{array}{ll} {c}_2\ln({\frac{T_K}{T}}) +...
& m=2, S=1/2 \\
{c'}_2 T^{\frac{2-m}{m+2}}+ ... & m>2, m>2S \end{array} \right . \ \ \ \ T
\rightarrow 0
\label{impsusc}
\end{eqnarray}
where $c_2$ and ${c'}_2$ are second order in scaling fields
and ``...'' denotes subleading terms. We find that no subleading
interaction-dependent divergent contributions are possible, as these would
also give rise to new divergences in the noninteracting limit.
\subsection{Exact screening and underscreening: $m \le 2S$}
An analysis analogous to the one above can be carried out for $m \le
2S$ as well. Again passing to a definite-parity basis and exploring known results
\cite{schlottmann}, one verifies that the {\em noninteracting} 1D electron groundstate
carries a spin $S-m/2$ corresponding to a strong coupling fixed point. When $m=2S$ the
impurity is completely screened and the situation is essentially the same as for the
ordinary single-channel Kondo problem with impurity spin 1/2: the electron screening cloud
behaves as a local Fermi liquid with a $\pi/2$ phase shift of the single-electron wave
functions. Analogous to the single-channel problem \cite{frojdh}, there are three degenerate
LIBOs for this case, given by the energy-momentum tensors (of dimension $\Delta=2$) in
charge-, spin-, and flavor sectors. When $m=2S$ these produce the {\em leading} term in the
impurity specific heat, $C_{imp} = b_1T + ... $,
as well as in the susceptibility, $\chi_{imp} = b_2 - b_3T^2 + ...$,
with $b_{1,2,3} > 0$ amplitudes linear in the scaling fields, and with higher powers
in temperature coming from higher-order descendants of the identity operator. When $m<2S$ the
impurity spin is only partially screened as there are not enough conduction electron
channels to yield a singlet groundstate. This leaves an asymptotically
decoupled spin $S-m/2 >
0$, adding a Curie-like contribution to $C_{imp}$ and $\chi_{imp}$,
in addition to logarithmic corrections characteristic of
asymptotic freedom \cite{schlottmann}.
Let us study the case $m=2S$ and explore what happens when turning on the electron
interaction. Implementing condition (i) from the previous section, possible LIBOs appearing
for $g>0$ must have dimensions $\Delta$ with the property $\Delta \rightarrow 1$ or $\Delta
\rightarrow 2$ as $g \rightarrow 0$. Using condition (ii), we find that in addition
to ${\cal O}_1$ in (\ref{LIBO}) there are several new allowed classes of $g-$dependent
generic boundary operators, all
with $\Delta \rightarrow 2$ as $g \rightarrow 0$. As any boundary operator with dimension
$\Delta > 3/2$ produces the same {\em leading} scaling in temperature as a $\Delta =2$ operator
(although of different amplitudes) \cite{frojdh}, the only {\em possible} leading
term with an interaction-dependent exponent is again generated
by ${\cal O}_1$, as in the overscreened case. Thus, since ${\cal O}_1$ does not contribute to the
impurity susceptibility, its leading behavior
remains the same as in the non-interacting problem, exhibiting exact
screening with a constant zero-temperature contribution,
\begin{equation}
\chi_{imp} = b_2 + ... - b_3T^2 + ...\ , \ \ \ m=2S, \ g \ge 0, \ T \rightarrow 0
\label{gspecific}
\end{equation}
where ``...'' denotes possible second-order contributions in scaling
fields. The amplitude $b_2$ is the same as in the noninteracting problem,
while $b_3$ may pick up second-order interaction-dependent terms contributed by
subleading operators.
The leading {\em possible} interaction-dependent correction to (\ref{gspecific}) scales as
\begin{equation}
\chi_{imp}^{corr} \sim(K^{-1}_{\rho,m}-1)T^{1+(K^{-1}_{\rho,m}-1)/m}
\label{corr}
\end{equation}
and is produced at second order by the composite boundary operator
\begin{equation}
{\cal O}_3 \sim \no{\mbox{exp}(\frac{i\sqrt{\pi}}{2mK_{\rho,m}}
\, \phi^1_L)} \times \no{\mbox{exp}(\frac{i\sqrt{\pi}}{2mK_{\rho,m}}
\, \phi^2_L)} \times \, \vJ^{diag}_{-1}\cdot\mbox{\boldmath $\varphi$}^s
\times \varphi^f.
\label{corrop}
\end{equation}
Here $\vJ^{diag}\equiv \vJ^1_L+\vJ^2_L$ is the generator of the diagonal
$SU(2)_{2m}$ subgroup in the spin sector, $\mbox{\boldmath $\varphi$}^s$ is a diagonal
spin-$1$ field in the product of primaries
$(j^1\!=\!1/2) \times (j^2\!=\!1/2)$, and the charge and flavor
factors are the same as for ${\cal O}_1$ above. The operator ${\cal O}_3$ has scaling dimension
$\Delta_3=1+\Delta_1=2+(K^{-1}_{\rho,m}-1)/2m$ and, as seen in
(\ref{corr}), gives a vanishing amplitude at $g=0$, thus ensuring the
correct behavior in the noninteracting limit.
Whether ${\cal O}_3$ appears in the spectrum or not, however, must be
checked by an independent method.
Two possible
scenarios again emerge for the scaling of the impurity specific heat: {\em Either} it
remains the same as in the noninteracting exactly screened problem
{\em or}, in the case that ${\cal O}_1$ appears as a LIBO:
\begin{equation}
C_{imp} = c_1(1-K^{-1}_{\rho,m})^2 T^{(K^{-1}_{\rho,m} - 1)/m}
+ b_1T + ... \ , \ \ \ m=2S, \ g \ge 0, \ T \rightarrow 0
\label{NFLS}
\end{equation}
where the amplitude $c_1$ is independent of $K_{\rho, m}$, and $b_1$
may differ from the noninteracting problem by second order additive terms coming
from interaction-dependent subleading corrections. Notably, by putting
$m=1$ in (\ref{NFLS})
we recover the critical exponent conjectured by Furusaki and Nagaosa
\cite{furusaki} for the impurity specific heat in the (exactly screened) single-channel problem.
The leading {\em possible} interaction-dependent correction to (\ref{NFLS}) is also produced
by ${\cal O}_3$ and scales as
\begin{equation}
C_{imp}^{corr} \sim (K_{\rho,m}^{-1} -1) T^{2+(K^{-1}_{\rho,m}-1)/m},
\label{spec-corr}
\end{equation}
with a vanishing amplitude in the noninteracting limit.
For $m<2S$, the influence of the electron-electron interactions on the
impurity-electron composite corresponding to the screened part of the
impurity spin is the same as for exact screening, with the same
scenarios for the critical behavior. However, the weak coupling between the
uncompensated (asymptotically free) part of the impurity spin and the conduction
electrons produces corrections at finite $T$ \cite{schlottmann} that may get modified by
the electron-electron interaction. We have not attempted to include these effects here.
\section{Discussion}
To conclude, we have presented an analysis of the possible
low-temperature thermodynamics of the multichannel Kondo problem for an
interacting electron system in 1D. While the leading term in the impurity susceptibility
remains the same
as for noninteracting electrons (for any number of channels $m$ and
impurity spin $S$), there are two possible behaviors for the impurity specific
heat consistent with the symmetries of the problem: {\em Either} it
remains the same as in
the noninteracting problem {\em or} it acquires a new leading term,
scaling with a non-Fermi liquid exponent $\alpha_m =
(K_{\rho,m}^{-1} - 1)/m$, with $K_{\rho,m} \le 1$ in (\ref{chargeparameter})
measuring the strength of the repulsive electron-electron interaction.
These results are exact, given the existence of a stable
boundary fixed point, an assumption common to all applications of BCFT to
a quantum impurity
problem \cite{review}.
As we discussed in Sec. III. A,
this fixed point (for given $m$ and $S$) is disconnected from that of the
noninteracting
problem only if the LIBO ${\cal O}_1$ in (\ref{LIBO}) turns marginally relevant as $g$ is
put to zero. Does this happen? A conclusive answer would require a construction of the
corresponding exact renormalization group equations. This is a nontrivial
task, and we have only carried out a perturbative analysis to second
order in the scaling fields. Turning (\ref{ScalingHamiltonian}) into a Lagrangian, and
integrating out the short-time degrees of freedom using the operator product
expansion \cite{CardyBook}, one obtains the 1-loop RG equation for the scaling field $\mu_1$
conjugate to ${\cal O}_1$:
\begin{equation}
\frac{d\mu_1}{d\ln{\tau_0}} = -(q_1+q_2)(\lambda_1+\lambda_2)\mu_1 \ ,
\end{equation}
with $\tau_0$ a short-time cut-off,
$q_1$ and $q_2$ the charge quantum numbers of ${\cal O}_1$, and
$\lambda_1$ and $\lambda_2$ the scaling fields of
the exactly marginal charge currents
$j^1_L$ and $j^2_L$ (allowed due to breaking of particle-hole symmetry
in the microscopic Hamiltonian (\ref{Hlattice}) \cite{Cbreaking}). Conservation of total charge
$q_1+q_2=0$ thus gives $d\mu_1/d\ln{\tau_0}=0$ to second order in the
scaling fields.
If this property persists to higher orders (as suggested by the cancellation of the
1-loop contribution due to a symmetry), the $g\neq0$ and $g=0$ boundary fixed points
are either a) the same (but with different contents of irrelevant operators) {\em or} b)
connected via a line of fixed points by the exactly marginal charge currents $j_L^1$ and $j_L^2$
(with scaling fields $\lambda_1$ and $\lambda_2$ parameterized by $g$). As we are unable to
determine the actual scenario,
the question about the
relation of the $g>0$ fixed point to that of the noninteracting problem remains open.
A second open issue is whether the electron-electron interaction
influences the impurity-electron (screening cloud) composite
differently when the impurity is overscreened $(m >2S)$ as compared to
under- or exact screening $(m \le 2S)$. In the overscreened case with
free electrons the
impurity induces a critical behavior where the size of the screening cloud
diverges as one approaches zero temperature. As a consequence, {\em all}
conduction electrons become correlated due to the presence of
the impurity. By turning on a weak (screened) Coulomb interaction among the electrons
(in 1D simulated by the local e-e interaction in (\ref{bulkHamiltonian}))
these correlations may change. Will the change be such as
to produce a
novel impurity critical behavior? While our analysis does not
provide an answer, it predicts its exact form if it does appear.
In the case of exact screening there is strong evidence
that the impurity scaling behavior is indeed governed by interaction-dependent
exponents. In this case the screening cloud has a finite
extent. Turning on the Coulomb interaction, electrons "outside" of the cloud will
become correlated, most likely influencing the rate
with which they tunnel into and out of the cloud, hence influencing its
properties. As this impurity-electron composite is
described by a Fermi-liquid fixed point (where the electrons simply acquire a
phase shift) in exact analogy with the single-channel Kondo problem, we
expect that the effect of turning on electron interactions will indeed be
similar to the single-channel case. Considering the recent Monte Carlo
data by Egger and Komnik \cite{egger} supporting the single-channel
Furusaki-Nagaosa scaling \cite{furusaki}, this strongly favors the
appearance of the interaction-dependent exponents in (\ref{corr}),
(\ref{NFLS}) and (\ref{spec-corr}) when $m = 2S$.
\subsection*{Acknowledgment}
We thank I. Affleck, N. Andrei, R. Egger, P. Fr\"ojdh, and A. W. W. Ludwig
for valuable input.
H. J. acknowledges support from the Swedish Natural Science Research Council.
\section{Introduction}
Inclusive semileptonic decays of $B$-mesons into charmed final states
are benchmark processes at $B$-factories.
Because of relatively large rates and
clean experimental signatures, these decays can be studied
with great precision. On the other hand, theoretical
description of semileptonic $B$ decays
is robust thanks to the Operator Product Expansion (OPE)
in inverse powers of the $b$-quark mass $m_b$.
The application of the OPE to semileptonic decays of $B$-mesons
leads to the
conclusion that both the total
decay rate and various kinematic distributions can be described by
power series in
$\Lambda_{\rm QCD}/m_b$ \cite{ope}.
For an infinitely heavy $b$-quark,
the decay rate
$B \to X_cl\bar \nu_l$ coincides with the rate computed
at the quark level. For realistic values of bottom and charm masses,
a few non-perturbative matrix elements that enter at order
$(\Lambda_{\rm QCD}/m_b)^n$, $n=2,3$ are accounted for in existing theoretical
predictions.
In recent years, many measurements of moments
of charged lepton energy and hadronic invariant mass
in $B \to X_cl\bar \nu_l$ decays
have been performed
by BABAR, BELLE, CLEO, CDF and DELPHI
\cite{exp_babar,exp_del,exp_belle,exp_cleo,exp_cdf}.
Comparison of these experimental results
with theoretical predictions for corresponding observables
leads to the determination of the Cabibbo-Kobayashi-Maskawa (CKM)
matrix element $|V_{\rm cb}|$,
bottom and charm quark masses and a number of
non-perturbative parameters such as $\mu_\pi^2$
and $\mu_G^2$ \cite{fit1,fit2}. Typical precision claimed in these
analyses is about one percent for $|V_{\rm cb}|$
and $m_b$ and a few percent for $m_c$ and non-perturbative
matrix elements \cite{fit1,fit2}.
To achieve such precision, advances
in theoretical understanding of semileptonic $B$-decays were necessary,
including
subtle interplay between perturbative and non-perturbative physics and
significant
developments in the technology of multi-loop computations. While one-loop corrections both to the total $b \to cl\bar \nu_l$ decay rate \cite{1lrate} and to a number of important differential distributions have long been known \cite{diffdistr}, it is interesting to remark
that phenomenologically relevant triple differential distribution in
charged lepton energy, leptonic invariant mass and
hadronic invariant mass was computed through ${\cal O}(\alpha_s)$
only a few years ago \cite{trott,kolya1}. This fact illustrates
the complexity of perturbative calculations, when
massive particles are involved, at a fully differential level.
Given the precision of available
experimental measurements, good understanding
of non-perturbative effects and a fairly large value of the strong coupling
constant $\alpha_s(m_b) \approx 0.24$, it is expected
that ${\cal O}(\alpha_s^2)$ corrections to $b \to X_cl\bar \nu_l$ decays
are required for a consistent theoretical description. However,
as was realized long ago, the technical complexity of such an endeavor
is daunting.
corrections to $b \to X_cl \bar \nu_l$
were computed in three
specific kinematic points \cite{mcz0,mczm,mczi}. These results were used
in Ref.\cite{mczi} to estimate the NNLO QCD corrections
to $\Gamma(b \to X_cl\bar \nu_l)$.
Unfortunately, such a description is necessarily limited in its scope even for the total rate, and
a generalization of such an approach to more differential quantities, such
as lepton energy and hadronic invariant mass moments, is clearly out of the question.
On the other hand, a subset of the NNLO QCD corrections, the
BLM corrections \cite{blm}, received significant attention recently.
The BLM corrections are associated with the running
of the strong coupling constant; they are potentially important since
the QCD $\beta$-function is large.
For $B$ decays, however, the BLM corrections
are known to be modest if proper
definition of quark masses is adopted and judicious choice of the
renormalization scale in the strong coupling constant is made.
The BLM effects are the easiest NNLO effects to calculate since they can
be obtained from a one-loop computation if the latter is performed
with a non-vanishing gluon mass \cite{voloshin}. For this reason,
in the past, the BLM
corrections to $b \to cl\bar \nu_l$ were calculated for the total
rate and various kinematic moments \cite{wise,kolya2}. However, the
NNLO QCD corrections beyond the BLM
approximation, for which genuine two-loop computations are required, remained
missing.
Calculation of these two-loop corrections became possible recently
thanks to developments in numerical approaches to multi-loop
computations \cite{method}.
These numerical methods benefit from the absence
of mass hierarchy in the problem which is the case for
$b \to c$ decays, since masses of bottom and charm quarks are close.
The possibility to use the approach of Ref.\cite{method} to describe decays of
charged particles was recently pointed out in \cite{method1} where
electron energy spectrum in muon decay was computed through
second order in the perturbative expansion in QED.
The goal of this Letter is to present the computation of ${\cal O}(\alpha_s^2)$
corrections to $b \to X_cl\bar \nu_l$
decay rate at a fully differential level.
Our results can be used to calculate {\it arbitrary} observables related
to the inclusive $b \to c$ transition through NNLO in QCD.
For example, second order QCD corrections
to such popular observables as lepton energy,
hadronic invariant mass and hadronic energy moments can be studied
as functions of the cut on the charged lepton energy. Inclusion of
the results of our computation into global fits should lead to a reduction
of the theoretical uncertainty in the determination of $|V_{\rm cb}|$,
the bottom and charm quark masses and the non-perturbative parameters that
contribute to the decay rate.
\section{Computation}
In this Section, we set up our notation and briefly describe
technical aspects of the computation. A detailed description of the
method can be found in \cite{method,method1}.
Consider the decay $b \to X_cl\bar \nu_l$ where the final state lepton
is massless. The
differential decay rate can be written as
\begin{equation}
{\rm d} \Gamma = \frac{G_F^2 |V_{\rm cb}|^2 m_b^5}{192 \pi^3}
\left ( {\rm d}F_0 + a\; {\rm d}F_1 + a^2\; {\rm d}F_2 \right ),
\end{equation}
where $G_F$ is the Fermi constant, $m_b$ is the $b$-quark pole mass,
$a = \alpha_s/\pi$
and $\alpha_s$ is the ${\overline {\rm MS}}$
strong coupling constant defined in the theory with
five active flavors and
renormalized at the scale $m_b$. For numerical computations, we use $m_b = 4.6~{\rm GeV}$ and $m_c = 1.15~{\rm GeV}$. While these numerical values for
the quark masses cannot be justified in the pole scheme, our choice
is motivated by an eventual necessity to transform the pole scheme computation
to a more suitable scheme. The values of the quark masses that we employ
in this Letter correspond
to the central values of $m_{b,c}$ in the ``kinetic scheme'' \cite{kin},
derived in recent fits to inclusive semileptonic $B$-decays \cite{fit1,fit2}.
To calculate the functions
${\rm d}F_{0-2}$, we have to account for different processes.
At leading order, ${\rm d}F_0$ is computed by squaring the matrix element of the
process $b \to cl\bar \nu_l$ and summing or averaging over spins and
colors, as appropriate. At next-to-leading order, ${\rm d}F_1$
receives contributions
from virtual ${\cal O}(\alpha_s)$ corrections to $b \to cl\bar \nu_l$
and from the real-emission process $b \to cl\bar \nu_l + g$.
To compute ${\rm d}F_2$,
we require two-loop ${\cal O}(\alpha_s^2)$ corrections
to $b \to cl\bar \nu_l$, one-loop ${\cal O}(\alpha_s)$ corrections
to $b \to cl\bar \nu_l + g$ and the double real-emission corrections
$b \to cl\nu_l+X$, where $X$ refers to
two gluons or a quark-antiquark pair or a ghost-antighost pair.
We will refer to these corrections as double-virtual, real-virtual and
double-real, respectively. In addition, we have to account for a variety of
renormalization constants, when computing higher order corrections.
We do not include the process $b \to cl \bar \nu_l + c \bar c$
in our calculation since the energy release in this process is so small
that it cannot be treated perturbatively.
To calculate the NNLO QCD corrections,
the method for multiloop computations
developed in \cite{method,method1} is employed; in those references
a detailed discussion of many
technical issues relevant for the current computation can be found.
One technical aspect that we improve upon relative
to Refs.\cite{method,method1} is how virtual corrections
to the single-gluon emission process $b \to cl\bar \nu_l + g $
are treated. In Refs. \cite{method,method1}
these corrections
were dealt with by an analytic reduction to master integrals followed
by a numerical evaluation of those. This
method, however, becomes impractical quite rapidly, once the number of
external particles or the number of massive particles in the problem
increases. In principle,
the real-virtual corrections can be computed numerically, but for
heavy-to-light decays this is complicated because
some Feynman diagrams develop imaginary parts. To handle these imaginary
parts, we proceed as follows. For all Feynman
diagrams that contribute to real-virtual corrections,
it turns out possible to identify a Feynman parameter that enters
the denominator of the integrand linearly. Let us call this Feynman parameter $x_1$. Then,
a typical integral that has to be computed reads
\begin{equation}
I(0,1) = \int \limits_{0}^{1} \frac{{\rm d}x_1 x_1^{-\epsilon + n}}{(-a + b x_1+i0)^{1+\epsilon}}.
\end{equation}
Here $n \ge -1$, $b > a > 0$ and both $a$ and $b$ depend on other
Feynman parameters
and the kinematic variables. The two arguments of the
function $I$ refer to lower and upper limits for $x_1$ integration.
To calculate $I(0,1)$, we
note that by extending the upper integration boundary to infinity, one
obtains an integral solvable in closed form for arbitrary $a,b$ and $n$.
On the other hand, since
\begin{equation}
I(0,1) = I(0,\infty) - I(1,\infty),
\end{equation}
and because the denominator of the integrand in $I(1,\infty)$ is sign-definite,
$I(1,\infty)$ can be computed numerically in a straightforward way. It turns
out that, up to minor modifications,
this trick can be used to avoid dealing with
the imaginary parts for all Feynman diagrams that contribute
to the one-loop corrections to the single-gluon emission process in
$b \to c$ decays.
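For orientation, we also record the closed form of $I(0,\infty)$; the short
derivation sketched here is ours and is meant only to illustrate the structure
of the result. Rescaling $x_1 = (a/b)\,t$ and splitting the $t$ integration at
$t=1$, where $(t-1+i0)^{-1-\epsilon} = e^{-i\pi(1+\epsilon)}(1-t)^{-1-\epsilon}$
for $t<1$, both pieces become Beta functions and one finds
\begin{equation}
I(0,\infty) = \frac{a^{n-2\epsilon}}{b^{n+1-\epsilon}}\,\Gamma(-\epsilon)
\left[ \frac{\Gamma(2\epsilon-n)}{\Gamma(\epsilon-n)}
- e^{-i\pi \epsilon}\,\frac{\Gamma(n+1-\epsilon)}{\Gamma(n+1-2\epsilon)} \right],
\end{equation}
understood as an analytic continuation in $\epsilon$. The phase
$e^{-i\pi\epsilon}$ carries the entire imaginary part, while the remaining
integrations over the other Feynman parameters can be performed numerically.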
Because the couplings of quarks and leptons to the charged current are chiral,
a proper treatment of the Dirac matrix
$\gamma_5$ in $d = 4-2\epsilon$ dimensions is important. While
this problem can be avoided in the computation of the
total decay rate, it becomes an issue for more differential
quantities.
We use the approach of Ref.~\cite{larin}, which provides a consistent framework
for extending the axial vector current to $d$ dimensions.
Our computation can be checked in a number of ways.
First, the double-virtual,
real-virtual and double-real corrections are divergent when taken separately,
but these divergences must cancel in physical observables.
We have checked these cancellations for a variety of observables,
from the inclusive rate to various moments with cuts on both charged lepton
energy and the hadronic invariant mass. Second, in the limit $m_c \to m_b$,
the NNLO QCD corrections to the decay rate $b \to cl\bar \nu_l$ are described
by the so-called zero-recoil form factors computed through
${\cal O}(\alpha_s^2)$ some time ago \cite{mcz0}. We have checked that in the
limit $m_c \to m_b$ our computation reproduces the zero-recoil form factors.
Third, we can use published results for
the BLM corrections to the total rate and charged lepton energy, hadronic
invariant mass and hadronic energy moments \cite{kolya2} to check
parts of our computation related to massless quark contributions to gluon
vacuum polarization. Finally, considering the limit $m_c \ll m_b$, we reproduce the NNLO QCD corrections to $b \to u l \bar \nu_l$
decay rate reported in Ref.\cite{timo}.
\section{Results}
We are now in a position to discuss the results of our computation. We consider
a number of observables, mostly for illustrative purposes.
We present the results in the pole mass scheme and use the strong
coupling constant renormalized at the scale $m_b$. While the pole mass
scheme is known to be an unfortunate choice as far as
the convergence of the perturbative expansion is concerned,
we decided to present
our results in this way for clarity. However, we emphasize that
the impact of the NNLO QCD corrections computed in this paper
on the determination
of $|V_{\rm cb}|$, the heavy quark masses and the non-perturbative parameters,
i.e. the expectation values of the kinetic and chromomagnetic heavy
quark operators, can only be assessed once the
pole mass scheme is abandoned in
favor of a more suitable quark mass definition and the NNLO QCD corrections
are included in the fit.
To present the results, we follow Ref.\cite{kolya2} and define
\begin{eqnarray}
L_n (E_{\rm cut}) = \frac{
\langle (E_l/m_b)^n\;\theta(E_l -E_{\rm cut} )\; {\rm d}\Gamma \rangle }
{\langle {\rm d}\Gamma_0 \rangle },
\label{eq_3_1} \\
H_n (E_{\rm cut}) = \frac{
\langle (E_h/m_b)^n\;\theta(E_l -E_{\rm cut} )\; {\rm d}\Gamma \rangle }
{\langle {\rm d}\Gamma_0 \rangle },
\label{eq_3_1a}
\end{eqnarray}
where $\langle ... \rangle $ denotes average over the phase-space of all
final state particles,
$E_{l,h}$ is the energy of the charged lepton or hadronic system
in the $b$-quark rest frame and
\begin{equation}
{\rm d} \Gamma_0 = \frac{G_F^2 |V_{\rm cb}|^2 m_b^5}{192 \pi^3}\; {\rm d}F_0.
\label{eq_3_2}
\end{equation}
The lepton energy moments introduced in Eq.(\ref{eq_3_1}) can be written as
\begin{equation}
L_n = L_n^{(0)} + a L_n^{(1)} +
a^2 \left ( \beta_0 L_n^{(2,{\rm BLM})} + L_n^{(2)} \right )+...,
\end{equation}
where the ellipsis stands for higher-order terms in the perturbative expansion
in QCD. A similar decomposition can be performed for the
hadronic energy moments $H_n$.
In addition, we use
$\beta_0 = 11 - 2/3 N_f$ and define the non-BLM corrections $L_n^{(2)},H_n^{(2)}$ as
the difference between the complete ${\cal O}(\alpha_s^2)$ correction
and the BLM correction computed with $N_f = 3$.
\begin{tiny}
\begin{table}[htbp]
\vspace{0.1cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline\hline
$n$ & $E_{\rm cut}$, GeV & $L_n^{(0)}$ & $L_n^{(1)}$ & $L_n^{(2,{\rm BLM})}$ & $L_n^{(2)}$ \\ \hline\hline
$0$ & $0$ & $1$ & $-1.77759$ & $-1.9170$ & $3.40$ \\ \hline
$1$ & $0$ & $0.307202$ & $-0.55126$ & $-0.6179$ & $1.11$ \\ \hline
$2$ & $0$ & $0.10299$ & $-0.1877$ & $-0.2175$ & $0.394$ \\ \hline \hline
$0$ & $1$ & $0.81483$ & $-1.4394$ & $-1.5999$ & $2.63$ \\ \hline
$1$ & $1$ & $0.27763$ & $-0.49755$ & $-0.5667$ & $1.00$ \\ \hline
$2$ & $1$ & $0.09793$ & $-0.17846$ & $-0.20875$ & $0.382$ \\ \hline \hline
\end{tabular}
\caption{\label{table1} Lepton energy moments.}
\vspace{-0.1cm}
\end{center}
\end{table}
\end{tiny}
Tables~\ref{table1} and \ref{table2} display
the results for the lepton energy and hadronic energy
moments with and without a
cut on the lepton energy. The numerical accuracy of
$L_n^{(0,1)},H_n^{(0,1)}$ and $L_n^{(2,\rm BLM)},H_n^{(2,\rm BLM)}$ is about $0.1-0.2\%$, whereas
the numerical accuracy of $L_n^{(2)},H_n^{(2)}$ is about $1-3\%$. It is possible to
improve on this accuracy, but doing so requires rather large CPU time. Nevertheless,
the achieved numerical accuracy is sufficient for all practical applications.
There are a few interesting observations
that follow from Tables~\ref{table1} and \ref{table2}.
Quite generally, the non-BLM and the BLM corrections
have opposite signs; given their relative magnitude and
the value of $\beta_0$, it is easy to see that the full ${\cal O}(\alpha_s^2)$
corrections
are about twenty percent smaller than the BLM-based estimates
suggest. The relative magnitude of the non-BLM and BLM corrections
is largely independent of $n$ and of whether the
lepton energy cut is applied.
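To make this observation explicit, we simply combine the entries of
Table~\ref{table1} with $\beta_0 = 11 - 2/3\,N_f = 9$ for $N_f=3$. For the
moments without a lepton energy cut,
\begin{equation}
\frac{L_0^{(2)}}{\beta_0 |L_0^{(2,{\rm BLM})}|} = \frac{3.40}{17.25} \approx 0.20,
\;\;\;\;
\frac{L_1^{(2)}}{\beta_0 |L_1^{(2,{\rm BLM})}|} = \frac{1.11}{5.56} \approx 0.20,
\;\;\;\;
\frac{L_2^{(2)}}{\beta_0 |L_2^{(2,{\rm BLM})}|} = \frac{0.394}{1.96} \approx 0.20,
\end{equation}
so that, for example, the complete second-order coefficient for the total rate,
$\beta_0 L_0^{(2,{\rm BLM})} + L_0^{(2)} \approx -13.9$, is indeed about twenty
percent smaller in magnitude than the pure BLM estimate
$\beta_0 L_0^{(2,{\rm BLM})} \approx -17.3$.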
\begin{tiny}
\begin{table}[htbp]
\vspace{0.1cm}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline\hline
$n$ & $E_{\rm cut}$, GeV & $H_n^{(0)}$ & $H_n^{(1)}$ & $H_n^{(2,{\rm BLM})}$ & $H_n^{(2)}$ \\ \hline\hline
$1$ & $1$ & $0.334$ & $-0.57728$ & $-0.6118$ & $1.02$ \\ \hline
$2$ & $1$ & $0.14111$ & $-0.23456$ & $-0.2343$ & $0.362$ \\ \hline \hline
\end{tabular}
\caption{\label{table2} Hadronic energy moments.}
\vspace{-0.1cm}
\end{center}
\end{table}
\end{tiny}
The first row of Table~\ref{table1} provides the NNLO QCD corrections to
the total decay rate $b \to cl\bar \nu_l$ in the pole mass scheme.
Such corrections were estimated earlier in Ref.\cite{mczi}. Note that
in Ref.\cite{mczi} the numerical results are given for the
ratio of quark masses $m_c/m_b = 0.3$ and also
the BLM corrections are defined with $N_f=4$, rather than $N_f = 3$.
Calculating the non-BLM corrections for the set of parameters
employed in \cite{mczi}, we find
$L_0^{(2)} \approx 1.73$ which is to be compared with the estimate
$L_0^{(2)} \approx 0.9(3)$, reported in \cite{mczi,comm}.
The results of Ref.\cite{mczi} were used in Ref.\cite{kolya3} to estimate the
impact of the QCD corrections on $\Gamma(B \to X_c l \nu_l)$.
In Ref.\cite{kolya3} the perturbative corrections to $b \to c l \bar \nu_l$
decay rate
are described by a factor $A^{\rm pert}$, defined as
\begin{equation}
\Gamma(b \to X_c l \bar \nu_l) = A^{\rm pert}(r) \; \langle {\rm d} \Gamma_0 \rangle,
\end{equation}
where $r = m_c/m_b$.
$A^{\rm pert}$ depends on the adopted scheme for the quark masses. In the
kinetic mass scheme, $A^{\rm pert}(0.25) = 0.908$ is quoted.
To arrive at this result,
Ref.\cite{kolya3} uses $L_0^{(2)} =1.4$ which
is about a factor $2.5$ smaller than the corresponding entry
in Table~\ref{table1}. Correcting for this discrepancy, we derive
\begin{equation}
A^{\rm pert}(0.25) = 0.919.
\end{equation}
We believe that this value for the perturbative renormalization factor in the kinetic scheme for $m_c/m_b = 0.25$ should be employed
in global fits of semileptonic $B$-decays.
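As a rough consistency check (the following estimate is ours and assumes, for
illustration only, $a = \alpha_s/\pi$ with $\alpha_s \approx 0.23$ at the
relevant scale, ignoring details of the scheme conversion), the shift in the
renormalization factor should be of order
$a^2\,(3.40 - 1.4) \approx 0.011$, which indeed reproduces the difference
$0.919 - 0.908$.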
Further analysis of entries in Table~\ref{table1} suggests that
the QCD corrections in general and the non-BLM corrections in
particular mostly affect the overall normalization
rather than the shapes of kinematic distributions. This follows from
the approximate independence of $L_n^{(1,2)}/L_n^{(0)}$ of $n$ and
also of whether or not the cut on the lepton energy is imposed.
It is therefore possible to speculate that
the non-BLM corrections computed in this Letter will mostly affect
the extraction of $|V_{\rm cb}|$ whereas their influence on, e.g.,
the $b$-quark mass determination
will be minor. Concerning $|V_{\rm cb}|$, the
increase of the perturbative renormalization factor $A^{\rm pert}$ by
$10 \times 10^{-3}$ implies a change in the value of
$|V_{\rm cb}|$, extracted in Ref.~\cite{fit2}, of about
$-0.25 \times 10^{-3}$. On the other hand, since the non-BLM corrections
were not included in the fit of Ref.~\cite{fit1}, the shift in the
value of $|V_{\rm cb}|$ derived in that reference will likely
be larger, $\sim -0.5 \times 10^{-3}$. Although the expected
shifts in central values
of $|V_{\rm cb}|$ are not large, we stress that they are comparable
to uncertainties in $|V_{\rm cb}|$, derived in the
global fits \cite{fit1,fit2}.
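The size of these shifts is easy to understand; the estimate below is ours and
uses $|V_{\rm cb}| \approx 42 \times 10^{-3}$ purely for illustration. Since a
measured rate fixes the combination $A^{\rm pert}\,|V_{\rm cb}|^2$, we have
\begin{equation}
\delta |V_{\rm cb}| \approx -\frac{1}{2}\,|V_{\rm cb}|\,
\frac{\delta A^{\rm pert}}{A^{\rm pert}} \approx
-\frac{1}{2} \times 42 \times 10^{-3} \times \frac{10 \times 10^{-3}}{0.91}
\approx -0.23 \times 10^{-3},
\end{equation}
in line with the numbers quoted above.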
\section{Conclusions}
In this Letter, the computation of the NNLO QCD corrections to the fully differential $b \to cl\bar \nu_l$ decay rate is reported.
The differential nature of the
computation makes it possible to apply arbitrary cuts on the kinematic
variables of final state particles. This result allows one to extend the existing
determinations of the CKM matrix element $|V_{\rm cb}|$, the bottom and charm
quark masses and the non-perturbative parameters $\mu_\pi^2$ and $\mu_G^2$
from global fits to semileptonic decays of $B$-mesons, by including the
NNLO QCD corrections {\it exactly}. We note that, for a consistent
high-precision analysis of semileptonic $B$-decays,
${\cal O}(\alpha_s)$ corrections
to the Wilson coefficients of the non-perturbative kinetic and chromomagnetic
operators are also required. Such a correction is available for the kinetic
operator \cite{tb} but is still missing for the chromomagnetic one.
We presented a few results for charged lepton energy moments and hadronic
energy moments with and without a cut on the lepton energy in the
pole mass scheme. These results suggest
that
the magnitude of the non-BLM corrections does not depend strongly on the
kinematics; the non-BLM corrections
are approximately $2\%$ for all the moments considered.
We therefore expect that the non-BLM NNLO QCD corrections will mostly
affect the determination of $|V_{\rm cb}|$, decreasing its central value
by about one percent, whereas their impact on the quark masses
and the non-perturbative parameters will probably be
quite mild.
As a final remark,
we note that it would be interesting to extend this calculation
in two ways. First, one may consider semileptonic decays of
$B$-mesons into massive leptons. Such an extension, relevant
for the description of the $B \to X_c + \tau + \bar \nu_\tau$ decay,
is straightforward. Second, it is interesting to extend the
current calculations to allow for a {\it massless} quark in the
final state. This is a difficult problem but it is highly
relevant for the determination of the CKM matrix element
$|V_{\rm ub}|$ from semileptonic $b \to u$ transitions.
\vspace*{0.2cm}
{\bf Acknowledgments}
Discussions with F.~Petriello and useful correspondence with T.~Becher
are gratefully acknowledged. I would like to thank A.~Czarnecki and
A.~Pak for informing me about their results prior to publication.
This research is partially supported
by the DOE under grant number
DE-FG03-94ER-40833.
\section{Introduction}
Let $K$ be a global field and let $L=(L_1,\dots,L_n)$ be an $n$-tuple ($n \geq 1$) of finite separable extensions of $K$. In this paper, we study the so-called \emph{multinorm principle} for $L$, which is said to hold if, for any $c \in K^*$, the affine $K$-variety
\begin{equation}\label{eq:Xc}
X_c : \prod\limits_{i=1}^{n} N_{L_i/K}(\Xi_i)=c
\end{equation}
\noindent satisfies the Hasse principle. In other words, $L$ satisfies the multinorm principle if, for all $c \in K^*$, the existence of points on $X_c$ over every completion of $K$ implies the existence of a $K$-point.
From a geometric viewpoint, $X_c$ defines a principal homogeneous space under the \emph{multinorm one torus} $T$, defined by the exact sequence of $K$-algebraic groups
$$ 1 \to T \to \prod\limits_{i=1}^{n} R_{L_i/K} \Gm \xrightarrow{\prod_i N_{L_i/K}} \Gm \to 1,$$
\noindent where $R_{L_i/K} \Gm$ denotes the Weil restriction of $\Gm$ from $L_i$ to $K$. In this way, the Tate-Shafarevich group $\Sha(T)$ of $T$ is naturally identified with the \textit{obstruction to the multinorm principle} for $L$, defined as
$$\mathfrak{K}(L,K)=K^* \cap \prod\limits_{i=1}^{n} N_{L_i/K}(\mathbb{A}^*_{L_i}) / \prod\limits_{i=1}^{n} N_{L_i/K}(L_i^*),$$
\noindent where $\mathbb{A}^{*}_{L_i}$ denotes the id\`{e}le group of $L_i$ and the multinorm principle holds if and only if $\mathfrak{K}(L,K)=1$.
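Even for a single extension ($n=1$) this principle can fail: a classical example is the biquadratic extension $\mathbb{Q}(\sqrt{13},\sqrt{17})/\mathbb{Q}$, for which the obstruction group above is non-trivial (see, e.g., \cite[Exercise 5.3]{C-F}).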
In the toric case, the Hasse principle for principal homogeneous spaces is strikingly connected with \emph{weak approximation}. This property is said to hold for a torus $T$ over $K$ if the \textit{defect of weak approximation}
$$A(T) =\prod\limits_v T(K_v)/\overline{T(K)}$$
\noindent is trivial (here $\overline{T(K)}$ denotes the closure of $T(K)$ in $\prod_v T(K_v)$ with respect to the product topology). In \cite[\S11.6]{Vosk}, Voskresenski\u{\i} showed the existence of an exact sequence
\begin{equation}\label{eq:Vosk}
0 \to A(T) \to \operatorname{H}^1(K,\Pic \overline{X})^{\vee} \to \Sha (T) \to 0,
\end{equation}
\noindent where $X$ denotes a smooth compactification of $T$, $\overline{X}$ the base change of $X$ to an algebraic closure of $K$ and $\phantom{ }^\vee$ stands for the Pontryagin dual of an abelian group.
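In particular, if $\operatorname{H}^1(K,\Pic \overline{X})=0$, then the exact sequence \eqref{eq:Vosk} shows that both $A(T)$ and $\Sha (T)$ are trivial, so that weak approximation and the Hasse principle (for principal homogeneous spaces) both hold for $T$.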
Returning to the multinorm principle, when $n=1$ one recovers the classical \emph{Hasse norm principle} (HNP), a topic that has been extensively studied in the literature (see e.g. \cite[\S 6.3]{Platonov} or \cite[\S 1]{MN19} for a survey of known results). If $L/K$ is Galois, then there is an explicit description of the obstruction to the HNP (due to Tate in \cite[p. 198]{C-F}) in terms of the group cohomology of its local and global Galois groups. Drakokhrust later obtained (in \cite{Drak}) a more general description of this obstruction for an arbitrary extension $L/K$ in terms of generalized representation groups.
For $n > 1$, such a description has not yet been obtained. Nonetheless, multiple cases have been analyzed in the literature. For example, if $n=2$ it is known that the multinorm principle holds if
\begin{enumerate}
\item\label{hur} $L_1$ or $L_2$ is a cyclic extension of $K$ (\cite[Proposition 3.3]{Hur});
\item\label{pras} $L_1/K$ is abelian, satisfies the HNP and $L_2$ is linearly disjoint from $L_1$ (\cite[Proposition 4.2]{Prar});
\item\label{PR_res} the Galois closures of $L_1/K$ and $L_2/K$ are linearly disjoint over $K$ (\cite{PR}).
\end{enumerate}
\noindent Subsequent work of Demarche and Wei provided a generalization of the result in \eqref{PR_res} to $n$ extensions (\cite[Theorems 1 and 6]{demarche}), while also addressing weak approximation for the associated multinorm one torus. In \cite{pollio}, Pollio computed the obstruction to the multinorm principle for a pair of abelian extensions and, in \cite{eva}, Bayer-Fluckiger, Lee and Parimala provided sufficient and necessary conditions for the multinorm principle to hold assuming that one of the extensions $L_i/K$ is cyclic.
In this paper, we provide an explicit description of the obstructions to the multinorm principle and weak approximation for the multinorm one torus of $n$ arbitrary extensions. To achieve this, we generalize the concept (due to Drakokhrust and Platonov in \cite{DP}) of the \emph{first obstruction to the Hasse principle} (see Section \ref{sec:1st_obs}). By then adapting work of Drakokhrust (\cite{Drak}), we obtain our main result (Theorem \ref{thm:main_result}), describing the obstructions to the multinorm principle and weak approximation for the multinorm one torus in terms of generalized representation groups of the relevant local and global Galois groups. The formulas given in Theorem \ref{thm:main_result} are effectively computable and we provide algorithms in GAP \cite{gap} for this effect (see Remark \ref{rem:finite_time2}).
We also apply our techniques to describe the validity of the local-global principles in three concrete examples (see Section \ref{sec:applications}). We start by proving a result inspired by \cite[Theorem 6]{demarche} that compares the birational invariants $\operatorname{H}^1(K,\Pic \overline{X})$ and $\operatorname{H}^1(K,\Pic \overline{Y})$, where $Y$ is a smooth compactification of the norm one torus $S=R^1_{F/K} {\mathbb G}_m$ of the extension $F=\bigcap\limits_{i=1}^{n} L_i$. In particular, we show (Theorem \ref{thm:demarch_wei_thm}) that under certain conditions there is an isomorphism $$\operatorname{H}^1(K,\Pic \overline{X}) \xrightarrow{\simeq}\operatorname{H}^1(K,\Pic \overline{Y}).$$
\noindent This result further allows us to compare the defect of weak approximation for $T$ with that for $S$ (Corollary \ref{cor:dem1}).
Under the same assumptions, we also show (Theorem \ref{thm:pollio}) the existence of isomorphisms $$\mathfrak{K}(L,K) \cong \mathfrak{K}(F/K) \textrm{ and } A(T) \cong A(S)$$
\noindent when all the extensions $L_i/K$ are abelian. This theorem generalizes Pollio's result (in \cite{pollio}) on the obstruction to the multinorm principle for a pair of abelian extensions.
In Section \ref{sec:eva} we complement \cite[Theorem 8.3]{eva} by providing a characterization (Theorem \ref{thm:eva}) of weak approximation for the multinorm one torus of $n$ non-isomorphic cyclic extensions of prime degree $p$. More precisely, we show that both the multinorm principle and weak approximation for $T$ hold if $[L_1 \dots L_n : K]>p^2$. Otherwise, weak approximation holds if and only if the multinorm principle fails (a property that can be detected by precise local conditions, see Remark \ref{rem:eva}). While preparing this paper, we became aware of the recent (and independent) work of Lee \cite{lee}, who extends results of \cite[\S8]{eva} to provide a description of the multinorm principle and weak approximation for the multinorm one torus of $n$ non-isomorphic cyclic extensions (and, in this way, obtains a result more general than Theorem \ref{thm:eva}).
\subsection*{Notation}
Given a global field $K$, we denote its set of places by $\Omega_K$. For $v \in \Omega_K$, we use the notation $K_v$ for the completion of $K$ at $v$ and, if $L$ is a Galois extension of $K$, we denote by $G_v$ a choice of decomposition group of $L/K$ at $v $.
Given a finite group $G$, a subgroup $H$ of $G$, a $G$-module $A$, an integer $q$ and a prime number $p$, we use the notation:
\begin{longtable}{p{1.5cm} p{14cm}}
$|G|$ & the order of $G$\\
$Z(G)$ & the center of $G$\\
$[H,G]$ & the subgroup of $G$ generated by all commutators $[h,g]$ with $h \in H,g \in G$\\
$\Phi^{G}(H)$ & the subgroup of $H$ generated by all commutators $[h,g]$ with $h \in H \cap g H g^{-1},g \in G$\\
$G^{ab}$ & the abelianization $G/[G,G]$ of $G$\\
$G_p$ & a Sylow $p$-subgroup of $G$\\
$\hat{\operatorname{H}}^q(G,A)$ & the $q$-th Tate cohomology group
\end{longtable}
\noindent We also often use the notation $G'$ for the derived subgroup $[G,G]$ of $G$. If $H$ is a normal subgroup of $G$, we write $H \trianglelefteq G$. For $x,y \in G$ we adopt the convention $[x,y]=x^{-1}y^{-1}xy$ and $x^y=y^{-1}xy$. If $G$ is abelian, we denote its $p$-primary part by $G_{(p)}$.
\subsection*{Acknowledgements}
I would like to thank Prof. Eva Bayer-Fluckiger for a conversation that motivated this work and my supervisor Rachel Newton for useful discussions on the manuscript and for pointing out the recent preprint \cite{lee}. This work was supported by the FCT doctoral scholarship SFRH/BD/117955/2016.
\section{The first obstruction to the multinorm principle}\label{sec:1st_obs}
In this section we define the concept of the first obstruction to the multinorm principle and present several of its properties. We fix a global field $K$, an $n$-tuple $L=(L_1,\dots,L_n)$ of finite separable extensions of $K$ and a Galois extension $N/K$ containing all the fields $L_1,\dots,L_n$. We denote $G=\Gal(N/K)$, $H_i=\Gal(N/L_i)$ for $i=1,\dots,n$ and $H=\langle H_1,\dots,H_n \rangle$, the subgroup of $G$ generated by all the $H_i$. Note that $H=\Gal(N/F)$, where $F=\bigcap\limits_{i=1}^{n} L_i$.
\begin{definition}
We define the \emph{first obstruction to the multinorm principle for $L$ corresponding to $(N,L,K)$} as
$$\mathfrak{F}(N,L,K)=K^* \cap \prod\limits_{i=1}^{n} N_{L_i/K}(\mathbb{A}^{*}_{L_i}) / \prod\limits_{i=1}^{n} N_{L_i/K}(L_i^*)(K^* \cap N_{N/K}(\mathbb{A}^{*}_{N})).$$
\end{definition}
\begin{remark}
This notion generalizes the concept (introduced by Drakokhrust and Platonov in \cite{DP}) of the \emph{first obstruction to the Hasse principle for $L/K$ corresponding to a tower of fields $N/L/K$}, defined as $\mathfrak{F}(N/L/K)=K^* \cap N_{L/K}(\mathbb{A}^{*}_{L}) / N_{L/K}(L^*)(K^* \cap N_{N/K}(\mathbb{A}^{*}_{N})).$
\end{remark}
The first obstruction to the multinorm principle has various useful properties -- for example, it is clear from the definition that the total obstruction to the multinorm principle $\mathfrak{K}(L,K)$ surjects onto $\mathfrak{F}(N,L,K)$ with equality if the Hasse norm principle holds for $N/K$. Moreover, this equality also happens if the first obstruction to the Hasse principle for some extension $L_i/K$ coincides with the total obstruction to the Hasse norm principle $\mathfrak{K}(L_{i}/K)=K^* \cap N_{L_i/K}(\mathbb{A}^{*}_{L_i}) / N_{L_i/K}(L_i^*)$ (called the \emph{knot group} of $L_i/K$):
\begin{lemma}\label{lem:1stobs_equal_multiknot}
If $\mathfrak{K}(L_{i}/K)=\mathfrak{F}(N/L_{i}/K) $ for some $i =1,\dots,n$, then $\mathfrak{K}(L,K)=\mathfrak{F}(N,L,K)$.
\end{lemma}
\begin{proof}
The assumption translates into $K^* \cap N_{N/K}(\mathbb{A}^*_N) \subset N_{L_{i}/K}(L_{i}^*) $. This implies that \newline ${\prod\limits_{i=1}^{n}N_{L_i/K}(L_i^*)(K^* \cap N_{N/K}(\mathbb{A}^*_N)) = \prod\limits_{i=1}^{n} N_{L_i/K}(L_i^*)}$ and hence $\mathfrak{K}(L,K)=\mathfrak{F}(N,L,K)$.\end{proof}
\begin{corollary}\label{cor:square_free}
If $[L_{i}:K]$ is square-free for some $i = 1,\dots,n$, then $\mathfrak{K}(L,K)=\mathfrak{F}(N,L,K)$.
\end{corollary}
\begin{proof}
By \cite[Corollary 1]{DP}, if $[L_{i}:K]$ is square-free, then $ \mathfrak{K}(L_{i}/K)=\mathfrak{F}(N/L_{i}/K)$. Now apply Lemma \ref{lem:1stobs_equal_multiknot}.
\end{proof}
More generally, one has the following criterion (extending \cite[Theorem 3]{DP}) for the equality $\mathfrak{K}(L,K)=\mathfrak{F}(N,L,K)$.
\begin{proposition}
\label{prop:1stobs_cor}
Let $k_1,\dots,k_n$ be positive integers. For each $i=1,\dots,n$, choose a collection of $k_i$ subgroups $G_{i,1},\dots,G_{i,k_i}$ of $G$ and $k_i$ subgroups $H_{i,1},\dots,H_{i,k_i}$ such that $H_{i,j} \subset H_i \cap G_{i,j}$ for any $j=1,\dots, k_i$. Set $L_{i,j}=N^{H_{i,j}}$ and $K_{i,j}=N^{G_{i,j}}$ for all $i,j$. Suppose that the Hasse norm principle holds for all the extensions $L_{i,j}/K_{i,j}$ and that the map
$$\bigoplus\limits_{i=1}^{n}\bigoplus\limits_{j=1}^{k_i} \Cor^{G}_{G_{i,j}}:\bigoplus\limits_{i=1}^{n}\bigoplus\limits_{j=1}^{k_i} \hat{\operatorname{H}}^{-3}(G_{i,j},\mathbb{Z}) \to \hat{\operatorname{H}}^{-3}(G,\mathbb{Z}) $$
\noindent is surjective. Then $ \mathfrak{K}(L,K)=\mathfrak{F}(N,L,K)$.
\end{proposition}
\begin{proof}
The statement follows from an argument analogous to the one given by Drakokhrust and Platonov for the Hasse norm principle case, see \cite[Theorem 3]{DP}.
\end{proof}
A further trait of the first obstruction to the multinorm principle $\mathfrak{F}(N,L,K)$ is that it can be expressed in terms of the local and global Galois groups of the towers $N/L_i/K$ (in a similar fashion to the first obstruction to the Hasse norm principle). In order to prove this, we mimic the work of Drakokhrust and Platonov in \cite[\S 2]{DP}. We will use the following lemma:
\begin{lemma}\cite[Lemma 1]{DP}\label{lem1DP} Let $N/L/K$ be a tower of global fields with $N/K$ Galois. Set $G=\Gal(N/K)$ and $H=\Gal(N/L)$. Then, given a place $v$ of $K$, the set of places $w$ of $L$ above $v$ is in bijection with the set of double cosets in the decomposition $G = \bigcup\limits_{i=1}^{r_v} H x_i G_v$. If $w$ corresponds to $H x_{i} G_v$, then the decomposition group $H_w$ of the extension $N/L$ at ${w}$ equals $H \cap x_{i} G_v x_{i}^{-1}$.
\end{lemma}
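For instance (a toy illustration), suppose that $G=S_3$ and $H=\langle (12) \rangle$, so that $[L:K]=3$. For a place $v$ with $G_v=\langle (12) \rangle$ one has the double coset decomposition $G=H \cdot 1 \cdot G_v \cup H (13) G_v$, so there are exactly two places $w_1,w_2$ of $L$ above $v$, with decomposition groups $H_{w_1}=H \cap G_v=H$ and $H_{w_2}=H \cap (13) G_v (13)^{-1}=H \cap \langle (23) \rangle = 1$.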
In our situation, for any $v \in \Omega_K$ and $i=1,\dots,n$, let $G=\bigcup\limits_{k=1}^{r_{v,i}} H_i x_{i,k} G_v$ be a double coset decomposition. By the above lemma, $H_{i,w}:= H_i \cap x_{i,k} G_v x_{i,k}^{-1}$ is the decomposition group of $N/L_i$ at a place $w$ of $L_i$ above $v$ corresponding to the double coset $H_i x_{i,k} G_v$. Now consider the commutative diagram:
\begin{equation}\label{diag:1stobs_defn}
\xymatrix{
\bigoplus\limits_{i=1}^{n} {H}_i^{\textrm{ab}} \ar[r]^{{\psi}_1} & {G}^{\textrm{ab}}\\
\bigoplus\limits_{i=1}^{n}(\bigoplus\limits_{v \in \Omega_K} ( \bigoplus\limits_{w|v} {{H}_{i,w}^{\textrm{ab}} }) ) \ar[r]^{\ \ \ \ \ \ \ {\psi}_2} \ar[u]^{{\varphi}_1 }&\bigoplus\limits_{v \in \Omega_K}{{G}_v^{\textrm{ab}} }\ar[u]_{\varphi_2}
}
\end{equation}
\noindent Here the superscript$\phantom{a}^{\textrm{ab}}$ above a group denotes its abelianization and the inner sum over $w|v$ runs over all the places $w$ of $L_i$ above $v$. Additionally, the maps $\varphi_1,\psi_1$ and $\varphi_2$ are induced by the inclusions $H_{i,w} \hookrightarrow H_i, H_i \hookrightarrow G$ and $G_v \hookrightarrow G$, respectively, while $\psi_2$ is obtained from the product of all conjugation maps $H_{i,w}^{ab} \to G_v^{ab}$ sending $h_{i,k} [H_{i,w},H_{i,w}]$ to $x_{i,k}^{-1} h_{i,k} x_{i,k} [G_v,G_v]$. We denote
by ${\psi}_2^{v}$ (respectively, ${\psi}_2^{nr}$) the restriction of the map ${\psi}_2$ to the subgroup $\bigoplus\limits_{i=1}^{n} ( \bigoplus\limits_{w|v} {{H}_{i,w}^{\textrm{ab}} }) $ (respectively, $\bigoplus\limits_{i=1}^{n}(\bigoplus\limits_{\substack{v \in \Omega_K \\ v \text{ unramified}}} ( \bigoplus\limits_{w|v} {{H}_{i,w}^{\textrm{ab}} }) )$). With this notation set, we can now establish the main result of this section (generalizing \cite[Theorem 1]{DP}):
\begin{theorem}\label{thm:thm1DP_gen}
In the notation of diagram \eqref{diag:1stobs_defn}, we have
$$\mathfrak{F}(N,L,K) \cong \ker\psi_1/\varphi_1(\ker\psi_2).$$
\end{theorem}
\begin{proof}
Diagram \eqref{diag:1stobs_defn} can be written as
\begin{equation}\label{diag:1stobs_defn2}
\xymatrix{
\bigoplus\limits_{i=1}^{n} \hat{\operatorname{H}}^{-2}({H}_i,\mathbb{Z}) \ar[r]^{{\psi}_1} & \hat{\operatorname{H}}^{-2}(G,\mathbb{Z})\\
\bigoplus\limits_{i=1}^{n}(\bigoplus\limits_{v \in \Omega_K} ( \bigoplus\limits_{w|v} { \hat{\operatorname{H}}^{-2}(H_{i,w},\mathbb{Z}) }) ) \ar[r]^{\ \ \ \ \ \ \ {\psi}_2} \ar[u]^{{\varphi}_1}&\bigoplus\limits_{v \in \Omega_K}{ \hat{\operatorname{H}}^{-2}(G_v,\mathbb{Z}) }\ar[u]_{\varphi_2}
}
\end{equation}
\noindent By the local (respectively, global) Artin isomorphism, we have $\hat{\operatorname{H}}^{-2}(H_{i,w},\mathbb{Z}) \cong \hat{\operatorname{H}}^{0}(H_{i,w},N_w^*)$ and $\hat{\operatorname{H}}^{-2}(G_v,\mathbb{Z}) \cong \hat{\operatorname{H}}^{0}(G_v,N_v^*)$ (respectively, $\hat{\operatorname{H}}^{-2}({H}_i,\mathbb{Z}) \cong \hat{\operatorname{H}}^{0}(H_i,C_N)$ and $\hat{\operatorname{H}}^{-2}(G,\mathbb{Z}) \cong \hat{\operatorname{H}}^{0}(G,C_N)$, where $C_N$ is the idèle class group of $N/K$). Additionally, by \cite[Proposition 7.3(b)]{C-F} there are identifications $\bigoplus\limits_{v \in \Omega_K}( \bigoplus\limits_{w|v} \hat{\operatorname{H}}^{0}(H_{i,w},N_w^*)) \cong \hat{\operatorname{H}}^{0}(H_{i},\mathbb{A}^{*}_N)$ and $\bigoplus\limits_{v \in \Omega_K} \hat{\operatorname{H}}^{0}(G_v,N_v^*) \cong \hat{\operatorname{H}}^{0}(G,\mathbb{A}_{N}^{*})$. In this way, an argument analogous to the one given in \cite[\S 2]{DP} for the $n=1$ case shows that diagram \eqref{diag:1stobs_defn2} induces the commutative diagram
\begin{equation}\label{diag:1stobs_defn3}
\xymatrix{
\bigoplus\limits_{i=1}^{n} \hat{\operatorname{H}}^{0}({H}_i,C_N) \ar[r]^{\ {\psi}_1} & \hat{\operatorname{H}}^{0}(G,C_N)\\
\bigoplus\limits_{i=1}^{n} \hat{\operatorname{H}}^{0}(H_i,{\mathbb A}^*_{N}) \ar[r]^{\ {\psi}_2} \ar[u]^{{\varphi}_1}& \hat{\operatorname{H}}^{0}(G,{\mathbb A}^*_{N}) \ar[u]_{\varphi_2}
}
\end{equation}
\noindent where $\varphi_{1},\varphi_2$ are the natural projections and $\psi_{1},\psi_2$ are induced by the product of the norm maps $N_{L_i/K}$. Using the definition of the cohomology group $\hat{\operatorname{H}}^0$, this diagram is equal to
\begin{equation}\label{diag:1stobs_defn4}
\xymatrix{
\bigoplus\limits_{i=1}^{n} \frac{\mathbb{A}_{L_i}^{*}}{L_i^{*} N_{N/L_i}(\mathbb{A}^{*}_{N})} \ar[r]^{{\psi}_1} & \frac{\mathbb{A}_{K}^{*}}{K^{*} N_{N/K}(\mathbb{A}^{*}_{N})}\\
\bigoplus\limits_{i=1}^{n} \frac{\mathbb{A}_{L_i}^{*}}{ N_{N/L_i}(\mathbb{A}^{*}_{N})} \ar[r]^{\ \ \ {\psi}_2} \ar[u]^{{\varphi}_1}&{ \frac{\mathbb{A}_{K}^{*}}{ N_{N/K}(\mathbb{A}^{*}_{N})} }\ar[u]_{\varphi_2}
}
\end{equation}
\noindent From diagram \eqref{diag:1stobs_defn4}, it is clear that
$$\ker \psi_1 = \{ (x_i L_i^* N_{N/L_i}(\mathbb{A}^{*}_{N}))_{i=1}^{n} | \prod\limits_{i=1}^{n} N_{L_i/K}(x_i) \in K^* N_{N/K}(\mathbb{A}^{*}_N)\}$$
and
$$\varphi_1(\ker \psi_2) = \{ (x_i L_i^* N_{N/L_i}(\mathbb{A}^{*}_{N}))_{i=1}^{n} | \prod\limits_{i=1}^{n} N_{L_i/K}(x_i) \in N_{N/K}(\mathbb{A}^{*}_N)\}.$$
\noindent Now define
\begin{align*}
f \colon \ker \psi_1 / \varphi_1(\ker \psi_2) &\longrightarrow \mathfrak{F}(N,L,K) \\
(x_i L_i^* N_{N/L_i}(\mathbb{A}^{*}_{N}))_{i=1}^{n} &\longmapsto x \prod\limits_{i=1}^{n} N_{L_i/K}({L_i}^{*}) (K^* \cap N_{N/K}(\mathbb{A}^{*}_{N}) )
\end{align*}
\noindent where $x$ is any element of $ K^* \cap \prod\limits_{i=1}^{n} N_{L_i/K}(\mathbb{A}^{*}_{L_i})$ such that $\prod\limits_{i=1}^{n} N_{L_i/K}(x_i) \in x N_{N/K}(\mathbb{A}^{*}_N)$. It is straightforward to check that $f$ is well defined and an isomorphism.\end{proof}
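As a simple illustration of Theorem \ref{thm:thm1DP_gen}, take $n=2$ and let $L=(L_1,L_2)$ be a pair of distinct quadratic extensions of $K$, with $N=L_1 L_2$, $G \cong (\mathbb{Z}/2)^2$ and $H_1,H_2$ the two distinct subgroups of order $2$ fixing $L_1,L_2$. Since $G$ is abelian, an element $(h_1,h_2) \in \ker \psi_1$ satisfies $h_1 h_2=1$, whence $h_1=h_2 \in H_1 \cap H_2=1$ and $\ker \psi_1$ is trivial. Therefore $\mathfrak{F}(N,L,K)=1$ and, as $[L_i:K]=2$ is square-free, Corollary \ref{cor:square_free} shows that the multinorm principle holds for $L$, in accordance with \eqref{hur}.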
\begin{remark}\label{rem:finite_time}
Given the knowledge of the local and global Galois groups of the towers $N/L_i/K$, the first obstruction to the multinorm principle can be computed in finite time by employing Theorem \ref{thm:thm1DP_gen}. First, it is clear that the computation of the groups $\ker \psi_1$ and $\varphi_1(\ker \psi_2^{v})$ for the ramified places $v$ of $N/K$ is finite. Moreover, from the definition of the maps in diagram \eqref{diag:1stobs_defn}, it is clear that if $v_1,v_2 \in \Omega_K$ are such that $G_{v_1}=G_{v_2}$, then $\varphi_1(\ker \psi_2^{v_1}) = \varphi_1(\ker \psi_2^{v_2})$. This shows that the computation of $\varphi_1(\ker \psi_2^{nr})$ is also finite. On this account, we designed a function in GAP \cite{gap} (whose code is available in \cite{macedo_code}) that takes as input the Galois groups $G, H_i$ and the decomposition groups $G_v$ at the ramified places of $N/K$ and outputs the group $\mathfrak{F}(N,L,K)$.\end{remark}
We conclude this section by providing two results that further reduce the amount of calculations necessary to compute $\mathfrak{F}(N,L,K)$ via Theorem \ref{thm:thm1DP_gen}. These are inspired by the same properties of the first obstruction to the Hasse norm principle (in \cite[\S 3]{DP}) and proved in the same way.
\begin{lemma}{\cite[Lemma 2]{DP}}
Let $v_1,v_2 \in \Omega_K$ be such that $G_{v_2} \subset G_{v_1}$. Then, in the notation of diagram \eqref{diag:1stobs_defn}, we have $$\varphi_1(\ker \psi_2^{v_2}) \subset \varphi_1(\ker \psi_2^{v_1}).$$
\end{lemma}
\begin{lemma}{\cite[Lemma 3]{DP}}
Let $v_1,v_2 \in \Omega_K$ be such that $G_{v_1} M = G_{v_2} M$ for some subgroup $M \subset Z(G) \cap \bigcap\limits_{i=1}^{n} H_i$. Then, in the notation of diagram \eqref{diag:1stobs_defn}, we have $$\varphi_1(\ker \psi_2^{v_1})= \varphi_1(\ker \psi_2^{v_2}).$$
\end{lemma}
\section{Generalized representation groups}\label{sec:gen_gps}
In this section we prove that the obstruction to the multinorm principle for $L$ can always be expressed in terms of the arithmetic of the extensions $L_i/K$ by using generalized representation groups (see Definition \ref{gen_rep_gp_defn} below) of $G=\Gal(N/K)$. Once again, many of the results in this section are inspired by and generalize Drakokhrust's work \cite{Drak} on the Hasse norm principle.
\begin{definition}\label{gen_rep_gp_defn}
Let $G$ be a finite group. A finite group $\overline{G}$ is called a \emph{generalized representation group} of $G$ if there exists a central extension
$$1 \to M \to \overline{G} \xrightarrow[]{\lambda} G \to 1$$
\noindent such that $M \cap [\overline{G},\overline{G}] \cong \hat{\operatorname{H}}^{-3}(G,\mathbb{Z})$. We call $M$ the base normal subgroup of $\overline{G}$. If in addition $M \subset [\overline{G},\overline{G}]$, we say that $\overline{G}$ is a \textit{Schur covering group of $G$}.
\end{definition}
\begin{proposition}\label{prop:1st=knot}
There exists a Galois extension $P/K$ containing $N$ and such that $$ \mathfrak{F}(P,L,K)=\mathfrak{K}(L,K).$$
\noindent Furthermore, this extension has the property that $\overline{G}=\Gal(P/K)$ is a generalized representation group of $G$ with base normal subgroup $\overline{M}=\Gal(P/N)$ and if $\overline{\lambda}:\overline{G} \to G$ is the associated projection map, we have $\Gal(P/L_i)=\overline{\lambda}^{-1}(H_i)$.
\end{proposition}
\begin{proof}
It follows from the proof of \cite[Lemma 1]{Drak} (see also \cite[Satz 3]{opolka}) that there exists a Galois extension $P/K$ such that the first obstruction to the Hasse norm principle $ \mathfrak{F}(P/L_i/K)$ coincides with the knot group $\mathfrak{K}(L_i/K)$ for all $L_i \in L$. Now apply Lemma \ref{lem:1stobs_equal_multiknot}. The stated properties of $P/K$ are shown in the references given above.
\end{proof}
As remarked in \cite{Drak}, the extension $P/K$ is not uniquely determined and the computation of its arithmetic is not always easy. Nonetheless, one can still compute $\mathfrak{F}(P,L,K)$ by starting from an arbitrary generalized representation group of $G$.
Let $\widetilde{G}$ be any generalized representation group of $G$ with projection map $\widetilde{\lambda}$ and base normal subgroup $\widetilde{M}$. For any subgroup $B$ of $G$, define $\widetilde{B}=\widetilde{\lambda}^{-1}(B)$. We will use the following auxiliary lemma:
\begin{lemma}\label{lem:tau_1}
There exists an isomorphism $$\tau:[\widetilde{G},\widetilde{G}] \xrightarrow[]{\simeq} [\overline{G},\overline{G}]$$
\noindent with the following properties:
\begin{enumerate}[label=(\roman{*})]
\item\label{prop_tau1} $\overline{\lambda}(\tau(a))=\widetilde{\lambda}(a)$ for every $a \in [\widetilde{G},\widetilde{G}]$;
\item\label{prop_tau2} $\tau([\widetilde{g}_1,\widetilde{g}_2])=[\overline{g}_1,\overline{g}_2]$ for all $\widetilde{g}_1,\widetilde{g}_2 \in \widetilde{G}$ and $\overline{g}_1,\overline{g}_2 \in \overline{G}$ such that $\widetilde{\lambda}(\widetilde{g}_i)=\overline{\lambda}(\overline{g}_i)$.
\end{enumerate}
\noindent For any subgroup $B$ of $G$, $\tau$ further identifies
\begin{itemize}
\item $[\widetilde{B},\widetilde{B}] \cong [\overline{B},\overline{B}]$ and
\item $\widetilde{M} \cap [\widetilde{B},\widetilde{B}] \cong \overline{M} \cap [\overline{B},\overline{B}].$
\end{itemize}
\end{lemma}
\begin{proof}
The isomorphism $\tau$ is constructed in \cite[Theorems 2.4.6(iv) and 2.5.1(i)]{kar} and the stated properties are clear from this construction. The additional identifications follow from \ref{prop_tau1} and \ref{prop_tau2}.\end{proof}
Let $R$ be the set of ramified places of $N/K$. For any $v \in \Omega_K$, set
\[\widetilde{S}_v=\begin{cases}
\widetilde{G_v}\textrm{, if $v \in R$,}\\
\textrm{a cyclic subgroup of } \widetilde{G_v} \textrm{ such that } \widetilde{\lambda}(\widetilde{S}_v) = G_v \textrm{, otherwise.}
\end{cases}\]
\noindent Furthermore, by the Chebotarev density theorem we can (and do) choose the subgroups $\widetilde{S}_v$ for $v \not\in R$ in such a way that all the cyclic subgroups of $\widetilde{G_v}$ such that $\widetilde{\lambda}(\widetilde{S}_v) = G_v$ occur.
\begin{remark}
As pointed out in \cite[p. 31]{Drak}, a double coset decomposition $\overline{G}=\bigcup\limits_{k=1}^{r_{v,i}} \overline{H_i} \overline{x}_{i,k} \overline{G_v}$ corresponds to a double coset decomposition $\widetilde{G}=\bigcup\limits_{k=1}^{r_{v,i}} \widetilde{H_i} \widetilde{x}_{i,k} \widetilde{S}_v$, where $\widetilde{x}_{i,k}$ are any elements of $\widetilde{G}$ such that $\widetilde{\lambda}(\widetilde{x}_{i,k} )=\overline{\lambda}(\overline{x}_{i,k} )$.
\end{remark}
Consider the following diagram analogous to \eqref{diag:1stobs_defn}:
\begin{equation}\label{diag:1stobs_defn_generalized}
\xymatrix{
\bigoplus\limits_{i=1}^{n} \widetilde{H_i}^{\textrm{ab}} \ar[r]^{\widetilde{\psi}_1} & \widetilde{G}^{\textrm{ab}}\\
\bigoplus\limits_{i=1}^{n}(\bigoplus\limits_{v \in \Omega_K} ( \bigoplus\limits_{w|v} {\widetilde{H}_{i,w}^{\textrm{ab}} }) ) \ar[r]^{\ \ \ \ \ \ \ \widetilde{\psi}_2} \ar[u]^{\widetilde{\varphi}_1}&\bigoplus\limits_{v \in \Omega_K}{\widetilde{S}_v^{\textrm{ab}} }\ar[u]_{\widetilde{\varphi}_2}
}
\end{equation}
\noindent where $\widetilde{H}_{i,w}=\widetilde{H_i} \cap \widetilde{x}_{i,k} \widetilde{S}_v \widetilde{x}_{i,k}^{-1}$ and all the maps are defined as in diagram \eqref{diag:1stobs_defn}.
We now prove the main result of this section, namely that the object $\ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2)$ does not depend on the choice of generalized representation group (and thus, by Theorem \ref{thm:thm1DP_gen} and Proposition \ref{prop:1st=knot}, it always coincides with $\mathfrak{K}(L,K)$). Before we show this, we need a lemma. To ease the notation, we often omit the cosets $\widetilde{H_i}'$ and $\overline{H_i}'$ when working with elements of $\ker \widetilde{\psi}_1$ or $\ker \overline{\psi}_1$.
\begin{lemma}\label{lem:simpl_inters}
For any indices $1 \leq i_1 < i_2 \leq n$ and any $m \in \widetilde{H_{i_1}} \cap\widetilde{H_{i_2}} $, we have
$$h=(1,\dots,\underbrace{m}_{i_1\textrm{-th entry}}, 1,\dots, 1, \underbrace{m^{-1}}_{i_2\textrm{-th entry}}, 1,\dots, 1) \in \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}).$$
\end{lemma}
\begin{proof}
We construct a vector $\alpha \in \bigoplus\limits_{i=1}^{n}(\bigoplus\limits_{\substack{v \in \Omega_K \\ v \text{ unramified}}}( \bigoplus\limits_{w|v} {\widetilde{H}_{i,w}^{\textrm{ab}} }) )$ such that $ \widetilde{\psi}_2(\alpha)=1$ and $\widetilde{\varphi}_1(\alpha)=h$. Let $v$ be an unramified place of $K$ such that $\widetilde{S}_v = \langle m \rangle$. By definition, if $\widetilde{G}=\bigcup\limits_{k=1}^{r_{v,i}} \widetilde{H_i} \widetilde{x}_{i,k} \widetilde{S}_v$ is a double coset decomposition of $\widetilde{G}$, then $\widetilde{H}_{i,w}= \widetilde{H_i} \cap \widetilde{x}_{i,k} \widetilde{S}_v \widetilde{x}_{i,k}^{-1}$. Let us suppose, without loss of generality, that $\widetilde{x}_{i_1,{k_1}}=1=\widetilde{x}_{i_2,{k_2}}$ for some index $1 \leq k_1 \leq r_{v,i_1}$ (respectively, $1 \leq k_2 \leq r_{v,i_2}$) corresponding to a place $w_1 \in \Omega_{L_{i_1}}$ (respectively, $w_2 \in \Omega_{L_{i_2}}$) via Lemma \ref{lem1DP}. In this way, we have $m \in \widetilde{H}_{i_1,w_1}$ and $m^{-1} \in \widetilde{H}_{i_2,w_2}$. Setting the $(i_1,v,w_1)$-th (respectively, $(i_2,v,w_2)$-th) entry of $\alpha$ to be equal to $m$ (respectively, $m^{-1}$) and all other entries equal to $1$, we obtain $
\widetilde{\psi}_2(\alpha)=1$ and $\widetilde{\varphi}_1(\alpha)=h$.
\end{proof}
\begin{theorem}\label{thm:main_knot}
In the notation of diagram \eqref{diag:1stobs_defn_generalized}, we have
$$\mathfrak{K}(L,K) \cong \ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2).$$
\end{theorem}
\begin{proof}
By Theorem \ref{thm:thm1DP_gen} and Proposition \ref{prop:1st=knot}, we have $\mathfrak{K}(L,K) \cong \ker \overline{\psi}_1 / \overline{\varphi}_1(\ker \overline{\psi}_2)$, where the $\overline{\phantom{a}}$ notation is as in diagram \eqref{diag:1stobs_defn_generalized} with respect to the groups of Proposition \ref{prop:1st=knot}. Therefore, it suffices to prove that $$\ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2) \cong \ker \overline{\psi}_1 / \overline{\varphi}_1(\ker \overline{\psi}_2).$$
\noindent Define
\begin{align*}
f \colon \ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2) &\longrightarrow \ker \overline{\psi}_1 / \overline{\varphi}_1(\ker \overline{\psi}_2) \\
(\widetilde{h}_1 ,\dots,\widetilde{h}_n ) &\longmapsto (\overline{h}_1 ,\dots, \overline{h}_n )
\end{align*}
\noindent where, for each $i=1,\dots,n$, the element $\overline{h}_i \in \overline{H_i}$ is selected as follows:
take $\overline{h}_i \in \overline{H_i}$ such that $\overline{\lambda}(\overline{h}_i)=\widetilde{\lambda}(\widetilde{h}_i)$ (note that $\overline{h}_i$ is only defined modulo ${\overline{M}}=\ker \overline{\lambda}$). In this way, we have $\overline{\lambda}(\overline{h}_1 \dots \overline{h}_n)=\widetilde{\lambda}(\widetilde{h}_1 \dots \widetilde{h}_n)$. Additionally, by Lemma \ref{lem:tau_1}\ref{prop_tau1}, $\overline{\lambda}(\tau(\widetilde{h}_1 \dots \widetilde{h}_n))=\widetilde{\lambda}(\widetilde{h}_1 \dots \widetilde{h}_n)$ and thus
\begin{equation}\label{tau_eq}
\tau(\widetilde{h}_1 \dots \widetilde{h}_n)=\overline{h}_1 \dots \overline{h}_n m
\end{equation}
\noindent for some $m \in \overline{M}$. Changing $\overline{h}_n$ if necessary, we assume that $m=1$ so that $\overline{h}_1 \dots \overline{h}_n \in [\overline{G},\overline{G}]$ and therefore $ (\overline{h}_1 ,\dots, \overline{h}_n ) \in \ker \overline{\psi}_1$.
\medskip
\textbf{Claim 1:} $f$ is well defined, i.e. it does not depend on the choice of the elements $\overline{h}_i$ and moreover ${f(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2)) \subset \overline{\varphi}_1(\ker \overline{\psi}_2)}$.
\textbf{Proof:} We first prove that $f$ does not depend on the choice of $\overline{h}_i$. Suppose that, for each $i=1,\dots,n$, we choose elements $\underline{h}_i \in \overline{H_i}$ satisfying $\widetilde{\lambda}(\widetilde{h}_i)=\overline{\lambda}(\underline{h}_i)$ and $\tau(\widetilde{h}_1\dots\widetilde{h}_n)=\underline{h}_1 \dots \underline{h}_n$. We show that $(\underline{h}_1,\dots,\underline{h}_n)=(\overline{h}_1,\dots,\overline{h}_n)$ in $\ker \overline{\psi}_1 / \overline{\varphi}_1(\ker \overline{\psi}_2)$. Writing $\underline{h}_i=\overline{h}_i m_i$ for some $m_i \in \overline{M}$, it suffices to prove that $(m_1,\dots,m_n) \in \overline{\varphi}_1(\ker \overline{\psi}_2)$. Since $\overline{h}_1\dots \overline{h}_n =\tau(\widetilde{h}_1 \dots\widetilde{h}_n)=\underline{h}_1 \dots \underline{h}_n$ and the elements $m_i$ are in $\overline{M} \subset Z(\overline{G})$, we obtain $m_1 \dots m_n=1$. As $\overline{M} \subset \bigcap\limits_{i=1}^{n} \overline{H_i}$, multiplying $(m_1,\dots,m_n)$ by $(m_2,m_2^{-1},1,\dots,1)$ (which lies in $\overline{\varphi}_1(\ker \overline{\psi}_2)$ by Lemma \ref{lem:simpl_inters}), we have $(m_1,\dots,m_n)\equiv(m_1 m_2, 1,m_3,\dots , m_n) \pmod{\overline{\varphi}_1(\ker \overline{\psi}_2)}$. Repeating this procedure, we obtain $(m_1,\dots,m_n) \equiv (m_1\dots m_n,\dots,1)=(1,\dots,1) \pmod{\overline{\varphi}_1(\ker \overline{\psi}_2)}$ and therefore $(m_1,\dots,m_n)$ is in $\overline{\varphi}_1(\ker \overline{\psi}_2)$, as desired.
We now show that $f(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2)) \subset \overline{\varphi}_1(\ker \overline{\psi}_2)$. It suffices to check that $f(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v})) \subset \overline{\varphi}_1(\ker \overline{\psi}_2^{v})$ for any $v \in \Omega_K$. For $i=1,\dots,n$, let $\widetilde{G}=\bigcup\limits_{k=1}^{r_{v,i}} \widetilde{H_i} \widetilde{x}_{i,k} \widetilde{S}_v$ be a double coset decomposition of $\widetilde{G}$ and recall that, by definition, the group $\widetilde{H}_{i,w}$ equals $\widetilde{H_i} \cap \widetilde{x}_{i,k} \widetilde{S}_v \widetilde{x}_{i,k}^{-1}$ if $w \in \Omega_{L_i}$ corresponds to the double coset $\widetilde{H_i} \widetilde{x}_{i,k} \widetilde{S}_v$. Let $\alpha = \bigoplus\limits_{i=1}^{n} \bigoplus\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k} \in \ker \widetilde{\psi}_2^{v}$, where $\widetilde{h}_{i,k} \in \widetilde{H}_{i,w}$ for all possible $i,k$. We thus have
\begin{equation}\label{assumpt_v0}
\widetilde{\psi}_2(\alpha)=\prod\limits_{i=1}^{n}\prod\limits_{k=1}^{r_{v,i}} \widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k} \widetilde{x}_{i,k} \in [\widetilde{S}_v,\widetilde{S}_v].
\end{equation}
\noindent For any $i=1,\dots,n$ define $\widetilde{h}_i= \prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k}$. We need to show that $f(\widetilde{h}_1,\dots,\widetilde{h}_n)$ is in $\overline{\varphi}_1(\ker \overline{\psi}_2^v)$.
Set $x_{i,k}:=\widetilde{\lambda}(\widetilde{x}_{i,k}) \in G$ and $h_{i,k}:=\widetilde{\lambda}(\widetilde{h}_{i,k}) \in {H}_{i} \cap x_{i,k} G_v x_{i,k}^{-1}$ for all possible $i,k$. We have $\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} x_{i,k}^{-1} h_{i,k} x_{i,k} \in [G_v,G_v]$. Let $\overline{x}_{i,k} \in \overline{G}$ be such that $\overline{\lambda}(\overline{x}_{i,k})=x_{i,k}$ and let $\overline{h}_{i,k} \in \overline{H}_{i} \cap \overline{x}_{i,k} \overline{G_v} \overline{x}_{i,k}^{-1}$ satisfy $\overline{\lambda}(\overline{h}_{i,k})=h_{i,k}$. Multiplying one of the $\overline{h}_{1,k}$ by an element of $\overline{M}$ if necessary, we can ensure that \begin{equation}\label{assumpt_v}
\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \overline{x}_{i,k}^{-1} \overline{h}_{i,k} \overline{x}_{i,k} \in [\overline{G_v},\overline{G_v}].
\end{equation}
\noindent In particular, $\alpha':=\bigoplus\limits_{i=1}^{n}\bigoplus\limits_{k=1}^{r_{v,i}} \overline{h}_{i,k}$ is in $ \ker \overline{\psi}_2^{v}$. Defining $\overline{h}_i := \prod\limits_{k=1}^{r_{v,i}} \overline{h}_{i,k}$ for $i=1,\dots,n$, we get $\overline{\varphi}_1(\alpha')=(\overline{h}_1 , \dots, \overline{h}_n )$. We have $\widetilde{\lambda}(\widetilde{h}_i)=\overline{\lambda}(\overline{h}_i)$ by construction and therefore
$$\tau(\widetilde{h}_1 \dots \widetilde{h}_n)=\overline{h}_1 \dots \overline{h}_n m$$
\noindent for some $m \in \overline{M}$. We prove that $m$ is also in $[\overline{G_v},\overline{G_v}]$ so that, by multiplying one of the elements $\overline{h}_{1,k}$ by $m^{-1} \in \overline{M} \cap [\overline{G_v},\overline{G_v}]$ if necessary (note that doing so does not change condition \eqref{assumpt_v}), we obtain $f(\widetilde{h}_1,\dots,\widetilde{h}_n)=(\overline{h}_1,\dots,\overline{h}_n) $. As $(\overline{h}_1,\dots,\overline{h}_n)$ is in $\overline{\varphi}_1(\ker \overline{\psi}_2^{v})$, this proves the claim.
Note that
$$\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k}=(\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k}) (\prod\limits_{i=n}^{1} \prod\limits_{k=r_{v,i}}^{1} \widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k}^{-1} \widetilde{x}_{i,k}) \widetilde{\psi}_2(\alpha).$$
\noindent Denote $(\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k}) (\prod\limits_{i=n}^{1} \prod\limits_{k=r_{v,i}}^{1} \widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k}^{-1} \widetilde{x}_{i,k})$ by $\beta$. Then $\beta \in [\widetilde{G},\widetilde{G}]$ and using an explicit description of $\beta$ as a product of commutators and Lemma \ref{lem:tau_1}\ref{prop_tau2}, we deduce that $\tau(\beta)=\beta'$, where $\beta'=(\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \overline{h}_{i,k}) (\prod\limits_{i=n}^{1} \prod\limits_{k=r_{v,i}}^{1} \overline{x}_{i,k}^{-1} \overline{h}_{i,k}^{-1} \overline{x}_{i,k})$. Therefore, we have
$$\prod\limits_{i=1}^{n} \overline{h}_{i}=\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \overline{h}_{i,k} \equiv \beta' = \tau(\beta) \equiv \tau(\prod\limits_{i=1}^{n} \widetilde{h}_{i}) \pmod{[\overline{G_v},\overline{G_v}]},$$
\noindent and thus $m \in [\overline{G_v},\overline{G_v}]$, as desired.
\medskip
\textbf{Claim 2:} $f$ is a homomorphism.
\textbf{Proof:} Let $h=(\widetilde{h}_1 ,\dots,\widetilde{h}_n ),h'=(\widetilde{h}'_1 ,\dots,\widetilde{h}'_n ) \in \ker \widetilde{\psi}_1$ and write $f(h)=(\overline{h}_1 ,\dots, \overline{h}_n )$ and $f(h')=(\overline{h}'_1 ,\dots, \overline{h}'_n )$ for some elements $\overline{h}_i,\overline{h}'_i \in \overline{H_i}$. We have $f(h)f(h')=(\overline{h}_1\overline{h}'_1 ,\dots,\overline{h}_n\overline{h}'_n )$. On the other hand, ${hh'=(\widetilde{h}_1\widetilde{h}'_1 ,\dots,\widetilde{h}_n\widetilde{h}'_n )}$ and
$$\tau(\widetilde{h}_1 \widetilde{h}_1' \dots \widetilde{h}_n \widetilde{h}_n') \equiv \tau((\widetilde{h}_1 \dots \widetilde{h}_n)( \widetilde{h}_1' \dots \widetilde{h}_n')) = (\overline{h}_1\dots \overline{h}_n) (\overline{h}_1' \dots \overline{h}_n') \equiv \overline{h}_1\overline{h}_1'\dots \overline{h}_n\overline{h}_n' \pmod{[\overline{G},\overline{G}]}.$$
\noindent Since $\widetilde{\lambda}(\widetilde{h}_i \widetilde{h}_i')=\overline{\lambda}(\overline{h}_i \overline{h}_i')$ for all $i=1,\dots,n$ and $(\overline{h}_1\dots \overline{h}_n) (\overline{h}_1' \dots \overline{h}_n') \in [\overline{G},\overline{G}]$, by the definition of $f$ it follows that $f(h h')= (\overline{h}_1\overline{h}_1',\dots, \overline{h}_n\overline{h}_n')=f(h)f( h')$.
\medskip
\textbf{Claim 3:} $f$ is surjective.
\textbf{Proof:} For $i=1,\dots,n$, let $\overline{h}_i \in \overline{H_i}$ be such that $\overline{h}_1 \dots \overline{h}_n \in [\overline{G},\overline{G}]$. Take any elements $\widetilde{h}_i \in \widetilde{H_i}$ satisfying $\widetilde{\lambda}(\widetilde{h}_i)=\overline{\lambda}(\overline{h}_i)$. As above, by Lemma \ref{lem:tau_1}\ref{prop_tau1} this implies that there exists $m \in \overline{M}$ such that
$$\tau(\widetilde{h}_1\dots \widetilde{h}_n)=\overline{h}_1 \dots \overline{h}_n m \in [\overline{G},\overline{G}].$$
\noindent Since $\overline{h}_1\dots \overline{h}_n \in [\overline{G},\overline{G}]$, we have $m \in \overline{M} \cap [\overline{G},\overline{G}]$. But $\overline{M} \cap [\overline{G},\overline{G}] = \tau(\widetilde{M} \cap [\widetilde{G},\widetilde{G}])$ by Lemma \ref{lem:tau_1}. Therefore $m = \tau(m')$ for some $m' \in \widetilde{M} \cap [\widetilde{G},\widetilde{G}]$ and thus $(\overline{h}_1,\dots,\overline{h}_n)=f(\widetilde{h}_1,\dots,\widetilde{h}_n m'^{-1})$.
\medskip
\textbf{Claim 4:} $f$ is an isomorphism.
\textbf{Proof:} We have seen that $f$ is surjective. Now we can analogously define a surjective map from $\ker \overline{\psi}_1 / \overline{\varphi}_1(\ker \overline{\psi}_2)$ to $\ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2) $. It follows that the finite groups $\ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2) $ and $\ker \overline{\psi}_1 / \overline{\varphi}_1(\ker \overline{\psi}_2)$ have the same size and so $f$ is an isomorphism.\end{proof}
Using this theorem, one can also obtain descriptions of the birational invariant $\operatorname{H}^1(K,\Pic \overline{X})$ and the defect of weak approximation $A(T)$ for the multinorm one torus $T$:
\begin{theorem}\label{thm:main_result}
Let $T$ be the multinorm one torus associated to $L$ and let $X$ be a smooth compactification of $T$. In the notation of diagram \eqref{diag:1stobs_defn_generalized}, we have
$$\Sha(T) \cong \ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2),$$
$$\operatorname{H}^1(K,\Pic \overline{X})\cong \ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{{nr}}),$$
$$A(T) \cong \widetilde{\varphi}_1(\ker \widetilde{\psi}_2) / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{{nr}}).$$
\end{theorem}
\begin{proof}
The first isomorphism is the statement of Theorem \ref{thm:main_knot} (recall that $\Sha(T)$ is canonically isomorphic to $\mathfrak{K}(L,K)$). The two remaining isomorphisms follow in the same way as in the Hasse norm principle case, see \cite[p. 32--33]{Drak}.
\end{proof}
\begin{remark}\label{rem:finite_time2}
As explained in Remark \ref{rem:finite_time}, all the groups $ \ker \widetilde{\psi}_1, \widetilde{\varphi}_1(\ker \widetilde{\psi}_2)$ and $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ in Theorem \ref{thm:main_result} can be computed in finite time. To this extent, we assembled a function in GAP \cite{gap} (whose code is available in \cite{macedo_code}) that, given the relevant local and global Galois groups, outputs the obstructions to the multinorm principle and weak approximation for the multinorm one torus of a finite number of extensions by means of Theorem \ref{thm:main_result}.
\end{remark}
We end this section by generalizing Corollary \ref{cor:square_free} and proving that, in many situations, one can actually circumvent the use of generalized representation groups when computing the obstructions to the local-global principles.
Before we present this result, we need to introduce the notion of focal subgroups. For a moment, let $G$ be any finite group and let $H$ be a subgroup of $G$. The \textit{focal subgroup of $H$ in $G$} is defined as $\Phi^{{G}}({H})=\langle [h,x] | h \in {H} \cap x {H} x^{-1}, x \in {G} \rangle$. In \cite[Theorem 2]{DP}, it was proved that $${\varphi}_1(\ker {\psi}_2^{{nr}}) = \Phi^{G}(H) / [H,H]$$
\noindent in the setting of the first obstruction to the Hasse norm principle (case $n=1$). Returning to the multinorm context, this fact promptly implies that, in the notation of diagram \eqref{diag:1stobs_defn_generalized}, we have
\begin{equation}\label{eq:nr_hnp_inclusion}
(1,\dots,\underbrace{\Phi^{\widetilde{G}}(\widetilde{H_i})}_{i\textrm{-th entry}}, 1,\dots, 1) \subset \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}).
\end{equation}
\noindent for every $i=1,\dots,n$.
\begin{proposition}\label{prop:sq_free_mid_gp}
Suppose that there exists $j \in \{1,\dots,n\}$ such that, for every prime $p$ dividing $|\hat{\operatorname{H}}^{-3}(G,\mathbb{Z})|$, $p^2$ does not divide $[L_j:K]$. Then, in the notation of diagram \eqref{diag:1stobs_defn}, we have
$$\Sha(T) \cong \ker {\psi}_1 / {\varphi}_1(\ker {\psi}_2),$$
$$\operatorname{H}^1(K,\Pic \overline{X})\cong \ker {\psi}_1 / {\varphi}_1(\ker {\psi}_2^{{nr}}),$$
$$A(T) \cong {\varphi}_1(\ker {\psi}_2) / {\varphi}_1(\ker {\psi}_2^{{nr}}).$$
\end{proposition}
\begin{proof}
We prove only that $\operatorname{H}^1(K,\Pic \overline{X})\cong \ker {\psi}_1 / {\varphi}_1(\ker {\psi}_2^{{nr}})$ (the other two isomorphisms can be obtained by a similar argument). Assume, without loss of generality, that $j=1$ and $\widetilde{G}$ is a Schur covering group of $G$ so that $ \widetilde{M}$ is contained in $ [\widetilde{G},\widetilde{G}]$ and $\widetilde{M} \cong \hat{\operatorname{H}}^{-3}(G,\mathbb{Z})$. We show that the map
\begin{align*}
\rho \colon\ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) &\longrightarrow \ker {\psi}_1 / {\varphi}_1(\ker {\psi}_2^{nr})\\
h=(\widetilde{h}_1 ,\dots,\widetilde{h}_n ) &\longmapsto (\widetilde{\lambda}(\widetilde{h}_1) ,\dots,\widetilde{\lambda}(\widetilde{h}_n))
\end{align*}
\noindent is an isomorphism, which proves the desired statement by Theorem \ref{thm:main_result}.
We first verify that $\rho$ is well defined. It is enough to check that $\rho(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v})) \subset {\varphi}_1(\ker {\psi}_2^{v})$ for an unramified place $v$ of $N/K$. Note that if $\widetilde{G}=\bigcup\limits_{k=1}^{r_{v,i}} \widetilde{H_i} \widetilde{x}_{i,k} \widetilde{S}_v$ is a double coset decomposition of $\widetilde{G}$, then ${G}=\bigcup\limits_{k=1}^{r_{v,i}} {H}_i {x}_{i,k} G_v$ is a double coset decomposition of ${G}$, where ${x}_{i,k}=\widetilde{\lambda}(\widetilde{x}_{i,k})$. From this observation, it is straightforward to verify that $\rho(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v})) \subset {\varphi}_1(\ker {\psi}_2^{v})$.
We now prove that $\rho$ is surjective. Suppose that we are given, for $i=1,\dots,n$, elements $h_i \in H_i$ such that $h_1 \dots h_n \in [G,G]$. Since $\widetilde{M} \subset [\widetilde{G},\widetilde{G}]$, any choice of elements $\widetilde{h}_i \in \widetilde{H_i}$ such that $\widetilde{\lambda}(\widetilde{h}_i)=h_i$ will satisfy $\widetilde{h}_1 \dots \widetilde{h}_n \in [\widetilde{G},\widetilde{G}]$ and thus $({h}_1, \dots, {h}_n)=\rho(\widetilde{h}_1 ,\dots ,\widetilde{h}_n)$.
We finally show that $\rho$ is injective. Suppose that $(h_1,\dots,h_n)=\rho(h) \in {\varphi}_1(\ker {\psi}_2^{v})$ for some unramified place $v$ of $N/K$. Write $h_i=\varphi_1(\bigoplus\limits_{k=1}^{r_{v,i}} h_{i,k})$ for some elements $h_{i,k} \in H_i \cap x_{i,k} G_v x_{i,k}^{-1}$. As $\rho(h) \in {\varphi}_1(\ker {\psi}_2^{v})$, we obtain $\prod\limits_{i=1}^{n}\prod\limits_{k=1}^{r_{v,i}} x_{i,k}^{-1} h_{i,k} x_{i,k}=1$. Picking elements $\widetilde{h}_{i,k}\in\widetilde{\lambda}^{-1}(h_{i,k})$ and $\widetilde{x}_{i,k}\in\widetilde{\lambda}^{-1}(x_{i,k})$ for all possible $i,k$, we obtain $\prod\limits_{i=1}^{n}\prod\limits_{k=1}^{r_{v,i}} \widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k} \widetilde{x}_{i,k}=m$ for some $m \in \widetilde{M} = \ker \widetilde{\lambda}$. As $m \in Z(\widetilde{G}) \cap \bigcap\limits_{i=1}^{n} \widetilde{H_i}$, we have $(\widetilde{h}_1 m^{-1},\widetilde{h}_2,\dots,\widetilde{h}_n) \in \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$. Therefore, in order to prove that $h \in \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ it suffices to show that $(m^{-1},1,\dots,1) \in \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$. We prove that $m \in \Phi^{\widetilde{G}}(\widetilde{H_1})$, which completes the proof by \eqref{eq:nr_hnp_inclusion}.
\medskip
\textbf{Claim:} If $p^2$ does not divide $[L_1:K]$ for every prime $p$ dividing $|\widetilde{M}|$, then $\widetilde{M} \subset \Phi^{\widetilde{G}}(\widetilde{H_1})$.
\textbf{Proof:} We show that $\widetilde{M}_{(p)} \subset \Phi^{\widetilde{G}}(\widetilde{H_1})$. We have $[L_1:K]=[G:H_1]$ and therefore $[G_p:(H_1)_{p}] = [\widetilde{G}_p :(\widetilde{H_1})_{p}]=1$ or $p$. In any case, $(\widetilde{H_1})_p \trianglelefteq \widetilde{G}_p$ and we can write $\widetilde{G}_p = \langle x_p \rangle . (\widetilde{H_1})_p $ for some $x_p \in \widetilde{G}_p$. Since $\widetilde{M}_{(p)} \subset \widetilde{G}_p \cap [\widetilde{G},\widetilde{G}] \cap Z(\widetilde{G})$ and $\widetilde{G}_p \cap [\widetilde{G},\widetilde{G}] \cap Z(\widetilde{G}) \subset [\widetilde{G}_p,\widetilde{G}_p]$ (this last inclusion follows from properties of the transfer map, e.g. \cite[Lemma 5.5]{Isaacs}), we have $\widetilde{M}_{(p)} \subset [\widetilde{G}_p ,\widetilde{G}_p ]$ and so it suffices to prove that $[\widetilde{G}_p ,\widetilde{G}_p ] \subset \Phi^{\widetilde{G}}(\widetilde{H_1})$. Let $z=[x_p^a h_1, x_p^b h_1']$ for some $a,b \in \mathbb{Z}$ and $h_1,h_1' \in (\widetilde{H_1})_p$. Using the commutator properties, we have $z=[x_p^a,h_1']^{h_1}[h_1,h_1'][h_1,x_p^b]^{h_1'}$. As $(\widetilde{H_1})_p \trianglelefteq \widetilde{G}_p $ and $\Phi^{\widetilde{G}}(\widetilde{H_1}) \trianglelefteq \widetilde{H_1}$, it follows that each one of the commutators above is in $\Phi^{\widetilde{G}}(\widetilde{H_1})$.\qedhere
\end{proof}
As a consequence we obtain the following result, which can be thought of as an analog of {\cite[Corollary 1]{DP}} for the birational invariant $\operatorname{H}^1(K,\Pic \overline{X})$.
\begin{corollary}
Let $L/K$ be an extension of global fields and suppose that $[L:K]$ is square-free. Let $X$ be a smooth compactification of the norm one torus $R^1_{L/K} {\mathbb G}_m$. Then $$\operatorname{H}^1(K,\Pic \overline{X}) \cong \frac{H \cap [G,G]}{\Phi^{G}(H)}.$$
\end{corollary}
\begin{proof}
The conditions of Proposition \ref{prop:sq_free_mid_gp} are satisfied and hence $\operatorname{H}^1(K,\Pic \overline{X}) \cong \ker {\psi}_1 / {\varphi}_1(\ker {\psi}_2^{nr})$. The result then follows from \cite[Theorem 2]{DP}.
\end{proof}
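For instance, if $L/K$ is a non-Galois cubic extension, then $G \cong S_3$ and $H$ is generated by a transposition; since $H \cap [G,G] = H \cap A_3$ is trivial, the corollary immediately gives $\operatorname{H}^1(K,\Pic \overline{X})=0$ in this case.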
\section{Applications}\label{sec:applications}
In this section we employ the techniques developed so far in order to analyze the multinorm principle or weak approximation for the multinorm one torus in three different situations. Namely, we extend results of Bayer-Fluckiger--Lee--Parimala \cite{eva}, Demarche--Wei \cite{demarche} and Pollio \cite{pollio}. The notation used throughout this section is as in Sections \ref{sec:1st_obs} and \ref{sec:gen_gps}. Additionally, we will make use of the norm one torus $S=R^1_{F/K} {\mathbb G}_m$ of the extension $F=\bigcap\limits_{i=1}^{n} L_i$ and we let $Y$ denote a smooth compactification of $S$. We start by establishing a few auxiliary lemmas to be used in later applications.
\subsection{Preliminary results}
\begin{lemma}\label{lem:incl_unr}
In the notation of diagram \eqref{diag:1stobs_defn_generalized}, we have $$\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) \subseteq \{ (h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}') \in \ker \widetilde{\psi}_1 | h_1\dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H}) \}.$$
\end{lemma}
A proof of this lemma can be obtained by following the same strategy as in the proof of the analogous result for the Hasse norm principle (case $n=1$) in \cite[Theorem 2]{DP}. Nonetheless, as the details are slightly intricate, we include a proof here for the benefit of the reader.
\begin{proof}
Since $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) = \prod\limits_{\substack{v \in \Omega_K \\ v \text{ unramified}}} \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{ v })$, it suffices to prove that
$$\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v}) \subseteq \{ (h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}') | h_1\dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H}) \} $$
\noindent for any unramified place $v$ of $N/K$. Let $\alpha \in \ker \widetilde{\psi}_2^{ v }$ and fix a double coset decomposition $\widetilde{G}=\bigcup\limits_{k=1}^{r_{v,i}} \widetilde{H_i} \widetilde{x}_{i,k} \widetilde{S}_v$. Write $\widetilde{S}_v=\langle g \rangle$ and $\alpha = \bigoplus\limits_{i=1}^{n} \bigoplus\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k} $ for some $g \in \widetilde{G}$, $\widetilde{h}_{i,k} = \widetilde{x}_{i,k} g^{e_{i,k}} \widetilde{x}_{i,k}^{-1} \in \widetilde{H_i} \cap \widetilde{x}_{i,k} \langle g \rangle \widetilde{x}_{i,k}^{-1}$ and some $e_{i,k} \in \mathbb{Z}$. By hypothesis, we have
$1=\widetilde{\psi}_2(\alpha)=g^{\sum_{i,k} e_{i,k}}$ and therefore
$$\sum\limits_{i,k} e_{i,k} \equiv 0 \pmod{m},$$
\noindent where $m$ is the order of $g$. Since $g^m=1$, by changing some of the $e_{i,k}$ if necessary, we can (and do) assume that \begin{equation}\label{eq:sum=0}
\sum\limits_{i,k} e_{i,k} = 0.
\end{equation}
Letting ${h_i} =\prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k} $ for any $1 \leq i \leq n$, we have $\widetilde{\varphi}_1(\alpha)=(h_1 \widetilde{H_1},\dots,h_n \widetilde{H_n}) \in \ker \widetilde{\psi}_1$. We prove that
$$\prod\limits_{i=1}^{n} {h_i} =\prod\limits_{i=1}^{n}(\prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k})=\prod\limits_{i=1}^{n} (\prod\limits_{k=1}^{r_{v,i}} \widetilde{x}_{i,k} g^{e_{i,k}} \widetilde{x}_{i,k}^{-1}) \in \Phi^{\widetilde{G}}(\widetilde{H})$$
\noindent by induction on $s:=\sum\limits_{i=1}^{n} r_{v,i}$. The case $s=1$ is trivial and the case $s=2$ is solved in \cite[p. 308]{DP}. Now let $s > 2$ and set $d=\gcd(e_{i,k} | 1 \leq i \leq n, 1 \leq k \leq r_{v,i})$ and $f_{i,k}=\frac{e_{i,k}}{d}$. It follows that $\gcd(f_{i,k} | 1 \leq i \leq n, 1 \leq k \leq r_{v,i})=1$ and, since $\sum\limits_{i,k} f_{i,k} = 0$ by \eqref{eq:sum=0}, we have ${\gcd(f_{i,k} | 1 \leq i \leq n, 1 \leq k \leq r_{v,i} \textrm{ and } (i,k) \neq (n,r_{v,n}))}=1$. Hence there exist $a_{i,k} \in \mathbb{Z}$ such that $\sum\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} f_{i,k} a_{i,k} = 1$. Consider the element
$$\beta=\Big(\bigoplus\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} \widetilde{x}_{i,k} g^{e_{i,k}f_{n,r_{v,n}}a_{i,k}} \widetilde{x}_{i,k}^{-1}\Big) \oplus \widetilde{x}_{n,r_{v,n}} g^{-e_{n,r_{v,n}}} \widetilde{x}_{n,r_{v,n}}^{-1} \in \bigoplus\limits_{i=1}^{n}\Big( \bigoplus\limits_{k=1}^{r_{v,i}} {\widetilde{H}_{i,w} }\Big) .$$
\noindent Since
$e_{i,k} f_{n,r_{v,n}}=e_{n,r_{v,n}} f_{i,k},$
\noindent we have
$$\widetilde{\psi}_2(\beta)=g^{\Big(\sum\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} e_{i,k}f_{n,r_{v,n}}a_{i,k}\Big)-e_{n,r_{v,n}}}=g^{\Big(\sum\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} e_{n,r_{v,n}}f_{i,k}a_{i,k}\Big)-e_{n,r_{v,n}}}=1$$
\noindent and so $\beta \in \ker \widetilde{\psi}_2^{v}$.
Additionally, if $\widetilde{\varphi}_1(\beta) = (\widetilde{h}_1,\dots,\widetilde{h}_n)$, we have
\begin{equation}
\begin{split}
\prod\limits_{i=1}^{n} \widetilde{h}_i & = \left(\prod\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} \widetilde{x}_{i,k} g^{e_{i,k}f_{n,r_{v,n}}a_{i,k}} \widetilde{x}_{i,k}^{-1} \right) \widetilde{x}_{n,r_{v,n}} g^{-e_{n,r_{v,n}}} \widetilde{x}_{n,r_{v,n}}^{-1} = \\
& = \left(\prod\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} \widetilde{x}_{i,k} g^{e_{i,k}f_{n,r_{v,n}}a_{i,k}} \widetilde{x}_{i,k}^{-1} \right) \widetilde{x}_{n,r_{v,n}} g^{-e_{n,r_{v,n}} \sum\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} f_{i,k} a_{i,k}} \widetilde{x}_{n,r_{v,n}}^{-1} \equiv \\
& \equiv \left(\prod\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} \widetilde{x}_{i,k} g^{e_{i,k}f_{n,r_{v,n}}a_{i,k}} \widetilde{x}_{i,k}^{-1} \widetilde{x}_{n,r_{v,n}} g^{-e_{i,k}f_{n,r_{v,n}}a_{i,k}} \widetilde{x}_{n,r_{v,n}}^{-1} \right) \pmod{[\widetilde{H},\widetilde{H}]}
\end{split}
\end{equation}
\noindent since the elements $\widetilde{x}_{i,k} g^{e_{i,k}} \widetilde{x}_{i,k}^{-1}$ (for all possible $i,k$) are in $\widetilde{H}$. Arguing similarly to the case $s=2$ (see \cite[p. 308]{DP}), we deduce that $\prod\limits_{i=1}^{n} \widetilde{h}_i \in \Phi^{\widetilde{G}}(\widetilde{H})$. Finally, consider the element $$\alpha'=\alpha \beta=\bigoplus\limits_{\substack{i,k\\(i,k) \neq (n,r_{v,n})}} \widetilde{x}_{i,k} g^{e_{i,k}(1+f_{n,r_{v,n}}a_{i,k})} \widetilde{x}_{i,k}^{-1} \in \bigoplus\limits_{i=1}^{n}\Big( \bigoplus\limits_{k=1}^{r_{v,i}} {\widetilde{H}_{i,w} }\Big).$$
\noindent It is clear that $\alpha' \in \ker \widetilde{\psi}_2^{v}$. By the induction hypothesis, if $\widetilde{\varphi}_1(\alpha')=(\widehat{h}_1,\dots,\widehat{h}_n)$ we have $\widehat{h}_1 \dots \widehat{h}_n \in \Phi^{\widetilde{G}}(\widetilde{H})$. Since $\widehat{h}_i \equiv h_i \widetilde{h}_i \pmod{[\widetilde{H},\widetilde{H}]}$ for all $i=1,\dots,n$, we conclude that ${h_1} \dots {h_n} \in \Phi^{\widetilde{G}}(\widetilde{H})$ as well.
\end{proof}
\begin{lemma}\label{lem:surject_int_Galois}
\begin{enumerate}[leftmargin=*,label=(\roman{*})]
\item\label{lem:surject_int_Galois1} There exists a surjection $f:\operatorname{H}^1(K,\Pic \overline{X}) \twoheadrightarrow \operatorname{H}^1(K,\Pic \overline{Y})$. If in addition
$$\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) \supseteq \{ (h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}') | h_1\dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H}) \}$$
\noindent (in the notation of diagram \eqref{diag:1stobs_defn_generalized}), then $f$ is an isomorphism.
\item\label{lem:surject_int_Galois2} If $F/K$ is Galois, $f$ induces a surjection $\Sha(T) \twoheadrightarrow \Sha(S)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider the analog of diagram \eqref{diag:1stobs_defn_generalized} for the extension $F/K$ (note that this is the fixed field of the group ${H}$ inside $N/K$):
\begin{equation}\label{diag:F/K_v0}
\xymatrix{
\widetilde{H}^{\textrm{ab}} \ar[r]^{\widehat{\psi}_1} & \widetilde{G}^{\textrm{ab}}\\
\bigoplus\limits_{v \in \Omega_K} ( \bigoplus\limits_{w|v} {\widetilde{H}_{w}^{\textrm{ab}} }) \ar[r]^{\ \ \ \ \ \ \ \widehat{\psi}_2} \ar[u]^{\widehat{\varphi}_1}&\bigoplus\limits_{v \in \Omega_K}{\widetilde{S}_v^{\textrm{ab}} }\ar[u]_{\widehat{\varphi}_2}
}
\end{equation}
\noindent Here all the maps with the $\widehat{\phantom{a}}$ notation are defined as in diagram \eqref{diag:1stobs_defn_generalized} with respect to the extension $F/K$. Now define
\begin{align*}
f \colon \ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) &\longrightarrow \ker \widehat{\psi}_1 / \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr}) \\
(\widetilde{h}_1 \widetilde{H_1}',\dots,\widetilde{h}_n \widetilde{H_n}')\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) &\longmapsto (\widetilde{h}_1 \dots\widetilde{h}_n[\widetilde{H},\widetilde{H}]) \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})
\end{align*}
\noindent Since $\widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})=\Phi^{\widetilde{G}}(\widetilde{H})/[\widetilde{H},\widetilde{H}]$ (see \cite[Theorem 2]{DP}), the map $f$ is well defined by Lemma \ref{lem:incl_unr}. Additionally, as the target group is abelian, it is easy to check that $f$ is a homomorphism and surjective. By Theorem \ref{thm:main_result} we have $\operatorname{H}^1(K,\Pic \overline{X}) \cong \ker \widetilde{\psi}_1 / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ and $\operatorname{H}^1(K,\Pic \overline{Y}) \cong \ker \widehat{\psi}_1 / \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$. The statement in the first sentence follows. Finally, if we assume $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) \supseteq \{ (h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}') | h_1\dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H}) \}$, then it is clear that $f$ is injective.
We now prove \ref{lem:surject_int_Galois2}. By Theorem \ref{thm:main_result}, it is enough to show that $f( \widetilde{\varphi}_1(\ker \widetilde{\psi}_2)) \subset \widehat{\varphi}_1(\ker \widehat{\psi}_2)$. Since $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2)=\prod\limits_{v \in \Omega_K} \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v})$, it suffices to verify $f(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v}))\subset \widehat{\varphi}_1(\ker \widehat{\psi}_2)$ for all $v \in \Omega_K$. Let $\alpha \in \ker \widetilde{\psi}_2^{ v }$ and write $\alpha = \bigoplus\limits_{i=1}^{n} \bigoplus\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k} $ for some $\widetilde{h}_{i,k} \in \widetilde{H_i} \cap \widetilde{x}_{i,k} \widetilde{S}_v \widetilde{x}_{i,k}^{-1}$. Hence, we obtain $\widetilde{\varphi}_1(\alpha)=(\widetilde{h}_1,\dots,\widetilde{h}_n)$, where $\widetilde{h}_i=\prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k}$, and we wish to show that $\prod\limits_{i=1}^{n} \widetilde{h}_i \in \widehat{\varphi}_1(\ker \widehat{\psi}_2)$. Since $F/K$ is Galois, $\widetilde{H}$ is a normal subgroup of $ \widetilde{G}$ and thus $\Phi^{\widetilde{G}}(\widetilde{H})=[\widetilde{H},\widetilde{G}]$. In this way, we have $$\prod\limits_{i=1}^{n} \widetilde{h}_i = \prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \widetilde{h}_{i,k} \equiv \prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k} \widetilde{x}_{i,k} = \widetilde{\psi}_2(\alpha) \pmod{\Phi^{\widetilde{G}}(\widetilde{H})}.$$
\noindent As $\Phi^{\widetilde{G}}(\widetilde{H})/[\widetilde{H},\widetilde{H}]=\widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$, it suffices to prove that $\widetilde{\psi}_2(\alpha) \in \widehat{\varphi}_1(\ker \widehat{\psi}_2^{v})$. For this, let $\widetilde{G}=\bigcup\limits_{j=1}^{r} \widetilde{H} \widetilde{y}_j \widetilde{S}_v$ be a double coset decomposition and suppose, without loss of generality, that $\widetilde{y}_{j_0}=1$ for some index $1 \leq j_0 \leq r$ corresponding to a place $w_0$ of $F$ via Lemma \ref{lem1DP}. Therefore, we obtain $\widetilde{\psi}_2(\alpha)=\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} \widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k} \widetilde{x}_{i,k} \in \widetilde{H} \cap \widetilde{S}_v=\widetilde{H}_{w_0}$ since $\widetilde{x}_{i,k}^{-1} \widetilde{h}_{i,k} \widetilde{x}_{i,k} \in \widetilde{H}$ for all possible $i,k$. In this way, if $\beta \in \bigoplus\limits_{v \in \Omega_K} ( \bigoplus\limits_{w|v} {\widetilde{H}_{w}^{\textrm{ab}} }) $ is the vector with the $(v,w_0)$-th entry equal to $\widetilde{\psi}_2(\alpha)$ and all other entries equal to $1$, we have $\widehat{\psi}_2(\beta)=\widetilde{\psi}_2(\alpha) \in [\widetilde{S}_v,\widetilde{S}_v]$ (as $\alpha \in \ker \widetilde{\psi}_2^{ v }$) and so $\widetilde{\psi}_2(\alpha) =\widehat{\varphi}_1(\beta) \in \widehat{\varphi}_1(\ker \widehat{\psi}_2^{v}) $.
\end{proof}
\subsection{Multinorm principle for linearly disjoint extensions}
\hspace{1pt}
In this subsection we prove a theorem similar to the main result of \cite{demarche}, but with a slightly different hypothesis (and in some cases more general, see Remark \ref{rem:demarche_different} below).
\begin{theorem}\label{thm:demarch_wei_thm}
For any non-empty subset $I \subset \{1,\dots,n\}$, let $L_I \subseteq N$ be the compositum of the fields $L_i$ $(i \in I)$ and let $E_{I}$ be the Galois closure of $L_I/K$. Suppose that there exist indices $i_0,j_0 \in \{1,\dots,n\}$ such that, for every $1 \leq i \leq n$, there is a partition $I_i \sqcup J_i = \{1,\dots,n\}$ with $i_0 \in I_i,j_0 \in J_i$ and $E_{I_i} \cap E_{J_i} \subseteq L_i$. Then
$$ \operatorname{H}^1(K,\Pic \overline{X}) \cong \operatorname{H}^1(K,\Pic \overline{Y}). $$
\end{theorem}
\begin{proof}
If $n=1$ there is nothing to show, so assume $n \geq 2$. By Lemma \ref{lem:surject_int_Galois}\ref{lem:surject_int_Galois1} it suffices to prove that $$\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) \supseteq \{ (h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}') | h_1\dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H}) \}.$$
\noindent Let $\alpha=(h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}')$ be such that $h_1 \dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H})$. Renaming the fields $L_i$ if necessary, we assume that $i_0 =1$ and $j_0=2$. Denoting $B_{I_i}=\Gal(N/E_{I_i}), B_{J_i}=\Gal(N/E_{J_i})$ for all $1 \leq i \leq n$, the hypothesis $E_{I_i} \cap E_{J_i} \subseteq L_i$ is equivalent to $ B_{I_i}B_{J_i} \supseteq H_i$ and thus
\begin{equation}\label{eq:inc_hyp}
\widetilde{H_i} \subseteq \widetilde{B_{I_i}} \widetilde{B_{J_i}}
\end{equation}
\noindent with $1 \in I_i$, $2 \in J_i$ and $i \in I_i$ or $J_i$. If $n \geq 3$, this implies that for any $3 \leq i \leq n$ we can decompose $h_i = h_{1,i} h_{2,i}$ for some $h_{1,i} \in \widetilde{H_1} \cap \widetilde{H_i}$ and $h_{2,i} \in \widetilde{H_2} \cap \widetilde{H_i}$. Using Lemma \ref{lem:simpl_inters} as done in Claim 1 of the proof of Theorem \ref{thm:main_knot}, we obtain $$\alpha \equiv ((\prod\limits_{3 \leq i \leq n} h_{1,i}) h_1,(\prod\limits_{3 \leq i \leq n} h_{2,i}) h_2,1,\dots,1)$$
\noindent modulo $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$. We can thus assume $\alpha$ to be of the form $(h_1',h_2',1,\dots,1)$ for some $h_1' \in \widetilde{H_1},h_2' \in \widetilde{H_2}$ such that $h_1' h_2' \in \Phi^{\widetilde{G}}(\widetilde{H})$. Note that \eqref{eq:inc_hyp} implies that $\widetilde{H} =\langle \widetilde{H_i} \rangle \subset \widetilde{B_1} \widetilde{B_2}$, where $B_1=\Gal(N/E_{\{1\}})$ and $B_2=\Gal(N/E_{\{2\}})$. It thus follows that $\Phi^{\widetilde{G}}(\widetilde{H}) \subset \Phi^{\widetilde{G}}(\widetilde{B_1}\widetilde{B_2})=\Phi^{\widetilde{G}}(\widetilde{B_1})\Phi^{\widetilde{G}}(\widetilde{B_2}) $ and so $h_1' h_2' \in \Phi^{\widetilde{G}}(\widetilde{B_1}) \Phi^{\widetilde{G}}(\widetilde{B_2})$. Since $\Phi^{\widetilde{G}}(\widetilde{B_i}) \subset \Phi^{\widetilde{G}}(\widetilde{H_i})$ and recalling that $$(1,\dots,\underbrace{\Phi^{\widetilde{G}}(\widetilde{H_i})}_{i\textrm{-th entry}}, 1,\dots, 1) \subset \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$$
\noindent (see \eqref{eq:nr_hnp_inclusion} in Section \ref{sec:gen_gps}), we can multiply $h_1'$ and $h_2'$ by elements of $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ to attain $\alpha \equiv (h_1'',h_2'',1,\dots,1) \pmod{\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})}$ for some $h_1'' \in \widetilde{H_1},h_2'' \in \widetilde{H_2}$ such that $h_1'' h_2''=1$. Thus $h_2''=h_1''^{-1}$ and $\alpha = (h_1'',h_1''^{-1},1,\dots,1)$, which by Lemma \ref{lem:simpl_inters} is in $ \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$, as desired. \end{proof}
\begin{remark}\label{rem:demarche_different}
It is easy to see that if there exists a partition $I \sqcup J=\{1,\dots,n\}$ such that $E_I \cap E_J=F$ (the assumption in \cite[Theorem 6]{demarche} when $F_i=E_I$ and $F_j=E_J$ for every $i \in I, j \in J$), the conditions of Theorem \ref{thm:demarch_wei_thm} are satisfied. Therefore, our theorem applies to all the cases described in \cite[Example 9(i)--(iii)]{demarche}. Moreover, our hypothesis applies to $n$-tuples of fields for which the assumptions in \cite[Theorem 6]{demarche} might fail. For example, let $L=(\mathbb{Q}(\sqrt{2},\sqrt{3}),\mathbb{Q}(\sqrt{2},\sqrt{5}),\mathbb{Q}(\sqrt{3},\sqrt{5}))$. It is easy to see that the assumptions of Theorem \ref{thm:demarch_wei_thm} are satisfied, but \cite[Theorem 6]{demarche} does not apply to this tuple of fields. Indeed, Demarche and Wei's hypothesis implies that there is a partition $I \sqcup J=\{1,\dots,n\}$ such that $L_I \cap L_J=F$, which does not exist in the example above.
\end{remark}
As a consequence of Theorem \ref{thm:demarch_wei_thm} we also obtain versions of \cite[Corollaries 7 and 8]{demarche}:
\begin{corollary}\label{cor:dem1}
Let $c\in K^*$. Assume the hypothesis of Theorem \ref{thm:demarch_wei_thm} and suppose that the $K$-variety $ N_{F/K}(\Xi)=c$ satisfies weak approximation. Then the multinorm equation $\prod\limits_{i=1}^{n} N_{L_i/K}(\Xi_i)=c$ satisfies weak approximation if and only if it has a $K$-point.
\end{corollary}
\begin{corollary}\label{cor:dem2}
Assume the hypothesis of Theorem \ref{thm:demarch_wei_thm} and suppose that the Hasse principle and weak approximation hold for all norm equations $ N_{F/K}(\Xi)=c$, $c \in K^*$. Then the Hasse principle and weak approximation hold for all multinorm equations $\prod\limits_{i=1}^{n} N_{L_i/K}(\Xi_i)=c$.
\end{corollary}
\subsection{Multinorm principle and weak approximation for abelian extensions}
\hspace{1pt}
In this subsection we generalize the main theorem of \cite{pollio} to $n$ abelian extensions under the conditions of Theorem \ref{thm:demarch_wei_thm}.
\begin{theorem}\label{thm:pollio}
Let $L=(L_1,\dots,L_n)$ be an $n$-tuple of abelian extensions of $K$ and suppose that the conditions of Theorem \ref{thm:demarch_wei_thm} are satisfied for $L$. Then
$$\Sha(T) \cong \Sha(S),$$
$$A(T) \cong A(S).$$
\end{theorem}
\begin{proof}
Note that if $A(T) \cong A(S)$, then by Theorem \ref{thm:demarch_wei_thm} and Voskresenski\u{\i}'s exact sequence \eqref{eq:Vosk} we deduce that $|\Sha(T)|=|\Sha(S)|$. Since $\Sha(T)$ surjects onto $\Sha(S)$ by Lemma \ref{lem:surject_int_Galois}\ref{lem:surject_int_Galois2}, we conclude that $\Sha(T) \cong \Sha(S)$. Therefore, it is enough to prove that $A(T) \cong A(S)$.
Let us again consider the analog of diagram \eqref{diag:1stobs_defn_generalized} for the extension $F/K$:
\begin{equation}\label{diag:F/K_v2}
\xymatrix{
\widetilde{H}^{\textrm{ab}} \ar[r]^{\widehat{\psi}_1} & \widetilde{G}^{\textrm{ab}}\\
\bigoplus\limits_{v \in \Omega_K} ( \bigoplus\limits_{w|v} {\widetilde{H}_{w}^{\textrm{ab}} }) \ar[r]^{\ \ \ \ \ \ \ \widehat{\psi}_2} \ar[u]^{\widehat{\varphi}_1}&\bigoplus\limits_{v \in \Omega_K}{\widetilde{S}_v^{\textrm{ab}} }\ar[u]_{\widehat{\varphi}_2}
}
\end{equation}
\noindent As before, in this diagram all the maps with the $\widehat{\phantom{a}}$ superscript are defined as in diagram \eqref{diag:1stobs_defn_generalized} with respect to $F/K$. By Theorem \ref{thm:main_result}, we have $A(T) \cong \widetilde{\varphi}_1(\ker \widetilde{\psi}_2)/ \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ (in the notation of diagram \eqref{diag:1stobs_defn_generalized}) and $A(S) \cong \widehat{\varphi}_1(\ker \widehat{\psi}_2) / \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$ (in the notation of diagram \eqref{diag:F/K_v2}). Therefore it suffices to show that $ \widetilde{\varphi}_1(\ker \widetilde{\psi}_2) / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ is isomorphic to $ \widehat{\varphi}_1(\ker \widehat{\psi}_2) / \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$. For this, we again consider the natural map
\begin{align*}
f \colon\widetilde{\varphi}_1(\ker \widetilde{\psi}_2) / \widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) &\longrightarrow \widehat{\varphi}_1(\ker \widehat{\psi}_2) / \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr}) \\
(\widetilde{h}_1 \widetilde{H_1}',\dots,\widetilde{h}_n \widetilde{H_n}')\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) &\longmapsto (\widetilde{h}_1 \dots \widetilde{h}_n [\widetilde{H},\widetilde{H}])\widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})
\end{align*}
\noindent In the proof of Lemma \ref{lem:surject_int_Galois}\ref{lem:surject_int_Galois2} it was shown that $f(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2))\subset \widehat{\varphi}_1(\ker \widehat{\psi}_2)$. Additionally, recalling that $\widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr}) =\Phi^{\widetilde{G}}(\widetilde{H})/[\widetilde{H},\widetilde{H}]$ by \cite[Theorem 2]{DP}, we have $f(\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}))= \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$ by Lemma \ref{lem:incl_unr} and the proof of Theorem \ref{thm:demarch_wei_thm}. This shows that $f$ is well defined and injective.
Finally, let us check that $f$ is surjective. Fix a place $v$ of $K$ and a double coset decomposition $\widetilde{G}=\bigcup\limits_{j=1}^{r}\widetilde{H} \widetilde{y}_j \widetilde{G}_v$ and let $\alpha \in \widehat{\varphi}_1(\ker \widehat{\psi}_2^{v})$. We can write $\alpha = \widehat{\varphi}_1(\bigoplus\limits_{j=1}^{r} \widetilde{h}_j)=\prod\limits_{j=1}^{r} \widetilde{h}_j$ for some $\widetilde{h}_j \in \widetilde{H} \cap \widetilde{y}_j \widetilde{S}_v \widetilde{y}_j^{-1}$ such that $\beta:=\widehat{\psi}_2(\bigoplus\limits_{j=1}^{r} \widetilde{h}_j)=\prod\limits_{j=1}^{r} \widetilde{y}_j^{-1} \widetilde{h}_j \widetilde{y}_j$ is in $[\widetilde{S}_v,\widetilde{S}_v]$. Note that as $G$ is abelian, we have $[\widetilde{G},\widetilde{G}] \subset \widetilde{M}$ and therefore $[\widetilde{S}_v,\widetilde{S}_v] \subset \widetilde{M} \subset \widetilde{H}_i$ for every $1 \leq i \leq n$. In particular, we have $\beta \in \widetilde{H}_1 \cap \widetilde{S}_v$ and from this one readily checks that the $n$-tuple $(\beta,1,\dots,1) $ is in $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{v})$. Since $\widetilde{H} \trianglelefteq \widetilde{G}$, we have $\Phi^{\widetilde{G}}(\widetilde{H})=[\widetilde{H},\widetilde{G}]$ and thus $f(\beta,1,\dots,1)=\beta=\prod\limits_{j} \widetilde{y}_j^{-1} \widetilde{h}_j \widetilde{y}_j \equiv \prod\limits_{j} \widetilde{h}_j = \alpha \pmod{\Phi^{\widetilde{G}}(\widetilde{H})}$. As $\Phi^{\widetilde{G}}(\widetilde{H})/[\widetilde{H},\widetilde{H}]= \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$, we obtain $\alpha = f(\beta,1,\dots,1)$ inside $\widehat{\varphi}_1(\ker \widehat{\psi}_2) / \widehat{\varphi}_1(\ker \widehat{\psi}_2^{nr})$.\end{proof}
\begin{remark}
Note that the conditions of Theorem \ref{thm:demarch_wei_thm} are always satisfied if $n=2$, so that Theorem \ref{thm:pollio} generalizes the main theorem of \cite{pollio}.
\end{remark}
\subsection{Weak approximation for cyclic extensions of prime degree}\label{sec:eva}
\hspace{1pt}
In this subsection we extend the result in \cite[Theorem 8.3]{eva} to include the weak approximation property for the multinorm one torus of $n$ cyclic extensions of prime degree $p$.
\begin{theorem}\label{thm:eva}
Let $L_1,\dots,L_n$ be non-isomorphic cyclic extensions of $K$ with prime degree $p$. Then, we have
\[\operatorname{H}^1(K,\Pic \overline{X})=\begin{cases}
(\mathbb{Z}/p)^{n-2} \textrm{, if $[L_1\dots L_n:K]=p^2$;}\\
0\textrm{, otherwise.}
\end{cases}\]
\end{theorem}
\begin{proof}
\noindent The case $n=1$ was proved in \cite[Proposition 9.1]{coll2} and for $n=2$ the result follows from Theorem \ref{thm:demarch_wei_thm}, so assume $n \geq 3$.
Suppose first that $[L_1 \dots L_n : K] > p^2$. Reordering the fields $L_3,\dots,L_n$ if necessary, we can (and do) assume that each one of the fields $L_1,\dots,L_{s-1}$ is contained in $L_1 L_2$ (for some $3 \leq s\leq n$), while none of $L_s,\dots,L_n$ is contained in $L_1 L_2$.
We prove two auxiliary claims:
\medskip
\textbf{Claim 1:} $\widetilde{H_i} \subset (\widetilde{H_1} \cap \widetilde{H_i} ).\widetilde{H_s}$ for any $i=1,\dots,s-1$.
\textbf{Proof:} Observe that $L_1L_i \cap L_s = K$ as otherwise we would have $L_s \subset L_1 L_i \subset L_1 L_2$, contradicting the assumption on $s$. Therefore $L_i \supset K= L_1 L_i \cap L_s$ and passing to subgroups this implies that ${H_i} \subset ({H_1} \cap {H_i}).{H_s}$, from which the claim follows.
\medskip
\textbf{Claim 2:} $\widetilde{H_i} \subset (\widetilde{H_1} \cap \widetilde{H_i}).\widetilde{H_2}$ for any $i=s,\dots,n$.
\textbf{Proof:} Observe that $L_2 \not\subset L_1 L_i$ as otherwise we would have $L_i \subset L_1 L_i = L_1 L_2$, contradicting the assumption on $L_i$. Therefore $L_i \supset K= L_1 L_i \cap L_2$ and passing to subgroups this implies that ${H_i} \subset ({H_1} \cap {H_i}).{H_2}$, from which the claim follows.
\medskip
Let us now prove that $\operatorname{H}^1(K,\Pic \overline{X})=0$. Since $\bigcap\limits_{i} L_i = K$, by Lemma \ref{lem:surject_int_Galois}\ref{lem:surject_int_Galois1} it suffices to show that $$\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr}) \supseteq \{ (h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}') | h_1\dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H}) \}.$$
\noindent Let $\alpha=(h_1 \widetilde{H_1}' , \dots , h_n \widetilde{H_n}')$ be such that $h_1 \dots h_n \in \Phi^{\widetilde{G}}(\widetilde{H})$. By Claim 1 above, for $i=3,\dots,s-1$ we can write $h_i = h_{1,i}h_{s,i}$, where $h_{1,i} \in \widetilde{H_1} \cap \widetilde{H_i}$ and $h_{s,i} \in \widetilde{H_s} \cap \widetilde{H_i}$. Using this decomposition, we can apply Lemma \ref{lem:simpl_inters} as done in the proof of Theorem \ref{thm:demarch_wei_thm} in order to simplify $\alpha$ modulo $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ and assume it has the form $(h_1',h_2,1,\dots,1,h_s',h_{s+1}\dots,h_n)$ for some $h_1' \in \widetilde{H_1},h_s' \in \widetilde{H_s}$. Using Claim 2 and Lemma \ref{lem:simpl_inters} in the same way, we further reduce $\alpha$ modulo $\widetilde{\varphi}_1(\ker \widetilde{\psi}_2^{nr})$ to a vector of the form $(h_1'',h_2',1,\dots,1)$ for some $h_1'' \in \widetilde{H_1},h_2' \in \widetilde{H_2}$ such that $h_1'' h_2' \in \Phi^{\widetilde{G}}(\widetilde{H})$. Finally, since $L_1 \cap L_2 = K$, we have $\widetilde{H}= \widetilde{H_1} \widetilde{H_2}$ and thus $\Phi^{\widetilde{G}}(\widetilde{H}) \subset \Phi^{\widetilde{G}}(\widetilde{H_1})\Phi^{\widetilde{G}}(\widetilde{H_2})$. The result follows by an argument similar to the one given at the end of the proof of Theorem \ref{thm:demarch_wei_thm}.
Now assume that $[L_1\dots L_n:K]=p^2$ (note that this is only possible if $ n \leq p+1$ as a bicyclic field has $p+1$ subfields of degree $p$) and therefore $G=C_p \times C_p$ is abelian. By Proposition \ref{prop:sq_free_mid_gp} it suffices to prove that $\ker {\psi}_1 / {\varphi}_1(\ker {\psi}_2^{nr}) \cong (\mathbb{Z}/p)^{n-2}$. We first show that ${\varphi}_1(\ker {\psi}_2^{nr})=1$. Let $\alpha \in \ker {\psi}_2^{v}$ for some unramified place $v$ of $N/K$. Write $G_v=\langle g \rangle$ and $\alpha = \bigoplus\limits_{i=1}^{n} \bigoplus\limits_{k=1}^{r_{v,i}} {h}_{i,k} $ for some $g \in G$ and ${h}_{i,k} \in {H_i} \cap {x}_{i,k} \langle g \rangle {x}_{i,k}^{-1}={H_i} \cap \langle g \rangle$. If $g \not\in H_i $ for all $i=1,\dots,n$, then $\alpha$ is the trivial vector and $\varphi_1(\alpha)=(1,\dots,1)$. Otherwise, if $g \in H_{i_0}\cong C_p$ for some index $i_0$, then $g \not\in H_i$ for all $i \neq i_0$ and thus $h_{i,k}=1$ for $i \neq i_0$. In this way, it follows that $1=\psi_2(\alpha)=\prod\limits_{i=1}^{n} \prod\limits_{k=1}^{r_{v,i}} {x}_{i,k}^{-1} {h}_{i,k}{x}_{i,k}= \prod\limits_{k=1}^{r_{v,i_0}} {h}_{i_0,k}$. Therefore, if $\varphi_1(\alpha)=(h_1,\dots,h_n)$, we have $h_i = 1$ if $i \neq i_0$ and $h_{i_0}=\prod\limits_{k=1}^{r_{v,i_0}} {h}_{i_0,k} = 1$. In conclusion, $\varphi_1(\alpha)=(1,\dots,1)$.
On the other hand, we have $\ker \psi_1 = \{(h_1,\dots,h_n) | h_i \in H_i, \prod\limits_{i=1}^{n} h_i = 1\}$. This group is the kernel of the surjective group homomorphism
\begin{align*}
f \colon H_1 \times \dots \times H_n &\longrightarrow G \\
(h_1 ,\dots,h_n ) &\longmapsto h_1 \dots h_n
\end{align*}
\noindent and thus $\ker \psi_1 = \ker f \cong (\mathbb{Z}/p)^{n-2}$, as desired.\end{proof}
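\begin{remark}
As a concrete illustration of the first case, take $p=2$ and $L=(\mathbb{Q}(\sqrt{2}),\mathbb{Q}(\sqrt{3}),\mathbb{Q}(\sqrt{6}))$: the compositum $L_1 L_2 L_3=\mathbb{Q}(\sqrt{2},\sqrt{3})$ has degree $p^2=4$ over $\mathbb{Q}$ and $n=3$, so Theorem \ref{thm:eva} gives $\operatorname{H}^1(\mathbb{Q},\Pic \overline{X}) \cong \mathbb{Z}/2$.
\end{remark}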
\begin{corollary}\label{cor:eva}
Let $L=(L_1,\dots,L_n)$ be an $n$-tuple of non-isomorphic cyclic extensions of $K$ with prime degree $p$.
\begin{enumerate}
\item\label{cor410_case1} If $[L_1\dots L_n:K]=p^2$, then weak approximation for the multinorm one torus $T$ holds if and only if the multinorm principle for $L$ fails.
\item Otherwise, both the multinorm principle for $L$ and weak approximation for $T$ hold.
\end{enumerate}
\end{corollary}
\begin{proof}
Follows from Voskresenski\u{\i}'s exact sequence \eqref{eq:Vosk}, Theorem \ref{thm:eva} and \cite[Theorem 8.3]{eva}.
\end{proof}
\begin{remark}\label{rem:eva}
In \cite[Proposition 8.5]{eva} it is shown that, in the case \eqref{cor410_case1} above, the multinorm principle for $L$ fails if and only if all decomposition groups of the bicyclic extension $L_1 \dots L_n$ are cyclic. We thus have a simple criterion to test the validity of weak approximation for the associated multinorm one torus.
\end{remark}
We are grateful to Sven Krippendorf and Fabian Rühle for useful discussions on the applications of machine learning to string theory and for comments on the draft.
The work of H.E.\ has been conducted under a Carl Friedrich von Siemens Research Fellowship of the Alexander von Humboldt Foundation for postdoctoral researchers during part of this project.
H.E.\ and R.F.\ are partially supported by the \textsc{Miur Prin} Contract \textsc{2015Mp2cx4} “Non-perturbative Aspects of Gauge Theories and Strings”.
\section{Discussion}
\label{sec:conclusion}
In this paper, we have shown how proper data analysis can lead to improvements in the prediction of the Hodge numbers $h^{1,1}$ and $h^{2,1}$ for CICY $3$-folds.
Moreover, considering more complex neural networks -- in particular, architectures inspired by the Inception model~\cite{Szegedy:2014:GoingDeeperConvolutions, Szegedy:2015:RethinkingInceptionArchitecture, Szegedy:2016:Inceptionv4InceptionResNetImpact} -- allowed us to reach close to $100\%$ accuracy for $h^{1,1}$ with much less data and fewer parameters than in previous works.
While our analysis improved the accuracy for $h^{2,1}$ over what can be expected from a simple sequential neural network, we barely reached $50\%$.
Hence, it would be interesting to push further our study to improve the accuracy.
Possible solutions would be to use a deeper Inception network, find a better architecture including engineered features, and refine the ensembling (for example using StackNet~\cite{Package:StackNet}).
Another interesting question to probe is related to representation learning, i.e.\ finding a better description of the Calabi--Yau.
Indeed, one of the main difficulties in making predictions is the redundancy of the possible descriptions of a single manifold.
For example, one could try to set up a map from any matrix to its favourable representation (if it exists).
Or, on the contrary, one could generate more matrices for the same manifold in order to increase the size of the training set.
Another possibility is to use the graph representation of the configuration matrix, which is automatically invariant under permutations~\cite{Hubsch:1992:CalabiYauManifoldsBestiary} (another graph representation has been decisive in~\cite{Krippendorf:2020:DetectingSymmetriesNeural} to get a good accuracy).
Techniques such as (variational) autoencoder~\cite{Kingma:2014:AutoEncodingVariationalBayes, Rezende:2014:StochasticBackpropagationApproximate, Salimans:2015:MarkovChainMonte}, cycle GAN~\cite{Zhu:2017:UnpairedImagetoImageTranslation}, invertible neural networks~\cite{Ardizzone:2019:AnalyzingInverseProblems}, graph neural networks~\cite{Gori:2005:NewModelLearning, Scarselli:2004:GraphicalBasedLearningEnvironments} or more generally techniques from geometric deep learning~\cite{Bronstein:2017:GeometricDeepLearning} could be helpful.
Finally, our techniques apply directly to CICY $4$-folds~\cite{Gray:2013:AllCompleteIntersection, Gray:2014:TopologicalInvariantsFibration}.
However, there are many more manifolds in this case, such that one can expect to reach a better accuracy for the different Hodge numbers (the different learning curves for the $3$-folds indicate that the model training would benefit from more data).
We hope to report soon on these issues.
Another interesting class of manifolds to explore with our techniques is that of generalized CICY $3$-folds~\cite{Anderson:2016:NewConstructionCalabiYau}.
We leave these questions to future explorations.
\section{Machine Learning Analysis}
\label{sec:ml}
In this section, we compare the performances of different ML algorithms: linear regression, SVM, random forests, gradient boosted trees and neural networks.
Before reporting the results for each algorithm, we detail the feature selection (\Cref{sec:ml:selection}) and the evaluation strategy (\Cref{sec:ml:strategy}).
We obtain the best results in \Cref{sec:ml:nn:inception} where we present a neural network inspired by the Inception model~\cite{Szegedy:2014:GoingDeeperConvolutions, Szegedy:2015:RethinkingInceptionArchitecture, Szegedy:2016:Inceptionv4InceptionResNetImpact}.
We provide some details on the different algorithms in \Cref{app:ml-algo} and refer the reader to the literature~\cite{Goodfellow:2016:DeepLearning, Chollet:2017:DeepLearningPython, Geron:2019:HandsOnMachineLearning, Coursera:HowWinData, Skiena:2017:DataScience, Mehta:2019:HighbiasLowvarianceIntroduction, Carleo:2019:MachineLearningPhysical, Ruehle:2020:DataScienceApplications} for more details.
\subsection{Feature Extraction}
\label{sec:ml:selection}
In \Cref{sec:data}, the EDA showed that several engineered features are promising for predicting the Hodge numbers.
In what follows, we will compare the performances of various algorithms using different subsets of features:
\begin{itemize}
\item only the configuration matrix (no feature engineering);
\item only the number of projective spaces $m$;
\item only a subset of engineered features and not the configuration matrix nor its PCA;
\item a subset of engineered features and the PCA of the matrix.
\end{itemize}
Following the EDA and feature engineering, we finally select the features used in the analysis by keeping the highest ranked ones.
We will therefore keep the number of projective spaces (\texttt{num\_cp} in the dataset) and the list of the dimensions of the projective spaces (\texttt{dim\_cp}) for both $h^{1,1}$ and $h^{2,1}$.
We will also include the dimension of the cohomology group of the ambient space \texttt{dim\_h0\_amb} but only for $h^{2,1}$.
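As an illustration, the selection step can be sketched as follows (the file name and the label column names are our assumptions about the dataset layout):
\begin{lstlisting}[language=Python]
import pandas as pd

# Sketch: load the dataset into a DataFrame; the file name and the
# label column names ("h11", "h21") are assumptions.
df = pd.read_json("cicy3_dataset.json")

features_h11 = ["num_cp", "dim_cp"]
features_h21 = ["num_cp", "dim_cp", "dim_h0_amb"]

# List-valued columns such as dim_cp must be padded to a fixed length
# before being fed to an estimator.
X_h11, X_h21 = df[features_h11], df[features_h21]
y = df[["h11", "h21"]]
\end{lstlisting}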
\subsection{Analysis Strategy}
\label{sec:ml:strategy}
For the ML analysis, we split the dataset into training and test sets: we fit the algorithms on the first and then show the predictions on the test set, which will not be touched until the algorithms are ready.
\paragraph{Test split and validation}
The training set is made of \SI{90}{\percent} of the samples, which leaves the remaining \SI{10}{\percent} in the test set (i.e.\ $785$ manifolds out of the $7851$ in the set).\footnotemark{}
\footnotetext{%
Remember that we have removed outliers, see \Cref{sec:data:eda:outliers}.
Scores quoted in this paper are slightly different from~\cite{Erbin:2020:InceptionCICY} because, in that paper, outliers are kept in the test set.
}%
For most algorithms, we use cross-validation on the training set, leaving one fold out at a time, as evaluation of the algorithm: we subdivide the training set into $9$ subsets, each containing \SI{10}{\percent} of the \emph{total} number of samples; we then train the algorithm on $8$ of them and evaluate it on the $9$th.
We then repeat the procedure changing the evaluation fold until the algorithm has been trained and evaluated on all of them.
The performance measure in validation is given by the average over all the left out folds.
When training neural networks, we will however use a single \emph{holdout validation} set made of \SI{10}{\percent} of the \emph{total} samples.
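A minimal sketch of this splitting and validation scheme, assuming the features and labels are stored in NumPy arrays \texttt{X} and \texttt{y}:
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.model_selection import KFold, train_test_split

# Hold out 10% of the samples as the untouched test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0)

# 9 folds of the training set, each holding roughly 10% of the total
# samples; `model` stands for any estimator and `rounded_accuracy`
# for the metric defined below.
scores = []
for fit_idx, val_idx in KFold(n_splits=9, shuffle=True,
                              random_state=0).split(X_train):
    model.fit(X_train[fit_idx], y_train[fit_idx])
    scores.append(rounded_accuracy(y_train[val_idx],
                                   model.predict(X_train[val_idx])))
print(np.mean(scores))  # validation score = average over the folds
\end{lstlisting}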
\paragraph{Predictions and metrics}
Since we are interested in predicting exactly the Hodge numbers, the appropriate metric measuring the success of the predictions is the accuracy (for each Hodge number separately):
\begin{equation}
\text{accuracy}
= \frac{1}{N}
\sum_{i=1}^N \delta\big(y_i^{\text{true}} - y_i^{\text{pred}} \big),
\end{equation}
where $N$ is the number of samples.
In the paper, the quoted accuracy of the predictions on the test set is rounded to the nearest integer percentage.
Since the Hodge numbers are integers, the problem of predicting them looks like a classification task.
However, as argued in the introduction, we prefer to use a regression approach.
Indeed, regression does not require specifying the data boundaries and allows extrapolating beyond them, contrary to a classification approach where the categories are fixed at the beginning.\footnotemark{}
\footnotetext{%
A natural way to transform the problem into a regression task is to \emph{normalize} the Hodge numbers, for example by shifting by the mean value and dividing by the standard deviation.
Under this transformation, the Hodge numbers are mapped to real numbers.
While normalizing often improves ML algorithms, we found that the impact was mild or even negative.
}%
Most algorithms need a differentiable loss function since the optimization of parameters (such as neural networks weights) uses some variant of gradient descent.
For this reason, the accuracy cannot be used and the models are trained by minimizing the mean squared error (MSE), which is simply the squared $\ell_2$-norm of the difference between the predictions and the real values.
There will however also be a restricted number of cases in which we will use either the mean absolute error (MAE), which is the $\ell_1$-norm of the same difference, or a weighted linear combination of MSE and MAE (in the spirit of the \textit{Huber} loss): we will point them out at the right time.
When predicting both Hodge numbers together, the total loss is the sum of each individual loss with equal weight: $h^{1,1}$ is simpler to learn so it is useful to put emphasis on learning $h^{2,1}$, but the magnitudes of the latter are higher, such that the associated loss is naturally bigger (since we did not normalize the data).
Since predictions are real numbers, we need to turn them into integers.
In general, rounding to the nearest integer gives the best result, but we found algorithms (such as linear regression) for which flooring to the integer below works better.
The optimal choice of the integer function is found for each algorithm as part of the hyperparameter optimization (described below).
The accuracy is computed after the rounding stage.
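Concretely, the rounding stage and the metric amount to a few lines (a sketch; the rounding function is part of the hyperparameter choices):
\begin{lstlisting}[language=Python]
import numpy as np

def rounded_accuracy(y_true, y_pred, rounding=np.rint):
    # Map the real-valued predictions to integers (np.rint for the
    # nearest integer, np.floor for flooring), then count exact
    # matches with the true Hodge numbers.
    return np.mean(rounding(y_pred) == y_true)
\end{lstlisting}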
Learning curves for some models are displayed.
They show how the performance of a model improves by using more training data, for fixed hyperparameters.
To obtain it, we train models using from \SI{10}{\percent} to \SI{90}{\percent} of all the data (``training ratio'') and evaluate the accuracy on the remaining data.\footnotemark{}
\footnotetext{%
Statistics are not provided due to the limitations of our available computational resources.
However, we check manually on a few examples that the reported results are typical.
}%
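In practice, a learning curve can be generated along the following lines (a sketch, with \texttt{X}, \texttt{y}, \texttt{model} and \texttt{rounded\_accuracy} as in the previous sketches):
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.model_selection import train_test_split

curve = []
for ratio in np.linspace(0.1, 0.9, 9):   # training ratio
    X_tr, X_ev, y_tr, y_ev = train_test_split(
        X, y, train_size=ratio, random_state=0)
    model.fit(X_tr, y_tr)
    # evaluate the accuracy on the data not used for training
    curve.append(rounded_accuracy(y_ev, model.predict(X_ev)))
\end{lstlisting}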
To avoid redundant information and cluttering the paper with graphs, the results for models predicting the Hodge numbers separately on the test set are reported in tables, while the results for the models predicting both numbers together are reported in the learning curves.
For the same reason, the latter are not displayed for the favourable dataset.
\paragraph{Visualisation of the performance}
Complementary to the predictions and the accuracy results, we also provide different visualisations of the performance of the models in the form of univariate plots (histograms) and multivariate distributions (scatter plots).
The usual assumption behind the statistical inference of a distribution is that the difference between the observed data and the predicted values can be modelled by a random variable called \textit{residual}~\cite{Lista:2017:StatisticalMethods,Coursera:DataScience}.\footnotemark{}
\footnotetext{The difference between the non observable \textit{true} value of the model and the observed data is known as \textit{statistical error}.
The difference between residuals and errors is subtle but the two definitions have different interpretations in the context of the regression analysis: in a sense, residuals are an estimate of the errors.}
As such, we expect its values to be sampled from a normal distribution with a constant variance (i.e.\ constant width), since it should not depend on the specific observations, and centered around zero, since the regression algorithm tries to minimise the squared difference between observed and predicted values.
Histograms of the residual errors should therefore exhibit such properties graphically.
Another interesting kind of visual realisation of the residuals is to show their distribution against the variables used for the regression model: in the case of a simple regression model in one variable, it is customary to plot the residuals as a function of the independent variable, but in a multivariable regression analysis (such as the case at hand) the choice usually falls on the values predicted by the fit (not the observed data).
We shall therefore plot the residuals as functions of the predicted values.\footnotemark{}
\footnotetext{We will use the same strategy also for the fit using just the number of projective spaces in order to provide a way to compare the plots across different models.}
Given the assumption of the random distribution of the residuals, they should not present strong correlations with the predictions and should not exhibit trends.
In general the presence of correlated residuals is an indication of an incomplete or incorrect model which cannot explain the variance of the predicted data, meaning that the model is either not suitable for predictions or that we should add information (that is, add features) to it.
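Both kinds of plots can be produced as follows (a sketch for a single Hodge number; \texttt{model}, \texttt{X\_test} and \texttt{y\_test} as above):
\begin{lstlisting}[language=Python]
import matplotlib.pyplot as plt

pred = model.predict(X_test)
residuals = pred - y_test   # prediction minus observation

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.hist(residuals, bins=30)   # ideally normal and centred at zero
ax1.set(xlabel="residual", ylabel="count")
ax2.scatter(pred, residuals, s=5, alpha=0.3)
ax2.axhline(0.0, color="k", lw=1)   # no visible trend expected
ax2.set(xlabel="predicted value", ylabel="residual")
plt.show()
\end{lstlisting}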
\paragraph{Hyperparameter optimisation}
One of the key steps in a ML analysis is the optimisation of the \emph{hyperparameters} of the algorithm.
These are internal parameters of each estimator (such as the number of trees in a random forest or the amount of regularisation in a linear model): they are not modified during the training of the model, but they directly influence it in terms of performance and outcome.
Hyperparameter optimization is performed by training many models with different hyperparameters, and keeping those which perform best according to some metric on the validation set(s).
As it does not need to be differentiable, we use the accuracy as a scoring function to evaluate the models.
There is however a subtle issue because it is not clear how to combine the accuracy of $h^{1,1}$ and $h^{2,1}$ to get a single metric.
For this reason, we will perform the analysis on both Hodge numbers separately.
Then, we can design a single model computing both Hodge numbers simultaneously by making a compromise by hand between the hyperparameters found for the two models computing the Hodge numbers separately.
The optimization is implemented with the API from \texttt{scikit-learn}, using the function \texttt{metrics.make\_scorer} with the accuracy as a custom scoring function.
There are several approaches to perform this search automatically, in particular: grid search, random search, genetic evolution, and Bayes optimization.
Grid and random search are natively implemented in \texttt{scikit-learn}.
The first takes a list of possible discrete values of the hyperparameters and will evaluate the algorithm over all possible combinations.
The second samples values in both discrete sets and continuous intervals according to some probability distributions, repeating the process a fixed number of times.
The grid search method is particularly useful for discrete hyperparameters, less refined searches or for a small number of combinations, while the second method can be used to explore the hyperparameter space on a larger scale~\cite{Bergstra:2012:RandomSearchHyperparameter}.
Genetic algorithms are based on improving the choice of hyperparameters over \emph{generations} that successively select only the most promising values: in general, they require a lot of tuning, and the replication process can also lead to worse results purely at random~\cite{Rudolph:1994:GeneticAlgorithms}.
They are however effective when dealing with very deep or complex neural networks.
Bayes optimisation~\cite{Snoek:2012:PracticalBayesianOptimization, Shahriari:2016:TakingHumanOut} is a very well established mathematical procedure to find the stationary points of a function without knowing its analytical form~\cite{Mockus:1975:BayesianMethodsSeeking}.
It relies on assigning a \emph{prior} probability to a given parameter and then multiplying it by the probability distribution (or \emph{likelihood}) of the scoring function to compute the probability of finding better results given a set of hyperparameters.
This has proven to be very effective in our case and we adopted this solution as it does not require fine tuning and leads to better results for models which are not deep neural networks.
We choose to use \texttt{scikit-optimize}~\cite{Head:Scikitoptimize} whose method \texttt{BayesSearchCV} has a very well implemented Python interface compatible with \texttt{scikit-learn}.
We will in general perform $50$ iterations of the Bayes search algorithm, unless otherwise specified.
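In code, the search takes the following form (a sketch with an illustrative search space for the $\ell_1$-regularised linear model of the next subsection):
\begin{lstlisting}[language=Python]
from sklearn.linear_model import Lasso
from sklearn.metrics import make_scorer
from skopt import BayesSearchCV
from skopt.space import Categorical, Real

opt = BayesSearchCV(
    Lasso(),
    search_spaces={
        "alpha": Real(1e-6, 1.0, prior="log-uniform"),
        "fit_intercept": Categorical([True, False]),
    },
    scoring=make_scorer(rounded_accuracy),  # accuracy as custom scorer
    cv=9,
    n_iter=50,
    random_state=0,
)
opt.fit(X_train, y_train)
print(opt.best_params_, opt.best_score_)
\end{lstlisting}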
\subsection{Linear Models}
Linear models attempt to describe the labels as a linear combination of the input features, keeping the model at first order in its coefficients (\Cref{sec:app:linreg}).
However, non-linearity can still be introduced by engineering features which are non-linear in terms of the original data.
From the results of \Cref{sec:data:eda}, we made a hypothesis on the linear dependence of $h^{1,1}$ on the number of projective spaces $m$.
As a first approach, we can try to fit a linear model to the data as a baseline computation and to test whether there is actual linear correlation between the two quantities.
We will consider different linear models, including their regularised versions.
\paragraph{Parameters}
The linear regression is performed with the class \lstinline!linear_model.ElasticNet! from \lstinline!scikit-learn!.
The hyperparameters involved in this case are: the amount of regularisation $\alpha$, the relative ratio (\texttt{l1\_ratio}) between the $\ell_1$ and $\ell_2$ regularization losses, and the fit of the intercept.
By performing the hyperparameter optimization, we found that $\ell_2$ regularization has a minor impact and can be removed, which corresponds to setting the relative ratio to $1$ (this is equivalent to using \texttt{linear\_model.Lasso}).
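For instance, the baseline fit of $h^{1,1}$ against the number of projective spaces alone can be sketched as follows (assuming the DataFrame layout of \Cref{sec:ml:selection}; the hyperparameter values are those of \Cref{tab:hyp:lin}):
\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.linear_model import Lasso

lasso = Lasso(alpha=0.1, fit_intercept=True)
lasso.fit(X_train[["num_cp"]], y_train["h11"])

# On the original dataset, flooring the real-valued predictions works
# better than rounding to the nearest integer (see below).
pred = np.floor(lasso.predict(X_test[["num_cp"]]))
print(np.mean(pred == y_test["h11"]))
\end{lstlisting}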
In \Cref{tab:hyp:lin} we show the choices of the hyperparameters for the different models we built using the $\ell_1$ regularised linear regression.
For the original dataset, we floored the predictions to the integer below, while for the favourable dataset we rounded to the next integer.
This choice for the original dataset makes sense: the majority of the samples lie on the line $h^{1,1} = m$, but there are still many samples with $h^{1,1} > m$ (see \Cref{fig:eda:distr}).
As a consequence, the ML prediction pulls the line up, which can only damage the accuracy.
Choosing the floor function is a way to counteract this effect.
Note that the accuracy for $h^{2,1}$ is only slightly affected by the choice of rounding, so we simply make the same choice as for $h^{1,1}$.
\begin{table}[htp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{\textbf{matrix}} & \multicolumn{2}{c}{\textbf{num\_cp}} & \multicolumn{2}{c}{\textbf{eng. feat.}} & \multicolumn{2}{c}{\textbf{PCA}} \\ \midrule
& & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} \\ \midrule
\multirow{2}{*}{$\alpha$} & $h^{1,1}$ & $2.0 \times 10^{-6}$ & $3.0 \times 10^{-5}$ & 0.10 & $2.0 \times 10^{-6}$ & 0.05 & 0.05 & 0.07 & 0.08 \\
& $h^{2,1}$ & $1.0 \times 10^{-6}$ & $1.0 \times 10^{-5}$ & 0.1 & $1.0 \times 10^{-6}$ & $3.0 \times 10^{-4}$ & $1.2 \times 10^{-3}$ & $2.0 \times 10^{-6}$ & $1.2 \times 10^{-3}$ \\ \midrule
\multirow{2}{*}{\texttt{fit\_intercept}} & $h^{1,1}$ & False & False & True & False & True & True & False & True \\
& $h^{2,1}$ & True & True & True & True & True & False & True & False \\ \midrule
\multirow{2}{*}{\texttt{normalize}} & $h^{1,1}$ & --- & --- & False & --- & False & False & --- & False \\
& $h^{2,1}$ & False & True & False & False & False & --- & True & --- \\ \bottomrule
\end{tabular}%
}
\caption{Hyperparameter choices of the $\ell_1$ regression model used. In addition to the known hyperparameters $\alpha$ and \texttt{fit\_intercept}, we also include the \texttt{normalize} parameter, which indicates whether the samples have been centered and scaled by their $\ell_2$ norm before the fit: it is ignored when the intercept is not fitted.}
\label{tab:hyp:lin}
\end{table}
\paragraph{Results}
In \Cref{tab:res:lin}, we show the accuracy for the best hyperparameters.
For $h^{1,1}$, the most precise predictions are given by the number of projective spaces, which confirms the hypothesis of a strong linear dependence of $h^{1,1}$ on the number of projective spaces.
In fact, this gives close to $100\%$ accuracy for the favourable dataset, which shows that there is no need for more advanced ML algorithms.
Moreover, adding more engineered features \emph{decreases} the accuracy in most cases where regularization is not appropriate.
The accuracy for $h^{2,1}$ remains low but including engineered features definitely improves it.
In \Cref{fig:res:lin}, we show the plots of the residual errors of the model on the original dataset.
For the $\ell_1$ regularised linear model, the univariate plots show that the errors seem to follow normal distributions peaked at $0$ as they generally should: in the case of $h^{1,1}$, the width is also quite contained.
The scatter plots instead show that, in general, there is no correlation between a particular sector of the predictions and the error made by the model; the variance of the residuals is thus randomly distributed over the predictions.
Only the case of the fit of the number of projective spaces seems to show a slight correlation for $h^{2,1}$, signalling that the model using only one feature might actually be incomplete: in fact, it is better to also include other engineered features.
The learning curves (\Cref{fig:lc:lin}) clearly show that the model underfits.
Moreover, we also noticed that the models are only marginally affected by the number of samples used for training.
In particular, this provides a very strong baseline for $h^{1,1}$.
For comparison, we also give the learning curve for the favourable dataset in \Cref{fig:lc:lin-fav}: this shows that a linear regression is completely sufficient to determine $h^{1,1}$ in that case.
\begin{table}[htp]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
& & \textbf{matrix} & \textbf{num\_cp} & \textbf{eng. feat.} & \textbf{PCA} \\ \midrule
\multirow{2}{*}
{\emph{original}} & $h^{1,1}$ & 51\% & 63\% & 63\% & 64\% \\
& $h^{2,1}$ & 11\% & 8\% & 21\% & 21\% \\ \midrule
\multirow{2}{*}
{\emph{favourable}} & $h^{1,1}$ & 95\% & 100\% & 100\% & 100\% \\
& $h^{2,1}$ & 14\% & 15\% & 19\% & 19\% \\ \bottomrule
\end{tabular}
\caption{Best accuracy of the linear model using $\ell_1$ regularisation on the test split.}
\label{tab:res:lin}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{images/lss_reg_orig}
\caption{%
Plots of the residual error for the $\ell_1$ regularised linear model: rows show the different scenarios (fit with only the matrix, with only the number of projective spaces, with the engineered features, with the engineered features and the PCA).
Plots refer to the test split of the original dataset.
}
\label{fig:res:lin}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/linreg_learning_curve_matrix_outliers}
\caption{input: \lstinline!matrix!, $\alpha = \num{2e-4}$}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/linreg_learning_curve_num_cp_outliers}
\caption{input: \lstinline!num_cp!, $\alpha = 1$}
\end{subfigure}
\caption{Learning curves for the linear regression (original dataset), including outliers and using a single model for both Hodge numbers.}
\label{fig:lc:lin}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/linreg_learning_curve_num_cp_fav}
\caption{input: \lstinline!num_cp!, $\alpha = 1$}
\end{subfigure}
\caption{Learning curves for the linear regression (favourable dataset), including outliers and using a single model for both Hodge numbers.}
\label{fig:lc:lin-fav}
\end{figure}
\subsection{Support Vector Machines}
\label{sec:res:svr}
Support Vector Machines (SVM) are a family of algorithms which use a \emph{kernel trick} to map the space of input data vectors into a higher dimensional space where samples can be accurately separated and fitted to an appropriate curve (\Cref{sec:app:svr}).
In this analysis, we show two such kernels, namely a linear kernel (also known as \emph{no kernel} since no transformations are involved) and a Gaussian kernel (known as \texttt{rbf} in ML literature, from \emph{radial basis function}).
\subsubsection{Linear Kernel}
For this model we use the class \texttt{svm.LinearSVR} in \texttt{scikit-learn}.
\paragraph{Parameters}
In \Cref{tab:hyp:linsvr} we show the choices of the hyperparameters used for the model.
As we show in \Cref{sec:app:svr}, the parameters $C$ and $\epsilon$ are related to the penalty assigned to the samples lying outside the no-penalty boundary (the loss is computed from the $\ell_1$ or $\ell_2$ norm of the distance from the boundary, as specified by the \texttt{loss} hyperparameter).
Other parameters are related to the use of the intercept to improve the prediction.
We rounded the predictions down (floor) for the original dataset and to the nearest integer for the favourable dataset.
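As an illustration, a minimal sketch of this setup could look as follows, with the hyperparameters of the ``eng.\ feat.'' column of \Cref{tab:hyp:linsvr} for $h^{1,1}$ on the original dataset and placeholder data:

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.svm import LinearSVR

# placeholder data standing in for the engineered features and the h11 labels
rng = np.random.default_rng(0)
X = rng.normal(size=(7890, 10))
y = rng.integers(0, 20, size=7890).astype(float)

model = LinearSVR(C=0.13, epsilon=0.9,
                  loss="epsilon_insensitive",   # the l1 (|eps|) loss of the table
                  fit_intercept=True, intercept_scaling=0.01,
                  max_iter=10000)
model.fit(X, y)
y_pred = np.floor(model.predict(X))             # floor rounding for the original dataset
\end{lstlisting}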
\begin{table}[htp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{\textbf{matrix}} & \multicolumn{2}{c}{\textbf{num\_cp}} & \multicolumn{2}{c}{\textbf{eng. feat.}} & \multicolumn{2}{c}{\textbf{PCA}} \\ \midrule
& & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} \\ \midrule
\multirow{2}{*}{\texttt{C}} & $h^{1,1}$ & 0.13 & 24 & 0.001 & 0.0010 & 0.13 & 0.001 & 0.007 & 0.4 \\
& $h^{2,1}$ & 0.30 & 100 & 0.05 & 0.0016 & 0.5 & 0.4 & 1.5 & 0.4 \\ \midrule
\multirow{2}{*}{$\epsilon$} & $h^{1,1}$ & 0.7 & 0.3 & 0.4 & 0.00 & 0.9 & 0.0 & 0.5 & 0.0 \\
& $h^{2,1}$ & 0.0 & 0.0 & 10 & 0.03 & 0.0 & 0.0 & 0.0 & 0.6 \\ \midrule
\multirow{2}{*}{\texttt{fit\_intercept}} & $h^{1,1}$ & True & False & True & False & True & False & False & False \\
& $h^{2,1}$ & True & False & True & True & True & True & True & False \\ \midrule
\multirow{2}{*}{\texttt{intercept\_scaling}} & $h^{1,1}$ & 0.13 & --- & 100 & --- & 0.01 & --- & --- & --- \\
& $h^{2,1}$ & 100 & --- & 13 & 92 & 100 & 0.01 & 100 & --- \\ \midrule
\multirow{2}{*}{\texttt{loss}} & $h^{1,1}$ & $|\epsilon|$ & $|\epsilon|$ & $|\epsilon|$ & $||\epsilon||^2$ & $|\epsilon|$ & $|\epsilon|$ & $|\epsilon|$ & $|\epsilon|$ \\
& $h^{2,1}$ & $|\epsilon|$ & $|\epsilon|$ & $||\epsilon||^2$ & $|\epsilon|$ & $|\epsilon|$ & $|\epsilon|$ & $|\epsilon|$ & $|\epsilon|$ \\ \bottomrule
\end{tabular}%
}
\caption{Hyperparameter choices of the linear SVR regression. The parameter \texttt{intercept\_scaling} is clearly only relevant when the intercept is used. The different losses used simply distinguish between the $\ell_1$ norm of the $\epsilon$-dependent boundary where no penalty is assigned and its $\ell_2$ norm.}
\label{tab:hyp:linsvr}
\end{table}
\paragraph{Results}
In \Cref{tab:res:linsvr}, we show the accuracy on the test set for the linear kernel.
As we can see, the performance of the algorithm strongly resembles that of a linear model in terms of the accuracy reached.
Interestingly, adding the PCA components does not improve on the predictions obtained with the engineered features alone: the latter seem to work better than the configuration matrix or its principal components.
The residual plots in \Cref{fig:res:linsvr} confirm what we already said about the linear models with regularisation: the model with only the number of projective spaces shows a tendency to heteroscedasticity,\footnotemark{} which can be balanced by adding more engineered features, which also helps in producing more precise predictions (reflected in peaked univariate distributions).
\footnotetext{%
That is, the tendency to have a correlation between the predictions and the residuals: theoretically, there should not be any, since we suppose the residuals to be independent of the model and normally distributed.
}%
In all cases, we notice that the model slightly overestimates the real values (residuals are computed as the difference between the prediction and the real value), as the second, smaller peaks in the histograms for $h^{1,1}$ suggest: this may also explain why flooring the predictions produces the highest accuracy.
As is generally the case for linear models, the influence of the number of samples used for training is marginal here as well: we only noticed a decrease in accuracy when also including the PCA or the matrix directly.
\begin{table}[htp]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
& & \textbf{matrix} & \textbf{num\_cp} & \textbf{eng. feat.} & \textbf{PCA} \\ \midrule
\multirow{2}{*}
{\emph{original}} & $h^{1,1}$ & 61\% & 63\% & 65\% & 62\% \\
& $h^{2,1}$ & 11\% & 9\% & 21\% & 20\% \\ \midrule
\multirow{2}{*}
{\emph{favourable}} & $h^{1,1}$ & 96\% & 100\% & 100\% & 100\% \\
& $h^{2,1}$ & 14\% & 14\% & 19\% & 20\% \\ \bottomrule
\end{tabular}
\caption{Accuracy of the linear SVM on the test split.}
\label{tab:res:linsvr}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\textwidth]{images/lin_svr_orig}
\caption{Plots of the residual errors for the SVM with linear kernel.}
\label{fig:res:linsvr}
\end{figure}
\subsubsection{Gaussian Kernel}
We then consider SVM using a Gaussian function as kernel.
The choice of the kernel function can heavily influence the outcome of the predictions, since it maps the samples into a much higher dimensional space and creates highly non-linear combinations of the features before the fit.
In general, this can help in the presence of ``obscure'' features which correlate poorly with one another.
In our case, we can hope to leverage the already good correlations we found in the EDA with the kernel trick.
The implementation is done with the class \texttt{svm.SVR} from \texttt{scikit-learn}.
\paragraph{Parameters}
As we show in \Cref{sec:app:svr}, this particular choice of kernel leads to a profoundly different behaviour with respect to linear models: we round the predictions to the nearest integer in both datasets, since the loss function strongly penalises unaligned samples.
In \Cref{tab:hyp:svrrbf}, we show the choices of the hyperparameters for the models using the Gaussian kernel.
As usual, the hyperparameter \texttt{C} is connected to the penalty assigned to the samples outside the soft margin boundary delimited by $\epsilon$ (see \Cref{sec:app:svr}).
Given the presence of a non-linear kernel, we have to introduce an additional hyperparameter $\gamma$, which controls the width of the Gaussian function used for the support vectors.
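A minimal sketch of this model, with the hyperparameters of the ``matrix'' column of \Cref{tab:hyp:svrrbf} for $h^{1,1}$ on the original dataset and placeholder data, could look as follows:

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.svm import SVR

# placeholder data standing in for the flattened configuration matrices
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12 * 15))
y = rng.integers(0, 20, size=1000).astype(float)

model = SVR(kernel="rbf", C=14, gamma=0.03, epsilon=0.01)
model.fit(X, y)
y_pred = np.rint(model.predict(X))   # nearest-integer rounding for the Gaussian kernel
\end{lstlisting}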
\begin{table}[htp]
\centering
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{\textbf{matrix}} & \multicolumn{2}{c}{\textbf{num\_cp}} & \multicolumn{2}{c}{\textbf{eng. feat.}} & \multicolumn{2}{c}{\textbf{PCA}} \\ \midrule
& & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} \\ \midrule
\multirow{2}{*}{\texttt{C}} & $h^{1,1}$ & 14 & 1000 & 170 & 36 & 3 & 40 & 1.0 & 1000 \\
& $h^{2,1}$ & 40 & 1000 & 1.0 & 1.0 & 84 & 62 & 45 & 40 \\ \midrule
\multirow{2}{*}{$\epsilon$} & $h^{1,1}$ & 0.01 & 0.01 & 0.45 & 0.03 & 0.05 & 0.3 & 0.02 & 0.01 \\
& $h^{2,1}$ & 0.01 & 0.01 & 0.01 & 0.09 & 0.29 & 0.10 & 0.20 & 0.09 \\ \midrule
\multirow{2}{*}{$\gamma$} & $h^{1,1}$ & 0.03 & 0.002 & 0.110 & 0.009 & 0.07 & 0.003 & 0.02 & 0.001 \\
& $h^{2,1}$ & 0.06 & 0.100 & 0.013 & 1000 & 0.016 & 0.005 & 0.013 & 0.006 \\ \bottomrule
\end{tabular}%
\caption{Hyperparameter choices of the SVR regression with Gaussian kernel.}
\label{tab:hyp:svrrbf}
\end{table}
\paragraph{Results}
In \Cref{tab:res:svrrbf}, we show the accuracy of the predictions on the test sets.
In the favourable dataset, we can immediately appreciate the strong linear dependence of $h^{1,1}$ on the number of projective spaces: even though there are a few non-favourable embeddings in the dataset, the kernel trick is able to map them to a better representation and improve the accuracy.
The predictions for the original dataset have also improved and are the best results we found using shallow learning.
The predictions using only the configuration matrix match~\cite{Bull:2018:MachineLearningCICY}, but we can slightly improve the accuracy by using a combination of the engineered features and the PCA.
In \Cref{fig:res:svrrbf}, we show the residual plots and their histograms for the original dataset: the residuals follow peaked distributions which, in this case, do not present a second smaller peak (hence we round the predictions to the nearest integer), and their variance is well distributed over the predictions.
The Gaussian kernel is also more influenced by the size of the training set.
Using 50\% of the samples as training set, we witnessed a drop in accuracy of 3\% when using the engineered features and the PCA, and of around 1\% to 2\% in all other cases.
The learning curves (\Cref{fig:lc:svrrbf}) show that the accuracy improves by using more data.
Interestingly, they show that using all the engineered features leads to overfitting: on the training data both Hodge numbers reach almost $100\%$ accuracy, while the validation accuracy for $h^{2,1}$ does not follow.
For comparison, we also display in \Cref{fig:lc:svrrbf-fav} the learning curve for the favourable dataset: this shows that predicting $h^{1,1}$ accurately works out-of-the-box.
\begin{table}[htp]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
& & \textbf{matrix} & \textbf{num\_cp} & \textbf{eng. feat.} & \textbf{PCA} \\ \midrule
\multirow{2}{*}
{\emph{original}} & $h^{1,1}$ & 70\% & 63\% & 66\% & 72\% \\
& $h^{2,1}$ & 22\% & 10\% & 36\% & 34\% \\ \midrule
\multirow{2}{*}
{\emph{favourable}} & $h^{1,1}$ & 99\% & 100\% & 100\% & 100\% \\
& $h^{2,1}$ & 22\% & 17\% & 32\% & 33\% \\ \bottomrule
\end{tabular}
\caption{Accuracy of the Gaussian SVM on the test split.}
\label{tab:res:svrrbf}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{images/svr_rbf_orig}
\caption{Plots of the residual errors for the SVM with Gaussian kernel.}
\label{fig:res:svrrbf}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/svm_learning_curve_matrix_outliers}
\caption{input: \lstinline!matrix!, $C = 15, \gamma = 0.03, \epsilon = 0.1$}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/svm_learning_curve_all_outliers}
\caption{input: all, $C = 10, \gamma = 0.03, \epsilon = 0.1$}
\end{subfigure}
\caption{Learning curves for the SVM with Gaussian kernel (original dataset), using a single model for both Hodge numbers.}
\label{fig:lc:svrrbf}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/svm_learning_curve_matrix_fav}
\caption{input: \lstinline!matrix!, $C = 20, \gamma = \mathtt{scale}, \epsilon = 0.1$}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/svm_learning_curve_num_cp_fav}
\caption{input: all, $C = 20, \gamma = \mathtt{scale}, \epsilon = 0.1$}
\end{subfigure}
\caption{Learning curves for the SVM with Gaussian kernel (favourable dataset), using a single model for both Hodge numbers.}
\label{fig:lc:svrrbf-fav}
\end{figure}
\subsection{Decision Trees}
\label{sec:ml:trees}
We now consider two algorithms based on decision trees: random forests and gradient boosted trees.
Decision trees are powerful algorithms which implement a simple decision rule (in the style of an \emph{if\dots then\dots else\dots} statement) to classify or assign a value to the predictions.
However, they have a tendency to adapt too well to the training set and to lack robustness against small changes in the training data.
We therefore consider a generalisation of this algorithm used for \emph{ensemble learning}: a technique in ML which uses multiple estimators (of the same or of different kinds) to improve the performance.
We present the results of \emph{random forests} of trees, which increase the bias compared to a single decision tree, and of \emph{gradient boosted} decision trees, which can use smaller trees to decrease the variance and learn better representations of the input data by iterating their decision functions, using information from the previous runs to improve (see \Cref{sec:app:trees} for a more in-depth description).
\subsubsection{Random Forests}
The random forest algorithm is implemented with Scikit's \lstinline!ensemble.RandomForestRegressor!.
\paragraph{Parameters}
Hyperparameter tuning for decision trees can in general be quite challenging.
From the general theory on random forests (\Cref{sec:app:trees}), we can try and look for particular shapes of the trees: this ensemble learning technique usually prefers a small number of fully grown trees.
We performed only 25 iterations of the optimisation process due to the very long time taken to train all the decision trees.
In \Cref{tab:hyp:rndfor}, we show the hyperparameters used for the predictions.
As we can see from \texttt{n\_estimators}, random forests are usually built with a small number of fully grown trees (as specified by \texttt{max\_depth} and \texttt{max\_leaf\_nodes}), though this is not always the case.
In order to avoid overfitting, we also tried to increase the number of samples necessary to split a branch or create a leaf node using \texttt{min\_samples\_leaf} and \texttt{min\_samples\_split} (also introducing a weight on the samples in the leaf nodes, specified by \texttt{min\_weight\_fraction\_leaf}, to balance the trees).
Finally, the \texttt{criterion} chosen by the optimisation determines how the trees measure the impurity of the predictions, using either their mean squared error (\texttt{mse}) or their mean absolute error (\texttt{mae}) (see \Cref{sec:app:trees}).
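A minimal sketch with the hyperparameters of the ``eng.\ feat.'' column of \Cref{tab:hyp:rndfor} for $h^{1,1}$ on the original dataset could look as follows (recent \texttt{scikit-learn} releases rename the \texttt{mae} criterion to \texttt{absolute\_error}):

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# placeholder data standing in for the engineered features and the h11 labels
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 20, size=1000)

model = RandomForestRegressor(n_estimators=155, criterion="absolute_error",
                              max_depth=90, max_leaf_nodes=20,
                              min_samples_leaf=1, min_samples_split=10,
                              min_weight_fraction_leaf=0.0)
model.fit(X, y)
y_pred = np.floor(model.predict(X))   # floor rounding for the original dataset
\end{lstlisting}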
\begin{table}[htp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{\textbf{matrix}} & \multicolumn{2}{c}{\textbf{num\_cp}} & \multicolumn{2}{c}{\textbf{eng. feat.}} & \multicolumn{2}{c}{\textbf{PCA}} \\ \midrule
& & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} \\ \midrule
\multirow{2}{*}{\texttt{criterion}} & $h^{1,1}$ & \texttt{mse} & \texttt{mse} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mse} & \texttt{mae} & \texttt{mae} \\
& $h^{2,1}$ & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} \\ \midrule
\multirow{2}{*}{\texttt{max\_depth}} & $h^{1,1}$ & 100 & 100 & 100 & 30 & 90 & 30 & 30 & 60 \\
& $h^{2,1}$ & 90 & 100 & 90 & 75 & 100 & 100 & 100 & 60 \\ \midrule
\multirow{2}{*}{\texttt{max\_leaf\_nodes}} & $h^{1,1}$ & 100 & 80 & 90 & 20 & 20 & 35 & 90 & 90 \\
& $h^{2,1}$ & 90 & 100 & 100 & 75 & 100 & 60 & 100 & 100 \\ \midrule
\multirow{2}{*}{\texttt{min\_samples\_leaf}} & $h^{1,1}$ & 1 & 1 & 1 & 15 & 1 & 15 & 1 & 1 \\
& $h^{2,1}$ & 3 & 1 & 4 & 70 & 1 & 70 & 30 & 1 \\ \midrule
\multirow{2}{*}{\texttt{min\_samples\_split}} & $h^{1,1}$ & 2 & 30 & 20 & 35 & 10 & 10 & 100 & 100 \\
& $h^{2,1}$ & 30 & 2 & 50 & 45 & 2 & 100 & 2 & 100 \\ \midrule
\multirow{2}{*}{\texttt{min\_weight\_fraction\_leaf}} & $h^{1,1}$ & 0.0 & 0.0 & 0.0 & $1.7 \times 10^{-3}$ & 0.0 & 0.009 & 0.0 & 0.0 \\
& $h^{2,1}$ & $3.0 \times 10^{-4}$ & 0.0 & $1.0 \times 10^{-4}$ & 0.13 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule
\multirow{2}{*}{\texttt{n\_estimators}} & $h^{1,1}$ & 10 & 100 & 45 & 120 & 155 & 300 & 10 & 300 \\
& $h^{2,1}$ & 190 & 10 & 160 & 300 & 10 & 10 & 10 & 300 \\ \bottomrule
\end{tabular}%
}
\caption{Hyperparameter choices of the random forest regression.}
\label{tab:hyp:rndfor}
\end{table}
\paragraph{Results}
In \Cref{tab:res:rndfor}, we summarise the accuracy reached using random forests of decision trees as estimators.
As we already expected, the contribution of the number of projective spaces helps the algorithm to generate better predictions.
In general, it seems that the engineered features alone can already provide a good basis for predictions.
In the case of $h^{2,1}$, the introduction of the principal components of the configuration matrix also increases the prediction capabilities.
As in most other cases, we used the floor function for the predictions on the original dataset and rounding to the nearest integer for the favourable one.
As usual, in \Cref{fig:res:rndfor} we show the histograms of the distribution of the residual errors and the scatter plots of the residuals.
While the distributions of the errors are slightly wider than for the SVM algorithms, the scatter plots of the residuals show a strong heteroscedasticity in the case of the fit using the number of projective spaces: though quite accurate, the model is strongly incomplete.
The inclusion of the other engineered features definitely helps and also leads to better predictions.
Learning curves are displayed in \Cref{fig:lc:rndfor}.
\begin{table}[htp]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
& & \textbf{matrix} & \textbf{num\_cp} & \textbf{eng. feat.} & \textbf{PCA} \\ \midrule
\multirow{2}{*}
{\emph{original}} & $h^{1,1}$ & 55\% & 63\% & 66\% & 64\% \\
& $h^{2,1}$ & 12\% & 9\% & 17\% & 18\% \\ \midrule
\multirow{2}{*}
{\emph{favourable}} & $h^{1,1}$ & 89\% & 99\% & 98\% & 98\% \\
& $h^{2,1}$ & 14\% & 17\% & 22\% & 27\% \\ \bottomrule
\end{tabular}
\caption{Accuracy of the random forests on the test split.}
\label{tab:res:rndfor}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\textwidth]{images/rnd_for_orig}
\caption{Plots of the residual errors for the random forests.}
\label{fig:res:rndfor}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/forest_learning_curve_matrix_outliers}
\caption{input: \lstinline!matrix!, default parameters}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/forest_learning_curve_all_outliers}
\caption{input: all, default parameters}
\end{subfigure}
\caption{Learning curves for the random forest (original dataset), including outliers and using a single model for both Hodge numbers.}
\label{fig:lc:rndfor}
\end{figure}
\subsubsection{Gradient Boosted Trees}
We used the class \lstinline!ensemble.GradientBoostingRegressor! from Scikit in order to implement the gradient boosted trees.
\paragraph{Parameters}
Hyperparameter optimisation has been performed using 25 iterations of the Bayes search algorithm since by comparison the gradient boosting algorithms took the longest learning time.
We show the chosen hyperparameters in \Cref{tab:hyp:grdbst}.
Compared to the random forests, for gradient boosting we also need to introduce the \texttt{learning\_rate} (or \emph{shrinkage parameter}), which controls the gradient descent of the optimisation driven by the choice of the \texttt{loss} hyperparameter (\texttt{ls} is the ordinary least squares loss, \texttt{lad} is the least absolute deviation, and \texttt{huber} is a combination of the previous two losses weighted by the hyperparameter $\alpha$).
We also introduce the \texttt{subsample} hyperparameter, which selects the fraction of samples to be fed to the algorithm at each iteration.
This procedure both has a regularisation effect on the trees, which should not adapt too closely to the training set, and speeds up the training (at least by a small amount).
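A minimal sketch with the hyperparameters of the ``eng.\ feat.'' column of \Cref{tab:hyp:grdbst} for $h^{1,1}$ on the original dataset could look as follows (recent \texttt{scikit-learn} releases rename the \texttt{ls} loss to \texttt{squared\_error}):

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# placeholder data standing in for the engineered features and the h11 labels
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 20, size=1000)

model = GradientBoostingRegressor(loss="squared_error", learning_rate=0.15,
                                  n_estimators=100, subsample=0.1,
                                  criterion="friedman_mse", max_depth=2,
                                  min_samples_split=10,
                                  min_weight_fraction_leaf=0.2)
model.fit(X, y)
y_pred = np.floor(model.predict(X))   # floor rounding for the original dataset
\end{lstlisting}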
\begin{table}[htp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lccccccccc@{}}
\toprule
& & \multicolumn{2}{c}{\textbf{matrix}} & \multicolumn{2}{c}{\textbf{num\_cp}} & \multicolumn{2}{c}{\textbf{eng. feat.}} & \multicolumn{2}{c}{\textbf{PCA}} \\ \midrule
& & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} & \textit{old} & \textit{fav.} \\ \midrule
\multirow{2}{*}{$\alpha$} & $h^{1,1}$ & 0.4 & --- & --- & --- & --- & --- & --- & --- \\
& $h^{2,1}$ & --- & 0.11 & --- & --- & 0.99 & --- & --- & --- \\ \midrule
\multirow{2}{*}{\texttt{criterion}} & $h^{1,1}$ & \texttt{mae} & \texttt{mae} & \texttt{friedman\_mse} & \texttt{mae} & \texttt{friedman\_mse} & \texttt{friedman\_mse} & \texttt{mae} & \texttt{mae} \\
& $h^{2,1}$ & \texttt{mae} & \texttt{mae} & \texttt{friedman\_mse} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} & \texttt{mae} \\ \midrule
\multirow{2}{*}{\texttt{learning\_rate}} & $h^{1,1}$ & 0.3 & 0.04 & 0.6 & 0.03 & 0.15 & 0.5 & 0.04 & 0.03 \\
& $h^{2,1}$ & 0.6 & 0.5 & 0.3 & 0.5 & 0.04 & 0.02 & 0.03 & 0.07 \\ \midrule
\multirow{2}{*}{\texttt{loss}} & $h^{1,1}$ & huber & ls & lad & ls & ls & lad & ls & ls \\
& $h^{2,1}$ & ls & huber & ls & ls & huber & ls & ls & lad \\ \midrule
\multirow{2}{*}{\texttt{max\_depth}} & $h^{1,1}$ & 100 & 100 & 15 & 60 & 2 & 100 & 55 & 2 \\
& $h^{2,1}$ & 85 & 100 & 100 & 30 & 35 & 60 & 15 & 2 \\ \midrule
\multirow{2}{*}{\texttt{min\_samples\_split}} & $h^{1,1}$ & 2 & 30 & 20 & 35 & 10 & 10 & 100 & 100 \\
& $h^{2,1}$ & 30 & 2 & 50 & 45 & 2 & 100 & 2 & 100 \\ \midrule
\multirow{2}{*}{\texttt{min\_weight\_fraction\_leaf}} & $h^{1,1}$ & 0.03 & 0.0 & 0.0 & 0.2 & 0.2 & 0.0 & 0.06 & 0.0 \\
& $h^{2,1}$ & 0.0 & 0.0 & 0.16 & 0.004 & 0.0 & 0.0 & 0.0 & 0.0 \\ \midrule
\multirow{2}{*}{\texttt{n\_estimators}} & $h^{1,1}$ & 90 & 240 & 120 & 220 & 100 & 130 & 180 & 290 \\
& $h^{2,1}$ & 100 & 300 & 10 & 20 & 200 & 300 & 300 & 300 \\ \midrule
\multirow{2}{*}{\texttt{subsample}} & $h^{1,1}$ & 0.8 & 0.8 & 0.9 & 0.6 & 0.1 & 0.1 & 1.0 & 0.9 \\
& $h^{2,1}$ & 0.7 & 1.0 & 0.1 & 0.9 & 0.1 & 0.9 & 0.1 & 0.2 \\ \bottomrule
\end{tabular}%
}
\caption{Hyperparameter choices of the gradient boosted decision trees.}
\label{tab:hyp:grdbst}
\end{table}
\paragraph{Results}
We show the results of gradient boosting in \Cref{tab:res:grdbst}.
As usual, the linear dependence of $h^{1,1}$ on the number of projective spaces is evident, and in this case it also produces the best accuracy for $h^{1,1}$ (using the floor function for the original dataset and rounding to the nearest integer for the favourable dataset).
$h^{2,1}$ is once again strongly helped by the presence of the redundant features.
In \Cref{fig:res:grdbst}, we show the histograms and the scatter plots of the residual errors for the original dataset: also in this case the choice of the floor function is justified, and the addition of the engineered features clearly improves the overall variance of the residuals.
\begin{table}[htp]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
& & \textbf{matrix} & \textbf{num\_cp} & \textbf{eng. feat.} & \textbf{PCA} \\ \midrule
\multirow{2}{*}
{\emph{original}} & $h^{1,1}$ & 50\% & 63\% & 61\% & 58\% \\
& $h^{2,1}$ & 14\% & 9\% & 23\% & 21\% \\ \midrule
\multirow{2}{*}
{\emph{favourable}} & $h^{1,1}$ & 97\% & 100\% & 99\% & 99\% \\
& $h^{2,1}$ & 17\% & 16\% & 35\% & 22\% \\ \bottomrule
\end{tabular}
\caption{Accuracy of the gradient boosting on the test split.}
\label{tab:res:grdbst}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\textwidth]{images/grd_bst_orig}
\caption{Plots of the residual errors for the gradient boosted trees.}
\label{fig:res:grdbst}
\end{figure}
\subsection{Neural Networks}
In this section we approach the problem of predicting the Hodge numbers using artificial neural networks (ANN), which we briefly review in \Cref{sec:app:nn}.
We use Google's \emph{Tensorflow} framework and \emph{Keras}, its high-level API, to implement the architectures and train the networks~\cite{Package:Tensorflow, Package:Keras}.
We explore different architectures and discuss the results.
Differently from the previous algorithms, we do not perform cross-validation scoring but simply retain \SI{10}{\percent} of the total set as a holdout validation set (also referred to as the \emph{development} set), due to the limited computational power available.
Thus, we use \SI{80}{\percent} of the samples for training, \SI{10}{\percent} for evaluation and \SI{10}{\percent} as a test set.
For the same reason, the optimisation of the algorithm has been performed manually.
We always use the Adam optimiser with default learning rate $\num{e-3}$ to perform the gradient descent and a fixed batch size of $32$.
The network is trained for a large number of epochs to avoid missing possible local optima.
In order to avoid overshooting the minimum, we dynamically reduce the learning rate, both through the \emph{Adam} optimiser, which implements learning rate decay, and through the Keras callback \texttt{callbacks.ReduceLROnPlateau}, which scales the learning rate by a given factor when the monitored quantity (e.g.\ the validation loss) stops decreasing: we choose to scale it by $0.3$ when the validation loss does not improve for at least $75$ epochs.
Moreover, we stop training when the validation loss does not improve for $200$ epochs.
We then keep only the weights of the network which gave the best results.
Batch normalisation layers are used with a momentum of $0.99$.
Training and evaluation were performed on an \texttt{NVidia GeForce 940MX} laptop GPU with 2~GB of RAM.
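A minimal sketch of this training setup in Keras could look as follows (the toy model and data, as well as the mean squared error loss, are placeholders for illustration):

\begin{lstlisting}[language=Python]
import numpy as np
from tensorflow import keras

# placeholder data and model; replace with the CICY inputs and the networks below
X = np.random.rand(7890, 180)
y = np.random.randint(0, 20, size=7890).astype("float32")
model = keras.Sequential([keras.layers.Dense(64, activation="relu"),
                          keras.layers.Dense(1, activation="relu")])

callbacks = [
    # scale the learning rate by 0.3 after 75 epochs without improvement
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.3, patience=75),
    # stop after 200 epochs without improvement, keeping the best weights
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=200,
                                  restore_best_weights=True),
]
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X, y, validation_split=0.1, batch_size=32, epochs=2000,
          callbacks=callbacks)
\end{lstlisting}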
\subsubsection{Fully Connected Network}
First, we reproduce the analysis from~\cite{Bull:2018:MachineLearningCICY} for the prediction of $h^{1,1}$.
\paragraph{Model}
The neural network presented in~\cite{Bull:2018:MachineLearningCICY} for the regression task contains $5$ hidden layers with $876$, $461$, $437$, $929$ and $404$ units (\Cref{fig:nn:dense}).
All layers (including the output layer) are followed by a ReLU activation and by a dropout layer with a rate of $\num{0.2072}$.
This network contains roughly $\num{1.58e6}$ parameters.
The other hyperparameters (like the optimiser, batch size, number of epochs, regularisation, etc.) are not mentioned.
In order to reproduce the results, we have filled the gaps as follows (a minimal sketch of the resulting model follows the list):
\begin{itemize}
\item Adam optimiser with batch size of $32$;
\item a maximal number of epochs of $2000$, without early stopping;\footnote{It took around 20 minutes to train the model.}
\item we implement learning rate reduction by $0.3$ after $75$ epochs without improvement of the validation loss;
\item no $\ell_1$ or $\ell_2$ regularisation;
\item a batch normalization layer~\cite{Ioffe:2015:BatchNormalizationAccelerating} after each fully connected layer.
\end{itemize}
\paragraph{Results}
We have first reproduced the results from~\cite{Bull:2018:MachineLearningCICY}, which are summarized in \Cref{tab:res:neuralnet-bull}.
The training process was very quick and the loss function is reported in \Cref{fig:nn:bull_et_al_loss}.
We obtain an accuracy of $77\%$ both on the development and the test set of the original dataset with $80\%$ of training data (see \Cref{tab:res:ann}).
Using the same network, we also achieved $97\%$ of accuracy in the favourable dataset.
\begin{figure}[htp]
\centering
\begin{minipage}[t]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{images/fc}
\caption{Architecture of the fully connected network to predict $h^{1,1}$.
For simplicity we do not draw the dropout and batch normalisation layers present after every FC layer.}
\label{fig:nn:dense}
\end{minipage}
\hfill
\begin{minipage}[t]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/loss-lr_fc_orig}
\caption{Loss function of the FC network in the original dataset.}
\label{fig:nn:bull_et_al_loss}
\end{minipage}
\end{figure}
\begin{table}[htb]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
&
\multicolumn{5}{c}{\textbf{training data}}
\\
&
$\num{10}\%$ &
$\num{30}\%$ &
$\num{50}\%$ &
$\num{70}\%$ &
$\num{90}\%$
\\
\midrule
regression &
$\num{58}\%$ &
$\num{68}\%$ &
$\num{72}\%$ &
$\num{75}\%$ &
$\num{75}\%$
\\
classification &
$\num{68}\%$ &
$\num{78}\%$ &
$\num{82}\%$ &
$\num{85}\%$ &
$\num{88}\%$
\\
\bottomrule
\end{tabular}
\caption{Accuracy (approximate) for $h^{1,1}$ obtained in \cite[Figure~1]{Bull:2018:MachineLearningCICY}.}
\label{tab:res:neuralnet-bull}
\end{table}
\subsubsection{Convolutional Network}
We then present a new purely convolutional network to predict $h^{1,1}$ and $h^{2,1}$, separately or together.
The advantage of such networks is that they require a smaller number of parameters and are insensitive to the size of the inputs.
The latter point can be helpful for working without padding the matrices (of the same or of different representations), but the use of a flatten layer removes this benefit.
\paragraph{Model}
The neural network has $4$ convolutional layers.
They are connected to the output layer through an intermediate flatten layer.
After each convolutional layer, we use the ReLU activation function and a batch normalisation layer (with momentum 0.99).
Convolutional layers use the padding option \lstinline!same! and a kernel of size $(5, 5)$ to be able to extract more meaningful representations of the input, treating the configuration matrix somewhat similarly to an object segmentation task~\cite{Peng:2017:LargeKernelMatters}.
The output layer is also followed by a ReLU activation in order to force the prediction to be a positive number.
We use a dropout layer only after the convolutional network (before the flatten layer), but we introduced a combination of $\ell_2$ and $\ell_1$ regularisation to reduce the variance.
The dropout rate is 0.2 in the original dataset and 0.4 for the favourable dataset, while $\ell_1$ and $\ell_2$ regularisation are set to $10^{-5}$.
We train the model using the \emph{Adam} optimiser with a starting learning rate of $10^{-3}$ and a mini-batch size of $32$.
The architecture is similar in style to the old \emph{LeNet}, presented in 1998 by Y.\ LeCun et al.\ for handwritten digit recognition.
In our implementation, however, we do not include the pooling operations and we swap the usual order of batch normalisation and activation function, applying the ReLU activation first.
In \Cref{fig:nn:lenet}, we show the model architecture in the case of the original dataset and of predicting $h^{1,1}$ alone.
The convolution layers have $180$, $100$, $40$ and $20$ units each.
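A minimal Keras sketch of this convolutional architecture could look as follows (applying the $\ell_1$/$\ell_2$ regularisation to the convolution kernels is our assumption):

\begin{lstlisting}[language=Python]
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_cnn(input_shape=(12, 15, 1), dropout=0.2):
    """Convolutional network for h11 (original dataset): 4 convolutional layers
    with 180, 100, 40 and 20 filters, 5x5 kernels and 'same' padding, each
    followed by ReLU and then batch normalisation."""
    reg = regularizers.l1_l2(l1=1e-5, l2=1e-5)
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for filters in (180, 100, 40, 20):
        model.add(layers.Conv2D(filters, kernel_size=5, padding="same",
                                activation="relu", kernel_regularizer=reg))
        model.add(layers.BatchNormalization(momentum=0.99))
    model.add(layers.Dropout(dropout))
    model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="relu"))   # positive Hodge number output
    return model
\end{lstlisting}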
\begin{figure}[htp]
\centering
\includegraphics[width=0.75\textwidth]{images/ccnn}
\caption{%
Pure convolutional neural network for predicting $h^{1,1}$.
It is made of $4$ modules composed by convolutional layer, ReLU activation, batch normalisation (in this order), followed by a dropout layer, a flatten layer and the output layer (in this order).
}
\label{fig:nn:lenet}
\end{figure}
\paragraph{Results}
With this setup, we were able to achieve an accuracy of 94\% on both the development and the test sets for the ``old'' database and 99\% for the favourable dataset in both validation and test sets (results are briefly summarised in \Cref{tab:res:ann}).
We thus improved the results of the densely connected network and proved that convolutional networks can be valuable assets when dealing with the extraction of a good representation of the input data: not only are CNNs very good at recognising patterns and rotationally invariant objects inside pictures or general matrices of data, but deep architectures are also capable of transforming the input using non linear transformations~\cite{Mallat:2016:UnderstandingDeepConvolutional} to create new patterns which can then be used for predictions.
Even though the convolution operation is very time consuming, another advantage of CNNs is the extremely reduced number of parameters with respect to FC networks.\footnotemark{}
\footnotetext{%
It took around 4 hours of training (and no optimisation) for each Hodge number in each dataset.
}%
The architectures we used were in fact made of approximately $\num{5.8e5}$ parameters: less than half the number of parameters of the FC network.
Ultimately, this leads to a smaller number of training epochs necessary to achieve good predictions (see \Cref{fig:cnn:class-ccnn}).
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/loss-lr_ccnn_h11_orig}
\caption{Loss function of $h^{1,1}$.}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/loss-lr_ccnn_h21_orig}
\caption{Loss function of $h^{2,1}$.}
\end{subfigure}
\caption{
Loss function of the networks for the prediction of $h^{1,1}$ and $h^{2,1}$.
We can see that the validation loss flattens out while the training loss keeps decreasing: we took care of the overfit by using the weights of the network when the validation loss reached its minimum.
The use of mini-batch gradient descent also completely spoils the monotonicity of the loss functions which can therefore increase moving from one epoch to the other, while keeping the descending trend for most of its evolution.
}
\label{fig:cnn:class-ccnn}
\end{figure}
Using this classic setup, we tried different architectures.
The network for the original dataset seems to work best in the presence of larger kernels, dropping by roughly $5\%$ in accuracy when a more ``classical'' $3 \times 3$ kernel is used.
We also tried setting the padding to \lstinline!valid!, reducing the input from a $12 \times 15$ matrix to a $1 \times 1$ feature map over the course of $5$ layers with $180$, $100$, $75$, $40$ and $20$ filters.
The advantage is the reduction of the number of parameters (namely $\sim \num{4.9e5}$), mainly due to the smaller FC network at the end, but the accuracy dropped to $87\%$.
The favourable dataset seems instead to be more independent of the specific architecture, retaining accuracy also with smaller kernels.
The analysis for $h^{2,1}$ follows the same prescriptions.
For both the original and favourable dataset, we opted for 4 convolutional layers with 250, 150, 100 and 50 filters and no FC network for a total amount of $\num{2.1e6}$ parameters.
In this scenario we were able to achieve $36\%$ of accuracy in the development set and $40\%$ on the test set for $h^{2,1}$ in the ``old'' dataset and $31\%$ in both development and test sets in the favourable set (see \Cref{tab:res:ann}).
The learning curves for both Hodge numbers are given in \Cref{fig:lc:class-ccnn}.
This model uses the same architecture as the one for predicting $h^{1,1}$ only, which explains why it is less accurate as it needs to also adapt to compute $h^{2,1}$ -- a difficult task, as we have seen (see for example \Cref{fig:lc:inception}).
\begin{figure}[htp]
\centering
\includegraphics[width=0.6\textwidth]{images/lc/conv_nn_learning_curve}
\caption{%
Learning curves for the classic convolutional neural network (original dataset), using a single model for both Hodge numbers.
}
\label{fig:lc:class-ccnn}
\end{figure}
\subsubsection{Inception-like Neural Network}
\label{sec:ml:nn:inception}
In the effort to find a better architecture, we took inspiration from Google's winning CNN in the annual \href{https://image-net.org/challenges/LSVRC/}{\emph{ImageNet challenge}} in 2014~\cite{Szegedy:2014:GoingDeeperConvolutions, Szegedy:2015:RethinkingInceptionArchitecture, Szegedy:2016:Inceptionv4InceptionResNetImpact}.
The architecture presented there uses \emph{inception} modules in which separate $3 \times 3$ and $5 \times 5$ convolutions are performed side by side (together with \emph{max pooling} operations) before recombining the outputs.
The modules are then repeated until the output layer is reached.
This has two evident advantages: users can avoid taking a completely arbitrary decision on the type of convolution to use, since the network takes care of it by tuning the weights, and the number of parameters is extremely restricted, as the network can learn complicated functions using fewer layers.
As a consequence, the architecture of such models can be made very deep while keeping the number of parameters contained, making it possible to learn very involved representations of the input and to produce accurate predictions.
Moreover, while the training phase might become very long due to the complicated convolutional operations, the small number of parameters is such that predictions can be generated in a very small amount of time, making inception-like models extremely appropriate whenever quick predictions are necessary.
Another advantage of the architecture is the presence of different kernel sizes inside each module: the network automatically learns features at different scales and different positions, thus leveraging the advantages of a deep architecture with the ability to learn different representations at the same time and compare them.
\paragraph{Model}
In \Cref{fig:nn:inception}, we show a schematic of our implementation.
Differently from the image classification task, we drop the pooling operations and implement two side-by-side convolutions: one over the rows ($12 \times 1$ kernel for the original dataset, $15 \times 1$ for the favourable one) and one over the columns ($1 \times 15$ and $1 \times 18$, respectively).\footnotemark{}
\footnotetext{%
Pooling operations are used to shrink the size of the input.
Similar to convolutions, they use a window of a given size to scan the input and select particular values inside.
For instance, we could select the average value inside the small portion selected, performing an \emph{average pooling} operation, or the maximum value, a \emph{max pooling} operation.
This usually improves image classification and object detection tasks as it can be used to sharpen edges and borders.
}%
We use \texttt{same} as padding option.
The output of the convolutions are then concatenated in the filter dimensions before repeating the ``inception'' module.
The results from the last module are directly connected to the output layer through a flatten layer.
In both datasets, we use batch normalisation layers (with momentum $0.99$) after each concatenation layer and a dropout layer (with rate $0.2$) before the FC network.\footnotemark{}
\footnotetext{%
The position of the batch normalisation is extremely important as the parameters computed by such layer directly influence the following batch.
We however opted to wait for the scan over rows and columns to finish before normalising the outcome to avoid biasing the resulting activation function.
}%
For both $h^{1,1}$ and $h^{2,1}$ (in both datasets), we used 3 modules made of 32, 64 and 32 filters for the first Hodge number, and of 128, 128 and 64 filters for the second.
We also included $\ell_1$ and $\ell_2$ regularisation of magnitude $10^{-4}$ in all cases.
The number of parameters was thus restricted to $\num{2.3e5}$ parameters for $h^{1,1}$ in the original dataset and $\num{2.9e5}$ in the favourable set, and $\num{1.1e6}$ parameters for $h^{2,1}$ in the original dataset and $\num{1.4e6}$ in the favourable dataset.
In all cases, the number of parameters has decreased by a significant amount: in the case of $h^{1,1}$ they are roughly $\frac{1}{3}$ of the parameters used in the classical CNN and around $\frac{1}{6}$ of those used in the FC network.
For training we used the \emph{Adam} gradient descent with an initial learning rate of $10^{-3}$ and a batch size of $32$.
The callbacks helped to contain the training time (without optimisation) under 5 hours for each Hodge number in each dataset.
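A minimal Keras sketch of one such network, for $h^{1,1}$ on the original dataset, could look as follows (the placement of the regularisation on the convolution kernels is our assumption):

\begin{lstlisting}[language=Python]
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_inception_network(input_shape=(12, 15, 1), filters=(32, 64, 32)):
    """'Inception' network: in each module, parallel convolutions over full rows
    (12x1) and full columns (1x15) are concatenated along the filter axis and
    followed by batch normalisation."""
    reg = regularizers.l1_l2(l1=1e-4, l2=1e-4)
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for f in filters:
        rows = layers.Conv2D(f, kernel_size=(input_shape[0], 1), padding="same",
                             activation="relu", kernel_regularizer=reg)(x)
        cols = layers.Conv2D(f, kernel_size=(1, input_shape[1]), padding="same",
                             activation="relu", kernel_regularizer=reg)(x)
        x = layers.Concatenate(axis=-1)([rows, cols])
        x = layers.BatchNormalization(momentum=0.99)(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="relu")(x)
    return keras.Model(inputs, outputs)
\end{lstlisting}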
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\textwidth]{images/icnn}
\caption{%
In each concatenation module (here shown for the ``old'' dataset) we operate with separate convolution operations over rows and columns, then concatenate the results. The overall architecture is composed of 3 ``inception'' modules made by two separate convolutions, a concatenation layer and a batch normalisation layer (strictly in this order), followed by a dropout layer, a flatten layer and the output layer with ReLU activation (in this order).}
\label{fig:nn:inception}
\end{figure}
\paragraph{Results}
With these architectures, we were able to achieve more than \SI{99}{\percent} of accuracy for $h^{1,1}$ in the test set (same for the development set) and \SI{50}{\percent} of accuracy for $h^{2,1}$ (a slightly smaller value for the development set).
We report the results in \Cref{tab:res:ann}.
We therefore increased the accuracy for both Hodge numbers (especially $h^{2,1}$) compared to what a simple sequential network can achieve, while at the same time significantly reducing the number of parameters of the network.\footnotemark{}
This increases the robustness of the method and its generalisation properties.
\footnotetext{%
In an attempt to improve the results for $h^{2,1}$ even further, we also considered first predicting $\ln( 1 + h^{2,1} )$ and then transforming it back. However, the predictions dropped by almost $10\%$ in accuracy even with the ``inception'' network: the network seems to approximate the results quite well (neither better nor worse than for $h^{2,1}$ directly), but the subsequent exponentiation drives apart predictions and true values.
Choosing a correct rounding strategy then becomes almost impossible.
}
In \Cref{fig:nn:inception_errors}, we show the distribution of the residuals and their scatter plot, showing that the distribution of the errors does not present pathological behaviour and the variance of the residuals is well distributed over the predictions.
In fact, this neural network is much more powerful than the previous networks we considered, as can be seen by studying the learning curves (\Cref{fig:lc:inception}).
When predicting only $h^{1,1}$, it surpasses $97\%$ accuracy using only $30\%$ of the data for training.
While it seems that the predictions suffer when using a single network for both Hodge numbers, this remains much better than any other algorithm.
It may seem counter-intuitive that convolutions work well on this data, since the data are not translation or rotation invariant, but only permutation invariant.
However, convolution alone is not sufficient to ensure invariance under these transformations: it must be supplemented with pooling operations~\cite{Goodfellow:2016:DeepLearning}, which we do not use.
Moreover, convolution layers do more than just take translation properties into account: they allow highly complicated combinations of the inputs and share weights among components, which makes it possible to find subtler patterns than with standard fully connected layers.
This network is studied in more detail in~\cite{Erbin:2020:InceptionCICY}.
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/loss-lr_icnn_h11_orig}
\caption{Loss of $h^{1,1}$.}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/loss-lr_icnn_h21_orig}
\caption{Loss of $h^{2,1}$.}
\end{subfigure}
\caption{The loss functions of ``inception'' network for $h^{1,1}$ and $h^{2,1}$ in the original dataset show that the number of epochs required for training is definitely larger than for simpler architectures, despite the reduced number of parameters.}
\label{fig:cnn:inception-loss}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{images/errors_icnn_h11_orig}
\caption{Residuals of $h^{1,1}$.}
\end{subfigure}
\quad
\begin{subfigure}[c]{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{images/errors_icnn_h21_orig}
\caption{Residuals of $h^{2,1}$.}
\end{subfigure}
\caption{Histograms of the residual errors and residual plots of the Inception network.}
\label{fig:nn:inception_errors}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/inc_nn_learning_curve}
\caption{predicting both $h^{1,1}$ and $h^{2,1}$}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/lc/inc_nn_learning_curve_h11}
\caption{predicting $h^{1,1}$ only}
\end{subfigure}
\caption{Learning curves for the Inception neural network (original dataset).}
\label{fig:lc:inception}
\end{figure}
\begin{table}[htb]
\centering
\begin{tabular}{@{}ccccccc@{}}
\toprule
& \multicolumn{2}{c}{\textbf{DenseNet}}
& \multicolumn{2}{c}{\textbf{classic ConvNet}}
& \multicolumn{2}{c}{\textbf{inception ConvNet}}
\\
& \emph{old} & \emph{fav.}
& \emph{old} & \emph{fav.}
& \emph{old} & \emph{fav.}
\\
\midrule
$h^{1,1}$
& 77\% & 97\%
& 94\% & 99\%
& 99\% & 99\%
\\
$h^{2,1}$
& - & -
& 36\% & 31\%
& 50\% & 48\%
\\
\bottomrule
\end{tabular}
\caption{Accuracy using \emph{rint} rounding on the predictions of the ANNs on $h^{1,1}$ and $h^{2,1}$ on the test set.}
\label{tab:res:ann}
\end{table}
\subsubsection{Boosting the Inception-like Model}
To further improve the accuracy for $h^{2,1}$, we have tried to modify the network by adding engineered features as auxiliary inputs.
This can be done by adding inputs to the inception neural network and merging the different branches at different stages.
There are two possibilities to train such a network: 1) train the whole network directly, or 2) train the inception network alone, then freeze its weights and connect it to the additional inputs, training only the new layers.
We found that the architectures we tried did not improve the accuracy, but we briefly describe our attempts for completeness.
We focused in particular on the number of projective spaces, the vector of dimensions of the projective spaces and the vector of dimensions of the principal cohomology group, and on predicting $h^{1,1}$ and $h^{2,1}$ at the same time.
The core of the neural network is the Inception network described in \Cref{sec:ml:nn:inception}.
Then, the engineered features are processed using fully connected layers and merged with the predictions from the Inception branch using a concatenation layer.
Obviously, the output layers for $h^{1,1}$ and $h^{2,1}$ can be located on different branches, which allows for a different processing of the features.
As mentioned earlier, a possible approach is to first train the Inception branch alone, before freezing its weights and connecting it to the rest of the network.
This can prevent spoiling the already good predictions and speed up the new learning process.
This is a common technique called \emph{transfer learning}: we can use a model previously trained on a slightly different task and use its weights as part of the new architecture.
Our trials involved shallow fully connected layers ($1$--$3$ layers with $10$ to $150$ units) applied to the engineered features and after the concatenation layer.
Since the EDA analysis (\Cref{sec:data:eda}) shows a correlation between both Hodge numbers, we tried architectures where the result for $h^{1,1}$ is used to predict $h^{2,1}$.
For the training phase, we also tried an alternative to the canonical choice of optimising the sum of the losses: we first train the network and stop the process when the validation loss for $h^{1,1}$ no longer improves, load back the best weights and save the results, then keep training and stop when the loss for $h^{2,1}$ reaches a plateau.
With this setup we were able to slightly improve the predictions of $h^{1,1}$ in the original dataset, reaching almost \SI{100}{\percent} of accuracy in the predictions, while the favourable dataset stayed at around \SI{99}{\percent} of accuracy.
The only few missed predictions (4 manifolds out of 786 in the test set) are in very peculiar regions of the distribution of the Hodge number.
For $h^{2,1}$ no improvement has been noticed.
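For illustration, a hypothetical sketch of such a two-branch architecture (reusing the \texttt{build\_inception\_network} sketch above, with arbitrary layer sizes within the ranges quoted) could look as follows:

\begin{lstlisting}[language=Python]
from tensorflow import keras
from tensorflow.keras import layers

inception = build_inception_network()   # pre-trained as in the previous section
inception.trainable = False             # freeze the weights (transfer learning)

matrix_in = keras.Input(shape=(12, 15, 1))
feat_in = keras.Input(shape=(3,))       # placeholder size for the engineered features

feat = layers.Dense(10, activation="relu")(feat_in)
x = layers.Concatenate()([inception(matrix_in), feat])
x = layers.Dense(10, activation="relu")(x)
h11 = layers.Dense(1, activation="relu", name="h11")(x)
h21 = layers.Dense(1, activation="relu", name="h21")(x)

model = keras.Model([matrix_in, feat_in], [h11, h21])
model.compile(optimizer="adam", loss="mse")   # sum of the two losses
\end{lstlisting}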
\subsection{Ensemble Learning: Stacking}
We conclude the ML analysis by describing a method very popular in ML competitions~\cite{Coursera:HowWinData}: ensembling.
This consists in taking several ML algorithms and combining the predictions of each individual model to obtain more precise predictions.
Using this technique, it is possible to decrease the variance and improve generalisation by compensating the weaknesses of some algorithms with the strengths of others.
Indeed, the idea is to put together algorithms which perform best in different zones of the label distribution and to combine them to build an algorithm better than any individual component.
The simplest such algorithm is \emph{stacking} whose principle is summarised in \Cref{fig:stack:def}.
First, the original training set is split into two parts (not necessarily of even size).
Second, a certain number of \emph{first-level learners} is trained on the first split and used to generate predictions on the second split.
Third, a ``meta-learner'' is trained on the second split to combine the predictions of the first-level learners.
Predictions for the test set are obtained by applying both levels of models one after the other.
We have selected the following models for the first level: linear regression, SVR with the Gaussian kernel, the random forest and the ``inception'' neural network.
The meta-learner is a simple linear regression with $\ell_1$ regularisation (Lasso).
The motivation for this choice of first-level algorithms is that stacking works best with a group of algorithms which operate in the most diverse ways.
Also in this case, we use a cross-validation strategy with 5 splits for each level of the training: the total training set (\SI{90}{\percent} of the samples) is divided into two halves, each containing \SI{45}{\percent} of the total samples, and 5 splits are used to grade the algorithms (thus using \SI{9}{\percent} of the samples of each split for cross-validation at each iteration); we also use Bayes optimisation for all algorithms but the ANN (50 iterations for elastic net, SVR and lasso, and 25 for the random forests).
The ANN was trained using a holdout validation set containing the same number of samples as each cross-validation fold, namely \SI{9}{\percent} of the total set.
The accuracy is then computed as usual, using \texttt{numpy.rint} for the SVR, the neural network, the meta-learner and, in general, for $h^{1,1}$ in the original dataset, and \texttt{numpy.floor} in the other cases.
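A minimal sketch of this two-level procedure (with placeholder splits, default hyperparameters, and without the neural network for brevity) could look as follows:

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.svm import SVR

def stack_predict(first_level, meta, X_a, y_a, X_b, y_b, X_test):
    """Fit the first-level learners on split A, build a new training set from
    their predictions on split B, then fit the meta-learner on it."""
    preds_b, preds_test = [], []
    for model in first_level:
        model.fit(X_a, y_a)
        preds_b.append(model.predict(X_b))
        preds_test.append(model.predict(X_test))
    meta.fit(np.column_stack(preds_b), y_b)       # "1st-level labels" as features
    return meta.predict(np.column_stack(preds_test))

# placeholder splits standing in for the two training folds and the test set
rng = np.random.default_rng(0)
X_a, X_b, X_test = (rng.normal(size=(500, 10)) for _ in range(3))
y_a, y_b = (rng.integers(0, 20, size=500) for _ in range(2))

learners = [ElasticNet(), SVR(kernel="rbf"), RandomForestRegressor()]
y_pred = np.rint(stack_predict(learners, Lasso(), X_a, y_a, X_b, y_b, X_test))
\end{lstlisting}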
In \Cref{tab:res:stack}, we show the accuracy of the ensemble learning.
We notice that accuracy improves slightly only for $h^{2,1}$ (original dataset) compared to the first-level learners.
However, this is much lower than what has been achieved in \Cref{sec:ml:nn:inception}.
The reason is that the learning suffers from the reduced size of the training set.
Another reason is that the different algorithms may perform similarly well in the same regions.
\begin{figure}[htp]
\centering
\includegraphics[width=0.65\textwidth]{images/stacking}
\caption{Stacking ensemble with two-level learning.
The original training set is split into two training folds and the first-level learners are trained on the first.
The trained models are then used to generate a new training set (the ``1st level labels'') using the second split as input features.
The same also applies to the test set.
Finally, a ``meta-learner'' uses the newly generated training set to produce the final predictions on the test set.}
\label{fig:stack:def}
\end{figure}
\begin{table}[htb]
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
&
& \multicolumn{2}{c}{$h^{1,1}$}
& \multicolumn{2}{c}{$h^{2,1}$}
\\
&
& \emph{old} & \emph{fav.}
& \emph{old} & \emph{fav.}
\\
\midrule
\multirow{4}{*}{\emph{1st level}}
& EN
& 65\% & 100\%
& 19\% & 19\%
\\
& SVR
& 70\% & 100\%
& 30\% & 34\%
\\
& RF
& 61\% & 98\%
& 18\% & 24\%
\\
& ANN
& 98\% & 98\%
& 33\% & 30\%
\\
\midrule
\multirow{1}{*}{\emph{2nd level}}
& Lasso
& 98\% & 98\%
& 36\% & 33\%
\\
\bottomrule
\end{tabular}
\caption{Accuracy of the first and second level predictions of the stacking ensemble for elastic net regression (EN), support vector with \texttt{rbf} kernel (SVR), random forest (RF) and the artificial neural network (ANN) as first level learners and lasso regression as meta learner.}
\label{tab:res:stack}
\end{table}
\section{Data Analysis}
\label{sec:data}
In this section, we introduce \emph{Calabi--Yau} (CY) manifolds before describing the two datasets of CICY manifolds (\Cref{sec:data:datasets}).
Since the CICY have been completely classified, they provide a good opportunity for testing ideas from ML in a controlled setting.
In order to select the most appropriate learning algorithm, we perform a preliminary \emph{exploratory data analysis} (EDA) in \Cref{sec:data:eda}.
\subsection{Calabi--Yau Manifolds}
\label{sec:data:cy}
A CY $n$-fold is an $n$-dimensional complex manifold $X$ with $\group{SU}(n)$ holonomy (it thus has $2n$ real dimensions).
An equivalent definition is the vanishing of its first Chern class.
A standard reference for the physicist is~\cite{Hubsch:1992:CalabiYauManifoldsBestiary} (see also~\cite{Anderson:2018:TASILecturesGeometric, He:2020:CalabiYauSpacesString} for useful references).
The most relevant case for superstring compactifications are CY $3$-folds.
Indeed, superstrings are well-defined only in $10$ dimensions: in order to recover a $4$-dimensional theory, it is necessary to compactify $6$ dimensions~\cite{Hubsch:1992:CalabiYauManifoldsBestiary}.
Importantly, the compactification on a CY leads to the breaking of a large part of the supersymmetry, which is phenomenologically more realistic.
Calabi--Yau manifolds are characterized by a certain number of topological properties, the most salient being the Hodge numbers $h^{1,1}$ and $h^{2,1}$, counting respectively the Kähler and complex structure deformations, and the Euler characteristic\footnotemark{}
\footnotetext{%
In full generality, the Hodge numbers $h^{p,q}$ count the numbers of harmonic $(p, q)$-forms.
}%
\begin{equation}
\chi = 2 (h^{1,1} - h^{2,1}).
\label{eq:cy:euler}
\end{equation}
Interestingly, the topological properties of the manifold directly translate into features of the $4$-dimensional effective action (in particular, the number of fields, the representations and the gauge symmetry)~\cite{Hubsch:1992:CalabiYauManifoldsBestiary, Becker:2006:StringTheoryMTheory}.\footnotemark{}
\footnotetext{%
Another reason for sticking to topological properties is that there is no CY for which the metric is known.
Hence, it is not possible to perform explicitly the Kaluza--Klein reduction in order to derive the $4$-dimensional theory.
}%
In particular, the Hodge numbers count the number of chiral multiplets (in heterotic compactifications) and the number of hyper- and vector multiplets (in type II compactifications): these are related to the number of fermion generations ($3$ in the Standard Model) and thus give an important measure of the distance to the Standard Model.
The simplest CYs are constructed by considering the complete intersection of hypersurfaces in a product $\mc A$ of projective spaces $\P^{n_i}$ (called the ambient space)~\cite{Green:1987:CalabiYauManifoldsComplete, Green:1987:PolynomialDeformationsCohomology, Candelas:1988:CompleteIntersectionCalabiYau, Green:1989:AllHodgeNumbers, Anderson:2017:FibrationsCICYThreefolds, Anderson:2018:TASILecturesGeometric}:
\begin{equation}
\mc A = \P^{n_1} \times \cdots \times \P^{n_m}.
\end{equation}
Such hypersurfaces are defined by homogeneous polynomial equations: a Calabi--Yau $X$ is described by the solution to the system of equations, i.e.\ by the intersection of all these hypersurfaces (the intersection being ``complete'' means that the hypersurface is non-degenerate).
To gain some intuition, consider the case of a single projective space $\P^n$ with (homogeneous) coordinates $Z^I$, $I = 0, \ldots, n$.
In this case, a codimension $1$ subspace is obtained by imposing a single homogeneous polynomial equation of degree $a$ on the coordinates
\begin{equation}
\begin{gathered}
p_a(Z^0, \ldots, Z^n)
= P_{I_1 \cdots I_a} Z^{I_1} \cdots Z^{I_a}
= 0,
\\
p_a(\lambda Z^0, \ldots, \lambda Z^n) = \lambda^a \, p_a(Z^0, \ldots, Z^n).
\end{gathered}
\end{equation}
Each choice of the polynomial coefficients $P_{I_1 \cdots I_a}$ leads to a different manifold.
However, it can be shown that the manifolds are (generically) topologically equivalent.
Since we are interested only in classifying the CY as topological manifolds and not as complex manifolds, the information about $P_{I_1 \cdots I_a}$ can be forgotten and it is sufficient to keep track only of the dimension $n$ of the projective space and of the degree $a$ of the equation.
The resulting hypersurface is denoted equivalently as $[\P^n \mid a] = [n \mid a]$.
Finally, $[\P^n \mid a]$ is $3$-dimensional if $n = 4$ (the equation reduces the dimension by one), and it is a CY (the ``quintic'') if $a = n + 1 = 5$ (this is required for the vanishing of its first Chern class).
The simplest representative of this class is Fermat's quintic, defined by the equation
\begin{equation}
\sum_{I=0}^{4} (Z^I)^5 = 0.
\end{equation}
This construction can be generalized to include $m$ projective spaces and $k$ equations, which can mix the coordinates of the different spaces.
A CICY $3$-fold $X$ as a topological manifold is completely specified by a \emph{configuration matrix} denoted by the same symbol as the manifold:
\begin{equation}
X =
\left[
\begin{array}{c|ccc}
\mathbb P^{n_1} & a_1^1 & \cdots & a_k^1
\\
\vdots & \vdots & \ddots & \vdots
\\
\mathbb P^{n_m} & a_1^m & \cdots & a_k^m
\end{array}
\right]
\end{equation}
where the coefficients $a^r_\alpha$ are non-negative integers and satisfy the following constraints
\begin{equation}
\label{eq:cicy-constraints}
\dim_\C X = \sum_{r=1}^{m} n_r - k = 3,
\qquad
\forall r: \quad
n_r + 1 = \sum_{\alpha=1}^k a_\alpha^r.
\end{equation}
The first relation states that the dimension of the ambient space minus the number of equations equals the dimension of the CY $3$-fold.
The second set of constraints arises from the vanishing of the first Chern class of $X$; it implies that the $n_r$ can be recovered from the matrix elements.
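The constraints \eqref{eq:cicy-constraints} are straightforward to check algorithmically. As a sketch (the function below is our own illustration, assuming \texttt{numpy}):
\begin{verbatim}
# Check the CICY 3-fold constraints for a configuration matrix.
import numpy as np

def is_cicy_3fold(n, a):
    # n: list of projective space dimensions (n_1, ..., n_m)
    # a: m x k matrix of degrees a_alpha^r
    n, a = np.asarray(n), np.asarray(a)
    m, k = a.shape
    dim_ok = n.sum() - k == 3               # complex dimension 3
    c1_ok = np.all(a.sum(axis=1) == n + 1)  # vanishing first Chern class
    return dim_ok and c1_ok

print(is_cicy_3fold([4], [[5]]))  # the quintic: True
\end{verbatim}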
In this case also, two manifolds described by the same configuration matrix but different polynomials are equivalent as real manifolds (they are diffeomorphic) -- and thus as topological manifolds -- but they are different as complex manifolds.
Hence, it makes sense to write only the configuration matrix.
A given topological manifold is not described by a unique configuration matrix.
First, any permutation of the rows and columns leaves the intersection unchanged (it amounts to relabelling the projective spaces and equations).
Secondly, two intersections can define the same manifold.
The ambiguity in the line and column permutations is often fixed by imposing some ordering of the coefficients.
Moreover, in most cases, there is an optimal representation of the manifold $X$, called favourable~\cite{Anderson:2017:FibrationsCICYThreefolds}: in such a form, topological properties of $X$ can be more easily derived from the ambient space $\mc A$.
\subsection{Datasets}
\label{sec:data:datasets}
Simple arguments~\cite{Green:1987:CalabiYauManifoldsComplete, Candelas:1988:CompleteIntersectionCalabiYau, Lutken:1988:RecentProgressCalabiYauology} show that the number of CICYs is necessarily finite due to the constraints \eqref{eq:cicy-constraints} together with identities between complete intersection manifolds.
The classification of the CICY $3$-folds has been tackled in~\cite{Candelas:1988:CompleteIntersectionCalabiYau}, which established a dataset of $7890$ CICYs.\footnotemark{}
\footnotetext{%
However, there are redundancies in this set~\cite{Candelas:1988:CompleteIntersectionCalabiYau, Anderson:2008:MonadBundlesHeterotic, Anderson:2017:FibrationsCICYThreefolds}; this fact will be ignored in this paper.
}%
The topological properties of each of these manifolds have been computed in~\cite{Green:1989:AllHodgeNumbers}.
More recently, a new classification has been performed~\cite{Anderson:2017:FibrationsCICYThreefolds} in order to find the favourable representation of each manifold whenever it is possible.
Below we show a list of the CICY properties and of their configuration matrices:
\begin{itemize}
\item general properties
\begin{itemize}
\item number of configurations: $7890$
\item number of product spaces (block diagonal matrix): $22$
\item $h^{1,1} \in [0, 19]$, $18$ distinct values (\Cref{fig:data:hist-h11})
\item $h^{2,1} \in [0, 101]$, $65$ distinct values (\Cref{fig:data:hist-h21})
\item unique Hodge number combinations: $266$
\end{itemize}
\item “original dataset”~\cite{Candelas:1988:CompleteIntersectionCalabiYau, Green:1989:AllHodgeNumbers}
\begin{itemize}
\item maximal size of the configuration matrices: $12 \times 15$
\item number of favourable matrices (excluding product spaces): $4874$ ($\num{61.8}\%$)
\item number of non-favourable matrices (excluding product spaces): $2994$
\item number of different ambient spaces: $235$
\end{itemize}
\item “favourable dataset”~\cite{Anderson:2017:FibrationsCICYThreefolds}
\begin{itemize}
\item maximal size of the configuration matrices: $15 \times 18$
\item number of favourable matrices (excluding product spaces): $7820$ ($\num{99.1}\%$)
\item number of non-favourable matrices (excluding product spaces): $48$
\item number of different ambient spaces: $126$
\end{itemize}
\end{itemize}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0.45in 6in 0}, clip]{images/label-distribution_orig}
\caption{$h^{1,1}$}
\label{fig:data:hist-h11}
\end{subfigure}
\quad
\begin{subfigure}[c]{.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={6in 0.45in 0 0}, clip]{images/label-distribution_orig}
\caption{$h^{2,1}$}
\label{fig:data:hist-h21}
\end{subfigure}
\caption{Distribution of the Hodge numbers (log scale).}
\label{fig:data:hist-hodge}
\end{figure}
The configuration matrix completely encodes the information of the CICY and all topological quantities can be derived from it.
However, the computations are involved and there is often no closed-form expression.
This situation is typical in algebraic geometry, and it can be even worse for some problems, in the sense that it is not even known how to compute the desired quantity (think of the metric of CYs).
For these reasons, it is interesting to study how we can retrieve these properties using ML algorithms.
In the current paper, following~\cite{He:2017:MachinelearningStringLandscape, Bull:2018:MachineLearningCICY}, we focus on the computation of the Hodge numbers with the initial scheme:
\begin{equation}
\text{Input: configuration matrix}
\quad \longrightarrow \quad
\text{Output: Hodge numbers}
\end{equation}
To provide a good test case for the use of ML in contexts where the mathematical theory is not completely understood, we will make no use of known formulas.
\subsection{Exploratory Data Analysis}
\label{sec:data:eda}
A typical ML project does not consist of feeding the raw data -- here, the configuration matrix -- to the algorithm.
It is instead preceded by a phase of exploration in order to better understand the data, which in turn can help to design the learning algorithms.
We call \emph{features} properties given as inputs, and \emph{labels} the targets of the predictions.
There are several phases in the exploratory data analysis (EDA):
\begin{enumerate}
\item \emph{feature engineering}: new features are derived from the inputs;
\item \emph{feature selection}: the most relevant features are chosen to explain the targets;
\item \emph{data augmentation}: new training data is generated from the existing ones;
\item \emph{data diminution}: part of the training data is not used.
\end{enumerate}
For pragmatic introductions, the reader is referred to~\cite{Coursera:HowWinData, Skiena:2017:DataScience}.
Engineered features are redundant, by definition, but can help the algorithm learn more efficiently by providing an alternative formulation and by drawing attention on salient characteristics.
A simple example is the following: given a series of numbers, one can compute different statistics -- median, mean, variance, etc. -- and add them to the inputs.
It may happen that the initial series then becomes irrelevant once this new information is introduced.
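As an illustration, scalar features of this kind can be derived in a few lines of \texttt{pandas}; the following sketch assumes the configuration matrices are stored as a list of \texttt{numpy} arrays named \texttt{matrices} (our notation):
\begin{verbatim}
# Engineer simple scalar features from the configuration matrices.
import numpy as np
import pandas as pd

features = pd.DataFrame({
    "num_cp":  [m.shape[0] for m in matrices],   # projective spaces
    "num_eqs": [m.shape[1] for m in matrices],   # equations
    "norm_matrix": [np.linalg.norm(m) for m in matrices],
    "mean_deg": [m.sum(axis=0).mean() for m in matrices],
})
\end{verbatim}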
Another approach to improve the learning process is to augment or decrease the number of training samples artificially.
For example, one can use invariances of the inputs to generate more training data.
This does not help in our case because the entries of the configuration matrices are partially ordered.
Another possibility is to remove outliers which can damage the learning process by driving the algorithm far from the best solution.
If there are few of them, it is better to ignore them altogether during training since an algorithm which is not robust to outliers will in any case make bad predictions (a standard illustration is given by the Pearson and Spearman correlation coefficients, with the first not being robust to outliers~\cite{Skiena:2017:DataScience}).
Finding good features and selecting those to keep requires trials and errors.
In general, it is not necessary to keep track of all steps, but we feel that it is useful to do so in this paper for a pedagogical purpose.
Before starting the EDA, the first step should be to split the data into training and validation sets to avoid biasing the choices of the algorithm and the strategy: the EDA should be performed only on the training set.
However, the dataset we consider is complete and quite uniform: a subset of it would display the same characteristics as the entire set.
To give a general overview of the properties -- which can be useful for the reader interested in understanding the statistics of the CICY and for applications to string compactifications -- we work with the full dataset.
\subsubsection{Engineering}
Any transformation of the input data which has some mathematical meaning can be a useful feature.
We have established the following list of possibly useful quantities (most of them are already used to characterise CICY in the literature~\cite{Hubsch:1992:CalabiYauManifoldsBestiary}):
\begin{itemize}
\item the number of projective spaces (rows), $m = $ \texttt{num\_cp};
\item the number of equations (columns), $k = $ \texttt{num\_eqs};
\item the number of $\P^1$, $f = $ \texttt{num\_cp\_1};
\item the number of $\P^2$, \texttt{num\_cp\_2};
\item the number of $\P^n$ with $n \neq 1$, $F = $ \texttt{num\_cp\_neq1};
\item the excess number $N_{ex} = \sum\limits_{r=1}^F (n_r + f + m - 2k) =$ \texttt{num\_ex};
\item the dimension of the cohomology group $H^0$ of the ambient space, \texttt{dim\_h0\_amb};
\item the Frobenius norm of the matrix, \texttt{norm\_matrix};
\item the list of the projective space dimensions \texttt{dim\_cp} and statistics thereof (min, max, median, mean);
\item the list of the equation degrees \texttt{deg\_eqs} and statistics thereof (min, max, median, mean);
\item $k$-means clustering on the components of the configuration matrix (with a number of clusters going from 2 to 15);\footnotemark{}
\footnotetext{%
The algorithm determines the centroids of conglomerates of data called \textit{clusters} through an iterative process: each sample is assigned to the cluster with the nearest centroid, and the centroids are then recomputed.
We used the class \texttt{cluster.KMeans} in \texttt{scikit-learn}.
}%
\item principal components of the configuration matrix derived using a principal components analysis (PCA) with 99\% of the variance retained (see \Cref{fig:eda:svd}); a short sketch of these last two derivations follows the list.
\end{itemize}
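As announced, here is a minimal sketch of the clustering and PCA steps (assuming the flattened, zero-padded configuration matrices are stored in an array \texttt{X}):
\begin{verbatim}
# PCA retaining 99% of the variance, and k-means cluster labels.
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

pca = PCA(n_components=0.99)   # keep 99% of the variance
X_pca = pca.fit_transform(X)

clusters = KMeans(n_clusters=8, random_state=0).fit_predict(X)
\end{verbatim}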
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={6in 0 0 0}, clip]{images/svd_orig}
\caption{original dataset}
\end{subfigure}
\quad
\begin{subfigure}[c]{.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={6in 0 0 0}, clip]{images/svd_fav}
\caption{favourable dataset}
\end{subfigure}
\caption{Cumulative retained variance of the principal components of the configuration matrix in the original and favourable dataset.}
\label{fig:eda:svd}
\end{figure}
\subsubsection{Selection}
\paragraph{Correlations}
To get a first general idea, it is useful to take a look at the correlation matrix of the features and the labels.\footnotemark{}
\footnotetext{%
The correlation is defined as the ratio between the covariance of two variables, $\sigma(x, y) = \frac{1}{N} \sum_{i} (x_i - \bar{x})(y_i - \bar{y})$, and the product of the standard deviations $\sigma(x)\sigma(y)$ (here $\bar{x}$ and $\bar{y}$ are the sample means).
}%
The correlation matrices for the scalar variables are displayed in \Cref{fig:eda:corr} for the original and favourable datasets (this excludes the configuration matrix).
As we can see, some engineered features are strongly correlated, especially in the favourable dataset.
In particular $h^{1,1}$ (respectively $h^{2,1}$) correlates (respectively anti-correlates) strongly with the number of projective spaces $m$ and with the norm and rank of the matrix.
This gives a first hint that these variables could help improve predictions by feeding them to the algorithm along with the matrix.
On the other hand, finer information on the numbers of projective spaces and equations does not correlate with the Hodge numbers.
From this analysis, in particular from \Cref{fig:eda:corr}, we find that the values of $h^{1,1}$ and $h^{2,1}$ are also correlated.
This motivates the simultaneous learning of both Hodge numbers, since it can increase the chances for the neural network to learn more universal features.
In fact, this is something that often happens in practice: counter-intuitively, it has been found that multi-tasking enhances the ability to generalize~\cite{Thrun:1995:LearningNthThing, Caruana:1997:MultitaskLearning, Baxter:2000:ModelInductiveBias, Maurer:2016:BenefitMultitaskRepresentation, Ndirango:2019:GeneralizationMultitaskDeep}.
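The correlation matrices of \Cref{fig:eda:corr} can be reproduced along the following lines (sketch, assuming a \texttt{pandas} DataFrame \texttt{df} collecting the scalar features and the Hodge numbers):
\begin{verbatim}
# Pearson correlations between scalar features and labels.
import matplotlib.pyplot as plt
import seaborn as sns

corr = df.corr()  # Pearson correlation matrix
sns.heatmap(corr, cmap="coolwarm", center=0.0)
plt.show()
\end{verbatim}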
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/corr-matrix_orig}
\caption{original dataset}
\end{subfigure}
\quad
\begin{subfigure}[c]{.45\linewidth}
\centering
\includegraphics[width=\textwidth]{images/corr-matrix_fav}
\caption{favourable dataset}
\end{subfigure}
\caption{Correlations between the engineered scalar features and the labels.}
\label{fig:eda:corr}
\end{figure}
\paragraph{Feature importance}
A second option is to sort the features by order of importance.
This can be done using a decision tree, which is capable of determining the weight of each variable in making a prediction.
One advantage over correlations is that the algorithm is non-linear and can thus determine subtler relations between the features and labels.
To avoid biasing the results by relying on a single decision tree, we trained a random forest (using \texttt{ensemble.RandomForestRegressor} in \texttt{scikit-learn}).
It consists of a large number of decision trees which are trained on different random subsets of the training dataset and whose outputs are averaged (see also \Cref{sec:ml:trees,sec:app:trees}).
The algorithm determines the importance of the different features as a by-product of the learning process: the most relevant features tend to appear in the first branches, since they contribute most to the prediction.
The importance of a variable is a number between $0$ and $1$, and the sum over all of them must be $1$.
Since a random forest contains many trees, the robustness of the variable ranking usually improves with respect to a single tree (\Cref{sec:app:trees}).
Moreover, as the main objective is to obtain a qualitative preliminary understanding of the features, there is no need for fine tuning at this stage and we use the default parameters (in particular, $100$ decision trees).
We computed feature importances for both datasets and for two different sets of variables: one containing the engineered features and the configuration matrix, and one with the engineered features and the PCA components.
In the following figures, we show several comparisons of the importance of the features, dividing the figures into scalars, vectors and configuration matrix (or its PCA), and clusters.
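Before turning to the figures, we give a minimal sketch of this computation (assuming an array \texttt{X} of features, a vector \texttt{y} with one of the Hodge numbers, and a list \texttt{feature\_names}, in our notation):
\begin{verbatim}
# Variable ranking from a random forest with default parameters.
from sklearn.ensemble import RandomForestRegressor

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)
ranking = sorted(zip(feature_names, forest.feature_importances_),
                 key=lambda t: t[1], reverse=True)
\end{verbatim}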
In \Cref{fig:eda:scalars}, we show the ranking of the scalar features in the two datasets (differences between the set using the configuration matrix and the other using the PCA are marginal and are not shown to avoid redundant plots).
As already mentioned, we find again that the number of projective spaces is the most important feature by far.
It is followed by the matrix norm in the original dataset, and by the matrix rank for $h^{2,1}$ in the favourable dataset, though to a lesser extent.
Finally, the analysis shows that the other features have a negligible impact on the determination of the labels and may as well be ignored during training.
The same analysis can be repeated for the vector features and the configuration matrix component by component.
In \Cref{fig:eda:tensor}, we show the cumulative importance of the features (i.e.\ the sum of the importance of each component).
We can appreciate that the list of the projective space dimensions plays a major role in the determination of the labels in both datasets.
In the case of $h^{2,1}$, we also have a large contribution from the dimensions of the cohomology group \texttt{dim\_h0\_amb}, as can be expected from algebraic topology~\cite{Hubsch:1992:CalabiYauManifoldsBestiary}.
In \Cref{fig:eda:cluster}, we finally show the importance associated to the number of clusters used during the EDA: no matter how many clusters we use, their relevance is definitely marginal compared to all other features used in the variable ranking (scalars, vectors, and the configuration matrix or its PCA) for both datasets.
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/scalar-features_orig}
\caption{original dataset}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/scalar-features_fav}
\caption{favourable dataset}
\end{subfigure}
\caption{Importance of the scalar features in the datasets.
The same computation involving the PCA of the configuration matrix shows very marginal differences in this case: the importance of the scalar features is mostly unchanged, especially for the higher ranked variables.}
\label{fig:eda:scalars}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{\textwidth}
\centering
\includegraphics[width=\textwidth]{images/vector-tensor-features_orig}
\caption{Original dataset}
\end{subfigure}
\begin{subfigure}[c]{\textwidth}
\centering
\includegraphics[width=\textwidth]{images/vector-tensor-features_fav}
\caption{Favourable dataset}
\end{subfigure}
\caption{Importance of the vector features and of the configuration matrix (or its principal components) in the datasets: notice how the PCA plays a much more important role than the full configuration matrix.}
\label{fig:eda:tensor}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/cluster-features_orig}
\caption{Original dataset}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/cluster-features_fav}
\caption{Favourable dataset}
\end{subfigure}
\caption{Influence of the number of clusters on the variable ranking.
Also in this case the difference between using the configuration matrix or its PCA is marginal, and the clusters have even lower importance when using the latter.
We therefore avoid presenting redundant information and show only the importance of clusters when using the configuration matrix.}
\label{fig:eda:cluster}
\end{figure}
\begin{figure}[htp]
\centering
\begin{minipage}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth, trim={0 10in 0 0}, clip]{images/distr-labels-corr-feat_orig}
\caption*{Original dataset}
\end{minipage}
\begin{minipage}[t]{\textwidth}
\centering
\includegraphics[width=\textwidth, trim={0 10in 0 0}, clip]{images/distr-labels-corr-feat_fav}
\caption*{Favourable dataset}
\end{minipage}
\caption{Distribution of the labels with respect to the number of projective spaces.}
\label{fig:eda:distr}
\end{figure}
\paragraph{Conclusion}
The number of projective spaces, as well as the list of their dimensions, therefore seems to play a relevant role in the determination of $h^{1,1}$ and $h^{2,1}$.
In order to validate this observation, in \Cref{fig:eda:distr} we present a scatter plot of the Hodge number distributions versus the number of projective spaces: it shows that there is indeed a linear dependence in $m$ for $h^{1,1}$, especially in the favourable dataset.
In fact, the only exceptions to this pattern in the latter case are the manifolds which do not have a favourable embedding~\cite{Anderson:2017:FibrationsCICYThreefolds}.
Hence, a simple data analysis hints naturally towards this mathematical result.
Finally, we found other features which may be relevant and are worth including in the algorithm: the matrix rank and norm, the list of projective space dimensions and the associated cohomology dimensions.
However, we want to emphasize one caveat to this analysis: correlations capture only linear relations, and the random forest has not been optimized and may simply not be powerful enough to make good predictions.
This means that feature selection only gives hints, and it may be necessary to adapt the choice later.
\subsubsection{Removing Outliers}
\label{sec:data:eda:outliers}
The Hodge number distributions (\Cref{fig:data:hist-hodge,fig:data:distr}) display a few outliers which lie outside the tail of the main distributions.
Such outliers may negatively impact the learning process and drive down the accuracy: it makes sense to remove them from the training set.
It is easy to see that the $22$ outlying manifolds with $h^{1,1} = h^{2,1} = 0$ are product spaces, recognisable from their block-diagonal matrix.
Moreover, we will also remove outliers with $h^{1,1} = 19$ and $h^{2,1} > 86$, which represent $15$ and $2$ samples.
In total, this represents $39$ samples, or $\SI{0.49}{\percent}$ of the total data.
To simplify the overall presentation and because the dataset is complete, we will mainly focus on the pruned subset of the data obtained by removing outliers, even from the test set.\footnotemark{}
\footnotetext{%
There is no obligation to use a ML algorithm to label outliers in the training set: it is perfectly fine to decide which data to include or not, even based on targets.
However, for a real-world application, outliers in the test set should be labeled by some process based only on the input features.
Flagging possible outliers may improve the predictions by helping the machine understand that such samples require more caution.
}%
This implies that Hodge numbers lie in the ranges $1 \le h^{1,1} \le 16$ and $15 \le h^{2,1} \le 86$.
Except when stated otherwise, accuracy is indicated for this pruned dataset.
Obviously, the very small percentage of outliers makes the effect of removing them from the test set negligible when stating accuracy.
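For concreteness, the pruning described above amounts to a simple filter (sketch, assuming a DataFrame \texttt{df} with columns \texttt{h11} and \texttt{h21}, in our notation):
\begin{verbatim}
# Remove the 39 outliers: product spaces (h11 = h21 = 0),
# h11 = 19 and h21 > 86.
mask = (df["h11"] > 0) & (df["h11"] != 19) & (df["h21"] <= 86)
df_pruned = df[mask]
\end{verbatim}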
\begin{figure}[htp]
\centering
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={0 0 6in 0}, clip]{images/label-distribution-compare_orig}
\caption{$h^{1,1}$}
\end{subfigure}
\qquad
\begin{subfigure}[c]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth, trim={6in 0 0 0}, clip]{images/label-distribution-compare_orig}
\caption{$h^{2,1}$}
\end{subfigure}
\caption{%
Summary of the statistics for the distributions of both Hodge numbers.
The coloured box shows the three quartiles of the distributions, with the internal horizontal line corresponding to the median.
The ``whiskers'' extend up to $1.5$ times the interquartile range (the distance between the first and third quartiles) from the lower and upper limits of the boxes.
Isolated points show the remaining outliers, which we however choose to keep to avoid excessively pruning the dataset.
}
\label{fig:data:distr}
\end{figure}
\section{Introduction}
The last few years have seen a major uprising of machine learning (ML), and more particularly of neural networks~\cite{Goodfellow:2016:DeepLearning, Chollet:2017:DeepLearningPython, Geron:2019:HandsOnMachineLearning}.
This technology is extremely efficient at discovering and predicting patterns and now pervades most fields of applied sciences and of the industry.
In view of its versatility, it is likely that ML will find its way towards high-energy and theoretical physics (see~\cite{Albertsson:2018:MachineLearningHigh, Mehta:2019:HighbiasLowvarianceIntroduction, Ntampaka:2019:RoleMachineLearning, Carleo:2019:MachineLearningPhysical, Buchanan:2019:PowerMachineLearning, Ruehle:2020:DataScienceApplications} for selected reviews).
One of the most critical places where progress can be expected is in understanding the geometries used to describe string compactifications.
String theory is the most developed candidate for a theory of quantum gravity together with the unification of matter and interactions.
However, it predicts ten spacetime dimensions: to recover our four-dimensional Universe, it is necessary to compactify six dimensions.
For string theory to be a fundamental theory of reality, a single compactification should describe the current Universe (obviously, other compactifications may enter at early or later stages since spacetime is dynamical).
Unfortunately, the number of possibilities -- forming the so-called string landscape -- is huge (numbers as high as $\num{e272000}$ have been suggested for some models)~\cite{Lerche:1989:ChiralFourDimensionalHeterotic, Douglas:2003:StatisticsStringMTheory, Ashok:2004:CountingFluxVacua, Douglas:2004:BasicResultsVacuum, Douglas:2007:FluxCompactification, Taylor:2015:FtheoryGeometryMost, Schellekens:2016:BigNumbersString, Halverson:2017:AlgorithmicUniversalityFtheory, Taylor:2018:ScanningSkeleton4D, Constantin:2019:CountingStringTheory}, the mathematical objects entering the compactifications are complex and typical problems are often NP-complete, NP-hard, or even undecidable~\cite{Denef:2007:ComputationalComplexityLandscape, Halverson:2019:ComputationalComplexityVacua, Ruehle:2020:DataScienceApplications}, making an exhaustive classification impossible.
Additionally, there is no single framework to describe all the possible (flux) compactifications.
As a consequence, each class of models must be studied with different methods.
This has prevented any precise connection to the existing and tested theories (in particular, the Standard Model of particle physics) or the proposal of a sharply defined and doable experiment.
Until recently, the string landscape has been studied using different methods: 1) analytic computations for simple examples, 2) general statistics, 3) random scans, 4) algorithmic enumerations of possibilities.
This has been a large endeavor of the string community, and we refer to the reviews~\cite{Grana:2006:FluxCompactificationsString, Lust:2009:SeeingStringLandscape, Ibanez:2012:StringTheoryParticle, Brennan:2018:StringLandscapeSwampland, Halverson:2018:TASILecturesRemnants, Ruehle:2020:DataScienceApplications} and to references therein for more details.
The main objective of such studies is to understand what the generic predictions of string theory are: even if “the” correct compactification has not been found, this helps to narrow down what to look for experimentally.
The first conclusion of these studies is that compactifications giving an effective theory close to the Standard Model are scarce.\footnotemark{}
\footnotetext{%
This means that the gauge group is not much bigger than $\group{SU}(3) \times \group{SU}(2) \times \group{U}(1)$ and that there are not too many additional particles.
The current bounds on BSM (Beyond Standard Model) physics put even stronger restrictions.
}%
Each of the four approaches displays different limitations: 1) lacks genericity, 2) is too general, 3) ignores the structure of the landscape and has little chance of discovering rare compactifications, 4) requires too much computational power to move beyond “simple” examples.
As a result, no major phenomenological progress has been seen in the last decade, and finding a physical compactification looks as remote as ever.
In reaction to these difficulties and starting with the seminal paper~\cite{Abel:2014:GeneticAlgorithmsSearch}, new investigations based on ML appeared in the recent years, focusing on different aspects of the string landscape and of the geometries used in compactifications~\cite{Krefl:2017:MachineLearningCalabiYau, Ruehle:2017:EvolvingNeuralNetworks, He:2017:MachinelearningStringLandscape, Carifio:2017:MachineLearningString, Altman:2019:EstimatingCalabiYauHypersurface, Bull:2018:MachineLearningCICY, Cole:2019:TopologicalDataAnalysis, Klaewer:2019:MachineLearningLine, Mutter:2019:DeepLearningHeterotic, Wang:2018:LearningNonHiggsableGauge, Ashmore:2019:MachineLearningCalabiYau, Brodie:2020:MachineLearningLine, Bull:2019:GettingCICYHigh, Cole:2019:SearchingLandscapeFlux, Faraggi:2019:MachineLearningClassification, Halverson:2019:BranesBrainsExploring, He:2019:DistinguishingEllipticFibrations, Bies:2020:MachineLearningAlgebraic, Bizet:2020:TestingSwamplandConjectures, Halverson:2020:StatisticalPredictionsString, Krippendorf:2020:DetectingSymmetriesNeural, Otsuka:2020:DeepLearningKmeans, Parr:2020:ContrastDataMining, Parr:2020:PredictingOrbifoldOrigin} (see also~\cite{Erbin:2018:GANsGeneratingEFT, Betzler:2020:ConnectingDualitiesMachine, Chen:2020:MachineLearningEtudes, Gan:2017:HolographyDeepLearning, Hashimoto:2018:DeepLearningAdSCFT, Hashimoto:2018:DeepLearningHolographic, Hashimoto:2019:AdSCFTDeepBoltzmann, Tan:2019:DeepLearningHolographic, Akutagawa:2020:DeepLearningAdSQCD, Yan:2020:DeepLearningBlack, Comsa:2019:SO8SupergravityMagic, Bobev:2020:CornucopiaAdS5Vacua, Bobev:2020:PropertiesNewN, Krishnan:2020:MachineLearningcal} for related works).
For more context and a summary of the state of the art, the reader is referred to the excellent review~\cite{Ruehle:2020:DataScienceApplications}.
ML is extremely well suited to pattern search, which motivates two main applications to string theory: 1) explore systematically a space of possibilities (if they are not random, ML should be able to find a pattern, even if it is too complicated to be formulated explicitly), 2) obtain approximate results on distributions from which mathematical formulas can be deduced.
We want to address the question of computing the Hodge numbers $h^{1,1}$ and $h^{2,1}$ (positive integers) for \emph{complete intersection Calabi--Yau} (CICY) $3$-folds~\cite{Green:1987:CalabiYauManifoldsComplete} using different machine learning algorithms.
A CICY is completely specified by its \emph{configuration matrix} (with non-negative integer entries), which is the basic input of the algorithms.
The CICY $3$-folds are the simplest Calabi--Yau and they have been well studied.
In particular, they have been completely classified and their topological properties computed~\cite{Candelas:1988:CompleteIntersectionCalabiYau, Green:1989:AllHodgeNumbers, Anderson:2017:FibrationsCICYThreefolds} (see~\cite{Lutken:1988:RecentProgressCalabiYauology, Hubsch:1992:CalabiYauManifoldsBestiary, Anderson:2018:TASILecturesGeometric, He:2020:CalabiYauSpacesString} for reviews).
For these reasons, they provide an excellent sandbox to test ML algorithms in a controlled environment.
More particularly, simple tests show that the task is difficult for simple ML algorithms -- even neural networks -- so that it is an interesting challenge to solve before moving to more difficult problems.
The goal is to predict two positive integers from a matrix of positive integers.
This task is complicated by various redundancies in the description (such as the invariance under permutations of rows and columns).
A simple sequential network taking only the matrix as input performs badly, especially for $h^{2,1}$.
As a consequence, more advanced methods are needed.
While the usual physics application of ML reduces to feeding a (big) sequential neural network with raw data, real-world applications are built following a more general workflow~\cite{Coursera:HowWinData, Geron:2019:HandsOnMachineLearning, Skiena:2017:DataScience}: 1) understanding of the problem, 2) exploratory data analysis (EDA), 3) design of a baseline, 4) definition of a validation strategy, 5) feature engineering and selection, 6) design of ML models, 7) ensembling.
While the first step is straightforward, it is still interesting to notice that computations involved in string geometries (using algebraic topology) are far from standard applications of ML algorithms, which makes the problem even more interesting.
EDA aims at better understanding the dataset: in particular, finding how the variables are distributed and correlated, determining whether there are outliers, etc.
This analysis naturally leads to designing new features from the existing ones, which is called \emph{feature engineering}.
Indeed, adding derived features by hand may make the data more easily understandable by the ML algorithms, for example by emphasizing important properties.\footnotemark{}
\footnotetext{%
While one could expect ML algorithms to generate these features by themselves, this may complicate the learning process.
So in cases where it is straightforward to compute meaningful derived features, it is often worth considering them.
}%
This phase is followed by \emph{feature selection}, where different sets of features are chosen according to the needs of each algorithm from step~6).
In between, one needs to set up a validation strategy to ensure that the predictions appropriately reflect the real values, together with a baseline model, which provides both a lower bound on the accuracy and a working pipeline.\footnotemark{}
\footnotetext{%
For example, the original work on this topic~\cite{He:2017:MachinelearningStringLandscape} did not set up a validation strategy and reported the accuracy over both the training and test data.
Correcting this problem leads to an accuracy of $37\%$~\cite{Bull:2018:MachineLearningCICY}, which is lower than the linear regression baseline.
}%
For instance, we find that a simple linear regression using the configuration matrix as input gives \SIrange{43.6}{48.8}{\percent} for $h^{1,1}$ and \SIrange{9.6}{10.4}{\percent} for $h^{2,1}$ using from $20\%$ to $80\%$ of data for training.
Hence, any algorithm \emph{must} do better than this to be worth considering.
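The baseline itself takes only a few lines (sketch; \texttt{X} are the flattened configuration matrices, \texttt{y} one of the Hodge numbers, and rounding predictions to the nearest integer before comparing is our choice of accuracy measure):
\begin{verbatim}
# Linear-regression baseline with a held-out test set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, random_state=0)
model = LinearRegression().fit(X_train, y_train)

accuracy = np.mean(np.rint(model.predict(X_test)) == y_test)
\end{verbatim}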
Finally, we can build different models in step~6), in particular, by considering different topologies of neural networks beyond the simplest sequential models.
The last optional step consists in combining different models together in order to improve the results.
With respect to the whole process, the purpose of this paper is also pedagogical and aims at exemplifying how these steps are performed in an applied ML project.
There is a finite number of $7890$ CICY $3$-folds.
Due to the freedom in representing the configuration matrix, two datasets have been constructed: the “original dataset”~\cite{Candelas:1988:CompleteIntersectionCalabiYau, Green:1989:AllHodgeNumbers} and the “favourable dataset”~\cite{Anderson:2017:FibrationsCICYThreefolds}.
A configuration matrix is said to be favourable if its second cohomology descends completely from the second cohomology of the ambient space: this implies that $h^{1,1}$ equals the number of projective spaces in the ambient space~\cite{Anderson:2017:FibrationsCICYThreefolds, Gray:2014:TopologicalInvariantsFibration}.
In the “favourable dataset”, all configuration matrices are favourable whenever possible (\SI{99.1}{\percent}), whereas in the “original dataset” only \SI{61.8}{\percent} of the matrices are favourable.
Both datasets will be described in more detail in \Cref{sec:data:datasets}.
Our analysis continues and generalizes~\cite{He:2017:MachinelearningStringLandscape, Bull:2018:MachineLearningCICY} at different levels.
We compute $h^{2,1}$, which has been ignored in~\cite{He:2017:MachinelearningStringLandscape, Bull:2018:MachineLearningCICY}, where the authors argue that it can be computed from $h^{1,1}$ and the Euler characteristic (a simple formula exists for the latter).
In our case, we want to push the idea of using ML to learn about the physics (or the mathematics) of CY to its very end: we assume that we do not know anything about the mathematics of the CICY, except that the configuration matrix is sufficient to derive all quantities.
Moreover, we have already mentioned that ML algorithms have rarely been used to derive data in algebraic topology, which can be a difficult task.
For this reason, obtaining also $h^{2,1}$ from ML techniques is an important first step towards using ML for more general problems in string geometries.
In particular, this helps to prepare the study of CICY $4$-folds (classified in~\cite{Gray:2013:AllCompleteIntersection}) for which there are four Hodge numbers which are expected to be even more difficult to compute.
Finally, regression is also more useful for extrapolating results: a classification approach assumes that we already know all the possible values of the Hodge numbers and has difficulty predicting labels which do not appear in the training set.
This is necessary when we move to a dataset for which not all topological quantities have been computed, for instance CY constructed from the Kreuzer--Skarke list of polytopes~\cite{Kreuzer:2002:CompleteClassificationReflexive}.
\begin{figure}[tbp]
\centering
\begin{subfigure}[c]{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{images/cicy_matrix_plots}
\caption{Accuracy reached when trained using only the configuration matrix.}
\end{subfigure}
\begin{subfigure}[c]{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{images/cicy_best_plots}
\caption{Accuracy reached using the best training set for each algorithm.}
\end{subfigure}
\caption{The plots show the best accuracy reached by the models considered in this paper for the original dataset.
The models are trained to predict separately $h^{1,1}$ and $h^{2,1}$, using $30\%$ and $80\%$ of the data for training.}
\label{fig:intro:comparison}
\end{figure}
In this paper, we compare the performances of the following algorithms: linear regression, support vector machines (SVM) with linear and Gaussian kernels, decision trees and ensemble thereof -- random forests and gradient boosting --, and deep neural networks.
The best results obtained with and without feature engineering are displayed in \Cref{fig:intro:comparison} for the original dataset.
We find that, in all cases except neural networks, using engineered features greatly enhances the performance.
The EDA reveals that the number of projective spaces forming the ambient space (equal to the number of rows) is a particularly distinguished feature.
In fact, all algorithms yield an accuracy of \SIrange{99}{100}{\percent} for $h^{1,1}$ in the favourable dataset.
For the linear regression, this directly reproduces the well-known result~\cite{Anderson:2017:FibrationsCICYThreefolds} that $h^{1,1}$ equals the number of projective spaces for favourable configuration matrices.
In the case of the original dataset, the best model is a neural network inspired by Google's Inception model~\cite{Szegedy:2014:GoingDeeperConvolutions, Szegedy:2015:RethinkingInceptionArchitecture, Szegedy:2016:Inceptionv4InceptionResNetImpact}, which allows us to reach nearly \SI{100}{\percent} accuracy.
This neural network is further studied in~\cite{Erbin:2020:InceptionCICY}.
The algorithms are not as successful for $h^{2,1}$, with the Inception model again giving the best result, close to $\SI{50}{\percent}$ accuracy -- which is still much better than what the baseline or simple models achieve.
We leave improving the computation of $h^{2,1}$ and interpreting what the different algorithms learn for a future work.
The data analysis and ML are programmed in Python using standard open-source packages: \texttt{pandas}~\cite{McKinney:2011:Pandas}, \texttt{matplotlib}~\cite{Hunter:2007:Matplotlib}, \texttt{seaborn}~\cite{Waskom:2017:Seaborn}, \texttt{scikit-learn}~\cite{Pedregosa:2011:ScikitLearn}, \texttt{scikit-optimize}~\cite{Head:2018:ScikitOptimize}, \texttt{tensorflow}~\cite{Package:Tensorflow} (and its high level API \texttt{Keras}~\cite{Package:Keras}).
The code and its description are available on \href{https://thesfinox.github.io/ml-cicy/}{Github}.
This paper is organized as follows.
In \Cref{sec:data}, we first recall the definition of Calabi--Yau manifolds (\Cref{sec:data:cy}) and describe the two existing CICY datasets (\Cref{sec:data:datasets}).
We then engineer new features before performing an EDA for both datasets (\Cref{sec:data:eda}), reproducing some well-known figures from the literature.
Then, in \Cref{sec:ml}, we implement the different ML algorithms.
Our paper culminates in the description of the Inception-like neural network in \Cref{sec:ml:nn:inception} where we reach the highest accuracy.
Finally, we discuss our results in \Cref{sec:conclusion}.
\Cref{app:ml-algo} contains details on the different algorithms used in this paper.
\section{Machine Learning Algorithms}
\label{app:ml-algo}
\subsection{Linear regression}
\label{sec:app:linreg}
Considering a set of $F$ features $\{ x_n \}$ where $n = 1, \ldots, F$, a linear model learns a function
\begin{equation}
f(x_n)
= \sum_{n=1}^F w_n x_n + b,
\end{equation}
where $w$ and $b$ are the \emph{weights} and \emph{intercept} of the fit.
One of the key assumptions behind a linear fit is the independence of the residual errors between the observed points and the values predicted by the model, which can therefore be assumed to be sampled from a normal distribution peaked at the average value~\cite{Lista:2017:StatisticalMethods,Coursera:DataScience}.
The parameters of the fit are then chosen to maximise their \emph{likelihood} function, or equivalently to minimise its negative logarithm (the $\chi^2$ function).
A related task is to minimise the mean squared error, without assuming a statistical distribution of the residual error: ML for regression usually implements this as the loss function of the estimators.
In this sense, loss functions for regression are more general than a likelihood approach, but they are nonetheless related.
For plain linear regression, the associated loss is
\begin{equation}
\mathcal{L}(w, b) =
\frac{1}{2N} \sum_{i=1}^N
\left( y^{(i)} - \sum_{n=1}^F w_n x_n^{(i)} - b \right)^2,
\end{equation}
where $N$ is the number of samples and $x_n^{(i)}$ the $n$th feature of the $i$th sample.
The values of the parameters will therefore be:
\begin{equation}
(w, b) = \underset{w,\,b}{\mathrm{argmin}}~ \mathcal{L}(w, b).
\end{equation}
This usually requires looping over all samples and all features, thus the \emph{least squares} method has a time complexity of $\mathrm{O}( F \times N )$: while an increase in the number of samples might be an issue, the number of engineered features and matrix components usually does not change, so rescaling the algorithm is not a major concern.
There are however different versions of possible regularisation which we might add to constrain the parameters of the fit and avoid adapting too well to the training set.
In particular we may be interested in adding an $\ell_1$ regularisation:
\begin{equation}
\mathcal{L}_1(w) = \sum\limits_{n=1}^F \abs{w_n},
\end{equation}
or the $\ell_2$ version:
\begin{equation}
\mathcal{L}_2(w) = \sum\limits_{n=1}^F w_n^2.
\end{equation}
Notice that in general we do not regularise the intercept.
These terms can be added to the plain loss function to prevent large parameters from dominating the predictions and to retain better generalisation properties:
\begin{itemize}
\item add both $\ell_1$ and $\ell_2$ regularisation (this is called \emph{elastic net}):
\begin{equation}
\mathcal{L}_{en}(w, b;~\alpha_{en}, L) = \mathcal{L}(w,b) + \alpha_{en} \cdot L \cdot \mathcal{L}_1(w) + \frac{\alpha_{en}}{2} \cdot (1 - L) \cdot \mathcal{L}_2(w),
\end{equation}
\item keep only $\ell_1$ regularisation (i.e.\ the \emph{lasso} regression):
\begin{equation}
\mathcal{L}_{lss}(w, b;~\alpha_{lss}) = \mathcal{L}(w,b) + \alpha_{lss} \cdot \mathcal{L}_1(w),
\end{equation}
\item keep only $\ell_2$ regularisation (\emph{ridge} regression):
\begin{equation}
\mathcal{L}_{rdg}(w, b;~\alpha_{rdg}) = \mathcal{L}(w,b) + \alpha_{rdg} \cdot \mathcal{L}_2(w).
\label{eq:ridge:loss}
\end{equation}
\end{itemize}
The role of the hyperparameter $L$ is to balance the contribution of the two additional terms.
For larger values of the hyperparameter $\alpha$, the weights $w$ assume smaller values and adapt less to the particular training set.
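The three regularised variants are readily available in \texttt{scikit-learn}; in the sketch below, \texttt{alpha} corresponds to the hyperparameters $\alpha$ in the losses above and \texttt{l1\_ratio} to $L$ (the values shown are illustrative, not tuned):
\begin{verbatim}
from sklearn.linear_model import ElasticNet, Lasso, Ridge

ridge = Ridge(alpha=1.0)                       # l2 penalty
lasso = Lasso(alpha=0.1)                       # l1 penalty
elastic = ElasticNet(alpha=0.1, l1_ratio=0.5)  # both
\end{verbatim}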
\subsection{Support Vector Machines for Regression}
\label{sec:app:svr}
This family of supervised ML algorithms was created with classification tasks in mind~\cite{Cortes:1995:SVMClassification} but has proven to be effective also for regression problems~\cite{Drucker:1997:SVMRegression}.
Unlike linear regression, instead of minimising the squared distance for each sample, the algorithm assigns a penalty to predictions of samples $x^{(i)} \in \R^F$ (for $i = 1, 2, \dots, N$) which are further away than a certain hyperparameter $\varepsilon$ from their true value $y$, allowing however a \textit{soft margin} of tolerance represented by the penalties $\zeta$ above and $\xi$ below.
This is achieved by minimising the following function with respect to $w,\, b,\, \zeta$ and $\xi$:\footnotemark
\footnotetext{In a classification task the training objective would be the minimisation of the opposite of the log-likelihood function of predicting a positive class, that is $y^{(i)} ( w_n \phi_n( x^{(i)} ) + b )$, which should equal the unity for good predictions (we can consider $\varepsilon = 1$), instead of the regression objective $y^{(i)} - w_n \phi_n( x^{(i)} ) - b$.
The differences between SVR for classification purposes and regression follow as shown.}
\begin{equation}
\begin{split}
\mathcal{L}(w, b, \zeta, \xi)
& =
\frac{1}{2} \sum\limits_{n = 1}^{F'} w_n^2
+
C \sum\limits_{i = 1}^N \left( \zeta^{(i)} + \xi^{(i)} \right)
\\
& +
\sum\limits_{i = 1}^N \alpha^{(i)}
\left( y^{(i)} - \sum\limits_{n = 1}^{F'} w_n \phi_n(x^{(i)}) - b - \varepsilon - \zeta^{(i)} \right)
\\
& +
\sum\limits_{i = 1}^N \beta^{(i)}
\left( \sum\limits_{n = 1}^{F'} w_n \phi_n(x^{(i)}) + b - y^{(i)} - \varepsilon - \xi^{(i)} \right)
\\
& -
\sum\limits_{i = 1}^N \left( \rho^{(i)} \zeta^{(i)} + \sigma^{(i)} \xi^{(i)} \right)
\end{split}
\label{eq:svr:loss}
\end{equation}
where $\alpha^{(i)},\, \beta^{(i)},\, \rho^{(i)},\, \sigma^{(i)} \ge 0$ such that the previous expression encodes the constraints
\begin{equation}
\begin{cases}
y^{(i)} - \sum\limits_{n = 1}^{F'} w_n \phi_n( x^{(i)} ) - b & \le \varepsilon + \zeta^{(i)},
\qquad
\varepsilon \ge 0,
\quad
\zeta^{(i)} \ge 0,
\quad
i = 1, 2, \dots, N
\\
\sum\limits_{n = 1}^{F'} w_n \phi_n( x^{(i)} ) + b - y^{(i)} & \le \varepsilon + \xi^{(i)},
\qquad
\varepsilon \ge 0,
\quad
\xi^{(i)} \ge 0,
\quad
i = 1, 2, \dots, N
\end{cases}
\label{eq:svr:constraints}
\end{equation}
and where $\phi( x^{(i)} ) \in \R^{F'}$ is a function mapping the feature vector $x^{(i)} \in \R^F$ into a higher dimensional space ($F' > F$), whose interpretation will become clear in an instant.
The minimisation problem leads to
\begin{equation}
\begin{cases}
w_n - \sum\limits_{i = 1}^N \left( \alpha^{(i)} - \beta^{(i)} \right) \phi_n( x^{(i)} ) = 0
\\
\sum\limits_{i = 1}^N \left( \alpha^{(i)} - \beta^{(i)} \right) = 0
\\
\alpha^{(i)} + \rho^{(i)}
=
\beta^{(i)} + \sigma^{(i)}
=
C,
\qquad
i = 1, 2, \dots, N
\end{cases}
\end{equation}
such that $0 \le \alpha^{(i)},\, \beta^{(i)} \le C$ for all $i = 1, 2, \dots, N$. This can be reformulated as a \textit{dual} problem of finding the extrema with respect to $\alpha^{(i)}$ and $\beta^{(i)}$ of
\begin{equation}
W(\alpha, \beta)
=
\frac{1}{2} \sum\limits_{i, j = 1}^N \theta^{(i)} \theta^{(j)} \mathrm{K}( x^{(i)}, x^{(j)} )
-
\varepsilon \sum\limits_{i = 1}^N \left( \alpha^{(i)} + \beta^{(i)} \right)
+
\sum\limits_{i = 1}^N y^{(i)} \theta^{(i)},
\label{eq:svr:loss-v2}
\end{equation}
where $\theta = \alpha - \beta$ are called \textit{dual coefficients} (accessible through the attribute \texttt{dual\_coef\_} of \texttt{svm.SVR} in \texttt{scikit-learn}) and $\mathrm{K}( x^{(i)}, x^{(j)} ) = \sum\limits_{n = 1}^{F'} \phi_n( x^{(i)} ) \phi_n( x^{(j)} )$ is the \textit{kernel} function.
Notice that the Lagrange multipliers $\alpha^{(i)}$ and $\beta^{(i)}$ are non-vanishing only for a particular set of vectors $l^{(i)}$ which lie outside the $\varepsilon$-dependent bounds of \eqref{eq:svr:constraints} and operate as landmarks for the others.
They are called \textit{support vectors} (accessible using the attribute \texttt{support\_vectors\_} in \texttt{svm.SVR}), hence the name of the algorithm. There can be at most $N$ of them, a limit reached when $\varepsilon \to 0^+$.
As a consequence any sum involving $\alpha^{(i)}$ or $\beta^{(i)}$ can be restricted to the subset of support vectors.
Using the kernel notation, the predictions will therefore be
\begin{equation}
y_{pred}^{(i)}
=
y_{pred}( x^{(i)} )
=
\sum\limits_{n = 1}^{F'} w_n \phi_n( x^{(i)} ) + b
=
\sum\limits_{a \in A} \theta^{(a)} \mathrm{K}( x^{(i)}, l^{(a)} ) + b,
\end{equation}
where $A \subset \lbrace 1, 2, \dots, N \rbrace$ is the subset of labels of the support vectors.
In \Cref{sec:res:svr} we consider two different implementations of the SVM algorithm:
\begin{itemize}
\item the \textit{linear kernel}, namely the case $\mathrm{K}(x, y) = x \cdot y$ (i.e.\ $\phi \equiv \mathrm{id}$ and $F' = F$), for which the loss, in the \texttt{scikit-learn} implementation of \texttt{svm.LinearSVR}, simplifies to
\begin{equation}
\mathcal{L}(w, b)
=
C \sum\limits_{i = 1}^N \max\left( 0, \abs{ y^{(i)} - \sum\limits_{n = 1}^{F} w_n x_n^{(i)} - b } - \varepsilon \right) + \frac{1}{2} \sum\limits_{n = 1}^{F} w_n^2,
\end{equation}
without resorting to the dual formulation of the problem.
\item the Gaussian kernel (called \texttt{rbf}, from \textit{radial basis function}) in which
\begin{equation}
\mathrm{K}(x^{(i)}, l^{(a)}) = \exp\left( - \gamma \sum\limits_{n = 1}^F \left( x^{(i)}_n - l^{(a)}_n \right)^2 \right).
\end{equation}
\end{itemize}
From the definition of the loss function \eqref{eq:svr:loss} and the kernels, we can appreciate the role of the main hyperparameters of the algorithm.
While the interpretation of $\varepsilon$ is straightforward as the margin allowed without penalty for the prediction, $\gamma$ controls the (inverse squared) width of the Gaussian used to map the features into the higher dimensional space.
Furthermore, $C$ plays a similar role to the $\ell_2$ term in \eqref{eq:ridge:loss} by controlling the magnitude of the penalty for samples outside the $\varepsilon$-dependent bound; its relation to the linear regularisation is $\alpha_{rdg} = C^{-1}$, thus $C > 0$ by definition.
Given the nature of the algorithm, SVMs are powerful tools which usually give better results in both classification and regression tasks than logistic and linear regression, but they scale poorly with the number of samples used during training.
In particular the time complexity is at worst\footnotemark $\mathrm{O}(F \times N^3)$ due to the quadratic nature of \eqref{eq:svr:loss-v2} and the computation of the kernel function for all samples: for large datasets ($N \gtrsim 10^4$) they are usually outperformed by ANNs.
\footnotetext{In general it is plausible that the time complexity is $\mathrm{O}(F \times N^2)$ based on good implementations of caching in the algorithm.}
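The two implementations discussed above can be instantiated as follows (sketch with illustrative, untuned hyperparameters):
\begin{verbatim}
from sklearn.svm import SVR, LinearSVR

svr_linear = LinearSVR(C=1.0, epsilon=0.1)
svr_rbf = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma="scale")
\end{verbatim}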
\subsection{Decision Trees, Random Forests and Gradient Boosting}
\label{sec:app:trees}
Decision trees are supervised ML algorithms which model simple decision rules based on the input data~\cite{Quinlan:1986:DecisionTrees,Wittkowski:1986:CART}.
They are informally referred to with the acronym CART (from \textit{Classification And Regression Trees}) and their name descends from the binary tree structure coming from such decision functions separating the input data at each iteration (\textit{node}), thus creating a bifurcating structure with \textit{branches} (the different paths, or decisions made) and \textit{leaves} (the samples in each branch): the basic idea behind them is an \textit{if\dots then\dots else} structure.
In \texttt{scikit-learn} this is implemented in the classes \texttt{tree.DecisionTreeClassifier} and \texttt{tree.DecisionTreeRegressor}.
The idea behind them is to take input samples $x^{(i)} \in \R^F$ (for $i = 1, 2, \dots, N$) and partition the space in such a way that data with the same label $y^{(i)} \in \R$ lie in the same subset of samples (while for classification this may be natural to visualise, for regression it amounts to approximating the input data with a step function whose value is constant inside each partition).
Let in fact $j = 1, 2, \dots, F$ be a feature and $x^{(i)}_j$ the corresponding value for the sample $i$.
At each node $n$ of the tree we partition the set of input data $\mathcal{M}_n$ into two subsets:
\begin{equation}
\begin{split}
\mathcal{M}^{[1]}_n( t_{j,\, n} )
& =
\left\lbrace (x^{(i)}, y^{(i)}) \in \R^F \times \R \quad \vert \quad x^{(i)}_j < t_{j,\, n} \quad \forall i \in A_n \right\rbrace,
\\
\mathcal{M}^{[2]}_n( t_{j,\, n} )
& =
\mathcal{M}_n \setminus \mathcal{M}^{[1]}_n( t_{j,\, n} ),
\end{split}
\end{equation}
where $A_n$ is the full set of labels of the data samples in the node $n$ and $t_{j,\, n} \in \R$ is a threshold value for the feature $j$ at node $n$.
The measure of the ability of the split to reach the objective (classifying or creating a regression model to predict the labels) is modelled through an \textit{impurity} function (i.e. the measure of how often a random data point would be badly classified or how much it would be badly predicted).
Common choices in classification tasks are the Gini impurity, a special quadratic case of the Tsallis entropy (which in turn generalises the Boltzmann--Gibbs entropy, recovered when the Tsallis parameter is set to one), and the information-theoretic entropy.
In regression tasks it is usually given by the $\ell_1$ and $\ell_2$ norms of the deviation from different estimators (median and mean, respectively) for each node $n$:
\begin{itemize}
\item \textit{mean absolute error}
\begin{equation}
H^{[l]}_n(x;\, t_{j,\, n}) = \frac{1}{\Abs{\mathcal{M}^{[l]}_n( t_{j,\, n} )}} \sum\limits_{i \in A^{[l]}_n} \Abs{y^{(i)} - \tilde{y}^{[l]}_{pred,\, n}( x )},
\quad
( x^{(i)}, y^{(i)} ) \in \mathcal{M}_n( t_{j,\, n} ),
\end{equation}
\item \textit{mean squared error}:
\begin{equation}
H^{[l]}_n(x;\, t_{j,\, n}) = \frac{1}{\Abs{\mathcal{M}^{[l]}_n( t_{j,\, n} )}} \sum\limits_{i \in A^{[l]}_n} \left( y^{(i)} - \bar{y}^{[l]}_{pred,\, n}( x ) \right)^2,
\quad
( x^{(i)}, y^{(i)} ) \in \mathcal{M}_n( t_{j,\, n} ),
\end{equation}
\end{itemize}
where $\Abs{\mathcal{M}^{[l]}_n( t_{j,\, n} )}$ is the cardinality of the set $\mathcal{M}^{[l]}_n( t_{j,\, n} )$ for $l = 1, 2$ and
\begin{equation}
\tilde{y}^{[l]}_{pred,\, n}( x ) = \underset{i \in A^{[l]}_n}{\mathrm{median}}~ y^{(i)},
\qquad
\bar{y}^{[l]}_{pred,\, n}( x ) = \frac{1}{\Abs{A^{[l]}_n}} \sum\limits_{i \in A^{[l]}_n} y^{(i)},
\end{equation}
where $A_n^{[l]} \subset A_n$ are the subset of labels in the left and right splits ($l = 1$ and $l = 2$, that is) of the node $n$.
The full measure of the impurity of the node $n$ and for a feature $j$ is then:
\begin{equation}
G_{j,\, n}(\mathcal{M};\, t_{j,\, n})
=
\frac{\Abs{\mathcal{M}_n^{[1]}( t_{j,\, n} )}}{\Abs{\mathcal{M}_n}} H^{[1]}_n( x;\, t_{j,\, n} )
+
\frac{\Abs{\mathcal{M}_n^{[2]}( t_{j,\, n} )}}{\Abs{\mathcal{M}_n}} H^{[2]}_n( x;\, t_{j,\, n} ),
\end{equation}
from which we select the parameters
\begin{equation}
\hat{t}_{j,\, n}
=
\underset{t_{j,\, n}}{\mathrm{argmin}}~ G_{j,\, n}( \mathcal{M}_n;\, t_{j,\, n} ).
\label{eq:trees:lossmin}
\end{equation}
We then recurse over all $\mathcal{M}_n^{[l]}( \hat{t}_{j,\, n} )$ (for $l = 1, 2$) until we reach the maximum allowed depth of the tree (at most $\Abs{\mathcal{M}_n} = 1$).
Other than just predicting a class or a numeric value, decision trees provide a criterion to assign the importance of each feature appearing in the nodes.
The implementation of the procedure can however vary between different libraries: in \texttt{scikit-learn} the importance of a feature is computed by the total reduction in the objective function due to the presence of the feature, normalised over all nodes.
Namely it is defined as the difference between the total impurity normalised by the total amount of samples in the node and the sum of the separate impurities of the left and right split normalised over the number of samples in the respective splits, summed over all the nodes.
Thus features with a high \textit{variable ranking} (or \textit{variable importance}) are those with a higher impact in reducing the loss of the algorithm and can be expected to be seen in the initial branches of the tree.
A measure of the variable importance is in general extremely useful for feature engineering and feature selection since it gives a natural way to pick features with a higher chance to provide a good prediction of the labels.
By nature decision trees have a query time complexity of $\mathrm{O}( \log(N) )$, like most binary search algorithms.
However their definition requires running over all $F$ features to find the best split for each sample thus increasing the time complexity to $\mathrm{O}( F \times N \log( N ) )$.
Summing over all samples in the whole node structure leads to the worst case scenario of a time complexity $\mathrm{O}( F \times N^2 \log( N ) )$.
Well balanced trees (that is, nodes are approximately symmetric with the same amount of data samples inside) can usually reduce that time by a factor $N$, but it may not always be the case.
Decision trees have the advantage to be very good at classifying or creating regression relations in the presence of ``well separable'' data samples and they usually provide very good predictions in a reasonable amount of time (especially when balanced).
However, if $F$ is very large, a small variation of the data will almost always lead to a large change in the decision thresholds, and unconstrained trees are usually prone to overfit.
There are however smart ways to compensate this behaviour based on \textit{ensemble} learning such as \textit{bagging}\footnotemark and \textit{boosting} as well as \textit{pruning} methods such as limiting the depth of the tree or the number of splits and introducing a dropout parameter to remove certain nodes of the tree.
\footnotetext{The term \textit{bagging} comes from the contraction of \textit{bootstrap} and \textit{aggregating}: predictions are in fact made over randomly sampled partitions of the training set with replacement (i.e. samples can appear in different partitions, known as the \textit{bootstrap} approach) and then averaged together (\textit{aggregating}).
Random forests are an improvement to this simple idea and work best for decision trees: while it is possible to bag simple trees and take their predictions, using the random subsampling as described usually leads to better performance and results.}
Random forests of trees also provide a variable ranking system, obtained by averaging the importance of each feature across all base estimators in the bagging aggregator.
As a reference, \textit{random forests} of decision trees (\texttt{ensemble.RandomForestRegressor} in \texttt{scikit-learn}) are ensemble learning algorithms based on fully grown (deep) decision trees.
They were created to overcome the issues related to overfitting and variability of the input data and are based on random sampling of the training data~\cite{Ho:1995:RandomForests}.
The idea is to take $K$ random partitions of the training data, train a different decision tree on each of them and combine the results: for a classification task this corresponds to averaging the \textit{a posteriori} (or conditional) probability of predicting the class $c$ given an input $x$ (i.e. the Bayesian probability $P(c \vert x)$) over the $K$ trees, while for regression it amounts to averaging the predictions of the trees $y_{pred,\, \hat{n}}^{(i)\, \lbrace k \rbrace}$, where $k = 1, 2, \dots, K$ and $\hat{n}$ is the final node (i.e. the node containing the final predictions).
This defines what has been called a \textit{random forest} of trees which can usually help in improving the predictions by reducing the variance due to trees adapting too much to training sets.
\textit{Boosting} methods are another implementation of ensemble learning algorithms in which multiple \textit{weak learners}, in this case shallow decision trees, are trained over the training dataset~\cite{Friedman:2001:Boosting,Friedman:2002:Boosting}. In general, the parameters $\hat{t}_{j,\, n}$ in \eqref{eq:trees:lossmin} can be approximated by an expansion
\begin{equation}
t_{j,\, n}( x )
=
\sum\limits_{m = 0}^M t^{\{m\}}_{j,\, n}( x )
=
\sum\limits_{m = 0}^M \beta^{\{m\}}_{j,\, n} g( x;\, a^{\{m\}}_{j,\, n} ),
\label{eq:trees:par}
\end{equation}
where $g( x;\, a^{\{m\}}_{j,\, n})$ are called \textit{base learners} and $M$ is the number of iterations\footnotemark.
\footnotetext{Different implementations of the algorithm refer to the number of iterations in different ways.
For instance \texttt{scikit-learn} calls them \texttt{n\_estimators} in the class \texttt{ensemble.GradientBoostingRegressor} in analogy to the random forest where the same name is given to the number of trained decision trees, while \texttt{XGBoost} prefers \texttt{num\_boost\_rounds} and \texttt{num\_parallel\_tree} to name the number of boosting rounds (the iterations) and the number of trees trained in parallel in a forest.}
The values of $a^{\{m\}}_{j,\, n}$ and $\beta^{\{m\}}_{j,\, n}$ are enough to specify the value of $t_{j,\, n}( x )$ and can be computed by iterating \eqref{eq:trees:lossmin}:
\begin{equation}
( a^{\{m\}}_{j,\, n},\, \beta^{\{m\}}_{j,\, n} )
=
\underset{\{a_{j,\, n};\, \beta_{j,\, n}\}}{\mathrm{argmin}}~
G_{j,\, n}\left( \mathcal{M}_n;\, t^{\{m-1\}}_{j,\, n}( x ) + \beta_{j,\, n} g( x;\, a_{j,\, n} ) \right).
\label{eq:trees:iter}
\end{equation}
The specific case of boosted trees is simpler since the base learner predicts a constant value $g( x;\, a^{\{m\}}_{j,\, n} )$, thus \eqref{eq:trees:iter} simplifies to
\begin{equation}
\gamma^{\{m\}}_{j,\, n}
=
\underset{\gamma_{j,\, n}}{\mathrm{argmin}}~
G_{j,\, n}\left( \mathcal{M}_n;\, t^{\{m-1\}}_{j,\, n}( x ) + \gamma_{j,\, n} \right).
\end{equation}
Ultimately, the values of the parameters in \eqref{eq:trees:par} are updated using gradient descent as
\begin{equation}
t^{\{m\}}_{j,\, n}( x ) = t^{\{m-1\}}_{j,\, n}( x ) + \nu\, \gamma_{j,\, n}^{\{m\}},
\end{equation}
where $0 \le \nu \le 1$ is the \textit{learning rate} which controls the magnitude of the update.
Through this procedure, boosted trees can usually vastly improve the predictions of very small decision trees, reducing the bias at the cost of a moderate increase in variance.
Another way to prevent overfitting the training set is to randomly \textit{subsample} the feature vector by taking a subset of its components (in \texttt{scikit-learn} it is specified as a percentage of the total number of features).
Moreover \texttt{scikit-learn} provides various ways to control the loss of gradient boosting: apart from the aforementioned \textit{least squares} and \textit{least absolute deviation}, there are hybrid versions of these such as the \textit{huber} loss, which combines the two previous losses through an additional hyperparameter $\alpha$~\cite{Fawcett:2001:Huber}. While more implementations exist, boosted trees also provide a way to measure the importance of the variables, as any decision tree algorithm does.
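As a minimal sketch of such a configuration (the hyperparameters are purely illustrative, and the toy data of the previous sketch is reused):
\begin{verbatim}
from sklearn.ensemble import GradientBoostingRegressor

gbr = GradientBoostingRegressor(
    n_estimators=200,   # number of boosting iterations M
    learning_rate=0.1,  # the shrinkage parameter nu
    max_depth=3,        # shallow trees act as weak learners
    subsample=0.8,      # random subsampling against overfitting
    loss="huber",       # hybrid least-squares / absolute-deviation
    alpha=0.9,          # additional Huber hyperparameter
)
gbr.fit(X, y)           # X, y as in the previous sketch
print(gbr.feature_importances_)
\end{verbatim}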
\subsection{Artificial Neural Networks}
\label{sec:app:nn}
ANNs are a state-of-the-art class of algorithms in ML.
They usually outperform any other algorithm in very large datasets (the size of our dataset is roughly at the threshold) and can learn very complicated decision boundaries and functions\footnotemark.
\footnotetext{Despite their fame in the face of the general public, even small networks can prove to be extremely good at learning complicated functions in a small amount of time.}
In the main text we used two types of neural networks: \textit{fully connected} (FC) networks and \textit{convolutional neural networks} (CNN).
They both rely on a layered structure, starting from the input layers (e.g. the configuration matrix of CY manifolds, an RGB image or several engineered features) and going towards the output layers (e.g. the Hodge numbers or the classification class of the image).
In FC networks the input of layer $l$ is a feature vector $a^{(i)\, \{l\}} \in \R^{n_l}$ (for $i = 1, 2, \dots, N$) and, as shown in \Cref{fig:nn:dense}, each layer is densely connected to the following\footnotemark.
\footnotetext{Clearly the input vector $x \in \R^F$ is equivalent to the vector $a^{\{0\}}$ and $n_0 = F$. Inputs to each layer are here represented as a matrix $a^{\{l\}}$ whose columns are made by samples and whose rows are filled with the values of the features.}
In other words, each entry of the vectors $a^{(i)\, \{l\}}_j$ (for $j = 1, 2, \dots, n_l$) is mapped through a function $\psi$ to all the components of the following layer $a^{\{l+1\}} \in \R^{n_{l+1}}$:
\begin{equation}
\begin{split}
\psi:~ & \R^{n_l}~~~ \longrightarrow \R^{n_{l+1}}
\\
& a^{(i)\, \{l\}} \longmapsto a^{(i)\, \{l+1\}} = \psi( a^{(i)\, \{l\}} ),
\end{split}
\end{equation}
such that
\begin{equation}
a^{(i)\, \{l+1\}}_j
=
\psi_j( a^{(i)\, \{l\}} )
=
\phi\left( \sum\limits_{k = 1}^{n_l} a^{(i)\, \{l\}}_k W^{\{l\}}_{kj} + b^{\{l\}}\, \mathbb{I}_{j} \right),
\end{equation}
where $\mathbb{I} \in \R^{n_{l+1}}$ is a vector of ones.
The matrix $W^{\{l\}}$ is the \textit{weight matrix} and $b^{\{l\}}$ is the \textit{bias} term.
The function $\phi$ is a non-linear function and plays a fundamental role: without it, the successive application of the linear maps $a^{\{l\}} \cdot W^{\{l\}} + b\, \mathbb{I}$ could only ever reproduce linear relations, preventing the network from learning more complicated decision boundaries or functions.
$\phi$ is known as the \textit{activation function} and can assume different forms, as long as its non-linearity is preserved (e.g. a \textit{sigmoid} function in the output layer of a network squeezes the results into the interval $[0, 1]$, thus reproducing the probabilities of a classification).
A common choice is the \textit{rectified linear unit} ($\mathrm{ReLU}$) function
\begin{equation}
\phi( z ) = \mathrm{ReLU}( z ) = \max( 0, z ),
\end{equation}
which has been proven to be better at training deep learning architectures~\cite{Glorot:2011:ReLU}, or its modified version $\mathrm{LeakyReLU}( z ) = \max( \alpha z, z )$ which introduces a slope $\alpha > 0$ to improve the computational performance near the non differentiable point in the origin.
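A minimal \texttt{numpy} sketch of a single dense layer with these activations (the shapes and values are illustrative only):
\begin{verbatim}
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    return np.maximum(alpha * z, z)

def dense_forward(a_l, W_l, b_l, phi=relu):
    # one dense layer: a_{l+1} = phi(a_l W_l + b_l), batched over rows
    return phi(a_l @ W_l + b_l)

rng = np.random.default_rng(1)
a0 = rng.normal(size=(32, 8))        # batch of 32 samples, n_0 = 8
W0 = 0.1 * rng.normal(size=(8, 16))  # weight matrix
b0 = np.zeros(16)                    # bias term
a1 = dense_forward(a0, W0, b0)       # output of shape (32, 16)
\end{verbatim}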
CNN architectures were born in the context of computer vision and object localisation~\cite{Tompson:2015:CNN}.
As one can suspect looking at \Cref{fig:nn:lenet} for instance, the fundamental difference with FC networks is that they use a convolution operation $K^{\{l\}} * a^{(i)\, \{l\}}$ instead of a linear map to transform the output of the layers, before applying the activation function\footnotemark.
\footnotetext{In general the input of each layer can be a generic tensor with an arbitrary number of axis.
For instance, an RGB image can be represented by a three dimensional tensor with indices representing the width of the image, its height and the number of filters (in this case $3$, one for each colour channel).}
This way the network is no longer densely connected, as the result of the convolution (the \textit{feature map}) depends only on a restricted neighbourhood of the original feature, set by the size of the \textit{kernel} window $K^{\{l\}}$ used and the shape of the input $a^{(i) \{l\}}$, which is no longer limited to flattened vectors.
In turn, the kernel size determines how the convolution is computed: one way to see this is to visualise an image being scanned by a smaller window function, moving over all pixels or skipping a certain number of them (the length of the \textit{stride} of the kernel).
In general the output size will therefore differ from the input size, unless the latter is \textit{padded} (usually with zeros) before the convolution. The size of the output is:
\begin{equation}
O_n = \frac{I_n - k_n + 2 p_n}{S_n} + 1, \qquad n = 1, 2, \dots,
\end{equation}
where $O$ is the output size, $I$ the input size, $k$ the size of the kernel used, $p$ the amount of padding (symmetric at the start and end of the axis considered) and $S$ the stride.
In the formula, $n$ runs over the number of components of the input tensor.
While any padding is possible, we are usually interested in two kinds of possible convolutions:
\begin{itemize}
\item ``same'' convolutions for which $O_n = I_n$, thus $p_n = \frac{I_n ( S_n - 1 ) - S_n + k_n}{2}$,
\item ``valid'' convolutions for which $O_n < I_n$ and $p_n = 0$.
\end{itemize}
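The formula can be checked with a short sketch covering both kinds of convolution (the numeric values are illustrative):
\begin{verbatim}
def conv_output_size(I, k, p=0, S=1):
    # output size along one axis: I input, k kernel, p padding, S stride
    return (I - k + 2 * p) // S + 1

print(conv_output_size(28, 3, p=1))  # "same" convolution: 28
print(conv_output_size(28, 3))       # "valid" convolution: 26
\end{verbatim}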
In both cases the learning process aims to minimise the loss function defined for the task: in our regression implementation of the architecture we used the mean squared error of the predictions.
The objective is to find the best possible values of the weight and bias terms ($W^{\{l\}}$ and $b^{\{l\}}$) or to build the best filter kernel $K^{\{l\}}$ through \textit{backpropagation}~\cite{Rumelhart:1986:Backprop}, that is, by reconstructing the gradient of the loss function climbing back the network from the output layer to the input, and then using the usual gradient descent procedure to select the optimal parameters.
For instance, in the case of FC networks we need to find
\begin{equation}
( \widehat{W}^{\{l\}}, \hat{b}^{\{l\}} )
=
\underset{W^{\{l\}},\, b^{\{l\}}}{\mathrm{argmin}} \frac{1}{2 N} \sum\limits_{i = 1}^N \left( y^{(i)} - a^{(i)\, \{L\}} \right)^2
\quad
\forall l = 1, 2, \dots, L,
\end{equation}
where $L$ is the total number of layers in the network.
A similar relation holds in the case of CNN architectures.
In the main text we use the \textit{Adam}~\cite{Diederik:2014:Adam} implementation of gradient descent and add batch normalisation layers to improve the convergence of the algorithm.
As we can see from their definition, ANNs are capable of learning very complex structures at the cost of having a large number of parameters to tune.
The risk of overfitting the training set is therefore quite evident.
There are in general several techniques to counteract the tendency to over-adapt to the training set, one of them being the introduction of regularisation ($l_2$ and $l_1$) in the same fashion as a linear model (we show it in \Cref{sec:app:linreg}).
Another successful way is to introduce \textit{dropout} layers~\cite{Srivastava:2014:Dropout}, where connections are randomly switched off according to a certain retention probability (or its complement, the dropout \textit{rate}): this regularisation technique preserves good generalisation properties, since the prediction cannot rely too heavily on any particular part of the architecture, which is randomly modified during training (dropout layers however act as the identity during predictions, to avoid producing random results).
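As a minimal sketch of such a regularised network, assuming a TensorFlow/Keras backend (the layer sizes and rates are illustrative, not those of the main text):
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(64, kernel_regularizer=regularizers.l2(1e-4)),
    layers.BatchNormalization(),   # eases optimisation, as in the text
    layers.ReLU(),
    layers.Dropout(0.2),           # rate = 1 - retention probability
    layers.Dense(1),               # single regression output
])
model.compile(optimizer="adam", loss="mse")
\end{verbatim}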
\section{Introduction}
A classic problem in quantitative finance is Market Making (MM). A Market Maker's task is to provide liquidity to other participants by continuously placing buy and sell orders, whilst remaining profitable. The Market Maker usually has a number of advantages with respect to other traders such as lower transaction costs, the ability to send orders at a higher frequency without penalties, or even monetary rewards for the provision of liquidity. These advantages compensate the obligation to provide liquidity for a significant proportion of trading session time, with maximum limits to the quoted bid-ask spread and/or a minimum amount of visible liquidity that needs to be provided.
A possible formulation of the MM problem is to model the environment as a stochastic optimal control problem with a suitable utility function to maximize as objective, as in \cite{Avellaneda}. This setup abstracts from many of the complexities of the micro-structure of a real multi-agent exchange, but still addresses the fundamental trade-off between holding inventory risk and profiting by capturing the bid-ask spread through MM activity.
Reinforcement learning (RL) is a general framework that allows agents to learn optimal behaviours through interaction with their environment. In recent years it has been successfully applied to solve difficult challenges that had until then defied other methods. RL agents have been trained to play Atari games \cite{rl_atari}, play Go \cite{rl_go}, manipulate robotic actuators \cite{rl_actuator}, and drive autonomous vehicles \cite{rl_driving}, among other applications, and many of these solutions display superhuman skills. It has proven particularly effective in situations where the environment can be simulated and the agent has access to virtually unlimited amounts of interaction data. The successes in other areas have spurred a stream of research that tries to apply similar techniques in the financial domain. Some applications of RL in finance include trading bots \cite{rl_trading_bot}, risk optimization \cite{rl_risk}, and portfolio management \cite{rl_portfolio}.
In this paper we show how the MM problem can be reformulated and the optimal solution recovered by training an agent using RL methods.
\section{Model and Algorithm}
We consider an asset whose price $s$ changes according to:
\vspace{-0.2cm}
\begin{equation*}
ds_t = \sigma dW_t,
\end{equation*}
\vspace{-0.1cm}
\noindent where $W_t$ is a one-dimensional Brownian motion and $\sigma$ is constant. The Market Maker can control the prices $p^b_t$ and $p^a_t$ at which she offers to buy or sell the security, respectively. The buy or sell orders will be 'hit' or 'lifted' by Poisson processes with rates $\lambda(\delta^b_t$), $\lambda(\delta^a_t)$, that depend on the distance between the bid and ask prices, and the asset price: $\delta^b_t=s_t-p^b_t$, $\delta^a_t=p^a_t-s_t$, respectively. We let the rate function be $\lambda(\delta) = A \exp(-k\delta)$, a decreasing function of $\delta$, again following \cite{Avellaneda}.
The cash $X$ of the agent evolves according to the following equation:
\vspace{-0.2cm}
\begin{equation*}
dX_t = p^a_t dN^a_t - p^b_t dN^b_t.
\end{equation*}
\vspace{-0.1cm}
That is, it accumulates whenever the asset is sold and decreases when it is bought, where $N^a_t$, $N^b_t$ are Poisson processes with rates $\lambda(\delta^a_t)$, $\lambda(\delta^b_t)$, respectively. The inventory of the asset held at time $t$, denoted $q_t$, follows the dynamics:
\vspace{-0.2cm}
\begin{equation*}
dq_t = dN^b_t - dN^a_t.
\end{equation*}
\vspace{-0.1cm}
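A minimal Python sketch of one discrete time step of these dynamics, under a first-order approximation of the Poisson fills (the default parameter values anticipate the simulation settings given below; this is an illustration only, not the code of the repository linked later):
\begin{verbatim}
import numpy as np

def step(s, X, q, delta_b, delta_a, dt=0.05, sigma=2.0,
         A=137.45, k=1.5, rng=np.random.default_rng(0)):
    lam = lambda d: A * np.exp(-k * d)       # order-arrival intensity
    buy = rng.random() < lam(delta_b) * dt   # our bid is hit: we buy
    sell = rng.random() < lam(delta_a) * dt  # our ask is lifted: we sell
    X += (s + delta_a) * sell - (s - delta_b) * buy
    q += int(buy) - int(sell)
    s += sigma * np.sqrt(dt) * rng.normal()  # Brownian mid-price move
    return s, X, q
\end{verbatim}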
The agent's objective is to maximize the expected value of a concave utility function of the final wealth:
\vspace{-0.2cm}
\begin{equation}
\max_{\{p^b_t\}, \{p^a_t\}} \mathbb{E} [-\exp(-\beta (X_T + q_T s_T))],
\label{eq:objective_function}
\end{equation}
\vspace{-0.1cm}
\noindent where $w_T = X_T + q_T s_T$ is the agent's wealth at time $T$ (cash plus liquidation value of the inventory). The utility function adds risk-aversion to the agent's preferences, as opposed to pure maximum profit seeking. A first order approximation of the optimal agent was obtained analytically in \cite{Avellaneda}, resulting in a closed form solution: $p^{b}_{t} := \rho - \frac{\Phi_t}{2}$ for the bid price and $p^{a}_{t}:= \rho + \frac{\Phi_t}{2}$ for the ask price, where $\Phi_t := \frac{2}{\beta} \ln(1 + \frac{\beta}{k})$ is the bid-ask spread, and $\rho := s_t -\beta \sigma^2 (T-t) q_t$ is the reservation price: the mid-price shifted in the direction opposite to the current inventory.
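A minimal sketch of these closed-form quotes (parameter values as in the simulation below):
\begin{verbatim}
import numpy as np

def optimal_quotes(s, q, t, T=1.0, beta=0.5, sigma=2.0, k=1.5):
    spread = 2.0 / beta * np.log(1.0 + beta / k)   # Phi_t
    rho = s - beta * sigma**2 * (T - t) * q        # reservation price
    return rho - spread / 2.0, rho + spread / 2.0  # (bid, ask)
\end{verbatim}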
In the present setup this is equivalent to the mean-variance formulation, where the objective to maximize is replaced by $\max_{\{p^b_t\}, \{p^a_t\}} \mathbb{E} [w_T]-\frac{\kappa}{2} \mathbb{V}[w_T]$ (for a suitable $\kappa$). Assuming independence of changes in wealth across time-periods we obtain (see \cite{Ritter}):
\vspace{-0.2cm}
\begin{equation}
\mathbb{E} [w_T]-\frac{\kappa}{2} \mathbb{V}[w_T] = \sum \mathbb{E} [\delta w_t]-\frac{\kappa}{2} \sum \mathbb{V}[\delta w_t].
\end{equation}
\vspace{-0.1cm}
We can now proceed to design the reward that our RL agent will receive at each time step: $\delta w_t - \frac{\kappa}{2} (\delta w_t - \hat{\mu})^2 $, where $\hat{\mu}$ is a running estimate of the mean of the single period returns $\delta w_t$.
Our agent interacts with her environment in the traditional RL setting, as illustrated in Figure \ref{fig:reinfocement_struct}. To this purpose we discretize time at a reasonable resolution $dt$ and at each step provide the agent with an observation of the state of the world $S_t$. Here, we let $S_t = (s_t, q_t, T-t)$, a tuple with the asset mid-price $s_t$, the current inventory $q_t$, and the time remaining until the end of the period $T-t$. Based on said state the agent chooses the action $A_t = (b_t, a_t)$, which is a tuple of two numbers, the bid price and ask price she wants to quote. Come the next step the agent will receive a stochastic reward $R_{t+1} = \delta w_t - \frac{\kappa}{2} (\delta w_t - \hat{\mu})^2$ and the new state of the world $S_{t+1}$.
The cumulative discounted reward is defined as $G_t := \sum_{s=t}^{s=T} \gamma^{s-t}R_{s+1}$, and the agent's objective is to maximize its expectation. Setting $\gamma = 1.0$ and under the assumptions above, this is approximately equivalent to maximizing the objective function in Equation \ref{eq:objective_function}.
\vspace{-0.2cm}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{img/rl_struct.png}
\caption{Reinforcement learning problem structure}
\label{fig:reinfocement_struct}
\end{figure}
\vspace{-0.6cm}
\section{Methods and Experiments}
We train agents with two different RL algorithms, Deep Q-Learning (DQN) and Tabular Q-Learning (TQL), and compare the resulting agents with two benchmarks: the optimal agent and the symmetric agent, which quotes symmetrically around the mid-price with the same spread as the optimal agent. Both RL algorithms work by having an estimate $\hat{q}(s,a)$ of the state-action value function, defined as the expected value of the cumulative discounted reward given that we start from state $s$, take action $a$ and then follow a policy $\pi$: $q_{\pi}(s,a) = \mathbb{E}_{\pi}[G_t \vert S_t = s, A_t = a]$. The estimate $\hat{q}(s,a)$ is updated at each interaction with the environment, continuously and simultaneously improving the policy and the estimate by using the update formula:
\vspace{-0.2cm}
\begin{equation}
\hat{q}_{new}(s_t,a_t) \gets \hat{q}(s_t,a_t) + \alpha (r_t + \gamma \max_a \hat{q}(s_{t+1},a) - \hat{q}(s_t,a_t)),
\label{eq:qlearning_update}
\end{equation}
\vspace{-0.1cm}
\noindent where in the case of TQL the estimate $\hat{q}(s,a)$ is stored as a table, and in DQN it is represented as a neural network and each update is used as an input-output pair to train the network through gradient descent. For both DQN and TQL we discretize the action space to have $n_a$ possible actions as in \cite{Spooner}, where the discrete actions represent the number of steps of size $d_a$ away from the mid-price. For the spread we use, as before, the optimal spread. From these two values the environment calculates the bid and ask prices quoted by the agent. In the case of TQL we also discretize each component of the state observation $S_t = (s_t, q_t, T-t)$, where $s_t$ is expressed as the number of steps $ds=\sigma \sqrt{dt}$ away from the starting price $s_0$, the inventory $q_t$ is in units of the asset, and $T-t$ is expressed as the number of time steps remaining until the end of the episode. Hence the components of the state are integers, the first two bounded in absolute value by the episode length, and the last is in the range $[0,T/dt]$. The number of possible states for TQL is then $(2\frac{T}{dt}+1)^2(\frac{T}{dt}+1)$.
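A minimal sketch of the tabular update in Equation \ref{eq:qlearning_update} (the state encoding is assumed to be the discretised tuple described above):
\begin{verbatim}
from collections import defaultdict

q_table = defaultdict(float)  # (state, action) -> value estimate
alpha, gamma, n_a = 0.6, 1.0, 21

def q_update(s_t, a_t, r_t, s_next):
    best_next = max(q_table.get((s_next, a), 0.0)
                    for a in range(n_a))
    q_table[(s_t, a_t)] += alpha * (r_t + gamma * best_next
                                    - q_table[(s_t, a_t)])
\end{verbatim}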
We chose the following parameters for our simulation: $s_0 = 100$ (the starting price of the asset), $T = 1$ (the length of the simulation), $\sigma = 2.0$ (the standard deviation of the asset's price), $dt = 0.05$ (the discrete time increments), $q_0 = 0$ (the starting inventory), $\beta = 0.5$ (the agent's risk aversion), $\gamma = 1.0$ (the reward discount factor), $\kappa = 1.0$, $A = 137.45$, $k = 1.5$, $n_a = 21$ and $d_a = 0.2$ (the probability of 'lifting' or 'hitting' a price farther than $\frac{n_a-1}{2} d_a = 2.0$ away from the mid-price is very close to $0$ for $\lambda(\delta)$ given the chosen parameters, so we restrict quoted prices to the interval $[s_t-2.0,s_t+2.0]$).
Both Q-learning agents were trained using a learning rate $\alpha = 0.6$. DQN was trained for $\num[group-separator={,}]{1000}$ episodes, while TQL was trained for $\num[group-separator={,}]{5000000}$. For the DQN agent we used a network with two hidden fully connected layers of size 10 and ReLU activation functions, the network had a total parameter count of 381.
\section{Results}
After training the RL agents, we run 1000 simulations to evaluate their performance\footnote{Code for the simulations can be found at: \href{https://github.com/mselser95/optimal-market-making}{https://github.com/mselser95/optimal-market-making}}.
\vspace{-0.1cm}
\begin{figure}[!ht]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{img/beta_05_wealth.png}
\caption{Wealth for $\beta$ = 0.5}
\label{fig:comparison_hist_0.1}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{img/beta_05_reward.png}
\caption{Accumulated reward for $\beta = 0.5$}
\label{fig:comparison_hist_1}
\end{minipage}
\hfill
\end{figure}
\vspace{-0.2cm}
\begin{table}[!ht]
\centering
\begin{tabular}{| L{3.26cm} | C{2.75cm}| C{2.75cm} | C{2.75cm} | C{2.75cm} | }
\hline
& Optimal Agent & Symmetric & Tabular Q & Deep-Q \\ \hline
Mean Wealth & 47.79 & 57.67 & 44.21 & 53.47 \\ \hline
Std. Dev. Wealth& 6.09 & 11.86 & 7.08 & 6.67 \\ \hline
Sharpe Ratio & 7.83 & 4.86 & 6.24 & 8.00 \\ \hline
Mean Cum. Reward & 22.46 & -7.17 & 19.19 & 29.04 \\ \hline
Utility Estimate& -2.63e-9 & -4.34e-6 & -1.49e-6 & -2.20e-10 \\ \hline
\end{tabular}
\caption{1000 simulations for $\beta = 0.5$}
\label{tab:table_strat_tr}
\end{table}
\vspace{-0.2cm}
To compare the different methods and benchmarks, we calculate the mean and standard deviation of final wealth, and its Sharpe ratio. We also obtain the mean cumulative reward (what our RL agents try to maximize) and the Monte Carlo estimate of the original utility function in Equation \ref{eq:objective_function}.
The Symmetric agent obtains the highest mean wealth, but at the cost of high dispersion around that value. Surprisingly, the DQN agent manages to outperform the agent obtained in \cite{Avellaneda}; the reason could be that the latter uses a first order approximation of the order arrival rate. In contrast, the TQL agent does not manage to achieve the same level of performance, despite having been trained for several orders of magnitude more episodes. We believe that the cause is the huge size of the table holding the estimate of the value function, which potentially could have as many as $(2\frac{T}{dt}+1)^2(\frac{T}{dt}+1) n_a = \num[group-separator={,}]{675364200}$ entries, although many of the states corresponding to these entries are extremely unlikely or impossible to reach. In practice, our agent had $\num[group-separator={,}]{1925393}$ non-zero table entries after training, and we would need to train for longer to obtain a more precise estimate of the optimal policy's q-function and thus a better performing agent. In conclusion, we found that Deep Q-Learning, which shares weights across states and actions, can produce a more parsimonious approximation than Tabular Q-Learning and requires far less experience to train effectively.
\vspace{-0.2cm}
\section{Discussion and Future Work}
During our experiments we found that the training of RL agents is finicky and very sensitive to hyper-parameter settings. Finding a solution to which the algorithms converge consistently, regardless of random seed (used for parameter initialization and for generating the episodes) is non-trivial. Effectively, this lack of robustness, combined with the difficulty in explaining the inner workings of machine learning models, remains a significant obstacle to their use in production. We were pleasantly surprised, however, that the solution found by DQN was superior to the first order approximation derived in \cite{Avellaneda}. We are hopeful that in future, with superior tools to improve convergence and better error measures, these techniques will become part of the standard Quant toolbox.
As future research we would like to characterize the final wealth distribution, so as to be able to choose the $\kappa$ parameter in a principled manner. We would also like to obtain the optimal agent (without linearly approximating the order arrival rate function) through other methods and compare it to the solution obtained here. Additionally, we would like to use similar techniques in more complex settings, where we take into account the order book micro-structure of the market or we simulate a multi-agent environment where agents with diverse behaviours interact.
\vspace{-0.2cm}
\section*{Acknowledgments}
We thank Sebastian Di Tella and Lisandro Kaunitz for useful comments and for proofreading the article.
\vspace{-0.6cm}
\section{Introduction} \label{s_intro}
Since hydrogen is the most common element in the universe, and makes up most of its mass, the distribution of this element is intimately connected with the dynamics of matter on scales from the cosmic down to the stellar. Although in its ionized or molecular forms hydrogen is problematic to detect, neutral atomic hydrogen (\ion{H}{i}) emits (or absorbs, if it is cooler than the background), due to a hyperfine transition, radio waves at a rest frequency of 1420 MHz. The transition rate is very low, which makes the line faint and relatively difficult to detect, but this has two compensating advantages: firstly, clouds of hydrogen up to the galactic scale are usually optically thin, which makes it relatively straightforward to deduce the mass of the cloud from the intensity of the line; and secondly that the spectral line is intrinsically very narrow, making it a useful way to study the motions, whether thermal, internal or bulk, of the emitting cloud, via Doppler shift and broadening.
In recent years there have been a number of blind surveys for \ion{H}{i} in the local universe, some complete \citep[e.g. HIPASS,][]{barnes_2001}, others ongoing (ALFALFA \citealt{giovanelli_2005}; EBHIS \citealt{winkel_2010, kerp_2011}; CHILES \citealt{fernandez_2013}). Further, deeper blind searches for \ion{H}{i} (LADUMA \citealt{holwerda_2012}; WALLABY \citealt{koribalski_2009}; DINGO \citealt{meyer_2009}; see also \citealt{duffy_2012}) are planned with the upcoming SKA precursor instruments MeerKAT \citep{jonas_2009} and ASKAP \citep{johnston_2008}. Such surveys complement those performed at other wavelengths and avoid potential biases from preferential selection of galaxies which are bright at these wavelengths.
The majority of hydrogen in the universe occurs in galaxies. The width of a galaxy's \ion{H}{i} spectral profile gives its bulk rotation speed, and the area under the profile is proportional to its \ion{H}{i} mass - both quantities of importance in cosmology.
From simple geometrical considerations it is clear that in any survey of objects uniformly distributed in space, the number frequency of objects will increase as their angular size decreases. Since the detectability of objects takes a sharp downturn as their angular sizes become smaller than the instrument resolution, the most common survey object therefore can be expected to be one which is only just brighter than the survey sensitivity cutoff, and which has an angular size no larger than the beam of the instrument. Since detection of \ion{H}{i} sources usually makes use also of spectral information, a galaxy at the limit of detection may be unresolved in any one channel map, but nevertheless kinematically resolved, such that the flux peak moves progressively across channels (see the EBHIS observation of DDO 154 described in section \ref{sss_ebhis_results} for an example of this).
The principal task therefore for post-detection processing of blind \ion{H}{i} surveys is to measure as accurately as possible the width, area and other properties of low signal-to-noise (S/N) spectral lines of unresolved sources: and not only for spatially-integrated spectral lines, but also for the line at each spatial pixel across the extent of the source. This is the aim of the technique discussed in the present paper.
Fitting of a model to a spatially-integrated spectral profile where the source is well-resolved on the other hand should not be expected to yield better values of galaxy parameters than other methods, because with such sources there are problems determining which spatial pixels to include in the sum - significant numbers of pixels with small but non-zero contributions may be excluded, or pixels containing nothing but a chance spike in noise may be mistakenly included. We make use of the well-resolved THINGS observations here for practical, demonstrative reasons. It is unavoidable that pixel masking effects cause flux biases in such spectra, although careful treatment of the data can minimize this.
The question of spatial masking or selection is one that arises even if the source is unresolved, because its flux remains distributed across the imaging beam or point spread function (PSF) of the instrument. Any cutoff criterion will miss some contribution in the wings, but a too generous cutoff will include too much noise from off-source directions. Fitting a model which includes a spatial component shaped like the PSF seems like a simple way around this, but is unsatisfactory because the noise in adjacent spatial pixels is also convolved by the PSF, and is therefore not statistically independent. Statistically speaking, it is better to fit to data which is as unprocessed as practical - i.e., to include instrumental response in the model and fit to raw data, rather than to fit a simpler model to data which has been processed (with accompanying muddying of the statistical waters) in order to remove or at least systematize the instrumental response. But an exploration of such matters is beyond the scope of the present paper. Here we content ourselves with construction of global spectra by spatial masking and summing, and make the associated caveats about the reliability of resulting flux measurements.
The plan of the paper is as follows. The advantages of having a model of the \ion{H}{i} spectral line is discussed in Sect. \ref{ss_theory_intro}, and the model itself is described in Sect. \ref{ss_theory_model}. Various technical matters connected with the process of fitting, including the Bayesian methodology, estimation of uncertainties, and a suitable goodness-of-fit measure, are discussed in Sect. \ref{ss_theory_fitting}. The tests performed on the model are presented in Sect. \ref{s_tests}. \ion{H}{i} spectral lines were obtained under a variety of conditions of noise, spectral resolution and baseline, and were fitted by the model.
We posed three questions about the fits. Firstly, how good a fit is the model in the case of optimum frequency resolution and S/N - does it return good values of bulk properties of the galaxies? This question is addressed in Sect. \ref{ss_tests_orig}, where we fit the model to a spatially-integrated spectrum from each of the 34 THINGS galaxies observed by \citet{walter_2008}.
Secondly, if we add noise to the spectra, and bin the channels more coarsely, does this affect the fitted parameter values? We address this question in Sect. \ref{ss_tests_coarse}.
Thirdly, how does the model perform fitting to a single-dish observation, where it is necessary also to fit a baseline? And can the model return useful kinetic information when the galaxy is at the limit of resolution? In Sect. \ref{ss_tests_ebhis}, data from the EBHIS survey \citep{winkel_2010, kerp_2011} are used to test this.
\section{A model of the \ion{H}{i} spectral line} \label{s_theory}
\subsection{Introduction} \label{ss_theory_intro}
The aim of any model of a physical process is to approximate that process with a small, hence manageable, number of adjustable parameters. A good model will be a good match to the data; will have a small number of parameters; and it is also useful if the model is not just empirical but tied in some way to the underlying physics of the object. Since these desirables can be in opposition, a model is usually then also a compromise.
The shape of an \ion{H}{i} spectral line represents the distribution and motions of neutral hydrogen within a galaxy. On top of the bulk rotation which prevents gravitational collapse there can be innumerable variations in the pattern of local \ion{H}{i} flow from galaxy to galaxy, which will be reflected in differences between the shapes of the respective spectral lines. It might therefore seem difficult to formulate a simple model of the \ion{H}{i} spectral line. However as shown in Sect. \ref{sss_orig_results}, in practice the 6-parameter model presented here works fairly well. Local deviations in flux density between the data and the fitted model don't exceed about 10\%, and tend not to affect either the total flux or the linewidth.
In all the \ion{H}{i} surveys the authors are aware of, total flux and linewidth have been estimated from the data directly, without fitting a model to the line profile.\footnote{\citet{saintonge_2007} described a model constructed from Hermite polynomials which is used in the ALFALFA source-\emph{detection} procedure \citep{haynes_2011}, but these authors do not, so far as we are aware, make further use of the fit parameters.} So why use one? Firstly because it is more systematic: there is no need for either human intervention or ad-hoc prescriptions for the number of channels to consider. Secondly, as is shown in Sect. \ref{ss_tests_coarse}, the bias in linewidth measurement is much reduced through model fitting. Thirdly, as shown in Sect. \ref{ss_tests_ebhis}, the modelling approach very naturally accommodates a modelling of the spectral background or baseline. It's no longer necessary to decide where the line profile `ends' before fitting the baseline, since both can be considered together. Fourthly, fitting of a model arguably lends itself more easily to automated processing, which becomes an ever more pressing consideration as survey datasets grow in size.
Lastly, a parametrized model opens the door to the use of Bayesian techniques, which are becoming increasingly accepted as useful tools in astronomy. Advantages here fall under three main heads. The first of these is that a Bayesian formulation is the formally correct (and therefore optimum) procedure for estimating the parameters of interest and for incorporating prior knowledge. This method is applied in the present paper. The second head or category is the use of Bayes' theorem to assess the relative suitability of differing models. This is also used in the present paper, to determine the best order of baseline model (Sect. \ref{ss_app_B_bkg}). Note that the same formulation can be used to estimate detection probability directly, although this topic is not explored in depth in the present paper: the Bayesian approach automatically takes into account both prior knowledge of the expected range of line shapes, as well as the total bandwidth and area of the survey. Such an approach to detection is more fundamental and rigorous than relying on either the 5-sigma rule or ad hoc prescriptions for calculating signal-to-noise ratio (S/N). The third advantage of the Bayesian formalism is in the extraction of statistical results from large ensembles of low-quality data via hierarchical modelling \citep{loredo_2012u}. This technique is however not used here.
\citet{roberts_1978} discussed the \ion{H}{i} spectral line and explained why it often has a double-horned shape with a relatively steep rise and fall. Roberts explored several semi-physical models of the line profile, and in fact his model C is a special case of ours. Roberts was concerned rather to emphasize and explore the connection between galactic linewidth and luminosity now more usually associated with \citet{tully_1977}, and his profile models don't have the flexibility to fit a wide variety of \ion{H}{i} line shapes.
In the second of two papers which presented a comprehensive simulation of gas in the local universe, \citet{obreshkow_2009b} described a profile model (which they apply to both \ion{H}{i} and CO spectral lines) which uses 5 parameters. These parameters, labelled by the authors $k_1$ to $k_5$, are purely empirical in themselves, but can be derived from more fundamental properties of the line profile via a set of formulae (equations A2 to A6, appendix A of their paper). This second tier of parameters includes the flux density at profile centre, the maximum flux density, and the velocity widths at the 50\% and 20\% levels. In an earlier paper by some of the same authors \citep{obreshkow_2009a}, the second-tier parameters are derived by constructing each profile from an appropriate projection and integration of a comprehensive model of the distribution of \ion{H}{i} gas density and velocity as a function of radius from the galaxy centre. This radial model in turn depends on several third-tier fundamental parameters, such as masses and characteristic radii for disk, halo and bulge components. It might be possible to find a reasonably simple way to connect these fundamental parameters with the eventual $k$ values, but these authors did not attempt this themselves. Their intent was to generate the second-tier parameters for a large number of simulated galaxies and make these available in a database. The $k$-prescription was provided simply as a convenience for the user who wished to construct approximate profiles from these data.
One could make use of the $k$-model of \citeauthor{obreshkow_2009b} for fitting to real data, but there are a number of desirable improvements:
\begin{enumerate}
\item The $k$ parameters provide no direct physical insight.
\item Many real \ion{H}{i} spectral lines exhibit a noticeable asymmetry \citep[see e.g.][]{richter_1994}. The $k$ model does not cater for this.
\item Since the prescribed functions of the $k$ model are discontinuous at sharply-defined velocity bounds, not aligned \emph{a priori} with velocity channel boundaries, integrating the model over a finite channel width is a little fiddly.
\item There is no way to simulate any artefacts arising from the autocorrelation process which generates spectra from radio signals; nor can the $k$-model simulate applied filters, such as Hanning or Tukey filters.
\end{enumerate}
The model described in the present paper addresses all these issues.
\subsection{The model} \label{ss_theory_model}
In order to derive a model we need to arrive at a reasonable approximation to the motion of \ion{H}{i} in a generic galaxy. We will begin by breaking the motions into two categories: bulk vs. random.
\subsubsection{Bulk motions} \label{sss_model_bulk}
We approximate bulk motions firstly by assuming that all gas in a (late-type, therefore gas-rich) galaxy rotates about a common centre and in a common plane. As shown for example in \citet{deblok_2008}, this is a reasonable assumption for the majority of spiral galaxies. We don't assume that the density of gas is azimuthally symmetric: departures from same are catered for, albeit in a crude manner, via the asymmetry parameter of our model.
As is well known, for most galaxies with a total \ion{H}{i} mass larger than about $10^9$ solar masses, the curve of rotation speed as a function of radius from the centre is seen to be remarkably flat outside the core. We assume here that the rotation curve is always exactly flat outside a certain radius. Within the core itself, the velocity usually rises steadily from (nominally) zero at the galactic centre, then turns over smoothly when it reaches the `flat' value of velocity. In the present model we approximate this rise by a straight line, such as would be observed for example in a rotating solid body; we also assume that the density of gas in the core is uniform. This simplified rotation curve is similar in shape to that labelled `C' in Fig. 3 of \citet{roberts_1978}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_u.eps}
\caption{A schematic showing the model distribution of \ion{H}{i} in phase space for an edge-on galaxy. The disk and the ring represent respectively the inner and outer components of the model. The height of each body represents the density of \ion{H}{i} as a function of velocity. $v_\mathrm{LOS}$ is the velocity in the line of sight and $v_\mathrm{normal}$ is the velocity normal to that, but in the plane of the galaxy. The figures in red represent the \ion{H}{i} densities due to the two components, as projected onto the line of sight. Since the densities are displayed as functions of velocity, these red projections in fact give directly the spectral line shapes of the two components. Notes: (i) The ring formally speaking ought to be infinitely thin, but some visible width has been given to it for the sake of easier interpretation. (ii) No attempt has been made to make the height scales of the projection graphs consistent with those of the density figures.
}
\label{fig_U}
\end{figure}
Understanding of the way a profile model represents the bulk gas motions is assisted by mapping the gas motions and densities in phase space - that is, presenting the gas density not as a function of spatial location but of velocity components. The line profile may then be obtained by projecting the phase space gas distribution onto a line directed towards the observer. This is diagrammed in Fig. \ref{fig_U}. In phase space, the outer mass of gas with a constant rotation speed appears as a ring of infinitesimal thickness, whereas the `solid-rotating' inner part appears as a uniform disk.
The ring gives in projection the characteristic double-horned profile so often observed in \ion{H}{i} spectral lines:
\begin{equation} \label{equ_model_outer}
s_\mathrm{outer}(v) = \frac{2 S}{\pi \Delta v} \ \rho^{-1} \! \left( \frac{v - v_\mathrm{ctr}}{\Delta v/ 2} \right)
\end{equation}
where
{\setlength\arraycolsep{2pt}
\begin{eqnarray*}
\rho^2(u) & = & 1 - u^2 \textrm{ for } |u|<1,\nonumber \\
& = & 0 \textrm{ else.}
\end{eqnarray*}
}
Here $S$ is the total flux from the \ion{H}{i} in this section, $\Delta v$ is the range between maximum and minimum gas velocities in the line of sight, and $v_\mathrm{ctr}$ is the mean line-of-sight velocity. The disk in projection yields a half ellipse:
\begin{displaymath}
s_\mathrm{inner}(v) = \frac{4 S}{\pi \Delta v} \, \rho \left( \frac{v - v_\mathrm{ctr}}{\Delta v / 2} \right).
\end{displaymath}
These are the two fundamental components of our model. Asymmetry in the line profile is accommodated by multiplying both components by
\begin{displaymath}
1 + 2 \alpha (v - v_\mathrm{ctr})/\Delta v
\end{displaymath}
To describe the model so far we require 5 parameters: the line centre $v_\mathrm{ctr}$; the so-called intrinsic line width $\Delta v$; the total flux $S$; the fraction $f$ of the gas which is found in the `solid-rotating' part which is associated with the inner part of the galaxy; and the asymmetry parameter $\alpha$. The model so far is thus represented by
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{equ_model_a}
s_\mathrm{intrinsic}(v) & = & \frac{2 S}{\pi \Delta v} \, \left[1 + \frac{2 \alpha (v - v_\mathrm{ctr})}{\Delta v} \right] \times\nonumber\\
& \times & \left[(1-f) \rho^{-1} \! \left( \frac{v - v_\mathrm{ctr}}{\Delta v /2} \right) + 2 f \rho \left( \frac{v - v_\mathrm{ctr}}{\Delta v / 2} \right) \right].
\end{eqnarray}
}
\subsubsection{Random motions} \label{sss_model_random}
Random motions include both thermal motions of the atoms and turbulent motions. The latter may occur on length scales much larger than the atomic, but provided they are much smaller than the scale of bulk, i.e. galaxy-scale motions, we can lump them together with thermal motions. In the present study we assume that the distribution of random velocities is the same throughout any galaxy. Maps of the local linewidth in \citet{walter_2008} indicate that this is a reasonable assumption for the THINGS galaxies. In this approximation, random motions may be treated mathematically as a 3-dimensional convolution of the bulk-motion line profile. Further, if we approximate the distribution of random velocities by a 3D Gaussian, then the 1D projection of this in any direction is also a Gaussian:
\begin{equation} \label{equ_dispersion}
g(v) = \frac{1}{\Delta v_\mathrm{rand} \sqrt{2 \pi}} \exp \left( \frac{-v^2}{2 [\Delta v_\mathrm{rand}]^2} \right).
\end{equation}
The characteristic width $\Delta v_\mathrm{rand}$ is the sixth and final model parameter.
It should be emphasized that representation of random gas motions by a single Gaussian is only an approximation. In reality there may be several co-located components exhibiting different degrees of dispersion \citep[see for example][]{braun_1997}.
The full equation for the profile model is
\begin{equation} \label{equ_model_full}
s(v) = s_\mathrm{intrinsic}(v) \star g(v)
\end{equation}
where $\star$ indicates convolution. The full list of 6 model parameters is $v_\mathrm{ctr}$, $\Delta v$, $S$, $\Delta v_\mathrm{rand}$, $f$ and $\alpha$. These have natural ranges, that is, ranges imposed by physical reasonableness, as follows:
\begin{itemize}
\item $v_\mathrm{ctr}$: constrained in practice by the ends of the spectrum.
\item $\Delta v \geq 0$
\item $S>0$
\item $\Delta v_\mathrm{rand}>0$
\item $0 \leq f \leq 1$
\item $-1 \leq \alpha \leq 1$
\end{itemize}
Note however that, for some of the fits reported in the present paper, up to 6 additional parameters were used for fitting the baseline.
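For concreteness, the following is a minimal \texttt{numpy} sketch of the model of equation \ref{equ_model_full}, evaluated directly in velocity space rather than via the Fourier-space route of Sect. \ref{ss_theory_method} below (the small cutoff regularizing the integrable singularities of the double-horn component is an implementation choice, not part of the model):
\begin{verbatim}
import numpy as np

def hi_profile(v, v_ctr, dv, S, dv_rand, f, alpha, eps=1e-3):
    # six-parameter model on a uniform velocity grid v
    u = 2.0 * (v - v_ctr) / dv
    rho = np.sqrt(np.clip(1.0 - u**2, 0.0, None))
    horns = np.where(np.abs(u) < 1.0,
                     1.0 / np.maximum(rho, eps), 0.0)  # ring
    disk = 2.0 * rho                                   # solid body
    s_int = (2.0 * S / (np.pi * dv)) * (1.0 + alpha * u) \
            * ((1.0 - f) * horns + f * disk)
    dv_chan = v[1] - v[0]
    vg = np.arange(-5.0 * dv_rand, 5.0 * dv_rand + dv_chan, dv_chan)
    g = np.exp(-vg**2 / (2.0 * dv_rand**2))
    g /= g.sum()                 # unit-area Gaussian of random motions
    return np.convolve(s_int, g, mode="same")
\end{verbatim}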
The asymmetry and fraction-solid parameters can be expected to be correlated respectively with the third and fourth moments of the spectral line, as described by \citet{andersen_2009}. These authors make a comprehensive study of the statistics of \ion{H}{i} and \ion{H}{ii} spectral line shapes and their relation to galaxy morphology. We don't here retrace that analysis, but only observe that calculation of the appropriate moments of our spectral line model is easy and avoids the necessity, mentioned by \citeauthor{andersen_2009}, of \emph{a priori} human selection of a velocity range, when extracting moments from data. Other measures of asymmetry, such as that of \citet{tifft_1988}, are equally easy to calculate from our model.
\subsection{Fourier-space formulation} \label{ss_theory_method}
Although the model can be calculated directly using equation \ref{equ_model_full} and the preceding formalism, there are several advantages to calculating the profile first in Fourier space, then transforming to velocity space. Firstly, the convolution in equation \ref{equ_model_full} turns into a product; secondly, the singularities in equation \ref{equ_model_outer} are avoided; and thirdly, the process mimics the processing of real signals in an XF-type correlator, and thus allows inclusion of some of the artefacts which result from same.
This Fourier-space model construction was followed in all the calculations described in the present paper. To make it easy for readers to do this themselves, the Fourier transform of equation \ref{equ_model_full} is given in appendix \ref{s_app_A}.
\subsection{Fitting considerations} \label{ss_theory_fitting}
A frequent use for such a model is to fit it to data. For this purpose one needs an objective function describing the goodness of the fit, and one must choose an algorithm for minimizing this function. These considerations are discussed in Sects. \ref{sss_fitting_bayes} through \ref{sss_fitting_goodness}.
\subsubsection{Bayesian formulation} \label{sss_fitting_bayes}
According to Bayes' theorem, given a set of measurements of flux density $\mathbf{y}$, the posterior probability density function of the model parameters $p(\mathbf{q}|\mathbf{y})$, where we use $\mathbf{q}$ as shorthand for the six model parameters, is given by \citep[see e.g.][]{dagostini_2003}
\begin{equation} \label{equ_bayes}
p(\mathbf{q}|\mathbf{y}) = \frac{1}{E} \, p(\mathbf{q}) \, p(\mathbf{y}|\mathbf{q}),
\end{equation}
where $E$, known as the evidence, is just a normalizing constant:
\begin{equation} \label{equ_evidence}
E = \int d\mathbf{q} \, p(\mathbf{q}) \, p(\mathbf{y}|\mathbf{q}).
\end{equation}
The first function in the integrand is the prior probability distribution which represents our prior knowledge of the parameter values; the second is the likelihood. For $N$ data values $y_j$ which include Gaussian-distributed noise, the likelihood is given by
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{equ_like}
p(\mathbf{y}|\mathbf{q}) & = & \prod_{j=1}^N \frac{1}{\sigma_j \sqrt{2\pi}} \exp \left\{ \frac{-[y_j - s(v_j,\mathbf{q})]^2}{2\sigma_j^2} \right\}\nonumber \\
& = & (2\pi)^{-N/2} \exp \left( \frac{-\chi^2}{2} \right) \prod_{j=1}^N \frac{1}{\sigma_j}
\end{eqnarray}
}
where $s(v_j,\mathbf{q})$ is the profile model evaluated for velocity channel $j$, $\sigma_j$ is the standard deviation of the noise in channel $j$, and $\chi^2$ has its usual formulation.
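As a sketch, the corresponding log-posterior up to an additive constant (flat priors within the natural parameter ranges are assumed here for brevity; \texttt{hi\_profile} refers to the profile-model sketch of Sect. \ref{ss_theory_model}):
\begin{verbatim}
import numpy as np

def log_posterior(params, v, y, sigma, log_prior):
    # log-prior plus Gaussian log-likelihood, constants dropped
    model = hi_profile(v, *params)
    chi2 = np.sum(((y - model) / sigma)**2)
    return log_prior(params) - 0.5 * chi2
\end{verbatim}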
A Bayesian fit (that is, a fit procedure which optimizes the Bayesian posterior probability) may tend towards being data-dominated, or it may be prior-dominated. The first case occurs if the data has high signal-to-noise (S/N) and if there has not been much earlier fitting experience. The original THINGS dataset observed with the VLA matches this criterion for us. The EBHIS observations of the THINGS galaxies however generally have much lower S/N values, as do the semi-simulated profiles described in Sect. \ref{ss_tests_coarse}. For the latter two cases we therefore thought it appropriate to set some cautious priors, derived from the fits to the VLA THINGS profiles. Some of the model parameters are poorly constrained by the data in these lower-S/N fits and these are thus prior-dominated.
The low-S/N priors are described in detail in appendix \ref{ss_app_B_justification}.
\subsubsection{Fitting algorithms} \label{sss_fitting_algorithms}
Three fitting techniques have been used in the present study: the Levenberg-Marquardt (LM) method \citep[chapter 15.5]{press_1992}, simplex optimization (\citealt{nelder_1965}, see also \citealt{press_1992} chapter 10.4) and Markov-chain Monte Carlo (MCMC), specifically using the Metropolis algorithm (\citealt{metropolis_1953}, see also \citealt{bhanot_1988} for other useful references). For fitting to the THINGS profiles, both from the original VLA observations (Sect. \ref{ss_tests_orig}) and as observed as part of EBHIS (Sect. \ref{ss_tests_ebhis}), the LM procedure was used to obtain initial values of the approximate centre and width of the posterior, then the exact form of the posterior was explored via MCMC. For the semi-artificial profiles described in Sect. \ref{ss_tests_coarse}, the simplex procedure alone was used.
The simplex algorithm is robust but slow. In practice it was `fast enough', requiring only about 10 to 20 seconds on a standard laptop to fit a 6-parameter model to 450 data points.
Technical details of the MCMC are given in appendix \ref{ss_app_B_mcmc}.
\subsubsection{Uncertainties} \label{sss_fitting_uncerts}
The parameter uncertainties quoted in Table \ref{tab_O} are the square roots of the diagonal elements of a covariance matrix estimated from the ensemble of MCMC points. For the LM and simplex fits, the covariance matrix was obtained where necessary by inverting the matrix of second derivatives (known as the Hessian) of the posterior with respect to the parameters. As can be seen in the figures in appendix \ref{s_app_C}, the MCMC values are often about 50\% larger than the LM ones. The reason for this is not known.
All flavours of uncertainty scale with the uncertainties in the original values of flux density. The calculation of these is described in appendix \ref{ss_app_B_uncerts}.
\subsubsection{Goodness of fit} \label{sss_fitting_goodness}
The value of $\chi^2$ is commonly used to assess how well a model fits the data. In effect, reduced $\chi^2$ is a measure of the ratio between the data-minus-model residuals and the amplitude of the measurement noise. Noise plays the vital role in this because it affects the probability that the observed residuals would occur by chance with a perfect fit. In the present case however this is not quite what we want. We expect from the start that the model will not be a perfect fit to the data, so in a sense this question is already decided in the negative: what we want is rather to measure the size of the imperfections. Noise plays no role in this, provided only that it is not so large as to swamp the deviations; hence the bare value of $\chi^2$ will not return the information we want.
The line shape of NGC 925 in Fig. \ref{fig_H} provides a good example of the issue. Clearly seen are many local `bumps' or `wiggles' in the \ion{H}{i} distribution which the model is not able to track. It is the relative size of these bumps and dips compared to the average height of the profile which it would be most useful to know. For example, from Fig. \ref{fig_H} one can clearly see that the model is a cleaner fit to NGC 3184 than to NGC 925: what we want is some formula to quantify this difference.
The formula we devised to estimate the wiggle fraction $J$ is
\begin{equation} \label{equ_bump}
J^2 = \frac{1}{S^3} \left( \frac{\Delta v}{\Delta v_\mathrm{chan}} \right)^2 \sum_j y_j \left[ \left( y_j - s_j \right)^2 - \sigma_j^2 \right].
\end{equation}
where $S$ and $\Delta v$ are respectively the total flux and line width model parameters as described in Sect. \ref{ss_theory_model}, and $\Delta v_\mathrm{chan}$ is the width of the spectral channels. The reasoning behind this formula is as follows. Firstly, we want the square of the residual in each channel. We then want to subtract the noise from this in quadrature. The resulting term should be weighted by the flux density $y_j$ of the line profile. This is normalized by dividing by the sum $S$ of the model profile values. The result so far is the square of the average residual, within the line profile, due solely to model/data mismatch. Finally the square root is taken and that result divided by the average flux density of the model, which is
\begin{displaymath}
\langle s \rangle = \frac{\Delta v_\mathrm{chan}}{\Delta v} S.
\end{displaymath}
The result is the average fractional residual due to `wiggles'. This value is given in Col. 8 of Table \ref{tab_O}.
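For concreteness, a direct transcription of equation \ref{equ_bump} into Python might read as follows; the array names are placeholders, and the handling of negative $J^2$ follows the convention used in Table \ref{tab_O}.
\begin{verbatim}
import numpy as np

def wiggle_fraction(y, s, sigma, S, dv, dv_chan):
    # y: observed flux densities; s: model profile values;
    # sigma: per-channel noise; S, dv: fitted total-flux and
    # width parameters; dv_chan: channel width.
    j2 = (dv / dv_chan)**2 / S**3 * np.sum(y * ((y - s)**2 - sigma**2))
    # J^2 may be negative when noise dominates the residuals;
    # the root is then imaginary (flagged with 'i' in the table).
    return np.sqrt(complex(j2))
\end{verbatim}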
Local fluctuations in \ion{H}{i} can be visually deceptive. A good example is the global spectrum of NGC 4214, which is shown together with a model fit in Fig. \ref{fig_H}. At first inspection it is not clear why the model has not better fitted the seemingly regular double-horned profile. However, the linewidth of this galaxy is, at 56 km s$^{-1}$, relatively small: less than 5 times the `sigma' width of the turbulent broadening, which here is 12 km s$^{-1}$. Further consultation of Table \ref{tab_O} shows that the fraction of solid rotation fitted (which tends to reduce the depth of the valley between the horns) is small, consistent with zero. In fact the fitting procedure has chosen the sharpest and deepest possible double-horned profile which, after smoothing by the turbulent-broadening Gaussian, is consistent with the line slopes. With this amount of smoothing it is not possible for the model to follow deviations over velocity scales less than 10 km s$^{-1}$, as occur in NGC 4214. These fluctuations fool the eye into assuming a regularity which is not in fact there: NGC 4214 is simply too narrow for the random nature of its wiggles to be obvious, in contrast to the spectrum of NGC 925, for example (shown in the same figure).
It is worth observing too that, although the wiggles in NGC 4214 do rather offend the eye, the $J$ factor for this galaxy is only 2.3\%, well below the average seen in Fig. \ref{fig_T}.
\section{Tests of the model} \label{s_tests}
\subsection{Spectra from the original THINGS observations} \label{ss_tests_orig}
\subsubsection{Introduction} \label{sss_orig_desc}
\citet{walter_2008} used the VLA to observe the \ion{H}{i} distribution in 34 nearby late-type galaxies. Spatial resolution, frequency resolution and S/N were all relatively high, certainly when compared to the most common objects detected in blind \ion{H}{i} surveys. The set of observations is known as THINGS (The \ion{H}{i} Nearby Galaxy Survey). We fitted our profile model to a spectrum extracted from THINGS observations of each of the 34 targets.
\citet{walter_2008} provide these spectra in the online version of their paper. According to their description, the global spectral profile for each galaxy is obtained by adding together a subset of pixels for each channel of the data cube. Pixels where there was no measurable emission from \ion{H}{i} were not included in the sum. This has the effect of making the noise in a channel proportional to the square root of the number of unmasked pixels. The mask cubes are not available on the THINGS web site but were kindly provided on request by F. Walter. This allowed us to estimate the noise per channel for each global profile. The exact procedure for doing so is described in appendix \ref{ss_app_B_uncerts}.
Some of the THINGS galaxies have a velocity range which straddles zero. For some of these, contamination from Milky-Way \ion{H}{i} is evident. A few channels where this effect was obvious have been omitted when fitting to the spectra for the galaxies DDO 53, NGC 2976, NGC 3077 and NGC 6946. Note that the total flux values in Col. 4 of Table \ref{tab_O} are the values of the respective parameter of the fitted model: thus these represent an interpolation over missing channels for the four galaxies mentioned. One can however also obtain the total flux under the fitted profile by adding together the samples of the profile model obtained at each channel. For Fig. \ref{fig_A}, in which the flux under the data spectrum is compared to the model, the relevant channels have been omitted from these sums, for both the data and the model, for the four galaxies mentioned. The percentage differences in model flux from the values in the Table are respectively 7, 13, 4 and 3.
The question discussed in the remainder of Sect. \ref{ss_tests_orig} is: how good a fit is the model in the case of optimum frequency resolution and S/N? Does it return good values for the bulk properties of the galaxies?
\subsubsection{Graphical and tabular display of fit results} \label{sss_orig_results}
Five examples of profiles fitted to THINGS spectra are shown in Fig. \ref{fig_H}. These are the five galaxies chosen as `simulation inputs' in Sect. \ref{ss_tests_coarse}. Figures showing all 34 individual fit results are given in appendix \ref{s_app_C}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_h.eps}
\caption{Fitted profiles (red curves) compared to raw data (black error bars) for 5 of the THINGS galaxies. Widths and heights of the profiles have not been altered but arbitrary offsets to both flux density and velocity have been added for clarity of plotting.
}
\label{fig_H}
\end{figure}
The results of the fits are given in Table \ref{tab_O}. Shown in Cols. 2 to 7 are the mean values with uncertainties for each of the six model parameters. Also given in Col. 8 is the `wiggle fraction' $J$ calculated from equation \ref{equ_bump}. A low value indicates a good match between model and data. Note that it is possible for $J^2$ to be negative; in this case, formally speaking, the root is imaginary, as listed in the table. This has no physical meaning; it simply indicates that any local deviations in \ion{H}{i} from the model are insignificant compared to the measurement noise.
A histogram showing the distribution of wiggle fraction is given in Fig. \ref{fig_T}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_t.eps}
\caption{A histogram showing the distribution of `wiggle fraction' $J$ among the THINGS galaxies, as specified by equation \ref{equ_bump}. This quantity indicates the approximate extent of local fluctuations of \ion{H}{i} density away from the fitted model. Values shown here as less than zero are technically imaginary (see the respective values in Col. 8 of Table \ref{tab_O}), since they are square roots of differences in quadrature which turned out to be negative. Such values simply indicate that fluctuations away from the model fit are, for that galaxy, dominated by measurement noise.
}
\label{fig_T}
\end{figure}
Column 10 of the table gives the velocity separation between points on the fitted profile where the flux density decreases to 20\% of its maximum value. The remaining columns, 9 and 11, are described in the following section.
\subsubsection{Numerical comparisons between model and data} \label{sss_orig_comparison}
The reason for doing these fits is to see how well the model matches a variety of real spectra. One can gain a qualitative impression from looking at the profiles but it is more informative to perform some numerical comparisons between the fitted model parameters and equivalent values from other sources. Some examples are shown in Figs. \ref{fig_A} to \ref{fig_D}, which are described individually below.
Figure \ref{fig_A} compares the total flux $S_\mathrm{fit}$ from summing valid values of the fitted flux density to a similar sum $S_\mathrm{data}$ over the raw data values. What is displayed in the figure is the fractional difference between these `fit' and `data' flux values for each galaxy.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_a.eps}
\caption{Fractional difference between the flux $S_\mathrm{data}$, derived from summing the flux densities in each data channel, and the value $S_\mathrm{fit}$, the equivalent sum of flux densities from the fitted line-profile model. This fraction was calculated from the expression $2(S_\mathrm{data}-S_\mathrm{fit})/(S_\mathrm{data}+S_\mathrm{fit})$. The $x$ coordinate is the fitted flux $S$ given in Col. 4 of Table \ref{tab_O}, whereas the fraction itself is given in Col. 9. $S_\mathrm{fit}$ is identical to $S$ except in the cases of DDO 53, NGC 2976, NGC 3077 and NGC 6946. For these galaxies, some channels around velocity zero were omitted from the calculation of both $S_\mathrm{data}$ and $S_\mathrm{fit}$. Uncertainties were calculated from the two independent contributions using the standard propagation formula.
}
\label{fig_A}
\end{figure}
\begin{sidewaystable*}
\caption{Parameter values for profile models fitted to the original THINGS global spectra.}
\label{tab_O}
\centering
\begin{tabular}{l r @{$\pm$} l r @{$\pm$} l r @{$\pm$} l r @{$\pm$} l r @{$\pm$} l r @{$\pm$} l c r @{$\pm$} l c c}
\hline\hline
$\ \ \ 1$ & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{3} & \multicolumn{2}{c}{4} & \multicolumn{2}{c}{5} & \multicolumn{2}{c}{6} & \multicolumn{2}{c}{7} & 8 & \multicolumn{2}{c}{9} & 10 & 11\\
Name & \multicolumn{2}{c}{$v_\mathrm{ctr}$} & \multicolumn{2}{c}{$\Delta v$} & \multicolumn{2}{c}{$S$} & \multicolumn{2}{c}{$\Delta v_\mathrm{rand}$} & \multicolumn{2}{c}{$f$} & \multicolumn{2}{c}{$\alpha$} & $J$ & \multicolumn{2}{c}{$\Delta S/S$} & $W_{20}$ & $\Delta W_{20}/W_{20}$\\
& \multicolumn{2}{c}{km s$^{-1}$} & \multicolumn{2}{c}{km s$^{-1}$} & \multicolumn{2}{c}{Jy km s$^{-1}$} & \multicolumn{2}{c}{km s$^{-1}$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \% & \multicolumn{2}{c}{\%} & km s$^{-1}$ & \%\\
\hline
DDO 154 & 375.69 & 0.12 & 81.24 & 0.46 & 82.10 & 0.33 & 9.04 & 0.15 & 0.2600 & 0.0164 & -0.0901 & 0.0081 & $\phantom{0}1.0\mathrm{i}$ & 0.0 & 0.6 & $103.5$ & $-0.4$\\
DDO 53 & 18.83 & 0.20 & 0.08 & 0.18 & 19.88 & 0.20 & 12.37 & 0.14 & 0.5029 & 0.2654 & -0.0005 & 0.5241 & $\phantom{0}0.0\mathrm{i}$ & 0.4 & 1.4 & $\phantom{0}44.4$ & $-3.5$\\
Ho II & 157.27 & 0.10 & 48.97 & 0.97 & 218.86 & 0.73 & 9.19 & 0.22 & 0.2449 & 0.0486 & -0.0663 & 0.0090 & $\phantom{0}0.5\phantom{\mathrm{i}}$ & 0.3 & 0.5 & $\phantom{0}71.0$ & $\phantom{-}0.3$\\
Ho I & 143.11 & 0.86 & 25.70 & 3.68 & 40.00 & 0.33 & 8.83 & 0.42 & 0.5369 & 0.2501 & -0.6686 & 0.1627 & $\phantom{0}1.6\mathrm{i}$ & 0.2 & 1.1 & $\phantom{0}41.3$ & $-0.3$\\
IC 2574 & 50.06 & 0.07 & 97.84 & 0.38 & 385.97 & 0.64 & 13.08 & 0.10 & 0.3410 & 0.0099 & -0.0788 & 0.0038 & $\phantom{0}3.4\phantom{\mathrm{i}}$ & 0.2 & 0.2 & $128.6$ & $-0.9$\\
M81 dwA & 112.84 & 0.22 & 0.08 & 0.18 & 4.21 & 0.08 & 8.83 & 0.19 & 0.5001 & 0.2662 & 0.0124 & 0.5316 & $\phantom{0}0.0\phantom{\mathrm{i}}$ & 0.6 & 2.4 & $\phantom{0}31.7$ & $\phantom{-}0.5$\\
M81 dwB & 345.86 & 0.96 & 42.27 & 4.75 & 3.74 & 0.10 & 8.33 & 1.08 & 0.3803 & 0.2460 & 0.1226 & 0.1059 & $\phantom{0}6.9\mathrm{i}$ & 1.0 & 3.7 & $\phantom{0}60.4$ & $\phantom{-}3.1$\\
NGC 628 & 659.36 & 0.12 & 45.47 & 0.20 & 297.88 & 0.89 & 11.88 & 0.10 & 0.0080 & 0.0068 & -0.1235 & 0.0091 & $\phantom{0}2.6\phantom{\mathrm{i}}$ & 1.5 & 0.4 & $\phantom{0}75.6$ & $\phantom{-}1.6$\\
NGC 925 & 552.93 & 0.17 & 191.24 & 0.56 & 229.62 & 1.05 & 12.06 & 0.24 & 0.2731 & 0.0114 & 0.0603 & 0.0074 & $\phantom{0}6.9\phantom{\mathrm{i}}$ & 1.0 & 0.6 & $222.3$ & $\phantom{-}0.5$\\
NGC 1569 & -71.38 & 3.77 & 73.00 & 9.28 & 86.04 & 1.21 & 27.65 & 1.25 & 0.1929 & 0.1735 & -0.8676 & 0.0963 & $\phantom{0}2.2\phantom{\mathrm{i}}$ & 2.3 & 1.9 & $123.2$ & $\phantom{-}0.2$\\
NGC 2366 & 100.08 & 0.07 & 94.84 & 0.42 & 232.91 & 0.46 & 9.43 & 0.12 & 0.5250 & 0.0107 & 0.0767 & 0.0047 & $\phantom{0}2.7\phantom{\mathrm{i}}$ & 0.1 & 0.3 & $115.1$ & $\phantom{-}0.2$\\
NGC 2403 & 133.44 & 0.05 & 228.75 & 0.12 & 1047.36 & 1.23 & 8.91 & 0.05 & 0.3580 & 0.0022 & 0.1191 & 0.0021 & $\phantom{0}4.4\phantom{\mathrm{i}}$ & 0.7 & 0.2 & $251.1$ & $-0.9$\\
NGC 2841 & 632.49 & 0.19 & 566.35 & 0.44 & 181.87 & 0.74 & 14.62 & 0.17 & 0.1228 & 0.0060 & 0.0065 & 0.0049 & $12.9\phantom{\mathrm{i}}$ & 0.7 & 0.6 & $606.4$ & $-0.2$\\
NGC 2903 & 555.39 & 0.12 & 353.80 & 0.28 & 229.88 & 0.63 & 12.71 & 0.14 & 0.1224 & 0.0043 & -0.1072 & 0.0044 & $\phantom{0}5.6\phantom{\mathrm{i}}$ & 1.1 & 0.4 & $386.9$ & $\phantom{-}0.1$\\
NGC 2976 & 4.96 & 0.47 & 133.88 & 2.33 & 44.99 & 0.54 & 10.80 & 0.75 & 0.5288 & 0.0486 & -0.2219 & 0.0232 & $\phantom{0}3.6\phantom{\mathrm{i}}$ & 1.0 & 1.6 & $156.2$ & $\phantom{-}0.4$\\
NGC 3031 & -38.67 & 0.09 & 366.28 & 0.23 & 1155.72 & 1.45 & 23.62 & 0.10 & 0.0702 & 0.0024 & 0.2966 & 0.0020 & $11.2\phantom{\mathrm{i}}$ & 1.6 & 0.2 & $423.8$ & $\phantom{-}0.8$\\
NGC 3077 & -23.58 & 0.36 & 117.12 & 0.78 & 257.06 & 0.79 & 17.13 & 0.10 & 0.0009 & 0.0008 & 0.9943 & 0.0039 & $\phantom{0}7.3\phantom{\mathrm{i}}$ & 0.2 & 0.4 & $117.3$ & $\phantom{-}0.0$\\
NGC 3184 & 593.79 & 0.14 & 117.17 & 0.37 & 105.37 & 0.51 & 9.60 & 0.17 & 0.1042 & 0.0119 & -0.0798 & 0.0074 & $\phantom{0}0.8\phantom{\mathrm{i}}$ & 0.1 & 0.7 & $142.2$ & $-0.6$\\
NGC 3198 & 661.11 & 0.07 & 283.65 & 0.16 & 224.91 & 0.44 & 10.58 & 0.08 & 0.1201 & 0.0031 & 0.0353 & 0.0030 & $\phantom{0}4.1\phantom{\mathrm{i}}$ & 0.8 & 0.3 & $312.2$ & $-0.4$\\
NGC 3351 & 778.93 & 0.24 & 255.37 & 0.50 & 49.54 & 0.48 & 8.05 & 0.27 & 0.0051 & 0.0040 & -0.0001 & 0.0141 & $\phantom{0}3.8\mathrm{i}$ & 1.2 & 1.3 & $277.6$ & $-0.3$\\
NGC 3521 & 798.22 & 0.10 & 415.67 & 0.29 & 297.23 & 0.66 & 19.17 & 0.11 & 0.0975 & 0.0039 & -0.0461 & 0.0034 & $\phantom{0}7.9\phantom{\mathrm{i}}$ & 0.1 & 0.3 & $467.1$ & $-0.3$\\
NGC 3621 & 729.07 & 0.08 & 257.31 & 0.20 & 679.58 & 1.26 & 11.34 & 0.09 & 0.2857 & 0.0034 & -0.0272 & 0.0031 & $\phantom{0}3.1\phantom{\mathrm{i}}$ & 0.0 & 0.3 & $287.3$ & $-0.4$\\
NGC 3627 & 720.58 & 0.73 & 327.58 & 2.15 & 40.07 & 0.47 & 22.07 & 0.93 & 0.1764 & 0.0249 & 0.1190 & 0.0189 & $\phantom{0}5.3\mathrm{i}$ & 1.4 & 1.6 & $383.8$ & $\phantom{-}1.0$\\
NGC 4214 & 292.74 & 0.07 & 56.34 & 0.12 & 200.23 & 0.39 & 12.40 & 0.05 & 0.0043 & 0.0035 & -0.0458 & 0.0046 & $\phantom{0}2.3\phantom{\mathrm{i}}$ & 0.0 & 0.3 & $\phantom{0}88.9$ & $-0.3$\\
NGC 4449 & 200.43 & 0.56 & 133.64 & 3.68 & 263.17 & 1.15 & 20.29 & 0.53 & 0.9261 & 0.0487 & -0.0222 & 0.0271 & $\phantom{0}8.5\phantom{\mathrm{i}}$ & 0.3 & 0.6 & $153.5$ & $-3.8$\\
NGC 4736 & 309.36 & 0.27 & 195.56 & 0.50 & 76.78 & 0.49 & 15.47 & 0.25 & 0.0016 & 0.0013 & -0.0500 & 0.0104 & $10.7\phantom{\mathrm{i}}$ & 1.8 & 0.9 & $237.2$ & $\phantom{-}0.5$\\
NGC 4826 & 408.87 & 0.54 & 288.68 & 1.22 & 40.44 & 0.59 & 11.79 & 0.56 & 0.0110 & 0.0087 & 0.3199 & 0.0223 & $\phantom{0}5.5\phantom{\mathrm{i}}$ & 2.8 & 2.0 & $316.1$ & $\phantom{-}0.3$\\
NGC 5055 & 497.74 & 0.16 & 357.96 & 0.58 & 376.67 & 1.09 & 17.09 & 0.22 & 0.4021 & 0.0065 & 0.1099 & 0.0050 & $\phantom{0}7.0\phantom{\mathrm{i}}$ & 0.6 & 0.4 & $400.2$ & $\phantom{-}0.4$\\
NGC 5194 & 455.88 & 0.28 & 148.64 & 0.81 & 165.24 & 0.67 & 17.36 & 0.31 & 0.1259 & 0.0145 & 0.3317 & 0.0081 & $\phantom{0}5.3\phantom{\mathrm{i}}$ & 2.1 & 0.6 & $187.8$ & $\phantom{-}1.1$\\
NGC 5236 & 506.81 & 0.17 & 172.90 & 0.33 & 360.42 & 0.79 & 31.73 & 0.14 & 0.0030 & 0.0024 & 0.1650 & 0.0045 & $\phantom{0}4.4\phantom{\mathrm{i}}$ & 1.0 & 0.3 & $253.6$ & $-5.4$\\
NGC 5457 & 228.74 & 0.09 & 138.68 & 0.41 & 1093.04 & 1.39 & 23.15 & 0.11 & 0.0134 & 0.0083 & 0.2260 & 0.0026 & $\phantom{0}5.0\phantom{\mathrm{i}}$ & 0.9 & 0.2 & $195.8$ & $-0.3$\\
NGC 6946 & 45.03 & 0.13 & 210.84 & 0.36 & 508.91 & 1.71 & 11.09 & 0.14 & 0.3078 & 0.0068 & 0.1004 & 0.0057 & $\phantom{0}9.3\phantom{\mathrm{i}}$ & 1.2 & 0.5 & $238.9$ & $\phantom{-}0.1$\\
NGC 7331 & 816.52 & 0.26 & 479.34 & 0.57 & 178.52 & 0.98 & 15.07 & 0.27 & 0.0414 & 0.0087 & 0.1150 & 0.0080 & $\phantom{0}2.1\mathrm{i}$ & 0.2 & 0.8 & $518.8$ & $\phantom{-}0.2$\\
NGC 7793 & 226.75 & 0.11 & 157.62 & 0.40 & 245.89 & 0.80 & 12.43 & 0.15 & 0.1312 & 0.0092 & -0.0601 & 0.0054 & $\phantom{0}5.7\phantom{\mathrm{i}}$ & 0.4 & 0.5 & $190.3$ & $\phantom{-}0.1$\\
\hline
\end{tabular}
\tablefoot{
Columns are as follows. 1: source name. 2-7: the 6 model parameters, being mean values of the MCMC distribution. 8: wiggle fraction $J$ as defined in equation \ref{equ_bump}. 9: fractional difference between data and fitted values of total flux, also plotted in Fig. \ref{fig_A}. 10: width at 20\% of peak flux density of the fitted profile. 11: fractional difference between data and fit values of $W_{20}$, also plotted in Fig. \ref{fig_E}.
}
\end{sidewaystable*}
For about half of the galaxies, the difference between fluxes $S_\mathrm{data}$ and $S_\mathrm{fit}$ as displayed in Fig. \ref{fig_A} is consistent with zero. Almost all of the galaxies have flux differences less than about 1\%. Where there is a perceptible difference, the total flux values for the data ($S_\mathrm{data}$) are always larger than for the fitted profile.
No detailed explanation for this flux anomaly is known at present, but any explanation ought to start with the observation that the model is nonlinear in some of its parameters, and that it does not have support across the whole spectrum. Worth particular attention are the line wings, which in the model become closer to Gaussian in shape the further away from line centre one goes. This Gaussian component is intended to model the distribution of random gas velocities. However, it is known that the true distribution of velocities is better described by a sum of at least 2 Gaussians \citep{ianja_2012}. It is not hard to see that a Gaussian which is the best fit to the steep line edges might nevertheless miss flux present in broad, non-Gaussian wings, for example.
As a toy model with a non-linear parameter and limited support, which demonstrates how a pedestal can lead to flux underestimation in such circumstances, consider a simplified situation in which we wish to fit the two-step profile shown in Fig. \ref{fig_S} with a simple top-hat model with three free parameters: its left and right edges, and its height $s$. The profile consists of a pedestal of height $s_0$ between $v_0$ and $v_1$, and a main body of height $s_1$ between $v_1$ and $v_2$.
Obviously there is nothing to be gained by moving the right edge of the model away from $v_2$; and given any two values for the edge locations, determining the best-fit height is trivial. The only open question is where within the range $[v_0,v_1]$ should we place the left edge $v$ of the profile such that $\chi^2$ is minimized. We define a kind of non-discrete, noiseless analog of $\chi^2$ as
\begin{displaymath}
\chi^2 = (v_1 - v)(s - s_0)^2 + (v_2 - v_1)(s_1 - s)^2.
\end{displaymath}
Minimizing with respect to the model height gives the weighted mean $s = \left[ (v_1 - v) s_0 + (v_2 - v_1) s_1 \right] / (v_2 - v)$; substituting this back, the $\chi^2$ becomes
\begin{displaymath}
\chi^2 = (s_1 - s_0)^2 (v_2 - v_1) \frac{v_1 - v}{v_2 - v}.
\end{displaymath}
Clearly this has its lowest allowed value when $v = v_1$. But this means that the profile is not fitting the pedestal at all, and the data then has more flux than the model, which is consistent with the characteristic upward deviations seen in Fig. \ref{fig_A}. Pedestals are noticeable in 5 of the galaxy profiles shown in Figs. \ref{fig_gal1} to \ref{fig_gal34}, namely for Ho II, NGC 628, NGC 5194, NGC 5236 and NGC 5457. Of these, all but Ho II are among the group of 6 worst cases of flux underestimation.
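The algebra above is easily checked numerically. The following sketch (with arbitrary parameter values) evaluates the post-optimization $\chi^2$ on a grid of left-edge positions and confirms that it decreases monotonically as $v \to v_1$.
\begin{verbatim}
import numpy as np

v0, v1, v2 = 0.0, 1.0, 3.0   # edge positions of the two-step profile
s0, s1 = 0.3, 1.0            # pedestal and main-body heights

v = np.linspace(v0, v1, 101)          # trial left edge of the top hat
chi2 = (s1 - s0)**2 * (v2 - v1) * (v1 - v) / (v2 - v)
assert np.all(np.diff(chi2) < 0)      # chi^2 falls steadily as v -> v1
\end{verbatim}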
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_s.eps}
\caption{Demonstrates a mechanism by which the flux under a spectral line may be underestimated. The solid line shows a simple model of a continuous spectrum which has a pedestal extending some way to the left. The model, indicated by the dotted line, has itself no matching pedestal. $\chi^2$ is minimized if $v \to v_1$ and $s \to s_1$.
}
\label{fig_S}
\end{figure}
Figure \ref{fig_E} compares the velocity widths at the 20\% height of the fitted profiles ($W_{20}$) to the same quantity calculated from the raw data values. For both model and data, the 20\% flux density level was calculated with respect to the maximum value for that spectrum; the velocity at this level on either wing of the profile was calculated by linear interpolation between the most distal pair of velocities which straddled the 20\% level. This simple technique is accurate for the data only because of the relatively high S/N of the THINGS observations. Some discussion of difficulties which arise in linewidth estimation when the spectrum is noisy is given in Sect. \ref{sss_coarse_mc}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_e.eps}
\caption{This plot compares the linewidth $W_{20,\mathrm{data}}$ at the 20\% height estimated from the observed THINGS spectral line to $W_{20,\mathrm{fit}}$, the same value for the profile fitted in the present paper to that line. The fractional linewidth difference was calculated from the expression $2(W_{20,\mathrm{fit}}-W_{20,\mathrm{data}})/(W_{20,\mathrm{fit}}+W_{20,\mathrm{data}})$. Values for $W_{20,\mathrm{fit}}$ are given in Col. 10 of Table \ref{tab_O}, and the fractional difference itself is given in Col. 11.
}
\label{fig_E}
\end{figure}
The agreement between the fitted widths and the widths from the data is seen to be very good. Only in a few cases is the difference larger than the 1\% level. The anomalously high value ($W_{20,\mathrm{fit}} > W_{20,\mathrm{data}}$) belongs to M81 dwarf B; the three low values ($W_{20,\mathrm{data}} > W_{20,\mathrm{fit}}$) to DDO 53, NGC 4449 and NGC 5236. For M81 dwB and DDO 53, the width of the spectral channels is several percent of the $W_{20}$ width, and therefore entirely accounts for the error. In NGC 4449 and NGC 5236 the profile model is clearly seen not to be a good fit. However, we note that NGC 4449 is an interacting galaxy with a significant amount of \ion{H}{i} outside the disk; and for NGC 5236 the THINGS observations have many missing spacings, which may explain the curious steps in its line wings that are probably the cause of the inflated value of $W_{20,\mathrm{data}}$.
\citet{deblok_2008} fitted a tilted-ring model to a 19-member subset of the THINGS galaxies and obtained high-precision \ion{H}{i} rotation curves for these. In Fig. \ref{fig_D} we compare their systemic velocities with two measures of line centre from the fitted profile, namely the fitted line centre parameter $v_\mathrm{ctr}$, and the velocity $\langle V_{20} \rangle$ obtained from the mean of the 20\% velocities. The systemic velocities $V_\mathrm{sys}$ are taken from Col. 6 of Table 2 of \citeauthor{deblok_2008}. With the exception of the outlier at about $-14$ km s$^{-1}$, which is NGC 3627, the spread in differences is centred on zero, with a standard deviation of the same order as the channel width for these galaxies, which was about 5.2 km s$^{-1}$ for 12 out of the 19 and half that for the rest.
\citeauthor{deblok_2008} found that a $V_\mathrm{sys}$ calculated from the global line wings of NGC 3627 returned values in the range 717 to 720 km s$^{-1}$, which fall much closer to our value of 720.6 km s$^{-1}$. NGC 3627 appears to have kinematic asymmetries, such as have been described and discussed by \citet{swaters_1999}. However, in the two galaxies studied by \citeauthor{swaters_1999}, the disturbances seem to be confined to the inner velocity regions. Disturbances in the inner-disk kinematics do not affect the rise and fall of the global profile, and thus should have only a small effect on the $V_\mathrm{sys}$ derived via fitting the present model. In contrast, for NGC 3627, Fig. 79 of \citeauthor{deblok_2008} shows a significant excess of gas at velocities 10 km s$^{-1}$ or more greater than their tilted-ring fit, right at the trailing edge of the velocity distribution. The present model is sensitive to such distortions, so this is arguably the reason for the inconsistency between our value of $V_\mathrm{sys}$ for this galaxy and that of \citet{deblok_2008}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_d.eps}
\caption{Frequency histograms of the difference between values of systemic velocity from two different sources. The first value, $V_\mathrm{sys}$, is that given in Col. 6 of Table 2 of \citet{deblok_2008}. These values were derived from tilted-ring fits to rotation curves of a 19-member subset of THINGS galaxies. The second value, $v_\mathrm{fit}$, was calculated in the present work. In the upper plot, the line centre parameter $v_\mathrm{ctr}$ of the fitted profile was used for $v_\mathrm{fit}$; in the lower plot, $v_\mathrm{fit}$ is the mean of the low and high velocity values at the 20\% height level on the fitted profile.
}
\label{fig_D}
\end{figure}
\citet{andersen_2009} found that line centres obtained via a moment analysis were biased in proportion to the asymmetry as measured by the third moment. Since asymmetry is built into our model from the start, we would not expect it to suffer from a similar problem. Figure \ref{fig_V}, which is analogous to Fig. 8 of \citeauthor{andersen_2009}, confirms that this is the case: although both our velocity scatter and our uncertainties are smaller, no correlation is apparent.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_v.eps}
\caption{Scatter plot comparing the asymmetry parameter $\alpha$ of the fitted profile model to differences between the present fitted centre velocity $v_\mathrm{ctr}$ and the systemic velocities $V_\mathrm{sys}$ obtained by \citet{deblok_2008}. The velocity difference is the same as that histogrammed in the upper panel of Fig. \ref{fig_D}. Only the 19-member subset of THINGS galaxies examined by \citeauthor{deblok_2008} are shown.
}
\label{fig_V}
\end{figure}
\subsection{Fitting to coarsened and noisified data} \label{ss_tests_coarse}
\subsubsection{Introduction} \label{sss_coarse_intro}
We argue in the introduction that the main use for this profile model is to extract information in a systematic way from the kinds of \ion{H}{i} spectra most commonly encountered in blind surveys, namely those with low S/N and a channel width of about 10 km s$^{-1}$. In order to test how well the model serves this purpose, data are needed with \emph{a priori} known values of line width, total flux etc. to allow comparison with the values obtained from fitting the model. A large number of test spectra is also desirable, to reduce or at least ascertain the uncertainties in the results. The raw THINGS spectra will not serve for this: they are too few in number, have unknown \emph{a priori} values, have too high S/N, and significantly higher spectral resolution than we normally expect to encounter in blind \ion{H}{i} surveys.
An alternative would be simply to simulate the required numbers of noisy, coarsely-binned profiles. However such a simulation always carries with it some uncertainty about how applicable its results are to real measurements. Simulations are also vulnerable to the criticism that one only gets out what one puts in.
We can however make use of the THINGS profiles to generate realistic data simply by binning them into wider velocity channels and adding a lot more noise. \citet{lewis_1983} made use of a similar method in investigating line width measurement biases. This is the approach taken in the present section.
What we do here is run several Monte Carlos, in each of which an ensemble of test spectra is generated. The line-profile model is fitted to each spectrum in an ensemble, and the width, centre and total flux of the line are also estimated using traditional direct methods. For each ensemble, and for each property of interest (line width etc.), an average is formed from the values obtained from the fitted profiles on the one hand and the direct measurement on the other. The input values of the properties are known and so are available for comparison. The aim is to demonstrate that the average value of each property is better estimated via model fitting than by direct measurements from the spectrum.
\subsubsection{The Simulation Monte Carlo} \label{sss_coarse_mc}
Five of the THINGS galaxies were selected, the sample being chosen so as to cover a range of shapes and sizes. These five are the ones shown in Fig. \ref{fig_H}. They all had channel widths of about 2.6 km s$^{-1}$, except for NGC 4214, for which the width is half this. For each of the five, a Monte Carlo ensemble of 100 spectra was generated for each of a set of twelve S/N values, the S/N figure being calculated according to the ALFALFA formula, which is discussed in Sect. \ref{sss_coarse_snr}. The twelve values were evenly spaced in a logarithmic sense over a range between about 2 and 100. Different S/N offsets were applied to the five galaxies to prevent overlap of graph points when results from several galaxies appear on the same figure.
As already mentioned, the desired channel width in this Monte Carlo was about 10 km s$^{-1}$. This was most simply achieved by averaging an integer number of the channels of the original spectra, that number being 4 for all but NGC 4214, for which a factor of 8 was used. This scheme also permitted dithering of individual spectra by random offsets in the range from zero to one less than the rebinning factor. This dithering was to avoid systematic effects from the centre velocity of the input spectrum always having the same relation to the channel boundaries in the rebinned scheme.
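A sketch of this rebinning-with-dither step is given below; the rebinning factor and the random generator are the only inputs, and channels beyond the last complete bin are discarded.
\begin{verbatim}
import numpy as np

def rebin_with_dither(spectrum, factor, rng):
    # Offset the binning grid at random by 0 .. factor-1 channels,
    # then average each group of `factor` original channels.
    offset = rng.integers(factor)
    trimmed = spectrum[offset:]
    n_bins = trimmed.size // factor
    return trimmed[:n_bins * factor].reshape(n_bins, factor).mean(axis=1)
\end{verbatim}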
\subsubsection{Calculating the S/N} \label{sss_coarse_snr}
Signal-to-noise ratio can be defined most simply as the ratio between the maximum or peak height of the input spectrum and the noise standard deviation or RMS. However this does not correlate well with the detectability of \ion{H}{i} lines, because a broader line is usually easier to detect than a narrow one having the same peak-to-RMS S/N. For this reason it is usual to define some alternative measure which takes account of the line width. The ALFALFA survey makes use of the following formula \citep[equation 2 of][]{haynes_2011}:
\begin{displaymath}
S/N_\mathrm{ALFALFA} = \frac{S}{\sigma W_\mathrm{50}} \sqrt{\frac{\min(400, W_\mathrm{50})}{2 \; \Delta v_\mathrm{chan}}}.
\end{displaymath}
We used the same formula to calculate the S/N values for our Monte Carlo.
Note firstly that, for any given profile shape, $S/N_\mathrm{peak} \propto S/N_\mathrm{ALFALFA}$, but the proportionality constant differs from profile to profile. Note secondly that ALFALFA take $S/N_\mathrm{ALFALFA} > 6$ as their detection criterion \citep{haynes_2011}.
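In code, the ALFALFA formula is a one-liner; the sketch below assumes the total flux in Jy km s$^{-1}$, widths in km s$^{-1}$ and the RMS noise in Jy.
\begin{verbatim}
from math import sqrt

def snr_alfalfa(S, sigma, w50, dv_chan):
    # S: total flux; sigma: RMS noise; w50: linewidth at 50%
    # of maximum height; dv_chan: channel width.
    return S / (sigma * w50) * sqrt(min(400.0, w50) / (2.0 * dv_chan))
\end{verbatim}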
A sample spectrum produced in the Monte Carlo is shown in Fig. \ref{fig_I}. This has an S/N equal to 3.9, which is a little below the ALFALFA detection criterion. As such it represents a very typical example of the quality of profile commonly seen in such surveys.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_i.eps}
\caption{An example of a rebinned and noisified profile such as described in Sect. \ref{ss_tests_coarse}. The template spectrum was the THINGS observation of NGC 3184. The template data have been rebinned by a factor of 4 to give a channel width of 10.35 km s$^{-1}$; noise has then been added to bring the signal-to-noise ratio to a figure of 2.0 in peak-to-RMS terms, which for this galaxy is equivalent to about 3.9 according to the ALFALFA formula, as described in Sect. \ref{sss_coarse_snr}.
}
\label{fig_I}
\end{figure}
\subsubsection{Fitting to the data and analysing the results} \label{sss_coarse_fitting}
The simplex algorithm was used to fit each spectrum. Several fitting runs were done with increasing values of the maximum permitted number of iterations, and decreasing values of the convergence criterion, in order to be sure that the fits were properly converging. The posterior described in Sect. \ref{sss_fitting_bayes} was the objective function fitted, but this time some attention was paid to priors. Details of these and their justification can be found in appendix \ref{ss_app_B_justification}.
There are two ways one can analyse such data: one can compare the fitted values of the model parameters to those values obtained from fits to the original THINGS template spectra; and one can compare measurements derived from fitting a model to those obtained via more traditional, non-parametric methods. Both sorts of analysis have been done for the `coarse' data. The corresponding results are described respectively in Sects. \ref{sss_coarse_results_I} and \ref{sss_coarse_results_II}.
It is always going to be difficult to obtain reliable estimates of line parameters from data at such low S/N values as in Fig. \ref{fig_I}. There are three quantities which are of central importance: the total flux under the spectral line, its central or systemic velocity, and its width.
In the case of total flux, a typical non-parametric estimation method is that described in \citet{haynes_1984}: i.e. simply to `integrate over the observed signal'. Implicit in this however is a knowledge of where the `observed signal' begins and ends. For high S/N signals this is often done by visual inspection. In this case exactitude is not too important: one can afford to be generous, since the added noise per extra channel is small. Judging the line boundaries is of course much more problematic in the weak-signal limit. There are also practical difficulties involved in human inspection of large numbers of spectra.
Line width and line centre are often derived from the same source, namely two measurements of the low- and high-velocity edges of a line. Line width is taken from their difference and line centre from their average. \citet{bicay_1986} present several typical methods of estimating the edge velocities. These all start by deciding on a flux density value, then interpolate linearly between adjacent channels which straddle this value. The flux density chosen is some percentage of a characteristic value for the line as a whole, which may be the mean flux density, or the maximum within the line profile, or some function of the peaks at the horns of the profile, in cases where these can be measured. In all cases such values for the characteristic flux density depend upon either the channel width or on implicit assumptions about where the line begins and ends.
The largest blind \ion{H}{i} survey (still ongoing) is ALFALFA \citep{giovanelli_2005}; their most recent catalog release paper is \citet{haynes_2011}. The latter authors describe their edge-estimation procedure as similar to that of \citet{springob_2005}, in which polynomials (in practice nearly always `polynomials of order 1', i.e. straight lines) are fitted to several channels at the line rolloff on each side, the interpolation flux density being 50\% of the horn height on that side. In practice this is not much different to schemes already mentioned because, unlike the profiles shown in Fig. 2 of \citet{springob_2005}, in ALFALFA the effective channel width of 10 km s$^{-1}$ would not provide many channels in the rolloff.
To settle on a point of comparison therefore we decided to employ, as our `traditional, non-parametric method' of estimating the line edges, the simple scheme as follows:
\begin{enumerate}
\item Determine $s_\mathrm{max}$, the highest flux density within the spectrum.
\item Identify the number $j_\mathrm{lo}$ of the lowest-velocity channel whose $s$ value exceeds $0.5 s_\mathrm{max}$.
\item Identify the number $j_\mathrm{hi}$ of the highest-velocity channel whose $s$ value exceeds $0.5 s_\mathrm{max}$.
\item Interpolate between $j_\mathrm{lo}-1$ and $j_\mathrm{lo}$ to obtain $v_\mathrm{lo}$, and between $j_\mathrm{hi}$ and $j_\mathrm{hi}+1$ to obtain $v_\mathrm{hi}$.
\end{enumerate}
It is thus a maximizing algorithm, in the sense that it selects the outermost channels exceeding the 50\% criterion.
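The following Python sketch implements these four steps literally (boundary checks at the ends of the spectrum are omitted); the same interpolation applies at other fractional heights, such as the 20\% level used in Sect. \ref{sss_orig_comparison}.
\begin{verbatim}
import numpy as np

def edges_50pc(v, s):
    # v: channel centre velocities; s: flux densities.
    half = 0.5 * s.max()                      # step 1
    above = np.nonzero(s > half)[0]
    j_lo, j_hi = above[0], above[-1]          # steps 2 and 3
    # Step 4: linear interpolation on each wing.
    v_lo = np.interp(half, [s[j_lo - 1], s[j_lo]],
                     [v[j_lo - 1], v[j_lo]])
    v_hi = np.interp(half, [s[j_hi + 1], s[j_hi]],
                     [v[j_hi + 1], v[j_hi]])
    return v_lo, v_hi   # width = v_hi - v_lo; centre = their mean
\end{verbatim}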
\subsubsection{Monte Carlo results I: comparison with high-S/N fits} \label{sss_coarse_results_I}
Some of the more important results of the Monte Carlo described in Sects. \ref{sss_coarse_mc} to \ref{sss_coarse_fitting} are shown in Figs. \ref{fig_K} to \ref{fig_N3}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_k.eps}
\caption{Mean total flux of an ensemble of fitted line profiles over a range of values of ALFALFA signal-to-noise ratio (S/N). Only 5 of the THINGS galaxies are shown, as labelled. Construction of these Monte Carlos is described in Sect. \ref{sss_coarse_mc}. The square symbols give the mean, the diamonds the median, and the vertical bars show the standard deviation for each ensemble. The half-tone lines give the values from the profile fits to the original THINGS data. Red Xs show the points at which the peak-to-RMS S/N equals 2.
}
\label{fig_K}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_l.eps}
\caption{Similar to Fig. \ref{fig_K} except that the fitted value of intrinsic line width is shown instead of total flux. DDO 53 is not shown because the intrinsic line width for this galaxy is not significantly different from zero.
}
\label{fig_L}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_m.eps}
\caption{Mean fitted line centre offset is shown for DDO 154, at the same values of S/N as in Figs. \ref{fig_K} and \ref{fig_L}. No other galaxies in this subset are shown because all 5 show similar characteristics.
}
\label{fig_M}
\end{figure}
In Fig. \ref{fig_K}, the variation of the mean value of total flux fitted to the spectra is shown against the S/N, for all five galaxies in the trial. Some upward bias in the values becomes visible at low S/N values, but this always remains within 1 standard deviation of nominal.
Figure \ref{fig_L} is a similar plot, this time for the intrinsic line width. DDO 53 is not shown, because the line width is small and poorly constrained for this galaxy, even in fitting to the raw THINGS spectrum. Similar slight biases are observed in three of the four galaxies; it is not clear why NGC 4214 seems to be so badly affected at low S/N.
It is important to note that, at the highest S/N levels shown, the linewidth clearly asymptotes to the value measured in Sect. \ref{ss_tests_orig} and tabulated in Col. 3 of Table \ref{tab_O}, despite the fact that the spectral resolution of the present data is several times poorer than that of the original THINGS observations. This emphasises that no `instrumental broadening' correction to linewidth measures is necessary when fitting a line-profile model.
The curious under-estimation by up to $2\sigma$ of the linewidth for DDO 154 at quite large values of S/N remains unexplained, although it is worth noting that it is never more than about 8 km s$^{-1}$, which is less than the channel width of these spectra. In the next section it is demonstrated however that for DDO 154 the $W_{50}$ linewidth remains correct in this range; evidently the decrease in the fitted `intrinsic' value is compensated by an increase in the `turbulent broadening' value.
Note that the mean fitted linewidth for all five galaxies remains accurate down to the ALFALFA limit of detectability.
The mean and standard deviation of the line centre offset (that is, the difference between the fitted line centre and the input value) are displayed in Fig. \ref{fig_M} just for DDO 154. All the galaxies show similar results. No significant bias is seen, and the scatter remains within the channel width for ALFALFA S/N values greater than 5.
From these results we can conclude that, so far as the important parameters of the fitted profile model go, there is little bias evident when the spectral resolution and S/N are lowered to typical levels expected in blind surveys.
\subsubsection{Monte Carlo results II: comparison with non-parametric methods} \label{sss_coarse_results_II}
Figures \ref{fig_N1} through \ref{fig_N3} all make the same comparison, but a single galaxy is plotted per figure for clarity. The other two galaxies are not shown because their results are similar. For each value of S/N in each plot, two values of $W_{50}$, the line width at 50\% of maximum height, are shown: the error bar centred on a triangle comes from measuring the fitted profile; that centred on the square, from the raw spectrum, using the simple algorithm discussed in Sect. \ref{sss_coarse_fitting}. It is obvious from all plots that finding $W_{50}$ from the fitted profile yields a more accurate result. The effect is most stark for broad profiles, which for the same ALFALFA-formula S/N have a much lower peak-to-RMS value.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_n1.eps}
\caption{For the same range of S/N values as in Figs. \ref{fig_K} to \ref{fig_M}, the line width at the 50\% height measured from the raw spectrum is compared to the same quantity measured from the fitted profile. DDO 53 is shown here. The square symbol indicates the mean value for the raw spectra, the triangle the mean for the fitted profiles. These respective points have been slightly offset horizontally for clarity. The halftone line indicates the value for the profile fitted to the original THINGS data.
}
\label{fig_N1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_n2.eps}
\caption{Same as for Fig. \ref{fig_N1} except DDO 154 is shown.
}
\label{fig_N2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_n3.eps}
\caption{Same as for Fig. \ref{fig_N1} except NGC 3184 is shown.
}
\label{fig_N3}
\end{figure}
\subsubsection{Discussion} \label{sss_coarse_discussion}
\citet{lewis_1983} made a similar study of the effect of noise on measurements of \ion{H}{i} line position and width. Although Lewis's study was in many respects more comprehensive than the present one, this author applied traditional direct methods to measure the positions of line edges, and deprecated the use of functions fitted to the line shoulders over several channels. Lewis argued that the number of channels spanning the line shoulder was of the same order as the minimum number of parameters required by any useful fitting function, and therefore that fitting was equivalent merely to interpolation, and could therefore not be expected to yield any improvement in precision.
In opposition to this view stands Bayesian probability theory, which says that the most complete knowledge possible of the probability distribution of linewidth values is obtained by use of Bayes' theorem in conjunction with a model of the spectral line. A seeming oversupply of model parameters is no objection provided the unwanted ones are marginalized out. The only necessary criterion is that the model reproduces accurately the widths of real spectral lines. For the present model, this is well demonstrated in Sect. \ref{sss_orig_comparison}.
Nevertheless it seems reasonable that a model with 6 parameters will be poorly constrained if the spectral line extends over no more than about the same number of velocity channels. It is of interest therefore to consider the likely shape of the Bayesian posterior distribution in the extreme case that the spectral line has most of its flux in a single channel. In this case it seems clear that the line centre and total flux parameters will remain well-constrained. On the other hand, the fraction-solid and asymmetry parameters can be expected to be completely unconstrained, with the posterior density remaining about the same if they are varied through their respective ranges (as for example in Fig. \ref{fig_G}). Since the Gaussian spread of the spectral line is modelled by the $\Delta v_\mathrm{rand}$ parameter, its value will be most tightly constrained by the flux densities in the channels adjacent to the central one. If these are insignificant, then this parameter will be unconstrained within a range between zero and about the width of a channel. The final parameter, the intrinsic linewidth, will be expected to have the same character (this again is similar to what is seen in Fig. \ref{fig_G}).
In fact none of these indeterminacies and degeneracies matter, because in practice one integrates the posterior over all the dimensions (i.e. parameters) of no present interest. For example if one wants a probability distribution of $W_{50}$ for a given spectrum, this is very easy to obtain from the Bayesian formulation for the respective data. The number of parameters in the model should be viewed merely as a detail of the way the problem is worked out, having no importance for the final result.
Another issue which deserves some discussion concerns the biases in fit parameters observed in Figs. \ref{fig_K} to \ref{fig_N3}. If the Bayesian formulation is optimum, one might ask, why does it still lead to biased results? The answer is that the Monte Carlo procedure adopted in the present section is not the way one ought to proceed with real data. In no case is it optimum to fit separate spectra, then average the resulting parameter values. This is done here only because we have no option if we wish to make comparison with standard methods of estimating line parameters directly from the data. Were such a comparison not required, the correct approach would depend on the nature of the quantity to be estimated: either we have an ensemble of observations of the same object, or an ensemble of observations of different objects. In the former case, the correct Bayesian approach is to fit once to the entire data set. In this case we are back (given sufficient data) in the high-S/N regime and any biases will be correspondingly insignificant. If the objects are all different, then the correct approach is hierarchical modelling; an interesting recent example of this treatment is \citet{brewer_2014}. Essentially one defines a hyper-model which describes the distribution of values of properties across the ensemble of objects, and tries to constrain its hyper-parameters. Although any individual spectrum has low S/N, again a sufficiently large data set will eventually reach the high-S/N, and thus low-bias, regime as regards the hyper-parameters.
\subsection{The THINGS galaxies in EBHIS} \label{ss_tests_ebhis}
\subsubsection{Overview} \label{sss_ebhis_overview}
The semi-simulated spectra generated in Sect. \ref{ss_tests_coarse} still fall some way short of real life and therefore cannot be expected to present the full range of difficulties one encounters in trying to analyse real observations. We also wanted to test the profile model on single-dish data. The EBHIS survey (Effelsberg Bonn \ion{H}{i} Survey, \citealt{winkel_2010, kerp_2011}) is convenient for this, since its survey coverage has included most of the THINGS galaxies, yet as a single-dish survey it necessarily has much reduced spatial resolution compared to the interferometer used by \citet{walter_2008} to make their THINGS observations. An extensive comparison of the EBHIS observations of the THINGS galaxies might be interesting and useful, but is beyond the scope of the present paper. Here we just wish to show that profile-fitting also works well under less-than-ideal conditions. Specifically we are interested in the following questions:
\begin{itemize}
\item Can we cope with non-flat baselines and interference?
\item Can profile-fitting at separate pixels across the source return a map of the source flux which is better (less noisy) than one obtained simply by adding flux densities over a range of channels?
\item Can the variation in the asymmetry parameter across the source give clues to the galaxy orientation and inclination, even with poorly-resolved sources?
\end{itemize}
\subsubsection{Baseline fitting} \label{sss_ebhis_baseline}
The baseline problem differs between single-dish and interferometer spectra. Single-dish measurements tend to be worse affected by Fabry-Perot type effects which can produce waves in spectral baselines. Spectra made with an interferometer on the other hand tend to have flat baselines, although a spatially superimposed continuum source can raise and otherwise perturb the baseline. Future, deep interferometric surveys for \ion{H}{i} may, however, begin to be affected by source confusion, which may necessitate wider use of baseline fitting.
Traditionally, non-flat baselines have been dealt with by fitting a function, often a simple polynomial, to stretches of channels on either side of the line of interest. The fit is interpolated across the line profile, but channels within the profile make no contribution to the fit. This is less than ideal for three reasons: firstly because of the restricted number of channels which contribute information to the fit; secondly because it relies on a decision as to where the profile begins and ends; and thirdly because the degree of the fitting polynomial is usually selected `by eye'.
The approach taken in the present paper is simply to add a baseline function to the existing profile model, and fit the combined model to all selected channels. This also lends itself to a Bayesian approach to determining the best order of the baseline function. This technique is more fully described in appendix \ref{ss_app_B_bkg}.
Chebyshev polynomials have been preferred for the baseline function in the present paper, because they are easy to generate and are numerically better conditioned than simple polynomials. Note however that a Chebyshev series is still just a polynomial, and an $n$th-order Chebyshev fitted to some data will result in exactly the same function as fitting a simple $n$th-order polynomial would (B. Winkel, private communication). The Chebyshev form is just better behaved from a computational point of view.
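With numpy, for example, folding a Chebyshev baseline into the model takes only a few lines; in the sketch below the line-profile function is a placeholder for the model of this paper, and the channels are mapped to the canonical interval $[-1, 1]$ before evaluation.
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as C

def model_with_baseline(v, line_pars, cheb_coeffs):
    # v: channel velocities; line_pars: the six profile parameters;
    # cheb_coeffs: baseline coefficients, orders 0 .. n.
    x = 2.0 * (v - v.min()) / (v.max() - v.min()) - 1.0
    # line_profile() stands for the profile model described earlier.
    return line_profile(v, line_pars) + C.chebval(x, cheb_coeffs)
\end{verbatim}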
The EBHIS cubes have 3000 spectral channels spanning nearly 3900 km s$^{-1}$, which is far wider than any of the lines fitted. It is not necessary to fit to the full velocity range, so in all cases only a small section, about 3 or 4 times wider than the spectral line, was selected for fitting purposes.
\subsubsection{Results} \label{sss_ebhis_results}
Results are presented here for only a single galaxy, DDO 154.
Figure \ref{fig_R} shows the spectrum of DDO 154 obtained by integrating a $9 \times 9$ pixel area of the relevant data cube. The profile shown is the mean of $10^4$ iterations of a converged MCMC, but it is almost indistinguishable to the eye from the best-fit profile from a Levenberg-Marquardt optimisation. The baseline is fitted by a sum of Chebyshev polynomials truncated at order 5 (i.e. 6 terms in total), this number giving the maximum Bayesian evidence (see appendix \ref{ss_app_B_bkg}).
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_r.eps}
\caption{Constructed from the EBHIS cube for DDO 154. The upper plot shows a spectrum from a section of the cube, integrated over the same spatial dimensions as shown in Figs. \ref{fig_Q1} to \ref{fig_Q3}. The lower plot shows the residuals after subtraction of the fitted model. The red line shows the MCMC-mean profile model, with 6 orders of Chebyshev function. The solid line shows the whole profile, whereas the dashed line just shows the baseline. The $\pm 1$ sigma range of the baseline prior is indicated by the grey band.
}
\label{fig_R}
\end{figure}
A spectrum summed across the entire breadth of this cube showed no evidence for narrow-band RFI in any channel, so no channels were excised before fitting.
Because there is a significant baseline contribution in this spectrum, careful attention was paid to baseline priors, which were estimated from the non-source portions of the cube as described in Sect. \ref{ss_app_B_baseline}. As can be seen from the figure, the prior accounts for quite a lot of the baseline over the small spatial extent of the source. This gives one confidence that the baseline subtraction is accurate, and thus also the fitted total flux of the spectral line.
A Levenberg-Marquardt optimization returned a best-fit value of $102 \pm 4$ Jy km s$^{-1}$ for the total flux of DDO 154, whereas the MCMC returned a mean value of $100 \pm 5$ Jy km s$^{-1}$. These values are significantly higher than the value of 82 Jy km s$^{-1}$ fitted to the data of \citet{walter_2008}. However, it is known that interferometric measurements can miss flux. Our value agrees very well with the value of $105 \pm 5$ Jy km s$^{-1}$ reported by \citet{carignan_1998} from a careful combination of interferometer and single-dish data.
Next, the baseline-plus-line model was fitted to spectra extracted at single spatial pixels across the same $9 \times 9$ area integrated to produce the spectrum in Fig. \ref{fig_R}, and $9 \times 9$ maps of the fitted parameter values were made. Figures \ref{fig_Q1} and \ref{fig_Q2} show two of these, respectively for the total-flux parameter and the asymmetry parameter. (The pixels of Fig. \ref{fig_Q2} were set to null where the total flux fell below 15\% of maximum.) It is of interest to compare these to the much higher-resolution versions in Fig. 53 of \citet{walter_2008}. As one might expect, the single-dish EBHIS survey seems to detect more extended structure than the interferometric measurements of \citeauthor{walter_2008}.
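Schematically, the mapping loop has the following shape; here \texttt{cube}, \texttt{fit\_spectrum} (standing for the fit described above) and the parameter indices are all hypothetical names.
\begin{verbatim}
import numpy as np

n_par = 6
maps = np.full((9, 9, n_par), np.nan)
for iy in range(9):
    for ix in range(9):
        spectrum = cube[:, iy, ix]                # one spectrum per pixel
        maps[iy, ix] = fit_spectrum(v, spectrum)  # fitted parameters

flux_map = maps[:, :, flux_index]
# Null the asymmetry map where the flux is below 15% of maximum.
asym_map = np.where(flux_map > 0.15 * np.nanmax(flux_map),
                    maps[:, :, asym_index], np.nan)
\end{verbatim}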
Clearly visible in the asymmetry map is the orientation of the galaxy's major and minor axes, even though DDO 154 is barely resolved by the 9 arcmin FWHM beam of the Effelsberg telescope at L band \citep{kerp_2011}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_q1.eps}
\caption{Constructed from the EBHIS cube for DDO 154. The plot shows the fitted total flux at each of a small range of spatial pixels.
}
\label{fig_Q1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_q2.eps}
\caption{Same as for Fig. \ref{fig_Q1} except the asymmetry parameter is shown. Pixels are set to median grey where the total flux drops below 15\% of maximum.
}
\label{fig_Q2}
\end{figure}
The remaining parameter maps are of less interest and are not shown. The map of the fitted line centre also (as one might expect) shows indications of the major and minor axes, but the signal is not so unequivocal as with the asymmetry map.
As far as the baseline fits go, no obvious trends are visible either in the 6 maps of the individual Chebyshev coefficients, or in a map of their RMS value (Fig. \ref{fig_Q3}). There is no bright continuum radio source to disturb the baseline at the location of DDO 154: NVSS \citep{condon_1998} shows that there is no source with an average flux density at L band greater than 90 mJy/beam within the approximately 30\arcmin\ by 30\arcmin\ square shown in Figs. \ref{fig_Q1} to \ref{fig_Q3}. The zeroth-order Chebyshev coefficient (which is just a DC offset) does appear slightly elevated at the centre of the source, but only at the 2-sigma level.
\begin{figure}
\centering
\includegraphics[width=\hsize]{./fig_q3.eps}
\caption{Same as for Fig. \ref{fig_Q1} except the RMS of the Chebyshev coefficients is shown.
}
\label{fig_Q3}
\end{figure}
\section{Conclusion} \label{s_conclusion}
The two most desirable measurements of the global \ion{H}{i} spectral profile of a galaxy are its width and area. Combined with optical measurements of the inclination and brightness of the galaxy, these allow the galaxy's \ion{H}{i} mass and intrinsic optical luminosity, and thence its distance, to be estimated via the baryonic Tully-Fisher relation. The systemic velocity of the galaxy comes next in importance. This quantity, when compared to the Hubble velocity appropriate to that distance, additionally allows one to calculate the peculiar or local velocity of the galaxy in the line of sight to the observer. A profile model is described here which permits one to make accurate estimates of these quantities.
In Sect. \ref{ss_tests_orig} the model was fitted to a variety of \ion{H}{i} spectra from the THINGS survey \citep{walter_2008}. These galaxies were observed at high velocity resolution and under very low-noise conditions, and exhibited negligible baselines.
The fits are nearly always excellent. Local density enhancements and voids in the \ion{H}{i} distribution give rise to noisiness in the profiles, but the residuals from these wiggles typically have amplitudes less than about 10\% of the profile height. Such local deviations seem to have little effect on the bulk fitted properties.
Particularly important to note is the close fit to the slopes of the lines, which means that linewidth parameters of the profile such as $W_{50}$ or $W_{20}$ are accurately (within 2\%) reproduced by the best-fit model. There is a great deal of existing lore on how to correct such linewidth measurements so as to obtain estimates of the maximum rotation speed of the galaxy, which is the quantity desired for Tully-Fisher calculations. The availability of a good model proxy for the linewidth allows one to make use of such formulae without radical modification, even in cases of low spectral resolution or signal-to-noise ratio (S/N).
These correction formulae include a correction for instrumental broadening. Strictly speaking, this is no longer necessary if the linewidth is taken from the model profile, since this profile is what would actually be seen with no instrumental broadening (i.e., infinite resolution). Trials in Sect. \ref{ss_tests_coarse} in which the model is fitted to spectra with coarser velocity binning indeed show no significant `instrumental' bias in linewidth in the high-S/N limit. Biases do begin to show themselves as S/N decreases toward the limit of detectability, but they are always significantly less than those observed with linewidths measured directly from the noisy, coarsely-resolved profiles. As described in Sect. \ref{sss_coarse_discussion}, correct Bayesian treatment of ensemble observations can circumvent ill effects from such low-S/N biases.
Comparison of the total flux parameter of the fitted model to that obtained by summing the observed flux densities indicates again that the model value of flux is an accurate proxy for the real one (within 1\%). For about half the 34 galaxies fitted, the difference was less than the measurement uncertainty, which was nearly always significantly less than 1\%. There is however a significant minority of galaxies for which the total flux is slightly underestimated. It is thought likely that this is because of the frequent occurrence of extended tails in the profile wings, which are underfitted by the Gaussian wings of the model.
Systemic velocities are returned within measurement errors and without bias by the model fits, even for asymmetric profiles.
The use of the remaining three model parameters (turbulent broadening $\Delta v_\mathrm{rand}$, `solid rotating' fraction $f$ and asymmetry $\alpha$) has been little explored. The $\Delta v_\mathrm{rand}$ value fitted is typically about 12 km s$^{-1}$ (see Appendix \ref{ss_app_B_justification}), which is 2 or 3 km s$^{-1}$ larger than typical values seen in spatially resolved maps of velocity distribution. As explained in Sect. \ref{sss_orig_comparison}, this is likely an artefact of the simplistic rotation curve implied by the model. Deviations of the rotation curves of real galaxies from this simple framework are readily absorbed into the $\Delta v_\mathrm{rand}$ parameter, tending therefore to inflate its value.
It might be of interest to see whether the $f$ and $\alpha$ parameters can be useful as proxies for features of galaxy morphology, in analogy with studies such as those of \citet{richter_1994} and \citet{andersen_2009}, but this question is beyond the scope of the present paper.
In Sect. \ref{ss_tests_coarse} it was shown that the model remains a useful way to extract parameters of interest even when the spectral resolution and S/N more nearly approximate those of the bulk of detected sources in surveys such as ALFALFA.
The final test performed on the model was to fit to a representative \ion{H}{i} profile from the EBHIS survey \citep{winkel_2010, kerp_2011}. In contrast to the THINGS observations, this survey was done using a single dish antenna, and thus comes with a significant amount of baseline. This was dealt with in a Bayesian fashion by using the non-source areas of the relevant data cube to constrain the baseline priors. The success of this procedure is shown by the accurate agreement between the resulting fitted value of total flux and previous careful measurements for this galaxy (DDO 154).
Fitting the profile to individual pixels instead of to a spatial sum across several pixels allowed something of the spatial distribution of neutral hydrogen in this galaxy to be mapped in a low-noise fashion, despite the intrinsically poor spatial resolution of the survey. The orientation of the galaxy's rotation axis could also be approximately determined via mapping the fitted value of the asymmetry parameter. The success of these fits shows that the model is not only suitable for \ion{H}{i} profiles which result from a complete spatial integration across the extent of the galaxy, but also for partially integrated profiles.
\begin{acknowledgements}
IMS wishes to thank F Walter and J Kerp for generous provisions of non-public data, and is grateful to L Fl\"{o}er, A Schr\"{o}der and B Winkel for help and advice. We are grateful to the anonymous referee for helpful and informative comments.
\end{acknowledgements}
\bibliographystyle{./aa}
\section{Introduction}
In the context of proving lower bounds in complexity theory, many of the existing approaches for proving Boolean circuit lower bounds were unified by Razborov and Rudich under the {\em Natural Proofs} framework~\cite{RR97} and they showed that, under standard cryptographic assumptions, any technique that fits into this framework cannot yield very strong lower bounds.
In the last few years there has been some work (e.g.
\cite{G15}, \cite{GKSS17, FSV18}) aimed at developing an analogue of the Natural Proofs framework for algebraic circuit lower bounds.
A crucial notion in this context is that of an \emph{equation} for a class of polynomials which we now define.
For a class ${\cal C}$ of polynomials, an {\em equation} for ${\cal C}$ is a family of nonzero polynomials that vanish on the coefficient vectors of all polynomials in ${\cal C}$.\footnote{Strictly speaking, these notions need us to work with families of polynomials, even though we sometimes drop the word \emph{family} for ease of exposition.}
Informally, an {\em algebraic natural proof} for a class ${\cal C}$ is a family of {\em equations} for ${\cal C}$ which can be computed by algebraic circuits of size and degree polynomially bounded in their number of variables.
Thus, a lower bound for ${\cal C}$ can be proved by exhibiting an explicit polynomial on which an equation for ${\cal C}$ does not vanish.
Many of the known algebraic circuit lower bounds fit into this framework of algebraically natural proofs as observed by several authors \cite{AD08,G15,FSV18,GKSS17}, thereby motivating the question of understanding whether techniques in this framework can yield strong algebraic circuit lower bounds; in particular, whether such techniques are sufficient to separate $\VNP$ from $\VP$.
Thus, in this framework, the first step towards a lower bound for $\VP$ is to understand whether $\VP$ has a family of equations which is itself in $\VP$, that is, whose degree and algebraic circuit size are polynomially bounded in the number of variables.
The next step, of course, would be to show the existence of a polynomial family in $\VNP$ which \emph{does not} satisfy this family of equations.
This work is motivated by the first step of this framework, that is, the question of understanding whether natural and seemingly rich circuit classes like $\VP$ and $\VNP$ can have efficiently constructible equations.
We briefly discuss prior work on this problem, before describing our results.
\subsection{Complexity of Equations for classes of polynomials}
In one of the first results on this problem, Forbes, Shpilka and Volk~\cite{FSV18} and Grochow, Kumar, Saks and Saraf~\cite{GKSS17} observe that the class $\VP$ does not have efficiently constructible equations if we were to believe that there are hitting set generators for algebraic circuits with sufficiently succinct descriptions.
However, unlike the results of Razborov and Rudich \cite{RR97}, the plausibility of the pseudorandomness assumption in \cite{FSV18, GKSS17} is not very well understood.
The question of understanding the complexity of equations for $\VP$, or in general any natural class of algebraic circuits, continues to remain open.
In a recent work of Chatterjee and the authors~\cite{CKRST20}, it was shown that if we focus on the subclass of $\VP$ (in fact, even $\VNP$) consisting of polynomial families with bounded integer coefficients, then we indeed have efficiently computable equations.
More formally, the main result in \cite{CKRST20} was the following.
\begin{theorem}[\cite{CKRST20}]
\label{thm:CKRST-complexes}
For every constant $c > 0$, there is a polynomial family $\{P_{N, c}\} \in \VP_\Q$ \footnote{For a field $\F$, $\VP_{\F}$ denotes the class $\VP$ where the coefficients of the polynomials are from the field $\F$.
Similarly, $\VNP_{\F}$ denotes the class $\VNP$ where the coefficients of the polynomials are from the field $\F$.}
such that for all large $n$ and $N = \binom{n+n^c}{n}$, the following are true.
\begin{itemize}
\item For every family $\{f_n\} \in \VNP_\Q$, where $f_n$ is an $n$-variate polynomial of degree at most $n^c$ and coefficients in $\{-1, 0, 1\}$, we have
\[
P_{N,c}(\cvector{f_n}) = 0 \, .
\]
\item There exists a family $\set{h_n}$ of $n$-variate polynomials and degree at most $n^c$ with coefficients in $\{-1, 0, 1\}$ such that
\[
P_{N,c}(\cvector{h_n}) \neq 0 \, .
\]
\end{itemize}
Here, $\cvector{f}$ denotes the coefficient vector of a polynomial $f$.
\end{theorem}
Many of the natural and well studied polynomial families like the Determinant, the Permanent, Iterated Matrix Multiplication, etc., have this property of bounded coefficients, and in fact the above result even holds when the coefficients are as large as $\poly(N)$.
Thus, \autoref{thm:CKRST-complexes} could be interpreted as some evidence that perhaps we could still hope to prove lower bounds for one of these polynomial families via proofs which are algebraically natural.
Extending \autoref{thm:CKRST-complexes} to obtain efficiently constructible equations for all of $\VP$ (or even for slightly weaker models like formulas or constant depth algebraic circuits) is an extremely interesting open question.
In fact, even a conditional resolution of this problem in either direction, be it showing that the bounded coefficients condition in \autoref{thm:CKRST-complexes} can be removed, or showing that there are no such equations, would be extremely interesting and would provide much needed insight into whether or not there \emph{is} a natural-proofs-like barrier for algebraic circuit lower bounds.
\subsection{Our results}
In this paper, we show that assuming the Permanent is hard, the constraint of bounded coefficients in \autoref{thm:CKRST-complexes} is necessary for efficient equations for $\VNP$.
More formally, we show the following theorem.
\begin{restatable}[Conditional Hardness of Equations for {\sf VNP}]{theorem}{MainTheorem}
\label{thm:MainThm}
Let $\epsilon > 0$ be a constant. Suppose that, for a large enough $m$, $\operatorname{Perm}_{m}$ requires circuits of size $2^{m^\epsilon}$.
Then, for $n = m^{\epsilon/4}$, any $d \leq n$ and $N = \binom{n+d}{n}$, we have that every nonzero polynomial $P(x_1,\ldots, x_N)$ that vanishes on all coefficient vectors of polynomials in $\VNP_{\mathbb{C}}(n,d)$ has $\operatorname{size}(P) = N^{\omega(1)}$.
\end{restatable}
\begin{remark*}
Our proof of the above theorem easily extends to any field of characteristic zero. We shall just work with the complex numbers for better readability.
\end{remark*}
Extending the result in \autoref{thm:MainThm} to hardness of equations for $\VP$, even under the assumption that Permanent is sufficiently hard, is an extremely interesting open question.
Such an extension would answer the main question investigated in \cite{FSV18, GKSS17} and show a natural-proofs-like barrier for a fairly general family of lower bound proof techniques in algebraic complexity.
Our proof of \autoref{thm:MainThm} however crucially relies on some of the properties of $\VNP$ and does not appear to extend to $\VP$.
Although the proof of the above theorem is quite elementary, the main message (in our opinion) is that we do not\footnote{Or rather, the results of \cite{CKRST20} and the above theorem seem to provide \emph{some} evidence for both sides!}
have compelling evidence to rule out, or accept, the efficacy of algebraic natural proofs towards proving strong lower bounds for rich classes of algebraic circuits.
\subsection{An overview of the proof}
As was observed in \cite{FSV18, GKSS17}, a lower bound for equations for a class of polynomials is equivalent to showing the existence of succinctly describable hitting sets for this class.
For our proof we show that, assuming that the permanent is sufficiently hard, the coefficient vectors of polynomials in $\VNP$ form a \emph{hitting set} for the class $\VP$.
The connection between hardness and randomness in algebraic complexity is well known via a result of Kabanets and Impagliazzo \cite{KI04}, and we use this connection, along with some additional ideas for our proof.
We briefly describe a high level sketch of our proof in a bit more detail now.
Kabanets and Impagliazzo~\cite{KI04} showed that using any explicit polynomial family $\{f_n\}$ that is sufficiently hard, one can construct a hitting set generator for $\VP$, that is, we can construct a polynomial map $\mathsf{Gen}_f:\F^{k} \rightarrow \F^{t}$ that ``fools'' any small algebraic circuit $C$ on $t$ variables in the sense that $C(y_1, y_2, \ldots, y_t)$ is nonzero if and only if the $k$-variate polynomial $C\circ\mathsf{Gen}_f$ is nonzero.
In a typical invocation of this result, the parameter $k$ is much smaller than $t$ (typically $k = \poly\log t$).
Thus, this gives a reduction from the question of polynomial identity testing for $t$-variate polynomials to polynomial identity testing for $k$-variate polynomials.
Another related way of interpreting this connection is that if $\{f_n\}$ is sufficiently hard then $\mathsf{Gen}_f$ is a polynomial map whose image does not have an equation with small circuit size.
Thus, assuming the hardness of the Permanent, this immediately gives us a polynomial map (with appropriate parameters) such that its image does not have an efficiently constructible equation.
For the proof of \autoref{thm:MainThm}, we show that the points in the image of the map $\mathsf{Gen}_{\operatorname{Perm}}$, can be viewed as the coefficient vectors of polynomials in $\VNP$, or, equivalently in the terminology in \cite{FSV18, GKSS17}, that the Kabanets-Impagliazzo hitting set generator is $\VNP$-succinct.
To this end, we work with a specific instantiation of the construction of the Kabanets-Impagliazzo generator where the underlying construction of combinatorial designs is based on Reed-Solomon codes.
Although this is perhaps the most well known construction of combinatorial designs, there are other (and in some parameters, better) constructions known.
However, our proof relies on the properties of this particular construction to obtain the succinct description.
Our final proof is fairly short and elementary, and is based on extremely simple algebraic ideas and making generous use of the fact that we are trying to prove a lower bound for equations for $\VNP$ and not $\VP$.
\paragraph{Details of the proof.}
Let us assume that for some constant $\epsilon > 0$ and for all\footnote{To be more precise, we should work with this condition for ``infinitely often'' $m\in \N$ and obtain that $\VNP$ does not have efficient equations infinitely often. We avoid this technicality for the sake of simplicity and the proof continues to hold for the more precise version with suitable additional care. } $m \in \N$, $\operatorname{Perm}_{m}$ requires circuits of size $2^{m^{\epsilon}} $.
Kabanets and Impagliazzo~\cite{KI04} showed that, for every combinatorial design $\mathcal{D}$ (a collection of subsets of a universe with small pairwise intersection) of appropriate parameters, the map
\[
\mathsf{Gen}_{\operatorname{Perm}}(\vecz) = \left(\operatorname{Perm}(\vecz_S)\;:\; S \in \mathcal{D}\right)
\]
where $\vecz_S$ denotes the variables in $\vecz$ restricted to the indices in $S$, is a hitting set generator for circuits of size $2^{o(m^{\epsilon})} $.
Our main goal is to construct a polynomial $F(\vecy,\vecz)$ in $\VNP$ such that
\begin{equation}
\label{eq:polyH}
F(\vecy,\vecz) = \sum\limits_{S\in {\cal D}} \operatorname{mon}_S(\vecy) \cdot {\sf Perm}(\vecz_S)
\end{equation}
By choosing parameters carefully, this would immediately imply that any equation in $N$ variables, for $N = \binom{n+d}{d}$, that vanishes on the coefficient vectors of polynomials in $\VNP(n,d)$ (which are $n$-variate polynomials in $\VNP$ of degree at most $d$) requires size super-polynomial in $N$.
To show that the polynomial $F(\vecy,\vecz)$ in \autoref{eq:polyH} is in $\VNP$, we use a specific combinatorial design.
For the combinatorial design ${\cal D}$ obtained via Reed-Solomon codes, every set in the design can be interpreted as a univariate polynomial $g$ of appropriate degree over a finite field.
The degree of $g$ (say $\delta$) and size of the finite field (say $p$) are related to the parameters of the design ${\cal D}$.
Now,
\begin{equation}
\label{eq:polyH2}
F(\vecy,\vecz) = \sum_{\substack{g \in \F_p[v]\\ \deg(g) \leq \delta}} \inparen{\prod_{i=0}^{\delta} y_i^{g_i}} \cdot \operatorname{Perm}(\vecz_{S(g)}),
\end{equation}
where $(g_0,\ldots,g_\delta)$ is the coefficient vector of the univariate polynomial $g$.
Expressing $F(\vecy,\vecz)$ in \autoref{eq:polyH2} as a polynomial in $\VNP$ requires us to implement the product $\inparen{\prod\limits_{i=0}^{\delta} y_i^{g_i}}$ as a polynomial when given the binary representation of coefficients $g_0,\ldots,g_{\delta}$ via a binary vector $\vect$ of appropriate length (say $r$).
This is done via the polynomial $\operatorname{Mon}(\vect,\vecy)$ in \autoref{subsec:monomials} in a straightforward manner.
Furthermore, we want to algebraically implement the selection $\vecz_S$ for a set $S$ in the combinatorial design when given the polynomial $\vecg$ corresponding to $S$.
This is implemented via the polynomial $\operatorname{RS-Design}(\vect,\vecz)$ in \autoref{subsec:selections}.
Finally, we have
\begin{align*}
F(\vecy,\vecz) &= \sum_{\vect \in \set{0,1}^r} \operatorname{Mon}(\vect, \vecy) \cdot \operatorname{Perm}(\operatorname{RS-Design}(\vect, \vecz))
\end{align*}
which is clearly in $\VNP$ as $ \operatorname{Perm}_p$ is in $\VNP$ and polynomials $\operatorname{Mon}(\vect,\vecy)$ and $\operatorname{RS-Design}(\vect, \vecz)$ are efficiently computable.
We refer the reader to \autoref{section:details} for complete details.
\paragraph*{Related results.} The concept of algebraically natural
proofs was first studied in the works of Forbes, Shpilka and Volk~\cite{FSV18} and Grochow, Kumar, Saks and Saraf~\cite{GKSS17} who showed that constructing efficient equations for a class directly contradicts a corresponding {\em succinct} derandomization of the polynomial identity testing problem.
In fact, Forbes, Shpilka and Volk~\cite{FSV18} unconditionally ruled out equations for depth-three multilinear formulas computable by certain structured classes of algebraic circuits using this connection. However, this does not imply anything about complexity of equations for general classes of algebraic circuits such as $\VP$ and $\VNP$.
In the context of proving algebraic circuit lower bounds, Efremenko, Garg, Oliveira and Wigderson~\cite{EGOW18} and Garg, Makam, Oliveira and Wigderson~\cite{GMOW19} explore limitations of proving algebraic circuit lower bounds via rank based methods.
However, these results are not directly concerned with the complexity of equations for circuit classes.
Recently, Bl\"{a}ser, Ikenmeyer, Jindal and Lysikov~\cite{BIJL18} studied the complexity of equations in a slightly different context.
They studied a problem called ``matrix completion rank'', a measure for tensors that is $\NP$-hard to compute.
Assuming $\mathsf{coNP} \nsubseteq\exists\mathsf{BPP}$, they construct an explicit tensor of large (border) completion rank such that any efficient equation for the class of tensors of small completion rank must necessarily also vanish on this tensor of large completion rank.
That is, efficient equations cannot certify that this specific tensor has large (border) completion rank.
Subsequently, this result was generalized to {\em min-rank} or {\em slice-rank} ~\cite{BILPS19}.
The set-up in these papers is different from that in our paper, and that of \cite{GKSS17,FSV18}.
One way to interpret this difference is that \cite{BIJL18} shows that ``variety of small completion rank tensors'' cannot be ``cut out'' by efficient equations, whereas the set-up of \cite{GKSS17, FSV18} and our paper would ask if \emph{every} equation for this variety requires large complexity.
In the context of equations for varieties in algebraic complexity, Kumar and Volk~\cite{KV20} proved polynomial degree bounds on the equations of the Zariski closure of the set of non-rigid matrices as well as small linear circuits over all large enough fields.
\section{Preliminaries}
\subsection{Notation}
\begin{itemize}\itemsep0pt
\item We use $[n]$ to denote the set $\set{1,\ldots, n}$ and $\inbracket{n}$ to denote the set $\set{0,1,\ldots, n}$. We also use $\N_{\geq 0}$ to denote the set of non-negative integers.
\item We use boldface letters such as $\vecx, \vecy$ to denote tuples, typically of variables. When necessary, we adorn them with a subscript such as $\vecy_{[n]}$ to denote the length of the tuple.
\item We also use $\vecx^{\vece}$ to denote the monomial $\prod x_i^{e_i}$. We write $\vecx^{\leq d}$ for the set of all monomials of degree at most $d$ in $\vecx$, and $\F[\vecx]^{\leq d} $ for the set of polynomials in $\vecx$ over the field $\F$ of degree at most $d$.
\item As usual, we identify the elements of $\F_p$ with $\{0, 1, \ldots, p-1\}$ and think of $\inbracket{n}$ as a subset of $\F_p$ in the natural way for any $n < p$.
\end{itemize}
\subsection{Some basic definitions}
\subsubsection*{Circuit classes}
\begin{definition}[Algebraic circuits] \label{defn:alg-circuits}
An \emph{algebraic circuit} is specified by a directed acyclic graph, with leaves (indegree zero; also called \emph{inputs}) labelled by field constants or variables, and internal nodes labelled by $+$ or $\times$.
The nodes with outdegree zero are called the \emph{outputs} of the circuit.
Computation proceeds in the natural way, where inductively each $+$ gate computes the sum of its children and each $\times$ gate computes the product of its children.
The \emph{size} of the circuit is defined as the number of nodes in the underlying graph.
\end{definition}
\begin{definition}[$\VP$ and $\VNP$]
A family of polynomials $\set{f_n}$, where $f_n$ is $n$-variate, is said to be in $\VP$ if $\deg(f_n)$ and the algebraic circuit complexity of $f_n$ is bounded by a polynomial function of $n$. That is, there is a constant $c \geq 0$ such that for all large enough $n$ we have $\deg(f_n), \operatorname{size}(f_n) \leq n^c$.
A family of polynomials $\set{f_n}$ is said to be in $\VNP$ if there is a family $\set{g_n(\vecx_{[n]}, \vecy_{[m]})} \in \VP$ such that $m$ is bounded by a polynomial function of $n$ and
\[
f_n(\vecx) = \sum_{\vecy \in \set{0,1}^m} g_n(\vecx, \vecy).\qedhere
\]
\end{definition}
For some $n, d \in \N$, let $\mathcal{C}_{n,d}$ be a class of $n$-variate polynomials of \emph{total} degree at most $d$.
That is, $\mathcal{C}_{n,d} \subseteq \F[\vecx]^{\leq d}$.
Similarly, we will use $\VP(n,d)$ and $\VNP(n,d)$ to denote the intersection of $\VP$ and $\VNP$ respectively, with $\F[\vecx_{[n]}]^{\leq d}$.
\subsubsection*{Equations and succinct hitting sets}
\begin{definition}[Equations for a class]
\label{def:proofs}
For $N = \binom{n + d}{n}$, a nonzero polynomial $P_N(\mathbf{Z})$ is called an {\em equation} for $\mathcal{C}_{n,d}$ if for all $f(\vecx) \in \mathcal{C}_{n,d}$, we have that $P_{N}(\cvector{f}) = 0$, where $\cvector{f}$ is the coefficient vector of $f$.
\end{definition}
Alternatively, we also say that a polynomial $P(\mathbf{Z})$ {\em vanishes} on the coefficient vectors of polynomials in class $\mathcal{C}$ if $P_{N}(\cvector{f}) = 0$ for all $f\in \mathcal{C}$.
\begin{definition}[Hitting Set Generator (HSG)]
\label{def:HSG}
A polynomial map $G:\mathbb{F}^\ell\rightarrow \mathbb{F}^n$ given by $G(z_1,\ldots,z_\ell)=(g_1(\vecz),\ldots,g_n(\vecz))$ is said to be a {\em hitting set generator (HSG)} for a class $\mathcal{C}\subseteq \mathbb{F}[\vecx]$ of polynomials if for all nonzero $P\in \mathcal{C}$, $P\circ G = P(g_1,\ldots,g_n)\not \equiv 0$.
\end{definition}
We review the definition of {\em succinct hitting sets} introduced in \cite{GKSS17, FSV18}.
\begin{definition}[Succinct Hitting Sets for a class of polynomials~\cite{GKSS17,FSV18}]
\label{def:succinct-HS}
For $N = \binom{n + d}{n}$, we say that a class of $N$-variate polynomials \emph{$\mathcal{D}_N$ has $\mathcal{C}_{n,d}$-succinct hitting sets} if for all nonzero $P(\mathbf{Z}) \in \mathcal{D}_{N}$, there exists some $f \in \mathcal{C}_{n,d}$ such that $P_{N}(\cvector{f}) \neq 0$.
\end{definition}
\subsubsection*{Hardness to randomness connection}
For our proofs, we will need the following notion of combinatorial designs, which is a collection of subsets of a universe with small pairwise intersection.
\begin{definition}[Combinatorial designs]
A family of sets $S_1,\ldots,S_N \subseteq [\ell]$ is said to be an $(\ell,m,n)$-design if
\begin{itemize}
\item $|S_i|=m$ for each $i \in [N]$
\item $|S_i \cap S_j| < n$ for any $i \neq j$. \qedhere
\end{itemize}
\end{definition}
Kabanets and Impagliazzo \cite{KI04} obtain hitting set generators from polynomials that are hard to compute for algebraic circuits.
The following lemma is crucial to the proof of our main theorem.
\begin{lemmawp}[HSG from Hardness \cite{KI04}]
\label{lem:KI-HSG-from-hardness}
Let $\{S_1,\ldots,S_N\}$ be an $(\ell,m,n)$-design and $f(\vecx_m)$ be an $m$-variate, individual degree $d$ polynomial that requires circuits of size $s$.
Then for fresh variables $\vecy_{\ell}$, the polynomial map $\operatorname{KI-gen}_{(N,\ell,m,n)}(f) : \F^{\ell} \rightarrow \F^{N} $ given by
\begin{equation}\label{eqn:KI}
\inparen{f(\vecy_{S_1}), \ldots, f(\vecy_{S_N})}
\end{equation}
is a hitting set generator for all circuits of size at most $\inparen{\frac{s^{0.1}}{N(d+1)^n}} $.
\end{lemmawp}
\section{Proof of the main theorem}
\subsection*{Notation}
\label{section:details}
\begin{enumerate}
\item For a vector $\vect = (t_1,\ldots, t_r)$, we will use the short-hand $t_{i,j}^{(a)}$ to denote the variable $t_{(i \cdot a + j + 1)}$.
This would be convenient when we consider the coordinates of $\vect$ as blocks of length $a$.
\item For integers $a,p$, we shall use $\operatorname{Mod}(a,p)$ to denote the unique integer $a_p \in [0,p-1]$ such that $a_p = a\bmod{p}$.
\end{enumerate}
As mentioned in the overview, the strategy is to convert the hitting set generator given in \eqref{eqn:KI} into a succinct hitting set generator. Therefore, we would like to assemble the coordinates of \eqref{eqn:KI} into the coefficients of a suitable polynomial. That is, we would like to build a polynomial in $\VNP$ of the form
\[
g(y_1,\ldots, y_\ell, z_1,\ldots, z_t) = \sum_{m \in \vecy^{\leq d}} m \cdot f(\vecz_{S_m})
\]
with the monomials $m \in \vecy^{\leq d}$ suitably indexing into the sets of the combinatorial design. The above expression already resembles a $\VNP$-definition and with a little care this can be made effective. We will first show that the different components of the above expression can be made succinct using the following constructions.
\subsection{Building monomials from exponent vectors}
\label{subsec:monomials}
For $n,r \in \N$, let $a = \floor{r/n}$, and define $\operatorname{Mon}_{r,n}(\vect,\vecy) $ as follows.
\[
\operatorname{Mon}_{r,n}(t_1,\ldots, t_r, y_1, \ldots, y_n) = \prod_{i=0}^{n-1} \prod_{j=0}^{a-1} \inparen{t_{i,j}^{(a)} y_{i+1}^{2^{j}} + (1 - t_{i,j}^{(a)}) }
\]
The following observation is now immediate from the definition above.
\begin{observation}
For any $(e_1,\ldots, e_n) \in \inbracket{d}^n$, we have
\[
\operatorname{Mon}_{r,n}(\operatorname{Bin}(e_1),\ldots, \operatorname{Bin}(e_n), y_1, \ldots, y_n) = y_1^{e_1} \cdots y_n^{e_n},
\]
where $\operatorname{Bin}(e)$ is the tuple corresponding to the binary representation of $e$, and $r = n \cdot \ceil{\log_2 d}$.
Furthermore, the polynomial $\operatorname{Mon}_{r,n}$ is computable by an algebraic circuit of size $\poly(n,r)$.
\end{observation}
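As a concrete sanity check, the following short Python sketch (our own illustration; the helper name \texttt{mon} and the numeric test values are ours) evaluates $\operatorname{Mon}_{r,n}$ on a bit vector encoding an exponent vector and confirms that the intended monomial is reproduced.
\begin{verbatim}
def mon(t, y, a):
    """Evaluate Mon_{r,n}(t, y), reading t in blocks of a bits (LSB first)."""
    val = 1
    for i in range(len(y)):
        for j in range(a):
            bit = t[i * a + j]
            val *= bit * y[i] ** (2 ** j) + (1 - bit)
    return val

# exponent vector e = (3, 1) with a = 2 bits per exponent, LSB first
a, y = 2, [2, 5]          # numeric y-values for a quick check
t = [1, 1, 1, 0]          # 3 -> (1, 1) and 1 -> (1, 0)
assert mon(t, y, a) == y[0] ** 3 * y[1] ** 1
\end{verbatim}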
\subsection{Indexing Combinatorial Designs Algebraically}
\label{subsec:selections}
\newcommand{\operatorname{Sel}}{\operatorname{Sel}}
Next, we need to effectively compute the hard polynomial $f$ on sets of variables in a combinatorial design, indexed by the respective monomials.
We will need to simulate some computations modulo a fixed prime $p$. The following claim will be helpful for that purpose.
\begin{claim}
\label{claim:interpolation}
For any $i,b,p \in \N_{\geq 0}$ with $i\leq p$, there exists a unique univariate polynomial $Q_{i,b,p}(v)\in \mathbb{Q}[v]$ of degree at most $b$ such that
\[
Q_{i,b,p}(a) = \begin{cases}
1 & \text{if $0 \leq a < b$ and $a \equiv i~(\bmod{p})$},\\
0 & \text{if $0 \leq a < b$ and $a \not\equiv i~(\bmod{p})$}.
\end{cases}
\]
\end{claim}
\begin{proof}
Via interpolation, we can define the unique univariate polynomial $Q_{i,b,p}(v)$ that takes the value $0$ or $1$ at the points $0,1,\ldots,b-1$ according to the conditions of the claim.
Since there are $b$ interpolation conditions, such a polynomial of degree at most $b$ always exists.
\end{proof}
For any $n,b,p \in \N_{\geq 0}$ with $n\geq p$, define
\[
\operatorname{Sel}_{n,b,p}(u_1,\ldots, u_n, v) \triangleq \sum_{i=1}^n u_i\cdot Q_{i,b,p}(v).
\]
\begin{observation}
For any $n,b,p\in \N_{\geq 0}$ with $n\geq p$, for any $0 \leq a < b$, we have that
\[
\operatorname{Sel}_{n,b,p}(u_1,\ldots, u_n, a) = u_{\operatorname{Mod}(a,p)} = u_{a\bmod{p}}
\]
The degree of $\operatorname{Sel}_{n,b,p}$ is at most $(b+1)$ and can be computed by an algebraic circuit of size $\poly(b)$.
\end{observation}
\begin{proof}
From the definition of the univariate polynomial $Q_{i,b,p}(v)$ of degree at most $b$ in \autoref{claim:interpolation}, $Q_{i,b,p}(a)$ outputs $1$ if and only if $i = a \bmod{p}$.
Hence, $\operatorname{Sel}_{n,b,p}(u_1,\ldots, u_n, a)$ is $u_{a\bmod{p}}$ and is of degree at most $(b+1)$.
\end{proof}
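To make \autoref{claim:interpolation} concrete, the Python sketch below (our own illustration, using exact rational arithmetic via \texttt{fractions}; not part of the formal development) constructs $Q_{i,b,p}$ by Lagrange interpolation and verifies the defining property for small parameters.
\begin{verbatim}
from fractions import Fraction

def Q(i, b, p):
    """Lagrange interpolant: Q(a) = 1 iff a = i (mod p), for 0 <= a < b."""
    pts = [(a, Fraction(1 if a % p == i else 0)) for a in range(b)]
    def eval_at(v):
        total = Fraction(0)
        for a, ya in pts:
            term = ya
            for c, _ in pts:
                if c != a:
                    term *= Fraction(v - c, a - c)
            total += term
        return total
    return eval_at

q = Q(2, 8, 3)            # small parameters: i = 2, b = 8, p = 3
assert [q(a) for a in range(8)] == [1 if a % 3 == 2 else 0 for a in range(8)]
\end{verbatim}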
\medskip
\noindent
And finally, we choose a specific combinatorial design to instantiate \autoref{lem:KI-HSG-from-hardness} with.
\subsection{Reed-Solomon based combinatorial designs}\label{sec:RS-designs}
For any prime $p$ and any choice of $a \leq p$, the following is an explicit construction of a $(p^2, p, a)$-combinatorial design of size $p^a$:
\begin{quote}
With the universe $U = \F_p \times \F_p$,
for every univariate polynomial $g(t) \in \F_p[t]$ of degree less than $a$, we add the set $S_g = \setdef{(i,g(i))}{i\in \F_p}$ to the collection.
\end{quote}
Since any two distinct univariate polynomials of degree less than $a$ can agree on fewer than $a$ points, it follows that the above is indeed a $(p^2, p, a)$-design.\\
The advantage of this specific construction is that it can be made succinct as follows.
For $r = a \cdot \floor{\log_2 p}$, let ${t_1,\ldots,t_r}$ be variables taking values in $\{0,1\}$.
The values assigned to the $\vect$-variables can be interpreted as a univariate polynomial over $\F_p$ of degree $< a$ by considering $\vect \in \{0,1\}^r$ as a matrix with $a$ rows and $\floor{\log_2p}$ columns\footnote{Working with $\floor{\log_2 p}$ bits (as opposed to $\ceil{\log_2 p}$) makes the proofs much simpler, and does not affect the size of the design by much.}.
The binary vector in each row represents an element in $\mathbb{F}_p$. We illustrate this with an example.
\begin{center}
\begin{tikzpicture}[transform shape]
\node (label_t) at (-3.25,0) {$\vect = $};
\matrix [matrix of math nodes, left delimiter=(, right delimiter=)](g) at (-1.75,0) {
1 & 1 & 1\\
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0\\
};
\node (convert) at (0.5,0) {$\longrightarrow$};
\matrix [matrix of math nodes, left delimiter=(, right delimiter=)] (eg) at (2.25,0) {
7\\
2\\
1\\
4\\
2\\
};
\node (label_g) at (3.75,0) {$\cong g(v)$};
\node (info1) at (0.25,-2) {For $p = 11$, $a = 5$, $g(v) = 7 + 2v + v^2 + 4v^3 + 2v^4 \in \F_{11}[v]$,};
\node (info2) at (0.25,-2.75) {$\vect$ is a $5 \times 3$ matrix that encodes the coefficients of $g(v)$.};
\end{tikzpicture}
\end{center}
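The design property itself is also easy to verify computationally; the Python sketch below (our own illustration) enumerates the Reed-Solomon design for small $p$ and $a$, and checks that all sets have size $p$ and that distinct sets intersect in fewer than $a$ points.
\begin{verbatim}
from itertools import product

def rs_design(p, a):
    """All sets S_g = {(i, g(i)) : i in F_p} for g of degree < a over F_p."""
    sets = []
    for coeffs in product(range(p), repeat=a):    # (g_0, ..., g_{a-1})
        g = lambda i, c=coeffs: sum(cj * pow(i, j, p)
                                    for j, cj in enumerate(c)) % p
        sets.append(frozenset((i, g(i)) for i in range(p)))
    return sets

p, a = 5, 2
D = rs_design(p, a)
assert len(D) == p ** a and all(len(S) == p for S in D)
assert all(len(S & T) < a for S in D for T in D if S is not T)
\end{verbatim}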
Let $\vecz$ denote the $p^2$ variables $\set{z_{1},\ldots,z_{p^2}}$, put into a $p \times p$ matrix.
Let $S$ be a set in the Reed-Solomon based $(p^2,p,a)$-combinatorial design.
We want to implement the selection $\vecz_S$ algebraically.
In the following, we design a vector of polynomials that outputs the vector of variables $\inparen{z_{0,g(0)\bmod{p}}^{(p)}, \ldots, z_{p-1,g(p-1)\bmod{p}}^{(p)} }$.
Note that as mentioned above the polynomial $g$ can be specified via variables $t_1,\ldots,t_r$.
That is,
\begin{align*}
\operatorname{RS-Design}_{p,a}(t_1,\ldots, t_r, z_{1},\ldots, z_{p^2}) & \in (\F[\vect, \vecz])^p \quad,\quad \text{for $r = a \cdot \floor{\log_2 p}$},\\
\operatorname{RS-Design}_{p,a}(t_1,\ldots, t_r, z_{1},\ldots, z_{p^2})_{i+1} &= \operatorname{Sel}_{p, p^3, p}\inparen{z_{i,0}^{(p)},\ldots, z_{i,p-1}^{(p)}, R_{i,a,p}(\vect)},\quad\text{for each $i \in \F_p $,}\\
\text{where }R_{i,a,p}(\vect) & = \sum_{j=0}^{a-1} \insquare{\inparen{\sum_{k=0}^{\ell_p - 1} t_{j,k}^{(\ell_p)} \cdot 2^{k}} \cdot \operatorname{Mod}(i^{j}, p)},\\
\text{with }\ell_p & = \floor{\log_2 p}.
\end{align*}
\begin{observation}
For any prime $p$, $a\leq p$, and $\vect \in \set{0,1}^r$ for $r = a \cdot \floor{\log_2p}$, we have
\[
\operatorname{RS-Design}_{p,a}(\vect, \vecz) = \inparen{z_{i,g(i)}\;:\; i \in \F_p},
\] where $g(v) \in \F_p[v]$ is the univariate whose coefficient vector is represented by the bit-vector $\vect$.
Furthermore, the polynomial $\operatorname{RS-Design}_{p,a}$ is computable by an algebraic circuit of size $\poly(p)$.
\end{observation}
\begin{proof}
Fix some $\vect \in \set{0,1}^r$.
From the definition of $R_{i,a,p}(\vect)$, it is clear that $R_{i,a,p}(\vect)$ returns an integer $\alpha$ such that $g(i) = \alpha\bmod p$ where $\vect$ encodes the coefficients of the polynomial $g(t)$ in binary.
Furthermore, since $\operatorname{Mod}(i^j,p)$ is the unique integer $c \in [0,p-1]$ with $c = i^j\bmod{p}$, it also follows that $R_{i,a,p}(\vect)$ is an integer in the range $[0,p^3]$.
Hence,
\[
\operatorname{Sel}_{p,p^3,p}\inparen{z_{i,0}^{(p)},\ldots, z_{i,p-1}^{(p)}, R_{i,a,p}(\vect)} = z_{i,g(i)}
\]
as claimed.
\end{proof}
\subsection{The \texorpdfstring{$\VNP$}{\sf VNP}-Succinct-KI generator}
We are now ready to show the $\VNP$-succinctness of the Kabanets-Impagliazzo hitting set generator when using a hard polynomial from $\VNP$ and a Reed-Solomon based combinatorial design.
For a prime $p$ and for the largest number $m$ such that $m^2 \leq p $, we will use $\operatorname{Perm}_{[p]} \in \F[\vecy_{[p]}]$ to denote $\operatorname{Perm}_m$ applied to the first $m^2 $ variables of $\vecy$.
We now define the polynomial $F_{n,a,p}(\vecy_{[n]},\vecz_{[p^2]}) $ as follows.
\begin{align}
\label{eqn:VNP-poly}
F_{n,a,p}(y_1,\ldots, y_n,z_1,\ldots, z_{p^2}) &= \sum_{\vect \in \set{0,1}^r} \operatorname{Mon}_{r,n}(\vect, \vecy) \cdot \operatorname{Perm}_{[p]}(\operatorname{RS-Design}_{p,a}(\vect, \vecz))\\
\text{where }r & = a \cdot \floor{\log_2 p} \nonumber
\end{align}
It is evident from the above definition that the polynomial $F_{n,a,p}(\vecy,\vecz)$ is in $\VNP$ for any $p$ that is $\poly(n)$, when seen as a polynomial in $\vecy$-variables with coefficients from $\mathbb{C}[\vecz]$.
From the construction, we have that
\[
F_{n,a,p}(y_1,\ldots, y_n, z_1,\ldots z_{p^2}) = \sum_{\vece} \vecy^{\vece} \cdot \operatorname{Perm}_{[p]}(\vecz_{S_\vece}),
\]
where $\set{S_\vece}$ is an appropriate ordering of the Reed-Solomon based $(p^2, p,a)$-combinatorial design of size $p^a$, described in \autoref{sec:RS-designs}.
\subsection{Putting it all together}
We are now ready to show that if the Permanent polynomial is exponentially hard, then any polynomial $P$ that vanishes on the coefficient vectors of all polynomials in the class {\sf VNP} requires super-polynomial circuit size.
\MainTheorem*
\begin{proof}
Let $p$ be the smallest prime larger than $m^2$; we know that $p \leq 2m^2$. We will again use $\operatorname{Perm}_{[p]} \in \F[\vecy_{[p]}]$ to denote $\operatorname{Perm}_m$ acting on the first $m^2 $ variables of $\vecy$. Therefore, if $\operatorname{Perm}_m$ requires size $2^{m^\epsilon}$ then so does $\operatorname{Perm}_{[p]}$.\\
Consider the polynomial $F_{n,n,p}(\vecy_{[n]}, \vecz_{[p^2]}) \in \VNP$ defined in \eqref{eqn:VNP-poly}, which we interpret as a polynomial in $\vecy$ with coefficients in $\mathbb{C}[\vecz]$.
The individual degree in $\vecy$ is at least $d$, and at most $p$.\\
Let $F_{n,n,p}^{\leq d}(\vecy_{[n]},\vecz_{[p^2]})$ denote the polynomial obtained from $F_{n,n,p}$ by discarding all terms whose total degree in $\vecy$ exceeds $d$.
By standard homogenisation arguments, it follows that $F^{\leq d}_{n,n,p} \in \VNP$ as well.
Therefore,
\[
F^{\leq d}_{n,n,p}(\vecy, \vecz) = \sum_{\deg(\vecy^\vece) \leq d} \vecy^{\vece} \cdot \operatorname{Perm}_{[p]}(\vecz_{S_\vece}),
\]
where $S_{\vece}$, for various $\vece$, is an appropriate indexing into a $(p^2, p, n)$-combinatorial design of size $N$.
Since the individual degree in $\vecy$ of $F_{n,n,p}$ was at least $d$, every coefficient of $F^{\leq d}_{n,n,p}$ is $\operatorname{Perm}_{[p]}(\vecz_{S})$ for some $S$ in the combinatorial design.
In other words, the coefficient vector of $F^{\leq d}_{n,n,p}$ is precisely $\operatorname{KI-gen}_{N,p^2, p, n}(\operatorname{Perm}_{[p]})$.
Suppose $P(x_1,\dots, x_N)$ is a nonzero equation for $\VNP(n,d)$, then in particular it should be zero on the coefficient vector of $F^{\leq d}_{n,n,p}(\vecy,\veca) \in \VNP$ for any $\veca \in \mathbb{C}^{p^2}$. By the Polynomial Identity Lemma~\cite{O22,DL78,Z79,S80}, this implies that $P$ must be zero on the coefficient vector of $F^{\leq d}_{n,n,p}(\vecy,\vecz) \in (\mathbb{C}[\vecz])[\vecy]$, where coefficients are formal polynomials in $\mathbb{C}[\vecz]$.
Since the coefficient vector of $F^{\leq d}_{n,n,p}(\vecy, \vecz)$ is just $\operatorname{KI-gen}_{N,p^2, p, n}(\operatorname{Perm}_{[p]})$, the contrapositive of \autoref{lem:KI-HSG-from-hardness} gives that
\begin{align*}
\operatorname{size}(P) & > \frac{\operatorname{size}(\operatorname{Perm}_{[p]})^{0.1}}{N \cdot 2^n} > \frac{\operatorname{size}(\operatorname{Perm}_{m})^{0.1}}{N \cdot 2^n}\\
\implies \operatorname{size}(P) & > \frac{2^{0.1 m^{\epsilon}}}{N \cdot 2^n}
\end{align*}
Since $N = \binom{n+d}{n} \leq 2^{2n} \leq 2^{o(m^\epsilon)}$, it follows that $\operatorname{size}(P) = N^{\omega(1)}$.
\end{proof}
\paragraph*{Concluding that {\sf VNP} has no efficient equations}
Note that for a family $\set{P_N}$ to be a \emph{family of equations} for a class $\mathcal{C}$, we want that for \emph{all large enough} $n$, the corresponding polynomial $P_N$ should vanish on the coefficient vectors of all $n$-variate polynomials in $\mathcal{C}$.
This condition is particularly important if we want to use equations for $\mathcal{C}$ to prove lower bounds against it, since a family of polynomials $\set{f_n}$ is said to be computable in size $s(n)$ if $\operatorname{size}(f_n) \leq s(n)$ for \emph{all large enough} $n$.
\autoref{thm:MainThm} shows that, for $m$ large enough, if there is a constant $\epsilon > 0$ such that $\operatorname{size}(\operatorname{Perm}_m) \geq 2^{m^{\epsilon}}$, then for $n = m^{\epsilon/4}$ and any $d\leq n$, the coefficient vectors of polynomials in $\VNP(n,d)$ form a hitting set for all $N$-variate polynomials (where $N= \binom{n+d}{d}$) of degree $\poly(N)$ that are computable by circuits of size $\poly(N)$.
Now suppose the Permanent family is $2^{m^{\epsilon}}$-hard for a constant $\epsilon > 0$, which means that $\operatorname{Perm}_m$ is $2^{m^{\epsilon}}$-hard for \emph{infinitely many} $m \in \N$.
Then using \autoref{thm:MainThm}, we can conclude that for any family $\set{P_N} \in \VP$, we must have for \emph{infinitely many} $n$ that $P_N(\cvector{f_n}) \neq 0$ for some $f_n \in \VNP$, which then shows that $\set{P_N}$ is not a family of equations for $\VNP$.
\section{Discussion and Open Problems}
In the context of proving circuit lower bounds, and in relation to the notion of \emph{algebraically natural proofs}, an interesting question that emerges from the recent work of Chatterjee and the authors~\cite{CKRST20} (stated in \autoref{thm:CKRST-complexes}) is whether the condition of ``small coefficients'' is necessary for efficiently constructible equations to exist, especially for the class $\VP$.
While this question remains open for $\VP$, our result shows that this additional restriction on the coefficients is essentially vital for the existence of efficiently constructible equations for the class $\VNP$, and therefore provides strong evidence \emph{against} the existence of efficient equations for $\VNP$.
In light of \autoref{thm:CKRST-complexes} and \autoref{thm:MainThm} for $\VNP$, one could make a case that equations for $\VP$ might also incur a super-polynomial blow up, without the restriction on coefficients.
On the other hand, it could also be argued that an analogue of \autoref{thm:MainThm} may not be true for $\VP$, since our proof crucially uses the fact that $\VNP$ is ``closed under exponential sums''.
In fact, our proof essentially algebraises the intuition that coefficient vectors of polynomials in $\VNP$ ``look random'' to a polynomial in $\VP$, \emph{provided that} $\VNP$ was exponentially more powerful than $\VP$.
Thus, along with the previously known results on efficient equations for polynomials in $\VP$ with bounded coefficients, our result highlights that the existence of such equations for $\VP$ in general continues to remain an intriguing mystery.
\subsection*{Open Problems}
We now conclude with some possible directions for extending our results.
\begin{itemize}
\item Perhaps the most interesting question here is to prove an analogue of \autoref{thm:MainThm} for equations for $\VP$.
This would provide concrete evidence for the possibility that we cannot hope to prove very strong lower bounds for algebraic circuits using proofs which proceed via efficiently constructible equations, from a fairly standard complexity theoretic assumption.
\item At the moment, we cannot rule out the possibility of there being efficient equations for $\VP$ in general; it may be possible that the bounded coefficients condition in \autoref{thm:CKRST-complexes} can be removed. In particular, the question of proving upper bounds on the complexity of equations for $\VP$ is also extremely interesting, even if one proves such upper bounds under some reasonable complexity theoretic assumptions.
A first step perhaps would be to prove upper bounds on the complexity of potentially simpler models, like formulas, algebraic branching programs or constant depth circuits. From the works of Forbes, Shpilka and Volk~\cite{FSV18}, we know that such equations for structured subclasses of $\VP$ (like depth-$3$ multilinear circuits) cannot be \emph{too} simple (such as sparse polynomials, depth-$3$ powering circuits, etc.). Can we prove a non-trivial upper bound for equations for these structured classes within $\VP$?
\item Another question of interest would be to understand if the hardness assumption in \autoref{thm:MainThm} can be weakened further.
For instance, is it true that $\VNP$ does not have efficiently constructible equations if $\VP \neq \VNP$, or if $\operatorname{Perm}_n$ requires circuits of size $n^{\poly\log(n)}$?
The current proof seems to need an exponential lower bound for the Permanent.
\end{itemize}
\subsection*{Acknowledgements}
We thank an anonymous reviewer of FOCS 2020, and Joshua Grochow, whose questions pointed us in the direction of this result.
We also thank Prerona Chatterjee and Ben Lee Volk for helpful discussions at various stages of this work.
\bibliographystyle{customurlbst/alphaurlpp}
\section{Introduction}
\setcounter{equation}{0}
\label{sec:Intro}
As is well known, the decay/resonant amplitude of a resonance cannot
exactly coincide with the Breit-Wigner amplitude. One reason is that the
Breit-Wigner
amplitude yields the exponential decay law only when it is defined over the
whole of the energy real line $(-\infty , \infty )$ rather than just over the
scattering spectrum. Because in quantum mechanics
the scattering spectrum has a lower bound, the Breit-Wigner amplitude would
yield the exponential decay law only if it was defined also at energies that
do not belong to the scattering spectrum.
Another reason why the Breit-Wigner amplitude cannot exactly coincide with the
resonant amplitude is that the energy (i.e., spectral) representation of the
Gamow state is given by the complex delta function rather than by the
Breit-Wigner amplitude. Because the Gamow state is the natural wave function
of a resonance, the exact resonant amplitude is given by the complex delta
function.
Even though it is well known that it cannot exactly coincide with the
resonant amplitude, the Breit-Wigner amplitude is often used
to describe the decay of unstable systems. Two classic examples are Fermi's
two-level system and the Weisskopf-Wigner approximation. However, whenever
it is used in such examples, the Breit-Wigner amplitude is
extended from the scattering spectrum to the whole real line of energies
in order to obtain some desirable, causal results.
Because the exact resonant amplitude is given by the complex delta function,
one may wonder if such desirable results can be recovered by way of the
complex delta function without using the approximation of extending energy
integrations to the whole real line. The purpose of this
paper is to show how this can be done.
In Secs.~\ref{sec:Fermi}, \ref{sec:other} and~\ref{sec:WWAPRSO}, we recall,
respectively, the main features of Fermi's two-level system, a standard
treatment of unstable states, and the Weisskopf-Wigner approximation.
In Sec.~\ref{sec:subs}, we explain why the complex delta function gives us
the same results as the Breit-Wigner amplitude without extending energy
integrations outside of the scattering spectrum. In Sec.~\ref{sec:con}, we state
our conclusions.
\section{Fermi's two level system}
\setcounter{equation}{0}
\label{sec:Fermi}
In 1932, Fermi constructed a simple model to check whether Quantum Mechanics
is compatible with Einstein causality~\cite{FERMI32}. He considered a pair
of two atoms A and B separated by a distance $R$. The states of each atom
form a two-level
system (see Fig.~\ref{states}a). The energy gap of each two-level system is
$hu$ (see Fig.~\ref{states}a). The initial state is such that atom A is in
the excited state, whereas atom B is in the ground
state (see Fig.~\ref{states}a). When atom A decays to its ground state,
it emits a photon of energy $hu$. This photon may eventually hit
atom~B, causing atom~B to reach the excited state. The final
state is such that atom A~is in the ground
state, whereas atom~B is in the excited state (see Fig.~\ref{states}b). Fermi
then calculated the probability ${\cal P}_{i\to f}(t)$ of going from
the initial state of Fig.~\ref{states}a to the final state of
Fig.~\ref{states}b. According to Einstein causality, ${\cal P}_{i\to f}(t)$
should be zero for any instant $t$ less than $R/c$, i.e., for any $t$
less
than what it takes the photon to go from atom A to atom B (see
Fig.~\ref{outcome}a). This is the result that Fermi obtained~\cite{FERMI32}.
About the same time Fermi proposed this model, von Neumann published his
book on the mathematical foundations of Quantum
Mechanics~\cite{vonNeuman}. According to von Neumann, the energy observable
is represented by a linear, self-adjoint operator, called Hamiltonian, that
acts on a Hilbert space. The spectrum of the Hamiltonian, which is
identified with the physical spectrum, should be bounded from below
(i.e., semibounded).
In 1966, Shirokov~\cite{Shirokov66} pointed out that, in order to obtain the
result of Fig.~\ref{outcome}a, Fermi had approximated an integral over
positive energies (i.e., over the scattering spectrum) by an integral over the
full energy real line $(-\infty ,\infty )$. Such integral involves the
Breit-Wigner amplitude. This approximation is
{\it crucial} to Fermi's calculation: if the integral is performed over the
scattering spectrum, then the causal result of Fig.~\ref{outcome}a does not
hold~\cite{Shirokov66}. In fact, in 1994 Hegerfeldt~\cite{Hegerfeldt94}
showed, in a model independent manner, that the problem pointed out by Shirokov
within Fermi's system is quite general: the semiboundedness of the Hamiltonian
leads to conflicts with causality. More precisely, according to
Hegerfeldt's theorem, Quantum Mechanics predicts that
either atom A never decays (see Fig.~\ref{outcome}b), or else there is
a non-zero probability that atom~B reaches the excited state before
the photon from atom A can possibly arrive at atom~B (see Fig.~\ref{outcome}c).
\section{Unstable states}
\setcounter{equation}{0}
\label{sec:other}
Approximations similar to Fermi's approximation can be found in standard
textbooks dealing with unstable states. For example, in Sec.~13.d of
Ref.~\cite{TAYLOR},
Taylor uses such kind of approximation when dealing with the
decay of a resonant state. More precisely, Taylor's equation~(13.3) reads
as
\begin{equation}
\psi _{\rm sc}({\bf x},t)= {\rm constant}\
\Gamma Y_l^0(\hat{\rm x})\phi_l(E_{\rm R})
\frac{e^{i(p_{\rm R}r-E_{\rm R}t)}}{p_{\rm R}^{1/2}r}
\times \int_0^{\infty}dE
\frac{e^{i(E-E_{\rm R})(t-r/v_{\rm R})}}{E-E_{\rm R}+i\Gamma /2} \, ,
\end{equation}
where $z_{\rm R}= E_{\rm R}-i\Gamma /2$ is the complex resonant energy. Taylor
then continues by saying that ``the integral can be
extended to $-\infty$ without significantly affecting its value.'' After
such extension, Taylor obtains the following desirable result:
\begin{equation}
|\psi _{\rm sc}({\bf x},t)|^2= 2 \pi m
\Gamma^2 |Y_l^0(\hat{\rm x})|^2 |\phi_l(E_{\rm R})|^2
\frac{e^{-\Gamma(t-r/v_{\rm R})}}{p_{\rm R}r^2}
\theta \left( t -\frac{r}{v_{\rm R}}\right) .
\label{psideca}
\end{equation}
Equation~(\ref{psideca}) implies that the decay of a resonant state
follows the exponential decay law in a causal manner. Clearly,
Eq.~(\ref{psideca}) does not hold exactly when the integration
is done over the scattering spectrum, just like causality is not preserved
in Fermi's two-level system when the integration is done over the
scattering spectrum.
In Ref.~\cite{BALLENTINE}, Ballentine treats the decay of a resonance
in a similar way to Taylor. Ballentine also extends an energy
integral to the whole real line (see Eq.~(16.120) of Ref.~\cite{BALLENTINE})
in order to obtain a desirable, causal result.
\section{Weisskopf-Wigner approximation}
\setcounter{equation}{0}
\label{sec:WWAPRSO}
In quantum mechanics, the approximation of extending the range of the
Breit-Wigner amplitude to the whole real line is often referred to as
the Weisskopf-Wigner approximation. Such approximation is used in many
calculations. For example, in Ref.~\cite{SCULLY}, Scully and Zubairy
calculate the following amplitude for the first-order correlation function:
\begin{equation}
\langle 0|E^{(+)}({\bf r},t)|\gamma _0\rangle =
\frac{ic {\cal P}_{ab} \sin \eta}{8\pi ^2 \epsilon _0 \Delta r}
\times \int_0^{\infty}dk k^2(e^{ik\Delta r}- e^{-ik\Delta r})
\frac{e^{-i\nu _kt}}{(\nu _k -\omega )+i\Gamma /2} \, .
\label{lrigkdkt}
\end{equation}
Then, Scully and Zubairy extend the range of the integral to the
whole real line and obtain a desirable causal result for the first-order
correlation function:
\begin{equation}
G^{(1)}({\bf r},{\bf r};t,t)=
|\langle 0|E^{(+)}({\bf r},t)|\gamma _0\rangle|^2 =
\frac{ |{\cal E}_0|^2}{|{\bf r}-{\bf r}_0|^2}
\theta (t- \frac{|{\bf r}-{\bf r}_0|}{c})
e^{-\Gamma (t-|{\bf r}-{\bf r}_0|/c)} \, .
\end{equation}
As in the above examples, this result cannot be obtained unless the
range of the frequency (energy) integration in Eq.~(\ref{lrigkdkt})
is extended to the whole real line.
\section{Substituting the Breit-Wigner amplitude by the complex
delta function}
\setcounter{equation}{0}
\label{sec:subs}
From the above examples, we have seen that, whenever we describe the
decay of an unstable state by the Breit-Wigner amplitude, we arrive
at an integral of the form
\begin{equation}
\int_0^{\infty}dE \, e^{-iEt} \frac{f(E)}{E-z_{\rm R}} \, ,
\label{ldlds}
\end{equation}
where $f(E)$ is an analytic function of $E$, and
$z_{\rm R}=E_{\rm R}-i\Gamma /2$ is the resonant energy. By assuming that the
extension of the integral to the whole real line
makes little error, one gets
\begin{equation}
\int_{-\infty}^{\infty}dE \, e^{-iEt} f(E) \frac{1}{E-z_{\rm R}} =
\frac{2 \pi}{i}
f(z_{\rm R}) e^{-iE_{\rm R}t}e^{-\Gamma t/2} \, , \qquad t>0 \, .
\label{desidkdkd}
\end{equation}
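For completeness, we recall the contour argument behind~(\ref{desidkdkd}).
For $t>0$, the factor $e^{-iEt}$ decays in the lower half of the complex
$E$ plane, so (assuming $f$ is analytic there and decays fast enough for the
arc contribution to vanish) the contour can be closed from below; the only
enclosed singularity is the simple pole at $E=z_{\rm R}$, and the residue
theorem, with a factor $-2\pi i$ for the clockwise orientation, gives
\begin{equation*}
\int_{-\infty}^{\infty}dE \, e^{-iEt} \frac{f(E)}{E-z_{\rm R}} =
-2\pi i \, f(z_{\rm R}) \, e^{-iz_{\rm R}t} =
\frac{2\pi}{i} \, f(z_{\rm R}) \, e^{-iE_{\rm R}t}e^{-\Gamma t/2} \, .
\end{equation*}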
Equations similar
to~(\ref{desidkdkd}) are widely used in the literature on resonances
(see e.g.~review~\cite{ROSAS}).
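The error incurred by the extension can also be probed numerically. The
following Python sketch (our own illustration, with toy parameter values)
evaluates the truncated integral~(\ref{ldlds}) for $f(E)=1$ using the
oscillatory-weight quadrature of {\tt scipy}, and compares it with the
full-line value~(\ref{desidkdkd}); the two agree closely when
$\Gamma \ll E_{\rm R}$, the small difference being the contribution of the
lower bound of the energy.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

E_R, Gamma, t = 1.0, 0.1, 5.0        # toy resonance parameters
# real/imaginary parts of the Breit-Wigner factor 1/(E - z_R)
den = lambda E: (E - E_R) ** 2 + (Gamma / 2) ** 2
gr = lambda E: (E - E_R) / den(E)
gi = lambda E: -(Gamma / 2) / den(E)

def osc(f, kind):                    # integral of f(E)*{cos,sin}(t E)
    val, _ = quad(f, 0.0, np.inf, weight=kind, wvar=t)
    return val

# exp(-iEt) = cos(tE) - i sin(tE), multiplied against gr + i*gi
truncated = (osc(gr, 'cos') + osc(gi, 'sin')) \
            + 1j * (osc(gi, 'cos') - osc(gr, 'sin'))
full_line = (2 * np.pi / 1j) * np.exp(-1j * E_R * t - Gamma * t / 2)
print(truncated, full_line)          # close, but not equal
\end{verbatim}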
Clearly, the desirable result~(\ref{desidkdkd}) is obtained by using the
approximation of extending the energy integration to the whole real
line. It seems therefore pertinent to try to
recover~(\ref{desidkdkd}) as an exact result. In order to do so, we are
going to substitute the Breit-Wigner amplitude by the complex delta function.
The complex delta function was introduced by Nakanishi~\cite{NAKANISHI}
to describe resonances in the Lee model~\cite{LEE}, and it has been used by
a number of authors (see e.g.~Refs.~\cite{GONZALO00,GONZALO1,GONZALO2} and
references therein). As shown in Ref.~\cite{NPA}, describing resonances by
means of the complex
delta function is the same as describing resonances by means of the
Gamow state~\cite{GAMOW,SIEGERT,PEIERLS,HUMBLET,ZELDOVICH,BERGGREN,GASTON,
BERGGREN78,SUDARSHAN,MONDRAGON83,CURUTCHET,BL,LIND,BERGGREN96,BOLLINI,
FERREIRA,BETAN,MICHEL1,AJP02,KAPUSCIK1,MONDRAGON03,MICHEL2,KAPUSCIK2,05CJP,
MICHEL3,MICHEL4,MICHEL5,URRIES,MICHEL6,COSTIN,ROSAS22,TOMIO,MICHEL7,
HATANO,HUANG}. We recall that the Gamow states are eigenfunctions of the
Hamiltonian subject to purely outgoing boundary conditions. The eigenvalue of the
Gamow state is also a pole of the $S$ matrix. The
resonant amplitude associated with the Gamow states is given
by the complex delta function, and the Breit-Wigner amplitude is just
an approximate resonant amplitude that is valid whenever we neglect the lower
bound of the energy.\footnote{One may argue that the Breit-Wigner amplitude is
an approximation also because the exact formula corresponding to
the denominator $E-z_{\rm R}$ is a much more complicated function of $E$ that
includes a self-energy contribution.}
Mathematically, the complex delta function is a distribution that associates,
with a test function $g$, the value of such function at $z=z_{\rm R}$:
\begin{equation}
\int_{0}^{\infty}dE \, g(E) \delta (E-z_{\rm R}) = g(z_{\rm R}) \, .
\label{compdfdef}
\end{equation}
Now, if the resonant amplitude is given by the complex delta function
(rather than by the Breit-Wigner amplitude
$\frac{1}{E-z_{\rm R}}$), and if the scattering spectrum is the
positive real line (rather than the whole real line), Eq.~(\ref{ldlds})
should be written as
\begin{equation}
\int_{0}^{\infty}dE \, e^{-iEt} f(E) \delta (E-z_{\rm R}) \, .
\label{deltaintpos}
\end{equation}
By combining Eqs.~(\ref{compdfdef}) and~(\ref{deltaintpos}), we obtain
\begin{equation}
\int_{0}^{\infty}dE \, e^{-iEt} f(E) \delta (E-z_{\rm R}) =
f(z_{\rm R}) e^{-iE_{\rm R}t}e^{-\Gamma t/2} \, ,
\label{aadeltaintpos}
\end{equation}
which, up to a numerical factor, coincides with the
desirable result~(\ref{desidkdkd}). In addition, since the time
evolution of the complex delta function is defined only for
$t>0$ (see Refs.~\cite{GONZALO00,GONZALO1,GONZALO2}), Eq.~(\ref{aadeltaintpos})
is valid only for $t>0$:
\begin{equation}
\int_{0}^{\infty}dE \, e^{-iEt} f(E) \delta (E-z_{\rm R}) =
f(z_{\rm R}) e^{-iE_{\rm R}t}e^{-\Gamma t/2} \, , \qquad t>0 \, .
\label{aadeltaintposaa}
\end{equation}
Thus, instead of using the Breit-Wigner amplitude and integrating over the
whole real line as in Eq.~(\ref{desidkdkd}), we can integrate over
the scattering spectrum and use the complex delta function as in
Eq.~(\ref{aadeltaintposaa}) to obtain the same result.
\section{Conclusion}
\setcounter{equation}{0}
\label{sec:con}
In Quantum Mechanics, the combination of two approximations --the approximation
of describing the decay of an unstable state by means of the
Breit-Wigner amplitude, and the approximation of
extending the Breit-Wigner amplitude to the whole real line-- yields
desirable, causal results for the decay of a resonance. In this paper, we
have seen that if we replace the Breit-Wigner amplitude by the complex delta
function, it is possible
to recover such desirable results without the need to extend any energy
integration outside of the physical scattering spectrum. This result provides
another argument in favor of seeing the complex delta function as
the exact resonant amplitude, and the Breit-Wigner amplitude as an
approximate resonant amplitude that is valid whenever we can neglect
the lower bound of the energy \cite{NPA}. This result also shows that
what Fermi's, Weisskopf-Wigner's and similar approximations do is
basically to neglect the effect of the lower bound of the energy.
\vskip1cm
{\it Acknowledgment}. The author wishes to thank the organizers
of the YKIS2009 workshop for their invitation and warm hospitality. The
author also wishes to thank the participants for lively, enlightening
discussions, especially Profs.~Tomio Petrosky, Naomichi Hatano, Gonzalo
Ordo\~nez, and Buang Ann Tay.
\section{Introduction}\label{sec_intro}
Modern dense storage devices, e.g., multi-level Flash and magnetic recording (MR) devices, operate at very low frame error rate (FER) values, motivating the need for strong error correction techniques. Because of their capacity approaching performance, low-density parity-check (LDPC) codes \cite{gal_th, rich_urb1, rich_urb2, ahh_smm} are becoming the first choice for many of the modern storage systems \cite{ahh_jsac, aslm_fl, maeda_fl, jia_sym, kin_sym, ahh_bas, shafa, yuval1, yuval2}. Under iterative quantized decoding, LDPC codes suffer from the error floor problem, which is a change in the FER slope that undermines the chances of reaching desirable very low FER levels \cite{lara_floor, bani_est, b_ryan, hu_sim, tom_ferr}. It was demonstrated in the literature that absorbing sets (ASs), which are detrimental subgraphs in the Tanner graph of the LDPC code, are the principal cause of the error floor problem \cite{lara_as, behzad_elem}. There are other works that studied different classes of detrimental objects, specifically, stopping sets \cite{decl_nb} and trapping sets \cite{olgica1}, \cite{vas_trap1}. Research works investigating the error floor problem of LDPC codes include \cite{b_ryan, lara_as, olgica1, vas_trap1, jia_cyc, olgica2, siegel_flr, bani_trap, vas_trap2, shu_flr1, shu_flr2}.
Particularly for non-binary LDPC (NB-LDPC) codes, the authors in \cite{behzad_elem} used concepts from \cite{decl_nb} to study non-binary elementary absorbing sets (EASs), and showed that EASs are the detrimental objects which contribute the most to the error floor of NB-LDPC codes over the canonical additive white Gaussian noise (AWGN) channel. The observation that the combinatorial structure of the dominant detrimental objects critically depends on the characteristics of the channel of interest was first introduced in \cite{ahh_glc} and then discussed in \cite{ahh_bas}; we introduced balanced absorbing sets (BASs) and demonstrated their dominance in the error floor of NB-LDPC codes over partial-response (PR) channels, which exemplify 1-D MR channels \cite{vas_prc, cola_pr}. Motivated by the asymmetry possessed by practical Flash channels \cite{mit_nl, cai_fl}, in a recent research \cite{ahh_jsac}, we introduced general absorbing sets (GASs) and general absorbing sets of type two (GASTs) to capture the dominant problematic objects over realistic Flash channels. GASs and GASTs subsume previously introduced AS subclasses (namely EASs and BASs).
In \cite{behzad_elem} and \cite{ahh_bas}, NB-LDPC code optimization algorithms tailored to AWGN and PR channels, respectively, were proposed. While the weight consistency matrix (WCM) framework introduced in \cite{ahh_jsac} was originally motivated by the need to optimize NB-LDPC codes for asymmetric Flash channels, we customized this methodology to be suitable for channels with memory (e.g., PR channels), canonical symmetric channels (e.g., AWGN channels), as well as practical Flash channels, achieving at least $1$ order of magnitude performance gain over all these channels. The principal idea of the WCM framework is representing a problematic object, e.g., a GAST, using a small set of matrices, called WCMs. Since problematic objects in an NB-LDPC code are described in terms of both their weight conditions as well as their topological conditions, there are explicit weight properties associated with the WCMs of an object. By changing the null spaces of the WCMs associated with an object such that the weight conditions of all these WCMs are broken \cite{ahh_jsac}, this problematic object is removed from the Tanner graph of the code. A key feature of the WCM framework is that the GASTs removal process is performed solely via manipulating the edge weights of the Tanner graph of the NB-LDPC code, which consequently preserves all the structural topological properties of the code being optimized.
For NB-LDPC codes with fixed column weights (fixed variable node degrees), our contributions in this paper are:
\begin{enumerate}
\item We characterize GASTs via their WCMs. In particular, we define the unlabeled GAST tree to describe the underlying topology of a GAST, where the leaves of this tree represent the WCMs of the GAST. Using this tree, we prove the optimality of the WCM framework by demonstrating that the framework indeed operates on the minimum possible number of matrices to remove the detrimental object. We also deploy concepts from graph theory and combinatorics to compute the exact number of WCMs associated with a GAST in different cases. We further compare the number of matrices the WCM framework operates on with the number of matrices a suboptimal idea works with, showing the significant reduction (up to about $90\%$) achieved by the WCM framework in the cases of interest.
\item Based on tools from graph theory and linear algebra, we propose a comprehensive analysis of the removal process of GASTs. We start off with discussing the dimensions of the null spaces of WCMs; these null spaces play the central role in the identification and removal of a GAST. Then, we derive the best that can be done to process a short WCM (a WCM that has fewer rows than columns) during the GAST removal process. Finally, we provide the minimum number of edge weight changes\footnote{In the WCM framework, a GAST is removed via careful processing of the weights of its edges (the original and the new weights are not zeros). Throughout this paper, the edge weight changes are always with respect to the original configuration.} needed to remove a GAST, along with how to select the edges and the new weights to guarantee the removal of the GAST through its WCMs.
\item We introduce new combinatorial objects that capture the majority of the non-GAST detrimental objects in the error floor region of NB-LDPC codes that have even column weights over asymmetric Flash channels. We define oscillating sets (OSs) and oscillating sets of type two (OSTs). Furthermore, we expand the analysis of GASTs in \cite{ahh_jsac} to cover OSTs, describing how the WCM framework can be customized to remove OSTs, after GASTs have been removed, to achieve additional performance gains.
\item We extend the scope of the WCM framework by using it to optimize codes with different properties and for various applications. Specifically, we show that despite the good error floor performance of NB-LDPC codes with column weight $5$ before optimization, more than $1$ order of magnitude gain in the uncorrectable bit error rate (UBER) over practical Flash channels is achievable via the WCM framework. We further apply the theoretical concepts in item 3 for NB-LDPC codes with column weight $4$ over practical Flash channels to achieve overall UBER gains up to nearly $2.5$ orders of magnitude. Additionally, we optimize NB-LDPC codes for practical Flash channels with more soft information ($6$ reads). We also use the WCM framework to optimize NB-LDPC codes with irregular check node (CN) degrees and fixed variable node (VN) degrees; we show that more than $1$ order of magnitude performance gain in the FER is achievable by optimizing spatially-coupled (SC) codes \cite{kud_sc, lent_asy, andr_asy, pus_sc, olm_sc, iye_sc} used over PR and AWGN channels.
\end{enumerate}
The rest of the paper is organized as follows. Section~\ref{sec_sum} summarizes the main concepts of the WCM framework. Then, Section~\ref{sec_cch} discusses the characterization of GASTs through their WCMs, in addition to the optimality proof and the WCMs enumeration. In Section~\ref{sec_rem} we detail our analysis for the process of the GAST removal through WCMs. Afterwards, Section~\ref{sec_os} discusses OSTs and how to customize the WCM framework to remove them. The simulation results are presented in Section~\ref{sec_sim}. Finally, the paper is concluded in Section~\ref{sec_conc}.
\section{Summary of the WCM Framework}\label{sec_sum}
In this section, along with Appendices~\ref{sec_appa} and \ref{sec_appb}, we provide a brief summary of the main concepts and ideas of the WCM framework for the sake of clarity and completeness. Details of the WCM framework were introduced in \cite{ahh_jsac}.
Consider the Tanner graph of an LDPC code. An $(a, b)$ AS in this graph is defined as a set of $a$ VNs with $b$ unsatisfied CNs connected to it such that each VN is connected to strictly more satisfied than unsatisfied CNs, for some set of VN values (these $a$ VNs have non-zero values, while the remaining VNs are set to zero) \cite{lara_as, behzad_elem}. For canonical channels, e.g., the AWGN channel, it was shown that elementary ASs (EASs) are the objects that dominate the error floor region of NB-LDPC codes \cite{behzad_elem}. EASs have the property that all satisfied CNs are of degree $2$, and all unsatisfied CNs are of degree $1$. The different characteristics of storage channels (compared with the AWGN channel) result in changing the combinatorial properties of detrimental objects in NB-LDPC codes simulated over such channels.
Asymmetry, as in Flash channels \cite{mit_nl, cai_fl}, can result in VN errors having high magnitudes, which is typically not the case for canonical channels. These VN errors with high magnitudes make it very difficult for unsatisfied CNs with degree $2$ to participate in correcting an AS error. Consequently, it becomes more likely to have AS errors with degree-$2$ unsatisfied CNs, which are non-elementary AS errors. This was the motivation behind introducing GASs and GASTs in \cite{ahh_jsac} to capture the objects that dominate the error floor region of NB-LDPC codes over asymmetric channels (e.g., Flash channels).
The intrinsic memory, as in PR channels \cite{ahh_bas, vas_prc}, can also result in VN errors having high magnitudes. Moreover, the global iterations (detector-decoder iterations) help the decoder correct AS errors with higher numbers of unsatisfied CNs. Thus, the objects that dominate the error floor region of NB-LDPC codes simulated over PR channels can also have unsatisfied CNs with degree $2$ (non-elementary), and they have a fewer number of unsatisfied (particularly degree-$1$) CNs, which is the reason why they are called ``balanced''. BASs and BASs of type two (BASTs) were introduced in \cite{ahh_bas} and \cite{ahh_jsac} to capture such detrimental objects.
We start off with the definitions of a GAS and an unlabeled GAS \cite{ahh_jsac}.
\vspace{-0.1em}
\begin{definition}\label{def_gas}
Consider a subgraph induced by a subset $\mathcal{V}$ of VNs in the Tanner graph of an NB-LDPC code. Set all the VNs in $\mathcal{V}$ to values $\in$ GF($q$)$\setminus \{0\}$ and set all other VNs to $0$. The set $\mathcal{V}$ is said to be an $(a, b, b_2, d_1, d_2, d_3)$ \textbf{general absorbing set (GAS)} over GF($q$) if and only if the size of $\mathcal{V}$ is $a$, the number of unsatisfied (resp., degree-$2$ unsatisfied) CNs connected to $\mathcal{V}$ is $b$ (resp., $b_2$), the number of degree-$1$ (resp., $2$ and $> 2$) CNs connected to $\mathcal{V}$ is $d_1$ (resp., $d_2$ and $d_3$), and each VN in $\mathcal{V}$ is connected to strictly more satisfied than unsatisfied neighboring CNs, for some set of VN values.
\end{definition}
Let $\gamma$ be the column weight (VN degree) of the NB-LDPC code. BASs are GASs with $0 \leq b \leq \left \lfloor \frac{ag}{2} \right \rfloor$, where $g = \left \lfloor \frac{\gamma-1}{2} \right \rfloor$ \cite{ahh_bas}. GF refers to Galois field, and $q$ is the GF size (order). We focus here on the case of $q=2^\lambda$, where $\lambda$ is a positive integer $\geq 2$. Furthermore, when we say in this paper that nodes are ``connected'', we mean they are ``directly connected'' or they are ``neighbors'', unless otherwise stated. The same applies conceptually when we say an edge is ``connected'' to a node or vice versa.
\vspace{-0.1em}
\begin{definition}\label{def_ugas}
Let $\mathcal{V}$ be a subset of VNs in the unlabeled Tanner graph of an NB-LDPC code. Let $\mathcal{O}$ (resp., $\mathcal{T}$ and $\mathcal{H}$) be the set of degree-$1$ (resp., $2$ and $> 2$) CNs connected to $\mathcal{V}$. This graphical configuration is an \textbf{$(a, d_1, d_2, d_3)$ unlabeled GAS} if it satisfies the following two conditions:
\begin{enumerate}
\item {$|\mathcal{V}| = a$, $\vert{\mathcal{O}}\vert=d_1$, $\vert{\mathcal{T}}\vert=d_2$, and $\vert{\mathcal{H}}\vert=d_3$.}
\item Each VN in $\mathcal{V}$ is connected to more neighbors in $(\mathcal{T} \cup \mathcal{H})$ than in $\mathcal{O}$.
\end{enumerate}
\end{definition}
In this paper, all vectors are column vectors, except the cutting vectors of SC codes and the equalization target of the PR channel (see Subsection \ref{subsec_sc}).
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[width=2.6in]{Figure_1.pdf}\vspace{-1.2em}
\caption{The relation between different types of absorbing sets demonstrated via a Venn diagram.}
\label{Figure_1}
\vspace{-0.2em}
\end{figure}
Let $\bold{H}$ denote the parity-check matrix of an NB-LDPC code defined over GF($q$). Consider an $(a,b,b_2,d_1,d_2,d_3)$ GAS in the Tanner graph of this code. Let $\bold{A}$ be the $\ell \times a$ submatrix of $\bold{H}$ that consists of $\ell=d_1+d_2+d_3$ rows of $\bold{H}$, corresponding to the CNs participating in this GAS, and $a$ columns of $\bold{H}$, corresponding to the VNs participating in this GAS. From \cite[Lemma~1]{ahh_jsac}, an $(a, b, b_2, d_1, d_2, d_3)$ GAS must satisfy:
\begin{itemize}
\item \textbf{Topological conditions:} Its unlabeled configuration must satisfy the {unlabeled GAS} conditions stated in Definition~\ref{def_ugas}.
\item \textbf{Weight conditions:} The set is an $(a, b, b_2, d_1, d_2, d_3)$ GAS over GF($q$) if and only if there exists an $(\ell-b) \times a$ submatrix $\bold{W}$ of column rank $\tau_{\bold{W}} < a$, with elements $\psi_{e,f}$, $1 \leq e \leq (\ell-b)$, $1 \leq f \leq a$, of the GAS adjacency matrix $\bold{A}$, that satisfies the following two conditions:
\begin{enumerate}
\item Let $\mathcal{N}(\bold{W})$ be the null space of the submatrix $\bold{W}$, and let $\bold{d}_k^\textup{T}$, $1 \leq k \leq b$, be the $k$th row of the matrix $\bold{D}$ obtained by removing the rows of $\bold{W}$ from $\bold{A}$. Let $\bold{v}$ be a vector of VN values and $\bold{R}$ be an $\ell \times \ell$ permutation matrix. Then,
\begin{align}\label{eq_gas_cond1}
&\exists \text{ } \bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_a]^\textup{T} \in \mathcal{N}(\bold{W}) \text{ s.t. } v_f \neq 0, \nonumber \\ &\text{ } \forall f \in \{1, 2, \dots , a\}, \text{ and } \bold{d}_k^\textup{T} \bold{v} = m_k \neq 0, \nonumber \\ &\text{ } \forall k \in \{1, 2, \dots , b\}, \text{ } \bold{m}=[m_1 \text{ } m_2 \text{ } \dots \text{ } m_b]^\textup{T}, \nonumber \\ &\text{i.e.}, \text{ } \bold{RAv}=\begin{bmatrix}\bold{W}_{(\ell-b) \times a}\\ \bold{D}_{b \times a}\end{bmatrix}\bold{v}_{a \times 1}=\begin{bmatrix}\bold{0}_{(\ell-b) \times 1}\\ \bold{m}_{b \times 1}\end{bmatrix}.
\end{align}
\item Let $\theta_{k,f}$, $1 \leq k \leq b$, $1 \leq f \leq a$, be the elements of the matrix $\bold{D}$. Then, $\forall f \in \{1, 2, \dots , a\}$,
\begin{align}\label{eq_gas_cond2}
\left ( \sum\limits_{e=1}^{\ell-b}F\left ( \psi_{e,f} \right ) \right ) > \left ( \sum\limits_{k=1}^{b}F\left ( \theta_{k,f} \right ) \right ),
\end{align}
where $F(\beta)=0$ if $\beta=0$, and $F(\beta)=1$ otherwise.
\end{enumerate}
Computations are performed over GF($q$); a toy numerical check of these two conditions is given below.
\end{itemize}
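As a concrete (toy) illustration of the weight conditions above, the following minimal Python sketch checks condition (\ref{eq_gas_cond1}) for given $\bold{W}$, $\bold{D}$, and $\bold{v}$. It assumes the third-party \texttt{galois} package for GF($q$) arithmetic; the matrices and the vector are placeholders chosen for illustration, not taken from any specific code.
\begin{verbatim}
import galois

GF = galois.GF(2**2)               # GF(4)
W = GF([[1, 1, 0],
        [0, 1, 1]])                # toy satisfied-CN submatrix
D = GF([[1, 2, 1]])                # toy unsatisfied-CN submatrix
v = GF([1, 1, 1])                  # candidate VN values, all nonzero
print(bool((W @ v == 0).all()))    # is v in the null space of W?
print(bool((D @ v != 0).all()))    # are all m_k nonzero?
\end{verbatim}
Both checks print \texttt{True} for this toy choice, i.e., $\bold{v}$ lies in $\mathcal{N}(\bold{W})$ with all entries nonzero and produces nonzero syndrome values on the unsatisfied CNs.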
In words, $\bold{W}$ is the submatrix of satisfied CNs, and $\bold{D}$ is the submatrix of unsatisfied CNs. Next, we define an important subclass of GASs, which are GASTs.
\begin{definition}\label{def_gast}
A GAS that has $d_2 > d_3$ and all the unsatisfied CNs connected to it (if any) belong to $(\mathcal{O} \cup \mathcal{T})$ (i.e., having either degree $1$ or degree $2$) is defined as an $(a, b, d_1, d_2, d_3)$ \textbf{general absorbing set of type two (GAST)}. Similar to the {unlabeled GAS} definition (Definition~\ref{def_ugas}), we also define the \textbf{$(a, d_1, d_2, d_3)$ unlabeled GAST}.
\end{definition}
A BAST is also a BAS with all the unsatisfied CNs connected to it (if any) having either degree $1$ or degree $2$.
As demonstrated by the Venn diagram in Fig.~\ref{Figure_1}, the set of GASs (resp., GASTs) subsumes the sets of EASs and BASs (resp., BASTs). Moreover, while the WCM framework was originally introduced to remove GASTs from the Tanner graph of an NB-LDPC code, it can be easily customized to efficiently remove EASs and BASTs \cite{ahh_jsac} depending on the application, as we shall discuss shortly in brief.
The three theorems essential for understanding the WCM framework are in \cite{ahh_jsac}. We recall from \cite[Theorem~2]{ahh_jsac} that, given an $(a, d_1, d_2, d_3)$ unlabeled GAST, the maximum number of unsatisfied CNs, $b_{\textup{max}}$, in the resulting GAST after edge labeling is upper bounded by:
\begin{equation}\label{eq_bmax}
b_{\textup{max}} \leq d_1 + b_{\textup{ut}}, \text{ where}
\end{equation}
\begin{equation}\label{eq_but}
b_{\textup{ut}} = \left \lfloor \frac{1}{2} \left ( a\left \lfloor \frac{\gamma-1}{2} \right \rfloor - d_1 \right ) \right \rfloor.
\end{equation}
\vspace{-0.2em}
Here, $b_{\textup{ut}}$ is the upper bound on the maximum number of degree-$2$ unsatisfied CNs the resulting GAST can have. Because of the structure of the underlying unlabeled configuration, sometimes the exact maximum (obtained by \cite[Algorithm~1]{ahh_jsac}, see Appendix~\ref{sec_appa}) is a quantity smaller than $b_{\textup{ut}}$. We refer to this exact maximum as $b_{\textup{et}}$. Thus,
\begin{equation}\label{eq_bmax_exact}
b_{\textup{max}} = d_1 + b_{\textup{et}}.
\end{equation}
Throughout this paper, the notation ``ut'' (resp., ``et'') in the subscript of $b$ refers to the \textit{upper bound on the} (resp., \textit{exact}) maximum number of \textit{degree-$2$} unsatisfied CNs.
For a given $(a, b, d_1, d_2, d_3)$ {GAST}, let $\mathcal{Z}$ be the set of all $(a, b', d_1, d_2, d_3)$ GASTs with $d_1 \leq b' \leq b_{\textup{max}}$, which have the same {unlabeled GAST} as the original $(a, b, d_1, d_2, d_3)$ {GAST}. Here, $b_{\textup{max}}$ is the largest allowable number of unsatisfied CNs for these configurations.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[width=3.6in]{Figure_2.pdf}\vspace{-0.7em}
\text{\hspace{1.5em} \footnotesize{(a) \hspace{14em} (b) \hspace{1em}}}
\caption{(a) A $(7, 9, 13, 0)$ unlabeled GAST for $\gamma=5$. (b) An $(8, 0, 16, 0)$ unlabeled GAST for $\gamma=4$.}
\label{Figure_2}
\vspace{-0.6em}
\end{figure}
\begin{definition}\label{def_drem_gast}
An $(a, b, d_1, d_2, d_3)$ \textbf{GAST} is said to be \textbf{removed} from the Tanner graph of an NB-LDPC code if and only if the resulting object (after edge weight processing) $\notin \mathcal{Z}$.
\end{definition}
In all GAST, unlabeled GAST, and OST figures, circles represent VNs. In all GAST and OST figures, grey (resp., white) squares represent unsatisfied (resp., satisfied) CNs. In all unlabeled GAST figures, grey (resp., white) squares represent degree-$1$ (resp., $> 1$) CNs.
\begin{example}\label{ex_prime}
Fig.~\ref{Figure_2}(a) shows a $(7, 9, 13, 0)$ unlabeled GAST for $\gamma=5$. For this unlabeled GAST, $d_1=9$, and from (\ref{eq_but}), $b_{\textup{ut}}=\left \lfloor \frac{1}{2} \left ( 7 \left \lfloor \frac{5-1}{2} \right \rfloor -9 \right ) \right \rfloor=2=b_{\textup{et}}$, which means the resulting GAST after edge labeling can have up to $2$ degree-$2$ unsatisfied CNs. Thus, $b_{\textup{max}}=9+2=11$, and $9 \leq b' \leq 11$. Consequently, $\mathcal{Z}=\{(7, 9, 9, 13, 0), (7, 10, 9, 13, 0), (7, 11, 9, 13, 0)\}$.
Fig.~\ref{Figure_2}(b) shows an $(8, 0, 16, 0)$ unlabeled GAST for $\gamma=4$. For this unlabeled GAST, $d_1=0$, and from (\ref{eq_but}), $b_{\textup{ut}}=\left \lfloor \frac{1}{2} \left ( 8 \left \lfloor \frac{4-1}{2} \right \rfloor -0 \right ) \right \rfloor=4=b_{\textup{et}}$, which means the resulting GAST after edge labeling can have up to $4$ degree-$2$ unsatisfied CNs. Thus, $b_{\textup{max}}=0+4=4$, and $0 \leq b' \leq 4$. Consequently, $\mathcal{Z}=\{(8, 0, 0, 16, 0), (8, 1, 0, 16, 0), (8, 2, 0, 16, 0), \allowbreak (8, 3, 0, 16, 0), (8, 4, 0, 16, 0)\}$.
\end{example}
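The bookkeeping in Example~\ref{ex_prime} is easy to automate. The following short Python sketch evaluates (\ref{eq_but}) and (\ref{eq_bmax_exact}) and enumerates $\mathcal{Z}$, under the assumption (true for both configurations in the example, as stated above) that $b_{\textup{et}} = b_{\textup{ut}}$:
\begin{verbatim}
from math import floor

def b_ut(a, d1, gamma):
    """Upper bound on the number of degree-2 unsatisfied CNs."""
    return floor((a * floor((gamma - 1) / 2) - d1) / 2)

for (a, d1, d2, d3, gamma) in [(7, 9, 13, 0, 5), (8, 0, 16, 0, 4)]:
    b_et = b_ut(a, d1, gamma)    # b_et = b_ut for both examples here
    b_max = d1 + b_et
    Z = [(a, b, d1, d2, d3) for b in range(d1, b_max + 1)]
    print(b_max, Z)
\end{verbatim}
Running the sketch reproduces $b_{\textup{max}}=11$ with the three elements of $\mathcal{Z}$ for the first configuration, and $b_{\textup{max}}=4$ with the five elements of $\mathcal{Z}$ for the second.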
For a given GAST, define a matrix $\bold{W}^{\textup{z}}$ to be the matrix obtained by removing $b'$, $d_1 \leq b' \leq b_{\textup{max}}$, rows corresponding to CNs $\in (\mathcal{O} \cup \mathcal{T})$ from the matrix $\bold{A}$, the GAST adjacency matrix. These $b'$ CNs can simultaneously be unsatisfied under some edge labeling that produces a GAST which has the same {unlabeled GAST} as the given GAST. Let $\mathcal{U}$ be the set of all such matrices $\bold{W}^{\textup{z}}$. Each element in $\mathcal{Z}$ has one or more matrices in $\mathcal{U}$.
\begin{definition}\label{def_wcms}
For a given $(a,b,d_1,d_2,d_3)$ GAST and its associated adjacency matrix $\bold{A}$ and its associated set $\mathcal{Z}$, we construct a set of $t$ matrices as follows:
\begin{enumerate}
\item Each matrix $\bold{W}_h^{\textup{cm}}$, $1 \leq h \leq t$, in this set is an $(\ell-b^{\textup{cm}}_h)\times a$ submatrix, $d_1 \leq b^{\textup{cm}}_h \leq b_{\textup{max}}$, formed by removing \textbf{different} $b^{\textup{cm}}_h$ rows from the $\ell \times a$ matrix $\bold{A}$ of the GAST. These $b^{\textup{cm}}_h$ rows to be removed correspond to CNs $\in (\mathcal{O} \cup \mathcal{T})$ that can simultaneously be unsatisfied under some edge labeling that produces a GAST which has the same {unlabeled GAST} as the given GAST.
\item Each matrix $\bold{W}^{\textup{z}} \in \mathcal{U}$, for every element $\in \mathcal{Z}$, contains at least one element of the resultant set as its submatrix.
\item This resultant set has the \textbf{smallest cardinality}, which is $t$, among all the sets which satisfy conditions 1 and 2 stated above.
\end{enumerate}
We refer to the matrices in this set as \textbf{weight consistency matrices (WCMs)}, and to this set itself as $\mathcal{W}$.
\end{definition}
\vspace{-0.3em}
Throughout this paper, the notation ``z'' (resp., ``cm'') in the superscript of a matrix means that the matrix is \textit{associated with an element in the set $\mathcal{Z}$} (resp., \textit{a WCM}).
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[width=1.8in]{Figure_3_1.pdf}
\includegraphics[width=3.6in]{Figure_3_2.pdf}
\vspace{-2.5em}
\caption{An illustrative figure showing the process of extracting the WCMs of a $(6, 0, 0, 9, 0)$ GAST. Appropriate edge weights ($w$'s) $\in$ GF($q$)$\setminus \{0\}$ are assumed.}
\label{Figure_3}
\vspace{-0.7em}
\end{figure}
\begin{definition}\label{def_bet_bst}
Parameter $b_{\textup{et}}$ represents the \textbf{exact maximum number} of rows corresponding to degree-$2$ CNs that can be removed together from $\bold{A}$ to extract a WCM. Similarly, we define $b_{\textup{st}}$ to be the \textbf{exact minimum number} of rows corresponding to degree-$2$ CNs that can be removed together from $\bold{A}$ to extract a WCM. Recall that the rows corresponding to degree-$1$ CNs are always removed while extracting a WCM. Thus, $d_1 \leq d_1+b_{\textup{st}} \leq b^{\textup{cm}}_h \leq d_1+b_{\textup{et}}=b_{\textup{max}}$. Both $b_{\textup{st}}$ and $b_{\textup{et}}$ depend on the unlabeled GAST configuration.
\end{definition}
Fig.~\ref{Figure_3} depicts the relation between a GAST and its associated WCMs, and roughly describes how the WCMs of this GAST are extracted.
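As a minimal illustration of the row-removal step sketched in Fig.~\ref{Figure_3}, the Python snippet below drops the rows of a hypothetical group of unsatisfied CNs from a placeholder adjacency matrix; the actual matrix entries and row indices depend on the GAST at hand and are not those of the $(6, 0, 0, 9, 0)$ GAST in the figure.
\begin{verbatim}
import numpy as np

def extract_candidate(A, unsat_rows):
    """Drop the rows of the CNs marked unsatisfied (all degree-1 CNs
    plus a group of degree-2 CNs) to form a candidate submatrix."""
    keep = [r for r in range(A.shape[0]) if r not in set(unsat_rows)]
    return A[keep, :]

A = (np.arange(54).reshape(9, 6) % 4) + 1   # placeholder 9x6 matrix
W = extract_candidate(A, unsat_rows=[0, 2, 4])
print(W.shape)                              # -> (6, 6)
\end{verbatim}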
The core idea of the WCM framework is in \cite[Theorem~3]{ahh_jsac}. This theorem states that the necessary and sufficient processing needed to remove an $(a, b, d_1, d_2, d_3)$ GAST, according to Definition~\ref{def_drem_gast}, is to change the edge weights such that $\forall h$:
\vspace{-0.1em}\begin{align}\label{eq_rem_cond}
&\text{If } \text{ } \mathcal{N}(\bold{W}^{\textup{cm}}_h)=\textup{span}\{\bold{x}_1, \bold{x}_2, \dots ,\bold{x}_{p_h}\}, \text{ then } \nonumber \\ &\nexists \text{ } \bold{r}=[r_1 \text{ } r_2 \text{ } \dots \text{ } r_{p_h}]^\textup{T} \text{ for} \text{ }
\bold{v}=r_1\bold{x}_1+r_2\bold{x}_2+\dots+r_{p_h}\bold{x}_{p_h} \nonumber \\ &= [v_1 \text{ } v_2 \text{ } \dots v_a]^\textup{T} \text{ s.t. } v_j \neq 0, \text{ } \forall j \in \{1, 2, \dots , a\},
\end{align}
where $p_h$ is the dimension of $\mathcal{N}(\bold{W}^{\textup{cm}}_h)$. Computations are performed over GF($q$).
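For concreteness, the following Python sketch checks condition (\ref{eq_rem_cond}) for a single WCM by brute force over its null space, which is feasible since $p_h$ is small for the WCMs of interest. It assumes the third-party \texttt{galois} package for GF($q$) arithmetic and its \texttt{null\_space()} routine; the input matrix is a toy example, not an actual WCM.
\begin{verbatim}
import itertools
import galois

GF = galois.GF(2**2)   # GF(4), i.e., lambda = 2

def weight_conditions_broken(W):
    """True iff no null-space combination has all-nonzero entries."""
    N = W.null_space()                   # rows form a basis of N(W)
    p, a = N.shape[0], W.shape[1]        # p is p_h
    if p == 0:
        return True                      # null space is {0}
    for r in itertools.product(range(GF.order), repeat=p):
        v = GF(list(r)) @ N              # v = r_1 x_1 + ... + r_p x_p
        if all(int(v[j]) != 0 for j in range(a)):
            return False                 # an all-nonzero v exists
    return True

W = GF([[1, 2, 3, 1],
        [0, 1, 1, 2]])                   # toy 2x4 matrix, not a real WCM
print(weight_conditions_broken(W))
\end{verbatim}
In the code optimization algorithm, such a check would be repeated for every WCM of the GAST after each candidate edge weight change.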
The WCM framework is easily adjusted to efficiently remove special subclasses of GASTs, namely EASs and BASTs, by customizing the WCM definition. In particular, via replacing $b_{\textup{max}}$ by $b_{\textup{e\_max}}=d_1$ (resp., $b_{\textup{b\_max}}=\left \lfloor \frac{ag}{2} \right \rfloor$ (see \cite{ahh_bas}), where $g=\left \lfloor \frac{\gamma-1}{2} \right \rfloor$), the WCM framework is customized to remove EASs (resp., BASTs), which are the dominant objects in the case of AWGN (resp., PR) channels. More details can be found in \cite{ahh_jsac}.
The two algorithms that constitute the WCM framework are \cite[Algorithm~1]{ahh_jsac}, which is the WCM extraction algorithm, and \cite[Algorithm~2]{ahh_jsac}, which is the code optimization algorithm. The steps of the two algorithms are listed in Appendices~\ref{sec_appa} and \ref{sec_appb} for the reference of the reader.
A WCM that has (\ref{eq_rem_cond}) satisfied is said to be a WCM with \textbf{broken weight conditions}. A GAST is removed if and only if all its WCMs have broken weight conditions.
Note that the complexity of removing a specific GAST using the WCM framework is mainly controlled by the number $t$ of WCMs of that GAST (see the \textbf{for} loop in Step~12 of \cite[Algorithm~2]{ahh_jsac}). Thus, the complexity of the WCM framework depends on the size of the set $\mathcal{G}$ (see Appendix~\ref{sec_appb}) and the numbers of WCMs of the GASTs in $\mathcal{G}$.
\section{Characterizing GASTs through Their WCMs}\label{sec_cch}
In order to characterize a GAST through its WCMs, we introduce the definition of the GAST tree, which will also be used to derive all the results in this section. Since this tree does not depend on the edge weights of the configuration, we call it the unlabeled GAST tree.
Recall that $\bold{A}$ is the adjacency matrix of the GAST. Both $\bold{W}^{\textup{z}}$ and $\mathcal{U}$ are defined in the paragraph before Definition~\ref{def_wcms}. Recall also that $b_{\textup{et}}$ is the maximum number of degree-$2$ CNs that can be unsatisfied simultaneously while the object remains a GAST. Define $u^0$ as the number of degree-$2$ CNs that can be unsatisfied individually while the object remains a GAST, and $\bold{y}^0$ as the vector in which the indices of such $u^0$ CNs are saved. Note that we always have $b_{\textup{et}} \leq u^0$.
\begin{definition}\label{def_gast_tree}
For a given $(a, d_1, d_2, d_3)$ unlabeled GAST with $b_{\textup{et}} > 0$, we construct the \textbf{unlabeled GAST tree} of $b_{\textup{et}}$ levels (level $0$ is not counted) as follows:
\begin{itemize}
\item Except the root node at level $0$, each tree node represents a degree-$2$ CN in the unlabeled GAST. For any two CNs in the tree, being neighbors means that they can be unsatisfied simultaneously after labeling and the resulting object remains a GAST.
\item Let $i_1, i_2, \dots, i_{b_{\textup{et}}}$ be the running indices used to access nodes at different levels in the tree as follows. The index of a node at level $j$, $1 \leq j \leq b_{\textup{et}}$, is saved in $\bold{y}_{i_1,i_2, \dots, i_{j-1}}^{j-1}$ and given by $y_{i_1,i_2, \dots, i_{j-1}}^{j-1}(i_j)$. CN $c_{y_{i_1,i_2, \dots, i_{j-1}}^{j-1}(i_j)}$ at level $j$ is accessed via the path of nodes ``root node -- $c_{y^{0}(i_1)}$ -- $c_{y_{i_1}^{1}(i_2)}$ -- $c_{y_{i_1,i_2}^{2}(i_3)}$ -- \dots -- $c_{y_{i_1,i_2, \dots, i_{j-1}}^{j-1}(i_j)}$''.
\item At level $0$, a virtual root node is assumed to be connected to the $u^0$ nodes with indices in $\bold{y}^0$ at level $1$. Level $j$ of the tree consists of all the nodes with indices in $\bold{y}_{i_1,i_2, \dots, i_{j-1}}^{j-1},$ $\forall i_1, i_2, \dots, i_{j-1}$. Level $j+1$ of the tree is created as follows. Each CN $c_{y_{i_1,i_2, \dots, i_{j-1}}^{j-1}(i_j)}$ at level $j$ is connected to all the CNs with indices in $\bold{y}_{i_1,i_2, \dots, i_j}^j$ at level $j+1$. These CNs can each be -- simultaneously with the nodes on the path from the root node until $c_{y_{i_1,i_2, \dots, i_{j-1}}^{j-1}(i_j)}$ -- unsatisfied after labeling and the resulting object remains a GAST.
\item The number of nodes at level $j+1$ that are connected to $c_{y_{i_1,i_2, \dots, i_{j-1}}^{j-1}(i_j)}$ at level $j$ is $u_{i_1,i_2, \dots, i_j}^{j}$ (which is the size of the vector $\bold{y}_{i_1,i_2, \dots, i_{j}}^{j}$), with $u_{i_1,i_2, \dots, i_{j}}^{j} < u_{i_1,i_2, \dots, i_{j-1}}^{j-1}, \forall i_1, i_2, \dots, i_j$.
\item The leaves of this tree are linked to the matrices extracted by \cite[Algorithm~1]{ahh_jsac} before removing the repeated matrices (see Appendix~\ref{sec_appa}).
\end{itemize}
\end{definition}
Note that for the parameters $u$ and $\bold{y}$, the superscript refers to the level prior to the level in which the nodes exist, and the subscript refers to the running indices used to access the nodes. Note also that \cite[Algorithm~1]{ahh_jsac} is designed to generate the unlabeled GAST tree.
Fig.~\ref{Figure_4} shows an unlabeled GAST tree for a configuration that has $b_{\textup{et}}=3$. The configuration has three levels after the root node. We say that each tree node at level $j$, $j > 0$, in the unlabeled GAST tree is \textit{\textbf{linked to}} a matrix $\bold{W}^{\textup{z}} \in \mathcal{U}$ extracted by removing $(d_1+j)$ rows from the matrix $\bold{A}$. These rows correspond to all the $d_1$ degree-$1$ CNs, and the $j$ degree-$2$ CNs on the path from the virtual root node to this tree node in the configuration. We also say that every valid matrix $\bold{W}^{\textup{z}} \in \mathcal{U}$ is \textit{\textbf{linked to}} one or more tree nodes.
It can be shown that $b_{\textup{et}}$ is the number of levels (nested loops in \cite[Algorithm~1]{ahh_jsac}), after which {$u^{b_{\textup{et}}}_{i_1,i_2, \dots, i_{b_{\textup{et}}}}=0$, $\forall i_1, i_2, \dots, i_{b_{\textup{et}}}$}. Moreover, because the WCMs do not necessarily have the same row dimension, \cite[Algorithm~1]{ahh_jsac} may stop at $b_{\textup{k}}$ levels, $b_{\textup{k}} \leq b_{\textup{et}}$, starting from some $c_{y^0(i_1)}$, which results in an $(\ell-b^{\textup{cm}}_h)\times a$ WCM with $b^{\textup{cm}}_h = d_1+b_{\textup{k}} \leq b_{\textup{max}} = d_1+b_{\textup{et}}$. The smallest value of $b_{\textup{k}}$ is $b_{\textup{st}}$, i.e., $b_{\textup{st}} \leq b_{\textup{k}} \leq b_{\textup{et}}$.
\begin{remark}\label{rem_1}
Note that the unlabeled GAST tree is unique for a given unlabeled configuration. In other words, two non-isomorphic $(a, d_1, d_2, d_3)$ configurations have two different unlabeled GAST trees even though they have the same $a$, $d_1$, $d_2$, and $d_3$ parameters.
\end{remark}
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[width=3.1in]{Figure_4.pdf}
\vspace{-1.5em}
\caption{An unlabeled GAST tree with $b_{\textup{et}}=3$.}
\label{Figure_4}
\vspace{-0.5em}
\end{figure}
Repetitions in tree nodes linked to matrices $\bold{W}^{\textup{z}}$ come from the fact that we are addressing the permutations and not the combinations in the tree. In other words, if we have a path from the root node at level $0$ to a tree node at level $2$ that has $c_1$ at level $1$ then $c_4$ at level $2$ on it, there must exist another path from the root node at level $0$ to another tree node at level $2$ that has $c_4$ at level $1$ then $c_1$ at level $2$ on it. Obviously, removing the row of $c_1$ then the row of $c_4$, or first $c_4$ then $c_1$ (in addition to the rows of degree-$1$ CNs) from $\bold{A}$ to extract a matrix produces the same end result.
\subsection{Proving the Optimality of the WCM Framework}\label{subsec_optim}
The numbers of matrices needed to operate on for different GASTs control the complexity of the code optimization process. In this subsection, we prove that the WCM framework is optimal in the sense that it works on the minimum possible number of matrices to remove a GAST. Our optimization problem is formulated as follows:
\textbf{The optimization problem:} We seek to find the set $\mathcal{W}$ of matrices that has the minimum cardinality, with the matrices in $\mathcal{W}$ representing submatrices of $\bold{A}$ that can be used to remove the problematic GAST, without the need to work on other submatrices.
\textbf{The optimization constraint:} Each matrix in $\mathcal{W}$ has to be a valid $\bold{W}^{\textup{z}}$ matrix in $\mathcal{U}$.
The optimization constraint is set to ensure that we are performing not only sufficient, but also necessary processing to remove the object. Note that, by definition, the set of WCMs is the solution of this optimization problem. Thus, the problem of proving the optimality of the WCM framework reduces to proving that the matrices we extract by \cite[Algorithm~1]{ahh_jsac}, and operate on in \cite[Algorithm~2]{ahh_jsac} to remove the GAST, are indeed the WCMs.
Now, we are ready to present the optimality theorem and its proof.
\begin{theorem}\label{th_opt_frm}
Consider an $(a, b, d_1, d_2, d_3)$ GAST with $b_{\textup{et}} > 0$. After eliminating repetitions, the set of matrices which are linked to the leaves of the unlabeled GAST tree characterized by Definition~\ref{def_gast_tree} is the set of WCMs, i.e., the set $\mathcal{W}$ of minimum cardinality.
\end{theorem}
\begin{IEEEproof}
According to Definition~\ref{def_wcms}, \cite[Theorem~3]{ahh_jsac}, and its proof, each matrix $\bold{W}^{\textup{z}} \in \mathcal{U}$ must have at least one matrix $\in \mathcal{W}$ as its submatrix (as $\mathcal{W}$ is the set of WCMs). The relation between a matrix $\bold{W}^{\textup{z}}_1$ linked to tree node $1$ at level $j$ and a matrix $\bold{W}^{\textup{z}}_2$ linked to tree node $2$ at level $j+1$, provided that tree node $2$ is a child of tree node $1$, is as follows. Matrix $\bold{W}^{\textup{z}}_2$ is a submatrix of matrix $\bold{W}^{\textup{z}}_1$, extracted by removing one more row, corresponding to a degree-$2$ CN (tree node $2$), from $\bold{W}^{\textup{z}}_1$. Following the same logic, the multiset of matrices, say $\mathcal{W}_{\textup{rep}}$, linked to tree nodes with no children (the leaves of the tree) contains submatrices of every possible matrix $\bold{W}^{\textup{z}}$. We let the set $\mathcal{W}_{\textup{nrep}}$ be $\mathcal{W}_{\textup{rep}}$ after eliminating the repetitions.
Now, we prove the sufficiency and minimality, which imply the optimality of $\mathcal{W}_{\textup{nrep}}$. The sufficiency is proved as follows. Any matrix $\bold{W}^{\textup{z}}$ that is linked to a tree node with a child will be redundant if added to the set $\mathcal{W}_{\textup{nrep}}$ because in $\mathcal{W}_{\textup{nrep}}$ there already exists a submatrix of this $\bold{W}^{\textup{z}}$ (from the analysis above). The minimality is proved as follows. If we eliminate any matrix from $\mathcal{W}_{\textup{nrep}}$, there will be at least one matrix $\bold{W}^{\textup{z}}$ that has no submatrices in $\mathcal{W}_{\textup{nrep}}$ (which is the eliminated matrix itself since it is linked to a node (nodes) with no children). Thus, we cannot further reduce the cardinality of $\mathcal{W}_{\textup{nrep}}$. Hence, the set $\mathcal{W}_{\textup{nrep}}$ is indeed the set $\mathcal{W}$ of WCMs, which proves the optimality of the WCM framework.
\end{IEEEproof}
\subsection{Enumeration of WCMs Associated with a GAST}\label{subsec_enum}
In this subsection, we provide the exact number of distinct WCMs associated with a GAST. Moreover, we present particular examples where this number reduces to a combinatorial function of the column weight of the code. Since the number of WCMs (and also their sizes) associated with a GAST only depends on the unlabeled configuration and not on the edge weights, we relate the number of distinct WCMs, $t$, to the unlabeled GAST throughout this paper.
We first identify the following two types of unlabeled GAST configurations according to the properties of their unlabeled GAST trees.
\begin{definition}
An $(a, d_1, d_2, d_3)$ \textbf{same-size-WCMs} unlabeled GAST satisfies one of the following two conditions:
\begin{enumerate}
\item It has $b_{\textup{et}}=0$, i.e., $u^0=0$, which results in $\vert{\mathcal{W}}\vert=1$.
\item It has $b_{\textup{et}} > 0$; thus, $u^0 > 0$, and its tree has the property that $u_{i_1,i_2, \dots, i_j}^{j}=0$ only if $j=b_{\textup{et}}$, $\forall i_1, i_2, \dots, i_{b_{\textup{et}}}$, which results in all the WCMs having the same $(\ell-b_{\textup{max}}) \times a$ size, $b_{\textup{max}}=d_1+b_{\textup{et}}$.
\end{enumerate}
\end{definition}
\begin{definition}
An $(a, d_1, d_2, d_3)$ \textbf{u-symmetric} unlabeled GAST is a same-size-WCMs unlabeled GAST which satisfies the following condition. If $u^0 > 0$, its tree has the property that at any level $j$, $u_{i_1,i_2, \dots, i_{j-1}}^{j-1}$ is the same, $\forall i_1, i_2, \dots, i_{j-1}$.
\end{definition}
An example of a same-size-WCMs unlabeled GAST that is not u-symmetric is the $(7, 9, 13, 0)$ configuration shown in Fig.~\ref{Figure_6}(a). The $(6, 0, 9, 0)$ and the $(8, 0, 16, 0)$ configurations shown in Fig.~\ref{Figure_8}(a) are examples of u-symmetric unlabeled GASTs.
We start off with the count for the general case.
\begin{theorem}\label{th_exact_gen}
Given the unlabeled GAST tree, an $(a, d_1, d_2, d_3)$ unlabeled GAST, with the parameters $b_{\textup{st}} > 0$ and $b_{\textup{et}} > 0$, results in the following number, $t$, of distinct WCMs ($t$ is the size of the set $\mathcal{W}$) for the labeled configuration:
\vspace{-0.1em}\begin{equation}\label{eq_exact_gen}
t = \sum_{b_{\textup{k}}=b_{\textup{st}}}^{b_{\textup{et}}}\frac{1}{b_{\textup{k}}!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{k}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{k}}-1}}^{b_{\textup{k}}-1}} T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right ),
\end{equation}
where $b_{\textup{st}} \leq b_{\textup{k}} \leq b_{\textup{et}}$. Here, $T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right )=1$ if $u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}}\allowbreak=0$, and $T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right )=0$ otherwise.
\end{theorem}
\begin{IEEEproof}
To prove Theorem~\ref{th_exact_gen}, we recall the unlabeled GAST tree. The number of nodes in this tree at any level $b_{\textup{k}} > 0$ is given by:
\vspace{-0.3em}\begin{equation}\label{eq_leaves_bk}
\mu_{b_{\textup{k}}} = \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{k}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{k}}-1}}^{b_{\textup{k}}-1}} \left ( 1 \right ).
\end{equation}
From the previous subsection, the number, $t_{\textup{rep},b_{\textup{k}}}$, of WCMs (not necessarily distinct) extracted by removing $b^{\textup{cm}}_h=d_1+b_{\textup{k}}$ rows from $\bold{A}$ equals the number of leaves at level $b_{\textup{k}}$. Note that the leaves at level $b_{\textup{k}}$ do not have connections to level $b_{\textup{k}}+1$ (no children) in the tree. As a result, $t_{\textup{rep},b_{\textup{k}}}$ is given by:
\begin{equation}\label{eq_repwcms_bk}
t_{\textup{rep},b_{\textup{k}}} = \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{k}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{k}}-1}}^{b_{\textup{k}}-1}} T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right ).
\end{equation}
To compute the number of distinct WCMs, we need to eliminate repeated WCMs. Since a WCM extracted by removing $(d_1+b_{\textup{k}})$ rows from $\bold{A}$ appears $b_{\textup{k}}!$ times, we compute the number of distinct WCMs that are extracted by removing $(d_1+b_{\textup{k}})$ rows from $\bold{A}$ using (\ref{eq_repwcms_bk}) as follows:
\begin{equation}\label{eq_wcms_bk}
t_{b_{\textup{k}}} = \frac{1}{b_{\textup{k}}!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{k}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{k}}-1}}^{b_{\textup{k}}-1}} T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right ).
\end{equation}
The total number of distinct WCMs is then obtained by summing $t_{b_{\textup{k}}}$ in (\ref{eq_wcms_bk}) over all values of $b_{\textup{k}}$, $b_{\textup{st}} \leq b_{\textup{k}} \leq b_{\textup{et}}$, to reach $t$ in (\ref{eq_exact_gen}).
\end{IEEEproof}
Recall that $\gamma$ is the column weight (VN degree) of the code.
\begin{example}\label{ex_1}
Fig.~\ref{Figure_5}(a) shows a $(6, 2, 5, 2)$ unlabeled GAST for $\gamma=3$. As demonstrated by the unlabeled GAST tree in Fig.~\ref{Figure_5}(b), the configuration has WCMs that are not of the same size. Since $b_{\textup{st}}=1$, $b_{\textup{et}}=b_{\textup{ut}}=2$, and $u^0=3$ (that are $c_2$, $c_3$, and $c_4$), (\ref{eq_exact_gen}) reduces to:
\vspace{-0.1em}\begin{align}
t &= \sum_{b_{\textup{k}}=1}^{2}\frac{1}{b_{\textup{k}}!} \sum_{i_1=1}^{3} \sum_{i_2=1}^{u_{i_1}^1} T\left ( u_{i_1, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right ) \nonumber \\ &=\frac{1}{1!}(0+1+0)+\frac{1}{2!}(1+0+1)=2. \nonumber
\end{align}
Thus, the configuration has only $2$ WCMs, extracted by removing the rows of the following groups of CNs from $\bold{A}$: $\{(c_3,\mathcal{O}_\textup{sg}), (c_2, c_4,\mathcal{O}_\textup{sg})\}$, where $\mathcal{O}_\textup{sg}$ is $(c_8, c_9)$. We explicitly list the subgroup $\mathcal{O}_\textup{sg}$ of degree-$1$ CNs to highlight the fact that the rows of these CNs are always removed, irrespective of the action on the remaining rows in $\bold{A}$.
\end{example}
\vspace{-0.3em}
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.15in 0.0in 0.4in 0.0in},clip,width=3.5in]{Figure_5.pdf}\vspace{-0.3em}
\text{\hspace{0.3em} \footnotesize{(a) \hspace{14em} (b) \hspace{1em}}}
\caption{(a) A $(6, 2, 5, 2)$ unlabeled GAST for $\gamma=3$. (b) The associated unlabeled GAST tree with $b_{\textup{et}}=2$.}
\label{Figure_5}
\vspace{-0.5em}
\end{figure}
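As a sanity check of (\ref{eq_exact_gen}), the short Python sketch below encodes an unlabeled GAST tree as nested lists (a hypothetical encoding in which each node is represented by the list of its children) and reproduces the count $t=2$ of Example~\ref{ex_1} for the tree of Fig.~\ref{Figure_5}(b):
\begin{verbatim}
from math import factorial
from collections import Counter

def leaves_per_level(node, depth=0, acc=None):
    """#leaves at each level; a leaf means u_{i_1,...,i_j}^j = 0."""
    if acc is None:
        acc = Counter()
    if depth > 0 and not node:
        acc[depth] += 1          # one T(.) = 1 term at level b_k = depth
    for child in node:
        leaves_per_level(child, depth + 1, acc)
    return acc

def num_distinct_wcms(tree):
    """t = sum over b_k of (#leaves at level b_k) / b_k!."""
    return sum(n // factorial(bk)
               for bk, n in leaves_per_level(tree).items())

# Tree of Fig. 5(b), inferred from the example: c3 is a leaf at level 1,
# while c2 and c4 can each be followed by the other at level 2.
tree = [[[]], [], [[]]]
print(num_distinct_wcms(tree))   # -> 2 distinct WCMs
\end{verbatim}
The integer division by $b_{\textup{k}}!$ is exact, since each distinct WCM at level $b_{\textup{k}}$ appears once per permutation of the removed degree-$2$ CNs, as argued in the proof above.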
Now, we analyze the important special case of same-size-WCMs configurations.
\begin{lemma}\label{lem_exact_samelen}
Given the unlabeled GAST tree, a same-size-WCMs $(a, d_1, d_2, d_3)$ unlabeled GAST, with the parameter $b_{\textup{et}} > 0$, results in the following number, $t$, of distinct WCMs ($t$ is the size of the set $\mathcal{W}$) for the labeled configuration:
\begin{equation}\label{eq_exact_samelen}
t = \frac{1}{b_{\textup{et}}!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{et}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{et}}-1}}^{b_{\textup{et}}-1}} \left ( 1 \right ).
\end{equation}
\end{lemma}
\begin{IEEEproof}
We prove Lemma~\ref{lem_exact_samelen} by substituting $b_{\textup{k}}=b_{\textup{st}}=b_{\textup{et}}$ in (\ref{eq_exact_gen}):
\vspace{-0.2em}\begin{align}\label{eq_lem1_pr}
t &= \frac{1}{b_{\textup{et}}!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{et}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{et}}-1}}^{b_{\textup{et}}-1}} T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{et}}}}^{b_{\textup{et}}} \right ) \nonumber \\ &= \frac{1}{b_{\textup{et}}!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_{b_{\textup{et}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{et}}-1}}^{b_{\textup{et}}-1}} \left ( 1 \right ).
\end{align}
The second equality in (\ref{eq_lem1_pr}) follows from the fact that $T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{et}}}}^{b_{\textup{et}}} \right )=1$ since $u_{i_1,i_2, \dots, i_{b_{\textup{et}}}}^{b_{\textup{et}}}=0$, $\forall i_1, i_2, \dots, i_{b_{\textup{et}}}$, by definition of $b_{\textup{et}}$.
\end{IEEEproof}
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.08in 0.0in 0.5in 0.0in},clip,width=3.5in]{Figure_6.pdf}\vspace{-0.5em}
\text{\hspace{1.5em} \footnotesize{(a) \hspace{14em} (b) \hspace{1em}}}
\caption{(a) A $(7, 9, 13, 0)$ unlabeled GAST for $\gamma=5$. (b) The associated unlabeled GAST tree with $b_{\textup{et}}=2$.}
\label{Figure_6}
\vspace{-0.5em}
\end{figure}
\begin{example}\label{ex_2}
Fig.~\ref{Figure_6}(a) shows a $(7, 9, 13, 0)$ unlabeled GAST for $\gamma=5$. As demonstrated by the unlabeled GAST tree in Fig.~\ref{Figure_6}(b), this is a same-size-WCMs configuration. Since $b_{\textup{et}}=b_{\textup{ut}}=2$ and $u^0=5$ (that are $c_3$, $c_4$, $c_9$, $c_{11}$, and $c_{12}$), (\ref{eq_exact_samelen}) reduces to:
\vspace{-0.1em}
\begin{equation}
t = \frac{1}{2!} \sum_{i_1=1}^{5} \sum_{i_2=1}^{u_{i_1}^1} \left ( 1 \right )=\frac{1}{2}(1+1+3+2+3)=5. \nonumber
\end{equation}
Thus, the configuration has $5$ WCMs, all of the same size ($11 \times 7$), extracted by removing the rows of the following groups of CNs from the matrix $\bold{A}$: $\{(c_3, c_{12},\mathcal{O}_\textup{sg}), (c_4, c_9,\mathcal{O}_\textup{sg}), \allowbreak (c_9, c_{11},\mathcal{O}_\textup{sg}), (c_9, c_{12},\mathcal{O}_\textup{sg}), (c_{11}, c_{12},\mathcal{O}_\textup{sg})\}$, where $\mathcal{O}_\textup{sg}$ is $(c_{14}, c_{15}, c_{16}, c_{17}, c_{18}, c_{19}, c_{20}, c_{21}, c_{22})$.
\end{example}
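The count of Example~\ref{ex_2} can be reproduced directly from (\ref{eq_exact_samelen}); in the sketch below, the level-$1$ children counts of $c_3$, $c_4$, $c_9$, $c_{11}$, and $c_{12}$ are read off the tree of Fig.~\ref{Figure_6}(b):
\begin{verbatim}
from math import factorial

u1 = [1, 1, 3, 2, 3]         # children counts at level 1, Fig. 6(b)
t = sum(u1) // factorial(2)  # b_et = 2 for this configuration
print(t)                     # -> 5
\end{verbatim}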
Another important special case to study is the case of u-symmetric configurations.
\begin{corollary}\label{lem_rel_symm}
Given the unlabeled GAST tree, a u-symmetric $(a, d_1, d_2, d_3)$ unlabeled GAST, with the parameter $b_{\textup{et}} > 0$, results in the following number, $t$, of distinct WCMs ($t$ is the size of the set $\mathcal{W}$) for the labeled configuration:
\vspace{-0.2em}\begin{equation}\label{eq_rel_symm}
t = \frac{1}{b_{\textup{et}}!} \prod_{j=1}^{b_{\textup{et}}} u^{j-1}.
\end{equation}
\end{corollary}
\begin{IEEEproof}
Since the u-symmetric case is a special case of the same-size-WCMs case, we use (\ref{eq_exact_samelen}) to conclude:
\begin{align}
t &= \frac{1}{b_{\textup{et}}!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u^1} \sum_{i_3=1}^{u^2} \dots \sum_{i_{b_{\textup{et}}}=1}^{u^{b_{\textup{et}}-1}} \left ( 1 \right ) = \frac{1}{b_{\textup{et}}!} \prod_{j=1}^{b_{\textup{et}}} u^{j-1}. \label{eq_lem2_pr2}
\end{align}
Equation (\ref{eq_lem2_pr2}) follows from the fact that for a u-symmetric configuration, at any level $j$, $u_{i_1,i_2, \dots, i_{j-1}}^{j-1}$ is the same, $\forall i_1, i_2, \dots, i_{j-1}$. Thus, we can express $u_{i_1,i_2, \dots, i_{j-1}}^{j-1}$ in (\ref{eq_exact_samelen}) as $u^{j-1}$, which is independent of $i_1, i_2, \dots, i_{b_{\textup{et}}-1}$, $\forall j \in \{1, 2, \dots, b_{\textup{et}}\}$.
\end{IEEEproof}
\begin{example}\label{ex_3}
Fig.~\ref{Figure_7}(a) shows a $(6, 2, 11, 0)$ unlabeled GAST for $\gamma=4$. As demonstrated by the unlabeled GAST tree in Fig.~\ref{Figure_7}(b), the configuration is u-symmetric. Since $b_{\textup{et}}=b_{\textup{ut}}=2$, $u^0=6$, and $u^1=1$, (\ref{eq_rel_symm}) reduces to:
\begin{equation}
t = \frac{1}{2!} \prod_{j=1}^{2} u^{j-1}=\frac{1}{2}(6)(1)=3. \nonumber
\end{equation}
Thus, the configuration has $3$ WCMs, all of the same size ($9 \times 6$), extracted by removing the rows of the following groups of CNs from the matrix $\bold{A}$: $\{(c_1, c_4,\mathcal{O}_\textup{sg}), (c_7, c_8,\mathcal{O}_\textup{sg}), \allowbreak (c_9, c_{10},\mathcal{O}_\textup{sg})\}$, where $\mathcal{O}_\textup{sg}$ is $(c_{12}, c_{13})$.
\end{example}
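The same arithmetic is immediate in code; the one-line evaluation of (\ref{eq_rel_symm}) below uses the $u$-values read off Fig.~\ref{Figure_7}(b):
\begin{verbatim}
from math import factorial, prod

print(prod([6, 1]) // factorial(2))  # u^0 = 6, u^1 = 1, b_et = 2 -> t = 3
\end{verbatim}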
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.3in 0.0in 0.4in 0.0in},clip,width=3.5in]{Figure_7.pdf}\vspace{-0.5em}
\text{\hspace{2.7em} \footnotesize{(a) \hspace{13em} (b) \hspace{1em}}}
\caption{(a) A $(6, 2, 11, 0)$ unlabeled GAST for $\gamma=4$. (b) The associated unlabeled GAST tree with $b_{\textup{et}}=2$.}
\label{Figure_7}
\vspace{-0.5em}
\end{figure}
After providing the exact number of WCMs for different cases, we now study examples where the number of distinct WCMs associated with a configuration is proved to be a function only of the column weight $\gamma$ (the VN degree). We study the u-symmetric version of the $(2\gamma, 0, \gamma^2, 0)$ unlabeled GASTs with $g=\left \lfloor \frac{\gamma-1}{2} \right \rfloor=1$ (i.e., for $\gamma=3$ or $\gamma=4$). Studying these configurations is important because they are unlabeled low-weight codewords, and their multiplicity in the Tanner graph of a code typically strongly affects the error floor (and also the waterfall) performance of this code.
\begin{lemma}\label{lem_cw_g1}
A u-symmetric $(2 \gamma, 0, \gamma^2, 0)$ unlabeled GAST, with $\gamma \in \{3, 4\}$ (see Fig.~\ref{Figure_8}(a)), results in $t = \gamma!$ distinct WCMs for the labeled configuration.
\end{lemma}
\begin{IEEEproof}
From (\ref{eq_but}), for a u-symmetric $(2 \gamma, 0, \gamma^2, 0)$ unlabeled GAST\footnote{Unlike the superscript of $u$, the superscript of $\gamma$ and all linear expressions of $\gamma$ refers to the mathematical power (if exists).}, we have:
\begin{equation}\label{eq_bet_cwg1}
b_{\textup{et}}=b_{\textup{ut}}=\left \lfloor \frac{1}{2}\left ( 2 \gamma \left \lfloor \frac{\gamma-1}{2} \right \rfloor-0 \right ) \right \rfloor=\gamma.
\end{equation}
Notice that $\left \lfloor \frac{\gamma-1}{2} \right \rfloor=1$ for $\gamma \in \{3, 4\}$. Substituting (\ref{eq_bet_cwg1}) in (\ref{eq_rel_symm}) gives:
\begin{equation}\label{eq_temp_cwg1}
t = \frac{1}{\gamma!} \prod_{j=1}^{\gamma} u^{j-1} = \frac{\gamma^2}{\gamma!} \prod_{j=2}^{\gamma} u^{j-1},
\end{equation}
where the second equality in (\ref{eq_temp_cwg1}) follows from the property that for a $(2 \gamma, 0, \gamma^2, 0)$ unlabeled GAST, $u^0=\gamma^2$.
Next, we compute $u^{j-1}$, $2 \leq j \leq b_{\textup{et}} = \gamma$. At level $1$, a degree-$2$ CN that has its index in $\bold{y}^0$ will be marked as unsatisfied, resulting in:
\begin{equation}
u^1 = u^0-1-2 \left (\gamma -1 \right ) = \gamma^2 - 2 \gamma +1 = \left (\gamma -1 \right )^2. \label{eq_u1_2}
\end{equation}
Equation (\ref{eq_u1_2}) follows from the fact that after such a degree-$2$ CN is selected to be marked as unsatisfied at level $1$, all the remaining $\left ( \gamma-1 \right )$ CNs connected to each of the two VNs sharing this CN cannot be selected at level $2$ (because $g=1$ for $\gamma \in \{3, 4\}$). Thus, $u^1 = u^0-(1+2 \left ( \gamma-1 \right ))$, where the additional $1$ represents the already selected CN itself. Furthermore:
\begin{equation}
u^2 = u^1-1-2 \left (\gamma -2 \right ) = \left (\gamma -1 \right )^2-2\gamma+3 = \left (\gamma -2 \right )^2. \label{eq_u2_2}
\end{equation}
Note that the $2 \left ( \gamma -1 \right )$ CNs that cannot be selected at level $2$ are connected to all the remaining $(2\gamma -2)$ VNs in the configuration (after excluding the two VNs sharing the CN selected at level $1$). Thus, any CN to be selected at level $2$ results in $2 \left ( \gamma-2 \right )$ extra CNs that cannot be selected\footnote{The reason why it is $2 \left ( \gamma-2 \right )$ and not $2 \left ( \gamma-1 \right )$ is that two CNs from the group that cannot be selected at level $3$ were already accounted for while computing $u^1$ as they could not be selected at level $2$ (recall that the configuration is u-symmetric).} at level $3$. As a result, $u^2 = u^1-(1+2 \left ( \gamma-2 \right ))$, which is equation (\ref{eq_u2_2}). This analysis also applies for $u^{j-1}$ with $j > 3$. By means of induction, we conclude the following for every $u^{j-1}$ with $1 \leq j \leq b_{\textup{et}} = \gamma$:
\begin{equation}\label{eq_uj_gen}
u^{j-1} = \left ( \gamma - (j-1) \right )^2.
\end{equation}
Substituting (\ref{eq_uj_gen}) into (\ref{eq_temp_cwg1}) gives:
\begin{equation}\label{eq_final}
t = \frac{1}{\gamma!} \gamma^2 \left (\gamma -1 \right )^2 \left (\gamma -2 \right )^2 \cdots 1^2 = \gamma!.
\end{equation}
As a result, $t=\gamma!$, which completes the proof.
\end{IEEEproof}
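A quick Python check of (\ref{eq_final}) confirms that the squared factors collapse to $\gamma!$ for $\gamma \in \{3, 4\}$ (the sketch simply evaluates the product; it is not a proof):
\begin{verbatim}
from math import factorial, prod

for gamma in (3, 4):
    t = prod((gamma - (j - 1)) ** 2
             for j in range(1, gamma + 1)) // factorial(gamma)
    print(gamma, t, factorial(gamma))   # t equals gamma! in both cases
\end{verbatim}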
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.2in 0.0in 0.35in 0.0in},clip,width=3.5in]{Figure_8.pdf}\vspace{-0.5em}
\text{\hspace{0.3em} \footnotesize{(a) \hspace{14em} (b) \hspace{1em}}}
\caption{(a) Upper panel: the u-symmetric $(6, 0, 9, 0)$ unlabeled GAST for $\gamma=3$. Lower panel: the u-symmetric $(8, 0, 16, 0)$ unlabeled GAST for $\gamma=4$. (b) The associated unlabeled GAST tree for the $(6, 0, 9, 0)$ unlabeled GAST with $b_{\textup{et}}=3$.}
\label{Figure_8}
\vspace{-0.5em}
\end{figure}
\begin{example}\label{ex_4}
Fig.~\ref{Figure_8}(a), upper panel, shows the u-symmetric $(6, 0, 9, 0)$ unlabeled GAST for $\gamma=3$. Fig.~\ref{Figure_8}(b) confirms that the configuration is u-symmetric with $b_{\textup{et}}=b_{\textup{ut}}=3$, $u^0=9$, $u^1=4$, and $u^2=1$. Thus, (\ref{eq_rel_symm}) reduces to (\ref{eq_final}), implying:
\begin{equation}
t = \frac{1}{3!} \prod_{j=1}^{3} u^{j-1}=3!=6. \nonumber
\end{equation}
The configuration has $6$ WCMs (size $6 \times 6$), extracted by removing the rows of the following groups of CNs from $\bold{A}$: $\{(c_1, c_3, c_5),$ $(c_1, c_4, c_9),$ $(c_2, c_4, c_6),$ $(c_2, c_5, c_8),$ $(c_3, c_6, c_7),$ $(c_7, c_8, c_9)\}$.
Fig.~\ref{Figure_8}(a), lower panel, shows the u-symmetric $(8, 0, 16, 0)$ unlabeled GAST for $\gamma=4$. We omit the tree of this unlabeled GAST for brevity. Following the same logic we used for the u-symmetric $(6, 0, 9, 0)$ unlabeled GAST ($\gamma=3$), we conclude that this configuration has $b_{\textup{et}}=4$, $u^0=16$, $u^1=9$, $u^2=4$, and $u^3=1$. Thus, from (\ref{eq_final}):
\vspace{-0.3em}\begin{equation}
t = \frac{1}{4!} \prod_{j=1}^{4} u^{j-1}=4!=24. \nonumber
\end{equation}
\end{example}
We conclude Subsection~\ref{subsec_enum} with Table~\ref{Table1}, which lists the number of distinct WCMs for different types of unlabeled GASTs.
\begin{table*}
\caption{Number of distinct WCMs for different types of unlabeled GASTs.}
\vspace{-0.5em}
\centering
\scalebox{1.00}
{
\begin{tabular}{|c|c|}
\hline
Unlabeled GAST type & Number of distinct WCMs ($t$) \\
\hline
General & $t=\sum\limits_{b_{\textup{k}}=b_{\textup{st}}}^{b_{\textup{et}}}\frac{1}{b_{\textup{k}}!} \sum\limits_{i_1=1}^{u^0} \sum\limits_{i_2=1}^{u_{i_1}^1} \sum\limits_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum\limits_{i_{b_{\textup{k}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{k}}-1}}^{b_{\textup{k}}-1}} T\left ( u_{i_1,i_2, \dots, i_{b_{\textup{k}}}}^{b_{\textup{k}}} \right )$ \\
\hline
Same-size-WCMs & $t=\frac{1}{b_{\textup{et}}!} \sum\limits_{i_1=1}^{u^0} \sum\limits_{i_2=1}^{u_{i_1}^1} \sum\limits_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum\limits_{i_{b_{\textup{et}}}=1}^{u_{i_1,i_2, \dots, i_{b_{\textup{et}}-1}}^{b_{\textup{et}}-1}} \left ( 1 \right )$ \\
\hline
U-symmetric & $t=\frac{1}{b_{\textup{et}}!} \prod\limits_{j=1}^{b_{\textup{et}}} u^{j-1}$ \\
\hline
U-symmetric, $(2 \gamma, 0, \gamma^2, 0)$, with $\gamma \in \{3, 4\}$ & $t=\gamma!$ \\
\hline
\end{tabular}}
\label{Table1}
\end{table*}
\subsection{Complexity Comparison with a Suboptimal Idea}\label{subsec_comp}
We have already proved the optimality of the WCM framework in Subsection~\ref{subsec_optim}. In this subsection, we demonstrate the complexity reduction we gain by focusing only on the set of WCMs, $\mathcal{W}$, to remove a GAST. We compute the total number of distinct matrices to operate on in an alternative idea (a suboptimal idea), and compare it with the number of distinct WCMs we operate on, which is $t$ derived in Subsection~\ref{subsec_enum}. The suboptimal idea we compare with is operating on the set of all distinct matrices $\bold{W}^{\textup{z}}$.
The computational savings of the WCM framework relative to this suboptimal idea are quite apparent on a prototypical example of an NB-LDPC code; it takes roughly only four days to optimize this code using the WCM framework (via operating on the WCMs of each GAST to be removed), compared with roughly a month of computations using the suboptimal approach (via operating on all distinct matrices $\bold{W}^{\textup{z}}$ of each GAST to be removed). In this subsection, we justify this observation.
Here, we seek to compare the number of distinct WCMs, which is the size of the set $\mathcal{W}$, with the number of distinct $\bold{W}^{\textup{z}}$ matrices, which is the size of the set $\mathcal{U}$. For convenience, we assume for this comparison that $b_{\textup{et}} > 0$ and $u^0 > 0$.
\begin{theorem}\label{th_diff_gen}
Given the unlabeled GAST tree, the difference between the cardinalities of the sets $\mathcal{U}$ and $\mathcal{W}$ (the reduction in the number of matrices to operate on) for an $(a, d_1, d_2, d_3)$ unlabeled GAST, with the parameters $b_{\textup{st}} > 0$ and $b_{\textup{et}} > 0$, is:
\vspace{-0.1em}
\begin{equation}\label{eq_diff_gen}
t'-t=1+\sum_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \hspace{-1.0em} T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right ),
\end{equation}
where $t'=\vert{\mathcal{U}}\vert$ and $t=\vert{\mathcal{W}}\vert$. Here, $T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )=1$ if $u_{i_1,i_2, \dots, i_{j}}^{j} \neq 0$, and $T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )=0$ otherwise.
\end{theorem}
\begin{IEEEproof}
Given that $t$ (which is $\vert{\mathcal{W}}\vert$) is known from Subsection~\ref{subsec_enum}, we need to derive $t'$ (which is $\vert{\mathcal{U}}\vert$). Since $\mathcal{U}$ is the set of all distinct matrices $\bold{W}^{\textup{z}}$, it follows that the cardinality of $\mathcal{U}$ is a function of the total number of nodes in the unlabeled GAST tree. Note that each node at level $j$ in the unlabeled GAST tree is linked to a matrix $\bold{W}^{\textup{z}}$ (see the proof of Theorem~\ref{th_opt_frm}). The total number of these tree nodes is:
\begin{equation}\label{eq_nodes_wrep}
\eta_{\textup{rep}}=\sum_{j=1}^{b_{\textup{et}}} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left ( 1 \right ).
\end{equation}
To remove the repeated $\bold{W}^{\textup{z}}$ matrices from that count, we need to divide the number of tree nodes at each level $j$ by $j!$. Thus, we reach:
\begin{equation}\label{eq_nodes_norep}
\eta=\sum_{j=1}^{b_{\textup{et}}} \frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left ( 1 \right ).
\end{equation}
The cardinality of the set $\vert{\mathcal{U}}\vert$, which is $t'$, is then:
\begin{equation}\label{eq_wz_count}
t'=1+\eta=1+\sum_{j=1}^{b_{\textup{et}}} \frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left ( 1 \right ),
\end{equation}
where the additional $1$ is for the particular matrix $\bold{W}^{\textup{z}}$ extracted by removing $d_1$ rows from $\bold{A}$ corresponding to all degree-$1$ CNs in the configuration. Note that we can consider the virtual root node as the node linked to this particular $\bold{W}^{\textup{z}}$ matrix in the tree.
To compute $t'-t$, we subtract (\ref{eq_exact_gen}) from (\ref{eq_wz_count}). Consequently,
\begin{align}
&t'-t=1+\sum_{j=1}^{b_{\textup{st}}-1}\frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left ( 1 \right ) + \nonumber \\ & \sum_{j=b_{\textup{st}}}^{b_{\textup{et}}}\frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left [ 1-T\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right ) \right ]. \label{eq_th3_pr1}
\end{align}
Thus, we complete the proof as follows:
\vspace{-0.1em}\begin{align}\label{eq_th3_pr2}
&t'-t = 1+ \nonumber \\ &\sum_{j=1}^{b_{\textup{et}}}\frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left [ 1-T\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right ) \right ] \nonumber \\
&=1+\sum_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \hspace{-1.0em} T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right ).
\end{align}
The first equality in (\ref{eq_th3_pr2}) is derived by observing that $\left [ 1-T\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right ) \right ]=1$ for $j \in \{1, 2, \dots, b_{\textup{st}}-1\}$. The second equality in (\ref{eq_th3_pr2}) follows from the fact that $T\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )=1$ for $j=b_{\textup{et}}$, and $\left [ 1-T\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right ) \right ] =T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )$~(from the definitions of both $T$ and $T_{\textup{c}}$). In short, $T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )$ is the complement function (binary inversion) of $T\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )$.
\end{IEEEproof}
\begin{table*}
\caption{Reduction in the number of matrices to operate on for different types of unlabeled GASTs.}
\vspace{-0.5em}
\centering
\scalebox{1.00}
{
\begin{tabular}{|c|c|}
\hline
Unlabeled GAST type & Reduction in the number of matrices ($t'-t$) \\
\hline
General & $t'-t=1+\sum\limits_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \sum\limits_{i_1=1}^{u^0} \sum\limits_{i_2=1}^{u_{i_1}^1} \sum\limits_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum\limits_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} T_{\textup{c}}\left ( u_{i_1,i_2, \dots, i_{j}}^{j} \right )$ \\
\hline
Same-size-WCMs & $t'-t=1+\sum\limits_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \sum\limits_{i_1=1}^{u^0} \sum\limits_{i_2=1}^{u_{i_1}^1} \sum\limits_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum\limits_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left ( 1 \right )$ \\
\hline
U-symmetric & $t'-t=1+\sum\limits_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \prod\limits_{i=1}^{j} u^{i-1}$ \\
\hline
U-symmetric, $(2 \gamma, 0, \gamma^2, 0)$, with $\gamma \in \{3, 4\}$ & $t'-t=1+\sum\limits_{j=1}^{\gamma-1}\frac{1}{j!} \prod\limits_{i=1}^{j} \left ( \gamma - (i-1) \right )^2$ \\
\hline
\end{tabular}}
\label{Table2}
\end{table*}
\begin{example}\label{ex_6}
Consider the $(6, 2, 5, 2)$ unlabeled GAST ($\gamma=3$) shown in Fig.~\ref{Figure_5}(a). Since $b_{\textup{st}}=1$, $b_{\textup{et}}=b_{\textup{ut}}=2$, and $u^0=3$, and aided by the unlabeled GAST tree in Fig.~\ref{Figure_5}(b), the complexity reduction (the reduction in the number of matrices to operate on) is (see (\ref{eq_diff_gen})):
\vspace{-0.1em}
\begin{equation}
t'-t=1+\frac{1}{1!} \sum_{i_1=1}^{3} T_{\textup{c}}\left ( u_{i_1}^{1} \right )=1+(1+0+1)=3. \nonumber
\end{equation}
In other words, the cardinality of the set $\mathcal{U}$ is $t'=5$, while from Example~\ref{ex_1}, the cardinality of the set $\mathcal{W}$ (the number of distinct WCMs) is $t=2$. Thus, the complexity reduction is $60\%$ for this configuration.
\end{example}
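To make the counting concrete, the following Python sketch (our own illustration; the nested-dictionary encoding of the tree is an assumption, not part of the framework) rebuilds the unlabeled GAST tree of Example~\ref{ex_6} and recovers $t'$, $t$, and $t'-t$ from it. The tree shape is inferred from $u^0=3$ and the values $T_{\textup{c}}(u_{i_1}^1)$ used above, and the division by $j!$ encodes the assumption, as in the proofs, that each unordered combination of removed rows appears $j!$ times at level $j$.
\begin{verbatim}
from math import factorial

# Unlabeled GAST tree of Example 6 (Fig. 5(b)): the root has
# u^0 = 3 children; the first and third each have one child.
tree = {"children": [
    {"children": [{"children": []}]},  # u_1^1 = 1
    {"children": []},                  # u_2^1 = 0
    {"children": [{"children": []}]},  # u_3^1 = 1
]}

def count_nodes(node, level=1, counts=None, leaves_only=False):
    # Count tree nodes (or only leaves) at each level j >= 1.
    if counts is None:
        counts = {}
    for child in node["children"]:
        if not leaves_only or not child["children"]:
            counts[level] = counts.get(level, 0) + 1
        count_nodes(child, level + 1, counts, leaves_only)
    return counts

# t' = |U|: 1 for the virtual root plus level-j nodes over j!.
nodes = count_nodes(tree)
t_prime = 1 + sum(c // factorial(j) for j, c in nodes.items())
# t = |W|: leaves at level j over j! (each WCM repeats j! times).
leaves = count_nodes(tree, leaves_only=True)
t = sum(c // factorial(j) for j, c in leaves.items())
print(t_prime, t, t_prime - t)  # 5 2 3, matching Example 6
\end{verbatim}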
Now, we study the case of same-size-WCMs unlabeled GASTs.
\begin{lemma}\label{lem_diff_samelen}
Given the unlabeled GAST tree, the difference between the cardinalities of the sets $\mathcal{U}$ and $\mathcal{W}$ (the reduction in the number of matrices to operate on) for a same-size-WCMs $(a, d_1, d_2, d_3)$ unlabeled GAST, with the parameter $b_{\textup{et}} > 0$, is:
\begin{equation}\label{eq_diff_samelen}
t'-t=1+\sum_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \sum_{i_1=1}^{u^0} \sum_{i_2=1}^{u_{i_1}^1} \sum_{i_3=1}^{u_{i_1,i_2}^2} \dots \sum_{i_j=1}^{u_{i_1,i_2, \dots, i_{j-1}}^{j-1}} \left ( 1 \right ),
\end{equation}
where $t'=\vert{\mathcal{U}}\vert$ and $t=\vert{\mathcal{W}}\vert$.
\end{lemma}
\begin{IEEEproof}
Knowing that the configuration is a same-size-WCMs unlabeled GAST does not simplify the expression of $t'$ in (\ref{eq_wz_count}). Thus, to compute $(t'-t)$, we subtract (\ref{eq_exact_samelen}) from (\ref{eq_wz_count}). The result of this subtraction is (\ref{eq_diff_samelen}).
\end{IEEEproof}
\begin{example}\label{ex_7}
Consider the $(7, 9, 13, 0)$ unlabeled GAST ($\gamma=5$) shown in Fig.~\ref{Figure_6}(a). Since $b_{\textup{et}}=b_{\textup{ut}}=2$ and $u^0=5$, and aided by the unlabeled GAST tree in Fig.~\ref{Figure_6}(b), the complexity reduction (the reduction in the number of matrices to operate on) is (see (\ref{eq_diff_samelen})):
\begin{equation}
t'-t=1+\frac{1}{1!} \sum_{i_1=1}^{5} \left ( 1 \right )=1+5=6. \nonumber
\end{equation}
In other words, the cardinality of the set $\mathcal{U}$ is $t'=11$, while from Example~\ref{ex_2}, the cardinality of the set $\mathcal{W}$ (the number of distinct WCMs) is $t=5$. Thus, the complexity reduction is over $50\%$.
\end{example}
\begin{corollary}\label{cor_diff_symm}
Given the unlabeled GAST tree, the difference between the cardinalities of the sets $\mathcal{U}$ and $\mathcal{W}$ (the reduction in the number of matrices to operate on) for a u-symmetric $(a, d_1, d_2, d_3)$ unlabeled GAST, with the parameter $b_{\textup{et}} > 0$, is:
\begin{equation}\label{eq_diff_symm}
t'-t=1+\sum_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \prod_{i=1}^{j} u^{i-1},
\end{equation}
where $t'=\vert{\mathcal{U}}\vert$ and $t=\vert{\mathcal{W}}\vert$.
\end{corollary}
\begin{IEEEproof}
We recall that the u-symmetric configuration is a special case of the same-size-WCMs configuration. Consequently, we can apply the same idea used in the proof of Corollary~\ref{lem_rel_symm} to (\ref{eq_diff_samelen}) in order to reach (\ref{eq_diff_symm}).
\end{IEEEproof}
\begin{example}\label{ex_8}
Consider the u-symmetric $(2\gamma, 0, \gamma^2, 0)$ unlabeled GAST. From (\ref{eq_uj_gen}), we know that $u^{j-1} = \left ( \gamma - (j-1) \right )^2$. Thus, from Corollary~\ref{cor_diff_symm}, the complexity reduction (the reduction in the number of matrices to operate on) is:
\begin{equation}
t'-t=1+\sum_{j=1}^{b_{\textup{et}}-1}\frac{1}{j!} \prod_{i=1}^{j} u^{i-1}=1+\sum_{j=1}^{\gamma-1}\frac{1}{j!} \prod_{i=1}^{j} \left ( \gamma - (i-1) \right )^2.
\end{equation}
For $\gamma=3$ (corresponding to the u-symmetric $(6, 0, 9, 0)$ unlabeled GAST), the complexity reduction is $1+\frac{1}{1!}(9)+\frac{1}{2!}(9)(4)=28$, which is over $80\%$ (i.e., $t'=34$ while $t=6$). For $\gamma=4$ (corresponding to the u-symmetric $(8, 0, 16, 0)$ unlabeled GAST), the complexity reduction is $1+\frac{1}{1!}(16)+\frac{1}{2!}(16)(9)+\frac{1}{3!}(16)(9)(4)=185$, which is about $90\%$ (i.e., $t'=209$ while $t=24$).
\end{example}
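As a quick numeric check of Example~\ref{ex_8}, the short Python sketch below (an illustrative helper, not part of the framework) evaluates the closed form above with $b_{\textup{et}}=\gamma$ and $u^{i-1}=\left ( \gamma - (i-1) \right )^2$; the integer division by $j!$ is exact for these configurations.
\begin{verbatim}
from math import factorial

def reduction_u_symmetric(gamma):
    # t' - t for the u-symmetric (2*gamma, 0, gamma^2, 0)
    # unlabeled GAST, per the closed form above.
    total = 1
    for j in range(1, gamma):
        prod = 1
        for i in range(1, j + 1):
            prod *= (gamma - i + 1) ** 2
        total += prod // factorial(j)  # exact here
    return total

print(reduction_u_symmetric(3))  # 28
print(reduction_u_symmetric(4))  # 185
\end{verbatim}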
We conclude Subsection~\ref{subsec_comp} with Table~\ref{Table2}, which lists the reduction, $t'-t$, in the number of matrices to operate on for different types of unlabeled GASTs.
\begin{remark}\label{rem_4}
The analysis in Section~\ref{sec_cch} focuses on the case where $b_{\textup{et}} > 0$ (thus, $u^0 > 0$) because if $b_{\textup{et}} = 0$ (i.e., $u^0 = 0$), then $t=1$ always. In other words, there exists only one matrix $\bold{W}^{\textup{z}}$. As a result, there exists only one WCM of size $(\ell-d_1) \times a$, which is the single matrix $\bold{W}^{\textup{z}}$ itself. Note that if $b_{\textup{et}} > 0$, the matrix $\bold{W}^{\textup{z}}$ of size $(\ell-d_1) \times a$ cannot be a WCM (this is the reason why we do not add $1$ in (\ref{eq_exact_gen}) as we do in (\ref{eq_wz_count})).
\end{remark}
\begin{remark}\label{rem_bas}
An analysis similar to what we presented in Section~\ref{sec_cch} can be done for BASTs.
\end{remark}
\section{More on How GASTs Are Removed}\label{sec_rem}
After demonstrating the complexity reduction achieved by operating only on the set of WCMs to remove a GAST, in this section, we provide more details on the removal of GASTs via their WCMs. We first investigate the dimension of the null space of a WCM. Then, we discuss the best that can be done to break the weight conditions of a short WCM. Finally, we discuss the exact minimum number of edge weight changes needed by the WCM framework to remove a GAST from the Tanner graph of an NB-LDPC code, and we provide a useful topological upper bound on that minimum. A further discussion about the null spaces of WCMs that belong to GASTs having $b=d_1$ is provided in Appendix~\ref{sec_appc}.
\subsection{The Dimension of the Null Space of a WCM}\label{subsec_dim}
A GAST is removed via breaking the weight conditions of all its WCMs, i.e., via satisfying (\ref{eq_rem_cond}) for all its WCMs. This breaking is performed by forcing the null spaces of these WCMs to have a particular property. Thus, studying the dimension of the null space of a WCM is critical to understand how GASTs are removed.
Consider a WCM $\bold{W}^{\textup{cm}}_h$, $1 \leq h \leq t$, of a GAST. Recall that $\mathcal{N}(\bold{M})$ is the null space of a matrix $\bold{M}$, and let $\textup{dim}(\mathcal{N}(\bold{M}))$ denote the dimension of the null space of a matrix $\bold{M}$. Moreover, let $G^{\textup{cm}}_h$ be the subgraph created by removing $b^{\textup{cm}}_h$ degree $\leq 2$ CNs from the GAST subgraph. The $b^{\textup{cm}}_h$ rows that are removed from $\bold{A}$ to reach $\bold{W}^{\textup{cm}}_h$ correspond to these $b^{\textup{cm}}_h$ CNs. Note that these CNs are the ones on the path from the root node until the tree node linked to $\bold{W}^{\textup{cm}}_h$ in the unlabeled GAST tree (the CNs marked as unsatisfied by \cite[Algorithm~1]{ahh_jsac} to extract $\bold{W}^{\textup{cm}}_h$). Moreover, let $\bold{M}(G)$ denote the adjacency matrix of a graph $G$.
\begin{theorem}\label{th_dim_null}
The dimension $p_h$ of the null space of a WCM $\bold{W}^{\textup{cm}}_h$, $1 \leq h \leq t$, of an $(a, b, d_1, d_2, d_3)$ GAST, given that this WCM has unbroken weight conditions, is given by:
\vspace{-0.1em}\begin{equation}\label{eq_dim_null}
p_h = \textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) = \sum_{k=1}^{\delta_h}\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) \right ) \geq \delta_h,
\end{equation}
where $\delta_h$ is the number of disconnected components in $G^{\textup{cm}}_h$, and $G^{\textup{disc}}_{h,k}$ is the $k$th disconnected component in $G^{\textup{cm}}_h$, with $1 \leq k \leq \delta_h$.
\end{theorem}
\begin{IEEEproof}
It is known from graph theory that if graph $G^{\textup{cm}}_h$ has $\delta_h$ disconnected components defined as $G^{\textup{disc}}_{h,k}$, $1 \leq k \leq \delta_h$, then:
\vspace{-0.4em}\begin{align}\label{eq_th4_pr1}
p_h &= \textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) = \textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{cm}}_h) \right ) \right ) \nonumber \\ &= \sum_{k=1}^{\delta_h}\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) \right ).
\end{align}
Note that by definition of $G^{\textup{cm}}_h$, $\bold{W}^{\textup{cm}}_h=\bold{M}(G^{\textup{cm}}_h)$.
Then, we prove the inequality $p_h \geq \delta_h$. If $\exists \text{ } G^{\textup{disc}}_{h,k}$ s.t. $\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) \right ) = 0$ (which means $\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) = \{\bold{0}\}$), then it is impossible to have a vector $\bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_a]^\textup{T} \in \mathcal{N}\left (\bold{M}(G^{\textup{cm}}_h) \right )=\mathcal{N}(\bold{W}^{\textup{cm}}_h)$ s.t. $v_f \neq 0$, $\forall f \in \{1, 2, \dots, a\}$, where $a$ is the size of the GAST. Thus, in order to have a WCM that has unbroken weight conditions, we must have $\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) \right ) > 0$, $\forall k \in \{1, 2, \dots, \delta_h\}$. In that case, $p_h=\sum_{k=1}^{\delta_h}\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) \right ) \geq \delta_h$, which completes the proof of Theorem~\ref{th_dim_null}.
\end{IEEEproof}
\begin{remark}\label{rem_5}
Consider a WCM $\bold{W}^{\textup{cm}}_h$ that has unbroken weight conditions. In the majority of the GASTs we have studied, if $G^{\textup{cm}}_h$ (the graph corresponding to $\bold{W}^{\textup{cm}}_h$) has $\delta_h=1$ (the graph is connected), then $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) =1$. Similarly, we have typically observed that if $\delta_h > 1$, then $\forall G^{\textup{disc}}_{h,k}$, $1 \leq k \leq \delta_h$, $\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k}) \right ) \right ) = 1$. In other words, in most of the cases we have seen, $p_h = \textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) = \delta_h$. Having said that, we have already encountered a few examples where $p_h = \textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) > \delta_h$ (see the next subsection).
\end{remark}
Typically, if $\delta_h = 1$, breaking the weight conditions of a WCM $\bold{W}^{\textup{cm}}_h$ yields $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) = 0$ (there are a few exceptions to that). Contrarily, it is important to note that if $\delta_h > 1$ (which again means the graph corresponding to $\bold{W}^{\textup{cm}}_h$ has more than one disconnected component), the weight conditions of this WCM can be broken while $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) > 0$. This situation occurs if $\exists$ $G^{\textup{disc}}_{h,k_1}$ and~$G^{\textup{disc}}_{h,k_2}$, $1 \leq k_1, k_2 \leq \delta_h$, s.t. $\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k_1}) \right ) \right ) = 0$ while $\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{h,k_2}) \right ) \right ) > 0$. Thus, breaking the weight conditions of such a WCM by making $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) = 0$ (if possible) is, albeit sufficient, not necessary. A conceptually-similar observation will be presented in the next subsection. We present Example~\ref{ex_9} to illustrate Theorem~\ref{th_dim_null} as well as this discussion.
\begin{example}\label{ex_9}
Consider the $(6, 0, 0, 9, 0)$ GAST ($\gamma=3$) over GF($4$), where GF($4$) $=\{0, 1 , \alpha, \alpha^2\}$ and $\alpha$ is a primitive element (a root of the primitive polynomial $\rho(x)=x^2+x+1$, i.e., $\alpha^2=\alpha+1$), that is shown in Fig.~\ref{Figure_9}(a). The matrix $\bold{A}$ of this configuration is:
\vspace{-1.0em}
\begin{gather*}\label{ex_wcms}
\begin{matrix}
\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ } v_1 \text{ } & v_2 \text{ } & v_3 \text{ } & v_4 & v_5 & v_6 \vspace{-0.3em}
\end{matrix}
\\
\bold{A}=
\begin{matrix}
c_1 \vspace{-0.0em}\\
c_2 \vspace{-0.0em}\\
c_3 \vspace{-0.0em}\\
c_4 \vspace{-0.0em}\\
c_5 \vspace{-0.0em}\\
c_6 \vspace{-0.0em}\\
c_7 \vspace{-0.0em}\\
c_8 \vspace{-0.0em}\\
c_9 \\
\end{matrix}
\begin{bmatrix}
w_{1,1} & \alpha & 0 & 0 & 0 & 0 \vspace{-0.0em}\\
0 & \alpha^2 & \alpha^2 & 0 & 0 & 0 \vspace{-0.0em}\\
0 & 0 & 1 & \alpha^2 & 0 & 0 \vspace{-0.0em}\\
0 & 0 & 0 & \alpha^2 & 1 & 0 \vspace{-0.0em}\\
0 & 0 & 0 & 0 & 1 & 1 \vspace{-0.0em}\\
w_{6,1} & 0 & 0 & 0 & 0 & \alpha \vspace{-0.0em}\\
0 & \alpha & 0 & 1 & 0 & 0 \vspace{-0.0em}\\
\alpha & 0 & 0 & 0 & \alpha^2 & 0 \vspace{-0.0em}\\
0 & 0 & 1 & 0 & 0 & 1
\end{bmatrix}.
\end{gather*}
For the original configuration, we assume that $w_{1,1}=w_{6,1} \allowbreak =1$. The unlabeled GAST tree of this configuration reveals that it is neither u-symmetric nor same-size-WCMs. The configuration has $10$ WCMs (of different sizes), extracted by removing the rows of the following groups of CNs from $\bold{A}$: $\{(c_1, c_3, c_5), (c_1, c_4, c_9), (c_2, c_4, c_6), (c_2, c_5), (c_2, c_8), (c_3, c_6), \allowbreak (c_3, c_8), (c_5, c_7), (c_6, c_7), (c_7, c_8, c_9)\}$. We index these groups of CNs (and consequently, the resulting WCMs) by $h$, $1 \leq h \leq t=10$. The WCM of interest in this example is $\bold{W}^{\textup{cm}}_2$, which is extracted by removing the rows of $(c_1, c_4, c_9)$ from $\bold{A}$. The graph corresponding to $\bold{W}^{\textup{cm}}_2$, which is $G^{\textup{cm}}_2$, is shown in Fig.~\ref{Figure_9}(b). Note that this graph has $\delta_2=2$ disconnected components. For the given edge weight assignment, $\bold{W}^{\textup{cm}}_2$ (as well as all the remaining $9$ WCMs) has unbroken weight conditions. Thus, according to Theorem~\ref{th_dim_null}, $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_2) \right ) = \sum_{k=1}^{2}\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{2,k}) \right ) \right ) \geq 2$ must be satisfied. Solving for the null space of $\bold{W}^{\textup{cm}}_2$ yields:
\begin{equation}\label{eq_null_od2}
\mathcal{N}(\bold{W}^{\textup{cm}}_2)=\textup{span}\{[\alpha \text{ } 0 \text{ } 0 \text{ } 0 \text{ } 1 \text{ } 1]^\textup{T}, [0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T}\},
\end{equation}
which means that $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_2) \right ) = 2$, and the reason is that $\textup{dim}\left (\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{2,k}) \right ) \right ) = 1$, $\forall k \in \{1, 2\}$.~Note that $\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{2,1}) \right ) = \textup{span}\{[\alpha \text{ } 1 \text{ } 1]^\textup{T}\}$, where $G^{\textup{disc}}_{2,1}$ is the subgraph grouping $\{v_1, v_5, v_6\}$ in Fig.~\ref{Figure_9}(b), while $\mathcal{N}\left (\bold{M}(G^{\textup{disc}}_{2,2}) \right ) = \textup{span}\{[1 \text{ } 1 \text{ } \alpha]^\textup{T}\}$, where $G^{\textup{disc}}_{2,2}$ is the subgraph grouping $\{v_2, v_3, v_4\}$ in Fig.~\ref{Figure_9}(b). Observe that the existence of the vector:
\begin{align}
\bold{v}&=[\alpha \text{ } 0 \text{ } 0 \text{ } 0 \text{ } 1 \text{ } 1]^\textup{T} + [0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T} \nonumber \\ &=[\alpha \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 1 \text{ } 1]^\textup{T} \in \mathcal{N}(\bold{W}^{\textup{cm}}_2)
\end{align}
verifies that the weight conditions of $\bold{W}^{\textup{cm}}_2$ are unbroken.
Now, assume that in the process of removing the GAST, we break the weight conditions of $\bold{W}^{\textup{cm}}_2$ via the following set of a single edge weight change: $\{w_{6,1}:1 \rightarrow \alpha^2\}$. This change results in breaking the weight conditions of $\bold{W}^{\textup{cm}}_2$, i.e., $\nexists \text{ } \bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_6] \in \mathcal{N}(\bold{W}^{\textup{cm}}_2)$ s.t. $v_f \neq 0$, $\forall f \in \{1, 2, \dots, 6\}$. However, $\mathcal{N}(\bold{W}^{\textup{cm}}_2) = \textup{span}\{[0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T}\} \neq \{\bold{0}\}$, i.e., $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_2) \right ) = 1$ (it was originally $2$). This is an example of how the weight conditions of a WCM that has a corresponding graph with $\delta_h > 1$ can be broken while $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) > 0$ (for $h=2$ here). Obviously, it is possible to make another edge weight change for an edge in $G^{\textup{disc}}_{2,2}$ to make $\mathcal{N}(\bold{W}^{\textup{cm}}_2) = \{\bold{0}\}$. However, this is by no means necessary for the GAST removal process.
\end{example}
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.06in 0.0in 0.07in 0.0in},clip,width=3.5in]{Figure_9.pdf}\vspace{-0.5em}
\text{\hspace{0.6em} \footnotesize{(a) \hspace{14em} (b) \hspace{1em}}}
\caption{(a) A $(6, 0, 0, 9, 0)$ GAST for $\gamma=3$. (b) The graph created by removing $(c_1, c_4, c_9)$ from the GAST graph. GF($4$) is assumed.}
\label{Figure_9}
\vspace{-0.5em}
\end{figure}
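Theorem~\ref{th_dim_null} and Example~\ref{ex_9} can be verified numerically. The Python sketch below is our own illustration: the encoding $\{0, 1, 2, 3\} \leftrightarrow \{0, 1, \alpha, \alpha^2\}$ (under which GF($4$) addition is bitwise XOR) and all helper names are assumptions, not part of the WCM framework. It computes the null-space dimension of $\bold{W}^{\textup{cm}}_2$ by Gaussian elimination and rank-nullity, before and after the change $\{w_{6,1}:1 \rightarrow \alpha^2\}$.
\begin{verbatim}
# GF(4) tables for the primitive polynomial x^2 + x + 1.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
INV = [0, 1, 3, 2]  # inverses of 1, alpha, alpha^2 (0 is a filler)

def null_space_dim(rows, n):
    # dim N(M) = n - rank(M), rank found by row reduction over GF(4).
    m = [row[:] for row in rows]
    rank, col = 0, 0
    while rank < len(m) and col < n:
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        s = INV[m[rank][col]]
        m[rank] = [MUL[s][x] for x in m[rank]]  # scale pivot row to 1
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [x ^ MUL[f][y] for x, y in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return n - rank

# W_2^cm of Example 9: rows c2, c3, c5, c6, c7, c8 of A, w_{6,1} = 1.
W2 = [[0, 3, 3, 0, 0, 0],
      [0, 0, 1, 3, 0, 0],
      [0, 0, 0, 0, 1, 1],
      [1, 0, 0, 0, 0, 2],
      [0, 2, 0, 1, 0, 0],
      [2, 0, 0, 0, 3, 0]]
print(null_space_dim(W2, 6))  # 2 = delta_2, as in Example 9
W2[3][0] = 3                  # the change {w_{6,1}: 1 -> alpha^2}
print(null_space_dim(W2, 6))  # 1, matching the example
\end{verbatim}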
\subsection{Breaking the Weight Conditions of Short WCMs}\label{subsec_break}
In this subsection, we discuss the best that can be done to break the weight conditions of a short WCM. The following lemma states this result.
\begin{lemma}\label{th_short_wcm}
The null space of a short WCM $\bold{W}^{\textup{cm}}_h$ (a WCM that has fewer rows than columns) after a successful removal process for the $(a, b, d_1, d_2, d_3)$ GAST to which this WCM belongs satisfies the following two conditions\footnote{Note that Lemma~\ref{th_short_wcm} also applies to any WCM $\bold{W}^{\textup{cm}}_h$ that has $G^{\textup{cm}}_h$ with $\delta_h > 1$ and at least one of the disconnected components having a short adjacency matrix (a very rare case).}:
\begin{enumerate}
\item Its dimension is strictly more than $0$, i.e., $p_h = \textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) > 0$.
\item For any $\bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_a]^\textup{T} \in \mathcal{N}(\bold{W}^{\textup{cm}}_h)$, where $a$ is the size of the GAST, the number of non-zero elements in $\bold{v}$ is strictly less than $a$, i.e., $\left \| \bold{v} \right \|_0 < a$.
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
Since the number of rows is less than the number of columns in this WCM, the WCM cannot have full column rank. Thus, $\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \neq \{\bold{0}\}$, which means $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) > 0$, which proves the first condition in the lemma. Moreover, if a GAST is removed successfully, this implies that each WCM in the set $\mathcal{W}$ associated with that GAST has broken weight conditions. Thus, the second condition in the lemma is satisfied for the short WCM.
\end{IEEEproof}
Lemma~\ref{th_short_wcm} further emphasizes the fact that the weight conditions of a WCM $\bold{W}^{\textup{cm}}_h$ can be broken while $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) > 0$ (i.e., $\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \neq \{\bold{0}\}$). One way this case can happen is if $\delta_h > 1$ (as discussed in the previous subsection). Another way is if the WCM is short, even with $\delta_h=1$. For many short WCMs, the reason why this case occurs is that before breaking the weight conditions of a short WCM, it typically has $p_h=\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) > \delta_h$. Here, we are more interested in short WCMs with $\delta_h = 1$. The difference between the two ways is that if $\delta_h > 1$ and no disconnected component has a short adjacency matrix, we can still break the weight conditions of the WCM by making $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) = 0$. However, such processing is not necessary, and it would require more edge weight changes than the minimum needed. Contrarily, if the WCM is short, it is impossible to break the weight conditions by making $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) = 0$, and the best we can do is what is described in Lemma~\ref{th_short_wcm}. The following example demonstrates Lemma~\ref{th_short_wcm}.
\begin{figure}
\vspace{-0.7em}
\center
\includegraphics[trim={0.08in 0.0in 0.07in 0.0in},clip,width=3.5in]{Figure_10.pdf}\vspace{-0.5em}
\text{\hspace{0.6em} \footnotesize{(a) \hspace{14em} (b) \hspace{1em}}}
\caption{(a) A $(6, 2, 2, 5, 2)$ GAST for $\gamma=3$. (b) The subgraph created by removing $(c_2, c_4, c_8, c_9)$ from the GAST subgraph. GF($4$) is assumed.}
\label{Figure_10}
\vspace{-0.5em}
\end{figure}
\vspace{-0.1em}
\begin{example}\label{ex_10}
Consider the $(6, 2, 2, 5, 2)$ GAST ($\gamma=3$) over GF($4$), where GF($4$) $=\{0, 1 , \alpha, \alpha^2\}$ and $\alpha$ is a primitive element, that is shown in Fig.~\ref{Figure_10}(a). The matrix $\bold{A}$ of this configuration is:
\vspace{-0.5em}
\begin{gather*}
\begin{matrix}
\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ } v_1 & v_2 \text{ } & v_3 \text{ } & v_4 & v_5 & v_6 \vspace{-0.3em}
\end{matrix}
\\
\bold{A}=
\begin{matrix}
c_1 \vspace{-0.0em}\\
c_2 \vspace{-0.0em}\\
c_3 \vspace{-0.0em}\\
c_4 \vspace{-0.0em}\\
c_5 \vspace{-0.0em}\\
c_6 \vspace{-0.0em}\\
c_7 \vspace{-0.0em}\\
c_8 \vspace{-0.0em}\\
c_9 \\
\end{matrix}
\begin{bmatrix}
0 & w_{1,2} & \alpha^2 & 0 & 0 & 0 \vspace{-0.0em}\\
0 & 0 & 1 & 1 & 0 & 0 \vspace{-0.0em}\\
0 & 0 & 0 & \alpha & 1 & 0 \vspace{-0.0em}\\
0 & 0 & 0 & 0 & 1 & 1 \vspace{-0.0em}\\
1 & 0 & 0 & 0 & 0 & \alpha \vspace{-0.0em}\\
\alpha & 0 & \alpha & 0 & \alpha & 0 \vspace{-0.0em}\\
0 & \alpha & 0 & \alpha^2 & 0 & \alpha^2 \vspace{-0.0em}\\
0 & \alpha^2 & 0 & 0 & 0 & 0 \vspace{-0.0em}\\
1 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.
\end{gather*}
For the original configuration, we assume that $w_{1,2}=\alpha^2$. From Example~\ref{ex_1}, this configuration has $2$ WCMs, extracted by removing the rows of the following groups of CNs from $\bold{A}$: $\{(c_3,\mathcal{O}_\textup{sg}), (c_2, c_4,\mathcal{O}_\textup{sg})\}$, where $\mathcal{O}_\textup{sg}$ is $(c_8, c_9)$. We index these groups of CNs (and consequently, the resulting WCMs) by $h$, $1 \leq h \leq t=2$. {The WCM of interest in this example is $\bold{W}^{\textup{cm}}_2$, which is extracted by removing the rows of $(c_2, c_4, c_8, c_9)$ from $\bold{A}$.} The graph corresponding to $\bold{W}^{\textup{cm}}_2$, which is $G^{\textup{cm}}_2$, is shown in Fig.~\ref{Figure_10}(b). Note that $\bold{W}^{\textup{cm}}_2$ is of size $5 \times 6$ (a short matrix). For the given edge weight assignment, $\bold{W}^{\textup{cm}}_2$ (as well as $\bold{W}^{\textup{cm}}_1$) has unbroken weight conditions. Solving for the null space of $\bold{W}^{\textup{cm}}_2$ yields the following:
\begin{equation}\label{eq_null_short}
\mathcal{N}(\bold{W}^{\textup{cm}}_2)=\textup{span}\{[0 \text{ } 1 \text{ } 1 \text{ } \alpha^2 \text{ } 1 \text{ } 0]^\textup{T}, [1 \text{ } 1 \text{ } 1 \text{ } 0 \text{ } 0 \text{ } \alpha^2]^\textup{T}\}.
\end{equation}
This is one of the cases where we have $\textup{dim}\left (\mathcal{N}\left (\bold{W}^{\textup{cm}}_h \right ) \right ) > 1$ (for $h=2$) with $\delta_h = 1$ (the graph corresponding to $\bold{W}^{\textup{cm}}_2$ is connected). Observe also that the existence of the vector:
\begin{align}
\bold{v}&=[0 \text{ } 1 \text{ } 1 \text{ } \alpha^2 \text{ } 1 \text{ } 0]^\textup{T} + \alpha[1 \text{ } 1 \text{ } 1 \text{ } 0 \text{ } 0 \text{ } \alpha^2]^\textup{T} \nonumber \\ &= [\alpha \text{ } \alpha^2 \text{ } \alpha^2 \text{ } \alpha^2 \text{ } 1 \text{ } 1]^\textup{T} \in \mathcal{N}(\bold{W}^{\textup{cm}}_2)
\end{align}
verifies that the weight conditions of $\bold{W}^{\textup{cm}}_2$ are unbroken.
Now, assume that in the process of removing the GAST, we break the weight conditions of $\bold{W}^{\textup{cm}}_2$ via the following set of a single edge weight change: $\{w_{1,2}:\alpha^2 \rightarrow \alpha\}$. This change results in breaking the weight conditions of $\bold{W}^{\textup{cm}}_2$, i.e., $\nexists \text{ } \bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_6] \in \mathcal{N}(\bold{W}^{\textup{cm}}_2)$ s.t. $v_f \neq 0$, $\forall f \in \{1, 2, \dots, 6\}$. However, $\mathcal{N}(\bold{W}^{\textup{cm}}_2) = \textup{span}\{[1 \text{ } 0 \text{ } 0 \text{ } \alpha^2 \text{ } 1 \text{ } \alpha^2]^\textup{T}\} \neq \{\bold{0}\}$, i.e., $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_2) \right ) = 1$ (it was originally $2$). This example illustrates that the weight conditions of the short WCM can only be broken with $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_2) \right ) > 0$ regardless of the edge weight change(s) we perform.
\end{example}
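The weight-condition test itself is also easy to make concrete. Reusing the GF($4$) table \texttt{MUL} and XOR addition from the sketch in Subsection~\ref{subsec_dim}, the brute-force check below (illustrative only; feasible here because $(q-1)^a = 3^6$ is tiny) asks whether a WCM has a null vector with all $a$ entries nonzero, i.e., whether its weight conditions are unbroken. Run on $\bold{W}^{\textup{cm}}_2$ of Example~\ref{ex_10}, it reports unbroken conditions before the change $\{w_{1,2}:\alpha^2 \rightarrow \alpha\}$ and broken conditions after it, even though the null space remains nontrivial.
\begin{verbatim}
from functools import reduce
from itertools import product

def weight_conditions_unbroken(W, n):
    # True iff some v in GF(4)^n with all entries nonzero has Wv = 0.
    for v in product([1, 2, 3], repeat=n):
        if all(reduce(lambda x, y: x ^ y,
                      (MUL[w][x] for w, x in zip(row, v)), 0) == 0
               for row in W):
            return True
    return False

# W_2^cm of Example 10: rows c1, c3, c5, c6, c7 of A, w_{1,2} = alpha^2.
W2 = [[0, 3, 3, 0, 0, 0],
      [0, 0, 0, 2, 1, 0],
      [1, 0, 0, 0, 0, 2],
      [2, 0, 2, 0, 2, 0],
      [0, 2, 0, 3, 0, 3]]
print(weight_conditions_unbroken(W2, 6))  # True (unbroken)
W2[0][1] = 2                              # {w_{1,2}: alpha^2 -> alpha}
print(weight_conditions_unbroken(W2, 6))  # False (broken), dim still 1
\end{verbatim}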
\vspace{-0.2em}
\subsection{The Number of Edge Weight Changes Needed}\label{subsec_num}
In this subsection, we discuss the minimum number of edge weight changes needed, as well as how to select these edge weight changes, in order to achieve a successful removal of the problematic object. Recall that we need to break the weight conditions of all the WCMs of a GAST in order to remove the GAST.
\begin{lemma}\label{lem_emin_gas}
The minimum number of edge weight changes (with respect to the original configuration) needed to remove an $(a, b, b_2, d_1, d_2, d_3)$ GAS (convert it into a non-AS, i.e., a non-GAS) is given by:
\begin{equation}\label{eq_emin_gas}
E_{\textup{GAS},\textup{min}}=g-b_{\textup{vn},\textup{max}}+1,
\end{equation}
where $g=\left \lfloor \frac{\gamma-1}{2} \right \rfloor$, and $b_{\textup{vn},\textup{max}}$ is the maximum number of existing unsatisfied CNs per VN in the GAS. A topological upper bound on that minimum is given by:
\begin{equation}\label{eq_emin_bnd}
E_{\textup{GAS},\textup{min}} \leq g-d_{1,\textup{vn},\textup{max}}+1,
\end{equation}
where $d_{1,\textup{vn},\textup{max}}$ is the maximum number of existing degree-$1$ CNs per VN in the GAS.
\end{lemma}
\begin{IEEEproof}
The set of GASs is simply the set of absorbing sets (ASs). Thus, the proof of (\ref{eq_emin_gas}) is exactly the same as the proof of $E_{\textup{AS},\textup{min}}$ in \cite[Lemma~2]{ahh_bas}.
Now, we prove the upper bound in (\ref{eq_emin_bnd}). Recall that degree-$1$ CNs are always unsatisfied. Thus, irrespective of whether the VN that has the maximum number of existing unsatisfied CNs is the same as the VN that has the maximum number of existing degree-$1$ CNs or not, the following inequality is always satisfied:
\begin{equation}\label{eq_lem6_pr}
b_{\textup{vn},\textup{max}} \geq d_{1,\textup{vn},\textup{max}}.
\end{equation}
Substituting (\ref{eq_lem6_pr}) in (\ref{eq_emin_gas}) gives (\ref{eq_emin_bnd}), and completes the proof.
\end{IEEEproof}
While theoretically a GAST can be removed by forcing a single degree $> 2$ CN (if any) to be unsatisfied via a single edge weight change, this is not the strategy we follow. The reason is that the described process can result in another GAS with a degree $> 2$ unsatisfied CN ($b > d_1+b_2$). Although GASs with $b > d_1+b_2$ are generally less harmful than GASTs, it is not preferable to remove GASTs by converting some of them into other types of GASs. Thus, we remove a GAST by performing $E_{\textup{GAST},\textup{min}}=E_{\textup{GAS},\textup{min}}$ edge weight changes for edges connected to only degree-$2$ CNs (all the weights of edges connected to degree $> 2$ CNs remain untouched). In other words, (\ref{eq_emin_gas}) and (\ref{eq_emin_bnd}) are applicable to both GASTs and GASs. Furthermore, the $E_{\textup{GAST},\textup{min}}$ we use in \cite[Algorithm~2]{ahh_jsac} (see Appendix~\ref{sec_appb}) is given by (\ref{eq_emin_gas}) and bounded by (\ref{eq_emin_bnd}).
The upper bound in (\ref{eq_emin_bnd}) is purely topological (determined from the unlabeled GAST), i.e., it does not require any knowledge of $b$ of the GAST being processed (nor, consequently, of $b_{\textup{vn},\textup{max}}$). The importance of this topological bound will be illustrated shortly.
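For concreteness, (\ref{eq_emin_gas}) and its topological upper bound (\ref{eq_emin_bnd}) reduce to the following two illustrative one-line helpers (the counts $b_{\textup{vn},\textup{max}}$ and $d_{1,\textup{vn},\textup{max}}$ are read off the configuration; the function names are our own):
\begin{verbatim}
def e_gast_min(gamma, b_vn_max):
    # Minimum number of edge weight changes needed for removal.
    return (gamma - 1) // 2 - b_vn_max + 1

def e_gast_min_bound(gamma, d1_vn_max):
    # Purely topological upper bound on that minimum.
    return (gamma - 1) // 2 - d1_vn_max + 1

# The (6, 0, 0, 9, 0) GAST of Fig. 9(a) has gamma = 3 and
# b_vn_max = d1_vn_max = 0, so both evaluate to 2 (see Example 12).
print(e_gast_min(3, 0), e_gast_min_bound(3, 0))  # 2 2
\end{verbatim}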
A useful definition and a subsequent corollary, which are used to simplify the process of selecting $E_{\textup{GAST},\textup{min}}$ edge weights to change, are proposed below.
\begin{definition}
A \textbf{borderline VN} in an $(a, b, d_1, d_2, d_3)$ GAST in a code with column weight $\gamma$ is a VN that is connected to exactly $g=\left \lfloor \frac{\gamma-1}{2} \right \rfloor$ degree-$1$ CNs.
\end{definition}
\begin{corollary}\label{cor_emin_1}
An $(a, b, d_1, d_2, d_3)$ GAST that has at least one borderline VN has $E_{\textup{GAST},\textup{min}}=1$, and the upper bound on $E_{\textup{GAST},\textup{min}}$ is also $1$.
\end{corollary}
\begin{IEEEproof}
A borderline VN already has the maximum number of unsatisfied CNs a VN can have in a GAST (or in an AS in general), which is $g=\left \lfloor \frac{\gamma-1}{2} \right \rfloor$. Consequently, a GAST with at least one borderline VN has:
\begin{equation}\label{eq_cor6_pr}
b_{\textup{vn},\textup{max}} = d_{1,\textup{vn},\textup{max}} = g = \left \lfloor \frac{\gamma-1}{2} \right \rfloor.
\end{equation}
Substituting (\ref{eq_cor6_pr}) in (\ref{eq_emin_gas}) gives $E_{\textup{GAST},\textup{min}}=1$. Noting that $b_{\textup{vn},\textup{max}} = d_{1,\textup{vn},\textup{max}}$ proves that the upper bound is also $1$ (by substituting in (\ref{eq_emin_bnd})).
\end{IEEEproof}
\begin{remark}\label{rem_7}
Lemma~\ref{lem_emin_gas}, Corollary~\ref{cor_emin_1}, and the discussion below give the minimum number of edge weight changes in addition to the specification of which edge weights need to be changed in order to remove a GAST. However, they do not determine what these particular changes should be, i.e., they do not specify the new values of the edge weights to be changed. Specifying the new values of the edge weights is performed by the WCM framework via checking the null spaces of all WCMs of the GAST being processed, and making sure that all the WCMs have broken weight conditions after the edge weight changes (see also \cite[Theorem~3]{ahh_jsac}, \cite[Algorithm~2]{ahh_jsac}, and Example~\ref{ex_12}). This justification is the reason why the word ``\textbf{properly}'' is used to describe the edge weight changes in this subsection.
\end{remark}
It can be concluded from Corollary~\ref{cor_emin_1} that any degree-$2$ unsatisfied CN connected to a borderline VN results in an object that is not a GAST. Thus, assuming that the object being processed is identified to be a GAST via the WCM framework (at least one of its WCMs has unbroken weight conditions), we select the edge weights to be changed based on the following two cases\footnote{Note that these two items are for a stand-alone GAST. In a few cases, we need more than the minimum number of edge weight changes to remove a GAST because of previously removed GASTs that share edges with the GAST being processed (or other reasons).
}:
\begin{enumerate}
\item If the GAST has at least one borderline VN ($E_{\textup{GAST},\textup{min}}=1$), then we \textbf{properly} change the weight of an edge connected to a degree-$2$ CN connected to any of the borderline VNs. If every VN in the GAST is borderline, then we change the weight of an edge connected to any degree-$2$ CN.
\item If the GAST does not have any borderline VNs ($E_{\textup{GAST},\textup{min}} \geq 1$), then we determine the VN(s) that has (have) the maximum number, $d_{1,\textup{vn},\textup{max}}$, of degree-$1$ CNs connected to it (them). Then we \textbf{properly} change the weights of a maximum of $(g-d_{1,\textup{vn},\textup{max}}+1)$ edges connected to different degree-$2$ CNs connected to a particular VN of those having $d_{1,\textup{vn},\textup{max}}$ neighboring degree-$1$ CNs.
\end{enumerate}
To relate the above analysis to the WCMs, recall that every CN in a GAST has a corresponding row in the matrix $\bold{A}$ of this GAST. The GAST is removed by breaking the weight conditions of all its WCMs. To achieve this, we operate on a set of rows in $\bold{A}$ that has the minimum cardinality, whose rows correspond to degree-$2$ CNs, and that has the property that every WCM has at least one row in that set. Any set of $(g-d_{1,f}+1)$ rows in $\bold{A}$ satisfies the stated property if they correspond to degree-$2$ CNs connected to the same VN, $v_f$, $f \in \{1, 2, \dots, a\}$, where $d_{1,f}$ is the number of degree-$1$ CNs connected to VN $v_f$. The reason is that we cannot simultaneously remove $(g-d_{1,f}+1)$ rows of degree-$2$ CNs connected to VN $v_f$ from $\bold{A}$ to extract a WCM, since the resulting matrix would then not be a valid $\bold{W}^{\textup{z}}$. Thus, a set of $(g-d_{1,\textup{vn},\textup{max}}+1)$ rows of degree-$2$ CNs connected to the same VN that achieves $d_{1,\textup{vn},\textup{max}}$ is indeed a set of minimum cardinality with the property that every WCM has at least one row in that set. Consequently, the topological upper bound in (\ref{eq_emin_bnd}) provides the cardinality of that set of rows satisfying the stated property. Properly operating on a maximum of $(g-d_{1,\textup{vn},\textup{max}}+1)$ weights in these rows (only one weight per row) is what is needed to remove the GAST. Examples~\ref{ex_12} and \ref{ex_13} illustrate the process performed by the WCM framework to remove a stand-alone GAST.
\begin{remark}\label{rem_8}
Typically, we only need to perform $(g-b_{\textup{vn},\textup{max}}+1) \leq (g-d_{1,\textup{vn},\textup{max}}+1)$ edge weight changes to remove the GAST. When $b_{\textup{vn},\textup{max}} \neq d_{1,\textup{vn},\textup{max}}$, the number of WCMs with unbroken weight conditions becomes strictly less than $t$, and only $(g-b_{\textup{vn},\textup{max}}+1)$ rows are enough to establish a set of minimum cardinality with the property that every WCM with unbroken weight conditions has at least one row in that set.
\end{remark}
\begin{example}\label{ex_12}
We again discuss the $(6, 0, 0, 9, 0)$ GAST ($\gamma=3$) in Fig.~\ref{Figure_9}(a), with $w_{1,1}=w_{6,1}=1$. The null spaces of the $10$ WCMs of that GAST are given in Example~\ref{ex_11}, (\ref{eq_ex11_1}) (see Example~\ref{ex_9} for how the WCMs are extracted). Since $b_{\textup{vn},\textup{max}}=d_{1,\textup{vn},\textup{max}}=0$, (\ref{eq_emin_gas}) gives $E_{\textup{GAST},\textup{min}}=\left \lfloor \frac{3-1}{2} \right \rfloor-0+1=2$ (same as the upper bound). Given that all VNs have $0$ degree-$1$ neighboring CNs, any VN can be selected. Suppose that $v_1$ is selected. Each of the $10$ WCMs has at least one of the two rows corresponding to $c_1$ and $c_6$ (both are connected to $v_1$) in $\bold{A}$. Thus, we consider the following two sets of edge weight changes for $w_{1,1}$ and $w_{6,1}$ (cardinality $2$). The first set is $\{w_{1,1}:1 \rightarrow \alpha, w_{6,1}:1 \rightarrow \alpha\}$. Using this set of edge weight changes, the null spaces of the $10$ WCMs become:
\begin{align}\label{eq_ex12_1}
\mathcal{N}(\bold{W}^{\textup{cm}}_1) &= \mathcal{N}(\bold{W}^{\textup{cm}}_3) = \mathcal{N}(\bold{W}^{\textup{cm}}_4) \nonumber \\ &= \mathcal{N}(\bold{W}^{\textup{cm}}_6) = \mathcal{N}(\bold{W}^{\textup{cm}}_8) = \mathcal{N}(\bold{W}^{\textup{cm}}_9) = \{\bold{0}\}, \nonumber \\
\mathcal{N}(\bold{W}^{\textup{cm}}_2)&=\textup{span}\{[0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T}\}, \text{ and} \nonumber \\
\mathcal{N}(\bold{W}^{\textup{cm}}_5)&= \mathcal{N}(\bold{W}^{\textup{cm}}_7) = \mathcal{N}(\bold{W}^{\textup{cm}}_{10}) = \textup{span}\{[1 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 1 \text{ } 1]^\textup{T}\}.
\end{align}
Clearly, the GAST is not removed as there are $3$ WCMs with unbroken weight conditions: $\bold{W}^{\textup{cm}}_5$, $\bold{W}^{\textup{cm}}_7$, and $\bold{W}^{\textup{cm}}_{10}$. The second set is $\{w_{1,1}:1 \rightarrow \alpha, w_{6,1}:1 \rightarrow \alpha^2\}$. Using this set of edge weight changes, the null spaces of the $10$ WCMs become:
\vspace{-0.1em}\begin{align}\label{eq_ex12_2}
\mathcal{N}(\bold{W}^{\textup{cm}}_1) &= \mathcal{N}(\bold{W}^{\textup{cm}}_3) = \mathcal{N}(\bold{W}^{\textup{cm}}_4) \nonumber \\ &= \mathcal{N}(\bold{W}^{\textup{cm}}_5) = \mathcal{N}(\bold{W}^{\textup{cm}}_6) = \mathcal{N}(\bold{W}^{\textup{cm}}_7) \nonumber \\ &= \mathcal{N}(\bold{W}^{\textup{cm}}_8) = \mathcal{N}(\bold{W}^{\textup{cm}}_9) = \mathcal{N}(\bold{W}^{\textup{cm}}_{10}) = \{\bold{0}\} \text{ and} \nonumber \\
\mathcal{N}(\bold{W}^{\textup{cm}}_2)&=\textup{span}\{[0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T}\},
\end{align}
which means that the GAST is successfully removed as the $10$ WCMs have broken weight conditions. As a result, it can be concluded that properly identifying which edge weights to change is important but not enough. Checking the null spaces of all WCMs is what determines which set of edge weight changes is sufficient for a successful GAST removal.
Now, consider the case of $w_{1,1}=\alpha$ and $w_{6,1}=1$ for the original configuration (i.e., before any removal attempt). The configuration in this case is a $(6, 1, 0, 9, 0)$ GAST with $b_{\textup{vn},\textup{max}}=1$ and $d_{1,\textup{vn},\textup{max}}=0$. Thus, (\ref{eq_emin_gas}) gives $E_{\textup{GAST},\textup{min}}=1$, while the upper bound is $2$ from (\ref{eq_emin_bnd}). The null spaces of the $10$ WCMs are:
\begin{align}\label{eq_ex12_3}
\mathcal{N}(\bold{W}^{\textup{cm}}_1) &=\textup{span}\{[\alpha \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 1 \text{ } 1]^\textup{T}\}, \nonumber \\ \mathcal{N}(\bold{W}^{\textup{cm}}_2) &=\textup{span}\{[\alpha \text{ } 0 \text{ } 0 \text{ } 0 \text{ } 1 \text{ } 1]^\textup{T}, [0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T}\}, \text{ and} \nonumber \\
\mathcal{N}(\bold{W}^{\textup{cm}}_3) &= \mathcal{N}(\bold{W}^{\textup{cm}}_4) = \mathcal{N}(\bold{W}^{\textup{cm}}_5) = \mathcal{N}(\bold{W}^{\textup{cm}}_6) = \mathcal{N}(\bold{W}^{\textup{cm}}_7) \nonumber \\ &= \mathcal{N}(\bold{W}^{\textup{cm}}_8) = \mathcal{N}(\bold{W}^{\textup{cm}}_9) = \mathcal{N}(\bold{W}^{\textup{cm}}_{10}) = \{\bold{0}\}.
\end{align}
Only $2$ WCMs, $\bold{W}^{\textup{cm}}_1$ and $\bold{W}^{\textup{cm}}_2$, have unbroken weight conditions. Both of them share the row corresponding to $c_6$. Consequently, only one edge weight change is needed to break the weight conditions of the $2$ WCMs and remove the object ($E_{\textup{GAST},\textup{min}}$ is achieved). A set of a single edge weight change, e.g., $\{w_{6,1}:1 \rightarrow \alpha^2\}$, is sufficient to perform the removal (see also Remark~\ref{rem_8}).
\end{example}
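As a hypothetical end-to-end check, the driver below (reusing \texttt{MUL} and \texttt{weight\_conditions\_unbroken} from the earlier sketches; the function and variable names are our own) rebuilds $\bold{A}$ of Fig.~\ref{Figure_9}(a), extracts the ten WCMs listed in Example~\ref{ex_9}, and counts how many of them have unbroken weight conditions under each of the two candidate change sets of Example~\ref{ex_12}.
\begin{verbatim}
def build_A(w11, w61):
    # A of the (6, 0, 0, 9, 0) GAST; encoding 0,1,2,3 = 0,1,a,a^2.
    return [[w11, 2, 0, 0, 0, 0],  # c1
            [0, 3, 3, 0, 0, 0],    # c2
            [0, 0, 1, 3, 0, 0],    # c3
            [0, 0, 0, 3, 1, 0],    # c4
            [0, 0, 0, 0, 1, 1],    # c5
            [w61, 0, 0, 0, 0, 2],  # c6
            [0, 2, 0, 1, 0, 0],    # c7
            [2, 0, 0, 0, 3, 0],    # c8
            [0, 0, 1, 0, 0, 1]]    # c9

# Zero-indexed rows removed to extract each WCM (see Example 9).
GROUPS = [(0, 2, 4), (0, 3, 8), (1, 3, 5), (1, 4), (1, 7),
          (2, 5), (2, 7), (4, 6), (5, 6), (6, 7, 8)]

def unbroken_wcms(w11, w61):
    A = build_A(w11, w61)
    return sum(weight_conditions_unbroken(
        [row for i, row in enumerate(A) if i not in g], 6)
        for g in GROUPS)

print(unbroken_wcms(2, 2))  # 3: the first change set fails
print(unbroken_wcms(2, 3))  # 0: the second set removes the GAST
\end{verbatim}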
\begin{example}\label{ex_13}
We discuss the $(6, 2, 2, 5, 2)$ GAST ($\gamma=3$) in Fig.~\ref{Figure_10}(a), with $w_{1,2}=\alpha^2$. The null spaces of the $2$ WCMs of that GAST are given in Example~\ref{ex_11}, (\ref{eq_ex11_2}) (see Example~\ref{ex_10} for how the WCMs are extracted). Since $b_{\textup{vn},\textup{max}}=d_{1,\textup{vn},\textup{max}}=1$, (\ref{eq_emin_gas}) gives $E_{\textup{GAST},\textup{min}}=1$ (same as the upper bound). Either $v_1$ or $v_2$ can be selected as both are borderline VNs. Suppose that $v_2$ is selected. Each of the $2$ WCMs has the row of $c_1$ (see Example~\ref{ex_10}). Applying the set of a single edge weight change, $\{w_{1,2}:\alpha^2 \rightarrow \alpha\}$, yields the following null spaces:
\vspace{-0.1em}\begin{align}\label{eq_ex12_4}
\mathcal{N}(\bold{W}^{\textup{cm}}_1)&= \{\bold{0}\} \text{ and } \mathcal{N}(\bold{W}^{\textup{cm}}_2)=\textup{span}\{[1 \text{ } 0 \text{ } 0 \text{ } \alpha^2 \text{ } 1 \text{ } \alpha^2]^\textup{T}\},
\end{align}
which means that the GAST is successfully removed.
\end{example}
\vspace{-0.2em}
\section{Removing Oscillating Sets to Achieve More Gain}\label{sec_os}
Now that we have presented the in-depth analysis of the baseline WCM framework, we are ready to introduce an extension to the framework. In particular, in this section, we discuss a new set of detrimental objects, namely oscillating sets of type two (OSTs), that are the second-order cause of the error floor of NB-LDPC codes with even column weights over asymmetric channels. We show how to remove OSTs using the WCM framework. In the simulation results section, we will show that performing another optimization phase that addresses OSTs, after the GASTs removal phase, secures up to nearly $2.5$ orders of magnitude overall performance gain in the error floor region over practical (asymmetric) Flash channels.
\subsection{Defining OSs and OSTs}\label{subsec_ost}
Before we introduce an oscillating set (OS), we define an oscillating VN.
\begin{definition}\label{def_osc_vn}
Consider a subgraph induced by a subset $\mathcal{V}$ of VNs in the Tanner graph of an NB-LDPC code. Set all the VNs in $\mathcal{V}$ to values $\in$ GF($q$)$\setminus \{0\}$ and set all other VNs to $0$. A VN in $\mathcal{V}$ is said to be an \textbf{oscillating VN} if the number of its neighboring satisfied CNs equals the number of its neighboring unsatisfied CNs for some set of VN values. The set of all oscillating VNs in $\mathcal{V}$ is referred to as $\mathcal{S}$.
\end{definition}
It is clear that for codes with fixed column weights (fixed VN degrees), there can exist an oscillating VN only under the condition that the column weight $\gamma$ is even. Based on Definition~\ref{def_osc_vn}, we define the oscillating set.
\begin{definition}\label{def_os}
Consider a subgraph induced by a subset $\mathcal{V}$ of VNs in the Tanner graph of an NB-LDPC code. Set all the VNs in $\mathcal{V}$ to values $\in$ GF($q$)$\setminus \{0\}$ and set all other VNs to $0$. The set $\mathcal{V}$ is said to be an $(a, b, b_2, d_1, d_2, d_3)$ \textbf{oscillating set (OS)} over GF($q$) if and only if the size of $\mathcal{V}$ is $a$, the number of unsatisfied (resp., degree-$2$ unsatisfied) CNs connected to $\mathcal{V}$ is $b$ (resp., $b_2$), the number of degree-$1$ (resp., $2$ and $> 2$) CNs connected to $\mathcal{V}$ is $d_1$ (resp., $d_2$ and $d_3$), the set of oscillating VNs $\mathcal{S} \subseteq \mathcal{V}$ is not empty, and each VN (if any) in $\mathcal{V} \setminus \mathcal{S}$ is connected to strictly more satisfied than unsatisfied neighboring CNs, for some set of VN values.
\end{definition}
Recall that $\mathcal{O}$ (resp., $\mathcal{T}$ and $\mathcal{H}$) is the set of degree-$1$ (resp., $2$ and $> 2$) CNs connected to $\mathcal{V}$.
The unlabeled OS is defined in a way similar to the unlabeled GAS, except that Condition 2 in Definition~\ref{def_ugas} is replaced by ``\textit{Each VN in $\mathcal{V}$ has a number of neighbors in $(\mathcal{T} \cup \mathcal{H})$ that is at least the same as its number of neighbors in $\mathcal{O}$.}'' Moreover, \cite[Lemma~1]{ahh_jsac} can be adjusted to suit OSs by referring to the unlabeled OS instead of the unlabeled GAS in the topological conditions, and by using the following inequality instead of (\ref{eq_gas_cond2}) in the weight conditions:
\begin{align}\label{eq_os_cond2}
\left ( \sum\limits_{e=1}^{\ell-b}F\left ( \psi_{e,f} \right ) \right ) \geq \left ( \sum\limits_{k=1}^{b}F\left ( \theta_{k,f} \right ) \right ).
\end{align}
Note that the equality in (\ref{eq_os_cond2}) must hold for at least one VN in $\mathcal{V}$. We also define an oscillating set of type two (OST) as follows.
\begin{definition}\label{def_ost}
An OS that has $d_2 > d_3$ and all the unsatisfied CNs connected to it $\in (\mathcal{O} \cup \mathcal{T})$ (having either degree $1$ or degree $2$), is defined as an $(a, b, d_1, d_2, d_3)$ \textbf{oscillating set of type two (OST)}. Similar to the {unlabeled OS} definition, we also define the \textbf{$(a, d_1, d_2, d_3)$ unlabeled OST}.
\end{definition}
If hard-decision, majority-rule decoding, e.g., Gallager B decoding \cite{gal_th}, is assumed, an oscillating VN in error receives the exact same number of ``stay'' and ``change'' messages. This tie makes an oscillating VN harder for the decoder to correct than a VN with more neighboring unsatisfied than satisfied CNs. Over aggressively asymmetric channels, oscillating VNs in error are even less likely to be corrected in many cases under soft-decision decoding because of the high error magnitudes. Based on our extensive simulations, OSTs typically account for between $5\%$ and $10\%$ of the errors of NB-LDPC codes with even $\gamma$ in the error floor region over practical (asymmetric) Flash channels, making OSTs the second-order cause, after GASTs, of the error floor in such channels. As we shall see in Section~\ref{sec_sim}, removing OSTs from the Tanner graphs of NB-LDPC codes offers about $0.5$ of an order of magnitude or more additional performance gain.
Fig.~\ref{Figure_11}(a) shows an $(8, 4, 3, 13, 1)$ OST that has $\mathcal{S}=\{v_1\}$. Fig.~\ref{Figure_11}(b) shows a $(6, 6, 2, 11, 0)$ OST that has $\mathcal{S}=\{v_2, v_3, v_4, v_5\}$. Some OSTs have underlying GASTs as subgraphs, while others do not. For example, if the VN $v_1$ is eliminated from the $(8, 4, 3, 13, 1)$ OST in Fig.~\ref{Figure_11}(a), the underlying object is a $(7, 4, 3, 11, 1)$ GAST (the two CNs shaded in red will be degree-$1$ unsatisfied CNs as a result of the elimination of $v_1$). Contrarily, the $(6, 6, 2, 11, 0)$ OST in Fig.~\ref{Figure_11}(b) does not have an underlying GAST.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.07in 0.0in 0.2in 0.0in},clip,width=3.5in]{Figure_11.pdf}\vspace{-0.5em}
\text{\hspace{0.3em} \footnotesize{(a) \hspace{14.5em} (b) \hspace{1em}}}
\caption{(a) An $(8, 4, 3, 13, 1)$ OST for $\gamma=4$. (b) A $(6, 6, 2, 11, 0)$ OST for $\gamma=4$. Appropriate non-binary edge weights are assumed. Unlabeled OSTs are reached by setting all the weights in the configurations
to $1$.}
\label{Figure_11}
\vspace{-0.5em}
\end{figure}
\subsection{How to Remove OSTs Using WCMs}\label{subsec_orem}
Before we propose the lemma that discusses the removal of OSTs, we need to state several auxiliary results.
\begin{lemma}\label{lem_odeg2_unsat}
Consider an $(a,d_1,d_2,d_3)$ {unlabeled OST} with its sets $\mathcal{T}$ and $\mathcal{H}$. A CN $c \in \mathcal{T}$ can be unsatisfied in the resulting OST (with proper edge labeling), resulting in $b > d_1$, if and only if the two neighboring VNs of $c$ (with respect to this {unlabeled OST}) each have the property that strictly more than $\frac{\gamma}{2}$ of their neighboring CNs belong to $(\mathcal{T} \cup \mathcal{H})$.
\end{lemma}
\begin{IEEEproof}
The proof follows the same logic as the proof of \cite[Theorem~1]{ahh_jsac}.
\end{IEEEproof}
\begin{lemma}\label{lem_obmax}
Given an $(a, d_1, d_2, d_3)$ {unlabeled OST}, the maximum number of unsatisfied CNs, $b_{\textup{o\_max}}$, in the resulting OST after edge labeling is upper bounded by:
\begin{equation}\label{eq_obmax}
b_{\textup{o\_max}} \leq d_1 + b_{\textup{o\_ut}}, \text{ where}
\end{equation}
\begin{equation}\label{eq_obut}
b_{\textup{o\_ut}} = \left \lfloor \frac{1}{2} \left ( a\left( \frac{\gamma}{2} \right ) - d_1 \right ) \right \rfloor.
\end{equation}
\end{lemma}
\begin{IEEEproof}
The proof follows the same logic as the proof of \cite[Theorem~2]{ahh_jsac}. The main equation in the proof is:
\vspace{-0.1em}\begin{align}\label{eq_lem9_pr}
b_{\textup{o\_ut}}&=\sum_{f=1}^{a}\left [ \frac{\gamma}{2} - b_{\textup{o\_up},f}\right ] = a\left ( \frac{\gamma}{2} \right ) - \left ( d_1 + b_{\textup{o\_ut}} \right ),
\end{align}
where $b_{\textup{o\_ut}}$ is the upper bound on the maximum number of degree-$2$ unsatisfied CNs the resulting OST can have after labeling, {and $b_{\textup{o\_up},f}$ is the number of the already-unsatisfied CNs connected to VN $v_f$, $f \in \{1, 2, \dots, a \}$, updated by what has been done for all the VNs processed prior to VN $v_f$.}
\end{IEEEproof}
The following example illustrates Lemmas~\ref{lem_odeg2_unsat} and \ref{lem_obmax}.
\begin{example}\label{ex_14}
Both configurations in Fig.~\ref{Figure_11} can have degree-$2$ unsatisfied CNs in the resulting OSTs. For the $(8, 3, 13, 1)$ unlabeled OST, $b_{\textup{o\_ut}}=6$ {($1$ of these $6$ CNs is unsatisfied in the $(8, 4, 3, 13, 1)$ OST in Fig.~\ref{Figure_11}(a))}, while for the $(6, 2, 11, 0)$ unlabeled OST, $b_{\textup{o\_ut}}=5$ {($4$ of these $5$ CNs are unsatisfied in the $(6, 6, 2, 11, 0)$ OST in Fig.~\ref{Figure_11}(b))}. The upper bound on $b_{\textup{o\_max}}$ in (\ref{eq_obmax}) is achievable for both configurations.
\end{example}
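A quick numeric check of (\ref{eq_obut}) against Example~\ref{ex_14} (again an illustrative helper of our own, assuming even $\gamma$):
\begin{verbatim}
def b_o_ut(a, gamma, d1):
    # Upper bound on the number of degree-2 unsatisfied CNs of an
    # OST after labeling: floor((a*(gamma/2) - d1)/2), even gamma.
    return (a * (gamma // 2) - d1) // 2

print(b_o_ut(8, 4, 3))  # 6, for the (8, 3, 13, 1) unlabeled OST
print(b_o_ut(6, 4, 2))  # 5, for the (6, 2, 11, 0) unlabeled OST
\end{verbatim}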
For a given $(a, b, d_1, d_2, d_3)$ {OST}, let $\mathcal{Z}_{\textup{o}}$ be the set of all $(a, b_\textup{o}', d_1, d_2, d_3)$ GASTs/OSTs with $d_1 \leq b_\textup{o}' \leq b_{\textup{o\_max}}$, which have the same {unlabeled GAST/OST} as the original {OST}. Here, $b_{\textup{o\_max}}$ is the largest allowable number of unsatisfied CNs for these configurations.
\begin{definition}\label{def_drem_ost}
An $(a, b, d_1, d_2, d_3)$ \textbf{OST} is said to be \textbf{removed} from the Tanner graph of an NB-LDPC code if and only if the resulting object (after edge weight processing) $\notin \mathcal{Z}_{\textup{o}}$.
\end{definition}
We thus augment the code optimization process for asymmetric channels to consist of two phases. The first phase, as before, focuses on the removal of GASTs, and the second phase focuses on the removal of OSTs. The ordering of the phases is critical because of the following. While it is allowed to remove a GAST by converting it to an OST during the first phase, it is not allowed to remove an OST by converting it into a GAST during the second phase because GASTs, not OSTs, are the principal cause of the error floor. This is the reason why the set $\mathcal{Z}_{\textup{o}}$ is the set of all $(a, b_\textup{o}', d_1, d_2, d_3)$ GASTs/OSTs with $d_1 \leq b_\textup{o}' \leq b_{\textup{o\_max}}$. For example, to remove the $(6, 6, 2, 11, 0)$ OST in Fig.~\ref{Figure_11}(b), the configuration needs to be converted into an object $\notin \{(6, 2, 2, 11, 0) \text{ GAST}, (6, 3, 2, 11, 0) \text{ GAST/OST}, \allowbreak (6, 4, 2, 11, 0) \text{ GAST/OST}, (6, 5, 2, 11, 0) \text{ OST}, (6, 6, 2, 11, 0) \allowbreak \text{ OST}, (6, 7, 2, 11, 0) \text{ OST}\}$ (this is because $d_1=2$ and $b_{\textup{o\_max}}=d_1+b_{\textup{o\_ut}}=7$).
For a given OST, define a matrix $\bold{W}_{\textup{o}}^{\textup{z}}$ to be the matrix obtained by removing $b_\textup{o}'$, $d_1 \leq b_\textup{o}' \leq b_{\textup{o\_max}}$, rows corresponding to CNs $\in (\mathcal{O} \cup \mathcal{T})$ from the matrix $\bold{A}$, which is the OST adjacency matrix. These $b_\textup{o}'$ CNs can simultaneously be unsatisfied under some edge labeling that produces a GAST/an OST which has the same {unlabeled GAST/OST} as the given OST. Let $\mathcal{U}_{\textup{o}}$ be the set of all matrices $\bold{W}_{\textup{o}}^{\textup{z}}$. Each element $\in \mathcal{Z}_{\textup{o}}$ has one or more matrices $\in \mathcal{U}_{\textup{o}}$.
\begin{definition}\label{def_owcms}
For a given $(a,b,d_1,d_2,d_3)$ OST and its associated adjacency matrix $\bold{A}$ and its associated set $\mathcal{Z}_{\textup{o}}$, we construct a set of $t_{\textup{o}}$ matrices as follows:
\begin{enumerate}
\item Each matrix $\bold{W}_h^{\textup{o\_cm}}$, $1 \leq h \leq t_{\textup{o}}$, in this set is an $(\ell-b^{\textup{o\_cm}}_h)\times a$ submatrix, $d_1 \leq b^{\textup{o\_cm}}_h \leq b_{\textup{o\_max}}$, formed by removing \textbf{different} $b^{\textup{o\_cm}}_h$ rows from the $\ell \times a$ matrix $\bold{A}$ of the OST. These $b^{\textup{o\_cm}}_h$ rows to be removed correspond to CNs $\in (\mathcal{O} \cup \mathcal{T})$ that can simultaneously be unsatisfied under some edge labeling that produces a GAST/an OST which has the same {unlabeled GAST/OST} as the given OST.
\item Each matrix $\bold{W}_{\textup{o}}^{\textup{z}} \in \mathcal{U}_{\textup{o}}$, for every element $\in \mathcal{Z}_{\textup{o}}$, contains at least one element of the resultant set as its submatrix.
\item This resultant set has the \textbf{smallest cardinality}, which is $t_{\textup{o}}$, among all the sets which satisfy conditions 1 and 2 stated above.
\end{enumerate}
We refer to the matrices in this set as \textbf{oscillating weight consistency matrices (OWCMs)}, and to this set itself as $\mathcal{W}_{\textup{o}}$.
\end{definition}
Similar to the definition of $b_{\textup{et}}$ in GASTs, we also define $b_{\textup{o\_et}} \leq b_{\textup{o\_ut}}$ for OSTs such that $b_{\textup{o\_max}} = d_1 + b_{\textup{o\_et}}$. The following lemma addresses the removal of OSTs via their OWCMs. In other words, the lemma shows how the WCM framework can be customized to remove OSTs.
\begin{lemma}\label{lem_rem_ost}
The necessary and sufficient processing needed to remove an $(a, b, d_1, d_2, d_3)$ OST, according to Definition~\ref{def_drem_ost}, is to change the edge weights such that {for every OWCM $\bold{W}^{\textup{o\_cm}}_h \in \mathcal{W}_{\textup{o}}$, there does not exist any vector with all its entries $\neq 0$ in the null space of that OWCM.} Mathematically, $\forall h$:
\begin{align}\label{eq_orem_cond}
&\text{If } \text{ } \mathcal{N}(\bold{W}^{\textup{o\_cm}}_h)=\textup{span}\{\bold{x}_1, \bold{x}_2, \dots ,\bold{x}_{p_h}\}, \text{ then } \nonumber \\ &\nexists \text{ } \bold{r}=[r_1 \text{ } r_2 \text{ } \dots \text{ } r_{p_h}]^\textup{T} \text{ for} \text{ }
\bold{v}=r_1\bold{x}_1+r_2\bold{x}_2+\dots+r_{p_h}\bold{x}_{p_h} \nonumber \\ &= [v_1 \text{ } v_2 \text{ } \dots v_a]^\textup{T} \text{ s.t. } v_j \neq 0, \text{ } \forall j \in \{1, 2, \dots , a\},
\end{align}
{where $p_h$ is the dimension of $\mathcal{N}(\bold{W}^{\textup{o\_cm}}_h)$}. Computations are performed over GF($q$).
\end{lemma}
\begin{IEEEproof}
The proof follows the same logic as the proof of \cite[Theorem~3]{ahh_jsac}.
\end{IEEEproof}
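To make the check in Lemma~\ref{lem_rem_ost} concrete, the following self-contained Python sketch evaluates condition (\ref{eq_orem_cond}) for a small matrix over GF($4$): it computes a null-space basis by Gauss-Jordan elimination and then exhaustively tests whether any null-space vector has all entries nonzero (exhaustive enumeration is feasible since the null space contains only $q^{p_h}$ vectors). The matrix \texttt{W} below is a hypothetical stand-in for an OWCM, not one extracted from a code in this paper.
\begin{verbatim}
# Check of condition (eq_orem_cond) over GF(4) = {0, 1, a, a^2},
# encoded as {0, 1, 2, 3}: addition is XOR; multiplication uses
# discrete logs base a (a^3 = 1).
from itertools import product

LOG = {1: 0, 2: 1, 3: 2}
EXP = {0: 1, 1: 2, 2: 3}

def gf4_add(x, y):
    return x ^ y

def gf4_mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 3]

def gf4_inv(x):
    return EXP[(-LOG[x]) % 3]

def null_space_gf4(rows):
    """Basis of the right null space via Gauss-Jordan elimination.
    In characteristic 2, negation is the identity, which simplifies
    the back-substitution step."""
    m = [r[:] for r in rows]
    n_cols, pivots, r = len(m[0]), [], 0
    for c in range(n_cols):
        pr = next((i for i in range(r, len(m)) if m[i][c]), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        inv = gf4_inv(m[r][c])
        m[r] = [gf4_mul(inv, v) for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c]
                m[i] = [gf4_add(v, gf4_mul(f, w))
                        for v, w in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(n_cols) if c not in pivots):
        v = [0] * n_cols
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = m[i][fc]
        basis.append(v)
    return basis

def weight_conditions_broken(rows):
    """True iff no vector in the null space has all entries nonzero,
    i.e., (eq_orem_cond) holds and the OST would be removed."""
    basis = null_space_gf4(rows)
    for coeffs in product(range(4), repeat=len(basis)):
        v = [0] * len(rows[0])
        for c, b in zip(coeffs, basis):
            v = [gf4_add(x, gf4_mul(c, y)) for x, y in zip(v, b)]
        if all(v):
            return False
    return True

# Hypothetical 3 x 4 stand-in for an OWCM; its null space is
# span{[1, a^2, 1, 1]}, which has all entries nonzero, so the
# weight conditions are NOT broken (the object would survive).
W = [[1, 2, 0, 0],
     [0, 1, 3, 0],
     [0, 0, 1, 1]]
print(weight_conditions_broken(W))  # -> False
\end{verbatim}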
A similar analysis to the one in Section~\ref{sec_cch} can be performed to compute the number of OWCMs in the set $\mathcal{W}_{\textup{o}}$, with a few changes (e.g., $b_{\textup{o\_et}}$ should be used instead of $b_{\textup{et}}$). Next, we derive the minimum number of edge weight changes needed to remove an OST from the Tanner graph of an NB-LDPC code with even column weight.
\begin{corollary}\label{cor_emin_os}
The minimum number of edge weight changes (with respect to the original configuration) needed to remove an $(a, b, b_2, d_1, d_2, d_3)$ OS (convert it into a non-OS/non-AS) is given by:
\begin{equation}\label{eq_emin_os}
E_{\textup{OS},\textup{min}}=1 \leq \frac{\gamma}{2}-d_{1,\textup{vn},\textup{max}}+1,
\end{equation}
where $d_{1,\textup{vn},\textup{max}}$ is the maximum number of existing degree-$1$ CNs per VN in the OS.
\end{corollary}
\begin{IEEEproof}
The proof follows the same logic as the proof of Lemma~\ref{lem_emin_gas} (see also \cite{ahh_bas}), with $\frac{\gamma}{2}$ replacing $g$. Note that by definition of an OS, at least one of its VNs has exactly $\frac{\gamma}{2}$ neighboring unsatisfied CNs. Thus,
\vspace{-0.1em}\begin{equation}
E_{\textup{OS},\textup{min}}=\frac{\gamma}{2}-b_{\textup{vn},\textup{max}}+1=\frac{\gamma}{2}-\frac{\gamma}{2}+1=1,
\end{equation}
where $b_{\textup{vn},\textup{max}}$ is the maximum number of existing unsatisfied CNs per VN in the OS, which equals $\frac{\gamma}{2}$ for any OS.
\end{IEEEproof}
Similar to the case of GASTs, since we only change the weights of edges connected to degree-$2$ CNs to remove OSTs, (\ref{eq_emin_os}) also holds for $E_{\textup{OST},\textup{min}}$. Moreover, a \textbf{topologically-oscillating VN} in an OST in a code with even column weight $\gamma$ is connected to exactly $\frac{\gamma}{2}$ degree-$1$ CNs. For an OST containing such a VN, the upper bound on $E_{\textup{OST},\textup{min}}$ in (\ref{eq_emin_os}) equals $1$.
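As a small numeric illustration, the upper bound in (\ref{eq_emin_os}) can be evaluated directly; the input values below are illustrative, and the second call shows how a topologically-oscillating VN ($d_{1,\textup{vn},\textup{max}} = \gamma/2$) drives the bound down to $1$.
\begin{verbatim}
# Minimal sketch of the upper bound in (eq_emin_os); gamma is the (even)
# column weight and d1_vn_max is the maximum number of degree-1 CNs
# attached to a single VN of the OS/OST. Input values are illustrative.
def e_ost_min_upper_bound(gamma, d1_vn_max):
    assert gamma % 2 == 0, "OSs/OSTs are defined for even column weights"
    return gamma // 2 - d1_vn_max + 1

print(e_ost_min_upper_bound(4, 1))  # -> 2
print(e_ost_min_upper_bound(4, 2))  # -> 1 (topologically-oscillating VN)
\end{verbatim}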
The following simple algorithm illustrates the procedure we follow to optimize NB-LDPC codes with even column weights for usage over asymmetric channels.
\begin{algorithm}[H]\label{alg_code_opt}
\caption{Optimizing NB-LDPC Codes with Even Column Weights}
\begin{algorithmic}[1]
\State Apply \cite[Algorithm~2]{ahh_jsac} to optimize the NB-LDPC code by removing the detrimental GASTs.
\State Using initial simulations and combinatorial techniques (e.g., \cite{bani_cycle}) for the output code of Step~1, determine the set of OSTs to be removed.
\State Apply a customized version of \cite[Algorithm~2]{ahh_jsac} to further optimize the NB-LDPC code generated in Step~1 by removing the detrimental OSTs.
\end{algorithmic}
\end{algorithm}
\vspace{-0.2em}
A crucial check to make while removing an OST is that the edge weight changes to be performed do not undo the removal of any of the already removed GASTs nor OSTs.
\section{Applications of the WCM Framework}\label{sec_sim}
In this section, we apply the WCM framework to optimize NB-LDPC codes with different structures and for various applications, demonstrating significant performance gains in the error floor region. We used a finite-precision, fast Fourier transform based $q$-ary sum-product algorithm (FFT-QSPA) LDPC decoder \cite{dec_fft}, which performs a maximum of $50$ iterations (except for the PR channel simulations) and stops earlier if a codeword is reached.
In the following items, we provide some details about the performed simulations and about our choices for the simulated NB-LDPC codes:
\begin{itemize}
\item We provide simulation results over practical storage channels, which are the main focus of this work. In particular, we present results over asymmetric Flash channels (with $3$ and $6$ voltage reads) and a 1-D MR channel with intrinsic memory (a PR channel). As an additional example, we also present results over the AWGN channel. These results collectively demonstrate the effectiveness of the WCM framework in optimizing NB-LDPC codes for various channels with different characteristics.
\item All our codes are circulant-based codes. The unoptimized NB block codes are chosen to be protograph-based codes as in \cite{lara_prot} and \cite{baz_qc} (more details about the construction are provided below). The reasons are that NB protograph-based codes enable faster encoding and decoding, and they have good performance \cite{ahh_bas, lara_prot, baz_qc}.
\item We provide results for NB-LDPC codes with various column weights (particularly $\gamma \in \{3, 4, 5\}$). The justification is that we want to demonstrate that the WCM framework typically offers at least $1$ order (and up to almost $2.5$ orders) of magnitude performance gain in the error floor region for NB-LDPC codes with initial average, good, and very good error floor performance (average in the case of $\gamma = 3$, good in the case of $\gamma = 4$, and very good in the case of $\gamma = 5$).
\item Details about the constructions and the parameters of the SC codes we simulate are provided in Subsection~\ref{subsec_sc}.
\end{itemize}
All the unoptimized NB-LDPC codes we are using in Subsections~\ref{subsec_five}, \ref{subsec_osc}, and \ref{subsec_soft} are regular non-binary protograph-based LDPC (NB-PB-LDPC) codes. These codes are constructed as follows. First, a binary protograph matrix $\bold{H}_{\textup{p}}$ is designed. Then, (the Tanner graph of) $\bold{H}_{\textup{p}}$ is lifted via a lifting parameter $\zeta$ to create (the Tanner graph of) the binary image of $\bold{H}$, which is $\bold{H}_{\textup{b}}$. The lifting process means that every $1$ in $\bold{H}_{\textup{p}}$ is replaced by a $\zeta \times \zeta$ circulant matrix, while every $0$ (if any) in $\bold{H}_{\textup{p}}$ is replaced by a $\zeta \times \zeta$ all-zero matrix. The circulant powers are adjusted such that the unlabeled Tanner graph of the resulting code does not have cycles of length $4$. Then, the $1$'s in $\bold{H}_{\textup{b}}$ are replaced by non-zero values $\in$ GF($q$) to generate $\bold{H}$. These unoptimized codes are high performance NB-PB-LDPC codes (see also \cite{lara_prot} and \cite{baz_qc}, in addition to \cite{ahh_jsac}). Note that the WCM framework works for any regular, or even irregular with fixed column weight, NB-LDPC codes. Moreover, the WCM framework also works for any GF size, $q$, and for any code rate.
\begin{remark}
While the WCM framework works for NB-LDPC codes defined over any GF size, $q$, the performance gains achieved are expected to be relatively smaller for higher GF sizes, e.g., $q \geq 32$ (over all channels). The reason is that the fraction of edge weight assignments under which a configuration is a detrimental GAST becomes smaller as $q$ increases (see also \cite{behzad_elem}), which means NB-LDPC codes with higher GF sizes naturally have improved error floor performance. However, increasing the GF size dramatically increases the decoding complexity, as the decoding complexity is either $O(q^2)$ or $O(q \log_2 (q))$ \cite{dec_fft}. Consequently, the approach of solely increasing the GF size in order to mitigate the error floor is not advised for storage systems because of complexity constraints. Note that NB-LDPC codes with $\gamma=2$ can only provide good error floor performance if the GF size is large. This discussion is the reason why in our simulations, we work with various column weights ($\gamma \in \{3, 4, 5\}$), but we keep the GF size relatively small ($q \in \{4, 8\}$). In summary, we provide NB-LDPC codes with superior error floor performance, achieved via the WCM framework, without a dramatic increase in the decoding complexity. An NB-LDPC decoder implementation customized for storage applications, which uses a GF size $q=8$, is provided in \cite{dec_yuta}.
\end{remark}
In this paper, RBER is the raw bit error rate, which is the number of raw (uncoded) data bits in error divided by the total number of raw (uncoded) data bits read \cite{cai_defn}. UBER is the uncorrectable bit error rate, which is a metric for the fraction of bits in error out of all bits read after the error correction is applied via encoding/decoding \cite{cai_defn}. One formulation of UBER, as recommended by industry, is the frame error rate (FER) divided by the sector size in bits.
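As a quick illustration of this UBER formulation, the following snippet back-computes the implied FER from a UBER value reported later in this section; the numbers are used here purely for illustration.
\begin{verbatim}
# UBER = FER / (sector size in bits); numbers below are illustrative,
# back-computed from a UBER value reported later in this section.
sector_bits = 512 * 8          # 512-byte sector
uber = 4.53e-15
fer = uber * sector_bits       # implied frame error rate
print(f"FER ~ {fer:.2e}")      # -> 1.86e-11
\end{verbatim}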
\subsection{Optimizing Column Weight $5$ Codes}\label{subsec_five}
In this subsection, we use the WCM framework to optimize NB-LDPC codes with column weight $5$ for the first time. Column weight $5$ codes generally guarantee better performance in the error floor region compared with column weight $3$ and $4$ codes. We show in this subsection that more than $1$ order of magnitude performance gain is still achievable via the WCM framework for such codes despite their improved error floor performance. The channel used in this subsection is a practical Flash channel: the normal-Laplace mixture (NLM) Flash channel \cite{mit_nl}. Here, we use $3$ reads, and the sector size is $512$ bytes.
In the NLM channel, the threshold voltage distribution of sub-$20$nm multi-level cell (MLC) Flash memories is carefully modeled. The four levels are modeled as different NLM distributions, incorporating several sources of error due to wear-out effects, e.g., programming errors, thereby resulting in significant asymmetry \cite{mit_nl}. Furthermore, the authors provided accurate fitting results of their model for program/erase (P/E) cycles up to $10$ times the manufacturer's endurance specification. We implemented the NLM channel based on the parameters described in \cite{mit_nl}.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_12.pdf}
\vspace{-1.5em}
\caption{Simulation results over the NLM channel with $3$ reads for Code~1 (unoptimized) and Code~2 (WCM framework). The two codes have $\gamma = 5$.}
\label{Figure_12}
\end{figure}
In this subsection, Code~1 is an NB-PB-LDPC code defined over GF($4$), with block length $= 6724$ bits, rate $\approx 0.88$, and $\gamma = 5$. Code~2 is the result of optimizing Code~1 for the asymmetric NLM channel by attempting to remove the GASTs in Table~\ref{Table3} using the WCM framework.
Fig.~\ref{Figure_12} shows that more than $1$ order of magnitude performance gain is achieved via optimizing Code~1 to arrive at Code~2 using the WCM framework. The figure also shows that using the WCM framework, an UBER of approximately $4.53 \times 10^{-15}$ is achievable at an RBER of approximately $4.69 \times 10^{-3}$ on the NLM Flash channel (an aggressively asymmetric channel) with only $3$ reads.
\begin{table}
\caption{Error profile of Codes~1 and 2 over the NLM channel with $3$ reads, RBER $\approx 4.69 \times 10^{-3}$, UBER (unoptimized) $\approx 6.31 \times 10^{-14}$, and UBER (WCM framework) $\approx 4.53 \times 10^{-15}$ (see Fig.~\ref{Figure_12}).}
\vspace{-0.5em}
\centering
\scalebox{1.00}
{
\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{Error type} & \multicolumn{2}{|c|}{Count} \\
\cline{2-3}
{} & Code~1 & \makecell{Code~2} \\
\hline
$(4, 8, 8, 6, 0)$ & $18$ & $0$ \\
\hline
$(6, 8, 8, 11, 0)$ & $9$ & $0$ \\
\hline
$(6, 10, 8, 11, 0)$ & $11$ & $0$ \\
\hline
$(7, 5, 5, 15, 0)$ & $4$ & $0$ \\
\hline
$(7, 9, 9, 13, 0)$ & $4$ & $0$ \\
\hline
$(7, 10, 10, 9, 2)$ & $7$ & $1$ \\
\hline
$(8, 6, 6, 17, 0)$ & $23$ & $0$ \\
\hline
$(8, 8, 6, 17, 0)$ & $15$ & $0$ \\
\hline
Other & $9$ & $7$ \\
\hline
\end{tabular}}
\label{Table3}
\end{table}
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.08in 0.0in 0.2in 0.0in},clip,width=3.5in]{Figure_13.pdf}\vspace{-1.0em}
\text{\hspace{-1.5em} \footnotesize{(a) \hspace{15em} (b) \hspace{1em}}}
\caption{(a) A $(4, 8, 8, 6, 0)$ GAST for $\gamma=5$. (b) An $(8, 8, 6, 17, 0)$ GAST for $\gamma=5$. Appropriate non-binary edge weights are assumed.}
\label{Figure_13}
\vspace{-0.5em}
\end{figure}
Table~\ref{Table3} shows the error profiles of Codes~1 and 2 over the NLM channel with $3$ reads. The table reveals that $33\%$ of the errors in the error profile of Code~1 are non-elementary GASTs. The table also demonstrates the effectiveness of the WCM framework in removing the detrimental objects. Two of the GASTs that strongly contribute to the error profile of Code~1 are $(4, 8, 8, 6, 0)$ and $(8, 8, 6, 17, 0)$ GASTs, which are shown in Fig.~\ref{Figure_13}. The key difference between GASTs in codes with $\gamma=5$ (or $6$) and GASTs in codes with $\gamma \in \{3, 4\}$ is that for the former GASTs, $g=2$, while for the latter GASTs, $g=1$. In other words, a VN in an object in a code with $\gamma=5$ (or $6$) can be connected to a maximum of $2$ unsatisfied CNs while the object is classified as a GAST (see also Fig.~\ref{Figure_13} and Example~\ref{ex_2}); for $\gamma \in \{3, 4\}$, this maximum is $1$.
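The per-VN limits quoted above (at most $2$ unsatisfied CNs for $\gamma \in \{5, 6\}$ and at most $1$ for $\gamma \in \{3, 4\}$) match $\lceil \gamma/2 \rceil - 1$; the following one-liner (our paraphrase of the quoted values, not a formula from the paper) reproduces them.
\begin{verbatim}
# Reproducing the stated per-VN limits on unsatisfied CNs in a GAST:
# at most ceil(gamma/2) - 1, i.e., 1 for gamma in {3, 4} and 2 for
# gamma in {5, 6}, matching the discussion above.
from math import ceil

for gamma in (3, 4, 5, 6):
    print(gamma, ceil(gamma / 2) - 1)
\end{verbatim}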
\subsection{Achieving More Gain by Removing Oscillating Sets}\label{subsec_osc}
In this subsection, we demonstrate the additional gains that can be achieved for NB-LDPC codes with even column weights (particularly $\gamma = 4$) over practical asymmetric channels by removing OSTs as described in Section~\ref{sec_os}.
First, we present results for the NLM channel described in the previous subsection (still with $3$ reads). Code~3 is an NB-PB-LDPC code defined over GF($4$), with block length $= 8480$ bits, rate $\approx 0.90$, and $\gamma = 4$. Code~4 is the result of optimizing Code~3 by attempting to remove the dominant GASTs $(4, 4, 4, 6, 0)$, $(6, 4, 4, 10, 0)$, $(6, 5, 5, 8, 1)$, and $(8, 4, 2, 15, 0)$ using the WCM framework (see \cite{ahh_jsac}). Code~5 is the result of optimizing Code~4 for the asymmetric NLM channel by attempting to remove the OSTs in Table~\ref{Table4} using the WCM framework. The performance curves of Code~3 (unoptimized) and Code~4 (WCM framework, no OSTs removal) in Fig.~\ref{Figure_14} were introduced in \cite{ahh_jsac}.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_14.pdf}
\vspace{-1.5em}
\caption{Simulation results over the NLM channel with $3$ reads for Code~3 (unoptimized), Code~4 (WCM framework, no OSTs removal), and Code~5 (WCM framework, with OSTs removal). The three codes have $\gamma = 4$.}
\label{Figure_14}
\vspace{-0.2em}
\end{figure}
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_15.pdf}
\vspace{-1.5em}
\caption{Simulation results over the CHMM channel with $3$ reads for Code~6 (unoptimized), Code~7 (WCM framework, no OSTs removal), and Code~8 (WCM framework, with OSTs removal). The three codes have $\gamma=4$.}
\label{Figure_15}
\vspace{-0.2em}
\end{figure}
It is demonstrated by Fig.~\ref{Figure_14} that removing the dominant OSTs to generate Code~5 results in nearly $0.5$ of an order of magnitude gain in performance over Code~4 (for which only the dominant GASTs are removed) even though Code~4 is highly optimized (it outperforms Code~3 by about $2$ orders of magnitude). Thus, applying Algorithm~1 to remove OSTs after removing GASTs raises the gain to almost $2.5$ orders of magnitude for Code~5 compared with the unoptimized code (Code~3) over the NLM channel. Table~\ref{Table4} shows the significant reduction in the number of OSTs in the error profile of Code~5 compared with Code~3.
Second, we present results for another asymmetric Flash channel: the Cai-Haratsch-Mutlu-Mai (CHMM) Flash channel \cite{cai_fl}. The authors developed a model in \cite{cai_fl} for the threshold voltage distribution that is suitable for $20$nm and $24$nm MLC Flash memories. The four levels are modeled as different Gaussian distributions that are shifted and broadened with the increase in P/E cycles, resulting in limited asymmetry relative to the NLM channel. We implemented the CHMM channel based on the data and the model provided in \cite{cai_fl}. In this subsection, we use $3$ reads, and the sector size is $512$ bytes.
\begin{table}
\caption{OSTs error profile of Codes~3 and 5 over the NLM channel with $3$ reads, RBER $\approx 3.75 \times 10^{-3}$, UBER (unoptimized) $\approx 6.98 \times 10^{-12}$, and UBER (WCM framework, with OSTs removal) $\approx 3.58 \times 10^{-14}$ (see Fig.~\ref{Figure_14}).
}
\vspace{-0.5em}
\centering
\scalebox{1.00}
{
\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{Error type} & \multicolumn{2}{|c|}{Count} \\
\cline{2-3}
{} & Code~3 & \makecell{Code~5} \\
\hline
$(5, 5, 5, 6, 1)$ & $22$ & $0$ \\
\hline
$(6, 5, 4, 10, 0)$ & $29$ & $0$ \\
\hline
$(8, 4, 2, 12, 2)$ & $24$ & $0$ \\
\hline
$(8, 5, 3, 13, 1)$ & $25$ & $0$ \\
\hline
\end{tabular}}
\label{Table4}
\end{table}
\begin{table}
\caption{OSTs error profile of Codes~6 and 8 over the CHMM channel with $3$ reads, RBER $\approx 5.87 \times 10^{-3}$, UBER (unoptimized) $\approx 1.74 \times 10^{-12}$, and UBER (WCM framework, with OSTs removal) $\approx 3.11 \times 10^{-14}$ (see Fig.~\ref{Figure_15}).
}
\vspace{-0.5em}
\centering
\scalebox{1.00}
{
\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{Error type} & \multicolumn{2}{|c|}{Count} \\
\cline{2-3}
{} & Code~6 & \makecell{Code~8} \\
\hline
$(6, 5, 2, 11, 0)$ & $29$ & $0$ \\
\hline
$(6, 6, 2, 11, 0)$ & $11$ & $0$ \\
\hline
$(7, 5, 3, 11, 1)$ & $34$ & $0$ \\
\hline
$(8, 4, 3, 13, 1)$ & $15$ & $0$ \\
\hline
$(9, 4, 2, 14, 2)$ & $11$ & $1$ \\
\hline
\end{tabular}}
\label{Table5}
\end{table}
Here, Code~6 is an NB-PB-LDPC code defined over GF($4$), with block length $= 1840$ bits, rate $\approx 0.80$, and $\gamma = 4$. Code~7 is the result of optimizing Code~6 by attempting to remove the dominant GASTs $(4, 4, 4, 6, 0)$, $(6, 4, 2, 11, 0)$, $(6, 4, 4, 10, 0)$, $(7, 4, 3, 11, 1)$, $(8, 5, 5, 12, 1)$, and $(9, 5, 5, 14, 1)$ using the WCM framework (see also \cite{ahh_jsac}). Code~8 is the result of optimizing Code~7 for the asymmetric CHMM channel by attempting to remove the OSTs in Table~\ref{Table5} using the WCM framework. The performance curves of Code~6 (unoptimized) and Code~7 (WCM framework, no OSTs removal) in Fig.~\ref{Figure_15} were introduced in \cite{ahh_jsac}.
Fig.~\ref{Figure_15} reveals that removing the dominant OSTs to design Code~8 results in more than $0.5$ of an order of magnitude performance gain over Code~7 (for which only the dominant GASTs are removed). Consequently, applying Algorithm~1 to remove OSTs (after removing GASTs) raises the performance gain to more than $1.5$ orders of magnitude for Code~8 compared with the unoptimized code (Code~6) over the CHMM channel. Table~\ref{Table5} demonstrates the significant reduction in the number of OSTs in the error profile of Code~8 compared with Code~6.
\vspace{-0.22em}
\subsection{Effect of Soft Information in Flash Channels}\label{subsec_soft}
In this subsection, we show the performance of NB-LDPC codes optimized by the WCM framework over practical Flash channels with additional soft information. The NLM and CHMM Flash channels used in this subsection are as described in the previous two subsections, except that we now consider $6$ voltage reads instead of $3$. The additional reads increase the amount of soft information provided to the decoder from the Flash channel.
In the simulations of this subsection, Code~9 is an NB-PB-LDPC code defined over GF($4$), with block length $= 3996$ bits, rate $\approx 0.89$, and $\gamma = 3$. Code~10 is the result of optimizing Code~9 for the asymmetric NLM channel (with $6$ reads this time) by attempting to remove the dominant GASTs $(4, 2, 2, 5, 0)$, $(4, 3, 2, 5, 0)$, $(5, 2, 2, 5, 1)$, $(6, 0, 0, 9, 0)$, $(6, 1, 0, 9, 0)$, $(6, 1, 1, 7, 1)$, $(6, 2, 2, 5, 2)$, and $(6, 2, 2, 8, 0)$ using the WCM framework.
Furthermore, Code~11 is another NB-PB-LDPC code defined over GF($4$), with block length $= 3280$ bits, rate $\approx 0.80$, and $\gamma = 4$. Code~12 is the result of optimizing Code~11 for the asymmetric NLM channel (with $6$ reads) by attempting to remove the dominant GASTs $(4, 4, 4, 6, 0)$, $(6, 2, 2, 11, 0)$, $(8, 4, 3, 13, 1)$, and $(8, 5, 2, 15, 0)$ in addition to the dominant OSTs $(6, 5, 4, 10, 0)$, $(7, 6, 4, 12, 0)$, $(8, 4, 2, 12, 2)$, and $(9, 4, 2, 14, 2)$ using the WCM framework. We also reuse Code~6 in this subsection (its parameters are stated in the previous subsection). Code~13 is the result of optimizing Code~6 for the asymmetric CHMM channel (with $6$ reads) by attempting to remove the dominant GASTs $(4, 4, 4, 6, 0)$, $(6, 4, 4, 11, 0)$, and $(7, 4, 3, 11, 1)$ in addition to the dominant OSTs $(6, 5, 2, 11, 0)$, $(7, 5, 3, 11, 1)$, $(7, 5, 4, 9, 2)$, $(7, 6, 6, 8, 2)$, $(8, 6, 2, 15, 0)$, and $(10, 7, 5, 11, 4)$ using the WCM framework.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_16.pdf}
\vspace{-1.5em}
\caption{Simulation results over the NLM channel with $6$ reads for Code~9 (unoptimized) and Code~10 (WCM framework). The two codes have $\gamma = 3$.}
\label{Figure_16}
\vspace{-0.3em}
\end{figure}
According to our simulations, the most dominant GASTs in the error floor of the unoptimized codes (Codes~9, 11, and 6) are hardly affected by the additional soft information (compare the dominant GASTs listed above for Codes~9, 11, and 6 with the dominant GASTs in \cite[Table~I]{ahh_jsac}, \cite[Table~II]{ahh_jsac}, and \cite[Table~IV]{ahh_jsac}, respectively). Moreover, Figures~\ref{Figure_16}, \ref{Figure_17}, and \ref{Figure_18} show that the performance gains achieved by applying the WCM framework over practical Flash channels with $6$ reads are in the same range as the gains achieved over the same channels with $3$ reads. In particular, more than $1$ order of magnitude gain is achieved in Fig.~\ref{Figure_16}, and more than $1.5$ orders of magnitude gain ($> 0.5$ of an order of magnitude of which is due to OSTs removal) is achieved in both Figs.~\ref{Figure_17} and \ref{Figure_18}. Furthermore, similar to the case of $3$ reads demonstrated in \cite{ahh_jsac}, the more asymmetric the Flash channel is, the higher the percentage of relevant non-elementary GASTs ($b > d_1$ and/or $d_3 > 0$) that appear in the error profile of the NB-LDPC code.
\begin{figure}
\vspace{-0.3em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_17.pdf}
\vspace{-1.5em}
\caption{Simulation results over the NLM channel with $6$ reads for Code~11 (unoptimized) and Code~12 (WCM framework, with OSTs removal). The two codes have $\gamma = 4$.}
\label{Figure_17}
\vspace{-0.1em}
\end{figure}
\begin{figure}
\vspace{-0.3em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_18.pdf}
\vspace{-1.5em}
\caption{Simulation results over the CHMM channel with $6$ reads for Code~6 (unoptimized) and Code~13 (WCM framework, with OSTs removal). The two codes have $\gamma=4$.}
\label{Figure_18}
\vspace{-0.3em}
\end{figure}
The major difference between the results over practical Flash channels with $3$ and $6$ reads is the gain achieved in RBER. Consider the $\gamma=4$ codes simulated over the CHMM channel, and assume that the target UBER is $10^{-13}$. In Fig.~\ref{Figure_15}, Code~8 achieves the target UBER at RBER $\approx 6.5 \times 10^{-3}$. In contrast, Code~13 achieves the target UBER at RBER $\approx 1.1 \times 10^{-2}$, as revealed by Fig.~\ref{Figure_18}. Thus, in this case, using $6$ reads achieves about a $70\%$ RBER gain compared with using only $3$ reads. This RBER gain is directly translated into a P/E cycles gain, which means an extension of the lifetime of the Flash device. Similar gains are also observed for codes with different column weights over both the NLM and CHMM channels.
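The quoted $70\%$ figure follows directly from the two RBER values (a one-line sanity check in Python):
\begin{verbatim}
# Sanity check of the ~70% RBER gain quoted above (CHMM channel,
# target UBER of 1e-13; RBER values read from Figs. 15 and 18).
rber_3_reads = 6.5e-3   # Code 8, 3 reads
rber_6_reads = 1.1e-2   # Code 13, 6 reads
print(f"{(rber_6_reads - rber_3_reads) / rber_3_reads:.0%}")  # -> 69%
\end{verbatim}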
\subsection{Optimizing Spatially-Coupled Codes}\label{subsec_sc}
In this subsection, we extend the scope of the WCM framework to irregular codes with fixed column weights (fixed VN degrees). In particular, we use the WCM framework to optimize non-binary spatially-coupled (NB-SC) codes with $\gamma \in \{3, 4\}$ for PR and AWGN channels, showing more than $1$ order of magnitude performance gain.
SC codes are a class of graph-based (LDPC) codes that have capacity-approaching asymptotic performance, and very good finite-length performance. Literature works studying the asymptotic performance of SC codes include \cite{kud_sc, lent_asy, naj_asy} for the binary case, and \cite{andr_asy, amna_asy} for the non-binary case. Recent results on finite-length constructions of SC codes include \cite{pus_sc, olm_sc, iye_sc, mitch_fl, yix_fl, scb_ext1, scb_ext2, scb_ext3, homa_boo} for the binary case, and \cite{irina_sc, homa_sc, ahh_nboo, ahh_nboo2} for the non-binary case. Most of these finite-length constructions are based on protographs. SC codes are constructed by partitioning an underlying block LDPC code, and then rewiring the partitioned components together multiple times \cite{scb_ext1, homa_boo, homa_sc, ahh_nboo}. We demonstrate the effectiveness of the WCM framework by optimizing the edge weights of NB-SC codes designed using two different finite-length construction techniques (both are also based on protographs). First, we show results for NB-SC codes partitioned using single cutting vectors (CVs) \cite{scb_ext1, homa_sc}, and the underlying block LDPC codes used are array-based LDPC (AB-LDPC) codes \cite{lara_as}. More details about the CV technique can be found in \cite{homa_sc}. Second, we show results for better NB-SC codes, designed using the optimal overlap, circulant power optimizer (OO-CPO) technique \cite{homa_boo, ahh_nboo, ahh_nboo2}. The partitioning here is derived through solving an optimization problem aiming at minimizing the total number of detrimental objects in the protograph of the SC code. Next, circulant powers of the underlying block code are optimized to further reduce the number of detrimental objects in the final unlabeled Tanner graph of the SC code. More details about the OO-CPO technique for AWGN and Flash channels can be found in \cite{homa_boo} and \cite{ahh_nboo}. In this subsection, we focus on the case of partitioning the underlying block code into only two component matrices (memory $=1$), and all the SC codes do not have cycles of length $4$ in their unlabeled graphs. Furthermore,
\begin{itemize}
\item The CV and the OO-CPO techniques mentioned above are chosen to design the underlying topologies (the binary images) of our NB-SC codes. SC codes designed using the OO-CPO technique have superior performance over AWGN \cite{homa_boo}, Flash \cite{ahh_nboo}, and PR channels \cite{ahh_nboo2}.
\item We use coupling lengths (see \cite{scb_ext1} and \cite{homa_sc}) of average values (namely, $5$ and $8$) in our SC codes. This is because for a fixed block length, the circulant size of an SC code is inversely proportional to the coupling length. Using a very small circulant size typically exacerbates the error floor problem.
\end{itemize}
The WCM framework requires the initial unoptimized code to have a fixed column weight (fixed VN degree) but not necessarily a fixed row weight (fixed CN degree). NB-SC codes that are based on the underlying structured and regular block codes incorporate irregularities in their CN degrees (different row weights), while having fixed VN degrees \cite{homa_boo, homa_sc}, making them suitable for optimization using the WCM framework for various applications.
We use the PR channel described in \cite{ahh_bas}. This PR channel incorporates inter-symbol interference (intrinsic memory), jitter, and electronic noise. The normalized channel density \cite{shafa, gl_tolga, tom_v} is $1.4$, and the PR equalization target is $[8 \text{ } 14 \text{ } 2]$. The receiver consists of filtering units followed by a Bahl-Cocke-Jelinek-Raviv (BCJR) detector \cite{bcjr}, which is based on pattern-dependent noise prediction (PDNP) \cite{pdnp}, and an FFT-QSPA LDPC decoder \cite{dec_fft}. The number of global (detector-decoder) iterations is $10$, and the number of local (decoder only) iterations is $20$. Unless a codeword is reached, the decoder performs its prescribed number of local iterations for each global iteration. More details can be found in \cite{ahh_bas}.
Code~14 is an NB-SC code designed using the CV technique, and defined over GF($4$), with block length $= 8464$ bits, rate $\approx 0.85$, and $\gamma = 3$. The underlying block code is a non-binary AB-LDPC code defined over GF($4$), with circulant size $=23$ and $\gamma = 3$. The coupling length $L=8$ \cite{homa_sc}, and the underlying block code is partitioned using the optimal CV $[5 \text{ } 11 \text{ } 18]$ (see also \cite{homa_sc} for more details about determining the optimal CV). Code~15 is the result of optimizing Code~14 for the PR channel by attempting to remove the dominant BASTs $(6, 0, 0, 9, 0)$, $(6, 1, 0, 9, 0)$, $(6, 2, 0, 9, 0)$, $(8, 0, 0, 10, 1)$, and $(8, 0, 0, 12, 0)$ using the WCM framework.
Fig.~\ref{Figure_19} shows that the SC code optimized using the WCM framework (Code~15) outperforms the unoptimized SC code (Code~14) by more than $1.5$ orders of magnitude over the PR channel. Note that this significant performance gain is achieved despite the unlabeled Tanner graphs of Codes~14 and 15 both being designed using the optimal CV. In the caption of Fig.~\ref{Figure_19} we precede the names of Codes~14 and 15 with ``SC'' for clarity.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.0in 0.0in 0.0in 0.2in},clip,width=3.5in]{Figure_19.pdf}
\vspace{-1.5em}
\caption{Simulation results over the PR channel for SC~Code~14 (unoptimized) and SC~Code~15 (WCM framework). The two codes have $\gamma=3$.}
\label{Figure_19}
\vspace{-0.2em}
\end{figure}
In the AWGN simulations, Code~16 (resp., Code~17) is an NB-SC code designed using the CV (resp., OO-CPO) technique, and defined over GF($8$), with block length $= 12615$ bits, rate $\approx 0.83$, and $\gamma = 4$. The underlying block code is defined over GF($8$), with circulant size $=29$, $\gamma = 4$, and row weight $=29$. The coupling length $L=5$ \cite{homa_sc, ahh_nboo}. The underlying block code of Code~16 is partitioned using the CV $[5 \text{ } 11 \text{ } 18 \text{ } 24]$, and it is a non-binary AB-LDPC code. The underlying block code of Code~17 is partitioned according to Fig.~\ref{Figure_20}, upper panel, and its circulant power arrangement is given in Fig.~\ref{Figure_20}, lower panel (see also \cite{homa_boo} and \cite{ahh_nboo}). Code~18 (resp., Code~19) is the result of optimizing Code~16 (resp., Code~17) for the AWGN channel by attempting to remove the dominant EASs $(4, 4, 4, 6, 0)$ (only from Code~17), $(6, 4, 4, 10, 0)$, $(6, 6, 6, 9, 0)$, and $(8, 2, 2, 15, 0)$ using the WCM framework.
\begin{figure*}
\center
\includegraphics[trim={0.1in 0.8in 0.1in 0.0in},clip,width=6.0in]{Figure_20.pdf}
\vspace{-0.7em}
\caption{Upper panel: the OO partitioning of the underlying block code of Code~17 (and Code~19). Entries with circles (resp., squares) are assigned to the first (resp., second) component matrix. Lower panel: the circulant power arrangement for the circulants in the underlying block code of Code~17 (and Code~19) after applying the CPO.}
\label{Figure_20}
\vspace{-0.5em}
\end{figure*}
Fig.~\ref{Figure_21} shows that the SC codes optimized using the WCM framework outperform the unoptimized SC codes by more than $1$ order of magnitude over the AWGN channel. Again, note that this significant performance gain is achieved despite the unlabeled Tanner graphs of Codes~16 and 18 (resp., Codes~17 and 19) both being designed using the same technique. An important observation is that despite the very good performance of Code~17 (the unoptimized code is designed using the OO-CPO technique), optimizing Code~17 using the WCM framework to reach Code~19 still achieves over $1$ order of magnitude performance gain. In the caption of Fig.~\ref{Figure_21} we precede the names of Codes~16, 17, 18, and 19 with ``SC'' for clarity.
\begin{figure}
\vspace{-0.5em}
\center
\includegraphics[trim={0.1in 0.0in 0.0in 0.2in},clip,width=3.6in]{Figure_21.pdf}
\vspace{-1.5em}
\caption{Simulation results over the AWGN channel for SC~Codes~16 (CV, unoptimized), 17 (OO-CPO, unoptimized), 18 (CV, WCM framework), and 19 (OO-CPO, WCM framework). The four codes have $\gamma=4$.}
\label{Figure_21}
\vspace{-0.2em}
\end{figure}
\section{Conclusion}\label{sec_conc}
In this paper, we have provided a theoretical analysis of a general combinatorial framework for optimizing non-binary graph-based codes. In particular, we proved the optimality of the WCM framework, and we demonstrated its efficiency by comparing the number of matrices it operates on with the number required by a suboptimal alternative approach. We have also detailed the theory behind the removal of a GAST; we discussed the dimension of the null space of a WCM and the minimum number of edge weight changes needed to remove a GAST. Furthermore, we proposed new combinatorial objects, OSTs, and showed how to extend the WCM framework to remove them and achieve additional performance gains for NB-LDPC codes with even column weights. On the applications side, the WCM framework was applied to different codes over a variety of channels with different characteristics, where performance gains of at least $1$ order, and up to nearly $2.5$ orders, of magnitude were achieved. A notable extension of the WCM framework was to use it in optimizing spatially-coupled codes for multiple channels. We believe that this framework will serve as an effective code optimization tool for emerging multi-dimensional storage devices, e.g., 3-D Flash and two-dimensional magnetic recording (TDMR) devices.
\begin{appendices}
\section{Finding the WCMs of a Given GAST}\label{sec_appa}
The steps of \cite[Algorithm~1]{ahh_jsac} are:
\begin{enumerate}
\item \textbf{Input:} Tanner graph $G_s$ of the GAST $s$, with edge weights over GF($q$), from which the matrix $\bold{A}$ is formed.
\item Set the maximum number of nested \textbf{for} loops, loop\_max.
\item Mark all the CNs $\in (\mathcal{T} \cup \mathcal{H})$ as satisfied. \textit{(CNs $\in \mathcal{O}$ are always unsatisfied.)}
\item Check if $\exists$ in $G_s$ at least one degree-$2$ CN connecting two VNs, each of which is connected to $> \left \lceil \frac{\gamma+1}{2} \right \rceil$ CNs that are marked as satisfied.
\item \textbf{if} $\nexists$ any of them \textbf{then}
\item \hspace{2ex} $\exists$ only one $(\ell-d_1) \times a$ WCM. Extract it by removing all the rows corresponding to degree-$1$ CNs from the matrix $\bold{A}$.
\item \hspace{2ex} Go to 26.
\item \textbf{else}
\item \hspace{2ex} Count such CNs (that satisfy the condition in 4), save the number in $u^0$, and save their indices (the indices of their rows in $\bold{A}$) in $\bold{y}^0=[y^0(1) \text{ } y^0(2) \text{ } \dots \text{ } y^0(u^0)]^\textup{T}$.
\item \textbf{end if}
\item Compute $b_{\textup{ut}}$ from (\ref{eq_but}). If $b_{\textup{ut}}=1$, go to 25.
\item \textbf{for} $i_1 \in \{1, 2, \dots, u^0\}$ \textbf{do} \textit{ (Level $1$)}
\item \hspace{2ex} Remove the marking performed in levels $\geq 1$, and mark the selected CN $c_{y^0(i_1)}$ as unsatisfied.
\item \hspace{2ex} {Redo the counting in 9, but save in $u^1_{i_1}$ ($< u^0$) and $\bold{y}^1_{i_1}$ (instead of $u^0$ and $\bold{y}^0$, resp.).}
\item \hspace{2ex} If $b_{\textup{ut}}=2$ $\parallel$ $u^1_{i_1}=0$, go to 12.
\item \hspace{2ex} \textbf{for} $i_2 \in \{1, 2, \dots, u^1_{i_1}\}$ \textbf{do} \textit{ (Level $2$)}
\item \hspace{4ex} Remove the marking performed in levels $\geq 2$, and mark the selected CN $c_{y^1_{i_1}(i_2)}$ as unsatisfied.
\item \hspace{4ex} {Redo the counting in 9, but save in $u^2_{i_1,i_2}$ ($<u^1_{i_1}$) and $\bold{y}^2_{i_1,i_2}$.}
\item \hspace{4ex} If $b_{\textup{ut}}=3$ $\parallel$ $u^2_{i_1,i_2}=0$, go to 16.
\item \hspace{4ex} $\dots$
\item \hspace{4ex} {The lines from 16 to 19 are repeated (loop\_max$-2$) times, with the nested (loop\_max$-2$) \textbf{for} loops executed over the running indices $i_3, i_4, \dots, i_{\text{loop\_max}}$.}
\item \hspace{4ex} $\dots$
\item \hspace{2ex} \textbf{end for}
\item \textbf{end for}
\item Obtain the WCMs via the indices in the $\bold{y}$ arrays. In particular, by removing permutations of the rows corresponding to $c_{y^0(i_1)}, c_{y^1_{i_1}(i_2)}, \dots, c_{y^{b_{\textup{ut}}-1}_{i_1,i_2, \dots, i_{b_{\textup{ut}}-1}}(i_{b_{\textup{ut}}})}$, and the degree-$1$ CNs from $\bold{A}$, all the WCMs are reached.
\item Eliminate all the repeated WCMs to reach the final set of WCMs, $\mathcal{W}$, where $t=\vert{\mathcal{W}}\vert$.
\item \textbf{Output:} The set $\mathcal{W}$ of all WCMs of the GAST.
\end{enumerate}
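As a structural illustration only, the nested search in the steps above can be organized as the following recursive Python sketch. The graph-dependent test of step 4 is abstracted into a user-supplied callable, and all names and the toy inputs are hypothetical, so this is an outline of the control flow rather than a faithful reimplementation of \cite[Algorithm~1]{ahh_jsac}.
\begin{verbatim}
# Structural sketch of the WCM search: degree-1 CN rows are always
# removed; candidates_fn(marked) stands in for step 4's test and returns
# the degree-2 CN rows that may additionally be marked unsatisfied given
# the currently marked set. Chains are followed to depth b_ut - 1, and
# duplicate row sets are eliminated as in step 26.
def find_wcm_row_sets(deg1_rows, candidates_fn, b_ut):
    results = set()

    def descend(marked, level):
        cands = candidates_fn(marked)
        if level >= b_ut - 1 or not cands:
            results.add(frozenset(deg1_rows) | marked)
            return
        for c in cands:
            descend(marked | {c}, level + 1)

    descend(frozenset(), 0)
    return results

# Toy usage: rows 7 and 8 of A are degree-1 CNs; rows 3 and 5 are the
# only degree-2 CNs that may be flipped to unsatisfied, one at a time.
cands = lambda marked: ({3, 5} - set(marked)) if not marked else set()
print(sorted(map(sorted, find_wcm_row_sets({7, 8}, cands, 2))))
# -> [[3, 7, 8], [5, 7, 8]]: remove each row set from A to get a WCM.
\end{verbatim}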
\section{Optimizing NB-LDPC Codes by Reducing the Number of GASTs}\label{sec_appb}
The steps of \cite[Algorithm~2]{ahh_jsac} are:
\begin{enumerate}
\item \textbf{Input:} Tanner graph $G_{\textup{C}}$ of the NB-LDPC code with edge weights over GF($q$).
\item Using initial simulations and combinatorial techniques (e.g., \cite{bani_cycle}), determine $\mathcal{G}$, the set of GASTs to be removed.
\item Let $\mathcal{X}$ be the set of GASTs in $\mathcal{G}$ that cannot be removed, and initialize it with $\varnothing$.
\item Let $\mathcal{P}$ be the set of GASTs in $\mathcal{G}$ that have been processed, and initialize it with $\varnothing$.
\item Sort the GASTs in $\mathcal{G}$ according to their sizes (parameter $a$) from the smallest to the largest.
\item Start from the smallest GAST (smallest index).
\item {\textbf{for} every GAST $s \in \mathcal{G} \setminus \mathcal{P}$ \textbf{do}}
\item \hspace{2ex} If the unlabeled configuration of $s$ does not satisfy the {unlabeled GAST} conditions in Definitions~\ref{def_ugas} and \ref{def_gast}, skip $s$ and go to 7.
\item \hspace{2ex} Determine the minimum number of edge weight changes needed to remove the GAST $s$, $E_{\textup{GAST},\textup{min}}$, by using \cite[Lemma~2]{ahh_bas} (see also Lemma~\ref{lem_emin_gas} in this paper).
\item \hspace{2ex} Extract the subgraph $G_s$ of the GAST $s$, from $G_{\textup{C}}$.
\item \hspace{2ex} Use \cite[Algorithm~1]{ahh_jsac} to determine the set $\mathcal{W}$ of all WCMs of $s$ ($\vert{\mathcal{W}}\vert=t$).
\item \hspace{2ex} \textbf{for} $h \in \{1, 2, \dots, t\}$ \textbf{do}
\item \hspace{4ex} Find the null space $\mathcal{N}(\bold{W}^{\textup{cm}}_h)$ of the $h$th WCM.
\item \hspace{4ex} \textbf{if} (\ref{eq_rem_cond}) is satisfied (i.e., the WCM already has broken weight conditions) \textbf{then}
\item \hspace{6ex} Go to 12.
\item \hspace{4ex} \textbf{else}
\item \hspace{6ex} Keep track of the changes already performed in $G_s$. \textit{(The total number of changes to remove the GAST should be as close as possible to $E_{\textup{GAST},\textup{min}}$.)}
\item \hspace{6ex} Determine the smallest set of edge weight changes in $G_s$ needed to achieve (\ref{eq_rem_cond}) for the $h$th WCM, without violating (\ref{eq_rem_cond}) for WCMs prior to the $h$th.
\item \hspace{6ex} If this set of edge weight changes does not undo~the removal of any GAST $\in \mathcal{P} \setminus \mathcal{X}$, perform these changes in $G_s$ and go to 12.
\item \hspace{6ex} \textbf{if} $\nexists$ more edge weights to execute 18 and 19 \textbf{then}
\item \hspace{8ex} Add GAST $s$ to the set $\mathcal{X}$ and go to 27.
\item \hspace{6ex} \textbf{else} Go to 18 to determine a new set of changes.
\item \hspace{6ex} \textbf{end if}.
\item \hspace{4ex} \textbf{end if}
\item \hspace{2ex} \textbf{end for}
\item \hspace{2ex} Update $G_{\textup{C}}$ by the changes performed in $G_s$.
\item \hspace{2ex} Add GAST $s$ to the set $\mathcal{P}$.
\item \hspace{2ex} If $\mathcal{P} \neq \mathcal{G}$, go to 7 to pick the next smallest GAST.
\item \textbf{end for}
\item If $\mathcal{X}=\varnothing$, then all the GASTs in $\mathcal{G}$ have been removed. Otherwise, only the remaining GASTs in $\mathcal{X}$ cannot be removed.
\item \textbf{Output:} Updated Tanner graph $G_{\textup{C}}$ of the optimized NB-LDPC code with edge weights over GF($q$).
\end{enumerate}
\section{Null Spaces of WCMs of GASTs with $b=d_1$}\label{sec_appc}
In this appendix, we investigate the null spaces, along with their dimensions, of WCMs that belong to GASTs with $b=d_1$.
\begin{remark}\label{rem_6}
There are a few configurations that can be categorized as $(a, b_{\textup{g}}, d_1, d_2, d_3)$ GASTs, with $b_{\textup{g}} \in \{b_i, b_{ii}, \dots\}$ ($b_{\textup{g}}$ is not unique). In other words, it is possible to have a configuration which is an $(a, b_i, d_1, d_2, d_3)$ GAST for some set of VN vectors, and it is an $(a, b_{ii}, d_1, d_2, d_3)$ GAST for another set of VN vectors, where $b_i \neq b_{ii}$. For example, the configuration in Fig.~\ref{Figure_9}(a), with $w_{1,1}=w_{6,1}=1$, is a $(6, 0, 0, 9, 0)$ GAST for the vector $[\alpha \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 1 \text{ } 1]^\textup{T}$ (along with others), while the same configuration is a $(6, 3, 0, 9, 0)$ GAST for the vector $[\alpha^2 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } \alpha \text{ } \alpha]^\textup{T}$ (along with others). In cases like these, we identify the configuration with its \textbf{smallest} $b_{\textup{g}}$ in the set $\{b_i, b_{ii}, \dots\}$. Thus, we identify the configuration in Fig.~\ref{Figure_9}(a) as a $(6, 0, 0, 9, 0)$ GAST as mentioned in Example~\ref{ex_9}. Note that this situation is not problematic for our GAST removal process, as our goal is to convert the GAST into another object $\notin \mathcal{Z}$, where $\mathcal{Z}$ is the set of all $(a, b', d_1, d_2, d_3)$ GASTs, $d_1 \leq b' \leq b_{\textup{max}}$.
\end{remark}
\begin{corollary}\label{cor_b_d1}
An $(a, d_1, d_1, d_2, d_3)$ GAST, which is a GAST with $b=d_1$, has unbroken weight conditions for all its WCMs.
\end{corollary}
\begin{IEEEproof}
From \cite[Lemma~1]{ahh_jsac} (see Section~\ref{sec_sum}), a GAST that has $b=d_1$ must have the particular $\bold{W}^{\textup{z}}$ matrix of size $(\ell-d_1) \times a$ (extracted by removing the rows of all degree-$1$ CNs from $\bold{A}$) with unbroken weight conditions, i.e., $\exists \text{ } \bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_a]^\textup{T} \in \mathcal{N}(\bold{W}^{z})$ s.t. $v_f \neq 0$, $\forall f \in \{1, 2, \dots, a\}$. Since by definition of WCMs, each $\bold{W}^{\textup{cm}}_h$, $\forall h \in \{1, 2, \dots, t\}$, is a submatrix of this particular $\bold{W}^{\textup{z}}$, it follows that $\mathcal{N}(\bold{W}^{z}) \subseteq \mathcal{N}(\bold{W}^{\textup{cm}}_h)$, $\forall h$. In other words, $\exists \text{ } \bold{v}=[v_1 \text{ } v_2 \text{ } \dots \text{ } v_a]^\textup{T} \in \mathcal{N}(\bold{W}^{\textup{cm}}_h)$, $\forall h$, s.t. $v_f \neq 0$, $\forall f \in \{1, 2, \dots, a\}$.
\end{IEEEproof}
Corollary~\ref{cor_b_d1} highlights that each WCM of an $(a, d_1, d_1, d_2, d_3)$ GAST has $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) > 0$, $\forall h$, which is a consequence of all of them having unbroken weight conditions. The following example further discusses the null spaces of WCMs belonging to GASTs with $b=d_1$.
\begin{example}\label{ex_11}
We once more return to the $(6, 0, 0, 9, 0)$ GAST in Fig.~\ref{Figure_9}(a), with $w_{1,1}=w_{6,1}=1$. The configuration is a $(6, 0, 0, 9, 0)$ GAST because the vector $\bold{v} = [\alpha \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 1 \text{ } 1]^\textup{T}$, for example, is in the null space of the $9 \times 6$ matrix $\bold{W}^{\textup{z}}=\bold{A}$ (note that there are no degree-$1$ CNs in this configuration). The null spaces of the $10$ WCMs, extracted according to Example~\ref{ex_9}, of that GAST are detailed below:
\begin{align}\label{eq_ex11_1}
\mathcal{N}(\bold{W}^{\textup{cm}}_1) &= \mathcal{N}(\bold{W}^{\textup{cm}}_3) = \mathcal{N}(\bold{W}^{\textup{cm}}_4) = \mathcal{N}(\bold{W}^{\textup{cm}}_5) \nonumber \\ &= \mathcal{N}(\bold{W}^{\textup{cm}}_6) = \mathcal{N}(\bold{W}^{\textup{cm}}_7) = \mathcal{N}(\bold{W}^{\textup{cm}}_8) = \mathcal{N}(\bold{W}^{\textup{cm}}_9) \nonumber \\ &= \mathcal{N}(\bold{W}^{\textup{cm}}_{10}) = \textup{span}\{[\alpha \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 1 \text{ } 1]^\textup{T}\} \text{ and} \nonumber \\
\mathcal{N}(\bold{W}^{\textup{cm}}_2)&=\textup{span}\{[\alpha \text{ } 0 \text{ } 0 \text{ } 0 \text{ } 1 \text{ } 1]^\textup{T}, [0 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } 0 \text{ } 0]^\textup{T}\}.
\end{align}
We now turn our attention to the $(6, 2, 2, 5, 2)$ GAST in Fig.~\ref{Figure_10}(a), with $w_{1,2}=\alpha^2$. The configuration is a $(6, 2, 2, 5, 2)$ GAST because the vector $\bold{v} = [\alpha^2 \text{ } 1 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } \alpha]^\textup{T}$, for example, is in the null space of the $7 \times 6$ matrix $\bold{W}^{\textup{z}}$, extracted by removing the rows of the $2$ degree-$1$ CNs (that are $c_8$ and $c_9$) from $\bold{A}$. The null spaces of the $2$ WCMs, extracted according to Example~\ref{ex_10}, of that GAST are:
\begin{align}\label{eq_ex11_2}
\mathcal{N}(\bold{W}^{\textup{cm}}_1)&= \textup{span}\{[\alpha^2 \text{ } 1 \text{ } 1 \text{ } 1 \text{ } \alpha \text{ } \alpha]^\textup{T}\} \text{ and} \nonumber \\
\mathcal{N}(\bold{W}^{\textup{cm}}_2)&=\textup{span}\{[0 \text{ } 1 \text{ } 1 \text{ } \alpha^2 \text{ } 1 \text{ } 0]^\textup{T}, [1 \text{ } 1 \text{ } 1 \text{ } 0 \text{ } 0 \text{ } \alpha^2]^\textup{T}\}.
\end{align}
It is clear that, all the WCMs for both GASTs have unbroken weight conditions, which is expected according to Corollary~\ref{cor_b_d1} (both GASTs have $b=d_1$).
\end{example}
Note that in Example~\ref{ex_11}, all the WCMs have $p_h = \textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) = \delta_h$, except for one WCM; $\bold{W}^{\textup{cm}}_2$ of the $(6, 2, 2, 5, 2)$ GAST, which is a short WCM, has $\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_2) \right ) = 2 > \delta_2 = 1$. As mentioned before, it is typically the case that $p_h=\textup{dim}\left (\mathcal{N}(\bold{W}^{\textup{cm}}_h) \right ) = \delta_h$.
\end{appendices}
\section*{Acknowledgement}
The authors thank the associate editor, Prof. Michael Lentmaier, for the handling of the paper and for the constructive feedback that has improved the paper.
The research was supported in part by UCLA dissertation year fellowship, in part by a grant from ASTC-IDEMA, in part by NSF Grant CCF-CAREER 1150212, and in part by NSF Grant CCF-BSF 1718369.
\section{Protection of Parity: Differential Privacy vs. Randomized Response}\label{app:parity}
\input{appparity}
}
\section{Proof of Theorem \lowercase{\ref{thm:closure}}}\label{app:close}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:closure}).}
Given a privacy definition $\priv$, its consistent normal form $\cnf(\priv)$ is equivalent to the following.
\begin{enumerate}
\item Define $\priv^{(1)}$ to be the set of all (deterministic and randomized algorithms) of the form $\randalg\circ\mech$, where $\mech\in\priv$, $\range(\mech)\subseteq\domain(\randalg)$, and the random bits of $\randalg$ and $\mech$ are independent of each other.
\item For any positive integer $n$, finite sequence $\mech_1,\dots,\mech_n$ and probability vector $\vec{p}=(p_1,\dots,p_n)$, use the notation $\choice^{\vec p}(\mech_1,\dots,\mech_n)$ to represent the algorithm that runs $\mech_i$ with probability $p_i$.
Define $\priv^{(2)}$ to be the set of all algorithms of the form $\choice^{\vec{p}}(\mech_1,\dots,\mech_n)$ where $n$ is a positive integer, $\mech_1,\dots,\mech_n\in\priv^{(1)}$, and $\vec{p}$ is a probability vector.
\item Set $\cnf(\priv)=\priv^{(2)}$.
\end{enumerate}
\end{theorem}
\begin{proof}
We need to show that $\priv^{(2)}$ satisfies Axioms \ref{ax:post} and \ref{ax:conv} and that any other privacy definition that satisfies both axioms and contains $\priv$ must also contain $\priv^{(2)}$.
By construction, $\priv^{(2)}$ satisfies Axiom \ref{ax:conv} (convexity). To show that $\priv^{(2)}$ satisfies Axiom \ref{ax:post} (post-processing), choose any $\mech \in \priv^{(2)}$ and a postprocessing algorithm $\randalg$. By construction of $\priv^{(2)}$, there exists an integer $m$, a sequence of algorithms $\mech^{(1)}_1,\dots,\mech^{(1)}_m$ with each $\mech^{(1)}_i\in \priv^{(1)}$, and a probability vector $\vec{p}=(p_1,\dots,p_m)$ such that $\mech=\choice^p(\mech^{(1)}_1,\dots,\mech^{(1)}_m)$. It is easy to check that $\randalg\circ\mech=\choice^p(\randalg\circ\mech^{(1)}_1,\dots,\randalg\circ\mech^{(1)}_m)$. By construction of $\priv^{(1)}$, $\randalg\circ\mech^{(1)}_i\in\priv^{(1)}$ because $\mech^{(1)}_i\in\priv^{(1)}$. Therefore, by construction of $\priv^{(2)}$, $\randalg\circ\mech\in\priv^{(2)}$ and so $\priv^{(2)}$ satisfies Axiom \ref{ax:post} (post-processing).
Now let $\priv^\prime$ be some privacy definition containing $\priv$ and satisfying both axioms. By Axiom \ref{ax:post} (post-processing), $\priv^{(1)}\subseteq\priv^\prime$. By Axiom \ref{ax:conv} (convexity) it follows that $\priv^{(2)}\subseteq\priv^\prime$. Therefore $\cnf(\priv)=\priv^{(2)}\subseteq \priv^\prime$.
\end{proof}
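Since an algorithm over a countable input domain is represented by a column-stochastic matrix (Definition \ref{def:matrix}, columns indexed by datasets), the construction above can be illustrated numerically: post-processing is matrix multiplication and $\choice^{\vec{p}}$ is a convex combination. The following Python sketch uses small random column-stochastic matrices purely as hypothetical examples.
\begin{verbatim}
# Numerical illustration of the two-phase construction: post-processing
# is multiplication of column-stochastic matrices, and Choice^p is a
# convex combination. All matrices below are hypothetical examples.
import numpy as np

rng = np.random.default_rng(0)

def column_stochastic(rows, cols):
    m = rng.random((rows, cols))
    return m / m.sum(axis=0)

M1, M2 = column_stochastic(3, 4), column_stochastic(3, 4)  # in priv
A = column_stochastic(2, 3)                     # post-processor
p = np.array([0.3, 0.7])

mix = p[0] * (A @ M1) + p[1] * (A @ M2)  # Choice^p of two P^(1) members
# Still a valid (column-stochastic) mechanism, hence in P^(2):
print(np.allclose(mix.sum(axis=0), 1.0))            # True
# Post-processing commutes with Choice^p, as used in the proof:
print(np.allclose(mix, A @ (p[0] * M1 + p[1] * M2)))  # True
\end{verbatim}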
\section{Proof of Corollary \lowercase{\ref{cor:one}}}\label{app:corone}
\begin{corollary}\emph{(Restatement of Corollary \ref{cor:one}).}\\
If $\priv=\set{\mech}$ consists of just one algorithm, $\cnf(\priv)$ is the set of all algorithms of the form $\randalg\circ\mech$, where $\range(\mech)\subseteq\domain(\randalg)$ and the random bits in $\randalg$ and $\mech$ are independent of each other.
\end{corollary}
\begin{proof}
We use the notation defined in Theorem \ref{thm:closure}.
The corollary follows easily from the process described in Theorem \ref{thm:closure} and the fact that $$\choice^{\vec{p}}(\randalg_1\circ\mech,\dots,\randalg_n\circ\mech)=\left(\choice^{\vec{p}}(\randalg_1,\dots,\randalg_n)\right)\circ\mech$$ so that the process of computing $\cnf(\priv)$ terminates after the first step.
\end{proof}
\section{Proof of Theorem \lowercase{\ref{thm:cone}}}\label{app:cone}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:cone}).}
$\rowcone(\priv)$ is a convex cone.
\end{theorem}
\begin{proof}
Choose any $\vec{v}=(v_1,v_2,\dots)\in\rowcone(\priv)$. Then by definition $c\vec{v}\in\rowcone(\priv)$ for any $c\geq 0$. This takes care of the cone property so that we only need to show that $\rowcone(\priv)$ is a convex set.
Choose any vectors $\vec{x}=(x_1,x_2,\dots)\in\rowcone(\priv)$, $\vec{y}=(y_1,y_2,\dots)\in\rowcone(\priv)$, and a number $t$ such that $0\leq t\leq 1$. We show that $t\vec{x}+(1-t)\vec{y}\in\rowcone(\priv)$. If either $\vec{x}=\vec{0}$ or $\vec{y}=\vec{0}$ then we are done by the cone property we just proved. Otherwise, by definition of row cone, there exist constants $c_1,c_2>0$, algorithms $\mech_1,\mech_2\in\cnf(\priv)$, and sanitized outputs $\omega_1\in\range(\mech_1)$, $\omega_2\in \range(\mech_2)$ such that $\vec{x}/c_1$ is a row of the matrix representation of $\mech_1$ and $\vec{y}/c_2$ is a row of the matrix representation of $\mech_2$:
\begin{eqnarray*}
\vec{x}&=&\Big(c_1P[\mech_1(D_1)=\omega_1],~c_1P[\mech_1(D_2)=\omega_1],\dots\Big)\\
\vec{y}&=&\Big(c_2P[\mech_2(D_1)=\omega_2],~c_2P[\mech_2(D_2)=\omega_2],\dots\Big)
\end{eqnarray*}
Let $\randalg_1$ be the algorithm that outputs $\omega$ if its input is $\omega_1$ and $\omega^\prime$ otherwise. Similarly, let $\randalg_2$ be the algorithm that outputs $\omega$ if its input is $\omega_2$ and $\omega^\prime$ otherwise. Define $\mech_1^\prime\equiv\randalg_1\circ\mech_1$ and $\mech_2^\prime\equiv\randalg_2\circ\mech_2$. Then by Theorem \ref{thm:closure} (and the post-processing Axiom \ref{ax:post}), $\mech_1^\prime,\mech_2^\prime\in\cnf(\priv)$ and
\begin{eqnarray*}
\vec{x}&=&\Big(c_1P[\mech^\prime_1(D_1)=\omega],~c_1P[\mech^\prime_1(D_2)=\omega],\dots\Big)\\
\vec{y}&=&\Big(c_2P[\mech^\prime_2(D_1)=\omega],~c_2P[\mech^\prime_2(D_2)=\omega],\dots\Big)
\end{eqnarray*}
Now consider the algorithm $\mech^*$ which runs $\mech_1^\prime$ with probability $\frac{tc_1}{tc_1+(1-t)c_2}$ and runs $\mech_2^\prime$ with probability $\frac{(1-t)c_2}{tc_1 + (1-t)c_2}$. By Theorem \ref{thm:closure}, $\mech^*\in\cnf(\priv)$. Then for all $i=1,2,\dots$,
\begin{eqnarray*}
P(\mech^*(D_i)=\omega) &=&\frac{tc_1P(\mech_1^\prime(D_i)=\omega)+(1-t)c_2P(\mech_2^\prime(D_i)=\omega)}{tc_1+(1-t)c_2}\\
&=& \frac{tx_i + (1-t)y_i}{tc_1+(1-t)c_2}
\end{eqnarray*}
Thus the vector $\frac{t\vec{x} + (1-t)\vec{y}}{tc_1+(1-t)c_2}$ is the row vector corresponding to $\omega$ of the matrix representation of $\mech^*$ and is therefore in $\rowcone(\priv)$. Multiplying by the nonnegative constant $tc_1+(1-t)c_2$, we get that $t\vec{x} + (1-t)\vec{y}\in\rowcone(\priv)$ and so $\rowcone(\priv)$ is convex.
\end{proof}
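The algebra of the mixture construction above can be verified numerically; in the sketch below, \texttt{M1p} and \texttt{M2p} are small hypothetical column-stochastic matrices playing the roles of $\mech_1^\prime$ and $\mech_2^\prime$, and row $0$ plays the role of the output $\omega$.
\begin{verbatim}
# Numerical check of the convexity construction in the proof above.
import numpy as np

rng = np.random.default_rng(1)

def column_stochastic(rows, cols):
    m = rng.random((rows, cols))
    return m / m.sum(axis=0)

M1p, M2p = column_stochastic(2, 5), column_stochastic(2, 5)
c1, c2, t = 1.7, 0.4, 0.25
x, y = c1 * M1p[0], c2 * M2p[0]        # scaled rows in the row cone

w1 = t * c1 / (t * c1 + (1 - t) * c2)  # probability of running M1'
M_star = w1 * M1p + (1 - w1) * M2p     # the mixture mechanism M*
lhs = M_star[0]                        # row of M* for output omega
rhs = (t * x + (1 - t) * y) / (t * c1 + (1 - t) * c2)
print(np.allclose(lhs, rhs))           # True
\end{verbatim}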
\eat{
\section{Proof of Theorem \lowercase{\ref{thm:nok}}}\label{app:nok}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:nok}).} Given a fixed schema with a quasi-identifier that contains an integer-valued attribute, the consistent normal form of $k$-anonymity consists of every algorithm whose input domain contains tables with this schema. The row cone consists of all vectors.
\end{theorem}
\begin{proof}
Without loss of generality, assume Age is the integer-valued quasi-identifier attribute. Consider the algorithm $\mech_1$ that suppresses all attributes except for Age. It then sorts the tuples by Age (breaking ties arbitrarily). Using this sorted order, it puts the first $k$ tuples into the first group, the second $k$-tuples into the second group, etc. For each group $i$, the age is coarsened into an age range $[a_i,b_i]$. The $a_i$ and $b_i$ are decimal numbers (e.g. $3.552$) that encode all of the tuples in the group $i$. They have the following format. The number $a_i$ is equal to the minimum age in group $i$ minus $0.4$. The number $b_i$ has the form $\alpha_i+\beta_i$, where $\alpha_i$ is the maximum age in group $i$ plus $0.5$. $\beta_i$ has the form $0.0\gamma_i$, where $\gamma_i$ is a prefix-free encoding of the tuples in group $i$.
Clearly this algorithm $\mech_1$ satisfies $k$-anonymity and each input table is transformed into a unique ``anonymized'' table. Clearly there also exists a postprocessing algorithm $\randalg_2$ which can decode the ``anonymized'' table to recover the original table. By Axiom \ref{ax:post} (post-processing), the algorithm that first runs $\mech_1$ and then $\randalg_2$ (to recover the original table) belongs to the consistent normal form of $k$-anonymity. Since that algorithm is the identity, then by Axiom \ref{ax:post} (post-processing) all algorithms are in the consistent normal form of $k$-anonymity.
It easily follows that the row cone consists of all vectors.
\end{proof}
}
\section{Proof of Theorem \lowercase{\ref{thm:invcnfinf}}}\label{app:invcnfinf}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:invcnfinf}).}
Let $\inp$ be a finite or countably infinite set of possible datasets. Let $\mech^*$ be an algorithm with $\domain(\mech^*)= \inp$. Let $M^*$ be the matrix representation of $\mech^*$ (Definition \ref{def:matrix}). If $(M^*)^{-1}$ exists and the $L_1$ norm of each column of $(M^*)^{-1}$ is bounded by a constant $C$ then
\begin{list}{\labelitemi}{\leftmargin=1em}
\itemsep 1pt
\parskip 4pt
\item[(1)] A bounded row vector $\vec{x}\in\rowcone(\set{\mech^*})$ if and only if $\vec{x}\cdot m\geq 0$ for every column $m$ of $(M^*)^{-1}$.
\item[(2)] An algorithm $\mech$, with matrix representation $M$, belongs to $\cnf(\set{\mech^*})$ if and only if the matrix $M(M^*)^{-1}$ contains no negative entries.
\item[(3)] An algorithm $\mech$, with matrix representation $M$, belongs to $\cnf(\set{\mech^*})$ if and only if every row of $M$ belongs to $\rowcone(\set{\mech^*})$.
\end{list}
\end{theorem}
\begin{proof}
We first prove $(1)$. If $\vec{x}$ is the $0$ vector then this is clearly true. Thus assume $\vec{x}\neq\vec{0}$. If $\vec{x}\in\rowcone(\set{\mech^*})$ then by definition of the row cone and by Corollary \ref{cor:one}, $\vec{x}=\vec{y} M^*$ where $\vec{y}$ is a bounded row vector and has nonnegative components. Then $\vec{x}(M^*)^{-1}=\vec{y}M^*(M^*)^{-1}=\vec{y}$ and so $\vec{x}\cdot m\geq 0$ for every column $m$ of $(M^*)^{-1}$.
For the other direction, we must construct an algorithm $\randalg$ with matrix representation $A$ such that for some $c>0$, $c\vec{x}$ is a row of $AM^*$ (by definition of row cone and Corollary \ref{cor:one}). Thus, by hypothesis, suppose $\vec{x}\cdot m\geq 0$ for each column vector $m$ of $(M^*)^{-1}$ and consider the row vector $\vec{y}=\vec{x}(M^*)^{-1}$, which therefore has nonnegative entries. Since $\vec{x}$ is bounded and $||m||_1\leq C$ for each column vector $m$ of $(M^*)^{-1}$, we have $|\vec{x}\cdot m|\leq ||\vec{x}||_\infty ||m||_1\leq ||\vec{x}||_\infty C$ (by H\"{o}lder's Inequality \cite{rudin}), so that $\vec{y}$ is bounded. Choose $c>0$ so that every component of $c\vec{y}$ is at most $1$. Consider the algorithm $\randalg$ that has a matrix representation $A$ with two rows, the first row being $c\vec{y}$ and the second row being $\vec{1}-c\vec{y}$ ($\randalg$ is an algorithm since $c\vec{y}$ and $\vec{1}-c\vec{y}$ have nonnegative components and the column sums of $A$ are clearly $1$). $\randalg$ is the desired algorithm since $c\vec{x}$ is a row of $AM^*$.
To prove (2) and (3), note that if an algorithm has matrix representation $M$, then $M(M^*)^{-1}$ contains all the dot products between rows of $M$ and columns of $(M^*)^{-1}$. Therefore, the entries of $M(M^*)^{-1}$ are nonnegative if and only if every row of $M$ is in $\rowcone(\set{\mech^*})$ (this follows directly from the first part of the theorem). Thus (2) and (3) are equivalent and therefore we only need to prove (2).
To prove (2), first note the trivial direction. If $\mech\in\cnf(\set{\mech^*})$ then by definition every row of $M$ is in the row cone (and so by (1) all entries of $M(M^*)^{-1}$ are nonnegative). For the other direction, let $A=M(M^*)^{-1}$ (which has no negative entries by hypothesis). If we can show that the column sums of $A$ are all $1$ then, since $A$ contains no negative entries, $A$ would be a column stochastic matrix and therefore it would be the matrix representation of some algorithm $\randalg$. From this it would follow that $AM^*=M$ and therefore $\randalg\circ\mech^*=\mech$ (in which case $\mech\in\cnf(\mech^*)$ by Theorem \ref{thm:closure}).
So all we need to do is to prove that the column sums of $A$ are all $1$. Let $\vec{1}$ be a column vector whose components are all $1$. Then since $M$ is a matrix representation of an algorithm (Definition \ref{def:matrix}), $M$ has column sums equal to $1$, and similarly for $M^*$. Thus:
\begin{eqnarray*}
\vec{1}^T&=&\vec{1}^T M^*(M^*)^{-1}\\
&=&\vec{1}^T(M^*)^{-1}\\
&&\text{and therefore}\\
\vec{1}^T A &=& \vec{1}^TM(M^*)^{-1}\\
&=&\vec{1}^T(M^*)^{-1}\\
&=&\vec{1}^T
\end{eqnarray*}
and so the column sums of $A$ are equal to $1$. This completes the proof of this theorem.
\end{proof}
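To illustrate the theorem, consider the following small worked example (added here for illustration; it is not needed for the proof). Let $\inp=\set{D_1,D_2}$ and let $\mech^*$ have matrix representation
$$M^*=\begin{pmatrix}3/4 & 1/4\\ 1/4 & 3/4\end{pmatrix},\qquad (M^*)^{-1}=\begin{pmatrix}3/2 & -1/2\\ -1/2 & 3/2\end{pmatrix}$$
Each column of $(M^*)^{-1}$ has $L_1$ norm $2$, so the theorem applies with $C=2$. Part (1) then says that $(x_1,x_2)\in\rowcone(\set{\mech^*})$ if and only if $\frac{3}{2}x_1-\frac{1}{2}x_2\geq 0$ and $-\frac{1}{2}x_1+\frac{3}{2}x_2\geq 0$, i.e. $x_2\leq 3x_1$ and $x_1\leq 3x_2$. For instance, the row $(1/4,~3/4)$ of $M^*$ satisfies both constraints, the first with equality, as expected since it generates an extreme ray of the cone.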
\eat{
\section{Proof of Theorem \lowercase{\ref{thm:invcnf}}}\label{app:invcnf}
Before proving this theorem, we need to introduce some definitions from convex analysis \cite{Boyd:convex} and some intermediate results.
\begin{definition}\label{def:cone}\emph{(Polyhedral Cone).} Let $v_1, ..., v_m \in \mathbb{R}^n$ be row-vectors. The \emph{polyhedral cone} of $v_1, ..., v_m$, denoted by $C(v_1, ...,v_m)$, is the closed convex cone generated by $v_1,\dots, v_m$: $C(v_1,\dots,v_m)=\set{\sum_{i=1}^m c_iv_i~|~ c_i\geq 0}$. If $M$ is an $m\times n$ matrix with rows $v_1,\dots,v_m$ then we define $C(M)\equiv C(v_1,\dots,v_m)$.
\end{definition}
The next lemma says that the row cone of $\set{\mech^*}$ is a specific polyhedral cone that is related to the matrix representation $M^*$ of $\mech^*$.
\begin{lemma}\label{lem:dualrow} Let $\mech^*$ be an algorithm with finite domain $\inp$ and let $M^*$ be its matrix representation. Then $\rowcone(\set{\mech^*})=C(M^*)$.
\end{lemma}
\begin{proof}
Using
Corollary \ref{cor:one},
it is easy to see that
\begin{eqnarray*}
\cnf(\set{\mech^*})&=&\set{\randalg \circ\mech^*~:~\range(\mech^*)\subseteq \domain(\randalg)}
\end{eqnarray*}
It is also easy to see that the matrix representation of $\randalg\circ\mech^*$ is equal to $AM^*$, where $A$ is the matrix representation of $\randalg$ (thus composition of algorithms is equivalent to multiplication of the corresponding matrices). Now, each row of $AM^*$ is a nonnegative linear combination (with coefficients $\leq 1$) of rows of $M^*$. Therefore each vector $\vec{x}$ is in $\rowcone(\set{\mech^*})$ if and only if $\vec{x}$ is a nonnegative linear combination of rows of $M^*$, which is the same as saying $\rowcone(\set{\mech^*})=C(M^*)$.
\end{proof}
The next step is to find the linear constraints that determine $C(M^*)$ since these are the same constraints that determine $\rowcone(\set{\mech^*})$. For this, we need the concept of a \emph{dual cone}.
\begin{definition}\label{def:dualcone}\emph{(Dual Cone).} Let $C\subseteq \mathbb{R}^n$ be a cone. The dual cone of $C$, denoted by $C^*$, is defined as $\set{ w \in \mathbb{R}^n | v\cdot w \geq 0,~ \forall v \in C}$.
\end{definition}
We need the following result which applies to polyhedral cones such as $C(M^*)$ (note that $C(M^*)$ is a closed convex cone). The importance of this result is that it shows that $C(M^*)$, which is equal to the row cone that we are interested in, is completely defined by the linear inequalities encapsulated by the dual cone $C^*(M^*)$. In other words, by definition of dual cone, $\vec{v}\in C(M^*)$ if and only if $\vec{v}\cdot\vec{w}\geq 0$ for all $\vec{w}$ in the dual cone $C^*(M^*)$.
\begin{lemma}\cite{Boyd:convex}.
\label{lemma:dualdual}
Let $C$ be a cone. Then $C^{*}$ is a closed convex cone. If $C$ is a closed and convex cone, then $C=C^{**}$ ($C$ is the dual of its dual cone).
\end{lemma}
The dual cone $C^*(M^*)$ contains infinitely many vectors and hence places infinitely many linear constraints that must be satisfied by $\vec{v}$ in order for $\vec{v}$ to be in $C(M^*)$, the row cone we are interested in. The following result allows us to find a finite representative set of linear constraints.
\begin{lemma}\cite{burns:dualconeinverse}.
\label{lemma:dual}
Let $M^*$ be an invertible matrix. Then $C^*(M^*)=C(~((M^*)^{-1})^T~)$. In other words, the dual cone is generated by the columns of the inverse of $M^*$ -- every vector in the dual cone is a nonnegative linear combination of the (transposes of the) columns of $(M^*)^{-1}$.
\end{lemma}
From Lemmas \ref{lemma:dual} and \ref{lemma:dualdual}, it easily follows that $\vec{v}\in C(M^*)$ if and only if $\vec{v}\cdot\vec{w}\geq 0$ for all $\vec{w}$ that are column vectors of $(M^*)^{-1}$.
We are now in position to prove Theorem \ref{thm:invcnf}.
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:invcnf}).}
Let $\inp=\set{D_1,\dots,D_n}$ be a finite set of possible datasets. Let $\mech^*$ be an algorithm with $\domain(\mech^*)\subseteq \inp$. Let $M^*$ be the matrix representation of $\mech^*$ (Definition \ref{def:matrix}). If $M^*$ is an invertible matrix then, denoting the $i^\text{th}$ column of $(M^*)^{-1}$ as $m^{(i)}$,
\begin{list}{\labelitemi}{\leftmargin=1em}
\itemsep 1pt
\parskip 4pt
\item A vector $\vec{x}\in\rowcone(\set{\mech^*})$ if and only if $\vec{x}\cdot m^{(i)}\geq 0$ for every column $m^{(i)}$ of $(M^*)^{-1}$.
\item An algorithm $\mech$, with matrix representation $M$, belongs to $\cnf(\set{\mech^*})$ if and only if the matrix $M(M^*)^{-1}$ contains no negative entries.
\item An algorithm $\mech$, with matrix representation $M$, belongs to $\cnf(\set{\mech^*})$ if and only if every row of $M$ belongs to $\rowcone(\set{\mech^*})$.
\end{list}
\end{theorem}
\begin{proof}
The characterization of the row cone follows from Lemmas \ref{lem:dualrow}, \ref{lemma:dual}, \ref{lemma:dualdual}, and the discussion surrounding them. This proves the first part of the theorem.
Note that if an algorithm has matrix representation $M$, then $M(M^*)^{-1}$ contains all the dot products between rows of $M$ and columns of $(M^*)^{-1}$. Therefore, the entries of $M(M^*)^{-1}$ are nonnegative if and only if every row of $M$ is in $\rowcone(\set{\mech^*})$ (this follows directly from the first part of the theorem). Thus proving the second part of the theorem automatically proves the third part.
To prove the second part of the theorem, let $A=M(M^*)^{-1}$. If we can show that the column sums of $A$ are all $1$ then, since $A$ contains nonnegative entries, $A$ would be a column stochastic matrix and therefore it would be the matrix representation of some algorithm $\randalg$. From this it would follow that $AM^*=M$ and therefore $\randalg\circ\mech^*=\mech$ (in which case $\mech\in\cnf(\mech^*)$ by Theorem \ref{thm:closure}).
So all we need to do is to prove that the column sums of $A$ are all $1$. Let $\vec{1}$ be a column vector of length $n$ whose components are all $1$. Then since $M$ is a matrix representation of an algorithm (Definition \ref{def:matrix}), $M$ has column sums equal to $1$, and similarly for $M^*$. Thus:
\begin{eqnarray*}
\vec{1}^T&=&\vec{1}^T M^*(M^*)^{-1}\\
&=&\vec{1}^T(M^*)^{-1}\\
&&\text{and therefore}\\
\vec{1}^T A &=& \vec{1}^TM(M^*)^{-1}\\
&=&\vec{1}^T(M^*)^{-1}\\
&=&\vec{1}^T
\end{eqnarray*}
and so the column sums of $A$ are equal to $1$. This completes the proof of this theorem.
\end{proof}
}
\section{Proof of Lemma \lowercase{\ref{lem:phalf}}}\label{app:phalf}
\begin{lemma}\emph{(Restatement and proof of Lemma \ref{lem:phalf}).}
Given a privacy parameter $p$, define $q=\max(p,1-p)$. Then
\begin{list}{\labelitemi}{\leftmargin=2em}
\itemsep 0pt
\parskip 2pt
\item $\cnf(\set{\rr{p}})=\cnf(\set{\rr{q}})$.
\item If $p=1/2$ then $\cnf(\set{\rr{p}})$ consists of the set of algorithms whose outputs are statistically independent of their inputs (i.e. those algorithms $\mech$ where $P[\mech(D_i)=\omega]=P[\mech(D_j)=\omega]$ for all $D_i,D_j\in\inp$ and $\omega\in\range(\mech)$), and therefore attackers learn nothing from those outputs.
\end{list}
\end{lemma}
\begin{proof}
Consider the algorithm $\rr{0}$ which always flips each bit in its input. It is easy to see that $\rr{0}\circ\rr{p}=\rr{1-p}$ and $\rr{0}\circ\rr{1-p}=\rr{p}$. From Theorem \ref{thm:closure}, it follows that $\cnf(\set{\rr{p}})=\cnf(\set{\rr{1-p}})$ and therefore $\cnf(\set{\rr{p}})=\cnf(\set{\rr{q}})$.
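In matrix form, for a single bit ($k=1$), this composition is easy to verify directly (an illustrative check, added here for concreteness):
$$\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}p & 1-p\\ 1-p & p\end{pmatrix}=\begin{pmatrix}1-p & p\\ p & 1-p\end{pmatrix}$$
which is the matrix representation of $\rr{1-p}$ for one bit; the $k$-bit case follows by taking Kronecker products.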
Clearly, the output of $\rr{1/2}$ is independent of whatever was the true input table $D\in\inp$. By Theorem \ref{thm:closure}, all algorithms in $\cnf(\set{\rr{1/2}})$ have outputs independent of their inputs. For the other direction, choose any algorithm $\mech$ whose outputs are statistically independent of their inputs. Then it is easy to see that $\mech=\mech\circ\rr{1/2}$; that is, $\mech$ and $\mech\circ\rr{1/2}$ have the same range and $P(\mech(D_i)=\omega)=P([\mech\circ\rr{1/2}](D_i)=\omega)$ for all $D_i\in\inp$ and $\omega\in\range(\mech)$. Thus $\mech\in\cnf(\set{\rr{1/2}})$.
Clearly, when the output is statistically independent of the input, an attacker can learn nothing about the input after observing the output.
\end{proof}
\section{Proof of Theorem \lowercase{\ref{thm:rrcnf}}}\label{app:rrcnf}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:rrcnf}).}
Given input space $\inp=\set{D_1,\dots,D_{2^k}}$ of bit strings of length $k$ and a privacy parameter $p> 1/2$,
\begin{list}{\labelitemi}{\leftmargin=1em}
\itemsep 4pt
\parskip 2pt
\item A vector $\vec{x}=(x_1,\dots,x_{2^k})\in\rowcone(\set{\rr{p}})$ if and only if for every bit string $s$ of length $k$,
$$\sum\limits_{i=1}^{2^k}p^{\hamming(s, D_i)}(p-1)^{k-\hamming(s, D_i)}x_i \geq 0$$
where $\hamming(s, D_i)$ is the Hamming distance between $s$ and $D_i$.
\item An algorithm $\mech$ with matrix representation $M$ belongs to $\cnf(\set{\rr{p}})$ if and only if every row of $M$ belongs to $\rowcone(\set{\rr{p}})$.
\end{list}
\end{theorem}
\begin{proof}
Our strategy is to first derive the matrix representation of $\rr{p}$, which we denote by $\mrr{p}$. Then we find the inverse of $\mrr{p}$ and apply Theorem \ref{thm:invcnfinf}. Accordingly, we break the proof down into 3 steps.
\noindent\textbf{Step 1:} \ul{Derive $\mrr{p}$}. Define $B$ to be the matrix
\begin{eqnarray*}
B&=&
\left(
\begin{matrix}
p & 1-p\\
1-p & p
\end{matrix}
\right)
\end{eqnarray*}
Recall that the Kronecker product $C\otimes D$ of an $m\times n$ matrix $C$ and $m^\prime\times n^\prime$ matrix $D$ is the block matrix
$\left(
\begin{smallmatrix}
c_{11}D & \dots & c_{1n}D\\
\vdots & \ddots & \vdots\\
c_{m1}D & \dots &c_{mn}D
\end{smallmatrix}
\right)$
of dimension $mm^\prime\times nn^\prime$. An easy induction shows that the matrix representation $\mrr{p}$ is equal to the $k$-fold Kronecker product of $B$ with itself:
$$\mrr{p}=\bigotimes\limits_{i=1}^k B$$
The entry in row $i$ and column $j$ of $\mrr{p}$ is equal to $P[\rr{p}(D_j)=D_i]$ and a direct computation shows that this is equal to
$$p^{k-\hamming(D_i,D_j)}(1-p)^{\hamming(D_i,D_j)}$$
(each of the $\hamming(D_i,D_j)$ differing bits is flipped, with probability $1-p$ each, and each of the remaining $k-\hamming(D_i,D_j)$ bits is retained, with probability $p$ each).
\noindent\textbf{Step 2:} \ul{Derive $(\mrr{p})^{-1}$}.
It is easy to check that
\begin{eqnarray*}
B^{-1}&=&
\frac{1}{2p-1}\left(
\begin{matrix}
p & -(1-p)\\
-(1-p) & p
\end{matrix}
\right)
\end{eqnarray*}
and therefore
$$(\mrr{p})^{-1}=\bigotimes\limits_{i=1}^k B^{-1}$$
A comparison with $\bigotimes\limits_{i=1}^k B^{-1}$ shows that we can calculate the entry in row $i$ and column $j$ of $(\mrr{p})^{-1}$ by taking the corresponding entry of $\mrr{p}$, replacing every occurrence of $1-p$ with $-(1-p)=p-1$, and dividing by $(2p-1)^k$. Thus the entry in row $i$ and column $j$ of $(\mrr{p})^{-1}$ is equal to
$$\frac{1}{(2p-1)^k}p^{k-\hamming(D_i,D_j)}(p-1)^{\hamming(D_i,D_j)}$$
Therefore the column of $(\mrr{p})^{-1}$ associated with the bit string $s=D_i$ has the form:
\begin{eqnarray*}
\frac{1}{(2p-1)^k}
\begin{bmatrix}
p^{k-\hamming(s,D_1)}(p-1)^{\hamming(s,D_1)}\\
p^{k-\hamming(s,D_2)}(p-1)^{\hamming(s,D_2)}\\
\vdots\\
p^{k-\hamming(s,D_{2^k})}(p-1)^{\hamming(s,D_{2^k})}\\
\end{bmatrix}
\end{eqnarray*}
\noindent\textbf{Step 3:} Now we apply Theorem \ref{thm:invcnfinf} and observe that if $m^{(i)}$ is the $i^\text{th}$ column of $(\mrr{p})^{-1}$, then, since $p>1/2$ and $2p-1>0$, the condition $\vec{x}\cdot m^{(i)}\geq 0$ is equal to the condition
$$\sum\limits_{j=1}^{2^k}p^{k-\hamming(s, D_j)}(p-1)^{\hamming(s, D_j)}x_j \geq 0$$
where $s=D_i$. Finally, writing $\bar{s}$ for the bitwise complement of $s$, we have $\hamming(\bar{s},D_j)=k-\hamming(s,D_j)$; since $\bar{s}$ ranges over all bit strings of length $k$ as $i$ ranges over the columns, this family of conditions is exactly the family
$$\sum\limits_{j=1}^{2^k}p^{\hamming(\bar{s}, D_j)}(p-1)^{k-\hamming(\bar{s}, D_j)}x_j \geq 0$$
for every bit string $\bar{s}$, as stated in the theorem.
\end{proof}
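As an illustration of the first part of the theorem (a worked special case, added here for intuition), take $k=1$, so that $D_1=0$ and $D_2=1$. The two constraints, one for $s=0$ and one for $s=1$, read
$$(p-1)x_1+p x_2\geq 0\qquad\text{and}\qquad p x_1+(p-1)x_2\geq 0$$
For vectors with strictly positive entries these are equivalent to $\frac{1-p}{p}\leq x_1/x_2\leq \frac{p}{1-p}$: the probabilities that an algorithm in $\cnf(\set{\rr{p}})$ assigns to its two possible inputs may differ by at most the factor $\frac{p}{1-p}$.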
\section{Proof of Theorem \lowercase{\ref{thm:rrsemantics}}}\label{app:rrsemantics}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:rrsemantics}).}
Let $p$ be a privacy parameter and let $\inp=\set{D_1,\dots, D_{2^k}}$. Let $\mech$ be an algorithm that has a matrix representation whose every row belongs to the row cone of randomized response. If the attacker believes that the bits in the data are independent and bit $i$ is equal to $1$ with probability $q_i$, then $\mech$ protects the parity of any subset of bits that have prior probability $\geq p$ or $\leq 1-p$. That is, for any subset $J=\set{\ell_1,\dots,\ell_m}$ of bits of the input data such that $q_{\ell_j}\geq p~\vee~q_{\ell_j} \leq 1-p$ for $j=1,\dots, m$, the following holds:
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\itemsep 4pt
\parskip 2pt
\item If $P(\parity(J)=0) \geq P(\parity(J)=1)$ then $P(\parity(J)=0~|~\mech(\data)) \geq P(\parity(J)=1~|~\mech(\data))$
\item If $P(\parity(J)=1) \geq P(\parity(J)=0)$ then $P(\parity(J)=1~|~\mech(\data)) \geq P(\parity(J)=0~|~\mech(\data))$
\end{list}
Furthermore, an algorithm $\mech$ can only provide these guarantees if every row of its matrix representation belongs to $\rowcone(\set{\rr{p}})$.
\end{theorem}
\begin{proof}
We break this proof up into a series of steps. We first reformulate the statements to make them easier to analyze mathematically, then we specialize to the case where $J=\set{1,\dots,k}$ is the set of all bits in the database. We then show that every $\mech$ whose rows (in the corresponding matrix representation) belong to $\rowcone(\set{\rr{p}})$ has these semantic guarantees, and that only those $\mech$ provide these semantic guarantees. Finally we show that those results imply that the theorem holds for all $J$ whose bits have prior probability $\geq p$ or $\leq 1-p$.
\vspace{0.5em}
\noindent\textbf{Step 1:} \ul{Problem reformulation and specialization to the case when $J=\set{1,\dots,k}$}.
Assume $J=\set{1,\dots,k}$ so that for all bits $j$, either $q_j\geq p$ or $q_j\leq 1-p$.
First, Lemma \ref{lem:phalf} allows us to assume that the privacy parameter \ul{$p>1/2$ without any loss of generality}: the case of $p=1/2$ is trivial since the output provides no information about the input so that parity is preserved; in the case of $p<1/2$, the row cone and $\cnf$ are unchanged if we replace $p$ with $1-p$.
Second, we need a few results about parity. An easy induction shows that:
\begin{eqnarray*}
P(\parity(\data)=1)&=&\frac{1-\prod\limits_{j=1}^k (1-2q_j)}{2}\\
P(\parity(\data)=0)&=&\frac{1+\prod\limits_{j=1}^k(1-2q_j)}{2}
\end{eqnarray*}
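As a quick sanity check of these formulas (an illustrative verification, not part of the original argument), take $k=2$. A direct case analysis gives
$$P(\parity(\data)=1)=q_1(1-q_2)+(1-q_1)q_2=q_1+q_2-2q_1q_2=\frac{1-(1-2q_1)(1-2q_2)}{2}$$
in agreement with the general expression.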
In particular, if all of the $q_j\neq 1/2$ then $P(\parity(\data)=1)\neq P(\parity(\data)=0)$, so that one parity has higher prior probability than the other.
When $J$ is the set of all $k$ bits, we have $q_j\neq 1/2$ for all $j$, and so the parities cannot be equally likely \textit{a priori}. The statement about protection of parity can then be rephrased as: $P(\parity(\data)=0) - P(\parity(\data)=1)$ and $P(\parity(\data)=0~|~\mech(\data)) - P(\parity(\data)=1~|~\mech(\data))$ have the same sign, or the posterior probabilities of the two parities are equal. Equivalently,
\begin{eqnarray}
0&\leq&\Big(P[\parity(\data)=0] - P[\parity(\data)=1]\Big) \nonumber \\
&& \times \Big(P[\parity(\data)=0~|~\mech(\data)] - P[\parity(\data)=1~|~\mech(\data)]\Big)\label{eqn:prodparity}
\end{eqnarray}
Now, it is easy to see that
\begin{eqnarray}
\lefteqn{P(\parity(\data)=0) - P(\parity(\data)=1)}\nonumber\\
&=& \left[\bigotimes\limits_{j=1}^k (-q_{j},~ 1-q_{j})\right]\cdot \left[\bigotimes\limits_{j=1}^k (1, 1)\right]\nonumber\\
&=& \prod\limits_{j=1}^k\Big[(-q_j,~ 1-q_{j})\cdot (1,1)\Big]\label{eqn:priorsub}
\end{eqnarray}
and
\begin{eqnarray}
\lefteqn{\hspace{-2cm}P[\parity(\data)=0~|~\mech(\data)] - P[\parity(\data)=1~|~\mech(\data)]}\nonumber\\
&&=\alpha \left[\bigotimes\limits_{j=1}^k (-q_{j}, 1-q_{j})\right] \cdot\vec{x}\label{eqn:postsub}
\end{eqnarray}
where $\alpha$ is a positive normalizing constant and $\vec{x}$ is the row of the matrix representation of $\mech$ corresponding to the observed output $\mech(\data)$. So, by Equations \ref{eqn:prodparity}, \ref{eqn:priorsub}, and \ref{eqn:postsub},
the statement about protecting parity is equivalent to
\begin{eqnarray}
\forall \vec{x}\in\rowcone(\set{\rr{p}}) ~:~ 0 \leq \left(\prod\limits_{j=1}^k\Big[(-q_{j}, ~1-q_{j})\cdot (1,1)\Big]\right) *\left(\left[\bigotimes\limits_{j=1}^k (-q_{j}, ~1-q_{j})\right] \cdot\vec{x}\right)\label{eqn:toprove}
\end{eqnarray}
\vspace{0.5em}
\noindent\textbf{Step 2:} \ul{Show that if for all $j$, $q_j\geq p\vee q_j\leq 1-p$ then the constraints in Equation }\ref{eqn:toprove}\ul{ hold (i.e. the most likely parity \textit{a priori} is the most likely parity \textit{a posteriori})}.
It follows from
Corollary \ref{cor:one}
that every $\mech\in\cnf(\set{\rr{p}})$ has the form $\randalg\circ\rr{p}$ and so, by Theorem \ref{thm:invcnfinf}, $\vec{x}$ is a row from the matrix representation of an $\mech\in\cnf(\set{\rr{p}})$ if and only if $\vec{x}\in\rowcone(\set{\rr{p}})$. This means that every such $\vec{x}$ is a nonnegative
linear combination of rows of the randomized response algorithm $\rr{p}$. Thus it suffices to show that
\begin{eqnarray}
0&\leq& \left(\prod\limits_{j=1}^k\Big[(-q_{j}, ~1-q_{j})\cdot (1,1)\Big]\right) * \left(\left[\bigotimes\limits_{j=1}^k (-q_{j}, ~1-q_{j})\right] \cdot\vec{m}\right)\label{eqn:toprove2}
\end{eqnarray}
for each vector $\vec{m}$ in $\mrr{p}$ (the matrix representation of $\rr{p}$).
It is easy to check that
\begin{eqnarray*}
\mrr{p}=\bigotimes\limits_{i=1}^k
\begin{pmatrix}
p & 1-p\\
1-p & p
\end{pmatrix}
\end{eqnarray*}
and so every vector $\vec{m}$ that is a row of $\mrr{p}$ has the form $\bigotimes\limits_{j=1}^k v_j$ where $v_j=(p,~1-p)$ or $(1-p, ~p)$. Thus the right-hand side of Equation \ref{eqn:toprove2} has the form:
\begin{eqnarray}
\prod\limits_{j=1}^k\Big[\Big((-q_{j}, ~1-q_{j})\cdot (1,1)\Big)*\Big((-q_j,~1-q_j)\cdot v_j\Big)\Big]\label{eqn:toprove3}
\end{eqnarray}
where $v_j=(p,~1-p)$ or $(1-p, ~p)$. Each term in this product is either
$$(1-2q_j)*[(1-p)(1-q_j)-q_jp]=(1-2q_j)[1-p-q_j]$$
or
$$(1-2q_j)*[p(1-q_j)-q_j(1-p)]=(1-2q_j)[p-q_j]$$
Recalling that we had assumed $p>1/2$ without any loss of generality, both of these terms are nonnegative if $q_j\geq p > 1/2$ and they are also nonnegative when $q_j\leq 1-p < 1/2$. Thus the product in Equation \ref{eqn:toprove3} is nonnegative, from which it follows that the conditions in Equations \ref{eqn:toprove2} and \ref{eqn:toprove} are satisfied, which implies Equation \ref{eqn:prodparity} is satisfied; this proves half of the theorem when restricted to the special case of $J=\set{1,\dots,k}$.
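For a concrete numerical check of the two cases (illustrative only), take $p=3/4$ and $q_j=4/5\geq p$. Then
$$(1-2q_j)[1-p-q_j]=\left(-\tfrac{3}{5}\right)\left(-\tfrac{11}{20}\right)=\tfrac{33}{100}\geq 0,\qquad (1-2q_j)[p-q_j]=\left(-\tfrac{3}{5}\right)\left(-\tfrac{1}{20}\right)=\tfrac{3}{100}\geq 0$$
as claimed.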
\vspace{0.5em}
\noindent\textbf{Step 3:} \ul{Show that if $\mech$ is a mechanism that protects parity whenever $q_j\geq p~\vee~ q_j\leq 1-p$ for $j=1,\dots,k$ then every row $\vec{x}$ in its matrix representation belongs to $\rowcone(\set{\rr{p}})$}.
We actually prove a more general statement: if $\mech$ is a mechanism that protects parity whenever $q_j= p~\vee~ q_j= 1-p$ for $j=1,\dots,k$ then every row $\vec{x}$ in its matrix representation belongs to $\rowcone(\set{\rr{p}})$.
Recalling the argument leading up to Equation \ref{eqn:toprove} in Step 2 (where we reformulated the problem into a statement that is more amenable to mathematical manipulation), we need to show that if
\begin{eqnarray}
0\leq \left(\prod\limits_{j=1}^k\Big[(-q_{j}, ~1-q_{j})\cdot (1,1)\Big]\right) *\left(\left[\bigotimes\limits_{j=1}^k (-q_{j}, ~1-q_{j})\right] \cdot\vec{x}\right)\label{eqn:step3toprove}
\end{eqnarray}
whenever $q_j=p$ or $q_j=1-p$ then $\vec{x}\in\rowcone(\set{\rr{p}})$.
Define the function:
\begin{eqnarray*}
\sign(\alpha)=\begin{cases}
-1 & \text{ if } \alpha < 0\\
0 & \text{ if } \alpha = 0\\
1 & \text{ if } \alpha > 0\\
\end{cases}
\end{eqnarray*}
Simplifying Equation \ref{eqn:step3toprove} (by computing the dot product in the first term, looking just at the sign of that dot product, and then combining both terms), our goal is to show that if
\begin{eqnarray}
0&\leq&
\left(\left[\bigotimes\limits_{j=1}^k (-q_{j}, ~1-q_{j})*\sign(1-2q_j)\right] \cdot\vec{x}\right)\label{eqn:step3toprove2}
\end{eqnarray}
whenever $q_j=p$ or $q_j=1-p$ then $\vec{x}\in\rowcone(\set{\rr{p}})$.
Now, when $q_j=p$ (and recalling that we have assumed $p>1/2$ with no loss of generality in Step 1), then $$(-q_{j}, ~1-q_{j})*\sign(1-2q_j)=(p,~ -(1-p))$$ and when $q_j=1-p$ then $$(-q_j, ~1-q_j)*\sign(1-2q_j)=(-(1-p), ~p)$$
Thus asserting that Equation \ref{eqn:step3toprove2} holds whenever $q_j$ equals $p$ or $1-p$ is the same as asserting that the vector:
\begin{eqnarray}
\vec{x} \bigotimes\limits_{i=1}^k \frac{1}{2p-1}
\begin{pmatrix}
p & -(1-p)\\
-(1-p) & p
\end{pmatrix}\label{eqn:step3toprove3}
\end{eqnarray}
has no negative components. However, the randomized response algorithm $\rr{p}$ has a matrix representation $\mrr{p}$ whose inverse (which we also derived in the proof of Theorem \ref{thm:rrcnf}) is
\begin{eqnarray*}
(\mrr{p})^{-1}=\bigotimes\limits_{i=1}^k \frac{1}{2p-1}
\begin{pmatrix}
p & -(1-p)\\
-(1-p) & p
\end{pmatrix}
\end{eqnarray*}
Thus the condition that the vector in Equation \ref{eqn:step3toprove3} has no negative entries means that $\vec{x}(\mrr{p})^{-1}$ has no negative entries and so the dot product of $\vec{x}$ with any column of $(\mrr{p})^{-1}$ is nonnegative. By Theorem \ref{thm:rrcnf}, this means that $\vec{x}\in\rowcone(\set{\rr{p}})$.
This concludes the proof for the entire theorem specialized to the case where $J=\set{1,\dots,k}$. In the next step, we generalize this to arbitrary $J$.
\vspace{0.5em}
\noindent\textbf{Step 4:}
Now let $J=\set{\ell_1,\dots,\ell_m}$. First consider an ``extreme'' attacker whose prior beliefs $q_j$ are such that $q_j=0$ or $q_j=1$ whenever $j\notin J$. It follows from the previous steps that such an attacker would not change his mind about the parity of the whole dataset. Since the attacker is completely sure about the values of bits outside of $J$, this means that after seeing a sanitized output $\omega$, the attacker will not change his mind about the parity of the bits in $J$.
Now, note that showing
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\itemsep 4pt
\parskip 2pt
\item If $P(\parity(J)=0) \geq P(\parity(J)=1)$ then\\ $P(\parity(J)=0~|~\mech(\data)=\omega) \geq P(\parity(J)=1~|~\mech(\data)=\omega)$
\item If $P(\parity(J)=1) \geq P(\parity(J)=0)$ then\\ $P(\parity(J)=1~|~\mech(\data)=\omega) \geq P(\parity(J)=0~|~\mech(\data)=\omega)$
\end{list}
is equivalent to showing
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\itemsep 4pt
\parskip 2pt
\item If $P(\parity(J)=0) \geq P(\parity(J)=1)$ then\\ $P(\parity(J)=0~\wedge~\mech(\data)=\omega)\geq P(\parity(J)=1~\wedge~\mech(\data)=\omega)$
\item If $P(\parity(J)=1) \geq P(\parity(J)=0)$ then\\ $P(\parity(J)=1~\wedge~\mech(\data)=\omega) \geq P(\parity(J)=0~\wedge~\mech(\data)=\omega)$
\end{list}
since we just multiply both sides of the inequalities by the positive number $P(\mech(\data)=\omega)$.
Now consider an attacker Bob such that $q_j\geq p$ or $q_j\leq 1-p$ whenever $j\in J$ and there are no restrictions on $q_j$ for $j\notin J$. There is a corresponding set of $2^{k-|J|}$ ``extreme'' attackers for whom $P($bit $j=1)=q_j$ for $j\in J$ and $P($bit $j=1)\in\set{0,1}$ otherwise.
Bob's vector of prior probabilities over possible datasets
$$(P[\data=D_1], P[\data=D_2], \dots)$$
is a convex combination of the corresponding vectors for the extreme attackers, and thus Bob's joint probabilities:
$$P(\parity(J)=1~\wedge~\mech(\data)=\omega)$$
and
$$P(\parity(J)=0~\wedge~\mech(\data)=\omega)$$
are convex combinations of the corresponding joint probabilities for the extreme attackers, and the coefficients of this convex combination are the same.
Note that Bob and all of the extreme attackers have the same prior on the parity of $J$. However, we have shown that the extreme attackers will not change their minds about the parity of $J$. Therefore if they believe $P(\parity(J)=1~\wedge~\mech(\data)=\omega)$ is larger than the corresponding probability for even parity, then Bob will have the same belief. If the extreme attackers believe, after seeing the sanitized output $\omega$, that even parity is more likely, then so will Bob. Thus Bob will not change his belief about the parity of the input dataset.
\end{proof}
\section{Proof of Lemma \lowercase{\ref{lem:frappapprox}}}\label{app:frappapprox}
\begin{lemma}\emph{(Restatement and proof of Lemma \ref{lem:frappapprox}).}
Let $p=\frac{\gamma}{\gamma+1}$. Then $\tilde{K}_p$ is an approximation cone for $\frapp$.
\end{lemma}
\begin{proof}
Clearly $\tilde{K}_p$ is a closed convex cone. Thus we just need to prove that $\rowcone(\frapp)\subseteq \tilde{K}_p$.
Choose any $\mech_Q\in\frapp$, with matrix representation $M_Q$.
Clearly
$$M_Q=\bigotimes\limits_{i=1}^k Q$$
and $Q$ satisfies the constraints
$$\forall i,j\in\set{1,\dots, N}~:~ Q (p e_i - (1-p) e_j)\succeq \vec{0}$$
where $e_{i}$ is the $i^\text{th}$ column vector of the $N\times N$ identity matrix and $\vec{a}\succeq\vec{b}$ means that $\vec{a}-\vec{b}$ has no negative components. It follows from the properties of the Kronecker product that
\begin{eqnarray}
\forall i_1,\dots, i_k,j_1,\dots,j_k\in\set{1,\dots,N}~:~ M_Q\left( \bigotimes\limits_{\ell=1}^k (p e_{i_\ell} - (1-p)e_{j_\ell})\right)\succeq \vec{0} \label{eqn:frappconstraints}
\end{eqnarray}
Thus each row of the matrix representation of $\mech_Q$ satisfies a set of linear constraints.
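To see what these constraints say concretely (a restatement for intuition, not a new result): component $r$ of the constraint for the pair $(i,j)$ reads $pQ_{ri}-(1-p)Q_{rj}\geq 0$, and since $p=\frac{\gamma}{\gamma+1}$ this is $Q_{ri}\geq\frac{1}{\gamma}Q_{rj}$. Ranging over all pairs $(i,j)$, the constraints therefore say that within each row of $Q$ any two positive entries differ by at most a factor of $\gamma$:
$$\frac{1}{\gamma}\leq \frac{Q_{ri}}{Q_{rj}}\leq\gamma$$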
From Theorem \ref{thm:closure}, we see that $\cnf(\frapp)$ can be obtained by first creating all algorithms of the form $\randalg\circ\mech_Q$ (for $\mech_Q\in\frapp$) and then by taking convex combinations of those results (i.e. creating algorithms that randomly choose to run one of the algorithms generated in the previous step). However, the matrix representation of $\randalg\circ\mech_Q$ is equal to $AM_Q$ (where $A$ is the matrix representation of $\randalg$) and every row in $AM_Q$ is a nonnegative linear combination of rows in $M_Q$. Thus every row of the matrix representation of $\randalg\circ\mech_Q$ also satisfies the constraints defining $\tilde{K}_p$. Finally, creating an algorithm $\randalg^*$ that randomly chooses to run one algorithm in $\set{\randalg_1\circ\mech_{Q_1},\dots,\randalg_h\circ\mech_{Q_h}}$ means that each row in the matrix representation of $\randalg^*$ is a convex combination of the rows appearing in the matrix representations of the $\randalg_i\circ\mech_{Q_i}$, and so those rows also satisfy the constraints that define $\tilde{K}_p$. Therefore $\rowcone(\frapp)\subseteq \tilde{K}_p$.
\end{proof}
\section{Proof of Theorem \lowercase{\ref{thm:skellam}}}\label{app:skellam}
\begin{theorem}(\emph{Restatement and proof of Theorem \ref{thm:skellam}})
Let the input domain $\inp=\set{\dots, -2, -1, 0,\linebreak[0] 1, 2, \dots}$ be the set of integers. Let $\mech_{\text{skell($\lambda_1,\lambda_2$)}}$ be the algorithm that adds to its input a random integer $k$ with the Skellam$(\lambda_1,\lambda_2)$ distribution and let $f_Z(\cdot; \lambda_1,\lambda_2)$ be the probability mass function of the Skellam$(\lambda_1,\lambda_2)$ distribution. A bounded row vector $\vec{x}=(\dots, x_{-2}, x_{-1}, x_0, x_1, x_2, \dots)$ belongs to $\rowcone(\set{\mech_{\text{skell($\lambda_1,\lambda_2$)}}})$ if for all integers $k$,
\begin{eqnarray*}
\sum\limits_{j=-\infty}^\infty (-1)^j f_Z(j;\lambda_1,\lambda_2)x_{k+j}\geq 0
\end{eqnarray*}
\end{theorem}
\begin{proof}
For integers $k$, define the functions
\begin{eqnarray*}
f_X(k)&=&
\begin{cases}
e^{-\lambda_1} \frac{\lambda_1^k}{k!}&\text{ if } k\geq 0\\
0 &\text{ if } k<0
\end{cases}\\
f_Y(k)&=&
\begin{cases}
e^{-\lambda_2} \frac{\lambda_2^{-k}}{(-k)!}&\text{ if } k\leq 0\\
0 &\text{ if } k>0
\end{cases}
\end{eqnarray*}
Note that $f_X$ is the probability mass function for a Poisson$(\lambda_1)$ random variable $X$ while \ul{$f_Y$ is the probability mass function of the \textbf{negative} of a Poisson$(\lambda_2)$ random variable $Y$}.
With this notation, the Skellam distribution is the distribution of the sum $X+Y$. Therefore its probability mass function satisfies the following relation
$$f_Z(k; \lambda_1,\lambda_2)=\sum\limits_{j=-\infty}^\infty f_X(k-j)f_Y(j)=(f_X\star f_Y)(k)$$
where $f_X\star f_Y$ is the convolution operation.
Now for each integer $k$ define
\begin{eqnarray*}
g_X(k)&=& (-1)^k f_X(k)\\
g_Y(k)&=&(-1)^k f_Y(k)\\
g_Z(k)&=&(g_X\star g_Y)(k)\\
&=&\sum\limits_{j=-\infty}^\infty g_X(k-j)g_Y(j)\\
&=&\sum\limits_{j=-\infty}^\infty (-1)^{k-j}f_X(k-j)(-1)^jf_Y(j)\\
&=&(-1)^k\sum\limits_{j=-\infty}^\infty f_X(k-j)f_Y(j)\\
&=&(-1)^k f_Z(k;\lambda_1,\lambda_2)
\end{eqnarray*}
We will need the following calculations:
\begin{eqnarray*}
(g_X\star f_X)(k)&=&\sum\limits_{j=-\infty}^\infty g_X(k-j)f_X(j)\\
&=&\sum\limits_{j=-\infty}^\infty (-1)^{k-j}f_X(k-j)f_X(j)\\
&=&\sum\limits_{j=0}^k (-1)^{k-j}f_X(k-j)f_X(j)\\
&&\text{(since $f_X$ is $0$ for negative integers; note also that the summand is $0$ if $j<0$ or $j>k$)}\\
&=&\ind{k\geq 0}e^{-2\lambda_1}\sum\limits_{j=0}^k \frac{(-\lambda_1)^{k-j}}{(k-j)!}\frac{\lambda_1^j}{j!}\\
&=&\ind{k\geq 0}\frac{e^{-2\lambda_1}}{k!}\sum\limits_{j=0}^k {k\choose j}(-\lambda_1)^{k-j}\lambda_1^j\\
&=&\ind{k\geq 0}\frac{e^{-2\lambda_1}}{k!}(\lambda_1-\lambda_1)^k\\
&=&\begin{cases}e^{-2\lambda_1}\text{ if }k=0\\0 \text{ otherwise}\end{cases}
\end{eqnarray*}
Similarly
\begin{eqnarray*}
(g_Y\star f_Y)(k)&=&\sum\limits_{j=-\infty}^\infty g_Y(k-j)f_Y(j)\\
&=&\sum\limits_{j=-\infty}^\infty (-1)^{k-j}f_Y(k-j)f_Y(j)\\
&=&\sum\limits_{j=k}^0 (-1)^{k-j}f_Y(k-j)f_Y(j)\\
&&\text{(since $f_Y$ is $0$ for positive integers; note also that the summand is $0$ if $j>0$ or $j<k$)}\\
&=&\ind{k\leq 0}e^{-2\lambda_2}\sum\limits_{j=k}^0 \frac{(-\lambda_2)^{-(k-j)}}{(-(k-j))!}\frac{\lambda_2^{-j}}{(-j)!}\\
&=&\ind{k\leq 0}e^{-2\lambda_2}\sum\limits_{j=0}^{-k} \frac{(-\lambda_2)^{(-k)-j}}{[(-k)-j]!}\frac{\lambda_2^{j}}{j!}\\
&&\text{(replacing the dummy index $j$ with $-j$)}\\
&=&\ind{k\leq 0}\frac{e^{-2\lambda_2}}{(-k)!}\sum\limits_{j=0}^{-k} {(-k)\choose j}(-\lambda_2)^{(-k)-j}\lambda_2^j\\
&=&\ind{k\leq 0}\frac{e^{-2\lambda_2}}{(-k)!}(\lambda_2-\lambda_2)^{(-k)}\\
&=&\begin{cases}e^{-2\lambda_2}\text{ if }k=0\\0 \text{ otherwise}\end{cases}
\end{eqnarray*}
From these calculations we can conclude that
\begin{eqnarray*}
(g_Z\star f_Z(\cdot;\lambda_1,\lambda_2))(k)&=&((g_X\star g_Y)\star(f_X\star f_Y))(k)\\
&=&((g_X\star f_X)\star(g_Y\star f_Y))(k)\\
&&\text{(since convolutions are commutative and associative)}\\
&=&\begin{cases}
e^{-2(\lambda_1+\lambda_2)} & \text{ if }k=0\\
0 & \text{ otherwise}
\end{cases}
\end{eqnarray*}
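As an independent check of these convolution identities (not needed for the proof), one can argue with formal (Laurent) generating functions $\hat{f}(z)=\sum_k f(k)z^k$. We have $\hat{f}_X(z)=e^{\lambda_1(z-1)}$ and $\hat{g}_X(z)=\hat{f}_X(-z)=e^{-\lambda_1(z+1)}$, so
$$\hat{g}_X(z)\hat{f}_X(z)=e^{-2\lambda_1}$$
a constant, which is exactly the generating function of the sequence that equals $e^{-2\lambda_1}$ at $k=0$ and $0$ elsewhere. Similarly $\hat{f}_Y(z)=e^{\lambda_2(z^{-1}-1)}$ and $\hat{g}_Y(z)=e^{-\lambda_2(z^{-1}+1)}$, so $\hat{g}_Y(z)\hat{f}_Y(z)=e^{-2\lambda_2}$.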
These convolution calculations show that the matrices $M^{(f)}$ and $M^{(g)}$, whose rows and columns are indexed by the integers and which are defined below, are inverses of each other.
\begin{eqnarray*}
M^{(f)}_{(i,j)}&\equiv&(i,j) \text{ entry of }M^{(f)} \\
&=&f_Z(i-j;\lambda_1,\lambda_2)\\
M^{(g)}_{(i,j)}&\equiv&(i,j)\text{ entry of }M^{(g)}\\
&=& e^{2(\lambda_1+\lambda_2)}g_Z(i-j)
\end{eqnarray*}
To see that they are inverses, note that the dot product between row $r$ of $M^{(f)}$ and column $c$ of $M^{(g)}$ is
\begin{eqnarray*}
\sum\limits_{j=-\infty}^\infty M^{(f)}_{(r,j)}M^{(g)}_{(j,c)}&=&\sum\limits_{j=-\infty}^\infty f_Z(r-j;\lambda_1,\lambda_2)e^{2(\lambda_1+\lambda_2)}g_Z(j-c)\\
&=&\sum\limits_{j=-\infty}^\infty f_Z(r-c-j;\lambda_1,\lambda_2)e^{2(\lambda_1+\lambda_2)}g_Z(j)\\
&=&e^{2(\lambda_1+\lambda_2)}(f_Z(\cdot;\lambda_1,\lambda_2)\star g_Z)(r-c)\\
&=&e^{2(\lambda_1+\lambda_2)}(g_Z\star f_Z(\cdot;\lambda_1,\lambda_2))(r-c)\\
&=&\begin{cases}
1 & \text{ if }r=c\\
0 & \text{ otherwise}
\end{cases}
\end{eqnarray*}
Now, $M^{(f)}$ is clearly the matrix representation of $\mech_{\text{skell($\lambda_1,\lambda_2$)}}$, so we can again apply Theorem \ref{thm:invcnfinf}. Since $g_Z(k)=(-1)^kf_Z(k;\lambda_1,\lambda_2)$, column $c$ of $M^{(g)}=(M^{(f)})^{-1}$ is, up to the positive factor $e^{2(\lambda_1+\lambda_2)}$, the column vector whose entry $j$ is $(-1)^{j-c}f_Z(j-c;\lambda_1,\lambda_2)$ (and a positive scaling does not affect the sign conditions of the theorem).
Note that the columns of $M^{(g)}$ have bounded $L_1$ norm since the absolute value of the entries in any column are proportional to the probabilities given by the Skellam distribution.
The proof is completed by the observation that for any $\vec{x}=(\dots,x_{-2}, x_{-1}, x_0, x_1, x_2, \dots)$,
\begin{eqnarray*}
\sum\limits_{j=-\infty}^\infty (-1)^{j-c}f_Z(j-c;\lambda_1,\lambda_2)x_j = \sum\limits_{j=-\infty}^\infty (-1)^{j}f_Z(j;\lambda_1,\lambda_2)x_{j+c}
\end{eqnarray*}
\end{proof}
\eat{
\section{Proof of Lemma \lowercase{\ref{lem:commute}}}\label{app:commute}
\begin{lemma}\emph{(Restatement and proof of Lemma \ref{lem:commute}).}
Let $\mech_1$ and $\mech_2$ be two algorithms that commute ($\mech_1\circ\mech_2=\mech_2\circ\mech_1$), let $\mech_1$ have a matrix representation that is invertible and let $\mech_2$ be idempotent. Then
\begin{list}{\labelitemi}{\leftmargin=0.5em}
\itemsep 0pt
\parskip 2pt
\item $\cnf(\set{\mech_1\circ\mech_2})=\cnf(\set{\mech_1})\cap \cnf(\set{\mech_2})$
\item $\rowcone(\set{\mech_1\circ\mech_2})=\rowcone(\set{\mech_1})\cap \rowcone(\set{\mech_2})$
\end{list}
\end{lemma}
\begin{proof}
Let $\inp$ be the input domain. Since $\mech_1$ and $\mech_2$ commute, it is clear that $\range(\mech_1)\subseteq \inp$ and $\range(\mech_2)\subseteq \inp$ (i.e. the ranges must be subsets of the domain of the algorithms). Let $M_1$ and $M_2$ be the matrix representations of $\mech_1$ and $\mech_2$ respectively. Furthermore, since $\mech_1$ has a matrix representation that is invertible, $\range(\mech_1)=\inp$ and therefore $\range(\mech_2)\subseteq\range(\mech_1)$. We expand the range of $\mech_2$ so that it equals the range of $\mech_1$ (some outputs will just have $0$ probability for all inputs). Then $\mech_1$ and $\mech_2$ have matrix representations $M_1$ and $M_2$ where the columns are indexed by datasets in $\inp$ and the rows are also indexed by datasets in $\inp$ (in the case of $M_2$, some of those rows may contain only $0$ values). This ensures that the corresponding matrix representations $M_1$ and $M_2$ are square matrices. The commutativity of $\mech_1$ and $\mech_2$ imply that $M_1M_2 = M_2M_1$ and the fact that $\mech_2$ is idempotent (i.e. $\mech_2\circ\mech_2=\mech_2)$ implies $M_2M_2=M_2$.
By Corollary \ref{cor:one},
we see that
\begin{itemize}
\item $\cnf(\set{\mech_1})$ is the set of algorithms of the form $\randalg\circ\mech_1$ where $\randalg$ is any postprocessing algorithm (whose domain contains the range of $\mech_1$).
\item $\cnf(\set{\mech_2})$ is the set of algorithms of the form $\randalg\circ\mech_2$
\item $\cnf(\set{\mech_1\circ\mech_2})$ is the set of algorithms of the form $\randalg \circ\mech_1\circ\mech_2$. By commutativity of $\mech_1$ and $\mech_2$ this is also the set of algorithms of the form $\randalg \circ\mech_2\circ\mech_1$.
\end{itemize}
Now, let $\mech_3$ be an algorithm in $\cnf(\set{\mech_1\circ\mech_2})$. Then $\mech_3=\randalg_1\circ\mech_1\circ\mech_2$ (for some $\randalg_1$) so that $\mech_3\in\cnf(\set{\mech_2})$. By commutativity, $\mech_3=\randalg_2\circ\mech_2\circ\mech_1$ (for some $\randalg_2$) so that $\mech_3\in\cnf(\set{\mech_1})$ and therefore $\mech_3\in \cnf(\set{\mech_2})\cap\cnf(\set{\mech_1})$.
To prove the other direction, choose a $\mech_4\in \cnf(\set{\mech_2})\cap\cnf(\set{\mech_1})$. Then $\mech_4=\randalg_4\circ\mech_2$ for some $\randalg_4$. In terms of the corresponding matrix representations, we have:
\begin{eqnarray}
M_4&=& A_4 M_2\nonumber\\
&=& A_4 M_2 M_1^{-1}M_1\nonumber\\
&&\text{($M_1^{-1}$ exists by hypothesis)}\nonumber\\
&=& A_4 M_1^{-1}M_2M_1\label{eqn:commutealg}\\
&&\text{(since $M_1$ and $M_2$ commute if and only if }\nonumber\\
&&\text{$M_1^{-1}$ and $M_2$ commute)}\nonumber
\end{eqnarray}
Now, note that $A_4 M_1^{-1}M_2$ is the matrix representation of an algorithm because of the following facts: (1) $M_4=A^* M_1$ where $A^*$ is the matrix representation of some algorithm since $\mech_4\in\cnf(\set{\mech_1})$ (by assumption), (2)
$A^*M_1=M_4=(A_4 M_1^{-1}M_2)M_1$ by Equation \ref{eqn:commutealg} and so $A^*=A_4 M_1^{-1}M_2$ because $M_1$ is invertible. Therefore
\begin{eqnarray*}
M_4&=&A_4 M_1^{-1}M_2M_1\\
&=&A_4 M_1^{-1}M_2M_2M_1\\
&&\text{(since $\mech_2$, and therefore $M_2$, is idempotent)}\\
&=&(A_4 M_1^{-1}M_2)M_1M_2\\
&& \text{(by commutativity)}
\end{eqnarray*}
and therefore $\mech_4\in \cnf(\set{\mech_1\circ\mech_2})$ since we had shown that $A_4 M_1^{-1}M_2$ is the matrix representation of an algorithm.
We now prove the corresponding statement for row cones. The results for the consistent normal form immediately imply $\rowcone(\set{\mech_1\circ\mech_2})\subseteq \rowcone(\set{\mech_1})\cap \rowcone(\set{\mech_2})$.
Now consider a vector $\vec{x}\in \rowcone(\set{\mech_1})\cap \rowcone(\set{\mech_2})$. Then there is an algorithm $\mech^\prime_1\in\cnf(\set{\mech_1})$ such that $c_1\vec{x}$ is a row of the corresponding matrix representation $M^\prime_1$ (for some $c_1>0$). Similarly, there is a $\mech^\prime_2\in\cnf(\set{\mech_2})$ such that $c_2\vec{x}$ is a row of the corresponding matrix representation $M^\prime_2$ (for some $c_2>0$). By appropriate postprocessing\footnote{i.e. using an algorithm that maps outputs not associated with a designated row to the same symbol}, we can assume that $\mech_1^\prime$ and $\mech_2^\prime$ have only two possible outputs. Without loss of generality assume $c_1\geq c_2$. In this case, it is easy to construct an algorithm $\randalg$ so that $\randalg\circ\mech_1^\prime=\mech_2^\prime$. Hence $\mech_2^\prime\in \cnf(\set{\mech_2})\cap\cnf(\set{\mech_1})$ and therefore $\mech_2^\prime \in \cnf(\set{\mech_1\circ\mech_2})$. Since $c_2\vec{x}$ is a row of the matrix representation $M_2^\prime$, this means that $\vec{x}\in\rowcone (\set{\mech_1\circ\mech_2})$.
\end{proof}
\section{Proof of Theorem \lowercase{\ref{thm:samplecone}}}\label{app:samplecone}
In this section we prove Theorem \ref{thm:samplecone}. Before doing so, we first need to derive the matrix representation of the sampling algorithm $\mechsamp$ (defined in Definition \ref{def:mechsample}). As discussed in Definition \ref{def:mechsampledomain}, a typical input to $\mechsamp$ consists of a sequence of tuples, one per individual in the population. If the $i^\text{th}$ individual did not provide any information for the survey, then the $i^\text{th}$ tuple value is ``?''. If the $i^\text{th}$ individual did provide information, then the $i^\text{th}$ tuple is the record corresponding to that information.
The sampling algorithm $\mechsamp$ (Definition \ref{def:mechsample}) can be expressed as a composition of two other algorithms: $$\mechsamp=\mechsort\circ\mechdrop$$ where $\mechdrop$ replaces tuples with ``?'' independently and with probability $1-p$, and $\mechsort$ sorts the tuples in its input dataset. Letting $\matsamp$, $\matdrop$, and $\matsort$ be the corresponding matrix representations, we see that
$$\matsamp=\matsort\matdrop$$
The matrix representation $\matsort$ is easy to derive. The rows are indexed by the set of databases of size $W$ (the number of people in the population) and correspond to possible outputs. The columns are indexed by the set of databases of size $W$ and correspond to possible inputs. A column corresponding to an input dataset $D$ contains $0$ entries everywhere except in the row corresponding to the sorted version of $D$ (by convention ``?'' is considered larger than other tuple values). For example, when the domain of tuples $\tdom=\set{a,b}$ and the population size $W=2$ then the matrix representation of $\mechsort$ is:
$$\matsort =
\left(
\bordermatrix{
& \red{aa} & \red{ab} & \red{a?} & \red{ba} & \red{bb} & \red{b?} &\red{?a} & \red{?b} & \red{??}\cr
\blue{aa} & \mathbf{1}& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{ab} & 0 & \mathbf{1} & 0 & \mathbf{1} & 0 & 0 & 0 & 0 & 0 \cr
\blue{a?} & 0 & 0 & \mathbf{1} & 0 & 0 & 0 & \mathbf{1}& 0 & 0 \cr
\blue{ba} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{bb} & 0 & 0 & 0 & 0 & \mathbf{1} & 0 & 0 & 0 & 0 \cr
\blue{b?} & 0 & 0 & 0 & 0 & 0 & \mathbf{1} & 0 & \mathbf{1} & 0 \cr
\blue{?a} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{?b} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{??} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mathbf{1}
}
\right)
$$
\begin{figure*}[!t]
\begin{eqnarray*}
\matdrop &=& \bigotimes\limits_{i=1}^W B_p
=
\bordermatrix{
& \red{aa} & \red{ab} & \red{a?} & \red{ba} & \red{bb} & \red{b?} &\red{?a} & \red{?b} & \red{??}\cr
\blue{aa} & p^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{ab} & 0 & p^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{a?} & p(1-p) & p(1-p) & p & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{ba} & 0 & 0 & 0 & p^2 & 0 & 0 & 0 & 0 & 0 \cr
\blue{bb} & 0 & 0 & 0 & 0 & p^2 & 0 & 0 & 0 & 0 \cr
\blue{b?} & 0 & 0 & 0 & p(1-p) & p(1-p) & p & 0 & 0 & 0 \cr
\blue{?a} & p(1-p) & 0 & 0 & p(1-p) & 0 & 0 & p & 0 & 0 \cr
\blue{?b} & 0 & p(1-p) & 0 & 0 & p(1-p) & 0 & 0 & p & 0 \cr
\blue{??} & (1-p)^2 & (1-p)^2 & 1-p & (1-p)^2 & (1-p)^2 & 1-p & 1-p & 1-p & 1
}
\end{eqnarray*}
\caption{Matrix representation of $\mechdrop$ with $\tdom=\set{a,b}$ and $W=2$}\label{fig:mechdrop}
\end{figure*}
The matrix representation of $\mechdrop$ has the form
\begin{eqnarray}
\matdrop&=&\bigotimes\limits_{i=1}^W B_p \label{eqn:matdropdefine}\\
(i,j)\text{-entry of }B_p &=&
\begin{cases}
1-p&\text{ if }i=N+1, j<N+1\\
p & \text{ if }i=j, i\neq N+1\\
1 &\text{ if }i=j=N+1\\
0 &\text{ otherwise}
\end{cases}\label{eqn:matdropb}
\end{eqnarray}
where $\bigotimes$ is the Kronecker product, $B_p$ is an $(N+1)\times (N+1)$ matrix (recall $N=|\tdom|$) whose first $N$ diagonal entries are $p$, the first $N$ entries of the last row are $1-p$, the last diagonal entry is $1$ and all other entries are $0$. When the tuple domain is $\tdom=\set{a,b}$ (so that $N=2$), $B_p$ is:
$$B_p=
\begin{pmatrix}
p & 0 & 0\\
0 & p & 0\\
1-p & 1-p & 1
\end{pmatrix}
$$
and the matrix representation of $\mechdrop$ is shown in Figure \ref{fig:mechdrop}.
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:samplecone}).}
Let $\mech$ be an algorithm and $\omega\in\range(\mech)$. The vector
$$(P[\mech(D_1)=\omega],~\dots,~P[\mech(D_n)=\omega])$$
belongs to $\rowcone(\set{\mechsamp})$ if and only if:
\begin{list}{\labelitemi}{\leftmargin=1em}
\itemsep 1pt
\parskip 4pt
\item $P(\mech(D_i)=\omega)=P(\mech(D_j)=\omega)$ whenever $D_i$ and $D_j$ are permutations of each other, and
\item
$ \forall i:~\sum\limits_{D_j\subseteq D_i} P(\mech(D_j)=\omega)\left(-(1-p)\right)^{\text{blank}(D_i,D_j)}\geq 0$
\end{list}
where the notation $D_j\subseteq D_i$ means that $D_i$ can be converted to $D_j$ by replacing tuples with ``?'', nonblank$(D_i)$ is the number of tuples in $D_i$ not equal to ``?'', and blank$(D_i,D_j)$ is the number of tuples in $D_j$ equal to ``?'' minus the number of tuples in $D_i$ equal to ``?''.
\end{theorem}
\begin{proof}
Our strategy is to apply Lemma \ref{lem:commute} to the algorithm $\mechsamp=\mechsort\circ\mechdrop$.
\vspace{0.5cm}\noindent\textbf{Step 1:} computing the row cone $\rowcone(\set{\mechsort})$. Consider a partition on the input datasets such that two datasets $D_i$ and $D_j$ are in the same partition if and only if sorting their tuples gives the same result. Note that this is equivalent to saying that $D_i$ is a permutation of $D_j$. By
Corollary \ref{cor:one}, $\cnf(\set{\mechsort})$ is the set of algorithms of the form $\randalg\circ\mechsort$ and therefore the row cone consists of all positive linear combinations of rows of the corresponding matrix representation $\matsort$. It is easy to see that a vector with nonnegative entries is a positive linear combination of rows of $\matsort$ if and only if for all pairs of datasets $D_i$ and $D_j$ that are in the same partition, the components corresponding to $D_i$ and $D_j$ are the same. Therefore, given an algorithm $\mech$ and some output $\omega\in\range(\mech)$, the vector
$$(P[\mech(D_1)=\omega],~\dots,~P[\mech(D_n)=\omega])$$
belongs to $\rowcone(\set{\mechsort})$ if and only if $P(\mech(D_i)=\omega)=P(\mech(D_j)=\omega)$ whenever $D_i$ and $D_j$ are permutations of each other.
\vspace{0.5cm}\noindent\textbf{Step 2:} computing the row cone $\rowcone(\set{\mechdrop})$. The corresponding matrix representation is $\matdrop = \bigotimes\limits_{i=1}^W B_p$ where $B_p$ is the $(N+1)\times (N+1)$ matrix such that:
\begin{eqnarray*}
(i,j)\text{-entry of }B_p &=&
\begin{cases}
p & \text{ if }i=j, i\neq N+1\\
1-p&\text{ if }i=N+1, j<N+1\\
1 &\text{ if }i=j=N+1\\
0 &\text{ otherwise}
\end{cases}
\end{eqnarray*}
The inverse $(B_p)^{-1}$ is easy to derive and equals:
\begin{eqnarray*}
(i,j)\text{-entry of }(B_p)^{-1} &=&
\begin{cases}
\frac{1}{p} & \text{ if }i=j, i\neq N+1\\
-\frac{1-p}{p}&\text{ if }i=N+1, j<N+1\\
1 &\text{ if }i=j=N+1\\
0 &\text{ otherwise}
\end{cases}
\end{eqnarray*}
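For instance, with $N=2$ this can be verified directly (an illustrative check, added for concreteness):
$$\begin{pmatrix} p & 0 & 0\\ 0 & p & 0\\ 1-p & 1-p & 1 \end{pmatrix}\begin{pmatrix} \frac{1}{p} & 0 & 0\\ 0 & \frac{1}{p} & 0\\ -\frac{1-p}{p} & -\frac{1-p}{p} & 1 \end{pmatrix}=I$$
where, e.g., the $(3,1)$ entry of the product is $(1-p)\frac{1}{p}+1\cdot\left(-\frac{1-p}{p}\right)=0$.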
and thus the inverse of $\matdrop$ exists and equals $(\matdrop)^{-1} = \bigotimes\limits_{i=1}^W (B_p)^{-1}$. Now, according to Theorem \ref{thm:invcnfinf}, a vector $\vec{x}\in\rowcone(\set{\mechdrop})$ if and only if $\vec{x}\cdot m^{(i)}\geq 0$ for every column $m^{(i)}$ of $(\matdrop)^{-1}$. Thus we need to enumerate the columns of $(\matdrop)^{-1} = \bigotimes\limits_{i=1}^W (B_p)^{-1}$. Now, define the functions:
\begin{enumerate}
\item super$(D_i,D_j)$ is $1$ if $D_i$ can be turned into $D_j$ by replacing some tuples with ``?'', and it is $0$ otherwise.
\item blank$(D_i,D_j)$ is the number of tuples in $D_j$ whose value is ``?'' minus the number of tuples in $D_i$ whose value is ``?''.
\item nonblank$(D_i)$ is the number of tuples in $D_i$ whose value is not ``?''
\end{enumerate}
\begin{figure*}[!t]
\begin{eqnarray*}
(\matdrop)^{-1} &=& \bigotimes\limits_{i=1}^2 (B_p)^{-1} = \bigotimes\limits_{i=1}^2 \begin{pmatrix}
\frac{1}{p} & 0 & 0\\
0 & \frac{1}{p} & 0\\
\frac{-(1-p)}{p} & \frac{-(1-p)}{p} & 1
\end{pmatrix}\\
&=&
\bordermatrix{
& \red{aa} & \red{ab} & \red{a?} & \red{ba} & \red{bb} & \red{b?} &\red{?a} & \red{?b} & \red{??}\cr
\blue{aa} & \frac{1}{p^2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{ab} & 0 & \frac{1}{p^2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{a?} & \frac{-(1-p)}{p^2} & \frac{-(1-p)}{p^2} & \frac{1}{p} & 0 & 0 & 0 & 0 & 0 & 0 \cr
\blue{ba} & 0 & 0 & 0 & \frac{1}{p^2} & 0 & 0 & 0 & 0 & 0 \cr
\blue{bb} & 0 & 0 & 0 & 0 & \frac{1}{p^2} & 0 & 0 & 0 & 0 \cr
\blue{b?} & 0 & 0 & 0 & \frac{-(1-p)}{p^2} & \frac{-(1-p)}{p^2} & \frac{1}{p} & 0 & 0 & 0 \cr
\blue{?a} & \frac{-(1-p)}{p^2} & 0 & 0 & \frac{-(1-p)}{p^2} & 0 & 0 & \frac{1}{p} & 0 & 0 \cr
\blue{?b} & 0 & \frac{-(1-p)}{p^2} & 0 & 0 & \frac{-(1-p)}{p^2} & 0 & 0 & \frac{1}{p} & 0 \cr
\blue{??} & \frac{(1-p)^2}{p^2} & \frac{(1-p)^2}{p^2} & \frac{-(1-p)}{p} & \frac{(1-p)^2}{p^2} & \frac{(1-p)^2}{p^2} & \frac{-(1-p)}{p} & \frac{-(1-p)}{p} & \frac{-(1-p)}{p} & 1
}
\end{eqnarray*}
\caption{$(\matdrop)^{-1}$ when $\tdom=\set{a,b}$ and $W=2$}\label{fig:matdropinv}
\end{figure*}
Note that if super$(D_i,D_j)=1$ then blank$(D_i,D_j)\geq 0$ because it is the number of tuples that need to be changed into ``?'' in order to convert $D_i$ into $D_j$.
With these definitions, we can enumerate the columns of $ (\matdrop)^{-1}$ in the following way. We associate a dataset $D_i$ to the $i^\text{th}$ column $m^{(i)}$ of $(\matdrop)^{-1}$.
A simple proof by induction on $W$ shows that\footnote{Note in the induction, the base case is $(\matdrop)^{-1}=(B_p)^{-1}$ and the general case is $(\matdrop)^{-1} = \bigotimes_{i=1}^W (B_p)^{-1}$. For reference, Figure \ref{fig:matdropinv} shows $(\matdrop)^{-1} = \bigotimes_{i=1}^2(B_p)^{-1}$ for the case $W=2$.}:
\begin{eqnarray*}
j^\text{th} \text{ entry of }m^{(i)}&=& \text{super}(D_i,D_j)\frac{\left(-(1-p)\right)^{\text{blank}(D_i,D_j)} }{p^{\text{nonblank}(D_i)}}
\end{eqnarray*}
For example, Figure \ref{fig:matdropinv} shows $(\matdrop)^{-1}$ for the case $W=2$ and $\tdom=\set{a,b}$. The second column, $m^{(2)}$, is associated with the dataset $D_2=ab$. Its $4^\text{th}$ entry is $0$ because $D_4=ba$ and there is no way to convert $ab$ to $ba$ by replacing tuple values with ``?''. On the other hand, the $3^\text{rd}$ entry is $\frac{-(1-p)}{p^2}$ because $D_3=a?$ and super$(D_2,D_3)=1$ (we can replace the $b$ with ``?''), blank$(D_2,D_3)=1$, and nonblank$(D_2)=2$.
Now, according to Theorem \ref{thm:invcnfinf}, the vector $$(P[\mech(D_1)=\omega],~\dots,~P[\mech(D_n)=\omega])$$ (for some algorithm $\mech$ and output $\omega\in\range(\mech)$) belongs to $\rowcone(\set{\mechdrop})$ if and only if its dot product with every column $m^{(i)}$ of $(\matdrop)^{-1}$ is nonnegative. Using our method of associating $D_i$ with $m^{(i)}$, the condition becomes:
\begin{eqnarray*}
\forall i:~\sum\limits_{D_j\subseteq D_i} P(\mech(D_j)=\omega)\frac{\left(-(1-p)\right)^{\text{blank}(D_i,D_j)} }{p^{\text{nonblank}(D_i)}}\geq 0
\end{eqnarray*}
where we use the notation $D_j\subseteq D_i$ to mean that $D_i$ can be converted to $D_j$ by changing some tuple values to ``?''. We then multiply by $p^{\text{nonblank}(D_i)}$ without affecting the inequalities to get:
\begin{eqnarray*}
\forall i:~\sum\limits_{D_j\subseteq D_i} P(\mech(D_j)=\omega)\left(-(1-p)\right)^{\text{blank}(D_i,D_j)}\geq 0
\end{eqnarray*}
\vspace{0.5cm}\noindent\textbf{Step 3:} combining the results. Since the matrix representation of $\mechdrop$ is invertible, $\mechdrop$ and $\mechsort$ commute, $\mechsort$ is idempotent, and $\mechsamp=\mechsort\circ\mechdrop$, we can use Lemma \ref{lem:commute} to combine the results from the previous two steps. Thus a vector
$$(P[\mech(D_1)=\omega],~\dots,~P[\mech(D_n)=\omega])$$
belongs to $\rowcone(\set{\mechsamp})$ if and only if:
\begin{eqnarray*}
\forall i:~\sum\limits_{D_j\subseteq D_i} P(\mech(D_j)=\omega)\left(-(1-p)\right)^{\text{blank}(D_i,D_j)}\geq 0
\end{eqnarray*}
and $P(\mech(D_i)=\omega)=P(\mech(D_j)=\omega)$ whenever $D_i$ and $D_j$ are permutations of each other.
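As a concrete illustration (using the running example with $\tdom=\set{a,b}$ and $W=2$, added here for intuition), the condition for $D_i=ab$ reads
$$P(\mech(ab)=\omega)-(1-p)P(\mech(a?)=\omega)-(1-p)P(\mech(?b)=\omega)+(1-p)^2P(\mech(??)=\omega)\geq 0$$
which is $p^2$ times the dot product of the vector of probabilities with the column of $(\matdrop)^{-1}$ associated with $ab$ in Figure \ref{fig:matdropinv}.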
\end{proof}
\section{Proof of Theorem \lowercase{\ref{thm:samplesemantics}}}\label{app:samplesemantics}
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:samplesemantics}).}
Suppose an attacker knows individuals $\set{i_1,\dots,i_k}$ are a superset of those who participated in the survey. Furthermore, suppose the attacker knows that the individuals have record values $r_{j_1},\dots,r_{j_k}$ but is unsure about the true assignment $\sigma$ of record value to individual (i.e. the attacker may know that exactly 25 of 138 individuals have cancer but does not know who the cancer patients are). If the attacker believes that each individual participated in the survey with probability $q\geq \frac{1}{2-p}$, then $\mechsamp$ (and any other algorithm in $\cnf(\set{\mechsamp})$) guarantees that after seeing the sanitized data, the attacker learns nothing new about the true assignment. Furthermore, the attacker will believe that the parity of the subset of $\set{i_1,\dots,i_k}$ who did not participate is more likely to be even than odd.
\end{theorem}
\begin{proof}
Let $\mech$ be an algorithm such that every row of its matrix representation $M$ belongs to $\rowcone(\set{\mechsamp})$.
Let $\set{i_1,\dots,i_k}$ be a set of $k$ individuals that is a superset of the individuals who participated in the survey. Suppose that an attacker knows that collectively their record values are $r_{j_1},\dots,r_{j_k}$ although the attacker may not know the specific assignment of record value to individual.
Let $\sigma$ be any assignment of the record values $r_{j_1},\dots,r_{j_k}$ to the individuals $\set{i_1,\dots,i_k}$. That is, $\sigma(i_1)$ is the record value assigned to individual $i_1$, etc. Let $D^\sigma\in\inp$ be the possible input dataset that is determined by $\sigma$: the tuples corresponding to individuals $i_1,\dots,i_k$ have values $\sigma(i_1),\dots,\sigma(i_k)$, respectively, and all other tuples are set to ``?''.
\vspace{0.5cm}\noindent\textbf{Step 1:} reduction to the case where $k=W$ (where $W$ is the number of individuals in the population).
Without loss of generality we can assume $k=W$. To see why, if an individual $j$ is not one of the $i_1,\dots,i_k$, then the attacker knows for sure that individual $j$ did not participate in the survey. In this case, it is impossible for $\mechsamp$ to output any sanitized data in which the $j^\text{th}$ tuple is different from ``?'' and it is impossible to have any input dataset where the $j^\text{th}$ tuple is different from ``?''. We can therefore redefine the input space to consist of all individuals except $j$ and $\mechsamp$ operates as before (by dropping tuples independently and sorting the result). In other words, if individual $j$ did not participate in the survey with probability $1$, we can just pretend individual $j$ never existed. We can repeat this process of eliminating from consideration all individuals not in $\set{i_1,\dots,i_k}$.
Therefore, without loss of generality, we set $k=W$ so that the attacker knows that the individuals are $1,\dots, W$ and knows that their record values are $r_{j_1},\dots, r_{j_W}$, but may not know the specific assignment of records to individuals. Furthermore, the attacker believes that each individual participated in the survey with (independent) probability $q\geq\frac{1}{2-p}$.
\vspace{0.5cm}\noindent\textbf{Step 2:} show that the relative preferences of assignments is the same.
If $\sigma^\prime$ is any other assignment and $D^{\sigma^\prime}$ the corresponding database\footnote{i.e. the record belonging to individual $i$ is $\sigma^\prime(i)$.} then, by Theorem \ref{thm:samplecone}, $P(\mech(D^\sigma)=\omega)=P(\mech(D^{\sigma^\prime})=\omega)$ and consequently
\begin{eqnarray*}
\frac{P(\data=D^\sigma~|~\mech(\data)=\omega) }{ P(\data=D^{\sigma^\prime}~|~\mech(\data)=\omega)} &=& \frac{P(\data=D^\sigma)}{P(\data=D^{\sigma^\prime})}
\end{eqnarray*}
\vspace{0.5cm}\noindent\textbf{Step 3:} show that parity is protected when attacker knows the true assignment $\sigma$.
First, note that the condition:
{\small
\begin{eqnarray}
\lefteqn{\hspace{-1em}P\left(\parity\left(\substack{\text{Individuals}\\\text{who did not}\\ \text{participate in survey}}\right)=0~|~\substack{\mech(\data)=\omega}\right)}\nonumber\\
&&\hspace{-3em}\geq P\left(\parity\left(\substack{\text{Individuals }\\\text{who did not} \\\text{participate in survey}}\right)=1~|~\substack{\mech(\data)=\omega}\right)\label{eqn:sampthisisit}
\end{eqnarray}
}
is exactly equivalent to the following condition (after subtracting the right hand side from the left hand side and plugging in the relevant probabilities):
{\small
\begin{eqnarray}
\forall \omega\in\range(\mech):\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\nonumber\\
\sum\limits_{D_j\subseteq D^\sigma} \frac{P(\mech(D_j)=\omega)\left(-(1-q)\right)^{\text{miss}(D_j)} q^{\text{nonblank}(D_j)}}{P(\mech(\data)=\omega)}\geq 0\label{eqn:sampthisisit2}
\end{eqnarray}
}
where the notation $D_j\subseteq D^\sigma$ means that $D^\sigma$ can be converted to $D_j$ by replacing some tuples with ``?'', nonblank$(D_j)$ is the number of tuples in $D_j$ not equal to ``?'', and miss$(D_j)$ is the number of tuples in $D_j$ equal to ``?''. Note that the terms corresponding to $D_j$ such that $D_j\not\subseteq D^\sigma$ have been dropped because they are multiplied by a $0$ prior.
Multiplying by $P(\mech(\data)=\omega)$, dividing by $q^W$ (where $W$ is the size of the population), and noting that $q^{\text{nonblank}(D_j)}/q^{W} =q^{-\text{miss}(D_j)}$, our goal is now to prove:
\begin{eqnarray}
\lefteqn{\forall \omega\in\range(\mech):}\nonumber\\
&&\hspace{-2em}\sum\limits_{D_j\subseteq D^\sigma} \hspace{-0.75em}P(\mech(D_j)=\omega)\left(-\frac{1-q}{q}\right)^{\text{miss}(D_j)} \geq 0\label{eqn:sampprove}
\end{eqnarray}
for any $\mech$ in the row cone of $\mechsamp$.
By Lemma \ref{lem:commute},
\begin{eqnarray*}
\cnf(\set{\mechsamp})&=&\cnf(\set{\mechsort\circ\mechdrop})\\
&\subseteq& \cnf(\set{\mechdrop})
\end{eqnarray*}
By
Corollary \ref{cor:one}, $\mech$ is of the form $\randalg\circ\mechdrop$ for some postprocessing algorithm $\randalg$. This means that every row of $M$ (the matrix representation of $\mech$) is a nonnegative linear combination of the rows of $\matdrop$ (the matrix representation of $\mechdrop$). Thus all we need to do is to show that:
\begin{eqnarray}
\lefteqn{\forall \omega\in\range(\mechdrop):}\nonumber\\
&&\hspace{-1.5cm}\sum\limits_{D_j\subseteq D^\sigma} \hspace{-0.75em}P(\mechdrop(D_j)=\omega)\left(-\frac{1-q}{q}\right)^{\text{miss}(D_j)}\hspace{0em}\geq 0\label{eqn:sampprove2}
\end{eqnarray}
because every inequality (one for each $\omega\in\range(\mech)$) in Equation \ref{eqn:sampprove} is a nonnegative linear combination of the inequalities in Equation \ref{eqn:sampprove2}: the combination weights are exactly the weights that express the rows of the matrix representation of $\mech$ in terms of the rows of the matrix representation of $\mechdrop$.
Now consider the vector:
$$(y_1,\dots, y_n)^T$$
where $y_j=\left(-\frac{1-q}{q}\right)^{\text{miss}(D_j)}$ if $D_j\subseteq D^\sigma$ and $y_j=0$ if $D_j\not\subseteq D^\sigma$. Then the conditions in Equation \ref{eqn:sampprove2} are equivalent to:
\begin{eqnarray}
\matdrop ~\cdot~(y_1,\dots,y_n)^T\succeq 0\label{eqn:sampprove3}
\end{eqnarray}
where $\matdrop$ is the matrix representation of $\mechdrop$ and the notation $\vec{v}\succeq 0$ means that all of the components of $\vec{v}$ are nonnegative.
Now, it is easy to see that:
\begin{eqnarray*}
y&=&(y_1,\dots, y_n)^T\\
&=&\bigoplus\limits_{i=1}^W C_i
\end{eqnarray*}
where the $C_i$ are column vectors defined as:
\begin{eqnarray*}
j^\text{th}\text{ entry of } C_i=
\begin{cases}
1 &\text{ if $\sigma(i)=r_j\in \tdom$}\\
-\frac{1-q}{q}&\text{ if $j=N+1$}\\
0 &\text{ otherwise}
\end{cases}
\end{eqnarray*}
Recall $\sigma(i)$ is the assignment of a record to individual $i$ (in this step we assume that $\sigma$ is known to the attacker and we remove this assumption in Step 4). Recall also that $\tdom=\set{r_1,\dots,r_N}$ is the domain of possible record values and that tuples come from the set $\tdom\cup\set{?}$, where ``?'' represents a missing value (and which follows all the $r_i$ in the ordering of the domain). The values in $\tdom$ are ordered only for the purposes of expressing our algorithms as matrices\footnote{Thus we need an order on columns, which correspond to datasets, and the order on datasets is induced by the order on the tuples, for example see Figure \ref{fig:mechdrop}.}.
Recalling that $\matdrop$, the matrix representation of $\mechdrop$, is the Kronecker product $\matdrop=\bigoplus_{i=1}^W B_p$, where $B_p$ is defined in Equation \ref{eqn:matdropb} in Appendix \ref{app:samplecone}, we see that Equation \ref{eqn:sampprove3} is equivalent to:
\begin{eqnarray}
0\preceq \left(\bigoplus\limits_{i=1}^W B_p\right)\left(\bigoplus\limits_{i=1}^W C_i\right)
= \bigoplus\limits_{i=1}^W (B_pC_i)\label{eqn:sampprove4}
\end{eqnarray}
And thus our goal is to prove that the components of $B_pC_i$ are nonnegative. Considering the rows of $B_p$, we see that the dot product between row $j<N+1$ of $B_p$ and the vector $C_i$ is either $p$ or $0$. The dot product between row $N+1$ of $B_p$ and the vector $C_i$ is $(1-p)-\frac{1-q}{q}$. This quantity is nonnegative as long as $q\geq\frac{1}{2-p}$: indeed, $(1-p)-\frac{1-q}{q}\geq 0$ if and only if $q(1-p)\geq 1-q$, i.e. $q(2-p)\geq 1$. Thus for this setting of $q$, Equation \ref{eqn:sampprove4} is true, which implies Equation \ref{eqn:sampprove3} is true, which implies Equation \ref{eqn:sampprove2}, which implies Equation \ref{eqn:sampprove}, which implies Equation \ref{eqn:sampthisisit}, which is what we needed to prove.
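As an informal numerical sanity check of this step (it is not part of the proof), the following Python sketch builds the Kronecker products for a small population and verifies that every component of $\left(\bigoplus_{i} B_p\right)\left(\bigoplus_{i} C_i\right)$ is nonnegative at the boundary value $q=\frac{1}{2-p}$. Since Equation \ref{eqn:matdropb} appears in a separate appendix, the sketch reconstructs $B_p$ from the dot products computed above; that reconstruction, and all parameter values, are assumptions made for illustration.
{\small
\begin{verbatim}
import numpy as np

def B(p, N):
    # one-tuple matrix: keep the value with prob. p, drop it to "?"
    # with prob. 1-p (reconstructed from the dot products in the text)
    M = np.zeros((N + 1, N + 1))
    M[:N, :N] = p * np.eye(N)
    M[N, :N] = 1 - p
    M[N, N] = 1.0
    return M

def C(j0, q, N):
    # the vector C_i: 1 at the position of sigma(i), -(1-q)/q at "?"
    v = np.zeros(N + 1)
    v[j0] = 1.0
    v[N] = -(1 - q) / q
    return v

p, N, W = 0.3, 4, 3
q = 1.0 / (2 - p)            # the boundary case q = 1/(2-p)
sigma = [0, 2, 3]            # an arbitrary assignment for the W individuals

big_B, big_y = np.ones((1, 1)), np.ones(1)
for j0 in sigma:             # Kronecker products over the W individuals
    big_B = np.kron(big_B, B(p, N))
    big_y = np.kron(big_y, C(j0, q, N))

lhs = big_B @ big_y          # equals the Kronecker product of the B_p C_i
assert (lhs >= -1e-12).all()
print("min component:", lhs.min())   # ~0 at the boundary value of q
\end{verbatim}
}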
\vspace{0.5cm}\noindent\textbf{Step 4:} show that parity is protected when attacker does not know the true assignment $\sigma$.
When the attacker does not know the true assignment, there is a probability distribution over assignments. In this case, the difference in posterior probability of the parity is a nonnegative linear combination of the quantities on the left-hand side of Equation \ref{eqn:sampthisisit2}, where $D^{\sigma^\prime}$ varies over all possible assignments $\sigma^\prime$ of record values to individuals that are consistent with the attacker's background knowledge, and where the weights of the combination are $P(\data=D^{\sigma^\prime})$. Thus the difference in posterior probability of the parity is a nonnegative linear combination of nonnegative quantities (by Step 3) and therefore is nonnegative.
\end{proof}
}
\section{Proof of Lemma \lowercase{\ref{lem:mechgeo}}}\label{app:mechgeo}
\begin{lemma}\emph{(Proof and restatement of Lemma \ref{lem:mechgeo}).}
$\linebreak[0]\mech_{DNB(p,1)}$, the differenced negative binomial mechanism with $r=1$, is the geometric mechanism.
\end{lemma}
\begin{proof}
We need to show that the difference between two independent Geometric$(p)$ distributions has the probability mass function $f(k)=\frac{1-p}{1+p}p^{|k|}$.
Let $X$ and $Y$ be independent Geometric$(p)$ random variables and let $Z=X-Y$. Then
\begin{eqnarray*}
P(Z=k)&=&
\begin{cases}
\sum\limits_{j=0}^\infty P(X=j+k)P(Y=j) &\text{ if }k\geq 0\\
\sum\limits_{j=0}^\infty P(X=j)P(Y=j+|k|) &\text{ if }k<0
\end{cases}
\end{eqnarray*}
Combining both cases, we get
\begin{eqnarray*}
P(Z=k)&=&\sum\limits_{j=0}^\infty (1-p)p^{j+|k|}(1-p)p^j\\
&=&(1-p)^2p^{|k|}\sum\limits_{j=0}^\infty(p^2)^j\\
&=&(1-p)^2p^{|k|}\frac{1}{1-p^2}\\
&=&(1-p)^2p^{|k|}\frac{1}{(1-p)(1+p)}\\
&=&\frac{1-p}{1+p}p^{|k|}
\end{eqnarray*}
\end{proof}
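As a quick numerical cross-check of the lemma (the parameter and truncation length below are arbitrary choices), the following Python sketch convolves a truncated Geometric$(p)$ pmf with its reflection and compares the result against $\frac{1-p}{1+p}p^{|k|}$.
{\small
\begin{verbatim}
import numpy as np

p, n = 0.4, 200                      # n terms truncate the infinite support
geo = (1 - p) * p ** np.arange(n)    # pmf of Geometric(p) on 0, 1, ..., n-1
diff = np.convolve(geo, geo[::-1])   # pmf of X - Y, indexed -(n-1) .. n-1
k = np.arange(-(n - 1), n)
target = (1 - p) / (1 + p) * p ** np.abs(k)
print(np.abs(diff - target).max())   # ~0 up to truncation error
\end{verbatim}
}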
\eat{
\begin{theorem}\label{thm:geo}
Let the input domain $\inp=\set{\dots, -2, -1, 0, 1, 2, \dots}$ be the set of integers. Let $\mech_{\epsilon\text{-Geo}}$ be the algorithm that adds to its input a random integer $k$ with distribution $\frac{e^\epsilon-1}{e^\epsilon+1} e^{-\epsilon |k|}$. A bounded row vector $\vec{x}=(\dots, x_{-2}, x_{-1}, x_0, x_1, x_2, \dots)$ belongs to $\rowcone(\set{\mech_{\epsilon\text{-Geo}}})$ if for all integers $k$,
$$-x_{k-1} + (e^\epsilon+1/e^\epsilon)x_k - x_{k+1}\geq 0$$
\end{theorem}
\begin{proof}
Let $M_{\epsilon\text{-Geo}}$ be the matrix representation of $\mech_{\epsilon\text{-Geo}}$. Define the matrix $W$ whose rows and columns are indexed by the integers such that the $(i,j)^\text{th}$ entry of $W$, denoted by $W_{i,j}$ is
\begin{eqnarray*}
W_{i,j}&=&
\begin{cases}
-1 & \text{ if } |i-j|=1\\
e^\epsilon + 1/e^\epsilon & \text{ if } i=j\\
0 & \text{ otherwise }
\end{cases}
\end{eqnarray*}
it is easy to see that $(e^\epsilon/(e^\epsilon-1)^2)W$ is the inverse of $M_{\epsilon\text{-Geo}}$. Since each column of $W$ has $L_1$ norm equal to $e^\epsilon+1/e^\epsilon+2$, we can apply Theorem \ref{thm:invcnfinf} so that the row vector $\vec{x}$ is in the row cone if and only if $\vec{x}W$ has no negative components.
\end{proof}
}
\section{Proof of Theorem \lowercase{\ref{thm:dnbrowcone}}}\label{app:dnbrowcone}
We first need an intermediate result.
\begin{lemma}\label{lem:fourierh}
Let $X$ and $Y$ be independent random variables with the Binomial$(\frac{p}{1+p},r)$ distribution (where $p/(1+p)$ is the success probability and $r$ is the number of trials). Let $Z=X-Y$ and let $f_B\left(k;\frac{p}{p+1},r\right)=P(Z=k)$ for integers $k=-r, \dots, 0, \dots, r$.
Define the function $h$ as $h(k)=(-1)^k f_B\left(k;\frac{p}{p+1},r\right)$. The Fourier series transform $\widehat{h}$ of $h$ (defined as $\widehat{h}(t)=\sum_{\ell=-\infty}^\infty h(\ell)e^{i\ell t}$) is equal to
\begin{eqnarray*}
\widehat{h}(t)=\frac{1}{(1+p)^{2r}}(1-pe^{it})^r(1-pe^{-it})^r
\end{eqnarray*}
\end{lemma}
\begin{proof}
Define the random variable $Y^\prime =-Y$. Then $X+Y^\prime=Z$. Thus
\begin{eqnarray*}
\widehat{h}(t)&=&\sum\limits_{\ell=-\infty}^\infty h(\ell)e^{i\ell t}\\
&=&\sum\limits_{\ell=-\infty}^\infty (-1)^\ell f_B\left(\ell;\frac{p}{p+1},r\right)e^{i\ell t}\\
&=&\sum\limits_{\ell=-\infty}^\infty (-1)^\ell e^{i\ell t}P(Z=\ell)\\
&=&\sum\limits_{\ell=-\infty}^\infty e^{i\ell t}(-1)^\ell\sum\limits_{j=-\infty}^\infty P(X=\ell-j)P(Y^\prime=j)\\
&=&\sum\limits_{\ell=-\infty}^\infty e^{i\ell t}\sum\limits_{j=-\infty}^\infty (-1)^{\ell-j}P(X=\ell-j)(-1)^jP(Y^\prime=j)\\
&=&\sum\limits_{\ell=-\infty}^\infty \sum\limits_{j=-\infty}^\infty (-1)^{\ell-j}e^{i(\ell-j)t}P(X=\ell-j)(-1)^je^{ijt}P(Y^\prime=j)\\
&=&\sum\limits_{j=-\infty}^\infty (-1)^je^{ijt}P(Y^\prime=j)\sum\limits_{\ell=-\infty}^\infty(-1)^{\ell-j}e^{i(\ell-j)t}P(X=\ell-j)
\end{eqnarray*}
Now,
\begin{eqnarray*}
\lefteqn{\sum\limits_{\ell=-\infty}^\infty(-1)^{\ell-j}e^{i(\ell-j)t}P(X=\ell-j)}\\
&=&\sum\limits_{\ell=-\infty}^\infty(-1)^{\ell}e^{i\ell t}P(X=\ell)\\
&=&\sum\limits_{\ell=0}^r(-1)^{\ell}e^{i\ell t}P(X=\ell)\\
&&\text{(Since $X$ can only be $0,\dots,r$)}\\
&=&\sum\limits_{\ell=0}^r(-1)^{\ell}e^{i\ell t} {r \choose \ell}\left(\frac{p}{1+p}\right)^\ell\left(\frac{1}{1+p}\right)^{r-\ell}\\
&=&\frac{1}{(1+p)^r}\sum\limits_{\ell=0}^r(-1)^{\ell}e^{i\ell t} {r \choose \ell}p^{\ell}\\
&=&\frac{1}{(1+p)^r}\sum\limits_{\ell=0}^r{r\choose \ell}(-pe^{it})^{\ell} \\
&=&\frac{1}{(1+p)^r}(1-pe^{it})^r\text{ by the Binomial theorem}
\end{eqnarray*}
Thus continuing our previous calculation,
\begin{eqnarray*}
\widehat{h}(t)&=&\sum\limits_{j=-\infty}^\infty (-1)^je^{ijt}P(Y^\prime=j) \frac{1}{(1+p)^r}(1-pe^{it})^r\\
&=&\sum\limits_{j=-r}^0 (-1)^je^{ijt}P(Y^\prime=j) \frac{1}{(1+p)^r}(1-pe^{it})^r\\
&&\text{(since $Y^\prime$ can only be $-r,\dots, 0$)}\\
&=&\sum\limits_{j=-r}^0 (-1)^je^{ijt}P(Y=-j) \frac{1}{(1+p)^r}(1-pe^{it})^r\\
&&\text{(since $Y^\prime=-Y$ )}\\
&=&\sum\limits_{j=0}^r (-1)^je^{-ijt}P(Y=j) \frac{1}{(1+p)^r}(1-pe^{it})^r\\
\end{eqnarray*}
Now, similar to what we did before, we can derive that $\sum\limits_{j=0}^r (-1)^je^{-ijt}P(Y=j)= \frac{1}{(1+p)^r}(1-pe^{-it})^r$ and therefore
\begin{eqnarray*}
\widehat{h}(t)=\frac{1}{(1+p)^{2r}}(1-pe^{it})^r(1-pe^{-it})^r
\end{eqnarray*}
\end{proof}
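The closed form can also be checked numerically. The following Python sketch (a sanity check, not a proof; the parameter values are arbitrary) evaluates $\widehat{h}(t)$ directly from the differenced binomial pmf on a grid of $t$ values and compares it to the formula above.
{\small
\begin{verbatim}
import numpy as np
from math import comb

p, r = 0.35, 3
s = p / (1 + p)                      # success probability of the binomials
bino = np.array([comb(r, k) * s**k * (1 - s)**(r - k) for k in range(r + 1)])
fB = np.convolve(bino, bino[::-1])   # pmf of X - Y on -r .. r
k = np.arange(-r, r + 1)
h = (-1.0) ** k * fB

for t in np.linspace(-np.pi, np.pi, 7):
    lhs = np.sum(h * np.exp(1j * k * t))      # h-hat(t) by direct summation
    rhs = ((1 - p * np.exp(1j * t)) ** r
           * (1 - p * np.exp(-1j * t)) ** r / (1 + p) ** (2 * r))
    assert abs(lhs - rhs) < 1e-12
print("closed form for h-hat verified on a grid")
\end{verbatim}
}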
\begin{theorem}\emph{(Restatement and proof of Theorem \ref{thm:dnbrowcone}).}
A bounded row vector $\vec{x}=(\dots, \linebreak[0]x_{-2}, \linebreak[0]x_{-1}, \linebreak[0]x_0, x_1, x_2, \dots)$ belongs to $\rowcone(\set{\mech_{DNB(p,r)}})$ if for all integers $k$,
$$\forall k:~~~\sum\limits_{j=-r}^r (-1)^j f_B\left(j;\frac{p}{1+p},r\right) x_{k+j} \geq 0$$
where $p$ and $r$ are the parameters of the differenced negative binomial distribution and $f_B(\cdot;p/(1+p),r)$ is the probability mass function of the difference of two independent binomial (not negative binomial) distributions whose parameters are $p/(1+p)$ (success probability) and $r$ (number of trials).
\end{theorem}
\begin{proof}
For convenience, define the function $h$ as follows:
\begin{eqnarray*}
h(j)&=&(-1)^jf_B\left(j;\frac{p}{1+p},r\right)
\end{eqnarray*}
Let $g_{NB}(\cdot;p,r)$ be the probability mass function for the difference of two independent NB$(p,r)$ random variables. Then the matrix representation $M_{DNB(p,r)}$ of the differenced negative binomial mechanism $\mech_{DNB(p,r)}$ is the matrix whose rows and columns are indexed by the integers and whose entries are defined as:
\begin{eqnarray*}
(i,j)\text{ entry of }M_{DNB(p,r)}&=& g_{NB}(i-j;p,r)
\end{eqnarray*}
By Theorem \ref{thm:invcnfinf} we need to show that $M_{DNB(p,r)}$ is the inverse of $\frac{(1+p)^{2r}}{(1-p)^{2r}}H$ where $H$ is the matrix whose rows and columns are indexed by the integers and whose entries are defined as:
\begin{eqnarray*}
(i,j)\text{ entry of }H&=&h(i-j)=(-1)^{i-j}f_B\left(i-j;\frac{p}{1+p},r\right)
\end{eqnarray*}
(to see how Theorem \ref{thm:invcnfinf} is applied, note that each entry of the product $\vec{x}H$ has the form $\sum\limits_{j=-r}^r (-1)^j f_B\left(j;\frac{p}{1+p},r\right) x_{k+j} $).
Now, to show that $M_{DNB(p,r)}$ and $\frac{(1+p)^{2r}}{(1-p)^{2r}}H$ are inverses of each other, we note that
\begin{eqnarray}
\lefteqn{(i,j)\text{ entry of }\left(M_{DNB(p,r)}H\right)}\nonumber\\
&=&\sum\limits_{\ell=-\infty}^\infty g_{NB}(i-\ell;p,r)h(\ell-j)\nonumber\\
&=&\sum\limits_{\ell^\prime=-\infty}^\infty g_{NB}(i-j-\ell^\prime;p,r)h(\ell^\prime)\nonumber\\
&=&\sum\limits_{\ell^\prime=-r}^r g_{NB}(i-j-\ell^\prime;p,r)h(\ell^\prime)\label{eqn:convnb}
\end{eqnarray}
The last step follows from the fact that $h(\ell^\prime)$ is nonzero only when $\ell^\prime$ is between $-r$ and $r$, since $f_B\left(\cdot;\frac{p}{1+p},r\right)$ is the probability mass function of the difference of two binomial random variables (each of which is bounded between $0$ and $r$).
Now, Equation \ref{eqn:convnb} is the definition of the convolution \cite{rudin} of $g_{NB}(\cdot;p,r)$ and $h$ at the point $i-j$. That is,
\begin{eqnarray*}
(g_{NB}(\cdot;p,r)\star h)(k)=\sum\limits_{\ell^\prime=-r}^r g_{NB}(k-\ell^\prime;p,r)h(\ell^\prime)
\end{eqnarray*}
and thus to show that $M_{DNB(p,r)}$ and $\frac{(1+p)^{2r}}{(1-p)^{2r}}H$ are inverses of each other, we just need to show that the convolution of $g_{NB}(\cdot;p,r)$ and $h$ at the point $0$ is equal to $\frac{(1-p)^{2r}}{(1+p)^{2r}}$ and that the convolution at all other integers is $0$. In other words, we want to show that for all integers $k$,
\begin{eqnarray}
(g_{NB}(\cdot;p,r)\star h)(k) = \frac{(1-p)^{2r}}{(1+p)^{2r}}~\delta(k)\label{eqn:showconvdelta}
\end{eqnarray}
where $\delta$ is the function with $\delta(0)=1$ and $\delta(k)=0$ for all other integers. Take the Fourier series transform of both sides while noting two facts: (1) the Fourier series transform of $\delta$ is $\widehat{\delta}(t)=\sum\limits_{\ell=-\infty}^\infty \delta(\ell)e^{i\ell t}\equiv 1$, and (2) the Fourier transform of a convolution is the product of the Fourier transforms \cite{rudin}. Then the transformed version of Equation \ref{eqn:showconvdelta} becomes
\begin{eqnarray}
\widehat{g_{NB}}(t)~\widehat{h}(t) = \frac{(1-p)^{2r}}{(1+p)^{2r}}\widehat{\delta}(t) \equiv \frac{(1-p)^{2r}}{(1+p)^{2r}}\label{eqn:fouriershow}
\end{eqnarray}
for all real $t$, where $\widehat{g_{NB}}$, $\widehat{h}$, $\widehat{\delta}$ are the Fourier series transforms of $g_{NB}(\cdot;p,r)$, $h$, and $\delta$, respectively. Once we prove that Equation \ref{eqn:fouriershow} is true, this implies Equation \ref{eqn:showconvdelta} is true (by the inverse Fourier transform) which then implies that $M_{DNB(p,r)}$ and $\frac{(1+p)^{2r}}{(1-p)^{2r}}H$ are inverses of each other and this would finish the proof (by Theorem \ref{thm:invcnfinf}).
Thus our goal is to prove Equation \ref{eqn:fouriershow}.
The Fourier series transform (i.e. characteristic function), as a function of $t$, of the NB$(p,r)$ distribution is known to be:
\begin{eqnarray*}
\left(\frac{1-p}{1-pe^{it}}\right)^r
\end{eqnarray*}
so $g_{NB}(\cdot;p,r)$, being the difference of two independent negative binomial random variables, has the Fourier series transform (as a function of $t$)
\begin{eqnarray*}
\widehat{g_{NB}}(t)&=&\left(\frac{1-p}{1-pe^{it}}\right)^r \left(\frac{1-p}{1-pe^{-it}}\right)^r
\end{eqnarray*}
By Lemma \ref{lem:fourierh},
\begin{eqnarray*}
\widehat{h}(t)=\frac{1}{(1+p)^{2r}}(1-pe^{it})^r(1-pe^{-it})^r
\end{eqnarray*}
Thus Equation \ref{eqn:fouriershow} is true and we are done.
\end{proof}
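As an informal numerical check of Equation \ref{eqn:showconvdelta} (not part of the proof; the parameters and the truncation of the negative binomial support are arbitrary), the following Python sketch convolves the two pmfs and confirms that the result is a delta spike of the right height.
{\small
\begin{verbatim}
import numpy as np
from math import comb

p, r, n = 0.35, 2, 60       # n truncates the negative binomial support

nb = np.array([comb(k + r - 1, k) * p**k * (1 - p)**r for k in range(n)])
g = np.convolve(nb, nb[::-1])   # differenced NB(p,r) pmf on -(n-1) .. n-1

s = p / (1 + p)
bino = np.array([comb(r, k) * s**k * (1 - s)**(r - k) for k in range(r + 1)])
fB = np.convolve(bino, bino[::-1])       # differenced binomial pmf on -r .. r
h = (-1.0) ** np.arange(-r, r + 1) * fB

conv = np.convolve(g, h)
center = len(conv) // 2                  # index of k = 0
target = (1 - p) ** (2 * r) / (1 + p) ** (2 * r)
print(abs(conv[center] - target))        # ~0: the spike has the right height
print(np.abs(np.delete(conv, center)).max())   # ~0 away from k = 0
\end{verbatim}
}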
\subsection{The Consistent Normal Form}\label{ref:cnf:cnf}
Recall that we treat any privacy definition $\priv$ as the set of algorithms with the same input domain. For example, we view $k$-anonymity as the set of all algorithms that produce $k$-anonymous tables \cite{samarati01:microdata}. As noted in \cite{privaxioms:journal}, such a set can often have inconsistencies.
For example, consider an algorithm $\mech$ that first transforms its input into a $k$-anonymous table and then builds a statistical model from the result and outputs the parameters of that model. Technically, this algorithm $\mech$ does not satisfy $k$-anonymity because ``model parameters'' are not a ``$k$-anonymous table.''
However, it would be strange if the data curator decided that releasing a $k$-anonymous table was acceptable but releasing a model built solely from that table (without any side information) was not acceptable.
The motivation for the consistent normal form is that it makes sense to enlarge the set $\priv$ by adding $\mech$ into this set.
It turns out that privacy axioms can help us identify the algorithms that should be added. For this purpose, we will use the following two axioms from \cite{privaxioms:journal}.
\begin{axiom}[Post-processing \cite{privaxioms:journal}]\label{ax:post}
Let $\priv$ be a privacy definition (set of algorithms). Let $\mech\in \priv$ and let $\randalg$ be any algorithm whose domain contains the range of $\mech$ and whose random bits are independent of the random bits of $\mech$. Then the composed algorithm $\randalg\circ\mech$ (which first runs $\mech$ and then runs $\randalg$ on the result) should also belong to $\priv$.\footnote{Note that if $\mech_1$ and $\mech_2$ are algorithms with the same range and domain such that $P(\mech_1(D_i)=\omega)=P(\mech_2(D_i)=\omega)$ for all $D_i\in\inp$ and $\omega\in\range(\mech_1)$, then we consider $\mech_1$ and $\mech_2$ to be equivalent.}
\end{axiom}
Note that Axiom \ref{ax:post} prevents algorithm $\randalg$ from using side information since its only input is $\mech(D)$.
\begin{axiom}[Convexity \cite{privaxioms:journal}]\label{ax:conv}
Let $\priv$ be a privacy definition (set of algorithms). Let $\mech_1\in \priv$ and $\mech_2\in \priv$ be two algorithms satisfying this privacy definition. Define the algorithm $\choice^p_{\mech_1,\mech_2}$ to be the algorithm that runs $\mech_1$ with probability $p$ and $\mech_2$ with probability $1-p$. Then $\choice^p_{\mech_1,\mech_2}$ should belong to $\priv$.
\end{axiom}
The justification in \cite{privaxioms:journal} for the convexity axiom (Axiom \ref{ax:conv}) is the following. If both $\mech_1$ and $\mech_2$ belong to $\priv$, then both are trusted to produce sanitized data from the input data. That is, the outputs of $\mech_1$ and $\mech_2$ leave some amount of uncertainty about the input data. If the data curator randomly chooses between $\mech_1$ and $\mech_2$, the sensitive input data is protected by two layers of uncertainty: the original uncertainty added by either $\mech_1$ or $\mech_2$ and the uncertainty about which algorithm was used. Further discussion can be found in \cite{privaxioms:journal}.
Using these two axioms, we define the \emph{consistent normal form} as follows:\footnote{Note that this is a more general and useful idea than the observation in \cite{privaxioms:journal} that two specific variants of differential privacy do not satisfy the axioms but do imply a third variant that does satisfy the axioms.}
\begin{definition}\emph{($\cnf$).}\label{def:cnf}
Given a privacy definition $\priv$, its consistent normal form, denoted by $\cnf(\priv)$, is the smallest set of algorithms that contains $\priv$ and satisfies Axioms \ref{ax:post} and \ref{ax:conv}.
\end{definition}
Essentially, the consistent normal form uses Axioms \ref{ax:post} and \ref{ax:conv} to turn implicit assumptions about which algorithms we trust into explicit statements --
if we are prepared to trust any $\mech\in\priv$ then by Axioms \ref{ax:post} and \ref{ax:conv} we should also trust any $\mech\in\cnf(\priv)$. The set $\cnf(\priv)$ is also the largest set of algorithms we should trust if we are prepared to accept $\priv$ as a privacy definition.
The following theorem provides a useful characterization of $\cnf(\priv)$ that will help us analyze privacy definitions in Section \ref{sec:applications}.
\begin{theorem}\label{thm:closure}
Given a privacy definition $\priv$, its consistent normal form $\cnf(\priv)$ is equivalent to the following.
\begin{enumerate}
\item Define $\priv^{(1)}$ to be the set of all (deterministic and randomized) algorithms of the form $\randalg\circ\mech$, where $\mech\in\priv$, $\range(\mech)\subseteq\domain(\randalg)$, and the random bits of $\randalg$ and $\mech$ are independent of each other.
\item For any positive integer $n$, finite sequence $\mech_1,\dots,\mech_n$ and probability vector $\vec{p}=(p_1,\dots,p_n)$, use the notation $\choice^{\vec p}(\mech_1,\dots,\mech_n)$ to represent the algorithm that runs $\mech_i$ with probability $p_i$.
Define $\priv^{(2)}$ to be the set of all algorithms of the form $\choice^{\vec{p}}(\mech_1,\dots,\mech_n)$ where $n$ is a positive integer, $\mech_1,\dots,\mech_n\in\priv^{(1)}$, and $\vec{p}$ is a probability vector.
\item Set $\cnf(\priv)=\priv^{(2)}$.
\end{enumerate}
\end{theorem}
\begin{proof}
See Appendix \ref{app:close}.
\end{proof}
\begin{corollary}\label{cor:one}
If $\priv=\set{\mech}$ consists of just one algorithm, $\cnf(\priv)$ is the set of all algorithms of the form $\randalg\circ\mech$, where $\range(\mech)\subseteq\domain(\randalg)$ and the random bits in $\randalg$ and $\mech$ are independent of each other.
\end{corollary}
\begin{proof}
See Appendix \ref{app:corone}.
\end{proof}
\subsection{The Row Cone}\label{subsec:cnf:rowcone}
Having motivated the row cone in Section \ref{sec:overview:rowcone}, we now formally define it and derive its basic properties.
\begin{definition}[Row Cone]
Let $\inp=\set{D_1,D_2,\dots}$ be the set of possible input datasets and let $\priv$ be a privacy definition. The \emph{row cone} of $\priv$, denoted by $\rowcone(\priv)$, is defined as the set of vectors:
{\small
\begin{eqnarray*}
\Bigg\{\Big(c\cdot P[\mech(D_1)=\omega],~c\cdot P[\mech(D_2)=\omega],\dots\Big) ~:~ c\geq 0,~ \mech\in\cnf(\priv),~ \omega\in\range(\mech)\Bigg\}
\end{eqnarray*}
}
\end{definition}
Recalling the matrix representation of algorithms (as discussed in Section \ref{sec:overview:matrix} and Figure \ref{fig:matrix}), we see that a vector belongs to the row cone if and only if it is proportional to some row of the matrix representation of some trusted algorithm $\mech\in\cnf(\priv)$.
Given a $\mech\in\cnf(\priv)$ and $\omega\in\range(\mech)$, the attacker uses the vector $(P[\mech(D_1)=\omega],~P[\mech(D_2)=\omega],\dots)\in\rowcone(\priv)$ to convert the prior distribution $P(\data=D_i)$ to the posterior $P(\data=D_i~|~\mech(\data)=\omega)$. Scaling this likelihood vector by $c>0$ does not change the posterior distribution, but it does make it easier to work with the row cone.
Constraints satisfied by $\rowcone(\priv)$ are shared by all of the likelihood vectors $(P[\mech(D_1)=\omega],~P[\mech(D_2)=\omega],\dots)\in\rowcone(\priv)$; they therefore constrain the ways an attacker's beliefs can change, no matter what trusted algorithm $\mech\in\cnf(\priv)$ is used and what sanitized output $\omega\in\range(\mech)$ is produced.
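The following minimal Python sketch (with made-up numbers) illustrates this invariance: scaling a likelihood vector from the row cone by any $c>0$ leaves the attacker's posterior unchanged.
{\small
\begin{verbatim}
import numpy as np

prior = np.array([0.5, 0.3, 0.2])   # attacker's prior over D_1, D_2, D_3
row = np.array([0.10, 0.40, 0.25])  # (P[M(D_1)=w], P[M(D_2)=w], P[M(D_3)=w])

def posterior(likelihood, prior):
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

print(posterior(row, prior))
print(posterior(7.3 * row, prior))  # identical: the scaling c cancels
\end{verbatim}
}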
The row cone has an important geometric property:
\begin{theorem}\label{thm:cone}
$\rowcone(\priv)$ is a convex cone.
\end{theorem}
\begin{proof}
See Appendix \ref{app:cone}.
\end{proof}
The fact that the row cone is a convex cone means that it satisfies an associated set of linear constraints (from which we derive semantic privacy guarantees).
For technical reasons, the treatment of these constraints differs slightly depending on whether the row cone is finite dimensional (which occurs if the number of possible datasets is finite) or infinite dimensional (if the set of possible datasets is countably infinite). We discuss this next.
\subsubsection{Finite dimensional row cones.}
A closed convex set in finite dimensions is expressible as the solution set to a system of linear inequalities \cite{Boyd:convex}. When the row cone is \emph{closed} then the linear inequalities have the form:
\begin{eqnarray*}
A_{1,1} P[\mech(D_1)=\omega] + \dots + A_{1,n} P[\mech(D_n)=\omega] &\geq& 0\\
A_{2,1} P[\mech(D_1)=\omega] + \dots + A_{2,n} P[\mech(D_n)=\omega] &\geq& 0\\
\phantom{A_{2,1}}\vdots\phantom{(\mech(D_1)=\omega) + \dots + A_{2,n}}\vdots\phantom{ P(\mech(D_n)=\omega)} &\vdots&\vdots
\end{eqnarray*}
(with possibly some equalities of the form $B_1 P[\mech(D_1)=\omega] + \dots + B_n P[\mech(D_n)=\omega] = 0$ thrown in). When the row cone is not closed, it is still well-approximated by such linear inequalities: their solution set contains the row cone, and the row cone contains the solution set obtained when the ``$\geq$'' in the constraints is replaced with ``$>$''.
\subsubsection{Infinite dimensional row cones.}
When the domain of the data is countably infinite\footnote{We need not consider uncountably infinite domains since digital computers can only process finite bit strings, of which there are countably many.}, vectors in the row cone have infinite length since there is one component for each possible dataset. The vectors in the row cone belong to the vector space $\ell_\infty$, the set of vectors whose components are bounded. Linear constraints in this vector space can have the form:
\begin{eqnarray}
A_1 P[\mech(D_1)=\omega] + A_2 P[\mech(D_2)=\omega] + \dots \geq 0\label{eqn:zfdc}\\
(\text{where }\sum_i |A_i|<\infty)\nonumber
\end{eqnarray}
but, if one accepts the Axiom of Choice, linear constraints are much more complicated and are generally defined via finitely additive measures \cite{analysishandbook}. On the other hand, in constructive mathematics\footnote{More precisely, mathematics based on Zermelo-Fraenkel set theory plus the Axiom of Dependent Choice \cite{analysishandbook}}, such more complicated linear constraints cannot be proven to exist (\cite{analysishandbook}, Sections 14.77, 23.10, and 27.45, and \cite{Lauwers10fa}). Therefore we only consider the types of linear constraints shown in Equation \ref{eqn:zfdc}.
\subsubsection{Interpretation of linear constraints.}
Starting with a linear inequality of the form $A_1 P(\mech(D_1)=\omega) + A_2 P(\mech(D_2)=\omega) + \dots \geq 0$, we can separate out the positive coefficients, say $A_{i_1}, A_{i_2},\dots$, from the negative coefficients, say $A_{i^\prime_1}, A_{i^\prime_2},\dots$, to rewrite it in the form:
\begin{eqnarray*}
A_{i_1} P(\mech(D_{i_1})=\omega) + A_{i_2} P(\mech(D_{i_2})=\omega) + \dots \geq |A_{i^\prime_1}|~ P(\mech(D_{i^\prime_1})=\omega) + |A_{i^\prime_2} |~P(\mech(D_{i^\prime_2})=\omega) + \dots
\end{eqnarray*}
where all of the coefficients are now positive. We can view each $A_{i_j}$ as a possible value for the prior probability $P(\data=D_{i_j})$ (or a value proportional to a prior probability). Set $S_1=\set{D_{i_1},D_{i_2},\dots}$ and $S_2=\set{D_{i^\prime_1}, D_{i^\prime_2},\dots}$. This allows us to interpret the linear constraints as statements such as $\alpha P(\data\in S_1,\mech(\data)=\omega)\geq P(\data\in S_2,\mech(\data)=\omega)$. Further algebraic manipulations (and a use of constants independent of $\mech$) result in statements such as:
\begin{eqnarray}
\alpha &\geq& \frac{P(\data\in S_2~|~\mech(\data)=\omega)}{P(\data\in S_1~|~\mech(\data)=\omega)}\label{sem:1}\\
\alpha^\prime &\geq& \frac{P(\data\in S_2~|~\mech(\data)=\omega)}{P(\data\in S_1~|~\mech(\data)=\omega)} \Big/\frac{P(\data\in S_2)}{P(\data\in S_1)}\label{sem:2}
\end{eqnarray}
Equation \ref{sem:1} means that if an attacker uses a certain class of prior distributions then after seeing the sanitized data, the probability of some set $S_2$ is no more than $\alpha$ times the probability of some set $S_1$. Equation \ref{sem:2} means that if an attacker uses a certain class of priors, then the relative odds of $S_2$ vs. $S_1$ can increase by at most $\alpha^\prime$ after seeing the sanitized data\footnote{In fact, that idea has led to the creation of a large class of privacy definitions \cite{pufferfish} as a followup to this framework; the linear constraints that characterize privacy definitions in \cite{pufferfish} are precisely the constraints of what we here call the row cone, hence all the difficult parts of the framework have been bypassed in \cite{pufferfish}.}.
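To make one of these manipulations explicit (assuming $P(\mech(\data)=\omega)>0$): dividing both sides of $\alpha P(\data\in S_1,\mech(\data)=\omega)\geq P(\data\in S_2,\mech(\data)=\omega)$ by $P(\mech(\data)=\omega)$ gives
\begin{eqnarray*}
\alpha~ P(\data\in S_1~|~\mech(\data)=\omega) &\geq& P(\data\in S_2~|~\mech(\data)=\omega)
\end{eqnarray*}
which rearranges to Equation \ref{sem:1}; dividing through further by the prior odds $P(\data\in S_2)/P(\data\in S_1)$, a constant independent of $\mech$, gives Equation \ref{sem:2} with $\alpha^\prime = \alpha\, P(\data\in S_1)/P(\data\in S_2)$.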
Of particular importance are the sets $S_1$ and $S_2$ of possible input datasets, whose relative probabilities are constrained by the privacy definition. In an ideal world they would correspond to something we are trying to protect (for example, $S_1$ could be the set of databases in which Bob has cancer and $S_2$ could be the set of databases in which Bob is healthy). If a privacy definition is not properly designed, $S_1$ and $S_2$ could correspond to concepts that may not need protection for certain applications (for example, $S_1$ could be the set of databases with even parity and $S_2$ could be the set of databases with odd parity).
In any case, it is important to examine existing privacy definitions and even specific algorithms to see which sets they end up protecting.
\eat{
\subsection{Simple Examples from Folklore}\label{sec:cnf:ex}
In this section, as a warmup, we relate $\cnf(\priv)$ and $\rowcone(\priv)$ to known semantic guarantees from folklore.
\subsubsection{Differential Privacy}\label{sec:cnf:ex:diffp}
Let $\diffpriv$ denote the set of algorithms satisfying $\epsilon$-differential privacy (Definition \ref{def:diffpriv}). It is easy to see that $\diffpriv$ satisfies Axioms \ref{ax:post} and \ref{ax:conv} and so it is already in consistent normal form: $\diffpriv=\cnf(\diffpriv)$.
Furthermore, $\rowcone(\diffpriv)$ can be easily extracted. The vector $\vec{x}=(x_1,x_2,\dots)\in\rowcone(\diffpriv)$ if and only if $x_i\leq e^\epsilon x_j$ whenever $D_i$ and $D_j$ differ in the value of one tuple. Alternatively, with $c>0$, the vector $$(c*P[\mech(D_1)=\omega],~~ c*P[\mech(D_2)=\omega],~~\dots)$$ belongs to $\rowcone(\diffpriv)$ if and only if the linear inequality $cP(\mech(D_i)=\omega)\leq e^\epsilon cP(\mech(D_j)=\omega)$ is satisfied for all pairs of datasets $D_i,D_j$ that differ in the value of one tuple.
Here is how these linear inequalities translate into semantic guarantees. A simple, well-known computation shows:
\begin{eqnarray}
& P(\mech(D_i)=\omega)\leq e^\epsilon P(\mech(D_j)=\omega) \nonumber \\
\Leftrightarrow & \frac{P(\data=D_i) P(\mech(D_i)=\omega)}{P(\data=D_j) P(\mech(D_j)=\omega) } \leq e^\epsilon \frac{P(\data=D_i) }{P(\data=D_j) } \label{eqn:bayesdiffp}
\end{eqnarray}
According to folklore, Equation \ref{eqn:bayesdiffp} is interpreted in terms of prior odds and posterior odds: if the attacker believes that table $D_i$ (e.g., the table where Bob has cancer) is $\alpha$ times as likely as $D_j$ (e.g., the table where Bob does not have cancer and all else is the same), then after seeing the sanitized output $\omega$, the attacker will believe that $D_i$ is only at most $e^\epsilon\alpha$ times as likely as $D_j$. We emphasize that these semantics of differential privacy are well known; we included this example because it helps illustrate the concepts of $\cnf(\priv)$ and $\rowcone(\priv)$, whose definition and use are contributions of this paper.
\subsubsection{Syntactic Methods}\label{sec:cnf:ex:syntactic}
Syntactic privacy definitions are those that place restrictions on the format of the output that an algorithm is allowed to produce. As discussed in Section \ref{sec:related:syntactic}, $k$-anonymity \cite{samarati01:microdata} is a prototype of such privacy definitions. The original version of $k$-anonymity did not place any restrictions on the types of generalizations (coarsening) that can be performed on the input data. In the folklore, it is well-known that a $k$-anonymous algorithm can encode its entire input as a $k$-anonymous table. As a result, its consistent normal form is easy to compute.
\begin{theorem}\label{thm:nok}Given a fixed schema with a quasi-identifier that contains an integer-valued attribute, the consistent normal form of $k$-anonymity consists of every algorithm whose input domain contains tables with this schema. The row cone consists of all vectors.
\end{theorem}
The essence of Theorem \ref{thm:nok} is that without additional restrictions, $k$-anonymity cannot prevent a malicious anonymization algorithm from uniquely encoding its input into the format of a $k$-anonymous table that can be efficiently decoded to retrieve the input. Since the privacy definition cannot prevent such behavior, no worst-case semantic guarantees exist.
With restrictions on how the data is coarsened and on how the anonymization algorithms behave \cite{xiaotransparent,cormodeminimize}, it is possible to exclude such malicious algorithms whose outputs uniquely determine their inputs. However, such restrictions do not necessarily produce privacy definitions that prevent \emph{side-channel attacks} in which an algorithm uses the output format to encode some sensitive information about the input. Examples of side-channel attacks include: the minimality attack \cite{wong:minimality,fang08:hiding} in which algorithms are forced to minimize a utility metric and end up accidentally leaking sensitive information\footnote{The restrictions studied by \cite{xiaotransparent,cormodeminimize} were designed to thwart such attacks.}; an algorithm that outputs the table in Figure \ref{fig:kanb} (see Section \ref{sec:related:syntactic}) if the input is the table from Figure \ref{fig:kana} and suppresses all attributes otherwise; an algorithm that suppresses the Age attribute only if Bob does not have cancer (hence unsuppressed Age values imply Bob has cancer).
For these reasons, we believe that when new syntactic privacy definitions are proposed, they should be accompanied by their row cones so that deficiencies such as possibilities of side-channel attacks can be evaluated.
\subsubsection{Partitioning in Lieu of Syntactic Restrictions}\label{sec:cnf:ex:partition}
Partitioning mechanisms such as \cite{minimaldefense} are alternatives to syntactic methods. Given a partitioning $\partition$ of the input domain $\inp$, let $\mech_{\partition}$ be the algorithm that, on input $D$, returns the id of the partition containing $D$. Setting $\priv=\set{\mech_{\partition}}$, then $\cnf(\priv)\equiv\cnf(\set{\mech_{\partition}})$, the set of algorithms we should trust, is the set of algorithms that satisfy:
\begin{definition}[$\partition$-Partition Privacy] Given a partitioning $\partition$ of the input space $\inp$, a mechanism $\mech$ satisfies $\partition$-partition privacy if $P[\mech(D_i)=\omega]=P[\mech(D_j)=\omega]$ whenever datasets $D_i$ and $D_j$ belong to the same partition in $\partition$.
\end{definition}
As with differential privacy, the row cone can be easily read off of the definition: $\vec{x}=(x_1,x_2,\dots)\in\rowcone(\set{\mech_{\partition}})$ if and only if $x_i=x_j$ whenever $D_i$ and $D_j$ are in the same partition. Alternatively, for $c>0$, the vector $(cP[\mech(D_1)=\omega],~cP[\mech(D_2)=\omega],~\dots)\in\rowcone(\set{\mech_{\partition}})$ if and only if $cP(\mech(D_i)=\omega) = cP(\mech(D_j)=\omega)$ for $D_i$ and $D_j$ in the same partition. The Bayesian guarantees are obvious: if $D_i$ was believed to be $\alpha$ times as likely as $D_j$ before seeing the sanitized output, then it is still $\alpha$ times as likely after seeing the sanitized output as long as $D_i$ and $D_j$ are in the same partition.
One may weaken the privacy definition by allowing a choice between different partitionings $\partition_1,\partition_2,\dots,\partition_k$. The trusted set of algorithms would become $\cnf(\set{\mech_{\partition_1},\mech_{\partition_2},\dots,\mech_{\partition_k}})$. The semantic guarantees then heavily depend on the different ways these partitions intersect each other.
}
\section{Introduction}\label{sec:intro}
\input{intro}
\section{The Bird's-Eye View}\label{sec:overview}
\input{overview}
\section{Related Work}\label{sec:related}
\input{related}
\section{Consistent Normal Form and the Row Cone}\label{sec:cnf}
\input{cnf}
\section{Applications}\label{sec:applications}
\input{applications}
\subsection{Randomized Response}\label{sec:applications:rr}
\input{rr}
\subsection{FRAPP and PRAM}\label{sec:applications:frapp}
\input{frapp}
\subsection{Additive Noise} \label{sec:applications:addnoise}
\input{noise}
\subsection{Relaxing Privacy Definitions}\label{sec:applications:relax}
\input{relax}
\section{Conclusions}\label{sec:conclusions}
\input{conclusions}
\bibliographystyle{abbrv}
\subsubsection{Differenced Negative Binomial Mechanism}\label{sec:applications:negbin}
The Geometric$(p)$ distribution is a probability distribution over nonnegative integers $k$ with mass function $p^k(1-p)$. The negative binomial distribution, NB$(p,r)$, is a probability distribution over nonnegative integers $k$ with mass function ${k+r-1\choose k} p^k(1-p)^r$. It is well-known (and easy to show) that an NB$(p,r)$ random variable has the same distribution as the sum of $r$ independent Geometric$(p)$ random variables.
In order to get a distribution over the entire set of integers, we can use the difference of two independent NB$(p,r)$ random variables. This leads to the following noise addition algorithm:
\begin{definition}\label{def:negbinmech}\emph{(Differenced Negative Binomial Mechanism $\mech_{DNB(p,r)}$).}
Define $\mech_{DNB(p,r)}$ to be the algorithm that adds $X-Y$ to its input, where $X$ and $Y$ are two independent random variables having the negative binomial distribution with parameters $p$ and $r$. We call $\mech_{DNB(p,r)}$ the \emph{differenced negative binomial mechanism}.
\end{definition}
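The following Python sketch (an informal check with arbitrary parameters) samples this noise both directly, via numpy's negative binomial generator, and as a difference of sums of geometrics, then compares the sample moments; note that numpy's parameterization takes the success probability, which is $1-p$ in the notation used here.
{\small
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, r, n = 0.4, 3, 200_000

# numpy counts failures before r successes with success probability 1-p,
# matching the pmf C(k+r-1, k) p^k (1-p)^r used here
direct = rng.negative_binomial(r, 1 - p, size=n)

# numpy's geometric starts at 1 (trials until first success): subtract 1
geo_sum = (rng.geometric(1 - p, size=(n, r)) - 1).sum(axis=1)

noise = (rng.negative_binomial(r, 1 - p, size=n)
         - rng.negative_binomial(r, 1 - p, size=n))
print(direct.mean(), geo_sum.mean())   # both ~ r p/(1-p) = 2.0
print(noise.mean(), noise.var())       # ~0 and ~2 r p/(1-p)^2 = 6.67
\end{verbatim}
}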
The relationship to the geometric mechanism \cite{universallyUtilityMaximizingPrivacyMechanisms}, which adds a random integer $k$ with distribution $\frac{1-p}{1+p}p^{|k|}$, is captured in the following lemma:
\begin{lemma}\label{lem:mechgeo}
$\linebreak[0]\mech_{DNB(p,1)}$, the differenced negative binomial mechanism with $r=1$, is the geometric mechanism.
\end{lemma}
\begin{proof}
See Appendix \ref{app:mechgeo}.
\end{proof}
The following theorem gives us the row cone of the differenced negative binomial mechanism.
\begin{theorem}\label{thm:dnbrowcone}
A bounded row vector $\vec{x}=(\dots, \linebreak[0]x_{-2}, \linebreak[0]x_{-1}, \linebreak[0]x_0, x_1, x_2, \dots)$ belongs to $\rowcone(\set{\mech_{DNB(p,r)}})$ if for all integers $k$,
$$\forall k:~~~\sum\limits_{j=-r}^r (-1)^j f_B\left(j;\frac{p}{1+p},r\right) x_{k+j} \geq 0$$
where $p$ and $r$ are the parameters of the differenced negative binomial distribution and $f_B(\cdot;p/(1+p),r)$ is the probability mass function of the difference of two independent binomial (not negative binomial) distributions whose parameters are $p/(1+p)$ (success probability) and $r$ (number of trials).
\end{theorem}
\begin{proof}
See Appendix \ref{app:dnbrowcone}.
\end{proof}
To interpret Theorem \ref{thm:dnbrowcone} note that (1) the coefficients of the linear inequality are given by the distribution of the difference of two binomials, (2) the coefficients alternate in signs, and (3) for each integer $k$, the corresponding linear inequality has the coefficients shifted over by $k$ spots.
One interpretation of Theorem \ref{thm:dnbrowcone}, therefore, is that if an attacker has managed to rule out all possible inputs except $k-r, k-r+1,\dots, k+r-1, k+r$ and has a prior on these inputs that corresponds to the difference of two binomials (centered at $k$) then after seeing the sanitized output of $\mech_{DNB(p,r)}$, the attacker will believe that the set of possible inputs $\set{\dots, k-3, k-1, k+1,\dots}$ is not more likely than $\set{\dots, k-4, k-2, k, k+2, \dots}$. Again we see a notion of protection of parity but for a smaller set of possible inputs, and note that \emph{initially} this looks like a one-sided guarantee -- the posterior probability of odd offsets from $k$ does not increase beyond the posterior probability of the even offsets from $k$.
However, what is surprising to us is that this kind of guarantee has many strong implications. To illustrate this point, consider $\mech_{DNB(p,1)}$ which is equivalent to the geometric mechanism. The linear inequalities in Theorem \ref{thm:dnbrowcone} then simplify (after some simple manipulations) to $-x_{k-1} + (p+1/p)x_k - x_{k+1}\geq 0$ which means that a mechanism must satisfy for all $k$, $-P[\mech(k-1)=\omega] +(p+1/p)P[\mech(k)=\omega]-P[\mech(k+1)=\omega]\geq 0$. Using these inequalities in the following telescoping sum, we see that they imply the familiar $\epsilon$-differential privacy constraints with $\epsilon=-\log p$ (so that $e^\epsilon=1/p$).
\begin{eqnarray*}
&& p^{-1} P[\mech(k)=\omega]-P[\mech(k-1)=\omega] \\
&=&\sum\limits_{j=0}^\infty p^j\left(-P[\mech(k-1+j)=\omega] +(p+1/p)P[\mech(k+j)=\omega]-P[\mech(k+1+j)=\omega]\right)\geq 0\\
&& p^{-1} P[\mech(k)=\omega]-P[\mech(k+1)=\omega] \\
&=&\sum\limits_{j=0}^\infty p^j\left(-P[\mech(k-1-j)=\omega] +(p+1/p)P[\mech(k-j)=\omega]-P[\mech(k+1-j)=\omega]\right)\geq 0
\end{eqnarray*}
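The telescoping argument can also be checked numerically. The following Python sketch (with an arbitrary $p$ and a truncated likelihood row) verifies that the geometric mechanism's likelihood rows satisfy the alternating inequalities and that neighboring likelihoods never differ by more than a factor of $e^\epsilon=1/p$.
{\small
\begin{verbatim}
import numpy as np

p, n = 0.5, 40
k = np.arange(-n, n + 1)
x = (1 - p) / (1 + p) * p ** np.abs(k)  # likelihood row for output w = 0

# the alternating inequalities: -x_{k-1} + (p + 1/p) x_k - x_{k+1} >= 0
lhs = -x[:-2] + (p + 1 / p) * x[1:-1] - x[2:]
assert (lhs >= -1e-15).all()

# the implied differential-privacy bound on neighboring likelihoods
ratios = np.maximum(x[1:] / x[:-1], x[:-1] / x[1:])
print(ratios.max(), 1 / p)              # max ratio equals e^eps = 1/p
\end{verbatim}
}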
The take-home message, we believe, from this example is that protections on parity, even one-sided ones, can be very powerful (for example, we saw how the one-sided protections in Theorem \ref{thm:dnbrowcone} imply the two-sided protections of differential privacy). Thus an interesting direction for future work is to develop methods for analyzing how different guarantees relate to each other; for example, if we protect a fact $X$, then what else do we end up protecting?
\subsubsection{Skellam Noise}\label{sec:applications:skellam}
In the previous section, we saw how (differenced) negative binomial noise was related to protections against attackers with (differenced) binomial priors, thus exhibiting a dual relationship between the binomial and negative binomial distributions. In this section, we study noise distributed according to the Skellam distribution \cite{skellamdist}, which turns out to be its own dual.
The Poisson$(\lambda)$ distribution is a probability distribution over nonnegative integers $k$ with mass function $e^{-\lambda}\frac{\lambda^k}{k!}$. A random variable $Z$ has the Skellam$(\lambda_1,\lambda_2)$ distribution if it is equal to the difference $X-Y$ of two independent random variables $X$ and $Y$ having the Poisson$(\lambda_1)$ and Poisson$(\lambda_2)$ distributions, respectively \cite{skellamdist}.
\begin{theorem}\label{thm:skellam}
Let the input domain $\inp=\set{\dots, -2, -1, 0,\linebreak[0] 1, 2, \dots}$ be the set of integers. Let $\mech_{\text{skell($\lambda_1,\lambda_2$)}}$ be the algorithm that adds to its input a random integer $k$ with the Skellam$(\lambda_1,\lambda_2)$ distribution and let $f_Z(\cdot; \lambda_1,\lambda_2)$ be the probability mass function of the Skellam$(\lambda_1,\lambda_2)$ distribution. A bounded row vector $\vec{x}=(\dots, x_{-2}, x_{-1}, x_0, x_1, x_2, \dots)$ belongs to $\rowcone(\set{\mech_{\text{skell($\lambda_1,\lambda_2$)}}})$ if for all integers $k$,
\begin{eqnarray*}
\sum\limits_{j=-\infty}^\infty (-1)^j f_Z(j;\lambda_1,\lambda_2)x_{k+j}\geq 0
\end{eqnarray*}
\end{theorem}
\begin{proof}
See Appendix \ref{app:skellam}.
\end{proof}
As before, we see that Skellam noise protects parity if the attacker uses a Skellam prior that is shifted\footnote{i.e. the prior has the distribution of $Z+k$ where $k$ is a constant and $Z$ has the Skellam distribution.} by $k$ so that the posterior probability of the set $\set{\dots,k-3, k-1, k+1, k+3,\dots}$ cannot be higher than that of the set $\set{\dots,k-2,k,k+2,\dots}$.
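The self-duality can be checked numerically as well. The following Python sketch (with arbitrary parameters and truncation length) builds the Skellam pmf from truncated Poisson pmfs and verifies that convolving it with its alternating-sign version yields a scaled delta; the same Fourier argument used for the differenced negative binomial mechanism shows the constant works out to $e^{-2(\lambda_1+\lambda_2)}$.
{\small
\begin{verbatim}
import numpy as np
from math import exp, factorial

lam1, lam2, n = 1.5, 0.8, 40

def pois(lam):
    return np.array([exp(-lam) * lam**k / factorial(k) for k in range(n)])

f = np.convolve(pois(lam1), pois(lam2)[::-1])  # Skellam pmf on -(n-1)..n-1
k = np.arange(-(n - 1), n)
h = (-1.0) ** k * f

conv = np.convolve(f, h)                     # should be a scaled delta at 0
center = len(conv) // 2
print(conv[center], exp(-2 * (lam1 + lam2))) # both ~ e^{-2(lam1+lam2)}
print(np.abs(np.delete(conv, center)).max()) # ~0 away from k = 0
\end{verbatim}
}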
\subsubsection{Other distributions.}
When the input domain is the set of integers there is a general technique for deriving the row cone corresponding to an algorithm that adds integer-valued noise to its inputs. If the noise distribution has probability mass function $f$, then the matrix representation of the noise-addition algorithm is a matrix $M$ (with rows and columns indexed by integers) whose $(i,j)$ entry is $f(i-j)$. One can take the Fourier series transform (characteristic function) $\widehat{f}(t)=\sum_{\ell=-\infty}^\infty f(\ell)e^{i\ell t}$. Let $g$ be the inverse transform of $1/\widehat{f}(t)$, if it exists. Then the inverse of the matrix $M$ is a matrix whose $(i,j)$ entries are $g(i-j)$. In combination with Theorem \ref{thm:invcnfinf}, this allows one to derive the linear constraints defining the row cone. We used this approach to derive the results of Sections \ref{sec:applications:negbin} and \ref{sec:applications:skellam} and the proof of Theorem \ref{thm:dnbrowcone} provides a formal justification for this technique.
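The following Python sketch implements this recipe numerically (a rough finite-grid approximation, not a formal derivation; the grid size, truncation, and example parameters are arbitrary): it approximates $\widehat{f}$, inverts it, and recovers $g$. For geometric noise the recovered filter is proportional to $(-1,~ p+1/p,~ -1)$, matching the inequalities of Section \ref{sec:applications:negbin}.
{\small
\begin{verbatim}
import numpy as np

def inverse_filter(f, support, n_grid=4096, out_range=20):
    # approximate g = inverse Fourier series of 1/f-hat on a uniform grid
    t = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    fhat = sum(f(k) * np.exp(1j * k * t) for k in support)
    ks = np.arange(-out_range, out_range + 1)
    g = np.array([np.mean(np.exp(-1j * k * t) / fhat) for k in ks])
    return ks, g.real

# example: geometric noise f(k) = (1-p)/(1+p) p^|k|
p = 0.5
f = lambda k: (1 - p) / (1 + p) * p ** abs(k)
ks, g = inverse_filter(f, support=range(-60, 61))

mid = len(g) // 2
print(g[mid - 1 : mid + 2])   # ~ (-2, 5, -2), proportional to (-1, 2.5, -1)
\end{verbatim}
}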
\subsection{Basic Concepts}\label{sec:overview:basic}
Let $\inp=\set{D_1, D_2,\dots}$ be the set of all possible databases. We now explain the roles played by data curators, attackers, and privacy definitions.
\textbf{The Data Curator} owns a dataset $D\in\inp$. This dataset contains information about individuals, business secrets, etc., and therefore cannot be published as is. Thus the data curator will first choose a privacy definition and then an algorithm $\mech$ that satisfies this definition. The data curator will apply $\mech$ to the data $D$ and will then release its output (i.e. $\mech(D)$), which we refer to as the \emph{sanitized output}. We assume that the schema of $D$ is public knowledge and that the data curator will disclose the privacy definition, release all details of the algorithm $\mech$ (except for the specific values of the random bits it used), and release the sanitized output $\mech(D)$.
\textbf{The Attacker} will use the information about the schema of $D$, the sanitized output $\mech(D)$, and knowledge of the algorithm $\mech$ to make inferences about the sensitive information contained in $D$. In our model, the attacker is computationally unbounded. The attacker may also have side information -- in the literature this is often expressed in terms of a prior distribution over possible datasets $D_i\in\inp$. In this paper we are mostly interested in guarantees against attackers who reason probabilistically and so we also assume that an attacker's side information is encapsulated in a prior distribution.
\textbf{A Privacy Definition} is often expressed as a set of algorithms that we trust (e.g., \cite{warner65:randomizedResponse,privaxioms:journal}), or a set of constraints on how an algorithm behaves (e.g., \cite{diffprivacy}), or on the type of output it produces (e.g., \cite{samarati01:microdata}). Note that treating a privacy definition as a set of algorithms is the more general approach that unifies all of these ideas \cite{privaxioms:journal} -- if a set of constraints is specified, a privacy definition becomes the set of algorithms that satisfy those constraints; if outputs in a certain form (such as $k$-anonymous tables \cite{samarati01:microdata}) are required, a privacy definition becomes the set of algorithms that produce those types of outputs, etc. The reason that a privacy definition should be viewed as a set of algorithms is that it allows us to manipulate privacy definitions using set theory.
Formally, a privacy definition is
the set of algorithms \emph{with the same input domain} that are trusted to produce nonsensitive outputs from sensitive inputs.
We therefore use the notation $\priv$ to refer to a privacy definition and $\mech\in\priv$ to mean that the algorithm $\mech$ satisfies the privacy definition $\priv$.
The data curator will choose a privacy definition based on what it can guarantee about the privacy of sensitive information. If a privacy definition offers too little protection (relative to the application at hand), the data curator will avoid it because sensitive information may end up being disclosed, thereby causing harm to the data curator. On the other hand, if a privacy definition offers too much protection, the resulting sanitized data may not be useful for statistical analysis. Thus it is important for the data curator to know exactly what a privacy definition guarantees.
\textbf{The Goal} is to determine what guarantees a privacy definition provides. In this paper, when we discuss semantic guarantees, we are interested in the guarantees that always hold regardless of what sanitized output is produced by an algorithm satisfying that privacy definition. We focus on computationally unbounded Bayesian attackers and look for bounds on how much their beliefs change after seeing sanitized data. It is important to note that the guarantees will depend on assumptions about the attacker's prior distribution. This is necessary, since it is well-known that without any assumptions, it is impossible to preserve privacy while providing useful sanitized data \cite{naorgame,nfl,pufferfish}.
\subsection{Overview}\label{sec:overview:method}
In a nutshell, our approach is to represent deterministic and randomized algorithms as matrices (with possibly infinitely many rows and columns) and to represent privacy definitions as sets of algorithms and hence as sets of matrices. If our goal is to analyze only a single algorithm, we simply treat it as a privacy definition (set) containing just one algorithm. The steps of our framework then require us to normalize the privacy definitions to remove some implicit assumptions (we call the result the \emph{consistent normal form}), extract the set of all rows that appear in the resulting matrices (we call this the \emph{row cone}), find linear inequalities describing those rows, reinterpret the coefficients of the linear inequalities as probabilities, and reinterpret the inequalities themselves as statements about probabilities to get semantic guarantees. In this section, we describe these steps in more detail and defer a technical exposition of the consistent normal form and row cone to Section \ref{sec:cnf}.
\subsubsection{Algorithms as matrices.}\label{sec:overview:matrix}
Since our approach relies heavily on linear algebra, it is convenient to represent algorithms as matrices. \emph{Every} algorithm $\mech$, randomized or deterministic, that runs on a digital computer can be viewed as a matrix in the following way.
An algorithm has an input domain $\inp=\set{D_1, D_2,\dots}$ consisting of datasets $D_i$, and a range $\set{\omega_1,\omega_2,\dots}$. The input domain $\inp$ and $\range(\mech)$ are necessarily countable because each $D_i\in\inp$ and $\omega_j\in\range(\mech)$ must be encoded as finite bit strings. The probability $P(\mech(D_i)=\omega_j)$ is well defined for both randomized and deterministic algorithms. The \emph{matrix representation} of an algorithm is defined as follows (see also Figure \ref{fig:matrix}).
\begin{definition}[Matrix representation of $\mech$]\label{def:matrix} Let $\mech$ be a deterministic or randomized algorithm with domain $\inp=\set{D_1, D_2,\dots}$ and range $\set{\omega_1,\omega_2,\dots}$. The matrix representation of $\mech$ is a (potentially infinite) matrix whose columns are indexed by $\inp$ and rows are indexed by $\range(\mech)$. The value of each entry $(i,j)$ is the quantity $P(\mech(D_j)=\omega_i)$.
\end{definition}
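As a concrete toy instance of this definition, the following Python sketch writes down the matrix representation of randomized response on a single bit (report the true bit with probability $p$, the flipped bit otherwise); the parameter value is made up for illustration.
{\small
\begin{verbatim}
import numpy as np

p = 0.75                               # probability of reporting the true bit
#                  D=0     D=1
M = np.array([[    p,    1 - p],       # output w = 0
              [  1 - p,    p  ]])      # output w = 1
assert np.allclose(M.sum(axis=0), 1.0) # each column is a distribution
print(M)
\end{verbatim}
}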
\begin{figure}
\begin{eqnarray*}
\bordermatrix{
& \red{D_1} & \red{D_2} & \red{\dots} \cr
\blue{\omega_1} & P(\mech(D_1)=\omega_1) & P(\mech(D_2)=\omega_1) & \dots \cr
\blue{\omega_2} & P(\mech(D_1)=\omega_2) & P(\mech(D_2)=\omega_2) & \dots \cr
\blue{\omega_3} & P(\mech(D_1)=\omega_3) & P(\mech(D_2)=\omega_3) & \dots \cr
\vdots & \vdots &\vdots& \vdots \cr
}
\end{eqnarray*}
\caption{The matrix representation of $\mech$. Columns are indexed by datasets $\in\domain(\mech)$ and rows are indexed by outputs $\in\range(\mech)$.}\label{fig:matrix}
\end{figure}
\subsubsection{Consistent Normal Form of Privacy Definitions.}
Recall from Section \ref{sec:overview:basic} that we take the unifying view that a privacy definition is a set of algorithms (i.e., the set of algorithms that satisfy certain constraints or produce certain types of outputs).
Not surprisingly, there are many sets of algorithms that do not meet common expectations of what a privacy definition is \cite{privaxioms:journal}. For example, suppose that we decide to trust an algorithm $\mech$ to generate sanitized outputs from the sensitive input data $D$. Suppose we know that a researcher wants to run algorithm $\randalg$ on the sanitized data to build a histogram. If we are willing to release the sanitized output $\mech(D)$ publicly, then we should also be willing to release $\randalg(\mech(D))$. That is, if we trust $\mech$ then we should also trust $\randalg\circ\mech$ (the composition of the two algorithms). In other words, if $\mech\in \priv$, for some privacy definition $\priv$, then $\randalg\circ\mech$ should also be in $\priv$.
Many privacy definitions in the literature do not meet criteria such as this \cite{privaxioms:journal}. That is, $\mech$ may explicitly satisfy a given privacy definition but $\randalg\circ\mech$ may not. However, since the output of $\mech$ is made public and anyone can run $\randalg$ on it, these privacy definitions come with the implicit assumption that the composite algorithm $\randalg\circ\mech$ should be trusted.
Thus, given a privacy definition $\priv$, we first must expand it to include all of the algorithms we should trust (via a new application of privacy axioms). The result of this expansion is called the \emph{consistent normal form} and is denoted by $\cnf(\priv)$. Intuitively, $\cnf(\priv)$ is the complete set of algorithms we should trust if we accept the privacy definition $\priv$ and the privacy axioms. We describe the consistent normal form in full technical detail in Section \ref{ref:cnf:cnf}.
\subsubsection{The Row Cone}\label{sec:overview:rowcone}
\begin{figure}[t]
\center
\begin{tikzpicture}
[scale=4, font=\scriptsize,
extreme/.style={shape=circle,draw=black!100, fill=blue!50!black!50, minimum size=2mm, inner sep=0pt, thick}]
\coordinate (zz1) at (0,0);
\coordinate (pp1) at (1.4, 1.4/10) ;
\coordinate (qq1) at (1/1.5, 1);
\draw[fill=blue!10, color=blue!10] (zz1)--(pp1)--(qq1)--(zz1);
\draw[<->, thick,blue] (-0.1, -0.1/10) to node[sloped, anchor=north west, pos=0.1]{$A_1P[\mech(D_1)=\omega] + A_2 P[\mech(D_2)=\omega]+\dots \geq 0$} (1.5,1.5/10);
\draw[<->, thick] (-0.1/1.5, -0.1) to node[sloped, anchor=south west, pos=0.05]{$B_1P[\mech(D_1)=\omega] + B_2 P[\mech(D_2)=\omega]+\dots \geq 0$} (1.1/1.5,1.1);
\end{tikzpicture}
\caption{An example of a row cone (shaded) and its defining linear inequalities.}\label{fig:rowcone}
\end{figure}
Recall that we represent algorithms as matrices (Definition \ref{def:matrix}) and privacy definitions as sets of algorithms. Therefore $\cnf(\priv)$ is really a \emph{set of matrices}.
The row cone of $\priv$, denoted by $\rowcone(\priv)$, is the set of vectors of the form $c\vec{x}$ where $c\geq 0$ and $\vec{x}$ is a row of a matrix corresponding to some algorithm $\mech\in\cnf(\priv)$.
How does the row cone capture the semantics of $\priv$? Suppose $\mech\in\cnf(\priv)$ is one of the algorithms that we trust. Let $D$ be the true input dataset and let $\omega = \mech(D)$ be the sanitized output that we publish. A Bayesian attacker who sees output $\omega$ and is trying to derive sensitive information will need to compute the posterior distribution $P(\data=D_i~|~\mech(\data)=\omega)$ for all datasets $D_i$. This posterior distribution is a function of the attacker's prior $P(\data=D_i)$ and the vector of probabilities:
$$[P(\mech(D_1)=\omega),~~ P(\mech(D_2)=\omega),~~ \dots]$$
This vector belongs to $\rowcone(\priv)$ because it corresponds to some row of the matrix representation of $\mech$ (i.e., the row associated with output $\omega$). Note that multiplying this vector by any positive constant will leave the attacker's posterior beliefs unchanged. The row cone is essentially the set of all such probability vectors that the attacker can ever see if we use a trusted algorithm (i.e. something belonging to $\cnf(\priv)$); therefore it determines all the ways an attacker's beliefs can change (from prior to posterior).
Thus constraints satisfied by the row cone are also constraints on how prior probabilities could be turned into posterior probabilities.
In Figure \ref{fig:rowcone} we illustrate a row cone in 2 dimensions (i.e. the input domain consists of only 2 datasets). Each vector in the row cone is represented as a point in 2-d space.
Later in the paper, it will turn out that
the row cone is always a convex set and hence has an associated system of linear inequalities (corresponding to the intersection of halfspaces containing the row cone) as shown in Figure \ref{fig:rowcone}.
\subsubsection{Extracting Semantic Guarantees From the Row Cone}
The row cone is a convex set (in fact, a convex cone) and so satisfies a set of linear inequalities having the forms \cite{Boyd:convex}:
\begin{eqnarray*}
A_1 P(\mech(D_1)=\omega) + A_2 P(\mech(D_2)=\omega) + \dots &\geq& 0 \text{ or}\\
A_1 P(\mech(D_1)=\omega) + A_2 P(\mech(D_2)=\omega) + \dots &=& 0\text{ or }\\
A_1 P(\mech(D_1)=\omega) + A_2 P(\mech(D_2)=\omega) + \dots &>& 0
\end{eqnarray*}
that must hold for all trusted algorithms $\mech\in\cnf(\priv)$ and sanitized outputs $\omega\in\range(\mech)$ they can produce.
The key insight is that we can re-interpret the magnitude of the coefficients $|A_1|, |A_2|,\dots$ of these linear inequalities as probabilities (dividing by $|A_1|+|A_2| + ...$ if necessary) and then re-interpret the linear inequalities as statements about prior and posterior probabilities of an attacker. We give a detailed example in Section \ref{sec:applications:rr}, where we apply our framework to randomized response. The semantic guarantees we extract then have the form: ``if the attacker's prior belongs to set $X$ then here are restrictions on the posterior probabilities the attacker can form'' (note that avoiding any assumptions on prior probabilities/knowledge is not possible if the goal is to release even marginally useful sanitized data \cite{naorgame,nfl,pufferfish}).
\subsection{Evaluating Privacy}
Research in statistical privacy mainly focuses on developing privacy definitions and algorithms for publishing sanitized data (i.e., nonsensitive information) derived from sensitive data. To the best of our knowledge, this paper provides the first framework for extracting semantic guarantees from privacy definitions. Other work on evaluating privacy definitions looks for the presence or absence of specific vulnerabilities in privacy definitions or sanitized data.
In the official statistics community, re-identification experiments are performed to assess whether individuals can be identified from sanitized data records \cite{nowsurvey}. In many such experiments, software is used to link sanitized data records to the original records \cite{winkler04reident}. Reiter \cite{reiterDisclosureRisk} provides a detailed example of how to apply the decision-theoretic framework of Duncan and Lambert \cite{duncanL89:disclosure} to measure disclosure risk. There are many other methods for assessing privacy for the purposes of official statistics; for surveys, see \cite{nowsurvey,willenborgW96:disclosure,willenborg00:elements}.
Other work in statistical privacy seeks to identify and exploit specific types of weaknesses that may be present in privacy definitions.
Dwork and Naor \cite{naorgame} formally proved that it is not possible to publish anonymized data that prevents an attacker from learning information about people who are not even part of the data unless the anonymized data has very little utility or some assumptions are made about the attacker's background knowledge.
Lambert \cite{lambert93:disclosure} suggests that harm can occur even when an individual is linked to the wrong anonymized record (as long as the attacker's methods are plausible). Thus one of the biggest themes in privacy is preventing an attacker from linking an individual to an ``anonymized'' record \cite{dalenius86:haystack}, possibly using publicly available data \cite{sweeney02:kAnon} or other knowledge \cite{ashwin06:ldiversity}. Dinur and Nissim \cite{dinur:privacy} and later Dwork et al. \cite{dwork07:limits} showed fundamental limits to the amount of information that can be released even under very weak privacy definitions (information-theoretically and computationally \cite{Dwork09STOCOnTheComplexity}). These attacks generally work by removing noise that was added in the sanitization process \cite{karguptaDWS03:randomPerturbation,huangDC04:deriving,liu08:noiseattacks}. Ganta et al. \cite{composition08ranjit} demonstrated a composition attack where independent anonymized data releases can be combined to breach privacy; thus a desirable property of privacy definitions is to have privacy guarantees degrade gracefully in the presence of multiple independent releases of sanitized data. The minimality attack \cite{wong:minimality} showed that privacy definitions must account for attackers who know the algorithm used to generate sanitized data; otherwise the attackers may reverse-engineer the algorithm to cause a privacy breach. The de Finetti attack \cite{kifer09attack} shows that privacy definitions based on statistical models are susceptible to attackers who make inferences using different models and use those inferences to undo the anonymization process; thus it is important to consider a wide range of inference attacks. Also, one should consider the possibility that an attacker may be able to manipulate data (e.g. by creating many new accounts in a social network) prior to its release to help break the subsequent anonymization of the data \cite{backstrom07:attackSocialNetwork}. Note also that privacy concerns can also be associated with aggregate information such as trade secrets (and not just rows in a table) \cite{cliftondefin,pufferfish}.
\subsection{Privacy Definitions}
In this section, we review some privacy definitions that will be examined in this paper.
\subsubsection{Syntactic Privacy Definitions}\label{sec:related:syntactic}
A large class of privacy definitions places restrictions on the format of the output of a randomized algorithm. Such privacy definitions are known as \emph{syntactic privacy definitions}. The prototypical syntactic privacy definition is $k$-anonymity \cite{samarati01:microdata,sweeney02:kAnon}.
In the $k$-anonymity model, a data curator first designates a set of attributes to be the \emph{quasi-identifier}. An algorithm $\mech$ then satisfies $k$-anonymity if its input is a table $T$ and its output is another table $T^*$ that is $k$-\emph{anonymous} -- for every tuple in $T^*$, there are at least $k-1$ other tuples that have the same value for the quasi-identifier attributes \cite{samarati01:microdata,sweeney02:kAnon}.
Algorithms satisfying $k$-anonymity typically work by generalizing (coarsening) attribute values. For example, if the data contains an attribute representing the age of a patient, the algorithm could generalize this attribute into age ranges of size $10$ (e.g., $[0-9], [10-19]$, etc.) or ranges of size $20$, etc. Quasi-identifier attributes are repeatedly generalized until a table $T^*$ satisfying $k$-anonymity is produced.
The rationale behind $k$-anonymity is that quasi-identifier attributes may be recorded in publicly available datasets. Linking those datasets to the original table $T$ may allow individual records to be identified, but linking to the $k$-anonymous table $T^*$ will not result in unique matches.
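As a small illustration of the mechanics (a sketch of our own, with hypothetical attribute names and rows, not code from any of the cited works), the following Python fragment generalizes an age attribute into width-10 ranges and checks the $k$-anonymity condition:
\begin{verbatim}
# Sketch: generalize a quasi-identifier attribute and check the
# k-anonymity condition. Attribute names and rows are hypothetical.
from collections import Counter

def generalize_age(age, width=10):
    lo = (age // width) * width
    return f"[{lo}-{lo + width - 1}]"        # e.g. 25 -> "[20-29]"

def is_k_anonymous(rows, quasi_identifier, k):
    counts = Counter(tuple(r[a] for a in quasi_identifier)
                     for r in rows)
    return all(c >= k for c in counts.values())

rows = [{"zip": "130**", "age": generalize_age(a)}
        for a in (25, 27, 23)]
print(is_k_anonymous(rows, ["zip", "age"], k=3))  # True: one group of 3
\end{verbatim}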
\subsubsection{Randomized Response}
Randomized response is a technique developed by Warner \cite{warner65:randomizedResponse} to deal with privacy issues when answering sensitive questions in a face-to-face survey. There are many variations of randomized response. One of the most popular is the following: a respondent answers truthfully with probability $p$ and lies with probability $(1-p)$, thus ensuring that the interviewer is not certain about the respondent's true answer.
Thus the scenario where we can apply randomized response is the following:
the input table $T$ contains one binary attribute and $k$ tuples. We apply randomized response to $T$ by flipping each tuple's binary attribute independently with probability $1-p$. The perturbed table, which we call $T^*$, is then released. Note that randomized response is a privacy definition that consists of exactly one algorithm: the algorithm that flips each bit independently with probability $1-p$. We use our framework to extract semantic guarantees for randomized response in Section \ref{sec:applications:rr}.
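A minimal Python sketch of this one-algorithm privacy definition (our own illustration; the input table below is hypothetical) is:
\begin{verbatim}
# Sketch of the randomized-response algorithm just described: each
# bit is kept with probability p and flipped with probability 1 - p,
# independently across the k tuples. The input table is hypothetical.
import random

def randomized_response(bits, p):
    return [b if random.random() < p else 1 - b for b in bits]

T = [1, 0, 0, 1, 1, 0, 1, 0]   # single binary attribute, k = 8 tuples
print(randomized_response(T, p=0.75))
\end{verbatim}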
\subsubsection{PRAM and FRAPP}\label{sec:related:frapp}
PRAM \cite{gouweleeuwKWW98:PRAM} and FRAPP \cite{shipraH05:frapp} are generalizations of randomized response to tables where tuples can have more than one attribute and the attributes need not be binary. PRAM can be thought of as a set of algorithms that independently perturb tuples, while FRAPP is an extension of PRAM that adds formally specified privacy restrictions to these perturbations.
Let $\tdom$ be the domain of all tuples. Each algorithm $\mech_Q$ satisfying PRAM is associated with a transition matrix $Q$ of transition probabilities, where the entry $Q_{b,a}$ is the probability $P(a\rightarrow b)$ that the algorithm changes a tuple with value $a\in\tdom$ to the value $b\in \tdom$. Given a dataset $D=\set{t_1,\dots,t_n}$, the algorithm $\mech_Q$ assigns a new value to the tuple $t_1$ according to the transition probability matrix $Q$, then it independently assigns a new value to the tuple $t_2$, etc. It is important to note that the matrix representation of $\mech_Q$ (as discussed in Section \ref{sec:overview:matrix}) \emph{is not the same} as the transition matrix $Q$. As we will discuss in Section \ref{sec:applications:frapp}, the relationship between the two is that the matrix representation of $\mech_Q$ is equal to $\bigotimes_n Q$, the $n$-fold Kronecker product of $Q$ with itself.
FRAPP, with privacy parameter $\gamma$, imposes a restriction on these algorithms. This restriction, known as $\gamma$-amplification \cite{evfimievski:limiting:breaches}, requires that the transition matrices $Q$ satisfy the constraints $\frac{Q_{a,b}}{Q_{a,c}}\leq \gamma$ for all $a,b,c\in\tdom$. This condition can equivalently be phrased as $\frac{P(b\rightarrow a)}{P(c\rightarrow a)}\leq \gamma$: no output value may be much more likely to arise from one input than from another.
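The following Python sketch (our own illustration, with a hypothetical binary domain and transition matrix $Q$) builds the matrix representation as the $n$-fold Kronecker product and checks the $\gamma$-amplification constraint:
\begin{verbatim}
# Sketch: matrix representation of a PRAM algorithm M_Q on n tuples
# as the n-fold Kronecker product of Q, plus a check of FRAPP's
# gamma-amplification condition. Q below is a hypothetical example.
import numpy as np

Q = np.array([[0.75, 0.25],     # Q[b, a] = P(a -> b), binary domain
              [0.25, 0.75]])

def matrix_representation(Q, n):
    M = Q
    for _ in range(n - 1):
        M = np.kron(M, Q)       # tuples are perturbed independently
    return M

def gamma_amplified(Q, gamma):
    # P(b -> a) / P(c -> a) = Q[a, b] / Q[a, c] for all a, b, c
    d = Q.shape[0]
    return all(Q[a, b] / Q[a, c] <= gamma
               for a in range(d) for b in range(d) for c in range(d))

print(matrix_representation(Q, n=2).shape)  # (4, 4) for n = 2 tuples
print(gamma_amplified(Q, gamma=3))          # True: 0.75/0.25 = 3
\end{verbatim}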
\subsubsection{Differential Privacy}
Differential privacy \cite{diffprivacy,dwork06Calibrating} is defined as follows:
\begin{definition} \label{def:diffpriv}A randomized algorithm $\mech$ satisfies \emph{$\epsilon$-differential privacy} if for all pairs of databases $T_1,T_2$ that differ only in the value of one tuple and for all sets $S$, $P(\mech(T_1)\in S)\leq e^\epsilon P(\mech(T_2)\in S)$.
\end{definition}
Differential privacy guarantees that the sanitized data that is output has little dependence on the value of any individual's tuple (for small values of $\epsilon$). It is known to be a weaker privacy definition than randomized response. Using our framework, we show in Section \ref{sec:applications:rrdiffp} that the difference between the two is that randomized response provides additional protection for the parity of every subset of the data.
\subsubsection{The relationship between randomized response and differential privacy.}\label{sec:applications:rrdiffp}
It is well known that, when $\epsilon=\log\frac{p}{1-p}$, randomized response satisfies $\epsilon$-differential privacy. Also, for this parameter setting, differential privacy provides the same protection as randomized response for any given bit in the dataset -- a bit corresponds to the record of one individual, and differential privacy would allow a bit's value to be retained with probability at most $e^\epsilon/(1+e^\epsilon)=p$ (and therefore flipped with probability at least $1-p$). However, Theorem \ref{thm:rrsemantics} shows that randomized response goes beyond the protection afforded by differential privacy by requiring stronger protection of the parity of larger sets of bits as well.
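The single-bit case is easy to verify numerically; the sketch below (an illustration, not a proof) confirms that the worst-case likelihood ratio of the bit-flipping channel equals $e^\epsilon$ when $\epsilon=\log\frac{p}{1-p}$:
\begin{verbatim}
# Numerical sanity check of the claim above: for the bit-flipping
# channel with retention probability p, the worst-case likelihood
# ratio between the two one-bit inputs is exactly p/(1-p) = e^eps.
import math

p = 0.75
eps = math.log(p / (1 - p))

def channel(w, b):              # P(output = w | input bit = b)
    return p if w == b else 1 - p

worst = max(channel(w, 0) / channel(w, 1) for w in (0, 1))
print(worst, math.exp(eps))     # both print 3.0 for p = 0.75
\end{verbatim}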
Note that Kasiviswanathan et al. \cite{smithlearn} proved a learning-theoretic separation result between randomized response and differential privacy which roughly states that randomized response cannot be used to efficiently learn a problem called MASKED-PARITY. That concept of parity involves solving a linear system of equations in a $d$-dimensional vector space over the integers modulo 2. While very different from the notion of parity that we study, one direction of future work is to determine if our result about the semantic guarantees of randomized response can lead to a new proof of the result by Kasiviswanathan et al. \cite{smithlearn}.
\section{Introduction}\label{sec1}
Let $\mathbf{X}_1,\ldots,\mathbf{X}_n$ be independent and identically
distrib\-uted (I.I.D.) $p$-variate
random vectors generated from the following model:
\begin{equation}
\label{model} \mathbf{X}_i=\mathbf{W}_i+\bolds{\mu}\qquad
\mbox{for } i=1,\ldots ,n,
\end{equation}
where
$\bolds{\mu}=(\mu_{1},\ldots,\mu_{p})^T$ is a $p$-dimensional unknown
vector of means, $\mathbf{W}_i=(W_{i1},\ldots, W_{i p})^T$ and $\{
\mathbf{W}_i\}_{i=1}^n$ are I.I.D. random vectors with zero mean and
common covariance $\bolds{\Sigma}$. For the $i$th sample, $\{W_{ij}\}
_{j=1}^p$ is a sequence of weakly stationary dependent random variables
with zero mean and
variances $\sigma_j^2$. Motivated by the high-dimensional applications
arising in genetics, finance and other fields,
the current paper focuses on testing high-dimensional hypotheses
\begin{equation}
\label{nullhyper} H_0\dvtx \bolds{\mu}=0\quad \mbox{vs} \quad H_1\dvtx
\mbox{nonzero $\mu_j$ are sparse and faint.}
\end{equation}
The specifications for the sparsity and faintness in the above $H_1$
are the following. There are $p^{1-\beta}$ nonzero $\mu_j$'s (signals)
for a $\beta\in(1/2,1)$, which are sparse since
the signal bearing dimensions constitute only a small fraction of the
total $p$ dimensions. Also under the $H_1$, the signal strength is
faint in that the nonzero $\mu_j = \sqrt{ 2 r \log(p)/n}$ for $r \in
(0, 1)$.
These specifications of the $H_1$ have been the most challenging
``laboratory'' conditions in developing novel testing procedures under
high dimensionality.
\citet{DonohoJin} pioneered the theory of the Higher Criticism (HC)
test which was originally conjectured in \citet{Tukey}, and showed that
the HC test can attain the optimal detection boundary established by
\citet{Ingster} for uncorrelated Gaussian random vectors ($\bolds
{\Sigma
}=\mathbf{I}_p$).
The optimal detection boundary is a phase-diagram in the space of
$(\beta, r)$, the two quantities which define the sparsity and the
strength of nonzero $\mu_j$'s under the $H_1$,
such that if $(\beta, r)$ lies above the boundary, there exists a test
which has asymptotically diminishing probabilities of the type I and
type II errors simultaneously; and if $(\beta, r)$ is below the
boundary, no such test exists.
Hall and Jin (\citeyear{HallJin}, \citeyear{HallJin2010}) investigated the impacts of the column-wise
dependence on the HC test. In particular, \citet{HallJin} found
that the HC test is adversely affected if the dependence is of the
long-range type. If the dependence is weak, and the covariance matrix
is known or can be estimated reliably, the dependence can be utilized
to enhance the signal strength of the testing problem so as to improve
the performance of the HC test. The improvement is reflected in
lowering the needed signal strength $r$ by a constant factor. \citet{Delaigle2009} evaluated the HC test under a nonparametric setting
allowing column-wise dependence, and showed that {the detection
boundary of \citet{DonohoJin}} for the HC test can be maintained
under weak column-wise dependence. \citet{Delaigle} showed
that the standard HC test based on the normality assumption can perform
poorly when the underlying data deviate from the normal distribution
and studied a version of the HC test based on the $t$-statistics
formulation. \citet{Cai2011} considered detecting Gaussian
mixtures which differ from the null in both the mean and the variance.
Arias-Castro, Bubeck and Lugosi (\citeyear{Arias-Castro2012a,Arias-Castro2012b}) established the lower
and upper bounds for the minimax risk for detecting sparse differences
in the covariance.
We show in this paper that there are alternative test procedures for
weakly dependent sub-Gaussian data with unknown covariance which attain
the same detection boundary as the HC test established in \citet{DonohoJin} for Gaussian distributed data with $\bolds{\Sigma
}=\mathbf{I}_p$.
The alternative test statistics are obtained by first constructing,
for $\gamma=1$ and $2$,
\[
T_{\gamma n}(s) = \sum_{j=1}^p |
\sqrt{n}\bar{X}_j/\sigma_j|^{\gamma
} I\bigl( |
\bar{X}_j| \geq\sigma_j\sqrt{\lambda_p(s)/n}
\bigr),
\]
which threshold with respect to $\bar{X}_j$ at a level $\sqrt{\lambda
_p(s)/n}$ for $s \in(0,1)$, where $\lambda_p(s)=2s\log p$, $\bar
{X}_j$ is the sample mean of the $j$th margin of the data vectors and
$I(\cdot)$ is the indicator function. We note that $\gamma=1$ and $2$
correspond to the $L_1$ and $L_2$ versions of the thresholding
statistics, respectively; and $\gamma=0$ corresponds to the HC test
statistic. In the literature, the $L_1$ statistic is called the hard
thresholding in \citet{DonohoJohnstone} and \citet{DonohoJin08},
and the $L_0$ statistic is called the clipping thresholding in \citet{DonohoJin08}.
We then
maximize standardized versions of $T_{\gamma n}(s)$ with respect to $s$
over $\mathcal{S}$, a subset of $(0,1)$, which results in the following
maximal $L_\gamma$-thresholding statistics:
\begin{equation}
\hat{\mathcal{M}}_{\gamma n}=\max_{s\in\mathcal{S}}
\frac
{T_{\gamma
n}(s)-\hat{\mu}_{T_{\gamma n},0}(s)}{\hat{\sigma}_{T_{\gamma n},0}(s)}\qquad \mbox{for $\gamma=0, 1$ and $2$,} \label{eq:Mgamman}
\end{equation}
where
$\hat{\mu}_{T_{\gamma n},0}(s)$ and $\hat{\sigma}_{T_{\gamma n},0}(s)$
are, respectively, estimators of the mean\break ${\mu}_{T_{\gamma n},0}(s)$
and standard deviation ${\sigma}_{T_{\gamma n},0}(s)$ of $T_{\gamma
n}(s)$ under $H_0$, whose forms will be given later in the paper.
{By developing the asymptotic distributions of $\hat{\mathcal
{M}}_{\gamma n}$, the maximal $L_{\gamma}$-thresholding tests are
formulated for $\gamma=0, 1$ and $2$ with the maximal $L_0$-test being
equivalent to the HC test.} An analysis on the relative power
performance of the three tests reveals that if the signal strength
parameter $r \in(0,1)$, the maximal $L_2$-thresholding test is at
least as powerful as the maximal $L_1$-thresholding test, and both the
$L_1$ and $L_2$-thresholding tests are at least as powerful as the HC
test. If we allow a slightly stronger signal so that $r > 2\beta-1$,
the differential power performance of the three tests is amplified with
the maximal $L_2$-test being the most advantageous followed by the
maximal $L_1$-test.
In addition to its connection to the HC test, the maximal $L_{\gamma
}$-thresholding test, by the nature of its formulation, is related to
high-dimensional multivariate testing procedures, for instance, the
tests proposed by \citet{BaiSara} and \citet{ChenQin}.
While these tests can maintain accurate size approximation under a
diverse range of dimensionality and column-wise dependence, their
performance is hampered when the nonzero means are sparse and faint.
The proposed test formulation is also motivated by a set of earlier
works including \citet{DonohoJohnstone} for selecting significant
wavelet coefficients, and \citet{Fan1996} who considered testing for the
mean of a random vector $\mathbf{X}$ with I.I.D. normally distributed components.
We note that the second step of maximization with respect to $s \in
\mathcal{S} \subset(0,1)$ is designed to make the test adaptive to the
underlying signals strength and sparsity, which is the essence of the
HC procedure in \citet{DonohoJin}, as well as that of \citet{Fan1996}.
The rest of the paper is organized as follows. In Section~\ref{sec2} we provide
basic results on the $L_2$-thresholding statistic via the large
deviation method and the asymptotic distribution of the single
threshold statistic. Section~\ref{sec3} gives the asymptotic distribution of
$\hat{\mathcal{M}}_{2n}$ as well as the associated test procedure.
Power comparisons among the HC and the maximal $L_1$ and
$L_2$-thresholding tests are made in Section~\ref{sec4}. Section~\ref{sec5} reports
simulation results which confirm the theoretical results. Some
discussions are given in Section~\ref{sec6}. All technical details are relegated
to the \hyperref[app]{Appendix}.
\section{Single threshold test statistic}\label{sec2}
Let $\mathbf{X}_1, \ldots, \mathbf{X}_n$ be an independent
\mbox{$p$-dimensional} random sample from a common distribution $F$,
and $\mathbf{X}_i = \mathbf{W}_i + \bolds{\mu}$, where
$\bolds{\mu} = (\mu_1, \ldots, \mu_p)^T$ is the vector of means and
$\mathbf
{W}_{i}= (W_{i 1}, \ldots,\break W_{i p})^{T}$ is a vector consisting of
{potentially dependent random variables} with zero mean and finite
variances. The dependence among $\{W_{i j}\}_{j=1}^p$ is called the
column-wise dependence in $\mathbf{W}_i$. Those nonzero $\mu_j$ are
called ``signals.''
Let $\bar{X}_j={n}^{-1}\sum_{i=1}^nX_{i j}$, $\sigma_j^2 =\operatorname
{Var}(W_{i j})$ and $s_{j}^2=(n-1)^{-1}\sum_{i=1}^n(X_{ij}-\bar
{X}_j)^2$ be the sample variance for the $j$th margin.
The signal strength in the $j$th margin can be measured by the
$t$-statistics $\sqrt{n} \bar{X}_j /s_j$ or the $z$-statistics $\sqrt {n} \bar{X}_j /\sigma_j$ if $\sigma_j$ is known.
For easy expedition, the test statistics will be constructed based on
the $z$-statistics by assuming $\sigma_{j}$ is known and, without loss of
generality, we assume $\sigma_j^2=1$.
Using the $t$-statistics actually leads to less restrictive conditions
for the underlying random variables since the large deviation results
for the self-normalized
$t$-statistics can be established under weaker conditions to allow
heavier tails in the underlying distribution as demonstrated in \citet{Shao}, \citet{Jing} and \citet{WangHall}.
{See \citet{Delaigle} for analysis on the sparse signal
detection using the $t$-statistics.}
We assume the following assumptions in our analysis:
\begin{longlist}[(C.1)]
\item[(C.1)] The dimension $p = p(n) \to\infty$ as $n \to\infty$ and $\log
(p)=o(n^{1/3})$.
\item[(C.2)] {There exists a positive constant $H$ such that, for any $j \neq
l\in\{1,\ldots, p\}$, }
$E(e^{h^{T}(W_{1 j}^d,W_{1 l}^d)})<\infty$ for $h \in[-H,H]\times[-H,H]$
and $d=2$.
\item[(C.3)] For each $i=1, \ldots,n$, $\{W_{ij}\}_{j=1}^p$ is a {weakly
stationary} sequence such that $E(W_{ij})=E(W_{i(j+k)}) =0$ and $\operatorname
{Cov}(W_{ij},W_{i(j+k)})$ does not depend on $j$ for any integer $k$.
And $\sum_k|\rho_k|<\infty$ where $\rho_k=\operatorname{Cov}(W_{i1}, W_{i(k+1)})$.
\item[(C.4)] Among the $p$ marginal means, there are $m=p^{1-\beta}$ signals
for a $\beta\in(1/2, 1)$ and the signal $\mu_j = \sqrt{2r\log(p)/n}$
for a $r > 0$. The signals' locations $\ell_1<\ell_2<\cdots<\ell_m$ are
randomly selected from $\{1, 2, \ldots, p\}$ without replacement
so that
\begin{eqnarray}
\label{random-location} P(\ell_1=p_1,\ldots,\ell_m=p_m)
=\pmatrix{p\cr m}^{-1}
\nonumber
\\[-8pt]
\\[-8pt]
\eqntext{\mbox{for all
$1 \le
p_1 < p_2 < \cdots< p_m \le p$}.}
\end{eqnarray}
\end{longlist}
(C.1) specifies that the growth rate of $p$ relative to the sample size $n$ is
in the paradigm of ``large $p$, small $n$.'' {That $\log p=o(n^{1/3})$
is the rate we can attain for Gaussian data or cases where we can
attain ``accurate'' enough estimation of $\mu_{T_{\gamma n},0}$, which
satisfies equation (\ref{eq:cri1}). When data are not Gaussian and the
``accurate'' estimators are not attainable, the growth rate of $p$ will
be more restrictive at $p=n^{1/\theta}$ ($\theta>0$), as will be
discussed in the next section.}
(C.2) assumes the joint distribution of $(W_{i j}, W_{i l})$ is
sub-Gaussian, which implies that each marginal $W_{ij}$ is sub-Gaussian as
well. (C.3) prescribes weak dependence among $\{W_{ij}\}_{j=1}^p$. The
first part of (C.4) reiterates the sparse and faint signal setting. The
range of the signal strength includes the case of $r \in(0,1)$,
representing the faintest detectable signal strength, which has
been considered in \citet{DonohoJin} and other research works.
The second part of (C.4) provides a random allocation mechanism for the
signal bearing dimensions, which is the same as the one assumed in \citet{HallJin2010}.
Existing research on the detection boundary of the
HC test for the sparse mean problem [\citet{DonohoJin}; \citet{HallJin2010}]
is largely conducted for the case of $n=1$ when the data
are Gaussian. This is understandable since the sample means are
sufficient statistics and there is no loss of {generality} when we
treat the problem as $n=1$, even if we have multiple observations.
However, when the underlying distributions are as specified in (C.2), we
cannot translate the test problem to $n=1$ without incurring a loss of
information.
We first consider the $L_2$ version of the thresholding statistic $T_{2
n}$
in this section. The study of the $T_{1 n}$ version is outlined in
Section~\ref{sec4} when we compare the power performance to the HC test. Let
$Y_{j,n} = n \bar{X}_j^2$. Then,
the $L_2$-thresholding statistic can be written as
\begin{equation}
T_{2 n}(s)=\sum_{j=1}^p
Y_{j,n}I\bigl\{Y_{j,n}\geq\lambda_p(s) \bigr\},
\label{eq:Tns}
\end{equation}
where $s$ is the thresholding parameter that takes values over a range
within $(0,1)$.
There is no need to consider $s \ge1$ in the thresholding since
large deviation results given in \citet{Petrov} imply that, under $H_0$,
$P( \max_{1\leq j\leq p}Y_{j,n}\le\lambda_p(s)) \to1$.
Define a set of slowly varying functions:
$L_p^{(1)}=2r\log p+1$, $L_p^{(2)}=2\sqrt{s\log p/\pi}$,
$L_p^{(3)}=s(\sqrt{s}-\sqrt{r})^{-1}\sqrt{\log p/\pi}$, $L_p^{(4)}=8r\log p$,
$L_p^{(5)}=4s^{3/2}\pi^{-1/2}(\log p)^{3/2}$ and
$L_p^{(6)}={2s^2(\log p)^{3/2}}/{\sqrt{\pi}(\sqrt{s}-\sqrt{r})}$.
Let $\phi(\cdot)$ and $\bar{\Phi}(\cdot)$
be the density and survival functions of the standard normal
distribution.
Let $\mu_{T_{2n}, 0}(s)$
and $\sigma^2_{T_{2n}, 0}(s)$
be the mean and variance of $T_{2n}(s)$ under $H_0$, respectively, and
$\mu_{T_{2n}, 1}(s)$ and
$\sigma^2_{T_{2n}, 1}(s)$ be those, respectively, under the $H_1$ {as
specified in (C.4)}.
The following proposition depicts the mean and variance of $T_{2n}(s)$,
obtained by applying Fubini's theorem and the large deviation results
[\citet{Petrov} and Lemma~A.1 in Zhong, Chen and Xu (\citeyear{ZhenChenXu})].
\begin{proposition}
\label{chap4-cor2} Under \textup{(C.1)--(C.4)}, $E\{T_{2n}(s)\}$ and $\operatorname
{Var}\{T_{2n}(s)\}$ are, respectively,
\begin{eqnarray}\label{eq:meanTn0}\quad
&&\mu_{T_{2n}, 0}(s)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad= p\bigl\{2{\lambda_p^{1/2}(s) }\phi
\bigl({\lambda ^{1/2}_p(s)}\bigr)+2\bar{\Phi}\bigl({
\lambda^{1/2}_p(s)}\bigr)\bigr\} \bigl\{1+O\bigl\{
n^{-1/2}{\lambda^{3/2}_p(s)}\bigr\}\bigr\},
\\
\label{eq:varTn0}&&\sigma_{T_{2n},0}^2 (s)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad= p\bigl\{2\bigl[
\lambda^{3/2}_p(s)+3{\lambda ^{1/2}_p(s)}
\bigr]\phi\bigl({\lambda^{1/2}_p(s)}\bigr)+6\bar{\Phi}\bigl({
\lambda ^{1/2}_p(s)}\bigr)\bigr\} \bigl\{1+o(1)\bigr\}
\end{eqnarray}
under the $H_0$; and
\begin{eqnarray*}
\mu_{T_{2n}, 1}(s)&=& \bigl
\{L_p^{(1)}p^{1-\beta}I(s<r)+L_p^{(3)}p^{1-\beta
-(\sqrt{s}-\sqrt{r})^2}I(s>r)
\bigr\} \bigl\{1+o(1)\bigr\}\\
&&{}+\mu_{T_{2n},
0}(s),
\\
\sigma_{T_{2n},1}^2 (s)&=&\bigl\{L_p^{(4)}p^{1-\beta}I(s<r)+
L_p^{(5)}p^{1-s}+L_p^{(6)}p^{1-\beta-(\sqrt{s}-\sqrt{r})^2}I(s>r)
\bigr\}\\
&&{}\times \bigl\{ 1+o(1)\bigr\}
\end{eqnarray*}
under the $H_1$ specified in \textup{(C.4)}.
\end{proposition}
Expressions (\ref{eq:meanTn0}) and (\ref{eq:varTn0}) provide the first
and the second order terms of $\mu_{T_{2n}, 0}(s)$ and $\sigma
_{T_{2n},0}^2 (s)$, which are needed when we consider their empirical
estimation under $H_0$ when formulating the $L_2$ thresholding test
statistic. Note that $\mu_{T_{2n}, 0}(s)=L_p^{(2)}p^{1-s}\{1+o(1)\}$
and $\sigma_{T_{2n},0}^2 (s)=L_p^{(5)}p^{1-s}\{1+o(1)\}$.
Only the first order terms for the variance are needed under $H_1$, but
the approximation to $\mu_{T_{2n},1}(s)$ has to be more accurate so as
to know the order of the difference between $\mu_{T_{2n},1}(s)$ and
$\mu
_{T_{2n},0}(s)$.
Proposition \ref{chap4-cor2} indicates that the column-wise dependence as specified in
(C.3) has little leading order impact on the variance of $T_{2n}(s)$.
The leading order variance is almost the same as when the components of
$\mathbf{W}_i$ are column-wise independent. The difference only appears in the
coefficients of the slowly varying functions $L_p^{(4)}$, $L_p^{(5)}$ and
$L_p^{(6)}$,
while their orders of magnitude remain unchanged.
The reason behind this phenomenon is the thresholding.
It can be understood via an analogy with multivariate Gaussian
distributions with nonzero correlation: despite the dependence in the
Gaussian distribution, exceedances beyond high thresholds are
asymptotically independent [\citet{Sibuya} and \citet{Joe}].\vadjust{\goodbreak}
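The leading order expressions (\ref{eq:meanTn0}) and (\ref{eq:varTn0}) are easy to check numerically. The following Python sketch (our own illustration: Gaussian data with independent columns and arbitrary choices of $n$, $p$ and $s$) compares Monte Carlo moments of $T_{2n}(s)$ with these formulas:
\begin{verbatim}
# Quick Monte Carlo check of the leading order null moments in
# Proposition 1 (our own illustration; Gaussian data, independent
# columns; the values of n, p, s below are arbitrary).
import numpy as np
from scipy.stats import norm

n, p, s, reps = 30, 1000, 0.4, 500
lam = 2 * s * np.log(p)                 # lambda_p(s)
u = np.sqrt(lam)
rng = np.random.default_rng(0)

vals = []
for _ in range(reps):
    Y = n * rng.standard_normal((n, p)).mean(axis=0) ** 2
    vals.append(Y[Y >= lam].sum())      # T_{2n}(s)

mu_tilde = p * (2 * u * norm.pdf(u) + 2 * norm.sf(u))
var_tilde = p * (2 * (u**3 + 3 * u) * norm.pdf(u) + 6 * norm.sf(u))
print(np.mean(vals), mu_tilde)   # close to each other
print(np.var(vals), var_tilde)   # close up to the 1 + o(1) factor
\end{verbatim}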
We now study the asymptotic distribution of $T_{2 n}(s)$ to prepare for
the proposal of the maximal $L_2$-thresholding statistic.
Write
\[
T_{2n}(s)= \sum_{j=1}^p
Z_{j,n}(s),
\]
where $Z_{j,n}(s):=Y_{j,n}I\{Y_{j,n}> \lambda_p(s)\}$ and $\lambda_p(s)
=2 s \log(p)$.
For integers $a, b \in[-\infty,\infty]$ such that $a < b$, define
$\mathscr{F}_{a}^b=\sigma\{Z_{l,n}(s)\dvtx l\in(a, b)\}$ as the $\sigma
$-algebra generated by $\{Z_{l,n}(s)\}_{l=a}^b$ and define the $\rho
$-mixing coefficients
\begin{equation}
\rho_{Z(s)}(k) =
\sup_{l, \xi\in L^2(\mathscr{F}_{-\infty}^l ), \zeta\in
L^2(\mathscr
{F}^{\infty}_{l+k} ) }\bigl|
\operatorname{Corr}(\xi,\zeta)\bigr|.\label{eq:mixing}
\end{equation}
See \citet{Doukhan} for comprehensive discussions on the mixing concept.
The following is a condition regarding the dependence among $\{
Z_{j,n}(s)\}_{j=1}^p$.
\begin{longlist}[(C.5)]
\item[(C.5)] For any $s \in(0,1)$, the sequence of random variables $\{
Z_{j,n}(s)\}_{j=1}^{p}$ is $\rho$-mixing such that
$\rho_{Z(s)}(k)\leq C\alpha^k$ for some $\alpha\in(0,1)$ and a
positive constant~$C$.
\end{longlist}
The requirement that $\{Z_{j,n}(s)\}_{j=1}^p$ be $\rho$-mixing for
each $s$ is weaker than requiring the original data
columns $\{X_{i j}\}_{j=1}^p$ to be $\rho$-mixing, whose mixing
coefficient $\rho_{X_i}(k)$ can be defined similarly to (\ref{eq:mixing}).
This is because, according to Theorem 5.2 in \citet{Bradley},
\[
\rho_{Z(s)}(k)\leq\sup_{i\leq n}\rho_{X_i}(k)=
\rho_{X_1}(k)\qquad \mbox{for each $k=1,\ldots,p$ and $s \in(0,1)$.}
\]
The following theorem reports the asymptotic normality of $T_{2n}(s)$
under both $H_0$ and $H_1$.
\begin{theorem}\label{th1}
Assume \textup{{(C.1)--(C.5)}}.
Then, for any $s \in(0, 1)$,
\begin{eqnarray*}
&&\hphantom{i}\mathrm{(i)}\quad \sigma^{-1}_{T_{2n}, 0}(s)\bigl\lbrace
T_{2n}(s) - \mu _{T_{2n},0}(s) \bigr\rbrace\stackrel{d} {\to}
N(0,1) \qquad\mbox{under $H_0$};
\\
&&\mathrm{(ii)}\quad \sigma^{-1}_{T_{2n}, 1}(s) \bigl\lbrace
T_{2n}(s) - \mu _{T_{2n}, 1}(s) \bigr\rbrace\stackrel{d} {\to}
N(0,1) \qquad\mbox{under $H_1$}.
\end{eqnarray*}
\end{theorem}
From (\ref{eq:meanTn0}) and (\ref{eq:varTn0}), define the leading order
terms of ${\mu}_{T_{2n}, 0}(s)$ and $\sigma_{T_{2n}, 0}^2(s)$, respectively,
\begin{eqnarray*}
\tilde{\mu}_{T_{2n}, 0}(s)&=& p\bigl\{2{\lambda^{1/2}_p(s)
}\phi\bigl({\lambda ^{1/2}_p(s)}\bigr)+2\bar{\Phi}\bigl({
\lambda^{1/2}_p(s)}\bigr)\bigr\} \qquad\mbox{and}
\\
\tilde{\sigma}_{T_{2n}, 0}^2(s)&=& p\bigl\{2\bigl[
\lambda^{3/2}_p(s)+3{\lambda ^{1/2}_p(s)}
\bigr]\phi\bigl({\lambda^{1/2}_p(s)}\bigr)+6\bar{\Phi}\bigl({
\lambda ^{1/2}_p(s)}\bigr)\bigr\}.
\end{eqnarray*}
It is clear that the asymptotic normality in Theorem \ref{th1}(i) remains if we
replace $\sigma_{T_{2n}, 0}(s)$ by $\tilde{\sigma}_{T_{2n}, 0}(s)$.
To formulate a test procedure based on the thresholding statistic $T_{2
n}(s)$, we need to estimate $\mu_{T_{2n},0}(s)$
by a $\hat{\mu}_{T_{2n}, 0}(s)$, say. Ideally, if
\begin{equation}
\mu_{T_{2n},0}(s) - \hat{\mu}_{T_{2n},0}(s) = o\bigl\{\tilde{\sigma
}_{T_{2n}, 0}(s)\bigr\}, \label{eq:cri1}\vadjust{\goodbreak}
\end{equation}
the first part of Theorem \ref{th1} remains valid if we replace $\mu
_{T_{2n},0}(s)$ with $\hat{\mu}_{T_{2n},0}(s)$.
An obvious choice of $\hat{\mu}_{T_{2n},0}(s)$ is $\tilde{\mu}_{T_{2
n}, 0}(s)$, which is known once $p$ and $s$ are given.
Indeed,
if the $W_{ij}$'s are standard normally distributed,
we have
\[
\mu_{T_{2n},0}(s) = \tilde{\mu}_{T_{2n},0}(s) \qquad\mbox{for } s \in(0,1),
\]
implying that the leading order term coincides exactly with $\mu_{T_{2n},0}(s)$ for
Gaussian data. Hence, if we take $\hat{\mu}_{T_{2n},0}(s)=\tilde{\mu
}_{T_{2n},0}(s)$, (\ref{eq:cri1}) is satisfied for Gaussian data.
For non-Gaussian observations, the difference between $\mu
_{T_{2n},0}(s)$ and\break $\tilde{\mu}_{T_{2 n}, 0}(s)$ may not be a smaller
order of $\sigma_{T_{2n}, 0}(s)$.
Specifically, from (\ref{eq:meanTn0}) and (\ref{eq:varTn0}), we have
\[
\frac{ \mu_{T_{2n},0}(s) - \tilde{\mu}_{T_{2
n},0}(s)}{\sigma
_{T_{2n},0}(s)}= O \bigl\{{\lambda^{5/4}_p(s)}p^{(1-s)/2}n^{-1/2}
\bigr\}.
\]
To make the above ratio diminish to zero, the strategy of \citet{Delaigle} can be adopted by restricting $p=n^{1/\theta}$ and
$s \in((1-\theta)_{+}, 1)$ for a positive~$\theta$, where $(a)_{+}
= a$ if $a > 0$ and $(a)_{+} = 0$ if $a \le0$.
Under this circumstance,
\begin{equation}
\frac{ \mu_{T_{2n},0}(s) - \tilde{\mu}_{T_{2
n},0}(s)}{\sigma
_{T_{2n},0}(s)} =O \bigl\{({2s/\theta\log n})^{5/4}n^{{(1-s-\theta)}/{(2\theta)
}}
\bigr\}\to0. \label{eq:restrict}
\end{equation}
Clearly, for a dimension that is not overly high, with $\theta\ge1$,
(\ref{eq:restrict}) holds for all $s \in(0,1)$,
and $\tilde{\mu}_{T_{2 n},0}(s)$ satisfies (\ref{eq:cri1}).
For higher dimensions with $\theta< 1$, the thresholding level $s$ has
to be restricted to ensure (\ref{eq:restrict}).
The restriction can
alter the detection boundary of the test we will propose in the next
section. This echoes a similar phenomenon for the HC test given in
\citet{Delaigle}.
To expedite our discussion, we assume in the rest of the paper that
(\ref{eq:cri1}) is satisfied by the $\hat{\mu}_{T_{2n},0}(s)$. We note
such an arrangement is not entirely unrealistic, as a separate effort
may be made to produce more accurate estimators.
Assuming so allows us to stay focused on the main agenda of the
testing problem.
The asymptotic normality established in Theorem \ref{th1} allows an asymptotic
\mbox{$\alpha$-level} test
that rejects $H_0$ if
\begin{equation}
T_{2n}(s)-\hat{\mu}_{T_{2n},0}(s)>z_\alpha\tilde{
\sigma}_{T_{2n},
0}(s), \label{eq:test1}
\end{equation}
where $z_\alpha$ is the upper $\alpha$ quantile of the standard normal
distribution.
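For concreteness, here is a minimal Python sketch of the test (\ref{eq:test1}) for Gaussian data, taking $\hat{\mu}_{T_{2n},0}(s)=\tilde{\mu}_{T_{2n},0}(s)$ as justified above; the values of $n$, $p$, $s$ and $\alpha$ are illustrative:
\begin{verbatim}
# Minimal sketch of the single-threshold test: reject H_0 when
# T_{2n}(s) exceeds mu-tilde + z_alpha * sigma-tilde. Gaussian
# data, so mu-hat = mu-tilde is justified; n, p, s are illustrative.
import numpy as np
from scipy.stats import norm

def single_threshold_test(X, s, alpha=0.05):
    n, p = X.shape
    lam = 2 * s * np.log(p)              # lambda_p(s)
    u = np.sqrt(lam)
    Y = n * X.mean(axis=0) ** 2          # Y_{j,n} = n * Xbar_j^2
    T = Y[Y >= lam].sum()                # T_{2n}(s)
    mu0 = p * (2 * u * norm.pdf(u) + 2 * norm.sf(u))
    sd0 = np.sqrt(p * (2 * (u**3 + 3 * u) * norm.pdf(u)
                       + 6 * norm.sf(u)))
    return T - mu0 > norm.isf(alpha) * sd0

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 1000))      # data generated under H_0
print(single_threshold_test(X, s=0.4))   # rejects w.p. ~ alpha
\end{verbatim}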
\section{Maximal thresholding}\label{sec3}
While the asymptotic normality of $T_{2n}(s)$ in Theorem \ref{th1} ensures the
single thresholding level test in (\ref{eq:test1}) a correct size
asymptotically, the power of the
test depends on $s$, the underlying signal strength $r$ and the
sparsity $\beta$.
A test procedure is said to be able to separate a pair of null and
alternative hypotheses
asymptotically if the sum of the probabilities of the type I and type~II errors converges to zero as $n \to\infty$.
Let $\alpha_n$ be a sequence of the probabilities of type I error,
which can be made converging to zero as $n \to\infty$.
The sum of the probabilities of the type I and type II errors for the test
given in (\ref{eq:test1}) with nominal size $\alpha_n$ is approximately
\begin{equation}
\mathrm{Err}_{\alpha_n}:=\alpha_n+P \biggl(\frac{T_{2n}(s)-\mu
_{T_{2n},0}(s)}{\sigma
_{T_{2n},0}(s)}\leq
z_{\alpha_n} \Big| H_1 \biggr),
\label{eq:errors}
\end{equation}
which follows from the facts that (i) the size $\alpha_n$ is
attained asymptotically and (ii) $\hat{\mu}_{T_{2n}, 0}(s)$ and
$\tilde{\sigma}_{T_{2n}, 0}(s)$ are sufficiently accurate estimators in the
test procedure (\ref{eq:test1}).
Our strategy is to first make $\alpha_n\to0$ such that $z_{\alpha
_n}=C(\log p)^\varepsilon$ for an arbitrarily
small $\varepsilon>0$ and a constant $C>0$.
The second term on the right-hand side of (\ref{eq:errors}) is
\begin{eqnarray}
&&\mathrm{Err}_{\mathit{II}}:=P \biggl(\frac{T_{2n}(s)-\mu_{T_{2n},1}(s)}{\sigma_{T_{2n},
1}(s)}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\hspace*{34pt}\qquad\leq z_{\alpha_n}
\frac{ \sigma_{T_{2n}, 0}(s)}{\sigma_{T_{2n},
1}(s)}-\frac{\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)}{ \sigma_{T_{2n},
1}(s) } \biggr).
\end{eqnarray}
Because $z_{\alpha_n}$ is slowly varying, $0<\sigma
_{T_{2n},0}(s)/\sigma
_{T_{2n},1}(s)\leq1$
and $(T_{2n}(s)- \mu_{T_{2n},1}(s))/\sigma_{T_{2n}, 1}(s)$ is
stochastically bounded,
a necessary and sufficient condition that
ensures $\mathrm{Err}_{\alpha_n}\to0$ is
\begin{equation}
\label{detectable-condition} \Delta_2(s;r,\beta):=\frac{\mu_{T_{2n},1}(s)-\mu
_{T_{2n},0}(s)}{\sigma
_{T_{2n}, 1}(s)}\to\infty.
\end{equation}
From Proposition \ref{chap4-cor2},
it follows that, up to a factor $1+o(1)$,
\begin{eqnarray*}
\Delta_2(s;r,\beta)=\cases{
C_1p^{(1+s-2\beta)/2},
& \quad$\mbox{if } s\leq r \mbox{ and } s\leq
\beta$;
\vspace*{2pt}\cr
C_2p^{(1-\beta)/2},
&\quad $\mbox{if } s\leq r \mbox{ and } s>
\beta;$
\vspace*{2pt}\cr
C_3p^{1/2-\beta+r-(\sqrt{s}-2\sqrt{r})^2/2},
& \quad$\mbox{if } s> r \mbox{ and } s\leq(
\sqrt{s}-\sqrt{r})^2+\beta;$
\vspace*{2pt}\cr
C_4p^{(1-\beta-(\sqrt{s}-2\sqrt{r})^2)/2},
&\quad $\mbox{if } s> r \mbox{ and } s> (
\sqrt{s}-\sqrt{r})^2+\beta,$}
\end{eqnarray*}
where $C_1=\sqrt{2}(\pi s)^{1/4}(\frac{r}{s})(\log p)^{1/4}$,
$C_2=\frac{1}{2}(r\log p)^{1/2}$,
$C_3=s^{1/4}(\log p)^{-1/4}/\{\sqrt{2}\pi^{1/4}(\sqrt{s}-\sqrt{r})\}$ and
$C_4=(2\sqrt{\pi}(\sqrt{s}-\sqrt{r}))^{-1/2}(\log p)^{-1/4}$.
Let
\begin{eqnarray*}
\varrho^{\ast}(\beta)=\cases{
\beta-1/2, & \quad$\mbox{$1/2 < \beta\leq3/4$;}$
\vspace*{2pt}\cr
(1-\sqrt{1-\beta})^2, &\quad $\mbox{$3/4<\beta<1$.}$}
\end{eqnarray*}
As demonstrated in \citet{DonohoJin} and {\citet{Ingster}},
the phase diagram $r=\varrho^{\ast}(\beta)$ is the optimal detection
boundary for testing the hypotheses we are considering in this paper
when the data are Gaussian and $\bolds{\Sigma}=\mathbf{I}_p$.
Here the optimality means that for any $r > \varrho^{\ast}(\beta)$,
there exists at least one test such that the sum of the probabilities
of the type I and type II errors diminishes to zero as $n \to\infty$;
but for $r < \varrho^{\ast}(\beta)$, no such test exists.
For correlated Gaussian data such that $\bolds{\Sigma} \ne\mathbf
{I}_p$, \citet{HallJin2010} found that the detection boundary
$r=\varrho
^{\ast}(\beta)$ may be lowered by transforming the data via the inverse
of Cholesky factorization $\mathbf{L}$ such that $\mathbf{L}\Sigma
\mathbf{L}^T=\mathbf{I}_p$. {More discussion on the optimality is given
in Section~\ref{sec6}.}
From the expression of $\Delta_2(s;r,\beta)$ given above, it can be
shown (see the proof of Theorem \ref{detect-upper-bound} in the \hyperref[app]{Appendix}) that if $r>\varrho
^{\ast}(\beta)$
there exists at least one $s \in(0,1)$ for each pair of $(r,\beta)$
such that (\ref{detectable-condition}) is satisfied and, hence, the
thresholding test would be powerful. This is the key for the maximal
$L_2$-thresholding test that we will propose later to attain the
detection boundary.
It is clear that we have to make the thresholding level $s$ adaptive to
the unknown $r$ and $\beta$.
One strategy is to use a range of thresholding levels, say, $s \in
{\mathcal{S}} \subset(0,1)$, so that the underlying $(r,\beta)$ can be
``covered.'' This is the very idea of the HC test.
Let $\hat{\mathcal{T}}_{2,n}(s)= \tilde{\sigma}_{T_{2n},0}^{-1}(s)\{
T_{2n}(s)-\hat{\mu}_{T_{2n},0}(s)\}$ be the standardized version of
$T_{2n}(s)$.
Define the maximal thresholding statistic
\[
\hat{\mathcal{M}}_{2 n} = \sup_{s \in{\mathcal{S}}} \hat {\mathcal
{T}}_{2,n}(s),
\]
where ${\mathcal{S}}=(0,1-\eta]$ for an arbitrarily small positive
$\eta$.
Let
\begin{equation}\qquad
\mathcal{S}_n={\bigl\{s_i\dvtx \mbox{$s_i=Y_{i,n}/(2
\log p)$ and $0<Y_{i,n}<2(1-\eta)\log p$}\bigr\}\cup\{1-\eta
\}}.\label{eq:Sn}
\end{equation}
Since both $\hat{\mu}_{T_{2n},0}(s)$ and $\tilde{\sigma}_{T_{2n},0}(s)$
are monotone decreasing functions of $s$, it can be shown that $\hat
{\mathcal{M}}_{2n}$ can be attained on $\mathcal{S}_n$, namely,
\begin{equation}
\label{max-discrete} \hat{\mathcal{M}}_{2n}=\max_{s\in\mathcal{S}_n}
\hat{\mathcal {T}}_{2,n}(s).
\end{equation}
This largely reduces the computational burden of $\hat{\mathcal
{M}}_{2n}$. The asymptotic distribution of $\hat{\mathcal{M}}_{2n}$ is
established in the following theorem.
\begin{theorem}
\label{asy-gumbel} Assume \textup{(C.1)--(C.3)}, \textup{(C.5)} and (\ref{eq:cri1})
hold. Then, under~$H_0$,
\[
P\bigl(a(\log p)\hat{\mathcal{M}}_{2 n}-b(\log p,\eta)\leq x\bigr)\to
\exp \bigl(-e^{-x}\bigr),
\]
where $a(y)=(2\log(y))^{1/2}$ and $b(y,\eta)=2\log(y)+2^{-1}\log\log(y)-2^{-1}\log(\frac{4 \pi}{(1-\eta)^2})$.
\end{theorem}
The theorem leads to an asymptotic $\alpha$-level test that rejects
$H_0$ if
\begin{equation}
\hat{\mathcal{M}}_{2 n} >\mathcal{B}_\alpha=\bigl(
\mathcal{E}_\alpha +b(\log p,\eta)\bigr)/a(\log p), \label{eq:L2test}
\end{equation}
where $\mathcal{E}_\alpha$ is the upper $\alpha$ quantile of the Gumbel
distribution $\exp(-e^{-x})$.
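The computation in (\ref{max-discrete}) together with the rejection rule (\ref{eq:L2test}) can be sketched in a few lines of Python (again for Gaussian data with $\hat{\mu}_{T_{2n},0}(s)=\tilde{\mu}_{T_{2n},0}(s)$, and with illustrative parameter values):
\begin{verbatim}
# Sketch: the maximal L2-thresholding statistic on the data-driven
# grid S_n, compared with the Gumbel critical value B_alpha.
import numpy as np
from scipy.stats import norm

def standardized_T2(Y, p, s):
    u = np.sqrt(2 * s * np.log(p))
    T = Y[Y >= u**2].sum()
    mu0 = p * (2 * u * norm.pdf(u) + 2 * norm.sf(u))
    sd0 = np.sqrt(p * (2 * (u**3 + 3 * u) * norm.pdf(u)
                       + 6 * norm.sf(u)))
    return (T - mu0) / sd0

def maximal_L2(X, eta=0.05):
    n, p = X.shape
    Y = n * X.mean(axis=0) ** 2
    grid = Y[(Y > 0) & (Y < 2 * (1 - eta) * np.log(p))] \
        / (2 * np.log(p))
    grid = np.append(grid, 1 - eta)        # the set S_n
    return max(standardized_T2(Y, p, s) for s in grid)

def B_alpha(p, alpha, eta=0.05):
    a = np.sqrt(2 * np.log(np.log(p)))     # a(log p)
    b = (2 * np.log(np.log(p)) + 0.5 * np.log(np.log(np.log(p)))
         - 0.5 * np.log(4 * np.pi / (1 - eta) ** 2))  # b(log p, eta)
    return (-np.log(-np.log(1 - alpha)) + b) / a

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 1000))        # data generated under H_0
print(maximal_L2(X) > B_alpha(p=1000, alpha=0.05))  # ~ alpha rejections
\end{verbatim}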
We name the test the maximal $L_2$-thresholding test. The following
theorem shows that its detection boundary is $r=\varrho^{\ast}(\beta)$.
\begin{theorem}
\label{detect-upper-bound} Under conditions \textup{(C.1)--(C.5)} and
assuming (\ref{eq:cri1}) holds, then
\textup{(i)} if $r>\varrho^{\ast}(\beta)$,
the sum of the type I and II errors of the maximal $L_2$-thresholding
tests
converges to 0 { when the nominal sizes $\alpha_n =\bar{\Phi}((\log
p)^\varepsilon) \to0$ for an arbitrarily} small $\varepsilon>0$ as $n\to
\infty$.
\textup{(ii)} If $r<\varrho^{\ast}(\beta)$, the sum of the type I and II errors
of the maximal $L_2$-thresholding test converges to 1 when the nominal
sizes $\alpha_n \to0$ as $n\to\infty$.
\end{theorem}
It is noted that when $r>\varrho^{\ast}(\beta)$ in part (i) of Theorem
\ref{detect-upper-bound}, we need to restrict the rate of the nominal type I error $\alpha
_n$'s convergence to 0, since the conclusion of part (i) may not be
true for all $\alpha_n \to0$. However, in part (ii) where $r <
\varrho
^{\ast}(\beta)$, no restriction for $\alpha_n$ is required, which has
to be the case, as otherwise there is no guarantee that $r=\varrho
^{\ast
}(\beta)$ is the detection boundary of the test.
If the estimator $\hat{\mu}_{T_{2n},0}(s)$ cannot attain (\ref{eq:cri1}) and
$\tilde{\mu}_{T_{2n},0}(s)$ is used as the estimator, we have to
restrict $p=n^{1/\theta}$ for a $\theta\in(0, 1)$ and
limit $s \in(1-\theta, 1)$. In this case, the above theorem is valid
if we replace
$\varrho^{\ast}(\beta)$ by $\varrho^{\ast}_{\theta}(\beta)$, where
\begin{eqnarray*}
\varrho^{\ast}_\theta(\beta)=\cases{
(\sqrt{1-\theta}-\sqrt{1-\beta-\theta/2})^2, &\quad $\mbox{if $1/2<
\beta \leq (3-\theta)/4$};$
\vspace*{2pt}\cr
\beta-1/2, & \quad$\mbox{if $(3-\theta)/4<\beta\leq3/4$};$
\vspace*{2pt}\cr
(1-\sqrt{1-\beta})^2, & \quad$\mbox{if $3/4<\beta<1$},$}
\end{eqnarray*}
which is clearly inferior to $\varrho^{\ast}(\beta)$. The boundary
$\varrho^{\ast}_{\theta}(\beta)$ is the same as the one in \citet{Delaigle} based on the marginal $t$-statistics, whereas our
result is based on the $z$-statistics. The $t$-statistic formulation
reduces the demand on the tails of the distributions as shown in
\citet{Delaigle}.
We note that if $\theta\ge1$, Theorem \ref{detect-upper-bound} remains so that the Gaussian
detection boundary is still valid.
\section{Power comparison}\label{sec4}
We compare the power of the maximal $L_2$-thresh\-olding test with those
of the HC test and the maximal $L_1$-thresholding test in this section.
Let us first introduce these two tests.
The HC test is based on
\begin{equation}
\label{HCtest} \hat{\mathcal{T}}_{0,n}(s)=\frac{{T}_{0 n}(s)-2p\bar{\Phi}(\lambda
^{1/2}_p(s))}{\sqrt{2p\bar{\Phi}(\lambda^{1/2}_p(s))(1-2\bar{\Phi
}(\lambda^{1/2}_p(s)))}},
\end{equation}
where $T_{0 n} (s)=\sum_{j=1}^p I(Y_{j,n}\geq\lambda_p(s))$. Like
\citet{Delaigle2009}, we consider here a two-sided HC test instead
of a one-sided test treated in \citet{DonohoJin}. With the same
reasoning as Donoho and Jin [(\citeyear{DonohoJin}), page~968], we define the HC test statistic
\[
\hat{\mathcal{M}}_{0 n} = \max_{s \in{\mathcal{S}}} \hat {\mathcal
{T}}_{0,n} (s),
\]
where $\mathcal{S} = (0, 1-\eta]$ for an arbitrarily small $\eta$, the
same as for the maximal $L_2$-thresholding statistic.
Using the same argument for the maximal $L_2$-thresholding statistic,
it can be shown that $\hat{\mathcal{M}}_{0 n}$ attains its maximum
value on $\mathcal{S}_{n}$ given in (\ref{eq:Sn}) as well.
According to \citet{DonohoJin}, under $H_0$,
\[
P\bigl(a(\log p)\hat{\mathcal{M}}_{0 n} -b(\log p,\eta)\leq x\bigr)\to
\exp\bigl(-e^{-x}\bigr),
\]
with the same normalizing sequences as those in Theorem \ref{asy-gumbel}.
Let $\mathcal{B}_{\alpha}$ be the same as that of the maximal
$L_2$-thresholding test given in (\ref{eq:L2test}). An $\alpha$ level
HC test rejects~$H_0$ if
\begin{equation}
\hat{\mathcal{M}}_{0 n} >\mathcal{B}_{\alpha}.\label{eq:HCtest}
\end{equation}
Let us introduce the maximal $L_1$-thresholding test statistic. Recall that
\[
T_{1n}(s)=\sum_{j=1}^p|
\sqrt{n}\bar{X}_{j}|I\bigl(|\bar{X}_{j}|>\sqrt {\lambda
_p(s)/n}\bigr).
\]
It can be shown that the mean and variance of $T_{1n}(s)$ under
$H_0$ are, respectively,
\begin{eqnarray*}
\mu_{T_{1n},0}(s)&=&\sqrt{2/\pi}p^{1-s}\bigl\{1+o(1)\bigr\}
\quad\mbox{and}\\
\sigma^2_{T_{1n},0}(s)&=&\bigl\{2p^{1-s}
\sqrt{(s/\pi)\log p}\bigr\} \bigl\{1+o(1)\bigr\}.
\end{eqnarray*}
Define
\[
\hat{\mathcal{T}}_{1,n}(s)=\frac{T_{1n}(s)-\hat{\mu
}_{T_{1n},0}(s)}{\tilde{\sigma}_{T_{1n},0}(s)},
\]
where $\hat{\mu}_{T_{1n},0}(s)$ is a sufficiently accurate estimator of
${\mu}_{T_{1n},0}(s)$ in a similar sense to~(\ref{eq:cri1}) and
$\tilde
{\sigma}_{T_{1n},0}^2(s) =2 p^{1-s} \sqrt{(s/\pi)\log p}$. The maximal
$L_1$-thresh\-olding statistic is
\[
\hat{\mathcal{M}}_{1n}=\max_{s\in\mathcal{S}}\hat{
\mathcal{T}}_{1,n}(s),
\]
where, again, $\mathcal{S} = (0, 1-\eta]$.
It can be shown that $\hat{\mathcal{M}}_{1n}=\max_{s\in\mathcal{S}_n}
\hat{\mathcal{T}}_{1,n}(s)$ for the same $\mathcal{S}_n$ in (\ref{eq:Sn}).
Using a similar approach to that in Theorem \ref{asy-gumbel}, we can show that
\[
P\bigl(a(\log p)\hat{\mathcal{M}}_{1n}-b(\log p,\eta)\leq x\bigr)\to
\exp\bigl(-e^{-x}\bigr).
\]
Hence, an $\alpha$-level maximal $L_1$-thresholding test rejects the
$H_0$ if
\begin{equation}
\hat{\mathcal{M}}_{1n}>\mathcal{B}_{\alpha}.
\label{eq:L1test}
\end{equation}
From (\ref{eq:L2test}), (\ref{eq:HCtest}) and (\ref{eq:L1test}), the
three tests have the same critical values $\mathcal{B}_\alpha$ at
nominal level $\alpha$. This brings convenience for the power
comparison.
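For the power comparison it is convenient to compute all three statistics with one routine. The following Python sketch (our own illustration) standardizes $T_{\gamma n}(s)$ with the leading order null means and standard deviations quoted above and maximizes over the common grid $\mathcal{S}_n$; the data-generating numbers are arbitrary:
\begin{verbatim}
# Sketch: the maximal thresholding statistics for gamma = 0, 1, 2
# on a common grid, standardized with the leading order null means
# and standard deviations quoted in the text.
import numpy as np
from scipy.stats import norm

def maximal_stat(X, gamma, eta=0.05):
    n, p = X.shape
    Z = np.sqrt(n) * X.mean(axis=0)          # marginal z-statistics
    Y = Z ** 2
    grid = np.append(
        Y[(Y > 0) & (Y < 2 * (1 - eta) * np.log(p))] / (2 * np.log(p)),
        1 - eta)
    vals = []
    for s in grid:
        u = np.sqrt(2 * s * np.log(p))
        T = np.sum(np.abs(Z[np.abs(Z) >= u]) ** gamma)  # gamma=0: count
        if gamma == 0:                        # HC standardization
            mu0 = 2 * p * norm.sf(u)
            sd0 = np.sqrt(2 * p * norm.sf(u) * (1 - 2 * norm.sf(u)))
        elif gamma == 1:                      # L1 standardization
            mu0 = 2 * p * norm.pdf(u)
            sd0 = np.sqrt(2 * p * u * norm.pdf(u))
        else:                                 # L2 standardization
            mu0 = p * (2 * u * norm.pdf(u) + 2 * norm.sf(u))
            sd0 = np.sqrt(p * (2 * (u**3 + 3 * u) * norm.pdf(u)
                               + 6 * norm.sf(u)))
        vals.append((T - mu0) / sd0)
    return max(vals)

rng = np.random.default_rng(3)
n, p = 100, 500
X = rng.standard_normal((n, p))
X[:, :5] += np.sqrt(2 * 0.9 * np.log(p) / n)  # a few faint signals
print([round(maximal_stat(X, g), 2) for g in (0, 1, 2)])
\end{verbatim}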
Let us define the power of the three tests
\[
\Omega_{\gamma}(r,\beta):=P(\hat{
\mathcal{M}}_{\gamma n}>\mathcal {B}_{\alpha})
\]
for $\gamma= 0, 1$ and $2$, respectively.
Notice that
\begin{equation}
\label{th4decom} \hat{\mathcal{M}}_{\gamma n}=\max_{s\in\mathcal{S}_n}
\bigl\{ \mathcal {T}_{\gamma n}(s)\tilde{e}_\gamma(s)+{\tilde{\sigma
}^{-1}_{T_{\gamma
n}, 0}(s)} \bigl(\mu_{T_{\gamma n},0}(s)-\hat{
\mu}_{T_{\gamma
n},0}(s) \bigr) \bigr\},
\end{equation}
where $\tilde{e}_\gamma(s)={\sigma_{T_{\gamma n}, 0}(s)}/{\tilde
{\sigma
}_{T_{\gamma n}, 0}(s)}$ and
\[
\mathcal{T}_{\gamma n}(s)={{\sigma}^{-1}_{T_{\gamma n}, 0}(s)} {
\bigl(T_{\gamma n}(s)-\mu_{T_{\gamma n},0}(s) \bigr)}=\mathcal{T}_{\gamma
n,1}(s)R_\gamma(s)+
\Delta_{\gamma,0}(s;r,\beta),
\]
in which $R_\gamma(s)={\sigma_{T_{\gamma n}, 1}(s)}/{{\sigma
}_{T_{\gamma n}, 0}(s)}$,
$\mathcal{T}_{\gamma n,1}(s)={{\sigma}^{-1}_{T_{\gamma n}, 1}(s)}
(T_{\gamma n}(s)-\mu_{T_{\gamma n},1}(s) )$ and $\Delta_{\gamma
,0}(s;r,\beta)={{\sigma}^{-1}_{T_{\gamma n}, 0}(s)} (\mu
_{T_{\gamma
n},1}(s)-\mu_{T_{\gamma n},0}(s) )$.
As shown in (\ref{L2-100}), (\ref{HC-100}) and (\ref{L1-100}) in the
\hyperref[app]{Appendix},
\begin{eqnarray*}
\Delta_{0,0}(s;r,\beta)&=&(s\pi\log p)^{{1}/{4}}p^{1/2-\beta
+s/2}I(r>s)\\
&&{}+L_p^{(6)}p^{1/2-\beta-(\sqrt{s}-\sqrt{r})^2+s/2}I(r<s),
\\
\Delta_{1,0}(s;r,\beta)&=&(s\pi\log p)^{{1}/{4}}(r/s)^{{1}/{4}}
p^{1/2-\beta+s/2}I(r>s)\\
&&{}+L_p^{(6)}p^{1/2-\beta-(\sqrt{s}-\sqrt
{r})^2+s/2}I(r<s)
\end{eqnarray*}
and
\begin{eqnarray*}
\Delta_{2,0}(s;r,\beta)&=&(s\pi\log p)^{{1}/{4}}(r/s)p^{1/2-\beta
+s/2}I(r>s)\\
&&{}+L_p^{(6)}p^{1/2-\beta-(\sqrt{s}-\sqrt{r})^2+s/2}I(r<
s),
\end{eqnarray*}
where $L_p^{(6)}=\{2(\sqrt{s}-\sqrt{r})\}^{-1}s^{1/4}(\pi\log p)^{-1/4}$.
{Derivations given in the proof of Theorem \ref{th4} in the \hyperref[app]{Appendix} show that}
for $\gamma=0,1$ and $2$,
\begin{equation}
\hat{\mathcal{M}}_{\gamma n}\sim\max_{s\in\mathcal{S}_n}\Delta
_{\gamma
,0}(s;r,\beta), \label{eq:equvi}
\end{equation}
where ``$a\sim b$'' means that $a/b=1+o_p(1)$. This implies that we
only need to compare $\max_{s\in\mathcal{S}_n}\Delta_{\gamma
,0}(s;r,\beta)$ in the power comparison.
From the established expressions of $\Delta_{\gamma, 0}(s; r,\beta)$,
we note two facts. One is that
if $r>2\beta-1$, for any $s\in(2\beta-1,r)$,
\begin{eqnarray} \label{eq:4.20a}
\Delta_{2,0}(s;r,\beta)/\Delta_{1,0}(s;r,
\beta)&=&(r/s)^{3/4}>1\quad \mbox{and}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\Delta_{1,0}(s;r,\beta)/
\Delta_{0,0}(s;r,\beta)&=&(r/s)^{1/4}>1.
\end{eqnarray}
The other is if $r\in(\varrho^*(\beta),2\beta-1]$, asymptotically,
\begin{equation}
\Delta_{0,0}(s;r,\beta)=\Delta_{1,0}(s;r,\beta)=\Delta
_{2,0}(s;r,\beta)\qquad \mbox{for all $s\in\mathcal{S}$}. \label{eq:4.20b}
\end{equation}
Hence, when $(r,\beta)$ lies just above the detection boundary, the
three $\Delta_{\gamma, 0}$ functions are the same.
If $(r, \beta)$ moves further away from the detection boundary so that
$r > 2 \beta-1$, there will be a clear ordering among the $\Delta
_{\gamma,0}$ functions.
The following theorem summarizes the relative power performance.
\begin{theorem}
\label{th4} Assume \textup{(C.1)--(C.5)} and (\ref{eq:cri1}) hold. For any
given significant level $\alpha\in(0,1)$, the powers of the HC, the
maximal $L_1$ and $L_2$-thresholding tests under $H_1$ as specified in
\textup{(C.4)} satisfy, as $n \to\infty$,
\begin{equation}
\label{pow-order-1} \Omega_{0}(r,\beta)\leq\Omega_1(r,\beta)
\leq\Omega_2(r,\beta)\qquad \mbox{for $r>2\beta-1$}
\end{equation}
and the $\Omega_{\gamma}(r,\beta)$'s are asymptotically equivalent for $r\in
(\varrho^*(\beta),2\beta-1]$.
\end{theorem}
{The theorem indicates that when $(r,\beta)$ is well above the
detection boundary such that $r > 2 \beta-1$, there is a clear
ordering in the power among the three tests, with the $L_2$ being the
most powerful followed by the $L_1$ test. However, when $(r,\beta)$ is
just above the detection boundary such that $r\in(\varrho^*(\beta
),2\beta-1]$, the three tests have asymptotically equivalent powers. In
the latter case, comparing the second order terms of $\hat{\mathcal
{M}}_{\gamma n}$
may lead to differentiations among the powers of the three tests.
However, it is a rather technical undertaking to assess the impacts of
the second order terms.}
The analysis conducted in Theorem \ref{th4} is applicable to the setting of
Gaussian data with $n=1$ and $\bolds{\Sigma}$ satisfying (C.3), which is
the setting commonly assumed in the investigation of the detection
boundary for the HC test [\citet{DonohoJin}; \citet{HallJin2010}
and Arias-Castro, Bubeck and
Lugosi (\citeyear{Arias-Castro2012a})].
Specifically, the power ordering among the three maximal thresholding
tests in Theorem \ref{th4} remains valid, but under the weaker set of
conditions (C.3)--(C.5).
Condition~(C.1) is not needed because the Gaussian assumption allows us to
translate the problem to $n=1$, the sample mean being sufficient.
Condition (C.2) is automatically satisfied for the Gaussian distribution.
The condition (\ref{eq:cri1}) is met for the Gaussian data, as we have
discussed in Section~\ref{sec2}.
\section{Simulation results}\label{sec5}
We report results from simulation experiments which were designed to
evaluate the performance of the maximal $L_1$ and $L_2$-thresholding
tests and the HC test.
The purpose of the simulation study is to confirm the theoretical
finding of Theorem \ref{th4} that there is an ordering in power among
the three tests.
Independent and identically distributed $p$-dim random vectors $\mathbf
{X}_i$ were
generated according to
\[
\mathbf{X}_i=\mathbf{W}_i+\bolds{\mu},\qquad i=1,\ldots,n,
\]
where $\mathbf{W}_i=(W_{i1},\ldots,W_{ip})^T$ is a stationary random
vector and $\{W_{ij}\}_{j=1}^p$ have the same marginal distribution $F$.
In the simulation, $\mathbf{W}_i$ was generated from a $p$-dimensional
multivariate Gaussian distribution with zero mean and covariance
$\bolds{\Sigma} = (\sigma_{ij})_{p \times p}$, where $\sigma_{ij} =
\rho^{|i-j|}$ for $\rho=0.3$ and $0.5$, respectively.
The simulation design on $\bolds{\mu}$ had the sparsity parameter
$\beta=0.6, 0.7$ and $0.8$, respectively, and the signal strength $r
=0.1, 0.3, 0.5, 0.6, 0.8, 0.9, 1.1$ and $1.2$, respectively. We chose
two scenarios on the dimension and sample size combinations: (a) a
large $p$, small $n$ setting
and (b) both $p$ and $n$ are moderately large. For scenario (a), we
chose $p=\exp(c_0n^{0.3}+c_1)$, where $c_0=1.90$ and $c_1=2.30$ so that
the dimensions $p$ were 2000 and 20,000, and the sample sizes $n$ were
$30$ and 100, respectively. We note that under the setting $\beta=0.8$,
there were only $4$ and 7 nonzero means, respectively, among the 2000
and 20,000 dimensions. And those for $\beta=0.7$ were $9$ and $19$,
respectively, and those for $\beta=0.6$ were $20$ and $52$,
respectively. These were quite sparse. For scenario (b), we chose
$p=n^{1.25}+184$ such that $(p,n)=(500,100)$ and $(p,n)=(936, 200)$.
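A Python sketch of this data-generating design (the parameter values below form one illustrative combination from those listed above) is:
\begin{verbatim}
# Sketch of the simulation design: AR(1)-correlated Gaussian noise
# with Sigma_ij = rho^{|i-j|} plus sparse, faint mean shifts.
import numpy as np

def simulate(n, p, beta, r, rho, rng):
    m = int(round(p ** (1 - beta)))            # number of signals
    mu = np.zeros(p)
    mu[rng.choice(p, size=m, replace=False)] = \
        np.sqrt(2 * r * np.log(p) / n)
    Z = rng.standard_normal((n, p))
    W = np.empty_like(Z)
    W[:, 0] = Z[:, 0]
    for j in range(1, p):                      # stationary AR(1)
        W[:, j] = rho * W[:, j - 1] + np.sqrt(1 - rho**2) * Z[:, j]
    return W + mu

X = simulate(n=100, p=500, beta=0.7, r=0.6, rho=0.3,
             rng=np.random.default_rng(4))
print(X.shape)
\end{verbatim}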
The maximal $L_2$-test statistic $\hat{\mathcal{M}}_{2n}$ was
constructed using $\tilde{\mu}_{T_{2n},0}(s)$ and $\tilde{\sigma
}_{T_{2n}, 0}(s)$ given in (\ref{eq:meanTn0}) and (\ref{eq:varTn0}),
respectively, as the mean and standard deviation estimators.
The maximal $L_1$ test statistic and the HC test statistic, $\hat
{\mathcal{M}}_{1n}$ and $\hat{\mathcal{M}}_{0 n} $, were constructed
similarly using the leading order mean and standard deviation under
$H_0$. The set of thresholding level $\mathcal{S}$ was chosen to be
$(0, 1-\eta]$ with $\eta=0.05$.
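Since the estimators in (\ref{eq:meanTn0}) and (\ref{eq:varTn0}) are
stated earlier in the paper, the sketch below standardizes $T_{2n}(s)$
instead with the exact Gaussian null moments of $n\bar{X}_j^2
I\{n\bar{X}_j^2>\lambda_p(s)\}$, which those estimators approximate at
the leading order; this substitution is an assumption made purely for
illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def max_L2_stat(X, eta=0.05, grid=200):
    """Maximal L2-thresholding statistic over s in (0, 1-eta]."""
    n, p = X.shape
    Y = n * X.mean(axis=0) ** 2                 # n * Xbar_j^2
    stats = []
    for s in np.linspace(1e-3, 1 - eta, grid):
        t = np.sqrt(2 * s * np.log(p))          # z-scale threshold
        T2 = Y[Y > t ** 2].sum()
        m1 = 2 * (t * norm.pdf(t) + norm.sf(t))         # E Z^2 1{|Z|>t}
        m2 = 2 * ((t ** 3 + 3 * t) * norm.pdf(t)
                  + 3 * norm.sf(t))                     # E Z^4 1{|Z|>t}
        mu0, sd0 = p * m1, np.sqrt(p * (m2 - m1 ** 2))
        stats.append((T2 - mu0) / sd0)
    return max(stats)
\end{verbatim}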
Figures~\ref{figure1}--\ref{figure4} display the average empirical sizes
and powers of the HC, the maximal $L_1$ and $L_2$-thresholding tests
based on 20,000 simulations, with
Figures~\ref{figure1}--\ref{figure2} for scenario (a) and Figures~\ref{figure3}--\ref{figure4} for scenario (b).
To make the power comparison fair and conclusive, we adjusted the
nominal level of the tests so that the simulated sizes of the tests
were all around
$\alpha=0.05$, with the HC having slightly larger sizes than those of
the maximal $L_1$ test, and the sizes of the maximal $L_1$ test were
slightly larger than those of the maximal $L_2$ test. These were
designed to rule out potential ``favoritism'' in the power comparison
due to advantages in the sizes of the maximal $L_2$ and/or $L_1$ tests.
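The size-calibration step can be mimicked by Monte Carlo, reusing the
two helpers sketched above: simulate the statistic under $H_0$
($r=0$), take the empirical $95\%$ quantile as the critical value, and
then estimate the power under $H_1$.
\begin{verbatim}
null = [max_L2_stat(simulate_data(30, 2000, 0.8, 0.0, 0.3, rng))
        for _ in range(500)]
crit = np.quantile(null, 0.95)                  # size-calibrated cutoff
alt = [max_L2_stat(simulate_data(30, 2000, 0.8, 0.6, 0.3, rng))
       for _ in range(500)]
power = np.mean(np.array(alt) > crit)
\end{verbatim}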
\begin{figure}
\includegraphics{1168f01.eps}
\caption{Empirical sizes and powers of the HC (dotted lines with
squares), the maximal $L_1$- (dashed lines with dots) and $L_2$- (solid
lines with circles) thresholding tests when $p=2000$ and $n=30$ with
the marginal distribution the standard normal.}\label{figure1}
\end{figure}
\begin{figure}
\includegraphics{1168f02.eps}
\caption{Empirical sizes and powers of the HC (dotted lines with
squares), the maximal $L_1$- (dashed lines with dots) and $L_2$- (solid
lines with circles) thresholding tests when $p=20\mbox{,}000$ and $n=100$ with
the marginal distribution the standard normal.}\label{figure2}
\end{figure}
\begin{figure}
\includegraphics{1168f03.eps}
\caption{Empirical sizes and powers of the HC (dotted lines with
squares), the maximal $L_1$- (dashed lines with dots) and $L_2$- (solid
lines with circles) thresholding tests when $p=500$ and $n=100$ with
the marginal distribution the standard normal.}\label{figure3}
\end{figure}
\begin{figure}
\includegraphics{1168f04.eps}
\caption{Empirical sizes and powers of the HC (dotted lines with
squares), the maximal $L_1$- (dashed lines with dots) and $L_2$- (solid
lines with circles) thresholding tests when $p=936$ and $n=200$ with
the marginal distribution the standard normal.}\label{figure4}
\end{figure}
Figures~\ref{figure1}--\ref{figure4} show that the powers of the tests
were most influenced by the signal strength parameter $r$, followed
by the sparsity $\beta$. The powers were insensitive to the level of
dependence $\rho$, which confirmed our finding that the thresholding
largely removes the dependence. The observed ordering in the empirical
power shown in Figures~\ref{figure1}--\ref{figure4} was consistent with
the conclusions in Theorem \ref{th4}. We observed that in all the simulation
settings, despite some size advantages by the HC test and/or the
maximal $L_1$ test, the maximal $L_2$ test had better power than the
maximal $L_1$ and the HC test, and the maximal $L_1$ test had better
power than the HC test. We found that for each fixed level of sparsity
$\beta$, when the signal strength $r$ was increased so that $(r, \beta
)$ moved away from the detection boundary $r=\varrho^{\ast}(\beta)$,
the difference among the powers of the three tests was enlarged. This
was especially so for the sparsest case of $\beta=0.8$ and was
indeed confirmatory to Theorem \ref{th4}.
The simulated powers of the three tests were very much the same at
$r=0.1$ and were barely changed even when both $n$ and $p$ were
increased. This was consistent with the fact that $r=0.1$ is below the
detection boundary for $\beta=0.7$ and 0.8 considered in the simulation.
\section{Discussion}\label{sec6}
Our analysis shows that there are alternative $L_1$ and $L_2$
formulations to the HC test which attain the detection boundary
$r=\varrho^{\ast}(\beta)$ of the HC test. The tests based on the $L_1$
and $L_2$ formulations are more powerful than the HC test when the $(r,
\beta)$ pair is away from the detection boundary such that $r > 2\beta-1$.
The three tests have asymptotically equivalent power when $(r,\beta)$
is just above the detection boundary.
The detection boundary $r=\varrho^{\ast}(\beta)$ coincides with that of
the HC test discovered in \citet{DonohoJin} for the Gaussian data
with independent components. That the three tests considered in this
paper attain the detection boundary $r=\varrho^{\ast}(\beta)$ under the
considered sub-Gaussian setting with column-wise dependence can be
understood in two aspects. One is that the three test statistics are
all directly formulated via the marginal sample means $\bar{X}_j$ which
are asymptotically normally distributed; the other is that the
thresholding statistics are asymptotically uncorrelated, as implied by
Proposition \ref{chap4-cor2}.
According to \citet{Ingster} and \citet{DonohoJin}, $r=\varrho
^{\ast
}(\beta)$ is the optimal detection boundary for Gaussian distributed
data with independent components. However, it may not be optimal for
the dependent nonparametric setting considered in this paper. Indeed,
for weakly dependent Gaussian data, \citet{HallJin2010} showed that the
detection boundary $r=\varrho^{\ast}(\beta)$ can be lowered by
utilizing the dependence. The latter was carried out by
pre-transforming the data with~$\mathbf{L}$, the inverse of the
Cholesky decomposition of $\bolds{\Sigma}$, or an empirical estimate
of $\mathbf{L}$ and then conducting the HC test based on the
transformed data. It is expected that the main results of this paper on
the relative performance of the three tests would remain valid for the
transformed data. \citet{HallJin} and \citet{Delaigle2009}
studied the detection boundary for dependent data and \citet{Cai2012}
studied the boundary for detecting mixtures with a general known
distribution. However, the optimal detection boundary under the
dependent sub-Gaussian distribution setting is still an open problem.
\begin{appendix}
\section*{Appendix: Technical details}\label{app}
In this Appendix we provide proofs to Theorems \ref{asy-gumbel}, \ref{detect-upper-bound}
and \ref{th4} reported in
Sections~\ref{sec3} and \ref{sec4}.
Throughout this Appendix we use $L_p=C\log^b(p)$ to denote slowly varying
functions
for some constant $b$ and positive constant $C$, and $\phi(\cdot)$ and
$\bar{\Phi}(\cdot)$ for the density and survival functions of the
standard normal distribution, respectively. Let $\rho_k$ be the
correlation coefficient between $W_{i1}$ and $W_{i(k+1)}$, and
write $\rho_1=\rho$ for simplicity and $\mu_j=E(X_{ij})$ for $i\in\{
1,\ldots, n\}$ and $j\in\{1,\ldots,p\}$. Put $\lambda_p(s)=2 s \log p$.
\begin{pf*}{Proof of Theorem \ref{asy-gumbel}}
Let $u=\bar{\Phi}(\lambda^{1/2}_p(s))$. Write $\mathcal
{J}_2(u):=\hat
{\mathcal{T}}_{2,n}(s)$ and
\[
\mathcal{M}_{2n}=\max_{s\in(0,1-\eta]}\hat{\mathcal
{T}}_{2,n}(s)=\max_{u\in[u_0, 1/2)}\mathcal{J}_2(u),
\]
where $u_0=\bar{\Phi}(\lambda_p^{1/2}(1-\eta))$. Using the same
technique for the proof of Theorem 1 in Zhong, Chen and Xu (\citeyear{ZhenChenXu}), it may be
shown that $\mathcal{T}_{2,n}(s)$ is jointly asymptotically normal at
any finite set of points $\underline{s}=(s_1,\ldots,s_d)^T$. This is
equivalent to the joint asymptotic normality of $\mathcal{J}_{2}(u)$ at
$u_i=\bar{\Phi}(\sqrt{2s_i\log p})$ for $i=1, \ldots, d$.
We want to show the tightness of the process $\mathcal{J}_2(u)$. Let
$f_{n,u}(x)=\sigma_0^{-1}(u)\times x^2I\{|x|>g(u)\}$, where $g(u)=\bar{\Phi
}^{-1}(u)$, $\sigma_0^2(u)=\sigma_0^2(p;s)$ and $\sigma
_0^2(p;s)=\break \sigma
_{T_{2n},0}^2(s)/p$. Write
\[
\mathcal{J}_2(u)=p^{-1/2}\sum_{j=1}^p
\bigl\{f_{n,u}\bigl(|\sqrt{n}\bar {X}_j|\bigr)-E
\bigl(f_{n,u}\bigl(|\sqrt{n}\bar{X}_j|\bigr)\bigr) \bigr\}.
\]
Based on the finite dimensional convergence of $\mathcal{J}_2(u)$ and
Theorem 1.5.6 in Van der Vaart and Wellner (\citeyear{Wellner}), we only need to show
the asymptotic equicontinuity of $\mathcal{J}_2(u)$, that is, for
any $\varepsilon>0$ and $\eta>0$ there exists a finite partition
$\Lambda=\bigcup_{i=1}^k \Lambda_i$ such that
\begin{equation}
\label{equaconti} \lim\sup_{n\to\infty}P^*\Bigl\{\max
_{1\leq i\leq k}\sup_{u,v\in
\Lambda
_i}\bigl|\mathcal{J}_2(u)-
\mathcal{J}_2(v)\bigr|>\varepsilon\Bigr\}<\eta,
\end{equation}
where $P^*$ is the outer probability measure.
Define $
\mathscr{F}_n=\{f_{n,u}(|\sqrt{n}\bar{X}_j|)=\sigma_0^{-1}(u)|\sqrt {n}\bar{X}_j|^2I\{|\sqrt{n}\bar{X}_j|>g(u)\}\dvtx u\in\Lambda
:=[u_0,1/2)\}
$
and $\rho(f_{n,u}-f_{n,v})=[E\{f_{n,u}(|\sqrt{n}\bar
{X}_j|)-f_{n,v}(|\sqrt{n}\bar{X}_j|)\}^2]^{1/2}$. It can be shown that
if $u>v$,
\[
\rho(f_{n,u}-f_{n,v})^2=\bigl\{2-2
\sigma_0^{-1}(u)\sigma_0(v)\bigr\} \bigl
\{1+o(1)\bigr\}.
\]
Thus, for every $\delta_n\to0$, $\sup_{|u-v|<\delta_n}\rho
(f_{n,u}-f_{n,v})\to0$, which implies that for each $\delta>0$,
$\Lambda$ can be partitioned into finitely many sets $\Lambda
_1,\ldots
,\Lambda_k$ satisfying
\[
\max_{1\leq i\leq k}\sup_{u,v\in\Lambda_i}\rho
(f_{n,u}-f_{n,v})<\delta.
\]
Let $N_0:=N(\varepsilon,\mathscr{F}_n, \rho)$ be the bracketing number,
the smallest number of functions $f_1,\ldots, f_{N_0}$ in $\mathscr
{F}_n$ such that for each $f$ in $\mathscr{F}_n$ there exists an $f_i$
($i\in\{1,\ldots,{N_0}\}$) satisfying $\rho(f-f_i)\leq\varepsilon
\leq
1$. Applying Theorem 2.2 in \citet{Andrews}, if the
following two conditions hold for an even integer $Q\geq2$ and a real
number $\gamma>0$:
\begin{eqnarray}
\label{condi-1} \sum_{d=1}^\infty
d^{Q-2}\alpha(d)^{{\gamma}/{(Q+\gamma)
}}&<&\infty \quad\mbox{and}
\\
\label{condi-2} \int_{0}^1\varepsilon^{-{\gamma}/{(2+\gamma)}}N(
\varepsilon ,\mathscr {F}_n, \rho)^{1/Q}\,d\varepsilon&<&\infty,
\end{eqnarray}
we have for $n$ large enough
$
\|\sup_{\rho(f_{n,u}-f_{n,v})<\delta\atop u,v\in\Lambda
_i}|\mathcal
{J}_2(u)-\mathcal{J}_2(v)| \|_Q<k^{-1/Q}\eta\varepsilon$.
Invoking the maximal inequality of \citet{Pisier}, it follows that
\[
\Bigl\|\max_{1\leq i\leq
k}\mathop{\sup_{\rho(f_{n,u}-f_{n,v})<\delta}}_{
u,v\in\Lambda_i}\bigl|
\mathcal{J}_2(u)-\mathcal{J}_2(v)\bigr| \Bigr\|_Q<\eta
\varepsilon.
\]
Now using the Markov inequality, we get for $n$ large enough
\begin{eqnarray*}
&&P^*\Bigl\{\max_{1\leq i\leq k}\sup_{u,v\in\Lambda_i}\bigl|\mathcal
{J}_2(u)-\mathcal{J}_2(v)\bigr|>\varepsilon\Bigr\}
\\
&&\qquad \leq \Bigl\|\max_{1\leq i\leq k}\mathop{\sup_{\rho(f_{n,u}-f_{n,v})<\delta
}}_{ u,v\in\Lambda_i}\bigl|
\mathcal{J}_2(u)-\mathcal{J}_2(v)\bigr|\Bigr \| _Q/
\varepsilon< \eta.
\end{eqnarray*}
Hence, the condition (\ref{equaconti}) holds and $\mathcal{J}_2(u)$ is
asymptotically tight.
It remains to show (\ref{condi-1}) and (\ref{condi-2}) hold. For
(\ref
{condi-2}), we note that $\mathscr{F}_n$ is a V-C class for each $n$.
This is because
\[
\mathscr{G}_n=\bigl\{f_{n,u}(x)=\sigma_0^{-1}(u)I
\bigl(x>g(u)\bigr)\dvtx u\in(u_0,1/2)\bigr\}
\]
is a V-C class with VC index 2. Let $\varphi(x)=x^2$. Then $\mathscr
{F}_n=\varphi\cdot\mathscr{G}_n$ is a V-C class by Lemma 2.6.18 in Van
der Vaart and Wellner (\citeyear{Wellner}). Let $G_n(x,u_0)=\sup_{u\in\Lambda
}|f_{n,u}(x)|$ be the envelop function for class $\mathscr{F}_n$.
Clearly, we can take $G_n(x,u_0)=\sigma_0^{-1}(u_0)x^2$. It is easy to
see that $\rho\{G_n(|\sqrt{n}\bar{X}_i|,u_0)\}<\infty$ for a constant
$u_0>0$. Applying a result on covering number of V-C classes [Theorem
2.6.7, Van der Vaart and Wellner (\citeyear{Wellner})], we get $N(\varepsilon
,\mathscr
{F}_n, \rho)\leq K \varepsilon^{-2}$
for a universal constant $K$. It can be verified that if $Q>2+\gamma$,
then (\ref{condi-2}) holds.
The condition~(\ref{condi-1}) follows from the assumption that \mbox{$\rho
_Z(d)\leq C\alpha^d$}.
As a result, $\mathcal{J}_2(u)$ converges to a zero mean Gaussian
process $\mathcal{N}_2(u)$ with
\[
\operatorname{Cov}\bigl(\mathcal{N}_2(u),\mathcal{N}_2(v)\bigr)=
\frac{\sigma
_0(u)}{\sigma
_0(v)}=\exp\biggl(-{\frac{1}{2}}\bigl[\log\bigl\{
\sigma_0^2(v)\bigr\}-\log\bigl\{\sigma
_0^2(u)\bigr\} \bigr]\biggr)
\]
for $u<v$. It can be shown that there exists an Ornstein--Uhlenbeck (O--U)
process $\mathcal{U}_2(\cdot)$ with mean zero and $E(\mathcal
{U}_2(u)\mathcal{U}_2(v))=\exp\{-|u-v|\}$ such that $\mathcal
{N}_2(u)=\mathcal{U}_2({\frac{1}{2}}\log\{\sigma_0^2(u)\})$.
Therefore, by a
result for the O--U process in Leadbetter, Lindgren and
Rootz{\'e}n [(\citeyear{Leadbetter}), page 217],
\begin{eqnarray*}
P\Bigl(\max_{s\in\mathcal{S}}\hat{\mathcal{T}}_{2,n}(s)<B_{\tau
_n}(x)
\Bigr)&=&P\Bigl(\max_{u\in\Lambda}\mathcal{N}_2(u)<B_{\tau_n}(x)
\Bigr)\bigl\{ 1+o(1)\bigr\}
\\
&=&P\Bigl(\max_{u\in(0,\tau_n)}\mathcal{U}_2(u)<B_{\tau_n}(x)
\Bigr)\to\exp\bigl\{ -\exp (-x)\bigr\},
\end{eqnarray*}
where $\tau_n={\frac{1}{2}}\log\{{\sigma_0^2({\frac
{1}{2}})}/{\sigma_0^2(u_0)}\}$,
$B_{\tau_n}(x)=(x+b^*(\tau_n))/a(\tau_n)$, $a(t)=\break (2\log(t))^{1/2}$ and
$b^*(t)=2\log(t)+2^{-1}\log\log(t)-{\frac{1}{2}}\log(\pi)$. From
(\ref
{eq:varTn0}), we have $\tau_n=\frac{1-\eta}{2}\log p\{1+o(1)\}$. Since
\begin{eqnarray*}
a(\tau_n)\max_{u\in(0,\tau_n)}\mathcal{U}_2(u)-b^*(
\tau_n)&=&\frac
{a(\tau
_n)}{a(\log p)}\Bigl[a(\log p)\max_{u\in(0,\tau_n)}
\mathcal {U}_2(u)-b^*(\log p)\Bigr]
\\
&&{} +\frac{a(\tau_n)}{a(\log p)}b^*(\log p)-b^*(\tau_n),
\end{eqnarray*}
${a(\tau_n)}/{a(\log p)}\to1$ and
\begin{eqnarray*}
\frac{a(\tau_n)}{a(\log p)}b^*(\log p)-b^*(\tau_n)&=&\frac{a(\tau
_n)}{a(\log p)}
\bigl[b^*(\log p)-b^*(\tau_n)\bigr]
\\
&&{} +b^*(\tau_n)\biggl[\frac{a(\tau_n)}{a(\log p)}-1\biggr]\to-\log
\frac
{(1-\eta)}{2},
\end{eqnarray*}
we have
\begin{eqnarray*}
&&a(\tau_n)\max_{u\in(0,\tau_n)}\mathcal{U}_2(u)-b^*(
\tau_n)\\
&&\qquad=a(\log p)\max_{u\in(0,\tau_n)}\mathcal{U}_2(u)-
\biggl(b^*(\log p)+\log\frac{(1-\eta)}{2}\biggr).
\end{eqnarray*}
Finally, note that $b^*(\log p)+\log\frac{(1-\eta)}{2}
=b(\log p, \eta)$. This finishes the proof of Theorem \ref{asy-gumbel}.
\end{pf*}
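The O--U extreme-value normalization invoked in the last step can be
sanity-checked by simulation: the sketch below discretizes the unit
O--U process exactly on a grid (paths started at zero, whose burn-in
is negligible for the maximum) and compares the mean of
$a(\tau)\max\mathcal{U}_2-b^*(\tau)$ with the standard Gumbel mean
$\gamma\approx0.5772$. Grid and replication sizes are illustrative.
\begin{verbatim}
import numpy as np
from scipy.signal import lfilter

tau, dt, reps = 1000.0, 0.01, 500
n = int(tau / dt)
rho = np.exp(-dt)                        # exact O-U one-step correlation
rng = np.random.default_rng(1)
m = np.empty(reps)
for i in range(reps):
    z = rng.standard_normal(n)
    u = lfilter([np.sqrt(1 - rho ** 2)], [1.0, -rho], z)   # O-U path
    m[i] = u.max()
a = np.sqrt(2 * np.log(tau))
bstar = 2 * np.log(tau) + 0.5 * np.log(np.log(tau)) - 0.5 * np.log(np.pi)
print((a * m - bstar).mean(), np.euler_gamma)  # both close to 0.5772
\end{verbatim}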
\begin{pf*}{Proof of Theorem \ref{detect-upper-bound}}
(i) The proof proceeds under four cases. For each case, we find the
corresponding detectable region, and the
union of the four regions is the overall detectable region of the
thresholding test. Basically, we show that for any $(\beta,r)$ above
$\varrho
^*(\beta)$ within one of the four cases, there exists at least one
threshold level $s$ such that $H_1$ is detectable. For notation
simplification, we only keep the leading order terms for $\mu
_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)$, $\sigma_{T_{2n,1}}(s)$, $\sigma
_{T_{2n,0}}(s)$ and $\Delta_2(s;r,\beta)$.
\textit{Case} 1: $s\leq r$ and $s\leq\beta$. In this case,
$\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)=L_pp^{1-\beta}$ and
$\sigma_{T_{2n},1}(s)=\sigma_{T_{2n},0}(s)=L_pp^{(1-s)/2}$. Hence,
\[
\Delta_2(s;r,\beta)=\frac{\mu_{T_{2n},1}(s)-\mu
_{T_{2n},0}(s)}{\sigma
_{T_{2n},1}(s)}=L_pp^{(1+s-2\beta)/2}.
\]
So to make
$(\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s))/\sigma_{T_{2n},1}(s)\to
\infty$,
we need $s> 2\beta-1$.
It follows that the detectable region for
this case is $r\geq2\beta-1$. Specifically, if we
select $s=\min\{r,\beta\}$, we arrive at the best divergence rate for
$\Delta_2(s;r,\beta)$ of order $L_pp^{(1+\min\{r,\beta\}-2\beta)/2}$.
\textit{Case} 2: $s\leq r$ and $s>\beta$. In this case,
$\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)=L_pp^{1-\beta}$,
$\sigma_{T_{2n},1}(s)=L_pp^{(1-\beta)/2}$, and
$\sigma_{T_{2n},0}(s)=L_pp^{(1-s)/2}$. Then,
\[
\Delta_2(s;r,\beta)=\frac{\mu_{T_{2n},1}(s)-\mu
_{T_{2n},0}(s)}{\sigma
_{T_{2n},1}(s)}=L_pp^{(1-\beta)/2}.
\]
So the detectable region in the $(\beta,r)$ plane is $r>\beta$. In
this region, the best divergence rate of $\Delta_2$ is of order
$L_pp^{(1-\beta)/2}$ for any $\beta<s\leq r$.
\textit{Case} 3: $s> r$ and $s\leq(\sqrt{s}-\sqrt{r})^2+\beta$.
The case is equivalent to $\sqrt{r}< \sqrt{s}\leq
(r+\beta)/(2\sqrt{r})$ and
$\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)=L_pp^{1-(\sqrt{s}-\sqrt
{r})^2-\beta}$,
$\sigma_{T_{2n},1}(s)=\sigma_{T_{2n},0}=L_pp^{(1-s)/2}$. Then
\begin{equation}
\label{4:eq:3} \Delta_2(s;r,\beta)=\frac{\mu_{T_{2n},1}(s)-\mu
_{T_{2n},0}(s)}{\sigma
_{T_{2n},1}(s)}=L_pp^{{{1}/{2}}-\beta+r-(\sqrt{s}-2\sqrt{r})^2/2}.
\end{equation}
To ensure (\ref{4:eq:3}) diverging to
infinity, we need
\[
2\sqrt{r}-\sqrt{1-2\beta+2r}<\sqrt{s}<2\sqrt{r}+\sqrt{1-2\beta+2r}.
\]
Thus, the detectable region must satisfy
\begin{eqnarray*}
\sqrt{r}&<&(r+\beta)/(2\sqrt{r}), \qquad 1-2\beta+2r>0\quad \mbox{and}\\
2\sqrt {r}-\sqrt{1-2
\beta+2r}&\leq&(r+\beta)/(2\sqrt{r}).
\end{eqnarray*}
This translates to
\[
\beta-{\tfrac{1}{2}}<r<\beta\quad\mbox{and } \mbox{either } r\leq\beta/3 \mbox
{ or } r> \beta/3 \mbox{ and}\quad r\geq(1-\sqrt{1-\beta})^2.
\]
\textit{Case} 4: $s> r$ and $s>(\sqrt{s}-\sqrt{r})^2+\beta$. This
is equivalent to $\sqrt{s}>\max\{(r+\beta)/(2\sqrt{r}),
\sqrt{r}\}$. In this case,
$\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)=L_pp^{1-(\sqrt{s}-\sqrt
{r})^2-\beta}$,\break
$\sigma_{T_{2n},1}(s)=L_pp^{(1-(\sqrt{s}-\sqrt{r})^2-\beta)/2}$. Then
\[
\Delta_2(s;r,\beta)=\frac{\mu_{T_{2n},1}(s)-\mu
_{T_{2n},0}(s)}{\sigma
_{T_{2n},1}(s)}=L_pp^{(1-(\sqrt{s}-\sqrt{r})^2-\beta)/2}.
\]
Hence, it requires that
\[
\sqrt{r}-\sqrt{1-\beta}<\sqrt{s}<\sqrt{r}+\sqrt{1-\beta}.
\]
In order to find an $s$, we
need $\sqrt{r}+\sqrt{1-\beta}>\max\{(r+\beta)/(2\sqrt{r}),
\sqrt{r}\}$. If $\sqrt{r}>(r+\beta)/(2\sqrt{r})$, namely, $r>\beta$,
the above inequality is obviously true. If $r\leq\beta$,
then $\sqrt{r}+\sqrt{1-\beta}>(r+\beta)/(2\sqrt{r})$ is equivalent
to $r>(1-\sqrt{1-\beta})^2$. So the detectable region is
$r>(1-\sqrt{1-\beta})^2$ in this case.
In summary of cases 1--4, the union of the detectable regions in the
above four cases is $r>\varrho^*(\beta)$, as illustrated in Figure~\ref{figure5}.
\setcounter{figure}{4}
\begin{figure}
\includegraphics{1168f05.eps}
\caption{The detectable subregions of the $L_2$ threshold test. Case
1: the union of \{I, II, III, IV\}; Case 2: the region is I; Case 3:
the union of \{II, III, IV, V, VI, VII\}; Case 4: the union of \{I, II,
III, VI, VII\}.}
\label{figure5}
\end{figure}
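The four-case analysis can also be checked numerically: scanning $s$
for each $(r,\beta)$ and testing whether one of the leading-order
exponents of $\Delta_2$ above is positive recovers the boundary. The
sketch assumes the usual closed form $\varrho^*(\beta)=\beta-1/2$ for
$\beta\le3/4$ and $(1-\sqrt{1-\beta})^2$ for $\beta>3/4$.
\begin{verbatim}
import numpy as np

def rho_star(beta):
    return beta - 0.5 if beta <= 0.75 else (1 - np.sqrt(1 - beta)) ** 2

def detectable(r, beta, s_grid=np.linspace(1e-3, 0.999, 2000)):
    """True if some threshold s makes the Delta_2 exponent positive,
    using the leading-order rates of Cases 1-4."""
    for s in s_grid:
        if s <= r:                                   # Cases 1 and 2
            e = (1 + s - 2 * beta) / 2 if s <= beta else (1 - beta) / 2
        else:                                        # Cases 3 and 4
            d2 = (np.sqrt(s) - np.sqrt(r)) ** 2
            e = (1 - d2 - beta) / 2 if s > d2 + beta else \
                0.5 - beta + r - (np.sqrt(s) - 2 * np.sqrt(r)) ** 2 / 2
        if e > 0:
            return True
    return False

beta = 0.8
rs = np.linspace(0.01, 1.0, 200)
print(min(r for r in rs if detectable(r, beta)), rho_star(beta))
\end{verbatim}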
Now we are ready to prove the theorem. We only need to show that the
sum of type I and II
errors of the maximal test goes to 0 when $r>\varrho^*(\beta)$. Because
the maximal test is of
asymptotic $\alpha_n$ level, it suffices to show that
the power goes to 1 in the detectable region
as $n\to\infty$ and $\alpha_n\to0$. Recall that the $\alpha_n$ level
rejection region is $R_{\alpha_n}=\{\hat{\mathcal{M}}_{2n}>\mathcal
{B}_{\alpha_n}\}$.
From Theorem \ref{asy-gumbel}, we notice that $\mathcal{B}_{\alpha_n}=O\{(\log\log
p)^{1/2}\}:=L_p^*$.
Then, it is sufficient if
\begin{equation}
\label{4:infty} P\bigl(\mathcal{M}_{2n}/L_p^*\to\infty
\bigr)\to1 \qquad\mbox{as } n\to\infty
\end{equation}
at every $(\beta,r)$ in the detectable region. Since $\mathcal
{M}_{2n}\geq\mathcal{T}_{2n}(s)$ for any $s\in\mathcal{S}$,
(\ref{4:infty}) is true if for any point in the detectable
region, there exists a $\lambda_p(s)=2s\log p$ such that
\begin{equation}
\label{T2ninfinity} \mathcal{T}_{2n}(s)/L_p^*\stackrel{p} {
\to}\infty.
\end{equation}
Therefore, we want to show
\begin{eqnarray}
\label{4:infty:1}&& \frac{T_{2n}(s)-\mu_{T_{2n},0}(s)}{L_p^*\sigma_{T_{2n},0}(s)}\nonumber\\
&&\qquad= \biggl(\frac
{T_{2n}(s)-\mu_{T_{2n},1}(s)}{L_p^*\sigma_{T_{2n},1}(s)}+\frac{\mu
_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)}{L_p^*\sigma_{T_{2n},1}(s)}
\biggr)\frac
{\sigma_{T_{2n},1}(s)}{\sigma_{T_{2n},0}(s)}
\\
&&\qquad\stackrel{p} {\to} \infty.\nonumber
\end{eqnarray}
Because ${(T_{2n}(s)-\mu_{T_{2n},1}(s))}/{L_p^*\sigma
_{T_{2n},1}(s)}=o_p(1)$ and $\sigma_{T_{2n},0}(s)\leq\sigma
_{T_{2n},1}(s)$, (\ref{4:infty:1}) is true if
${(\mu_{T_{2n},1}(s)-\mu_{T_{2n},0}(s))}/{L_p^*\sigma
_{T_{2n},1}(s)}\to
\infty$. As shown earlier in the proof, for every $(r,\beta)$ in
the detectable region, there exists an $s$ such that $\frac{\mu
_{T_{2n},1}(s)-\mu_{T_{2n},0}(s)}{L_p\sigma_{T_{2n},1}(s)}\to\infty$
for any slowly varying function $L_p$. This
concludes (\ref{T2ninfinity}) and hence (\ref{4:infty}), which
completes the proof of part (i).
(ii) Note that
\[
\hat{\mathcal{M}}_{2n}=\max_{s\in\mathcal{S}_n}
\biggl\{\bigl(\mathcal {T}_{2n,1}(s){R}_2(s)+
\Delta_{2,0}(s;r,\beta)\bigr)\tilde{e}_2(s)+
\frac
{\mu
_{T_{\gamma n},0}(s)-\hat{\mu}_{T_{\gamma n},0}(s)}{\tilde{\sigma
}_{T_{\gamma n}, 0}(s)} \biggr\},
\]
where ${R}_2(s),\tilde{e}_2(s)$ and $\mathcal{T}_{2n,1}(s)$ are defined
in (\ref{th4decom}) and
\begin{eqnarray}
\label{L2-100} \Delta_{2,0}(s;r,\beta)&=&\frac{\mu_{T_{2n},1}(s)-\mu
_{T_{2n},0}(s)}{{\sigma}_{T_{2n}, 0}(s)}\nonumber\\
&=&(s\pi\log
p)^{1/4}(r/s)p^{1/2-\beta+s/2}I(r>s)
\\
&&{} +\frac{s^{1/4}(\pi\log p)^{-1/4}}{2(\sqrt{s}-\sqrt {r})}p^{1/2-(\sqrt{s}-\sqrt{r})^2-\beta+s/2}I(r< s).
\nonumber
\end{eqnarray}
If $r<\varrho^*(\beta)$, then $r<\beta$ and $r<(r+\beta)^2/(4r)$. Hence,
\begin{eqnarray*}
{R}_2(s)=\cases{
1 + o(1), &\quad\hspace*{-3pt}$\mbox{if } s\leq r;$
\vspace*{2pt}\cr
1 + o(1), & \quad\hspace*{-3pt}$\mbox{if } r<s \leq
\displaystyle\frac{(r+\beta)^2}{4r};$
\vspace*{2pt}\cr
s^{{1}/{4}}(\sqrt{s}-\sqrt{r})^{-{{1}/{2}}}p^{{{1}/{2}}( 2 \sqrt{s r}-r -
\beta
)}\bigl\{ 1 +
o(1)\bigr\}, & \quad\hspace*{-3pt}$\mbox{if } s>\displaystyle \frac{(r+\beta)^2}{4r}.$}
\end{eqnarray*}
It is also noticed that $r<\varrho^*(\beta)$ implies that $(r+\beta
)^2/(4r)>1$. Therefore, for all $s\in\mathcal{S}_n$, ${R}_2(s)=1+o(1)$.
If $r<\varrho^*(\beta)$, then $r<2\beta-1$. Hence, $\max_{s\leq
r}\Delta
_{2,0}(s;r,\beta)\leq L_pp^{1/2-\beta+r/2}\to0$ as $p(n)\to\infty$.
If $r<\varrho^*(\beta)$ and $r<1/4$, then $r<\beta-1/2$. It follows
that, for all $s>r$,
\[
1/2-(\sqrt{s}-\sqrt{r})^2-\beta+s/2=1/2+r-\beta-\tfrac{1}{2}(
\sqrt {s}-2\sqrt{r})^2\leq1/2+r-\beta<0.
\]
If $r<\varrho^*(\beta)$ and $r>1/4$, then for all $s>r$,
\[
1/2-(\sqrt{s}-\sqrt{r})^2-\beta+s/2
\leq1/2+r-\beta-\tfrac{1}{2}(1-2\sqrt{r})^2<0.
\]
Hence, $\max_{s> r}\Delta_{2,0}(s;r,\beta)\leq L_pp^{1/2+r-\beta}I\{
r<1/4\}+L_pp^{1-\beta-(1-\sqrt{r})^2}I\{r>1/4\}\to0$ as $p(n)\to
\infty$.
In summary, we have ${R}_2(s)=1+o(1)$ and\break $\max_{s\in\mathcal
{S}_n}\Delta_{2,0}(s;r,\beta)\to0$ if $r<\varrho^*(\beta)$. Therefore,
together with assumption~(\ref{eq:cri1}), $\hat{\mathcal
{M}}_{2n}=\max_{s\in\mathcal{S}_n}\mathcal{T}_{2n,1}(s)\{1+o_p(1)\}$.
We note that, by employing the same argument of Theorem \ref{asy-gumbel}, it can be
shown that
\[
P \Bigl(a(\log p)\max_{s\in\mathcal{S}}\mathcal{T}_{2n,1}(s)-b(
\log p,\delta)\leq x \Bigr)\to\exp\bigl(-e^{-x}\bigr),
\]
where $\delta$ is defined just above (\ref{Tn1order}). Then the power
of the test
\begin{eqnarray*}
&&P \bigl(\hat{\mathcal{M}}_{2n}>\bigl(\mathcal{E}_{\alpha_n}+b(
\log p,\eta )\bigr)/a(\log p) \bigr)\nonumber
\\
&&\qquad=P \bigl(\hat{\mathcal{M}}_{2n}>\bigl(\mathcal{E}_{\alpha_n}+b(
\log p,\delta )\bigr)/a(\log p) \bigr)\bigl\{1+o(1)\bigr\}\\
&&\qquad=\alpha_n
\bigl\{1+o(1)\bigr\}\to0.\nonumber
\end{eqnarray*}
Thus, the sum of type I and II errors goes to 1. This completes the
proof of part~(ii).
\end{pf*}
\begin{pf*}{Proof of Theorem \ref{th4}}
We first prove that $\hat{\mathcal{M}}_{\gamma n}\sim\max_{s\in
\mathcal
{S}_n}\Delta_{\gamma,0}(s;r,\beta)$, which will be proved in two parts:
\begin{eqnarray}
\hat{\mathcal{M}}_{\gamma n}&\sim&{\mathcal{M}}_{\gamma n} \quad\mbox{and}\label{th4part1}
\\
{\mathcal{M}}_{\gamma n}&\sim&\max_{s\in\mathcal{S}_n}\Delta
_{\gamma
,0}(s;r,\beta),\label{th4part2}
\end{eqnarray}
where ${\mathcal{M}}_{\gamma n}=\max_{s\in\mathcal{S}_n}\mathcal
{T}_{\gamma n}(s)=\max_{s\in\mathcal{S}_n} \{\mathcal
{T}_{\gamma
n,1}(s)R_\gamma(s)+\Delta_{\gamma,0}(s;r,\beta) \}$.
To show (\ref{th4part1}), note the decomposition for $\hat{\mathcal
{M}}_{\gamma n}$ in (\ref{th4decom}). Let $\tilde{\mathcal
{M}}_{\gamma
n}=\max_{s\in\mathcal{S}_n} \{\mathcal{T}_{\gamma n}(s)\tilde
{e}_\gamma(s) \}$. We can first show that $\hat{\mathcal
{M}}_{\gamma
n}\sim\tilde{\mathcal{M}}_{\gamma n}$ because of the following inequality:
\begin{eqnarray*}
&&\tilde{\mathcal{M}}_{\gamma n}- \biggl|\max_{s\in\mathcal{S}_n}
\frac{\mu
_{T_{\gamma n},0}(s)-\hat{\mu}_{T_{\gamma n},0}(s)}{\tilde{\sigma
}_{T_{\gamma n}, 0}(s)} \biggr|
\\
&&\qquad\leq\hat{\mathcal{M}}_{\gamma n} \leq\tilde{
\mathcal{M}}_{\gamma n}+ \biggl|\max_{s\in\mathcal{S}_n} \frac
{\mu_{T_{\gamma n},0}(s)-\hat{\mu}_{T_{\gamma n},0}(s)}{\tilde
{\sigma
}_{T_{\gamma n}, 0}(s)} \biggr|.
\end{eqnarray*}
Under condition (\ref{eq:cri1}), that is, $\max_{s\in\mathcal{S}}\tilde
{\sigma
}^{-1}_{T_{\gamma n}, 0}(s) (\mu_{T_{\gamma n},0}(s)-\hat{\mu
}_{T_{\gamma n},0}(s) )=o(1)$, we have $\hat{\mathcal{M}}_{\gamma
n}\sim\tilde{\mathcal{M}}_{\gamma n}$.
Second, we can show ${\mathcal{M}}_{\gamma n}\sim\tilde{\mathcal
{M}}_{\gamma n}$. Note the following inequality:
\begin{eqnarray*}
&&\min \Bigl\{{\mathcal{M}}_{\gamma n}\min_{s\in\mathcal{S}_n}\tilde
{e}_\gamma(s),{\mathcal{M}}_{\gamma n}\max_{s\in\mathcal{S}_n}
\tilde {e}_\gamma(s) \Bigr\} \\
&&\qquad\leq\tilde{\mathcal{M}}_{\gamma n}
\leq\max \Bigl\{{\mathcal{M}}_{\gamma n}\min_{s\in\mathcal
{S}_n}
\tilde {e}_\gamma(s),{\mathcal{M}}_{\gamma n}\max
_{s\in\mathcal{S}_n} \tilde {e}_\gamma(s) \Bigr\}.
\end{eqnarray*}
Under conditions (C.1)--(C.4), $\min_{s\in\mathcal{S}_n}\tilde{e}_\gamma
(s)=\max_{s\in\mathcal{S}_n}\tilde{e}_\gamma(s)=1+o(1)$. So we have
\[
\tilde{\mathcal{M}}_{\gamma n}\sim{\mathcal{M}}_{\gamma n}\min_{s\in
\mathcal{S}_n}\tilde{e}_\gamma(s)\sim{\mathcal{M}}_{\gamma n}
\max_{s\in\mathcal{S}_n}\tilde{e}_\gamma(s)\sim{\mathcal{M}}_{\gamma n}.
\]
In summary, we have $\hat{\mathcal{M}}_{\gamma n}\sim\tilde
{\mathcal
{M}}_{\gamma n}\sim{\mathcal{M}}_{\gamma n}$, and therefore
$\hat{\mathcal{M}}_{\gamma n}\sim{\mathcal{M}}_{\gamma n}$.
The path leading to (\ref{th4part2}) is the following. First of all,
it can be shown using an argument similar to the one used in the proof
of Theorem \ref{asy-gumbel} that
\[
P \Bigl(a(\log p)\max_{s\in\mathcal{S}}\mathcal{T}_{\gamma
n,1}(s)-b(
\log p,\delta)\leq x \Bigr)\to\exp\bigl(-e^{-x}\bigr),
\]
where $\delta=\max\{\eta-r+2r\sqrt{1-\eta}-\beta,\eta\}
I(r<1-\eta)+\max
\{1-\beta,\eta\}I(r>1-\eta)$. Thus, for $\gamma=0, 1$ and $2$,
\begin{equation}
\label{Tn1order} \max_{s\in\mathcal{S}}\mathcal{T}_{ \gamma n,1}(s)=O_p
\bigl\{\log ^{1/2}(\log p)\bigr\}.
\end{equation}
Equations (\ref{delta0-sigma-ratio-L2-1}) to (\ref
{delta0-sigma-ratio-L2-8}) in the following reveal that for all $s\in
\mathcal{S}$ and $r>\varrho^*(\beta)$, we can classify $s\in
\mathcal
{S}$ into two sets $\mathcal{S}_1$ and $\mathcal{S}_2$ such that
\begin{eqnarray*}
&&\phantom{i}\mathrm{(i)}\quad \Delta_{\gamma,0}(s;r,\beta)\gg {R}_\gamma(s)\qquad \mbox{for $s\in
\mathcal{S}_1$}
\\
&&\mathrm{(ii)}\quad \Delta_{\gamma,0}(s;r,\beta)\to0\quad \mbox{and}\quad {R}_\gamma
(s)=1+o(1) \qquad\mbox{for $s\in\mathcal{S}_2$},
\end{eqnarray*}
where ``$c\gg d$'' means that $c/d=L_pp^{\xi}$ for some $\xi>0$.
Because $r$ is above the detection boundary $\varrho^*(\beta)$, there
exists at least one $s\in\mathcal{S}_1$ such that $\Delta_{\gamma
,0}(s;r,\beta)\to\infty$.
Hence,
\begin{equation}
\label{Delta-R-order} \max_{s\in\mathcal{S}}\Delta_{\gamma,0}(s;r,\beta)=
\max_{s\in
\mathcal
{S}_1}\Delta_{\gamma,0}(s;r,\beta)\gg \max
_{s\in\mathcal
{S}}{R}_\gamma(s).
\end{equation}
Namely, the maximum of $\Delta_{\gamma,0}(s;r,\beta)$ is reached on the set
$\mathcal{S}_1$, where $\Delta_{\gamma,0}(s;r,\beta)$ diverges at a much
faster rate than that of ${R}_\gamma(s)$, if the latter ever diverges.
Let $A(s)=\mathcal{T}_{2n,1}(s){R}_\gamma(s)$. Combining (\ref
{Tn1order}) and (\ref{Delta-R-order}), we have
\[
\Bigl|\max_{s\in\mathcal{S}_n} \mathcal{T}_{\gamma n,1}(s)\Bigr|\Bigl|\max
_{s\in
\mathcal{S}_n}{R}_\gamma(s)\Bigr|=o_p\Bigl\{\max
_{s\in
\mathcal{S}_n}\Delta_{\gamma,0}(s;r,\beta)\Bigr\}.
\]
This implies that $|\max_{s\in\mathcal{S}_n} A(s)|=o_p\{\max_{s\in
\mathcal{S}_n}\Delta_{\gamma,0}(s;r,\beta)\}$.
Together with the following inequality:
\begin{eqnarray*}
\max_{s\in\mathcal{S}_n} \Delta_{\gamma,0}(s;r,\beta)-\Bigl|
\max_{s\in
\mathcal{S}_n} A(s)\Bigr|&\leq&\max_{s\in\mathcal{S}_n}\bigl\{A(s)+
\Delta _{\gamma
,0}(s;r,\beta)\bigr\}
\\
&\leq&\max_{s\in\mathcal{S}_n} \Delta_{\gamma,0}(s;r,\beta)+\max
_{s\in
\mathcal{S}_n} A(s);
\end{eqnarray*}
we conclude that (\ref{th4part2}) holds.
It remains to show the existence of $\mathcal{S}_1$ and $\mathcal{S}_2$
in arriving at (\ref{Delta-R-order}). We only prove it for the $L_2$
test. To that end, we compare the relative order between $\Delta
_{2,0}(s;r,\beta)$ and ${R}_2(s)$ for three regions above the detection
boundary $\varrho^{\ast}(\beta)$: (i)~$r>\beta$, (ii) $r \in(2
\beta-1,
\beta]$ and (iii) $r\in(\varrho^{\ast}(\beta), 2\beta-1]$. In regions
(i) and (ii) with $r>(1-\sqrt{1-\beta})^2$, we can show that
\begin{eqnarray}
\label{delta0-sigma-ratio-L2-1}\quad \Delta_{2,0}(s;r,\beta)&\gg& {R}_2(s)\qquad
\mbox{for $s> 2\beta-1$};
\\
\Delta_{2,0}(s;r,\beta)&\to&0 \quad\mbox{and}\quad {R}_2(s)=1 + o(1)
\qquad\mbox {for $s\leq2\beta-1$}.\label{delta0-sigma-ratio-L2-2}
\end{eqnarray}
In region (ii) with $r<(1-\sqrt{1-\beta})^2$,
we have
\begin{eqnarray}\qquad\quad
\label{delta0-sigma-ratio-L2-3} \Delta_{2,0}(s;r,\beta)&\gg &{R}_2(s)\qquad
\mbox{for $2\beta-1<s\leq (2\sqrt {r}+\sqrt{1+2r-2\beta})^2$},
\\
\Delta_{2,0}(s;r,\beta)&\to&0 \quad\mbox{and}\quad {R}_2(s)=1 + o(1)\qquad
\mbox{for $s\leq2\beta-1$}\label{delta0-sigma-ratio-L2-4}
\nonumber
\\[-8pt]
\\[-8pt]
&& \eqntext{\mbox{and $(2\sqrt{r}+\sqrt{1+2r-2\beta})^2<s< 1$}.}
\end{eqnarray}
Consider next $r\in(\varrho^*(\beta), 2\beta-1]$ in region (iii). If
$r>(1-\sqrt {1-\beta})^2$, define $D_1=(0,(2\sqrt{r}-\sqrt{1+2r-2\beta})^2)$ and
$D_2=((2\sqrt{r}-\sqrt{1+2r-2\beta})^2,1)$. Then it may be shown that
\begin{eqnarray}
\Delta_{2,0}(s;r,\beta)&\to&0 \quad\mbox{and}\quad {R}_2(s)=1 + o(1)
\qquad\mbox {for $s\in D_1$}; \label{delta0-sigma-ratio-L2-5}
\\
\Delta_{2,0}(s;r,\beta)&\gg& {R}_2(s)\qquad \mbox{for $s\in
D_2$}.\label
{delta0-sigma-ratio-L2-6}
\end{eqnarray}
If $r<(1-\sqrt{1-\beta})^2$, define $D_3=(0,(2\sqrt{r}-\sqrt {1+2r-2\beta
})^2)\cup ((2\sqrt{r}+\break \sqrt{1+2r-2\beta})^2,1)$ and
$D_4=((2\sqrt {r}-\sqrt{1+2r-2\beta})^2, (2\sqrt{r}+ \sqrt{1+2r-2\beta})^2)$.
Then, it
can be shown that
\begin{eqnarray}
\Delta_{2,0}(s;r,\beta)&\to&0 \quad\mbox{and}\quad {R}_2(s)=1 + o(1)\qquad
\mbox{for $s\in D_3$}; \label{delta0-sigma-ratio-L2-7}
\\
\Delta_{2,0}(s;r,\beta)&\gg& {R}_2(s)\qquad \mbox{for $s\in
D_4$}.\label
{delta0-sigma-ratio-L2-8}
\end{eqnarray}
The results in (\ref{delta0-sigma-ratio-L2-1})--(\ref
{delta0-sigma-ratio-L2-8}) indicate that in each region listed above,
$\max\Delta_{2,0}(s;r,\beta)$ will be attained in situations covered by
(\ref{delta0-sigma-ratio-L2-1}), (\ref{delta0-sigma-ratio-L2-3}),
(\ref
{delta0-sigma-ratio-L2-6}) and (\ref{delta0-sigma-ratio-L2-8}), which
together imply (\ref{Delta-R-order}).
Next, we compute $\Delta_{\gamma,0}(s;r,\beta)$ for the HC ($\gamma=0$)
and the $L_1$ ($\gamma=1$) test. For the HC test, let
$G_{p,1}(s)=P(Y_{i,n}>2s\log p)$. Under assumptions (C.1)--(C.2),
applying the large deviation results [\citet{Petrov}], it may be shown that
\begin{eqnarray*}
G_{p,1}(s)&=&\bigl\{\bigl(2\sqrt{\pi\log p}(\sqrt{s}-\sqrt{r})
\bigr)^{-1}p^{-(\sqrt
{s}-\sqrt{r})^2}\bigr\} \bigl\{1+o(1)\bigr\} \qquad\mbox{if $r<s$
and}
\\
G_{p,1}(s)&=&\bigl\{1-\bigl(2\sqrt{\pi\log p}(\sqrt{r}-\sqrt {s})
\bigr)^{-1}p^{-(\sqrt
{r}-\sqrt{s})^2}\bigr\} \bigl\{1+o(1)\bigr\}\qquad \mbox{if $r>s.$}
\end{eqnarray*}
The mean and variance of $T_{0 n}(s)$ under $H_0$ are
$\mu_{T_{0 n},0}(s)=(\sqrt{s\pi\log p})^{-1}\times p^{1-s}\{1+o(1)\}$ and $
{\sigma}_{T_{0 n},0}^2(s)=(\sqrt{s\pi\log p})^{-1}p^{1-s}\{
1+o(1)\}$ respectively.
The mean and variance of $T_{0 n}(s)$ under the $H_1$ as specified in
(C.4) are, respectively,
\begin{eqnarray*}
\mu_{T_{0 n}, 1} (s)&=& p^{1-\beta
}G_{p,1}(s)+
\bigl(p-p^{1-\beta
}\bigr)2\bar {\Phi}\bigl(\lambda^{1/2}_p(s)
\bigr)\bigl\{1+o(1)\bigr\} \qquad\mbox{and}
\\
{\sigma}^2_{T_{0 n},1}(s) &=&
p^{1-\beta
}G_{p,1}(s) \bigl(1-G_{p,1}(s)\bigr)\\
&&{}+p
\bigl(1-p^{-\beta}\bigr)2\bar{\Phi}\bigl(\lambda ^{1/2}_p(s)
\bigr) \bigl(1-2\bar{\Phi}\bigl(\lambda^{1/2}_p(s)\bigr)
\bigr).
\end{eqnarray*}
These imply that, up to a factor $\{1+o(1)\}$,
\begin{eqnarray}
&&\mu_{T_{0 n},1}(s)-\mu_{T_{0 n},0}(s)\nonumber\\
&&\qquad=\bigl\{\bigl(2\sqrt{\pi\log p}(
\sqrt {s}-\sqrt{r})\bigr)^{-1}p^{1-\beta-(\sqrt{s}-\sqrt
{r})^2}I(r<s)
\\
&&\hspace*{160pt}\qquad{}+p^{1-\beta
}I(r>s)
\bigr\}\nonumber
\end{eqnarray}
and
\[
{R}_0(s)=\cases{
1, \qquad
\mbox{if } s\leq(\sqrt{s}-\sqrt{r})^2+\beta;
\vspace*{2pt}\cr
s^{1/4}\bigl|2(\sqrt{s}-\sqrt{r})\bigr|^{-{{1}/{2}}}p^{-{
{1}/{2}}((\sqrt{s}-\sqrt
{r})^2+\beta-s)},\vspace*{2pt}\cr
\quad\hspace*{22pt}\mbox{if } s> (\sqrt{s}-\sqrt{r})^2+\beta.}
\]
Hence,
\begin{eqnarray}
\label{HC-100}\qquad \Delta_{0,0}(s;r,\beta)
&=&
\frac{s^{1/4}}{2(\sqrt{s}-\sqrt{r})(\pi\log p)^{1/4}}p^{1/2-\beta
-(\sqrt{s}-\sqrt{r})^2+s/2}I(r<s)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&{} +(s\pi\log p)^{1/4}p^{1/2-\beta+s/2}I(r>s).
\end{eqnarray}
For the $L_1$ test, the mean and variances of $T_{1n}(s)$ under $H_1$
specified in (C.4) are, respectively, up to a factor $1+o(1)$,
\begin{eqnarray*}
\mu_{T_{1n},1}(s)&=&\frac{\sqrt{s}}{\sqrt{2\pi}(\sqrt{s}-\sqrt {r})}p^{1-\beta-(\sqrt{s}-\sqrt{r})^2}I(r<s)
\\
&&{} +(\sqrt{2r\log p}) p^{1-\beta}I(r>s)+\sqrt{2/\pi}p^{1-s}
\qquad\mbox{and}
\\
{\sigma}^2_{T_{1n},1}(s)&=&\frac{s\sqrt{\log p}}{\sqrt{\pi}(\sqrt {s}-\sqrt{r})}p^{1-\beta-(\sqrt{s}-\sqrt{r})^2}I(r<s)
+p^{1-\beta}I(r>s)\\
&&{}+2\sqrt{(s/\pi)\log p}p^{1-s}.
\end{eqnarray*}
It follows that, up to a factor $1+o(1)$,
\begin{eqnarray}
\mu_{T_{1n},1}(s)-\mu_{T_{1n},0}(s)& =&\frac{\sqrt{s}}{\sqrt{2\pi}(\sqrt{s}-\sqrt{r})}p^{1-\beta
-(\sqrt
{s}-\sqrt{r})^2}I(r<s)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&{}+(
\sqrt{2r\log p}) p^{1-\beta}I(r>s)
\end{eqnarray}
and
\[
{R}_1(s)=\cases{
1, \qquad
\mbox{if } s\leq r \mbox{ and } s\leq\beta;
\vspace*{2pt}\cr
\displaystyle(\sqrt{2})^{-1}\biggl(\frac{s}{\pi}\biggr)^{-{1}/{4}}(\log
p)^{-{1}/{4}}p^{(s-\beta
)/2},
\vspace*{2pt}\cr
\qquad\quad \mbox{if } s\leq r \mbox{ and } s
\geq\beta;
\vspace*{2pt}\cr
1,
\qquad\mbox{if } s> r \mbox{ and } s\leq(\sqrt{s}-\sqrt{r})^2+
\beta;
\vspace*{2pt}\cr
s^{{1}/{4}}(2\sqrt{s}-2\sqrt{r})^{-{{1}/{2}}}p^{-{
{1}/{2}}((\sqrt
{s}-\sqrt
{r})^2+\beta-s)},\vspace*{2pt}\cr
\qquad\quad\mbox{if } s> r \mbox{ and } s> (\sqrt{s}-\sqrt{r})^2+
\beta.}
\]
Therefore,
\begin{eqnarray}\label{L1-100}
\qquad\Delta_{1,0}(s;r,\beta)
&=&
\frac{s^{{1}/{4}}}{2(\pi\log p)^{{1}/{4}}(\sqrt {s}-\sqrt {r})}p^{1/2-\beta-(\sqrt{s}-\sqrt{r})^2+s/2}I(r<s)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&{} +(s\pi\log p)^{{1}/{4}}(r/s)^{{1}/{4}} p^{1/2-\beta
+s/2}I(r>s).
\end{eqnarray}
Replicating the above proof for the $L_2$ test, it can be shown that,
for $\gamma=0$ and~1,
\[
\hat{\mathcal{M}}_{\gamma n}\sim\max_{s\in\mathcal{S}_n}
\Delta _{\gamma
,0}(s;r,\beta).
\]
At last, we will compare $\max_{s\in\mathcal{S}_n}\Delta_{\gamma
,0}(s;r,\beta)$ for $\gamma=0,1$ and 2 when $r>2\beta-1$.
Let $s^*_n=\arg\max\{s\dvtx s\in\mathcal{S}_n\cap(2\beta-1,r)\}$ be a
threshold in $(2\beta-1,r)$ that is closest to $r$. Then the maximal
value of $\Delta_{\gamma,0}(s,r,\beta)$ over $\mathcal{S}_n$ is
attained at $s^*_n$. Note that such $s_n^*$ exists with probability 1.
To show this point, it is enough to show that $\mathcal{S}_n\cap
(2\beta
-1,r)\neq\varnothing$, which is equivalent to showing that $P(\bigcup_{i=1}^p\{Y_{i,n}\in((4\beta-2)\log p, 2r\log p)\})\to1$. Let $\{
k_1,\ldots,k_q\}\subset\{1,\ldots,p\}$ be a subsequence such that $q\to
\infty$ and $k_{\min}=\min_j|k_j-k_{j-1}|\to\infty$. Let
$D_n=\prod_{i=k_1}^{k_q}P(\{Y_{i,n}\in((4\beta-2)\log p, 2r\log p)^c\})-P(\bigcap_{i=k_1}^{k_q}\{Y_{i,n}\in((4\beta-2)\log p, 2r\log p)^c\})$. By
mixing assumption (C.5) and the triangle inequality, it can be seen
that $|D_n|\leq q\alpha_Z(k_{\min})\to0$ as $n\to\infty$. Then it
follows that
\begin{eqnarray*}
&&P\Biggl(\bigcup_{i=1}^p\bigl
\{Y_{i,n}\in\bigl((4\beta-2)\log p, 2r\log p\bigr)\bigr\}\Biggr)
\\
&&\qquad\geq P\Biggl(\bigcup_{i=k_1}^{k_{q}}\bigl
\{Y_{i,n}\in\bigl((4\beta-2)\log p, 2r\log p\bigr)\bigr\} \Biggr)
\\
&&\qquad=1-P\Biggl(\bigcap_{i=k_1}^{k_q}\bigl
\{Y_{i,n}\in\bigl((4\beta-2)\log p, 2r\log p\bigr)^c\bigr\}
\Biggr)
\\
&&\qquad=1-\prod_{i=k_1}^{k_q}P\bigl(\bigl
\{Y_{i,n}\in\bigl((4\beta-2)\log p, 2r\log p\bigr)^c\bigr\}
\bigr)+D_n\to1,
\end{eqnarray*}
where we used $P(\{Y_{i,n}\in((4\beta-2)\log p, 2r\log p)^c\})<1$ for
all $i=1,\ldots,p$. Comparing (\ref{L2-100}), (\ref{HC-100}) and
(\ref
{L1-100}), we see that $\Delta_{0,0}(s_n^*;r,\beta)< \Delta
_{1,0}(s_n^*;\break r,\beta)<\Delta_{2,0}(s_n^*;r,\beta)$.
It follows that, for $r>2\beta-1$,
\[
\max_{s\in\mathcal{S}_n}\Delta_{0,0}(s;r,\beta)<
\max_{s\in
\mathcal
{S}_n}\Delta_{1,0}(s;r,\beta) < \max
_{s\in\mathcal{S}_n}\Delta _{2,0}(s;r,\beta).
\]
Therefore, asymptotically with probability 1,
$\hat{\mathcal{M}}_{0 n} < \hat{\mathcal{M}}_{1n}< \hat{\mathcal{M}}_{2n}$,
which results in $\Omega_{0}(r,\beta)\leq\Omega_1(r,\beta)\leq
\Omega
_2(r,\beta)$. This completes the proof.
\end{pf*}
\end{appendix}
\section*{Acknowledgments}
The authors thank
the Editor, an Associate Editor and two referees for insightful and
constructive comments which have improved the presentation of the paper.
We are also very grateful to Dr. Jiashun Jin and Dr. Cun-Hui
Zhang for stimulating discussions.
\begin{supplement}[id=suppA]
\stitle{A supplement to ``Tests alternative to
higher criticism for high-dimensional means under sparsity and
column-wise dependence''\\}
\slink[doi]{10.1214/13-AOS1168SUPP}
\sdatatype{.pdf}
\sfilename{aos1168\_supp.pdf}
\sdescription{The supplementary material contains proofs for
Proposition~\ref{chap4-cor2} and Theorem \ref{th1} in Section~\ref{sec2}.}
\end{supplement}
\section{Introduction}
Solitons are ubiquitous in one-dimensional non-linear dynamical systems \cite{ drezin} and have manifested in diverse physical systems \cite{sol, bis, poly_acy}, ranging from polyacetylene \cite{ pkp_poly} to optical fibres \cite{has, GPagr} and Bose-Einstein condensates (BECs) in cigar shaped BEC \cite{pethick, kev}.
Characteristically, they take the form of dark \cite{densch, busch}, grey \cite{sh} and bright \cite{kh, mcd, aspect} solitons, respectively in the repulsive and attractive interaction regimes of BEC. Their existence and stability depend on the balancing effect of dispersion with non-linearity.
On the other hand, the kink and anti-kink are solitonic solutions of the $\lambda \phi^4$-theory in one dimension \cite{rajaram}, owing their stability to their topological charges \cite{um, vivek}. They interpolate between the two degenerate vacuua in the broken parity phase \cite{man}, while passing through the normal phase, where the order parameter vanishes. The analogous solutions for the complex order parameter are the previously mentioned, dark and grey solitons of the non-linear Schr\"{o}dinger equation (NLSE), describing the mean-field dynamics of the one dimensional BEC. Akin to the kink and anti-kink, these extended objects asymptotically connect points on the degenerate vacuum manifold of the broken global $U(1)$ symmetry phase of BEC \cite{pethick, stringary}. The grey soliton is the well-known Lieb mode, first obtained in the second-quantized model \cite{lieb}, and subsequently found as an exact solution of the NLSE \cite{Kul, 1r27, jack}, which connects points in the vacuum manifold without passing through the normal phase. It has been observed in BEC \cite{sh, becker, romero}, and later as excitations in water body \cite{sol_ocn, Ca} and other physical systems \cite{DNC, GBC}. Generically, these excitations are composed of the hyperbolic tangent function, which asymptotically takes positive and negative values. The well-known bright soliton has a characteristic profile in the form of the hyperbolic secant function, vanishing at spatial infinity \cite{kiv}.
Recently, a new form of quantum matter, the quantum droplet \cite{1r5, 1r5_ptr}, has been identified in BEC, after taking into consideration the beyond-mean-field (BMF) Lee-Huang-Yang (LHY) quantum correction \cite{lee}. The droplets are self-bound and exist in free space, recently observed experimentally in a number of systems like Bose-Bose (BB) mixtures \cite{1r14, 1r15, 1r16} and dipolar BEC \cite{3r1}.
Experimentally, variation of the coupling from weak to strong attractive domains yields a transition from expanding to localized state, which agrees with theoretical BMF predictions. The equilibrium properties of quantum droplets, e.g., size, critical number density and binding energy, have been found in agreement with the theoretical predictions \cite{FSM1, FSM2}.
The dimension crossovers from 1D$\rightarrow$3D \cite{Lavoine} and 2D$\rightarrow$3D in binary BECs have also been analysed \cite{malo}. Droplet with constituents having electric and magnetic moments have been explored \cite{cm}, where it is observed that the droplets experience a crossover from the cigar to pancake shapes in terms of relative dipole orientations.
A new type of droplet in binary magnetic gases has been recently found, where a single-component self-bound droplet can couple with other magnetic components that are not in the droplet phase \cite{sm}.
The study of collective excitations in the form of Goldstone modes, corresponding to the spontaneously broken internal and translational symmetries, is the subject of many recent investigations \cite{malo2, pfou, petrov, pfou1}.
It is well-established that the BMF effect emerges from the zero-point energy summation of the Bogoliubov modes and depends on the dimensionality of the system \cite{1r5, 1r5_ptr, 1r16}. The BMF correction in three dimensions is attractive and scales as $n^{5/2}$ ($n$ being the number density), whereas in one dimension the scaling is proportional to $n^{3/2}$ and repulsive in nature \cite{1r16, malo}. The corresponding mean field equation is the amended NLSE, having cubic and quadratic non-linearities.
In one dimension, the exact self-bound droplet solution has been obtained by Petrov and Astrakharchik \cite{1r5_ptr}, revealing its characteristic flat-top nature. As is well-known, modulation instability (MI) dictates the stability of the propagating modes in non-linear systems. Recently, the growth rate of MI for the one-dimensional quantum droplet and the BB mixture has been investigated \cite{pkp, kare}. The possible MI of this system has been studied, with and without spin-orbit coupling, indicating the parameter domain where the instability of propagating plane waves may lead to solitonic excitations. The parameter domain beyond MI is conducive to the generation of solitons and soliton trains \cite{hulet1}.
Here, we explicitly demonstrate kink-like solitons in the droplet regime, similar in structure to dark and grey solitons, but also showing significant differences. These solitons necessarily require the presence of a constant background, which is exactly one-third of the uniform condensate amplitude: $(\frac{\sqrt{2m}}{\pi \hbar})g^{3/2}/\delta g$. They smoothly connect the normal zero-condensate vacuum with the droplet configuration and occur in pairs, similar to kink/anti-kinks in the $\lambda\phi^4$-theory \cite{rajaram}. However, these excitations asymptotically approach vanishing condensate density only at one end, unlike kink/anti-kinks and dark/grey solitons, which connect the degenerate vacua in both asymptotic domains. They are similar to compactons \cite{5r2}, manifesting at the liquid-normal material boundary, having strictly compact support with vanishing derivatives. However, the quantum liquid solitons smoothly interpolate between the droplet and the normal phase. The fact that the droplet has a broad flat density profile makes it plausible that these solitonic excitations may appear at the boundary of the droplets, wherein the condensate phase smoothly joins with the normal phase.
The paper is organized as follows. In Sec. II, the theory of quantum droplets is briefly described, leading to the amended Gross-Pitaevskii (GP) equation in the form of an NLSE with quadratic coupling, governing the droplet dynamics in one dimension. Sec. III is devoted to obtaining the consistency relations for the kink-like solution with background and its stability criteria. It is to be mentioned that the consistency conditions lead to a chemical potential that is bounded below, identical to the lowest chemical potential of droplets. In Sec. IV, we consider a more general exact solution of kink-like behavior and find the inter-connection between the background, amplitude and healing length of the solitonic profile. In Sec. V, using a suitable background subtraction, we obtain the ground state energy and momentum of the kink-like soliton. Finally, we conclude with a summary of results and future directions for investigation.
\section{Theory of Quantum Droplets in One Dimension}
Quantum droplets have been observed by exploiting the Bose-Bose mixture of ultracold atoms. The formation of quantum droplets is the result of a balance between the MF and BMF interactions such that $0<\delta g \ll g$, where $g=g_{\uparrow \uparrow}=g_{\downarrow \downarrow}$ is the intra-particle interaction, each component has an equal number of atoms, $n = n_{\uparrow} = n_{\downarrow}$, and $\delta g = g_{\uparrow \downarrow}+g$ is the inter-particle interaction.
It has been illustrated earlier that, in three dimensions, the MF and BMF interactions scale differently with density: $E_{MF} \propto n^2$ and $E_{BMF} \propto n^{5/2}$, whereas, in one dimension, the energy of the effective one-component BEC is,
\begin{equation} \label{eqn:1}
E_{1D} = \frac{\delta g n^2}{2}-\frac{2\sqrt{2m}\left(gn\right)^{3/2}}{3\pi \hbar},
\end{equation}
with $n_0 = 8g^3/(9 \pi^2 \delta g^2)$ being the equilibrium density and chemical potential $\mu_0 = -\delta g n_0/2$.
Thus, at a certain density, the MF and BMF effects compensate each other, leading to the emergence of stable droplets. The negative chemical potential is essential, as it prevents self-evaporation of the droplets.
In one dimension, the mean-field dynamical equation takes the form of the amended GP equation:
\begin{equation} \label{eqn:2}
i \hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} +\delta g~ \left|\Psi\right|^2\Psi- \frac{\sqrt{2m}}{\pi \hbar}g^{3/2} \left|\Psi\right|\Psi,
\end{equation}
characterized by cubic and quadratic non-linearities. This mean-field expression arises from an effective two-component BEC, with repulsive intra-species and attractive inter-species interactions in the region $0<\delta g \ll g$.
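The dynamics generated by Eq. (\ref{eqn:2}) can be explored with a standard split-step Fourier scheme, treating the cubic-quadratic non-linearity in real space and the kinetic term in Fourier space. The sketch below works in units $\hbar=m=1$; the grid, time step, couplings and perturbed flat-top initial state are illustrative choices only.
\begin{verbatim}
import numpy as np

g, dg = 5.0, 0.5                      # couplings, hbar = m = 1
L, N, dt = 80.0, 1024, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def step(psi):
    """Strang split-step: half kinetic, full nonlinear, half kinetic."""
    psi = np.fft.ifft(np.exp(-0.25j * dt * k ** 2) * np.fft.fft(psi))
    V = dg * np.abs(psi) ** 2 - (np.sqrt(2.0) / np.pi) * g ** 1.5 * np.abs(psi)
    psi = psi * np.exp(-1j * dt * V)
    return np.fft.ifft(np.exp(-0.25j * dt * k ** 2) * np.fft.fft(psi))

n0 = 8 * g ** 3 / (9 * np.pi ** 2 * dg ** 2)     # equilibrium density
psi = np.sqrt(n0) + 0.01 * np.exp(-x ** 2) + 0j  # perturbed uniform state
for _ in range(2000):
    psi = step(psi)
\end{verbatim}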
We consider the transformation $\Psi(x, t) = \Phi(x-vt)\; exp\, \big[i(k\,x - \frac{\mu}{\hbar}\, t) \big]$ and substitute it into the amended NLSE. Comparing the imaginary and real parts leads to two coupled equations. The first is the continuity equation given below:
\begin{equation}
\frac{\partial \Phi}{\partial t} + \frac {\hbar k}{m}\, \frac{\partial \Phi}{\partial X} = 0,
\label{eqn:3}
\end{equation}
with $X = x-vt$ and $v = \hbar k/ m.$ The second equation originating from the real part reads as,
\begin{equation}
-\frac{\hbar^2}{2m} \frac{\partial^2 \Phi}{\partial X^2} + \delta g {|\Phi|}^2 \Phi -\frac{\sqrt{2m}}{\pi \hbar}g^{3/2} |\Phi| \Phi - \bar\mu \Phi = 0.\label{eqn:4}
\end{equation}
The term $\hbar^2 k^2/2m$, coming from the kinetic part of amended GP equation, has been absorbed into $\bar\mu = \mu - \hbar^2 k^2/2m$.
Considering a constant solution $\Phi(X) = \sqrt{\sigma_0}$, Eq. (\ref{eqn:4}) leads to the following
\begin{equation}
\delta g\,\sigma_0- \frac{\sqrt{2m}}{\pi \hbar}g^{3/2}\, \sqrt{\sigma_0} - \mu = 0, \label{eqn:5}
\end{equation}
which gives two distinct allowed values,
\begin{equation}
\sqrt{\sigma_0}_{\pm} = \alpha \Bigl[ 1 \pm \Big(1+ \frac{\mu}{\delta g\, \alpha^2} \Big)^{1/2} \Bigr], \label{eqn:6}
\end{equation}
with $\alpha = \big(\frac{\sqrt{2m}}{\pi \hbar}\big)\, g^{3/2}/2\delta g$.
It is noteworthy that, for the particular value $\mu = -(\frac{4m}{9\pi^2 \hbar^2})\, g^3/\delta g,$ the resulting $\sqrt{\sigma_0}_+$ and $\sqrt{\sigma_0}_-$ are the same, whereas for any other $\mu$ the constant solution leads to two different backgrounds, which establishes its non-analytic nature.
In the next section, we take the soliton solution having the kink-like behavior and illustrate its properties.
\section{Kink-like Soliton}
It is to be noted that the amended NLSE has non-linearities similar to the weak and strong coupling domains of BEC \cite{sal1}. Although kink-type solutions exist for the NLSE, no analogous solution is known in the strong coupling case. Therefore we consider a general ansatz having a propagating kink-like excitation, phase-locked with a non-zero background:
\begin{equation}
\Phi(X)=A+B~ \tanh\big( X/\xi \big),
\label{ans1}
\end{equation}
where $A, B$ are constants with $\xi$ being the healing length. Substituting the above in Eq. (\ref{eqn:4}), one gets the following relationships:
\begin{eqnarray}
&\frac{1}{\xi^2} = \Big(\frac{\delta g m}{\hbar^2}\Big) B^2, \qquad \quad A = \Big(\frac{\sqrt{2m}}{3\pi \hbar}\Big)\, \frac{g^{3/2}}{\delta g},\nonumber\\
&B = \pm\Big(\frac{\sqrt{2m}}{3\pi \hbar}\Big)\, \frac{g^{3/2}}{\delta g}, \qquad \frac{\hbar^2\,k^2}{2m} = \mu +\Big(\frac{4\,m}{9\pi^2 \hbar^2}\Big)\, \frac{g^3}{\delta g},
\label{eqn:7}
\end{eqnarray}
which illustrate that the healing length is inversely proportional to the soliton amplitude and is controlled by the MF coupling. From the above equation, it is evident that the solitons necessarily reside on a constant condensate background, which is exactly one-third of the uniform condensate amplitude. They occur in pairs, $B=\pm A$, vanishing asymptotically at one end, and connecting the normal vacuum with the quantum droplets located at the origin, with an appropriate translation of the soliton profile.
This result is in agreement with Petrov's flat bulk region for the droplet \cite{1r5_ptr}. Moreover, for positive MF coupling the dispersion relation yields the minimum chemical potential $\mu_{\min} = \mu_0= -(\frac{4m}{9\pi^2 \hbar^2})\, g^3/\delta g < 0,$ which establishes that the chemical potential is bounded below and is identical to the self-trapped droplet condition. The soliton amplitude lies between zero and twice the constant background, taking the form,
\begin{equation}
\psi_{\pm} \left(x,t\right)=\frac{\sqrt{2m}}{3\pi \hbar} \Big(\frac{g^{3/2}}{ \delta g}\Big)
\left[1\pm\tanh\,( X/\xi)\right]\; exp\, [i\left(kx- \mu t/\hbar\right)], \label{eqn:8}
\end{equation}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6 cm,height=4.2 cm]{phi.pdf}
\includegraphics[width=6 cm,height=4.0 cm]{phi2.pdf}
\end{center}
\caption{(Color online) The variations of envelope profile and density are shown as function of position ($x$) for different values of $g$ (yellow: 5, blue: 7) with $\delta g =0.5$ at $t=1, v=2$ and $\hbar = m =1,$ interpolating smoothly the vacuum and droplet. The solid curve is for kink, while the dashed one is the anti-kink. }
\label{fig1}
\end{figure}
where $\xi = \frac{3\pi\hbar^2}{\sqrt{2}\,m} ( \delta g/ g^3)^{1/2}$, as follows from Eq. (\ref{eqn:7}). In Fig. \ref{fig1}, variations of the soliton profile and its density are depicted as a function of position for different values of the BMF repulsion ($g$). The solid and dashed curves are for the two solitons travelling with equal velocity in opposite directions, ``$\psi_{s/d}=A\;\big[1\pm\text{tanh}\, \big(\frac{x\,\pm\, vt}{\xi}\big) \big]$". They can appear at the two boundaries of the droplet in a static configuration. The amplitude of the solitons increases with increase of the repulsive interspecies interaction.
Interestingly, the $B = 0$ and $B \rightarrow 0$ limits of the solution are not the same. For $B=0$, one obtains the constant backgrounds of Eq. (\ref{eqn:6}), whereas for $B \rightarrow 0$ the profile retains the constant background $\frac{\sqrt{2m}}{3\pi \hbar}\, (g^{3/2}/\delta g)$.
We now investigate the stability of these solitons through the Vakhitov-Kolokolov (VK) stability analysis \cite{vk}, which determines the `cost' of increasing the particle number density incrementally. For this purpose, we determine the number density,
\begin{equation}
n=N/L=\frac{1}{L}\int_{-L/2}^{L/2} \left|\psi \right|^2~dx = \Big(\frac{4m}{9 \pi^2 \hbar^2}\Big)\, \frac{g^3}{\delta g},
\end{equation}
The variation of the number density with respect to the chemical potential determines the VK stability. For $L\rightarrow \infty$, the VK stability criterion yields,
\begin{equation}
\frac{\partial n}{\partial \mu}=-\frac{1}{ \delta g}, \label{eqn:9}
\end{equation}
showing the stability of the kink-like soliton.
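The consistency relations (\ref{eqn:7}) can also be checked directly: inserting the profile of Eq. (\ref{eqn:8}) into the stationary equation (\ref{eqn:4}), with $\bar\mu$ at its minimum value $\mu_0$, the residual should vanish up to discretization error. A minimal numerical sketch in units $\hbar=m=1$ (couplings illustrative) follows.
\begin{verbatim}
import numpy as np

g, dg = 5.0, 0.5                                   # hbar = m = 1
A = np.sqrt(2.0) * g ** 1.5 / (3 * np.pi * dg)     # background, Eq. (8)
xi = 1.0 / (np.sqrt(dg) * A)                       # from 1/xi^2 = dg * B^2
mu = -2 * dg * A ** 2                              # mu_0, rest frame

X = np.linspace(-20 * xi, 20 * xi, 4001)
h = X[1] - X[0]
phi = A * (1 + np.tanh(X / xi))                    # kink profile, Eq. (9)
d2 = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / h ** 2
res = (-0.5 * d2 + dg * phi ** 3
       - (np.sqrt(2.0) / np.pi) * g ** 1.5 * phi ** 2 - mu * phi)
print(np.abs(res[2:-2]).max())                     # O(h^2), -> 0 with grid
\end{verbatim}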
Before closing this section, it is worth pointing out that the kink-type solution with background is distinct from the grey soliton of the NLSE with a complex profile, $\Psi_{GS} = \sqrt{\sigma}_0 \big(i \sin \theta + \cos \theta\, \tanh\, (x\,\cos \theta)\big)$. In the present case, the analogous solution corresponding to the grey soliton is: $\Phi (X) = A \,\sin \theta + B\, \cos\theta \,\tanh \,\big( X \cos \theta/\xi \big)$, with $\theta$ being the free parameter. Explicit calculation leads to $A \,\sin\theta = B\, \cos\theta = \frac{\sqrt{2m}}{3\pi \hbar}\, (g^{3/2}/\delta g),$
which yields the same kink-like profile given in Eq. (\ref{eqn:8}). In the following section, we discuss the more general kink-like soliton and its properties.
\section{A General Kink-like Soliton}
The solution of the amended GP equation in the form of NLSE with quadratic coupling can be obtained through M\"obius or the so-called fractional transformation, connecting the solutions of solvable non-linear systems to the general ones under consideration \cite{p1}. The desired solution of ANLSE is taken as the general Pad\'e-type form \cite{pkp1, pkp2},
\begin{equation}
\Phi(x) = \frac{A + B \, f(x)}{1 + D \,f(x)}, \label{eqn:11}
\end{equation}
where $f(x)$ satisfy the elliptic function equation
\begin{equation}
f'' (x) + a f^3 (x) + c f (x) = 0. \label{eqn:12}
\end{equation}
In the present case, we consider the kink-type scenario with $f(X) = \tanh\,(X/\xi)$. The emerging inter-connecting relationships between $A, B$ and $D$, which follow from the amended GP equation with $g_1 \equiv \delta g$ and $g_2 \equiv \frac{\sqrt{2m}}{\pi \hbar}\, g^{3/2}$ denoting the cubic and quadratic couplings of Eq. (\ref{eqn:4}), are given as follows:
\begin{equation}
\begin{aligned}
&\frac{\hbar^2}{m\, \xi^2} (B - AD) - g_1 B^3 + g_2 B^2 D + \mu B D^2 = 0,\\
&\frac{\hbar^2}{m\, \xi^2} (B - AD) D - 3g_1 A B^2 + g_2 B^2 + 2g_2 ABD + \mu AD^2 + 2\mu BD = 0,\\
&g_1 B^3 + 3g_1 A^2 B - 2g_2 AB- g_2 A^2 D - g_2 B^2 D - 2\mu AD - \mu B - \mu B D^2= 0,\\
&g_1 A^3 - g_2 A^2 + 3g_1 A B^2 - 2g_2 ABD- g_2 B^2 - \mu A - \mu AD^2 -2 \mu B D = 0.
\label{eqn:13}
\end{aligned}
\end{equation}
A straightforward but lengthy calculation leads to,
\begin{equation}
(A - B)^2 = \frac{\mu}{g_1} (1-D)^2,\qquad
g_1 (A^2 + B^2) - g_2 (B + AD) - \mu(1+ D^2) = 0, \label{eqn:14}
\end{equation}
explicitly, yielding the background $A$ and soliton amplitude $B$, in terms of $D$
\begin{equation}
\begin{aligned}
A &= B \pm (\mu/g_1)^{1/2} \;[1-D]\\
B &= \frac{1}{2g_2} \Bigl[ \Big(\pm 2 (\mu g_1)^{1/2} (D-1) + g_2 (D+1)\Big) \pm \Big( \big(\pm 2 (\mu g_1)^{1/2} (D-1) \\&+ g_2 (D+1)\big)^2 - 4 g_1 \big( \mp 2g_2 (\mu/g_1)^{1/2} D(1-D) - 4 \mu D \big)\Big)^{1/2} \Bigr],
\end{aligned} \label{eqn:15}
\end{equation}
and the healing length is given by
\begin{equation}
\begin{aligned}
\frac{1}{ \xi^2} = \Big(\frac{m}{\hbar^2}\Big)\, \frac{1}{(3-D^2)}\Big[3g_1 B^2 - 2g_2 BD - \mu D^2\Big].
\end{aligned} \label{eqn:16}
\end{equation}
The constant $D$ can be obtained by exploiting the relationships given in Eq.(\ref{eqn:15}) and Eq. (\ref{eqn:16}), together with the first two equations in Eq. (\ref{eqn:13}).
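The coupled algebraic system (\ref{eqn:13}) is also amenable to direct numerical root finding. A sketch using \texttt{scipy.optimize.fsolve} on the four residuals, with unknowns $(A,B,D,1/\xi^2)$ and $\bar\mu$, $g_1$, $g_2$ as inputs (units $\hbar=m=1$; the seed is an illustrative perturbation of the Sec. III solution), is given below.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

g, dg = 5.0, 0.5                                   # hbar = m = 1
g1, g2 = dg, (np.sqrt(2.0) / np.pi) * g ** 1.5
mu = -(4.0 / (9 * np.pi ** 2)) * g ** 3 / dg       # mu_0

def residuals(v):
    A, B, D, q = v                                 # q = 1/xi^2
    return [q*(B - A*D) - g1*B**3 + g2*B**2*D + mu*B*D**2,
            q*(B - A*D)*D - 3*g1*A*B**2 + g2*B**2 + 2*g2*A*B*D
                + mu*A*D**2 + 2*mu*B*D,
            g1*B**3 + 3*g1*A**2*B - 2*g2*A*B - g2*A**2*D - g2*B**2*D
                - 2*mu*A*D - mu*B - mu*B*D**2,
            g1*A**3 - g2*A**2 + 3*g1*A*B**2 - 2*g2*A*B*D - g2*B**2
                - mu*A - mu*A*D**2 - 2*mu*B*D]

A0 = np.sqrt(2.0) * g ** 1.5 / (3 * np.pi * dg)
sol = fsolve(residuals, [1.1 * A0, 0.9 * A0, 0.1, dg * A0 ** 2])
print(sol)         # D ~ 0 recovers the tanh soliton of Sec. III
\end{verbatim}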
In the next section, we derive the energy and momentum of the kink-like soliton. We also discuss the BMF effect on the energy density and show how the divergent background energy is countered.
\section{ Energy and Momentum of the Kink-like Soliton}
The ground state energy can be computed following the standard approach \cite{1r27}, with the Hamiltonian density,
\begin{equation}
\mathcal{H}=\frac{\hbar^2}{2m}\left|\frac{\partial \Psi}{\partial x} \right|^2+\frac{1}{2}\, \delta g\left|\Psi\right|^4 - \frac{2}{3}\,\frac{\sqrt{2m}}{\pi \hbar}g^{3/2} \left|\Psi \right|^3.
\label{eqn:17}
\end{equation}
We consider $\Psi(x, t) = \sqrt{\sigma}\, exp\,[i(kx - \omega t)]$, with $\sqrt{\sigma} = A + B \, \tanh\,(X/\xi)$, and use it in the amended NLSE. As will be seen later, $\sigma$ is positive semi-definite for our ansatz solution. From the imaginary and real parts, one gets,
\begin{equation}
\begin{aligned}
&\frac{\partial \sqrt{\sigma}}{\partial t} = - v\,\frac{\partial \sqrt{\sigma}}{\partial X} \qquad \quad\text{with}\quad v = \frac{\hbar k}{m},\\
&\frac{\hbar^2}{2m}\,\frac{\partial}{\partial X} \Bigl( \frac{\partial \sqrt{\sigma}}{\partial X} \Bigr)^2 = - \Big( \mu - \frac{\hbar^2 k^2}{m}\Big)\, \frac{\partial \sigma}{\partial X} + \frac{\delta g}{2} \;\frac{\partial \sigma^2}{\partial X} - \frac{2}{3}\; \frac{\sqrt{2m}}{\pi \hbar}g^{3/2}\, \frac{\partial\sigma^{3/2}}{\partial X}.
\end{aligned}
\label{eqn:18}
\end{equation}
As mentioned earlier, the density of a grey soliton is constant at the asymptotic ends. In order to obtain a convergent energy, the constant background density has to be subtracted from the Hamiltonian (see, e.g., \cite{1r27}). For the kink-like case, however, the unequal densities at the two boundaries lead to a non-trivial background subtraction. It is to be noted that the difference in densities between the two asymptotic ends of the kink-like soliton is $\frac{8m}{9\pi^2\hbar^2}( g^3/\delta g^2)$. We now consider the rest-frame with $k = 0$ and integrate the above equation, taking the appropriate boundary densities, to obtain
\begin{equation}
\begin{aligned}
\frac{\hbar^2}{2m}\, \Bigl( \frac{\partial \sqrt{\sigma}}{\partial X} \Bigr)^2 = \frac{\delta g}{2} \sigma^2 - \frac{2}{3}\; \frac{\sqrt{2m}}{\pi \hbar}g^{3/2}\, \sigma^{3/2} - \mu\, \sigma + \frac{32\,m^2}{81\pi^4\hbar^4}\Big(\frac{g^6}{\delta g^3}\Big) + \frac{8m}{9\pi^2\hbar^2} \,\mu\,\Big( \frac{g^3}{ {\delta g}^2}\Big),
\end{aligned} \label{eqn:19}
\end{equation}
The energy density, in terms of $\sigma$, can be written in the following form:
\begin{equation}
\mathcal{H}=\frac{\hbar^2}{2m}\Bigl(\frac{\partial \sqrt{\sigma}}{\partial X}\Bigr)^2+\frac{\delta g}{2}\,\sigma^2 - \frac{2}{3}\,\frac{\sqrt{2m}}{\pi \hbar}g^{3/2} \sigma^{3/2} - \mu \sigma.
\label{eqn:20}
\end{equation}
Substituting Eq.~(\ref{eqn:19}) in Eq.~(\ref{eqn:20}), the energy of the kink-like soliton can be represented as:
\begin{equation}
\begin{aligned}
\mathcal{E} = \int_{- \infty}^{\infty}\, dX\; \mathcal{H} &= \int_{- \infty}^{\infty}\, dX\; \Bigl[g_1 \sigma^2 - \frac{4}{3} \;\frac{\sqrt{2m}}{\pi \hbar}g^{3/2}\, \sigma^{3/2} - 2 \mu\, \sigma \\
&+ \frac{32\,m^2}{81\pi^4\hbar^4}\Big(\frac{g^6}{\delta g^3}\Big) + \frac{8m}{9\pi^2\hbar^2} \,\mu\,\Big( \frac{g^3}{ {\delta g}^2}\Big)\Bigr].
\end{aligned} \label{eqn:21}
\end{equation}
It is important to point out that the last two constant terms are the background energy, which compensates the divergent terms in the energy density. Exploiting the relationships obtained in Eq.~(\ref{eqn:7}), one obtains the final expression for the total energy:
\begin{equation}
\mathcal{E}= \frac{8\sqrt{2} }{27}\, \Big(\frac{m g^2}{\pi^3 \hbar^2}\Big)\, \Bigl(\frac{ g}{\delta g}\Bigr)^{5/2}, \label{eqn:22}
\end{equation}
It is evident from the above that the total energy is controlled by the interaction parameters $\delta g$ and $g$.
The momentum can be obtained by considering the kink-like soliton in a box of finite length $L$ (from $-L/2$ to $L/2$). For the solitonic profile $\Psi(x)$, one gets
\begin{equation}
\begin{aligned}
\mathcal{P} = -\frac{i\hbar}{2}\, \int_{-L/2}^{L/2} dx \Big[\Psi^{\star}\, \frac{\partial \Psi}{\partial x} -\Psi\, \frac{\partial \Psi^{\star}}{\partial x}\Big] = \hbar k\, A^2 \,\big[L - \xi\, \tanh(L/2\xi) \big].
\end{aligned}
\end{equation}
As expected, the first term comes from the background $A\, \exp\,[i(k x -\mu t/\hbar)]$, whereas the second term arises from the soliton contribution. Interestingly, the two contributions have opposite signs, which physically signifies that the background and solitonic profiles are moving in opposite directions. The soliton contribution in the $L \rightarrow \infty$ limit yields
\begin{equation}
\mathcal{P} = - \frac{2 \sqrt{2}}{3\pi}\,mv\, \Big(\frac {g}{\delta g}\Big)^{3/2} \equiv M\,v,
\end{equation}
where we used $v = \frac{\hbar\,k}{m}$ and Eq.~(\ref{eqn:7}); here $M = -\frac{2 \sqrt{2}}{3\pi}\,m\,(g/\delta g)^{3/2}$ is the effective mass, akin to the negative mass observed earlier for the grey soliton in a trap \cite{malo4, kev3, pkp5}.
The energy of the kink-like soliton, in terms of the effective mass, can be represented in the form
\begin{equation}
\mathcal{E}= - \frac{4M}{9\pi^2 \hbar^2}\, \Bigl(\frac{ g^3}{\delta g}\Bigr). \label{eqn:23}
\end{equation}
Remarkably, on comparing the above equation with the lowest value of chemical potential for self-bound droplet $\mu_0 = -\frac{4m}{9\pi^2 \hbar^2}\,( g^3/\delta g)$, one obtains $\mathcal{E} = (M/m) \mu_0$.
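As a quick numerical consistency check, the closed-form expressions above can be evaluated directly. The following sketch uses hypothetical parameter values in units with $\hbar=m=1$ (only the ratio $g/\delta g$ enters the scaling) and verifies that Eq.~(\ref{eqn:22}) and Eq.~(\ref{eqn:23}) agree:
\begin{verbatim}
import math

hbar, m = 1.0, 1.0   # units with hbar = m = 1 (hypothetical choice)
g, dg = 1.0, 0.25    # hypothetical coupling strengths

# effective mass and droplet chemical potential
M   = -(2 * math.sqrt(2) / (3 * math.pi)) * m * (g / dg) ** 1.5
mu0 = -(4 * m / (9 * math.pi**2 * hbar**2)) * (g**3 / dg)

# total kink energy, once from Eq. (22) and once as (M/m) * mu_0
E_direct = (8 * math.sqrt(2) / 27) * (m * g**2 / (math.pi**3 * hbar**2)) \
           * (g / dg) ** 2.5
E_mass = (M / m) * mu0

print(E_direct, E_mass)  # both evaluate to the same positive value
\end{verbatim}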
\section{Conclusion}
In conclusion, we have obtained kink-like quantum soliton solutions in the quantum liquid, which smoothly interpolate between the normal phase and the flat-top droplet. They necessarily require a constant background of quantum nature. Remarkably, the background amplitude is exactly one-third of the uniform condensate amplitude. Evidently, these kink-like solitons are different from kink/anti-kink, dark, and grey solitons, as they asymptotically connect to the normal state at one end, whereas dark and grey solitons represent localized defects. The chemical potential takes the value $\mu_{0}$, which is identical to the condition for the self-trapped flat-top solution. The possibility and nature of extended objects like Ma and Akhmediev breathers \cite{brth1, brth2}, as well as rogue waves, is worth studying for the droplets \cite{ brth3}. The collision of solitons and their role in the evaporation of the droplets is under investigation.
\vskip 1cm
\ack {AS and PKP acknowledge the support from DST, India through Grant No.: DST/ICPS/QuST/Theme-1/2019/2020-21/01. Neeraj is thankful to DST, India for providing fellowship under the INSPIRE programme, the Grant No.: DST/INSPIRE Fellowship/2016/IF160177.}
\vskip 1.4cm
\section{Introduction}
\label{sec:intro}
Accessing, transmitting, or storing medical data volumes can be a challenging task because the filesizes often get very large. Therefore, downscaled versions of the original signal can be very useful, e.g., in telemedical applications, for tasks like browsing and fast previewing.
Subband coding provides an appropriate way to achieve scalability features without additional overhead \cite{lnt2011-23}. Using the wavelet transform, the signal is decomposed into a lowpass (LP) and a highpass (HP) band with the energy concentrated in the LP band. Blur and ghosting artifacts in the LP band caused by the motion of the CT or MRT volumes can be compensated by incorporating adequate motion compensation (MC) methods directly into the wavelet transform. This adaptation to the signal leads to a higher visual quality of the LP band and a better energy compaction in fewer transform coefficients. While the first property is very important when the LP band is to be used in medical applications as a downscaled representative, the second property results in a higher coding efficiency \cite{958672}.
In this paper we will present a novel approach to use common motion vector fields of a traditional mesh-based MC in a much more efficient way by exploiting the geometric structure of the underlying grid. This way we can guarantee that the number of bits required for encoding the motion information stays the same, while the visual quality of the LP band increases.
Section~\ref{sec:compWT} presents a brief overview of the compensated wavelet lifting, followed by a detailed description of the new edge adaptive graph-based approach for motion compensation in Section~\ref{sec:edge}. Simulation results are shown in Section~\ref{sec:results} followed by a short conclusion in Section~\ref{sec:conclusion}.
\section{Compensated Wavelet Lifting}
\label{sec:compWT}
\begin{figure}[tb]
\begin{scriptsize}
\centering
\psfragscanon
\psfrag{x}{$x$}
\psfrag{y}{$y$}
\psfrag{z}{$z$}
\psfrag{t}{$t$}
\psfrag{1}{$1$}
\psfrag{2}{$2$}
\psfrag{3}{$3$}
\psfrag{4}{$4$}
\psfrag{frac12}{$\frac{1}{2}$}
\psfrag{2t-1}{$2t-1$}
\psfrag{2t}{$2t$}
\psfrag{f1}{$f_{1}$}
\psfrag{f2}{$f_{2}$}
\psfrag{f3}{$f_{3}$}
\psfrag{f4}{$f_{4}$}
\psfrag{f2t-1}{$f_{2t-1}$}
\psfrag{f2t}{$f_{2t}$}
\psfrag{HP1}{$\text{HP}_1$}
\psfrag{LP1}{$\text{LP}_1$}
\psfrag{HP2}{$\text{HP}_2$}
\psfrag{LP2}{$\text{LP}_2$}
\psfrag{HPt}{$\text{HP}_t$}
\psfrag{LPt}{$\text{LP}_t$}
\psfrag{MC}{MC}
\psfrag{IMC}{$\text{MC}^{-1}$}
\psfrag{prediction}{\textcolor{red}{prediction}}
\psfrag{step}{\textcolor{red}{step}}
\psfrag{update}{\textcolor{red}{update}}
\psfragscanoff
\includegraphics[width=\columnwidth]{Bilder/compensated_haar_multi_2D}
\caption{Compensated Haar lifting structure in temporal direction (MCTF).}
\label{fig:MCTF}
\end{scriptsize}
\end{figure}
By factorising the filter representation of the wavelet transform, it is possible to incorporate arbitrary compensation methods directly into the lifting structure~\cite{Sweldens1995}. Fig.~\ref{fig:MCTF} shows the lifting structure of the Haar wavelet and how it can be extended by a compensation method. The decomposition of the signal occurs in temporal direction, which is known as Motion Compensated Temporal Filtering (MCTF)~\cite{334985}. As Fig.~\ref{fig:MCTF} shows, the wavelet lifting consists of two steps, the prediction and the update step. The HP coefficients $\text{HP}_t$ are computed in the prediction step according to
\begin{equation}
\text{HP}_t = f_{2t}-\lfloor \mathcal{W}_{2t-1\rightarrow 2t}(f_{2t-1})\rfloor.
\end{equation}
Instead of a simple subtraction of the reference frame $f_{2t-1}$ from the current frame $f_{2t}$, a predictor, denoted by the warping operator $\mathcal{W}_{2t-1\rightarrow 2t}$, is used. This process is described by MC in Fig.~\ref{fig:MCTF}. To calculate the LP coefficients, the MC has to be inverted. This happens in the update step and is denoted by $\text{MC}^{-1}$. To achieve an equivalent wavelet transform, the index of $\mathcal{W}$ has to be reversed when calculating the LP coefficients
\begin{equation}
\text{LP}_t = f_{2t-1}+\lfloor \frac{1}{2}\mathcal{W}_{2t\rightarrow 2t-1}(\text{HP}_t)\rfloor.
\end{equation}
Floor operators are applied in the transform to obtain an integer-to-integer transform and thus avoid irreversible rounding errors \cite{647983}. For medical data, the lossless reconstruction of the original signal is a very important aspect.
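To make the two lifting steps concrete, the following sketch shows one compensated Haar step and its exact inversion. This is a minimal NumPy illustration; the functions \texttt{warp\_fwd} and \texttt{warp\_bwd} are placeholders of our own for the operators $\mathcal{W}_{2t-1\rightarrow 2t}$ and $\mathcal{W}_{2t\rightarrow 2t-1}$.
\begin{verbatim}
import numpy as np

def haar_forward(f_odd, f_even, warp_fwd, warp_bwd):
    # prediction step: HP = f_2t - floor(W_{2t-1 -> 2t}(f_{2t-1}))
    HP = f_even - np.floor(warp_fwd(f_odd))
    # update step: LP = f_{2t-1} + floor(0.5 * W_{2t -> 2t-1}(HP))
    LP = f_odd + np.floor(0.5 * warp_bwd(HP))
    return LP, HP

def haar_inverse(LP, HP, warp_fwd, warp_bwd):
    # subtracting the identical floor expressions again recovers
    # the original frames without any loss
    f_odd = LP - np.floor(0.5 * warp_bwd(HP))
    f_even = HP + np.floor(warp_fwd(f_odd))
    return f_odd, f_even

# with the identity warp this reduces to uncompensated Haar lifting:
# LP, HP = haar_forward(f1, f2, lambda f: f, lambda f: f)
\end{verbatim}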
\section{Edge Adaptive Graph-Based Motion Compensation}
\label{sec:edge}
To reconstruct the original signal at the decoder side, it is necessary to encode the corresponding LP and HP bands as well as the motion information used for the MC and $\text{MC}^{-1}$. In traditional compensation methods, like block-based or mesh-based approaches \cite{lnt2012-40}, \cite{lnt2014-23}, the motion is stored in the form of motion vector fields.
The novelty of this paper is to exploit the motion vector fields of a mesh-based motion estimation to get the displacements of a compensated grid and to use these displacements instead of the intensity values of the underlying frame for the motion compensation. Thereby the number of bits to code the motion information stays the same, while the visual quality of the LP band will increase by incorporating the geometric structure of the data.
\begin{figure}[tb]
\centering
\psfragscanon
\psfrag{f1}{$f_{2t-1}$}
\psfrag{f2}{$f_{2t}$}
\psfrag{MVF}{motion vectors}
\psfrag{upsample}{calculate positions}
\psfrag{between}{between GPs}
\psfrag{l2}{$e_b$}
\psfrag{l3}{$e_i$}
\psfrag{l4}{$\tilde{e}_i$}
\psfrag{w}{$w(i)$}
\psfrag{JP}{$\mathbf{J_P}$}
\includegraphics[width=0.45\textwidth]{Bilder/all_grids2}
\psfragscanoff
\caption{After the regular grid gets deformed by using the corresponding motion vectors, the subpixel positions have to be calculated. A traditional mesh-based MC uses the interpolated values of $f_{2t-1}$ at the subpixel positions, while the proposed method exploits the varying edge lengths of the compensated grid.}
\label{fig:overview}
\end{figure}
For a 2-D mesh-based compensation, a quadrilateral mesh of arbitrary grid size is put over frame $f_{2t-1}$ and deformed with respect to frame $f_{2t}$; the motion vector of every single grid point (GP) is stored in the corresponding motion vector field. Then the missing positions of the pixels lying between the compensated GPs have to be calculated. An example of this process of deforming and upsampling a grid can be seen in Fig.~\ref{fig:overview}. Due to the deforming process, the links between the GPs change their lengths compared to the regular grid.
\subsection{Graph-Based Motion Compensation}
\begin{figure}[tb]
\centering
\psfragscanon
\psfrag{f1}{$f_{2t-1}$}
\psfrag{f2}{$f_{2t}$}
\psfrag{even}{even frame}
\psfrag{odd}{odd frame}
\includegraphics[width=0.37\textwidth]{Bilder/graph_building}
\psfragscanoff
\caption{4-grid neighborhood for one single node of the even frame connected to the odd frame.}
\label{fig:basic}
\end{figure}
A smart way to easily incorporate the varying edge lengths into the motion compensation is the graph-based wavelet lifting. As introduced in~\cite{narang2009lifting}, it is possible to perform a lifting-based wavelet transform on arbitrary graphs $G(\mathcal V,E)$, where $\mathcal{V}$ is the set of nodes, indexed as $1,2,3,...,N$, and $E$ is the set of links $e$ between the nodes. Every link is defined by a triplet $(i,j,w_{ij})$, where $i$ and $j$ are the start and end nodes, respectively, and $w_{ij}$ is the weight, which is nonzero if $i$ and $j$ are linked to each other. Every node also carries a value, listed in the vector $X$. For the graph-based wavelet transform, a splitting of the nodes into even and odd subsets is required. As a consequence, the corresponding adjacency matrix $\mathbf{A}$ has to be rearranged accordingly
\begin{equation}
X = \begin{pmatrix}
X_\text{even} \\
X_\text{odd}
\end{pmatrix} \quad
\mathbf{A} = \begin{pmatrix}
\mathbf{F} & \mathbf{J} \\ \mathbf{K} & \mathbf{L}
\end{pmatrix},
\end{equation}
where the submatrices $\mathbf{F}$ and $\mathbf{L}$ contain edges, which connect nodes of same parity and the submatrices $\mathbf{J}$ and $\mathbf{K}$ contain all edges, which connect nodes of different parity. By applying
\begin{equation}
\begin{aligned}
H &= X_\text{even}-\mathbf{J_P}\times X_\text{odd}\\
L &= X_\text{odd}+\mathbf{K_U}\times H
\end{aligned}
\label{eq_graph_trafo}
\end{equation}
we get the vector $H$, which contains the HP coefficients, whereas the vector $L$ contains the LP coefficients. The matrices $\mathbf{J_P}$ and $\mathbf{K_U}$ are computed from $\mathbf{J}$ and $\mathbf{K}$ by assigning prediction and update weights depending on the desired application. Since the matrices $\mathbf{F}$ and $\mathbf{L}$ are not used in~(\ref{eq_graph_trafo}), a perfect splitting of the nodes should be aimed for~\cite{hidane2013lifting}.
Considering images as graph signals, as introduced in \cite{Lanz2016}, and applying the graph-based wavelet transform, every single pixel of a frame has to be interpreted as a node. Accordingly, the intensity values of the pixels are stored in vector $X$. To fulfill the constraint of perfect splitting, every frame is assigned as even or odd according to its position in the sequence, as shown in Fig.~\ref{fig:basic}. Then every node of an even frame is linked to a previously defined neighborhood in the odd frame to construct matrix $\mathbf{J}$, and vice versa for matrix $\mathbf{K}$. In Fig.~\ref{fig:basic} a 4-grid neighborhood is chosen, which is shown for node 6 of the even frame connected to the corresponding nodes 2,5,6,7 and 10 in the odd frame.
After a proper weighting of the referenced nodes, which is often based on the spatial or photometric similarity between $i$ and $j$ \cite{6694319}, the degree matrices $\mathbf{D_{J/K}}$ of the weighted matrices $\mathbf{J}$ and $\mathbf{K}$ are computed. Using random walks on graph $G$ yields a Markov chain with the transition matrices
\begin{equation}
\begin{aligned}
\mathbf{P_J} & = \mathbf{D_J^{-1}J}\\
\mathbf{P_K} & = \mathbf{D_K^{-1}K}.
\end{aligned}
\end{equation}
According to \cite{lee2011multiscale}, one element $p_{ij}$ of such a transition matrix equals the probability of moving to node $j$ starting from node $i$. Therefore, the transition matrices $\mathbf{P_J}$ and $\mathbf{P_K}$ are used as prediction matrix $\mathbf{J_P}$ and update matrix $\mathbf{K_U}$, respectively. Then a graph-based Haar wavelet transform can be carried out on images:
\begin{equation}
\begin{aligned}
H &= X_\text{even}-\lfloor\mathbf{J_P}\times X_\text{odd}\rfloor\\
L &= X_\text{odd}+\lfloor\frac{1}{2}\mathbf{K_U}\times H\rfloor.
\end{aligned}
\end{equation}
By rearranging the vectors $H$ and $L$ containing the transform coefficients to the original frame size, the HP and the LP bands are achieved again.
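A compact sketch of these operations, as a minimal illustration with SciPy sparse matrices (the helper names are ours, and the row normalization assumes that every node has at least one inter-frame link), reads:
\begin{verbatim}
import numpy as np
from scipy import sparse

def graph_haar(X_even, X_odd, J, K):
    # random-walk normalization yields the transition matrices
    J_P = sparse.diags(1.0 / np.asarray(J.sum(axis=1)).ravel()) @ J
    K_U = sparse.diags(1.0 / np.asarray(K.sum(axis=1)).ravel()) @ K
    # graph-based Haar lifting with floor operators
    H = X_even - np.floor(J_P @ X_odd)
    L = X_odd + np.floor(0.5 * (K_U @ H))
    return L, H, J_P, K_U

def graph_haar_inverse(L, H, J_P, K_U):
    # the identical floor expressions are removed again, so the
    # reconstruction is lossless
    X_odd = L - np.floor(0.5 * (K_U @ H))
    X_even = H + np.floor(J_P @ X_odd)
    return X_even, X_odd
\end{verbatim}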
\subsection{Conversion of Motion Vector Fields into Adjacency Matrices}
\label{subsec:conversion}
\begin{figure}[tb]
\centering
\psfragscanon
\psfrag{f1}{$f_{2t-1}$}
\psfrag{f2}{$f_{2t}$}
\psfrag{l2}{$e_b$}
\psfrag{l3}{$e_i$}
\psfrag{l4}{$\tilde{e}_i$}
\psfrag{w(i)}{$w_{ij}(e)$}
\psfrag{Jp}{$\mathbf{J_P}$}
\includegraphics[width=0.5\textwidth]{Bilder/new_distances_zoom1}
\psfragscanoff
\caption{A zoom into the deformed grid of $f_{2t-1}$ and the regular grid of $f_{2t}$ shows the various edges which are used to calculate the prediction matrix $\mathbf{J_P}$ according to the weighting function $w_{ij}(e)$ given in~(\ref{equ:w}).}
\label{fig:zoom}
\end{figure}
The varying distances resulting from the process of deforming and upsampling the considered grid are used as edge weights in the prediction matrix $\mathbf{J_P}$ and the update matrix $\mathbf{K_U}$. Since the displacements in the signal can mainly be characterized as contraction and expansion of different kinds of tissue, a proper weighting function is required that assigns higher values to decreased edge lengths and lower values to increased edge lengths.
Due to the deformed mesh and the graph connections, there exist plenty of varying distances that can be used for a proper weighting. As shown in Fig.~\ref{fig:zoom}, mainly two different kinds of connections can be distinguished, namely:
\begin{itemize}[topsep=0mm]
\itemsep0pt
\item[-] inter-frame edges $e_b$: edges between the even and the odd frame
\item[-] intra-frame edges $e_i$: edges inside the odd frame
\end{itemize}
Considering the change of their lengths when the underlying grid gets deformed, a further differentiation can be introduced:
\begin{itemize}[topsep=0mm]
\itemsep0pt
\item[-] regular edges $e_i$: intra-frame edges on a regular grid
\item[-] compensated edges $\tilde{e}_i$: intra-frame edges on a compensated grid
\end{itemize}
Hence, if $\tilde{e}_i$ is smaller than the corresponding $e_i$, a contraction of the specific tissue occurs. Otherwise, the classification of the underlying movement is not unique: if $\tilde{e}_i > e_i$, this can correspond to an expansion of the tissue, or another kind of tissue could be referenced.
\pagebreak
Keeping these definitions in mind a weighting function $w_{ij}(e)$ can be formed which assigns higher weights if a contraction is identified:
\begin{equation}
\begin{aligned}
w_{ij}(e) &= \exp(-\frac{1}{2}\cdot (e_b^2+e^2))\cdot \exp(|e_b - \tilde{e}_i|),\\
e &=
\begin{cases}
\tilde{e}_i & \text{if } \tilde{e}_i<e_i\\
e_b & \text{else}. \\
\end{cases}
\end{aligned}
\label{equ:w}
\end{equation}
Other weighting functions would also be possible. After connecting every node of frame $f_{2t}$ by a previously defined neighborhood to the deformed frame $f_{2t-1}$ and applying the above-described weighting function, the resulting weights $w_{ij}$ are used for calculating matrix $\mathbf{J_P}$, as shown exemplarily in Fig.~\ref{fig:zoom} for node 6 of the even frame $f_{2t}$. Matrix $\mathbf{K_U}$ can easily be found by taking the transpose of $\mathbf{J_P}$.
A further property of this method is based on the fact that the upsampling process results in subpixel coordinates. A large number of investigations showed that it is advantageous to round them to full pixel positions. As a consequence, some end nodes may be referenced multiple times. This procedure contributes to a sharper differentiation in the classification of the underlying movement and therefore to a higher weighting of nodes belonging to a contraction.
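For illustration, the weight computation of one inter-frame link according to~(\ref{equ:w}) can be sketched as follows (a minimal example; the edge lengths are assumed to be given in pixel units):
\begin{verbatim}
import numpy as np

def edge_weight(e_b, e_i, e_i_comp):
    # e_b:      length of the inter-frame edge
    # e_i:      length of the regular intra-frame edge
    # e_i_comp: length of the compensated intra-frame edge (tilde e_i)
    e = e_i_comp if e_i_comp < e_i else e_b   # contraction identified?
    return np.exp(-0.5 * (e_b**2 + e**2)) * np.exp(abs(e_b - e_i_comp))
\end{verbatim}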
\section{Simulation Results}
\label{sec:results}
\begin{table*}[tb]
\centering
\begin{tabular}{c|cccc|cccc}
\multirow{2}{*}{} & \multicolumn{4}{c|}{PSNR LP {[}dB{]}} & \multicolumn{4}{c}{Mean energy HP} \\
& \textit{thorax1} & \textit{thorax2} & \textit{thorax3} & \textit{head} & \textit{thorax1} & \textit{thorax2} & \textit{thorax3} & \textit{head} \\ \hline
no MC & 43.85 & 42.86 & 43.37 & 33.74 & 2862.35 & 6103.93 & 3350.72 & 32101.65 \\
block-based & 47.51 & 46.09 & 46.99 & 38.31 & 827.35 & 1794.19 & 908.34 & 6504.12 \\
mesh-based & 49.92 & 47.90 & 49.40 & 40.31 & 767.66 & 2540.14 & 892.06 & 8256.79 \\
edge adaptive graph-based& 51.62 & 50.16 & 51.53 & 44.17 & 767.24 & 2327.04 & 862.48 & 7635.86 \\ \hline
$\Delta$: proposed to mesh-based & +1.70 & +2.26 & +2.13 & +3.86 & -0.42 & -213.10 & -29.58 & -620.93
\end{tabular}
\caption{The table lists results regarding the visual quality and the mean energy for the considered sequences and various compensation methods. The values are averaged over the whole sequences, while the last row contains a delta between the proposed edge adaptive graph-based and the mesh-based approach.}
\label{tab:results}
\end{table*}
To evaluate the proposed edge adaptive graph-based wavelet transform three \textit{thorax} data sets and one \textit{head} CT data set were used. The \textit{thorax} sequences have a resolution of $512\times 512$~pixels at 12~bit per sample and describe a beating heart over time. \textit{thorax1} and \textit{thorax2} each consist of $10$ timesteps whereas \textit{thorax3} has $127$ frames in spatial direction. The \textit{head} sequence consists of $36$ frames in spatial direction at the same bit depth with a resolution of $448\times 448$ pixels.
For the simulation, one Haar wavelet decomposition step is performed. Besides the proposed edge adaptive graph-based method, the decomposition is done with a block-based, a mesh-based, and without any MC method. For the edge adaptive graph-based MC, a 25-nearest-neighbor graph is used for connecting the particular even and odd frames. The MC of the corresponding grid is calculated according to~\cite{lnt2012-29} with a quadrilateral mesh and a grid size of $8\times 8$~pixels. For comparison, the mesh-based approach is calculated with the same parameters, while the block-based approach uses a block size equal to the grid size and a search range of 8~pixels. To deal with the unconnected pixels appearing at the inversion in the update step of the block-based MC, a nearest-neighbor interpolation was used.
The averaged values regarding the visual quality and the mean energy for the considered sequences and the various compensation methods can be seen in Table~\ref{tab:results}. As expected, the PSNR for all methods is significantly higher than for a wavelet transform without any MC. As a consequence, the mean energy of the corresponding HP band, which can be regarded as the prediction error of a compensated wavelet transform, decreases. The last row of Table~\ref{tab:results} provides a $\Delta$ between the edge adaptive graph-based and the mesh-based approach. The table proves that the edge adaptive graph-based MC outperforms the block-based and mesh-based approaches in terms of visual quality of the LP band, and also the mesh-based approach regarding the mean energy of the HP band.
\begin{table}[tb]
\centering
\begin{tabular}{c|cccc}
\multirow{2}{*}{filesizes {[}kB{]}} & \textit{thorax1} & \textit{thorax2} & \textit{thorax3} & \textit{head} \\
& \multicolumn{4}{c}{no MC} \\ \hline
LP & 715.63 & 828.20 & 9235.87 & 2570.01 \\
HP & 697.42 & 946.60 & 8574.07 & 3002.13 \\
MVF & - & - & - & - \\ \hline
$\Sigma$ & 1413.05 & 1774.80 & 17809.94 & 5572.14 \\
&
\multicolumn{4}{c}{}\\
& \multicolumn{4}{c}{block-based} \\ \hline
LP & 859.64 & 985.98 & 10898.34 & 2849.85 \\
HP & 813.61 & 979.16 & 10356.63 & 2903.70 \\
MVF & 22.28 & 27.63 & 282.52 & 80.12 \\ \hline
$\Sigma$ & 1695.52 & 1965.77 & 21537.54 & 5833.66\\
&
\multicolumn{4}{c}{}\\
& \multicolumn{4}{c}{mesh-based} \\ \hline
LP & 758.75 & 883.86 & 9633.65 & 2759.30 \\
HP & 733.48 & 916.73 & 9337.15 & 2841.11 \\
MVF & 20.18 & 24.34 & 254.20 & 62.66 \\ \hline
$\Sigma$ & 1512.41 & 1824.93 & 19225.00 & 5663.06 \\
&
\multicolumn{4}{c}{}\\
& \multicolumn{4}{c}{edge adaptive graph-based} \\ \hline
LP & 857.40 & 950.61 & 10782.08 & 2825.85 \\
HP & 767.63 & 905.40 & 9555.49 & 2773.93 \\
MVF & 20.18 & 24.34 & 254.20 & 62.66 \\ \hline
$\Sigma$ & 1645.21 & 1880.35 & 20591.77 & 5662.44
\end{tabular}
\caption{The table summarizes the overall filesizes of the single subband volumes and the required motion vector fields (MVF) for the considered sequences and various compensation methods.}
\label{tab:filesizes}
\end{table}
To evaluate the compressibility, the resulting subbands are coded losslessly using the wavelet-based volume coder JPEG~2000. In this simulation, the OpenJPEG \cite{openjpeg} implementation was used. The motion vector fields were coded using the QccPack library \cite{fowler2000qccpack}. For both subband volumes, 4 further wavelet decomposition steps in $xy$-direction were applied. Table~\ref{tab:filesizes} lists the filesizes in kilobytes from lossless coding of the LP and HP bands and the corresponding motion vector fields, together with their sum, for the considered sequences and the various compensation methods. According to \cite{lnt2014-23}, a wavelet transform without any MC is recommended when the quality of the LP band is not of interest. This is confirmed by the first part of Table~\ref{tab:filesizes}, where the overall filesize for a wavelet transform without any MC is much lower compared to the other approaches. The reason for this behavior is that the correlated noisy structures can be exploited by the traditional wavelet transform without an MC. If a compensated wavelet transform is applied, it is no longer possible to exploit the structure of the noise, so the filesizes of the single subbands increase; in addition, the corresponding motion vector fields also have to be coded and contribute to the overall filesizes. However, when the LP band is used as a scalable representation, its quality is important, and it can be increased by the various compensation methods. The mesh-based method achieves a higher PSNR and a smaller filesize compared to the block-based method. The edge adaptive graph-based approach achieves a further improvement of the visual quality by 2.5~dB on average, while the overall filesize decreases compared to the block-based MC but increases slightly compared to the mesh-based MC. However, due to the fact that the edge adaptive graph-based approach uses the same motion vector fields as the mesh-based approach, the bits needed to code the motion vector fields are exactly the same. This can be seen in Table~\ref{tab:filesizes} in the rows regarding the filesizes of the motion vector fields.
A low mean energy of the HP band indicates a good MC of the odd frame. For a high quality LP band, not only a good MC but also a proper $\text{MC}^{-1}$ is required. The block-based approach produces annoying block artifacts in the LP band because of unconnected pixels. There exist various ways to conceal these erroneous structures, like interpolation methods on the motion vector fields as proposed in \cite{bozinovic2005}, or extrapolation methods like FSE as shown in \cite{lnt2013-18}. In contrast, the mesh-based approach has a higher mean energy of the HP band for the sequences \textit{thorax2} and \textit{head}, but the PSNR of the LP band is better compared to the block-based approach according to Table~\ref{tab:results}. The $\text{MC}^{-1}$ of the mesh-based approach accepts an error by using only an approximation term instead of the quite complex inversion in the update step, but still works better than the block-based approach. The proposed edge adaptive graph-based method, however, results in a quite low mean energy of the HP band and a high quality LP band at the same time. The inversion of the edge adaptive graph-based MC is very simple, requiring just the transpose of the prediction matrix.
\section{Conclusion}
\label{sec:conclusion}
In this paper, a novel edge adaptive graph-based compensated wavelet transform for medical data sets was introduced. To avoid the usage of interpolated intensity values for motion compensation, a new approach of exploiting common mesh-based motion vector fields is proposed. By incorporating the geometric structure of the data through the varying edge lengths of the compensated grid, a high quality LP band and a HP band with a low mean energy can be achieved. Since the motion information used for compensation is exactly the same as in the mesh-based approach, the number of bits needed to code the motion vector fields stays the same. As the overall filesize is slightly larger compared to the mesh-based approach, further work aims at the investigation of a proper coding of edge adaptive graph-based compensated subbands. Also the suitability of different weighting functions should be examined.
\section*{ACKNOWLEDGEMENT}
We gratefully acknowledge that this work has been supported by the Deutsche Forschungsgemeinschaft (DFG) under contract number KA 926/4-3.
\bibliographystyle{IEEE}
\section{Introduction}
As a result of a surge in data storage techniques, many data sets can be viewed as being sampled continuously on their domain of definition. It is therefore natural to think of the data points as being objects and embed them into an appropriate mathematical space that accounts for the particular properties and structure of the space. The development of meaningful statistical treatment of these objects is known as functional data analysis and views each random element as a point in a function space. Not surprisingly, this has become an active field of research in recent years. If the random functions can be considered an ordered collection $\{X_t\}_{t \in \znum}$ we call this collection dependent functional data or a functional time series. The function space where each $X_t$ takes its values is usually assumed to be the Hilbert space $L^2([0,1])$, in which case we can parametrize our functions $\tau \mapsto X_t(\tau), \tau \in [0,1]$.
While the literature on classical time series finds its origin in harmonic analysis, the literature on its functional counterpart started in the time domain. The frequency domain arises however quite naturally in the analysis of dependent functional data. The second order dependence structure encodes the relevant information on the shape and smoothness properties of the random curves. It provides a way to optimally extract the intrinsically infinite variation carried by the random functions into a lower-dimensional representation. In case the functional time series is weakly stationary, the second order dependence structure can be specified in the time domain through an infinite sequence of lag $h$ covariance operators
\[
\mathcal{C}_{h}= \mean\big[(X_0-m) \otimes (X_h-m) \big], \qquad h \in \znum,
\]
where $m$ is the mean function of $X$, which is the unique element of $H$ such that
\[
\langle m, g \rangle =\mean\langle X, g \rangle, \qquad g \in H.
\]
Unlike independent functional data, where one only needs to consider the within-curve dynamics as captured by the operator $\mathcal{C}_0$, functional time series require taking into account also \emph{all} the between-curve dynamics as given by the infinite sequence of lag covariance operators $\mathcal{C}_h$ for $ h\ne 0$. The full second order dynamics are then more straightforwardly captured in the frequency domain, and an initial framework for Fourier analysis of random functions was therefore developed in \citet{PanarTav2013a}. Their framework of frequency domain-based inference is however restricted to processes for which the notion of a spectral density operator, defined as the Fourier transform of the sequence of $h$-lag covariance operators,
\[
\mathcal{F}_{\omega} =\frac{1}{2\pi} \sum_{h \in \znum} \mathcal{C}_h e^{-\im h \omega}, \qquad \omega \in [-\pi,\pi],
\]
exists.
In this case, the autocovariance operator at lag $h$ can itself be represented as
\begin{align} \label{covInvFT}
\mathcal{C}_h = \int_{-\pi}^{\pi} e^{\im h \omega} \mathcal{F}_{\omega} d\omega,
\end{align}
where the convergence holds in the appropriate operator norm. For processes of which $\mathcal{F}_{\omega}$ has absolutely summable eigenvalues, \citet{Panar2013b} derived a functional Cram{\'e}r representation and showed that the eigenfunctions of $\mathcal{F}_{\omega}$ allow a harmonic principal component analysis, providing an optimal representation of the time series in finite dimension. An optimal finite dimensional representation of a functional time series was also derived by \cite{Hormann2015} for $L^2_m$-approximable sequences under slightly weaker assumptions. In both works, the spectral density operator can be seen to take the same role as the covariance operator for independent functional data in the classical Karhunen-Lo{\`e}ve expansion \citep{Karhunen1947,Loeve1948}. \Citet{vde16}~extended frequency domain-based inference for functional data to locally stationary processes, thus allowing to relax the notion of weak stationarity and to consider time-dependent second order dynamics through a time-varying spectral density operator. Since frequency domain based inference does not require structural modeling assumptions other than weak dependence conditions, it has proved helpful in the construction of stationarity tests \citep[see e.g.,][]{avd16,vDCD18}.
The aforementioned literature is restricted to processes for which the spectral density operators exist at all frequencies as elements of the space of trace-class operators, $S_1(H)$. This excludes many interesting processes whose dependence structure decays so slowly that the spectral density operator does not necessarily exist for all frequencies, or whose spectral measure has discontinuities. Processes that are characterized by long memory, caused for example by highly persistent cyclical or seasonal components, arise quite naturally in a variety of fields such as hydrology or economics. For instance, the supply and demand curves for electricity prices show a strong daily as well as weekly seasonal pattern. The latter data are moreover known to have discontinuities in the spectral distribution, which is also the case for high-resolution financial data. Since statistical inference techniques for these types of data must also take into account their within- and between-curve dynamics, it is of importance to be able to develop frequency domain analysis under weaker conditions and to investigate under what conditions such an analysis is possible. In this note, we aim to provide the main building blocks for this relaxation and establish functional versions of the two fundamental results that lie at the core of frequency domain analysis for classical stationary time series: Herglotz's Theorem and the Cram{\'e}r Representation Theorem. It is worth remarking that we establish these two results under necessary conditions. These results allow in particular to develop optimal finite dimension reduction techniques for such highly relevant applications, which are currently not available.
The structure of this note is as follows. In section \ref{sec2}, we start by introducing the necessary notation and terminology. In section \ref{Herglotz}, we establish the existence of a functional Herglotz's theorem. For this, we make precise the concept of an operator-valued measure and the notion of operator-valued kernel functions. In section \ref{Gcram}, Herglotz's theorem is used to prove a generalized functional Cram{\'e}r representation for a large class of weakly stationary Hilbert-valued time series, including those with discontinuities in the spectral measure and long-memory processes. Finally,
a Karhunen-Lo{\`e}ve expansion on the frequency components in the Cram{\'e}r representation is applied in order to obtain a harmonic principal component analysis of the series.
\section{Notation and preliminaries}\label{sec2}
\subsection{The function space}
We first introduce some necessary notation. Let $(T,\mathscr{B})$ be a measurable space with $\sigma$-finite measure $\mu$. Furthermore, let $E$ be a Banach space with norm $\norm{\cdot}_E$ and equipped with the Borel $\sigma$-algebra. We then define $L^p_E(T,\mu)$ as the Banach space of all strongly measurable functions $f:T\to E$ with finite norm
\[
\norm{f}_{L^p_E(T,\mu)}=\Big(\int \norm{f(\tau)}^p_E\,d\mu(\tau)\Big)^{\tfrac{1}{p}}
\]
for $1\leq p<\infty$ and with finite norm
\[
\norm{f}_{L^\infty_E(T,\mu)}=\inf_{\mu(N)=0}\sup_{\tau\in T\without N}\norm{f(\tau)}_E
\]
for $p=\infty$. We note that two functions $f$ and $g$ are equal in $L^p$, denoted as $f \overset{L^p}{=} g$, if $\norm{f-g}_{L^p_E(T,\mu)}=0$. If $E$ is a Hilbert space with inner product $\innerprod{\cdot}{\cdot}_E$ then $L^2_E(T,\mu)$ is also a Hilbert space with inner product
\[
\innerprod{f}{g}_{L^2_E(T,\mu)}=\int\innerprod{f(\tau)}{g(\tau)}_E\,d\mu(\tau).
\]
For notational convenience, we simply write $\innerprod{f}{g}$ if no ambiguity about the space $L^2_E(T,\mu)$ is possible. In particular, if $\mu$ is the Lebesgue measure and $E= \cnum$, we simply write $L^p(T)$ and denote its norm by $\|\cdot\|_p$.
We shall extensively make use of linear operators on a Hilbert space $H$. A linear operator on a Hilbert space $H$ is a function $A:H\to H$ that preserves the operations of scalar multiplication and addition. We shall denote the class of bounded linear operators by $\mathcal{L}(H)$ and its norm by $\snorm{\cdot}_{\mathcal{L}}$. Furthermore, the class of trace-class operators and Hilbert-Schmidt operators will be denoted by $S_1(H)$ and $S_2(H)$, respectively and their norms by $\snorm{\cdot}_{1}$ and $\snorm{\cdot}_{2}$. Equipped with these norms $(\mathcal{L}(H), \snorm{\cdot}_\mathcal{L})$ and $(S_1(H), \snorm{\cdot}_1)$ form Banach spaces while $(S_2(H), \snorm{\cdot}_2)$ forms a Hilbert space with inner product $\innerprod{\cdot}{\cdot}_{S_2}$. An operator $A \in \mathcal{L}(H)$ is called self-adjoint if $\innerprod{Af}{g} = \innerprod{f}{A g}$ for all $f,g \in H$, while we say it is non-negative definite if $\innerprod{Ag}{g} \ge 0$ for all $g \in H$. It will be convenient to denote the respective operator subspaces of self-adjoint and non-negative operators with ${(\cdot)}^\dagger$ and ${(\cdot)}^+$, respectively. It is straightforward to verify that $\mathcal{L}(H)^+\subseteq \mathcal{L}(H)^\dagger$ and $S_p(H)^+\subseteq S_p(H)^\dagger$.
\subsection{Functional time series}\label{FTS}
We define a functional time series $\{X_t \colon t \in \znum\}$ as a sequence of random elements on a probability space $(\Omega,\mathcal{A},\prob)$ taking values in $H:=L^2([0,1])$, the separable Hilbert space of square integrable functions. Additionally, we denote by $\mathbb{H}=L^2_{H}(\Omega,\prob)$ the Hilbert space of all $H$-valued random variables $X$ with finite second moment $\mean\norm{X}^2_2<\infty$. We shall assume throughout that $X_t\in\mathbb{H}$. A functional time series $X=(X_t\colon t\in\znum)$ is called strictly stationary if, for all finite sets of indices $J \subset \mathbb{Z}$, the joint distribution of $(X_{t+j}\colon j \in J)$ does not depend on $t\in\znum$. Similarly, $X$ is weakly stationary if its first- and second order moments exist and are invariant under translation in time. Without loss of generality, we shall assume the mean function is zero, i.e., $m=0 \in H$. In this case, the $h$-th lag covariance operators $\mathcal{C}_{h}$ can be defined by
\[
\innerprod{\mathcal{C}_{h}g_1}{g_2}
=\innerprod{ \mean(X_h \otimes X_0)g_1}{g_2} =\mean\big[\innerprod{g_1}{X_0}\,\innerprod{X_h}{g_2}\big], \qquad
g_1,g_2 \in H,\]
and belongs to $S_2(H)$ for all $h \in \znum$. Parseval's identity moreover implies that $\mathcal{C}_0$ belongs to $S_1(H)$. In the setting of time series analysis, it is however natural to assume that the strength of serial correlation decreases as the time distance between the random functions increases and hence that at least $\{\mathcal{C}_h\}_{h \in \znum} \in S_1(H)$. This will be assumed in section \ref{Gcram}.
In the literature, frequency domain analysis for weakly stationary functional time series has so far been restricted to processes whose dependence structure decays at a much faster rate, namely
\begin{align} \label{decayCh}
\sum_{h \in \znum} \snorm{\mathcal{C}_h}_1 < \infty.
\end{align}
Under this condition, it is straightforward to show that the autocovariance operator $\mathcal{C}_h$ forms a Fourier pair with the spectral density operator given by
\[
\mathcal{F}_{\omega} =\frac{1}{2\pi} \sum_{h \in \znum} \mathcal{C}_h e^{-\im h \omega},
\]
where the convergence holds in $\snorm{\cdot}_1$ and the spectral density operator acts on $H$. Given assumption \eqref{decayCh}, a classical Ces{\`a}ro-sum argument \citep[see e.g.,][]{b81} was used by \citet{PanarTav2013a} to derive that $\mathcal{F}_{\omega}$ is non-negative definite and hence belongs to $S_{1}(H)^+$. As already mentioned in the introduction, the autocovariance operator at lag $h$ itself can then be represented as
\begin{align} \label{covInvFT}
\mathcal{C}_h = \int_{-\pi}^{\pi} e^{\im h \omega} \mathcal{F}_{\omega} d\omega,
\end{align}
where the convergence holds in $\snorm{\cdot}_1$. \citet{Panar2013b} showed that a zero mean stationary functional time series $X$ satisfying condition \eqref{decayCh} admits a functional spectral representation of the form
\begin{align}\label{eq:cramerwkp}
X_t = \int_{-\pi}^{\pi} e^{\im\omega t}\,dZ_{\omega} \qquad \text{a.s.},
\end{align}
where $Z_{\omega}$ is a functional orthogonal increment process such that, for fixed $\omega$, $Z_{\omega}$ is a random element in $L^2_\cnum([0,1])$ with $\mean \| Z_{\omega}\|_2^2 = \int_{-\pi}^{\omega}\snorm{ \mathcal{F}_{\lambda}}_1 d \lambda$. If summability conditions as in \eqref{decayCh} do not hold, the spectral density operator does not necessarily exist and, as a consequence, \eqref{covInvFT} and \eqref{eq:cramerwkp} are no longer valid. In the next section, we shall derive a frequency domain representation of $\{\mathcal{C}_h\}$ without imposing any assumptions on its rate of decay. In section \ref{Gcram}, we use this representation to derive a functional Cram{\'e}r representation that can be seen as a true generalization of the classical Cram{\'e}r representation theorem to the function space.
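A simple example illustrating the failure of \eqref{decayCh} is the harmonic process $X_t = e^{\im \lambda_0 t} Z$, $t \in \znum$, where $\lambda_0 \in (-\pi,\pi]$ is fixed and $Z$ is a mean-zero random element of $H$ with $\mean\norm{Z}^2_2<\infty$. Its lag covariance operators are
\[
\mathcal{C}_h = e^{\im \lambda_0 h}\, \mean(Z \otimes Z), \qquad h \in \znum,
\]
so that $\snorm{\mathcal{C}_h}_1$ does not decay in $h$ and no spectral density operator exists; instead, the second order structure is described by an operator-valued point mass $\mean(Z \otimes Z)\,\delta_{\lambda_0}$, which is covered by the theory developed below.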
\section{Herglotz's Theorem on a function space}\label{Herglotz}
In this section, we derive a functional generalization of the classical Herglotz theorem. Note that it is intuitive from \eqref{covInvFT} that if we do not have a density operator, then the measure itself must be operator-valued for the equation to be balanced. A first naive question therefore is whether an operator with the representation
\begin{align}
\int_{-\pi}^{\pi} e^{\im h \omega}\,d\mathscr{F}(\omega), \qquad h \in \znum,
\end{align}
where $\mathscr{F}$ is an operator-valued measure on $[-\pi,\pi]$, exists. Secondly, what properties must an operator-valued function possess to be represented by such an integral? From the classical Herglotz theorem, we know that the non-negative definite complex-valued functions on the integers are precisely those that can be identified to have a frequency domain representation with respect to a finite Radon measure.
\begin{theorem}[Herglotz's theorem] \label{thm:Herglotzscalar}
A function $\gamma(\cdot): \znum \to \cnum$ is non-negative definite if and only if
\[
\gamma(h) =\int_{-\pi}^{\pi} e^{\im h \omega}\,dF(\omega), \qquad h \in \znum
\]
where $F(\cdot)$ is a right-continuous, non-decreasing bounded function on $[-\pi,\pi]$ with $F(-\pi) = 0$.
\end{theorem}
Here, the so-called spectral distribution function $F(\cdot)$ with $F(-\pi)=0$ is uniquely determined by the covariance function $\gamma(h)$, $h \in \znum$. To extend this result to our functional setting, we require the notion of non-negative definiteness of operator-valued functions on $\znum$ as well as definitions of operator-valued measures and of integrals with respect to such measures. For the former, we proceed as in the scalar-valued case with the help of non-negative operator-valued kernels.
\begin{definition} \label{pdkernel}\mbox{}
\begin{romanlist}
\item
A function $c: \znum \times \znum \to \mathcal{L}(H)$ is called a non-negative definite $\mathcal{L}(H)$-valued kernel if
\[
\sum_{i,j=1}^{n} \innerprod{c(i,j)\,g_j}{g_i} \ge 0
\]
for all $g_1,\ldots,g_n \in H$, and $n \in \nnum$.
\item
A function $\mathcal{C}:\mathcal\znum\to\mathcal{L}(H)$ is called non-negative definite if the kernel $c: \mathcal \znum\times\znum\to \mathcal{L}(H)$ defined by $c(i,j) = \mathcal{C}(i-j)$ is a non-negative definite kernel.
\end{romanlist}
\end{definition}
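For instance, in the scalar case $H = \cnum$, every bounded linear operator acts as multiplication by a complex number, and Definition \ref{pdkernel} reduces to the classical notion of a non-negative definite kernel, i.e., $\sum_{i,j=1}^{n} c(i,j)\, g_j \bar{g}_i \ge 0$ for all $g_1,\ldots,g_n \in \cnum$ and $n \in \nnum$.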
Non-negative definite operator-valued kernels are an extremely powerful tool in functional analysis, especially in operator theory and representation theory. A generalization of this concept to general involutive semigroups other than $\znum$ can be found in \citet{Neebbook}. For Hilbert-valued time series, Definition \ref{pdkernel} provides a link between the properties of the covariance kernel and the corresponding covariance operator viewed as a function on the integers. More specifically, we have the following result.
\begin{proposition} \label{prop:nonnegCh}
Let $\{X_t \colon t\in \znum\}$ be a weakly stationary process in $\mathbbm{H}$ with second order dependence structure given by the sequence of $h$-lag covariance operators $\{\mathcal{C}_{h}\}_{h \in \znum}$. Then $\mathcal{C}_{(\cdot)}: \znum \to \mathcal{L}(H)$ is a non-negative definite function.
\end{proposition}
\begin{proof}
The operator-valued kernel $c(i,j) =\mathcal{C}_{i-j}$, $i,j \in \znum$ satisfies
\[
\sum_{i,j=1}^{n} \innerprod{\mathcal{C}_{(i-j)}g_j}{g_i} = \sum_{i,j=1}^{n} \innerprod{\mean (X_{i} \otimes X_{j})g_j}{g_i} =\mean \bignorm{\sum_{i=1}^{n} \innerprod{X_{i}}{ g_i}}^2_2 \ge 0
\]
and is therefore non-negative definite by Definition \ref{pdkernel}.
\end{proof}
\subsection{$\mathcal{L}(H)$-valued measures}
In this subsection, we explain how $\mathcal{L}(H)$-valued measures can be defined. Let us first provide some intuition by making an analogy with positive scalar-valued measures. Recall that a positive measure $\mu$ on a measurable space $(V,\mathcal{B})$ is defined as a countably additive map on $\mathcal{B}$ taking values in the compactification $[0,\infty]$ of the set $\rnum^+:=[0,\infty)$. The compactification is necessary for $\sigma$-additivity to hold (although for finite measures the compact subset $[0,\mu(V)]$ is sufficient). We note that $\rnum^+$ is an example of a pointed convex cone, that is, it is a convex and nonempty subset of $\rnum$ that is closed under addition and nonnegative scalar multiplication and contains the zero element. It is moreover dense in $\rnum^+_{\infty}: =[0,\infty]$. Taking this view, a positive measure can more generally be defined as a countably additive map taking values in the compactification of a pointed convex cone. For the purpose of this paper, we are solely interested in measures taking values in the compactification of $\mathcal{L}(H)^+$, the pointed convex cone of $\mathcal{L}(H)$ consisting of all non-negative elements of $\mathcal{L}(H)$. For the general theory on cone-valued measures we refer to \citet{Neebbook} and \citet{Glockner2003}.
To make such measures meaningful from a practical point of view, it is important that we can describe them by a set of structure preserving maps that map its dual (cone) into $[0,\infty]$, i.e., via continuous linear functionals that are $\rnum^+_{\infty}$-valued. This allows to relate an operator-valued measure to a family of positive scalar-valued measures, where the latter provides us with important information on the operator-valued measure itself. This is however a non-trivial task. Before we introduce the definition, we shall heuristically explain the compactification of $\mathcal{L}(H)^+$, which we shall denote by $\mathcal{L}^+_{\infty}$. We then explain why this yields a meaningful definition of an $\mathcal{L}(H)^+$-valued measure. \\
\subsubsection{Compactification of $\mathcal{L}(H)^+$} \label{sec:compactification}
A natural strategy to obtain a compactification of a cone is to construct a mapping $\mathcal{L}(H)^+ \to \mathcal{L}^+_{\infty}$. This mapping must be such that each element $B \in \mathcal{L}(H)^+$ can be uniquely identified with an element $C \in \mathcal{L}^+_{\infty}$. From functional analysis, we know that we can identify a locally convex space, such as a cone, with its image into its bidual by means of the canonical injection
\[x \mapsto (\lambda \mapsto \lambda(x)),\]
where $x$ is an element of the cone and $\lambda$ of its dual cone. To make this mapping and the dual cone more precise, recall that we can identify the real Banach space $(\mathcal{L}(H)^\dagger, \opnorm{\cdot})$ with the dual space of $(S_{1}(H)^{\dagger}, \snorm{\cdot}_{1})$ via the pairing
\[ \innerprod{\cdot}{\cdot} : \mathcal{L}(H)^\dagger \times S_{1}(H)^\dagger \to \rnum, \qquad (B,A) \mapsto \tr(B A). \tageq \label{eq:pair}\]
The map $B \mapsto \phi_B$ is an isometric isomorphism from the space of bounded hermitian operators into the dual space of the hermitian trace class operators,
i.e., $S_\infty(H)^{\dagger} \to (S_1(H)^{\dagger})^{\star}$. For each $B \in \mathcal{L}(H)^\dagger$, this yields a linear functional on the space of hermitian trace class operators. However, the functionals in $(S_1(H)^{\dagger})^{\star}$ are not necessarily continuous. If we equip $\mathcal{L}(H)^\dagger$ with the weak operator topology, i.e., the coarsest topology that makes all linear functionals of its predual $S_{1}(H)$ continuous as functions on $\mathcal{L}(H)^\dagger$, then the dual pairing provides us with a topological embedding\footnote{Meaning that $\phi$ yields a homeomorphism, i.e., a bijective function with $\phi$ and $\phi^{-1}$ continuous, where $\phi(\mathcal{L}(H)^\dagger)$ inherits the subspace topology from $({\mathcal{L}(H)^\dagger})^{\star \star}$.}
\[ \phi : \mathcal{L}(H)^\dagger \to {(\mathcal{L}(H)^\dagger)}^{\star \star} \qquad \phi(B)(A) = \trace(BA), \tageq \label{eq:topbed} \]
between $\mathcal{L}(H)^\dagger$ and $\phi(\mathcal{L}(H)^\dagger)$. This means in particular that the evaluation mappings ${(\mathcal{L}(H)^\dagger)}^{\star \star} \to \rnum, \alpha \mapsto \alpha(A)$ are continuous.
This dual pairing between the spaces $(\mathcal{L}(H)^\dagger, \opnorm{\cdot})$ and $(S_{1}(H)^{\dagger}, \snorm{\cdot}_{1})$ provides us also with the required dual pair of cones necessary to construct the mapping $\mathcal{L}(H)^+ \to \mathcal{L}^+_{\infty}$.
\begin{proposition}\label{prop:dualcones}
The space $\mathcal{L}(H)^+$ forms a generating pointed convex cone of $\mathcal{L}(H)$. Its dual cone is the class of non-negative trace class operators $S_1(H)^+$.
\end{proposition}
The proof is given in section \ref{dualcone}. One important technicality is that the topological embedding \eqref{eq:topbed} is in itself not enough to consistently and uniquely identify a $\mathcal{L}(H)^+$-valued measure with a family of $\rnum^+_{\infty}$-valued measures. For this, we require that $\phi(\mathcal{L}(H)^+)$ is in fact the set of \textit{all} addition-preserving maps from the dual cone into $\rnum^+$, i.e., all monoid homomorphisms $S_1(H)^+ \to \rnum^+$ where the associative binary operation is addition. That is, we need that
\[ \label{eq:homreq}
\phi(\mathcal{L}(H)^+) = \text{Hom}(S_1(H)^+, \rnum^+) \tageq
\]
Unlike in finite dimensions, where the topological and algebraic dual coincide and all semigroup homomorphisms $S_1(H)^+ \to \rnum^+$ are automatically continuous, this is not necessarily the case in infinite dimension. Yet, it is crucial that \eqref{eq:homreq} holds. To see this, suppose we have a family of measures ${\{\mu_A\}}_{A \in S_1(H)^+}$ such that $A \mapsto \mu_A \in {(\rnum^+)}^\mathcal{B}$ is a monoid homomorphism. If \eqref{eq:homreq} does not hold, then we have no guarantee that this family identifies a $\mathcal{L}(H)^+$-valued measure. In order for \eqref{eq:homreq} to hold, it is however sufficient that the monoid homomorphisms $S_1(H)^+ \to \rnum^+$ are continuous. It can be shown that positive functionals on a Banach space are in fact continuous \citep[][Prop I.7]{Neeb98}. Since for every $B \in \mathcal{L}(H)^+$ the functional $\phi(B)$ is a positive functional on $S_1(H)^{\dagger}$, it must be continuous. Suppose now that we have an element $\alpha \in \text{Hom}(S_1(H)^+, \rnum^+)$. Continuity of $\alpha$, together with the fact that $S_1(H)^+$ generates the space $S_1(H)^\dagger$, implies by the Hahn-Banach theorem that it extends uniquely to a linear functional $S_1(H)^\dagger \to \rnum$. But this means it can be represented by an element $B \in \mathcal{L}(H)^+$, i.e., $\phi(B) = \alpha$. Therefore, \eqref{eq:homreq} is satisfied.
\citep{Glockner2003} in the set
\[
\text{Hom}_{mon}(S_{1}(H)^+, \rnum^+_{\infty}),
\]
which is a compact topological monoid\footnote{It is a monoid as a set and the monoid operations are continuous.}
of monoid homomorphisms from $S_1(H)^+$ into the compact additive monoid $\rnum^+_{\infty}$. That this set is a compact topological monoid follows by noting that $\text{Hom}_{mon}(S_1(H)^+, \rnum^+_{\infty})$ can be seen as a closed subset of the compact set ${(\rnum^+_\infty)}^{S_1(H)^+}$ and therefore inherits the topology of pointwise convergence\footnote{Note that this corresponds to the weak operator topology on $\mathcal{L}(H)^\dagger$.}
on $S_1(H)^+$. Addition is therefore continuous w.r.t. this topology. Moreover, being a closed subset of a compact set, it is compact. It is therefore natural to define the compactification of $\mathcal{L}(H)^+$ by this set, i.e.,
\[ \tageq \label{eq:comp}
\mathcal{L}^+_{\infty}:= \text{Hom}_{mon}(S_1(H)^+, \rnum^+_{\infty}).
\]
\subsubsection{Construction of $\mathcal{L}(H)^+$-valued measures}
With the compactification given by \eqref{eq:comp}, we formally define a $\mathcal{L}(H)^+$-valued measure as follows.
\begin{definition}
Let $(V,\mathcal{B})$ be a measurable space. A {\em $\mathcal{L}(H)^+$-valued measure} is a countably additive function
\[\mu: \mathscr{B} \to \mathcal{L}^+_{\infty} \]
with $\mu(\emptyset)=O_H$. The measure $\mu$ is called a finite $\mathcal{L}(H)^+$-valued measure if $\mu(V) \in \text{Hom}_{mon}(S_1(H)^+, \rnum^+) \cong \mathcal{L}(H)^+$. It is $\sigma$-finite if $V$ is the countable union of measurable sets with finite measure, i.e., if $V = \bigcup_{i=1}^{\infty} E_i$ with $\mu(E_i) \in \mathcal{L}(H)^+$ for all $i \in \nnum$.
\end{definition}
Using \eqref{eq:topbed}, we construct the measures uniquely based on a family of positive measures that depend additively on elements $A \in S_1(H)^+$. Fix an element $E \in \mathcal{B}$, and let $\mu(E) \in \mathcal{L}^+_{\infty}$. By construction, we can identify with $\mu(E)$ all maps
\[A \mapsto \mu(E)(A) : = \mu_A(E) =\trace(\mu(E) A), \qquad A \in S_1(H)^+.\] It is clear that $\mu_A : \mathcal{B} \to \rnum_{\infty}^+$ is a countably additive positive measure for all $A \in S^+_1(H)$. Moreover, $A \mapsto \mu(E)(A)$ is a homomorphism between monoids with respect to addition, i.e., it depends additively on $A$: for every $A_1, A_2 \in S_1(H)^+$
\begin{align*}
\mu(E)(A_1+A_2) & = \trace(\mu(E)(A_1+A_2)) = \trace(\mu(E)A_1)+\trace(\mu(E)A_2)
\\& = \mu(E)(A_1)+\mu(E)(A_2) ;
\\
\mu(E)(O_H) & = \trace(\mu(E)O_H) = 0.
\end{align*}
On the other hand, suppose we have a family of positive measures $(\nu_A)_{A \in S_1(H)^+}$ on $(V,\mathcal{B})$ such that, for fixed $E \in \mathcal{B}$, the map $A \mapsto \nu_A(E)$ is a monoid homomorphism. It is then clear from the definition of $\mathcal{L}^+_{\infty}$ that there exists a unique $\mu(E) \in \mathcal{L}^+_{\infty}$ such that $\mu(E)(A) = \nu_A(E)$ for all $A \in S_1(H)^+$.
Countable additivity follows directly by noting that elements of the monoid \eqref{eq:comp} are continuous functionals on $S_1(H)^+$ with respect to the topology of pointwise convergence; we therefore have
\begin{align*}
\mu\big(\bigcup_i^{\infty} E_i\big)(A) & = \nu_A(\bigcup_i^{\infty} E_i)= \lim_{N\to \infty} \sum_{i=1}^{N} \nu_A(E_i) = \sum_{i=1}^{\infty} \mu(E_i)(A)
\end{align*}
for all $A \in S_1(H)^+$. The following theorem summarizes this and is a simplification of Theorem I.10 of \citet{Neeb98}.
\begin{theorem} \label{thm:Cons_m}
Let $\{\mu_A\}_{A \in S_1(H)^+}$ be a family of non-negative measures on the measurable space $(V, \mathscr{B})$ such that, for each Borel set $E \subseteq V$, the assignment $A \mapsto \mu_A(E)$ is a monoid homomorphism. Then there exists for each $E \in \mathscr{B}$ a unique element $\mu(E) \in \mathcal{L}^+_{\infty}$ with $\mu(E)(A) = \mu_A(E)$ for all $A \in S_1(H)^+$, and the function $\mu : \mathscr{B} \to \mathcal{L}^+_{\infty}$ is a $\mathcal{L}(H)^+$-valued measure.
\end{theorem}
In particular, it follows that a $\mathcal{L}^+_{\infty}$-valued Radon measure $\mu$ on a locally compact space $V$ is a measure with the property that, for each non-negative trace class operator $A \in S_1(H)^+$, the measure $\mu_A: \mathcal{B} \to \rnum^+_{\infty}$
is a finite non-negative Radon measure on $V$. Moreover, integrability of a function with respect to the scalar measures implies integrability of the function with respect to the $\mathcal{L}(H)^+$-valued measure.
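As a simple illustration of this construction, let $\lambda$ be a finite non-negative scalar measure on $(V,\mathcal{B})$ and let $B \in \mathcal{L}(H)^+$ be fixed. Setting
\[
\mu(E) := \lambda(E)\, B, \qquad \mu_A(E) = \trace(\mu(E)A) = \lambda(E)\,\trace(BA), \qquad A \in S_1(H)^+,
\]
each $\mu_A$ is a finite non-negative measure, the assignment $A \mapsto \mu_A(E)$ is additive with $\mu_{O_H}(E)=0$, and Theorem \ref{thm:Cons_m} recovers $\mu$ from the family $\{\mu_A\}_{A \in S_1(H)^+}$.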
\subsection{Functional Herglotz's theorem}
We are now ready to prove the following generalization of Theorem \ref{thm:Herglotzscalar}.
\begin{theorem}[Functional Herglotz's Theorem]\label{Herglotzthm}
A function $\Gamma:\znum \to \mathcal{L}(H)$ is non-negative definite if and only if
\begin{align} \label{eq:Herglotzint}
\Gamma(h) = \int_{-\pi}^{\pi} e^{\im h \omega} d\mathscr{F}(\omega) \qquad h \in \znum,
\end{align}
where $\mathscr{F}$ is a right-continuous, non-decreasing, finite $\mathcal{L}^+_{\infty}$-valued measure on $[-\pi,\pi]$ with $\mathscr{F}(-\pi)=O_H$. The finite measure $\mathscr{F}$ is uniquely determined by $\Gamma(h), h \in \znum$.
\end{theorem}
\begin{proof}
Suppose first that $\Gamma(h)$ admits the representation \eqref{eq:Herglotzint} with respect to a right-continuous, non-decreasing, finite $\mathcal{L}^+_{\infty}$-valued measure $\mathscr{F}$ on $[-\pi,\pi]$. Since $\mathscr{F}$ is a finite measure, the integral is well-defined; denote it by $\nu :\znum \to \mathcal{L}(H)$. Since $\mathscr{F}$ takes self-adjoint values, $\nu(h)^{\dagger} = \nu(-h)$, and thus $\Gamma(\cdot)$ is Hermitian.
By (ii) of Definition \ref{pdkernel}, it is sufficient to show that the operator-valued kernel $c(h_1,h_2) := \nu(h_1h_2^{\star}) = \nu(h_1-h_2)$ is non-negative definite. Using (i) of Definition \ref{pdkernel}, and writing $u(\omega) := \sum_{k=1}^n e^{-\im h_k\omega} g_k$,
\begin{align*}
& \sum_{j,k=1}^n \innerprod{\nu(h_j-h_k)g_k}{g_j} \\&
= \sum_{j,k=1}^n \innerprod{ \int_{[-\pi,\pi]} e^{\im (h_j-h_k) \omega} d\mathscr{F}(\omega)g_k}{g_j} \\& = \int_{[-\pi,\pi]} \sum_{j,k=1}^n \innerprod{ d\mathscr{F}(\omega)\, e^{-\im h_k \omega} g_k}{e^{-\im h_j \omega} g_j} \\&
= \int_{[-\pi,\pi]} \innerprod{d\mathscr{F}(\omega)\, u(\omega)}{u(\omega)} \ge 0 \end{align*}
for all $h_1,\ldots, h_n \in \znum$ and $g_1,\ldots,g_n \in H$.
Conversely, suppose that $\Gamma(\cdot)$ is a non-negative definite $\mathcal{L}(H)$-valued function on $\znum$ and let $A$ be an element of $S_1(H)^+$. Define the function $\Gamma_A: \znum \to \cnum$ by $\Gamma_A(h) := \trace(\Gamma(h)A)$. Writing $A = \sum_m \lambda_m \phi_m \otimes \phi_m$ with $\lambda_m \ge 0$ and using that the trace satisfies $\trace(\Gamma(h)A)=\trace(A\Gamma(h))$, it is direct that $\Gamma_A$ is a non-negative definite function on the integers. By Theorem \ref{thm:Herglotzscalar}, we therefore have the representation
\[
\Gamma_A(h) = \int_{-\pi}^{\pi} e^{\im h \omega} d\mathscr{F}_A(\omega) \qquad h \in \znum,
\]
where $\mathscr{F}_A(\cdot)$ with $\mathscr{F}_A(-\pi)=0$ is a uniquely determined Radon measure on $[-\pi,\pi]$. The measure is finite since $\mathscr{F}_A([-\pi,\pi]) = \Gamma_A(0) < \infty$. A similar derivation as in the previous subsection demonstrates that $A \mapsto \mathscr{F}_A(E)$ is a monoid homomorphism $S_1(H)^{+} \to \rnum^+$ with respect to addition for each $E \in \mathcal{B}$. By Theorem \ref{thm:Cons_m}, the family of measures $\{\mathscr{F}_A\}_{A \in S_1(H)^{+}}$ therefore uniquely identifies a finite $\mathcal{L}(H)^+$-valued measure $\mathscr{F}$ on the measurable space $([-\pi,\pi],\mathcal{B})$ with $\mathscr{F}(E)(A) =\trace(\mathscr{F}(E) A) = \mathscr{F}_A(E)$ for all $A \in S_1(H)^+$ and $E \in \mathcal{B}$. In particular, $\trace(\Gamma(h)A) = \int_{-\pi}^{\pi} e^{\im h \omega} d\mathscr{F}_A(\omega) = \trace\big(\int_{-\pi}^{\pi} e^{\im h \omega} d\mathscr{F}(\omega)\, A\big)$ for all $A \in S_1(H)^+$, which yields the representation \eqref{eq:Herglotzint}.
This completes the proof.
\end{proof}
\section{A generalized functional Cram{\'e}r representation}\label{Gcram}
The Spectral Representation Theorem \citep[][]{Cramer1942}, often called the {\em Cram{\'e}r re\-presentation}, is as fundamental to frequency domain analysis as Wold's representation is to the time domain. It asserts that every (finite dimensional) zero-mean weakly stationary process can be represented as a superposition of sinusoids with random amplitudes and phases that are uncorrelated. An important ingredient in establishing this classical theorem is the existence of an isometric isomorphism that allows one to identify a weakly stationary time series on the integers with an orthogonal increment process on $[-\pi,\pi]$.
As already mentioned, an initial generalization of the Cram{\'e}r representation to functional-valued weakly stationary time series was first considered by \citet{Panar2013b}, but is restricted to processes for which the assumption $\sum_{h \in \znum} \snorm{\mathcal{C}_h}_1 < \infty$ holds. In this section, we shall use the established functional Herglotz's theorem (Theorem \ref{Herglotzthm}) to derive a functional Cram{\'e}r representation that can be seen as a true generalization of the classical theorem to the function space. In addition, we establish a Cram{\'e}r-Karhunen-Lo{\'e}ve representation and a harmonic principal component analysis for a very general class of processes whose spectral measure can have finitely many discontinuities. In order to establish the Cram{\'e}r representation, we shall make the following necessary assumption on the second order dependence structure.
\begin{assumption} \label{as:Ch}
$\{X_t \colon t \in \znum \}$ is a weakly stationary $H$-valued time series with lag covariance operators $\{\mathcal{C}_h\}_{h \in \znum} \in S_1(H)$.
\end{assumption}
As explained in the introduction, we believe this is not just a necessary condition for a frequency domain representation of the process to exist but also a natural assumption for $L^2$-valued processes.
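As a simple illustration of Assumption \ref{as:Ch} and of Theorem \ref{Herglotzthm}, consider $H$-valued white noise, i.e., a weakly stationary process with $\mathcal{C}_h = \delta_{h0}\, \mathcal{C}_0$ for some $\mathcal{C}_0 \in S_1(H)^+$. Its spectral measure is absolutely continuous with constant density:
\[
d\mathscr{F}(\omega) = \frac{\mathcal{C}_0}{2\pi}\, d\omega, \qquad \text{so that} \qquad \int_{-\pi}^{\pi} e^{\im h \omega}\, d\mathscr{F}(\omega) = \delta_{h0}\, \mathcal{C}_0 = \mathcal{C}_h, \qquad h \in \znum.
\]
This is the relation $\mathcal{C}^{\varepsilon}_0 = 2\pi \mathcal{F}^{\varepsilon}_0$ that appears for the innovations in Remark \ref{longmem} below.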
Under this assumption, we will show that the mapping
\[ X_t \mapsto e^{\im t \cdot } \]
extends to a Hilbert space isometric isomorphism between
$\overline{\text{sp}}\{X_t \colon t \in \znum\} \subseteq L^2_H(\Omega, \prob)$ and
$L^2([-\pi,\pi], \mu_{\mathscr{F}})$, where we define
\[\mu_{\mathscr{F}}(E) := \snorm{\mathscr{F}(E)}_1 \tageq \label{mufm}\] for all Borel sets $E \subseteq [-\pi,\pi]$. Here, $\mathscr{F}$ is the operator-valued measure on $[-\pi,\pi]$ induced by the sequence of covariance operators $\{\mathcal{C}_h\}_{h \in \znum}$ of $\{X_t \colon t \in \znum \}$. Before we derive the properties of the mapping, we have to verify that $\mu_{\mathscr{F}}$ indeed defines a measure. This is the content of the following lemma.
\begin{lemma} \label{prop:muFm}
Let Assumption \ref{as:Ch} be satisfied. Then the function $\mu_{\mathscr{F}}$ defined in \eqref{mufm} is a finite scalar-valued measure on $[-\pi,\pi]$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{prop:muFm}]
By Proposition \ref{prop:nonnegCh}, the covariance function $\mathcal{C}_{(\cdot)}: \znum \to S_1(H)$ of a weakly stationary $H$-valued time series is non-negative definite. Under Assumption \ref{as:Ch}, Theorem \ref{Herglotzthm} implies that this function uniquely determines a $S_1(H)^+$-valued measure $\mathscr{F}$ on $[-\pi,\pi]$.
Using the properties of $\mathscr{F}$, it is now straightforward to verify that the function $\mu_{\mathscr{F}} : \mathcal{B} \to [0,\infty]$ is a non-negative scalar-valued measure on the measurable space $([-\pi,\pi],\mathcal{B})$. Firstly, for each Borel set $E \subseteq [-\pi,\pi]$ and every $e \in H$, we have $\innerprod{\mathscr{F}(E) e}{e}\ge 0$ and thus $\mu_{\mathscr{F}}(E) = \trace(\mathscr{F}(E)) \ge 0$. Secondly, $\mu_{\mathscr{F}}(\emptyset) = \trace(\mathscr{F}(\emptyset)) = \trace(O_H) = 0$, which follows by definition of $\mathscr{F}$. Thirdly, for all countable selections of pairwise disjoint sets $\{E_i\}_{i \in \nnum}$ in $\mathcal{B}$, countable additivity of $\mathscr{F}$ yields
\[ \mu_{\mathscr{F}}\big(\bigcup_{i=1}^{\infty}E_i\big) = \sum_{j=1}^{\infty}\innerprod{\mathscr{F}\big( \bigcup_{i=1}^{\infty}E_i\big) e_j}{e_j} = \sum_{j=1}^{\infty}\innerprod{\sum_{i=1}^{\infty}\mathscr{F}( E_i) e_j}{e_j},\]
where $\{e_j\}_{j \in \nnum}$ is an orthonormal basis of $H$. By continuity of the inner product and the fact that $\mathscr{F}([-\pi,\pi])< \infty$, Fubini's theorem implies
\[ \sum_{i=1}^{\infty} \sum_{j=1}^{\infty}\innerprod{\mathscr{F}( E_i) e_j}{e_j} =\sum_{i=1}^{\infty} \snorm {\mathscr{F}(E_i)}_1 = \sum_{i=1}^{\infty} \mu_{\mathscr{F}}(E_i) \]
and thus $\mu_{\mathscr{F}}$ is countably additive. Finally, since $\mathscr{F}$ is a finite $S_1(H)^+$-valued measure on $[-\pi,\pi]$, it is direct that $\mu_{\mathscr{F}}([-\pi,\pi]) <\infty$.
\end{proof}
Additionally, to be able to properly define the spectral representation, we require the notion of an $H$-valued orthogonal increment process.
\begin{definition}\label{Zproc}
An $H$-valued random process $\{Z_\omega \colon -\pi \le \omega \le \pi \}$ defined on a probability space $(\Omega, \mathcal{A},\prob)$ is a functional orthogonal increment process if, for all $g_1, g_2 \in H$ and $-\pi \le \omega \le \pi$,
\begin{romanlist}
\item the operator $\mean(Z_\omega \otimes Z_{\omega})$ is an element of $S_{1}(H)^+$
\item $\mean\innerprod{Z_\omega}{g_1}= 0$
\item $\innerprod{\mean\big((Z_{\omega_4} - Z_{\omega_3})\otimes {(Z_{\omega_2} - Z_{\omega_1})}\big)g_1}{g_2}= 0, \qquad (\omega_1,\omega_2] \cap (\omega_3,\omega_4]=\emptyset$
\item $\innerprod{\mean\big((Z_{\omega+\varepsilon} - Z_{\omega})\otimes (Z_{\omega+\varepsilon} - Z_{\omega})\big)g_1}{g_2}\to 0$ as $\varepsilon \downarrow 0$.
\end{romanlist}
\end{definition}
To establish the isomorphism, let $\mathcal{H} \subset \mathbbm{H}$ denote the space of all finite linear combinations of the random functions $X_t$, i.e., $\mathcal{H}= {\text{sp}}\{X_t \colon t \in \znum\}$. We remark that the inner product on $\mathbbm{H}$ satisfies
\[
\innerprod{X_1}{X_2}_{\mathbbm{H}} =\trace(\mean(X_1 \otimes X_2) ). \tageq \label{eq:inTr}
\]
Furthermore, let $\mathscr{H}$ denote the linear span of the exponentials $\{e^{\im t \cdot} \colon t \in \znum\}$, regarded as a subspace of $L^2([-\pi,\pi], \mu_{\mathscr{F}})$, the space of all square-integrable functions on $[-\pi,\pi]$ with respect to the measure $\mu_\mathscr{F}$. The latter becomes a Hilbert space once we endow it with the inner product
\[
\innerprod{f}{g}_{\mathscr{H}} = \int_{-\pi}^{\pi} f(\omega) \overline{g(\omega)}d \mu_\mathscr{F}(\omega) =\trace\Big(\int_{-\pi}^{\pi} f(\omega) \overline{g(\omega)}d\mathscr{F}(\omega)\Big), \qquad f,g \in L^2([-\pi,\pi], \mu_{\mathscr{F}}),
\]
where the last equality follows by non-negative definiteness of $\mathscr{F}$ and linearity of the trace operator.
\begin{theorem} \label{isomorph}
Let $\mathscr{F}$ be the $S_{1}(H)^+$-valued measure corresponding to the process $\{X_t\}$. Then there exists an isometric isomorphism $\mathcal{T}$ between $\overline{\text{sp}}\{X_t\}$ and $L^2([-\pi,\pi], \mu_{\mathscr{F}})$ such that
\[\mathcal{T}X_t = e^{\im t \cdot}, \qquad t \in \znum.\]
The process defined by
\[Z_{\omega} = \mathcal{T}^{-1}\big(1_{(-\pi,\omega]}(\cdot)\big)\]
is then a functional orthogonal increment process of which the covariance structure is uniquely determined by $\mathscr{F}$ and satisfies
\[
\mathbb{E}\big[(Z_{\omega}-Z_{\lambda}) \otimes( Z_{\omega}-Z_{\lambda})\big]= \mathscr{F}(\omega)- \mathscr{F}(\lambda), \quad -\pi \le \lambda \le \omega \le \pi.
\]
\end{theorem}
\begin{proof}[Proof of Theorem \ref{isomorph}]
Consider first the mapping $\mathcal{T} : \mathcal{H} \to \mathscr{H}$ given by
\[
\mathcal{T} \Big(\sum_{j=1}^{n}a_j X_{t_j}\Big)= \sum_{j=1}^{n}a_j e^{\im t_j \cdot}, \qquad t_1,\ldots,t_n \in \znum.
\]
It is straightforward to see that the mapping is linear and preserves inner products. Let $Y = \sum_{j=1}^{n}a_j X_{t_j}$ and $W = \sum_{j=1}^{n}b_j X_{t_j}$. By Theorem \ref{Herglotzthm},
\begin{align*}
\innerprod{\mathcal{T}Y}{\mathcal{T}W}_\mathscr{H} & =\sum_{i,j=1}^{n}a_i \overline{b}_j \innerprod{e^{\im \cdot t_i}}{e^{\im \cdot t_j}}_\mathscr{H}
\\& =\sum_{i,j=1}^{n}a_i \overline{b}_j \int_{-\pi}^{\pi} e^{\im \lambda (t_i-t_j)} d\mu_{\mathscr{F}}(\lambda)
\\& =\sum_{i,j=1}^{n}a_i \overline{b}_j \trace(\int_{-\pi}^{\pi} e^{\im \lambda (t_i-t_j)} d{\mathscr{F}}(\lambda))
\\& =\lsum_{i,j=1}^{n}a_i \overline{b}_j \trace(\mathcal{C}_{t_i-t_j} )= \lsum_{i,j=1}^{n}a_i \overline{b}_j \innerprod{X_{t_i}}{ X_{t_j}}_{\mathbbm{H}} = \innerprod{Y}{W}_\mathbbm{H}.
\end{align*}
For the extension of the isomorphism from the closure of $\mathcal{H}$ onto the closure of $\mathscr{H}$, note that if $Y$ is an element of $\bar{\mathcal{H}}$ then there must exist a sequence $\{Y_n\}_{n \ge 1} \subset \mathcal{H}$ converging to $Y$. Define $\mathcal{T}({Y})$ as the limit of $\mathcal{T}(Y_n)$, i.e.,
\[\mathcal{T}(Y) := \lim_{n \to \infty}\mathcal{T}(Y_n).\]
Since $\{Y_n\}$ is a Cauchy sequence and $\mathcal{T}$ is norm-preserving, $\{\mathcal{T}Y_n\}$ is a Cauchy sequence in $L^2([-\pi,\pi],\mu_{\mathscr{F}})$ and thus $\mathcal{T}(Y) \in \bar{\mathscr{H}}$. If there is another sequence $\{Y^{'}_n\} \subset \mathcal{H}$ converging to ${Y}$, then the limit must be unique since
\[
\lim_{n \to \infty}\norm{\mathcal{T}(Y_n)- \mathcal{T}(Y^{'}_n)}_{\mathscr{H}} =\lim_{n \to \infty} \norm{Y_n - Y^{'}_n}_{\mathbbm{H}} = 0,
\]
and therefore the extension is well-defined. Preservation of linearity and the isometry property follow from linearity of $\mathcal{T}$ on $\mathcal{H}$ and continuity of the inner product, respectively. To show that the closure of $\mathscr{H}$ is in fact $L^2([-\pi,\pi], \mu_{\mathscr{F}})$, we recall that the Stone-Weierstrass theorem (Fej{\'e}r's theorem) implies that $\mathscr{H}$ is uniformly dense in the space of $2\pi$-periodic continuous functions on $[-\pi,\pi]$. Moreover, by Lemma \ref{prop:muFm}, $\mu_{\mathscr{F}}$ is a finite Radon measure (i.e., finite and regular) on $[-\pi,\pi]$, so the continuous functions are in turn dense
in $L^2 ([-\pi,\pi], \mu_{\mathscr{F}})$ \citep[see e.g.,][]{bog06,R87}. Consequently we find $\bar{\mathscr{H}}=L^2 ([-\pi,\pi], \mu_{\mathscr{F}})$. The inverse mapping $\mathcal{T}^{-1}: L^2 ([-\pi,\pi], \mu_{\mathscr{F}}) \to \bar{\mathcal{H}}$ is therefore properly defined. This finishes the proof of the first part of the theorem. \\
Let us then define, for any $\omega \in (-\pi,\pi]$, the process
\[ Z_{\omega} = \mathcal{T}^{-1}\big(1_{(-\pi,\omega]}(\cdot)\big)\]
with $Z_{-\pi} \equiv 0 \in H$. By the established isometry, this process is well-defined in $\bar{\mathcal{H}}$. Therefore there must exist a sequence $\{Y_n\}$ in $\mathcal{H}$ such that $\lim_{n \to \infty} \|Y_n-Z_{\omega}\|_{\mathbb{H}} = 0$. Since all elements in the sequence have zero-mean, continuity of the inner product implies
\begin{align*}
\innerprod{Z_{\omega}}{f}_\mathbbm{H} &
= \lim_{n \to \infty} \trace(\mean (Y_n \otimes f)) = \lim_{n \to \infty} \innerprod{\mean [Y_n]}{f} =0 \qquad \forall f \in H,
\end{align*}
showing the process $\{Z_{\omega}: -\pi \le \omega \le \pi\}$ has zero mean. Additionally,
\begin{align*}
\innerprod{Z_{\omega_4} - Z_{\omega_3}}{Z_{\omega_2} - Z_{\omega_1}}_{\mathbbm{H}}
&
= \innerprod{1_{(\omega_3,\omega_4]}(\cdot)}{ 1_{(\omega_1,\omega_2]}(\cdot)}_{\mathscr{H}}
\\& = \int_{-\pi}^{\pi} 1_{(\omega_3,\omega_4]}(\omega) 1_{(\omega_1,\omega_2]}(\omega) d\mu_{\mathscr{F}}(\omega). \label{orhinc} \tageq
\end{align*}
For all $(\omega_1,\omega_2] \cap (\omega_3,\omega_4]=\emptyset$, this inner product is zero, while for $\omega_3 = \omega_1, \omega_4=\omega_2$ we have
\[
\innerprod{Z_{\omega_2} - Z_{\omega_1}}{Z_{\omega_2} - Z_{\omega_1}}_{\mathbbm{H}} = \mu_{\mathscr{F}}(\omega_2) -\mu_{\mathscr{F}}(\omega_1),
\qquad \omega_1 \le \omega_2,\]
showing that $\{Z_{\omega}\}$ is right-continuous in mean square, since $\mu_{\mathscr{F}}\big((\omega,\omega+\varepsilon]\big)\to 0$ as $\varepsilon \downarrow 0$. We can also write \eqref{orhinc} as
\[\trace\big( \int_{-\pi}^{\pi} 1_{(\omega_3,\omega_4]}(\omega) 1_{(\omega_1,\omega_2]}(\omega) d{\mathscr{F}}(\omega)\big).\]
For $\omega_3 = \omega_1, \omega_4=\omega_2$ this implies
\[
\mean\big[ (Z_{\omega_2} - Z_{\omega_1})\otimes (Z_{\omega_2} - Z_{\omega_1})\big] = \mathscr{F}(\omega_2) -\mathscr{F}(\omega_1),
\qquad \omega_1 \le \omega_2,\]
where the equality holds in $\snorm{\cdot}_1$. The second order structure of $\{Z_{\omega}\}$ is therefore uniquely determined by the operator-valued measure $\mathscr{F}$ of the process $\{X_t\}$.
\end{proof}
The generalization of the Cram{\'e}r representation to processes for which the spectral density operator is not necessarily well-defined is given in the following theorem.
\begin{theorem}[Functional Cram{\'e}r representation] \label{Cramer}
Suppose $\{X_t\}$ satisfies Assumption \ref{as:Ch}. Then there exists a right-continuous functional orthogonal increment process $\{Z_{\omega}, -\pi \le \omega \le \pi\}$ with $Z_{-\pi}\equiv 0 \in H$ such that
\[
X_t = \int_{-\pi}^{\pi} e^{\im t \omega} d Z_{\omega} \qquad \text{a.s.}
\]
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Cramer}]
Consider the subspace $\mathscr{H}_s$ of $\bar{\mathscr{H}}$ consisting of the simple functions, i.e., the space $\mathscr{H}_s$ contains elements of the form
\[
g(\omega) = \sum_{i=0}^{n} a_i 1_{(\omega_{i},\omega_{i+1}]}(\omega)
\]
for a partition $P_n =\{ - \pi = \omega_0 < \omega_1 < \cdots < \omega_{n+1} = \pi\}$ of $[-\pi,\pi]$ and $a_i \in \cnum$.
Then define the mapping $\mathcal{I}: \mathscr{H}_s \to \bar{\mathcal{H}}$ given by
\[
\mathcal{I}(g) = \sum_{i=0}^{n} a_i (Z_{\omega_{i+1}}-Z_{\omega_i}).
\]
By Theorem \ref{isomorph}, this extends to an isomorphism from $\bar{\mathscr{H}}$ onto $\bar{\mathcal{H}}$ that coincides with $\mathcal{T}^{-1}$. More specifically, $\mathcal{I}(e^{\im \cdot t}) =\mathcal{T}^{-1}(e^{\im \cdot t}) =\mathcal{T}^{-1}\mathcal{T} (X_t)= X_t$, and the statement of the theorem follows by taking the Riemann-Stieltjes integral limit
\[\bignorm{X_t - \sum_{i=0}^{n} e^{\im \omega_i t} (Z_{\omega_{i+1}}-Z_{\omega_i} )}^2_{\mathbbm{H}} \to 0 \text{ as } n \to \infty,\]
where $\text{mesh}(P_n) \to 0$ as $n \to \infty$. More generally, for any $g \in L^2([-\pi,\pi], \mu_{\mathscr{F}})$, the mapping $\mathcal{I}(g)$ corresponds to the Riemann-Stieltjes integral with respect to the orthogonal increment process $Z_{\omega}$.
\end{proof}
In case of discontinuities in the spectral measure $\mathscr{F}$ we can decompose the process into a purely indeterministic component and a purely deterministic component.
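For instance, a process whose spectral measure has a single atom is obtained by setting
\[
X_t = e^{\im t \omega_1}\, \xi + \varepsilon_t, \qquad t \in \znum,
\]
where $\xi$ is a mean-zero $H$-valued random element with $\mean(\xi \otimes \xi) \in S_1(H)^+$ that is uncorrelated with the $H$-valued white noise $\{\varepsilon_t\}$. Then $\mathcal{C}_h = e^{\im h \omega_1}\, \mean(\xi \otimes \xi) + \delta_{h0}\, \mathcal{C}^{\varepsilon}_0$, and $\mathscr{F}$ decomposes into an absolutely continuous part and an atom of mass $\mean(\xi \otimes \xi)$ at $\omega_1$.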
\begin{proposition}\label{cramer_disc}
Assume the spectral measure $\mathscr{F}$ of a stationary process $\{X_t\}$ has $k$ points of discontinuity at $\omega_1, \ldots, \omega_k$. Then with probability one
\begin{align}\label{eq:cramer_disc}
X_t = \int_{(-\pi, \pi] \setminus \{ \omega_1, \ldots, \omega_k\}} e^{\im t \omega} d Z_{\omega} + \sum_{\ell=1}^{k}(Z_{\omega_\ell} - Z_{\omega^{-}_\ell})e^{\im t \omega_\ell},
\end{align}
where $Z_{\omega^{-}_\ell}$ denotes the mean-square limit from the left, i.e., $\lim_{\omega \uparrow \omega_\ell} \|Z_{\omega}-Z_{\omega^{-}_\ell} \|^2_\mathbb{H}=0$. Furthermore,
all terms on the right hand side of \eqref{eq:cramer_disc} are uncorrelated and
\[
\var(Z_{\omega_\ell} - Z_{\omega^{-}_\ell} ) = \mathscr{F}(\omega_\ell)-\mathscr{F}(\omega^{-}_\ell)
\]
for each $\ell=1,\ldots, k$.
\end{proposition}
The proof is relegated to the Appendix. The spectral representation in Proposition \ref{cramer_disc} can be used to define a Cram{\'e}r-Karhunen-Lo{\`e}ve representation for processes whose spectral measure has finitely many discontinuities.
\begin{definition}[Cram{\'e}r-Karhunen-Lo{\`e}ve representation] \label{CKL}
Suppose that $\{X_t\}$ is given by
\[X_t = \int_{(-\pi, \pi] \setminus \{ \omega_1,\ldots, \omega_k\}} e^{\im t \omega} d Z_{\omega} +\sum_{\ell=1}^{k} (Z_{\omega_\ell} - Z_{\omega^{-}_\ell})e^{\im t \omega_\ell}.
\]
For $\omega \in (-\pi, \pi] \setminus \{ \omega_1,\ldots, \omega_k\}$ assume the density operator $\mathcal{F}_{\omega}$ of the spectral measure $\mathscr{F}$ exists
with eigendecomposition
\[\mathcal{F}_{\omega} = \sum_{j=1}^{\infty} \nu^{\omega}_j\phi^{\omega}_j \otimes \phi^{\omega}_j.\]
Furthermore, let
\[
\mathscr{F}(\omega_\ell)- \mathscr{F}({\omega^{-}_\ell}) = \sum_{j=1}^{\infty} \nu^{\omega_\ell}_j\phi^{\omega_\ell}_j \otimes \phi^{\omega_\ell}_j
\]
be the eigendecomposition of $\mathscr{F}(\omega_\ell)- \mathscr{F}({\omega^{-}_\ell})$ for $\ell=1,\ldots,k$. Then, we can write
\[X_t = \int_{(-\pi, \pi] \setminus \{ \omega_1,\ldots, \omega_k\}} e^{\im t \omega} \big(\lsum_{j=1}^{\infty} \phi^{\omega}_j \otimes \phi^{\omega}_j \big) d Z_{\omega} +\lsum_{\ell=1}^{k} \big(\lsum_{j=1}^{\infty}\phi^{\omega_\ell}_j \otimes \phi^{\omega_\ell}_j\big) (Z_{\omega_\ell} - Z_{\omega^{-}_\ell})e^{\im t \omega_\ell},
\]
which is the \textit{Cram{\'e}r-Karhunen-Lo{\`e}ve representation} of the process $\{X_t\}$.
\end{definition}
Note that the spectral measure has non-negative definite increments on all measurable subsets of $[-\pi,\pi]$, and therefore $ \mathscr{F}(\omega_\ell)- \mathscr{F}({\omega^{-}_\ell})$ admits an eigendecomposition with non-negative eigenvalues.
If there are no discontinuities, then the Cram{\'e}r-Karhunen-Lo{\`e}ve representation simply coincides with the indeterministic component of Definition \ref{CKL}, i.e.,
\[
X_t = \int_{(-\pi, \pi]} e^{\im t \omega} \big(\lsum_{j=1}^{\infty} \phi^{\omega}_j \otimes \phi^{\omega}_j \big) d Z_{\omega}. \tageq \label{CKL_con}
\]
In order to derive an optimal finite dimensional representation of the indeterministic component of the process, we require in Definition \ref{CKL} that there is a well-defined spectral density operator except on sets of measure zero. We remark that this assumption also covers a harmonic principal component analysis of long-memory processes (see Remark \ref{longmem}) and holds under much weaker conditions \citep[see e.g.,][]{Hormann2015} than
those stated in \cite{Panar2013b}, who originally derived a Cram{\'e}r-Karhunen-Lo{\`e}ve representation of the form \eqref{CKL_con} for processes with short-memory. \\
The Cram{\'e}r-Karhunen-Lo{\`e}ve representation in Definition \ref{CKL} can be seen to encapsulate the full second order dynamics of the process and gives insight into an optimal finite dimensional representation. It is a `double' spectral representation in the sense that it first decomposes the process into uncorrelated functional frequency components and in turn provides a spectral decomposition in terms of dimension. This is more easily seen by noting that formally we can write it as
\[X_t = \int_{(-\pi, \pi] \setminus \{ \omega_1,\ldots, \omega_k\}} e^{\im t \omega} \lsum_{j=1}^{\infty}\innerprod{ d Z_{\omega}}{\phi^{\omega}_j} \phi^{\omega}_j +\lsum_{\ell=1}^{k} \lsum_{j=1}^{\infty}\innerprod{Z_{\omega_\ell} - Z_{\omega^{-}_\ell}}{\phi^{\omega_\ell}_j} \phi^{\omega_\ell}_j e^{\im t \omega_\ell}.
\]
Just like the Karhunen-Lo{\`e}ve representation for independent functional data, it separates the stochastic part from the functional part and provides information on the smoothness of the random curves. Furthermore, it enables representing each frequency component in an optimal basis, where the dimensionality of the component can be derived from its relative contribution to the total variation of the process. Truncating the infinite sums at a finite level therefore provides an optimal way to construct a finite dimensional representation of the process.
\\
Such a truncation for processes that satisfy Definition \ref{CKL} requires that stochastic integrals of the form $\int_{-\pi}^{\pi} U_{\omega}\,dZ_{\omega}$ are well-defined, where $U$ is an element of the Bochner space $\mathcal{B}_{\infty} =L^2_{S_{\infty}(H)}([-\pi,\pi],\mu_{\mathscr{F}})$ of all strongly measurable functions $U:[-\pi,\pi]\to S_{\infty}(H)$ such that
\[
\|U\|^2_{\mathcal{B}_\infty} =\int_{-\pi}^{\pi} \snorm{U_{\omega}}^2_{\infty} d\mu_{\mathscr{F}}(\omega)<\infty,
\]
where $\mu_{\mathscr{F}}$ is the measure in \eqref{mufm}. This is proved in the Appendix (Proposition \ref{prop:Stochint}) and generalizes the result in Appendix B 2.3 of \citet{vde16}. With this in place, we obtain a harmonic principal component analysis for processes of which the spectral measure has finitely many jumps.
\begin{corollary}[Harmonic functional principal component analysis] \label{cor:Hfpca}
Suppose $\{X_t\}$ has a Cram{\'e}r-Karhunen-Lo{\`e}ve representation as in Definition \ref{CKL}. Then, among all linear rank reductions of $\{X_t\}$ to a process $\{Y_t\}$ with representation $Y_t = \int_{-\pi}^{\pi} e^{\im \omega t} A_{\omega} dZ_{\omega}$, where $A \in \mathcal{B}_{\infty}$ with $\text{rank}(A_{\omega}) \le p(\omega)$ and $p: [-\pi,\pi] \to \mathbb{N}$ c{\`a}dl{\`a}g, we have
\[
\|X_t - X^{\star}_t\|^2_{\mathbb{H}} \le \|X_t -Y_t\|^2_{\mathbb{H}},
\]
where
\[
X^{\star}_t = \int_{(-\pi, \pi] \setminus \{ \omega_1,\ldots, \omega_k\}} e^{\im \omega t} \Big(\sum_{j=1}^{p(\omega)} \phi^{\omega}_j \otimes \phi^{\omega}_j \Big) dZ_{\omega} +\lsum_{\ell=1}^{k} \big(\lsum_{j=1}^{p(\omega_\ell)}\phi^{\omega_\ell}_j \otimes \phi^{\omega_\ell}_j\big) (Z_{\omega_\ell} - Z_{\omega^{-}_\ell})e^{\im t \omega_\ell}.
\]
The minimized error is given by
\[
\|X_t-X^{\star}_t\|^2_\mathbb{H} =\int_{(-\pi, \pi] \setminus \{\omega_1,\ldots, \omega_k\}} \big(\lsum_{j>p(\omega)} \nu^{\omega}_j \big) d\omega + \lsum_{\ell=1}^{k}\lsum_{j>p(\omega_\ell)} \nu^{\omega_\ell}_j .
\]
\end{corollary}
\begin{proof}
Without loss of generality, we prove this for the case of one discontinuity at frequency $\omega_o$. By orthogonality of the two parts of the representation, we find, using Proposition \ref{cramer_disc} and Fubini's theorem,
\begin{align*}
\|X_t-Y_t\|^2_\mathbb{H} & =\Bignorm{\int_{(-\pi, \pi] \setminus \{ \omega_o\}}\big(I-A_{\omega}\big) e^{\im \omega t}dZ_{\omega}}^2_\mathbb{H} + \Bignorm{\big(I-A_{\omega_o}\big) (Z_{\omega_o} - Z_{\omega^{-}_o})e^{\im t \omega_o}}^2_\mathbb{H}
\\ &= \int_{-\pi}^{\pi} \trace\big( \big(I- A_{\omega} \big) \mathcal{F}_{\omega}\big(I- A_{\omega} \big)^\dagger \big)d\omega + \trace\big( \big(I- A_{\omega_o} \big) \big(\mathscr{F}(\omega_o)-\mathscr{F}(\omega^{-}_o)\big)\big(I- A_{\omega_o} \big)^\dagger \big).
\end{align*}
From this it is straightforward to see that the expression is minimized by $X^{\star}_t$, for which the error is given by
\begin{align*}
\|X_t-X^{\star}_t\|^2_\mathbb{H}&= \Bignorm{\int_{(-\pi, \pi] \setminus \{ \omega_o\}} e^{\im \omega t} \big(\lsum_{j>p(\omega)} \phi^{\omega}_j \otimes \phi^{\omega}_j \big) dZ_{\omega}}^2_\mathbb{H}
+\Bignorm{\big(\lsum_{j> p(\omega_o)}\phi^{\omega_o}_j \otimes \phi^{\omega_o}_j\big) (Z_{\omega_o} - Z_{\omega^{-}_o})e^{\im t \omega_o}}^2_\mathbb{H}
\\& = \int_{(-\pi, \pi] \setminus \{ \omega_o\}} \big(\lsum_{j>p(\omega)} \nu^{\omega}_j \big) d\omega + \lsum_{j>p(\omega_o)} \nu^{\omega_o}_j .
\end{align*}
\end{proof}
\begin{remark}[Harmonic functional principal component analysis of long-\\memory processes] \label{longmem}
In analogy to classical time series, the covariance structure of a long-memory functional time series does not decay rapidly. Without loss of generality, assume that the covariance structure of such a process satisfies
\[
\mathcal{C}_h \sim B h^{2d-1}, \quad 0 < d < 0.5,
\]
where $B$ is a strictly positive element of $S_1(H)^+$. It is clear that for such a process the dependence structure does not decay rapidly enough for $\sum_{h \in \mathbb{Z}}\snorm{\mathcal{C}_h}_1 <\infty$ to hold. In order to understand what can be said about the properties of the spectral density operator, note that we can for simplicity mimic the behavior of such a process by considering the linear process
\begin{align*}
X_t = \sum_{j=0}^{\infty} \big(\lprod_{0 < k \le j}\textstyle{\frac{k-1+d}{k}}\big) \varepsilon_{t-j}
\end{align*}
where $\varepsilon_t$ is $H$-valued white noise, and hence by Theorem \ref{Herglotzthm} the second order structure is given by
$\mathcal{C}^{\varepsilon}_0=\int_{-\pi}^{\pi} d \mathscr{F}^{\varepsilon}(\omega) =2 \pi \mathcal{F}^{\varepsilon}_0$.
Using the properties of the Gamma function, a standard argument shows that the filter applied to $\{\varepsilon_t\}$ yields
\begin{align*}
\mathcal{C}^{X}_h =\int_{-\pi}^{\pi} e^{\im \omega h} d \mathscr{F}^{X}(\omega) =\int_{-\pi}^{\pi} e^{\im \omega h} |1-e^{-\im \omega} |^{-2d} d \mathscr{F}^{\varepsilon}(\omega)
\end{align*}
and hence a density of the spectral measure $\mathscr{F}^{X}$ at $\omega=0$ is not defined for $d>0$. Yet, since the singleton $\{0\}$ has measure $0$, we can, under the conditions of Theorem \ref{Cramer}, define a harmonic principal component analysis as in Corollary \ref{cor:Hfpca} where the number of discontinuities is $k=0$. That is, the optimal approximating process is given by
\[
X^{\star}_t = \int_{(-\pi, \pi]} e^{\im \omega t} \Big(\sum_{j=1}^{p(\omega)} \phi^{\omega}_j \otimes \phi^{\omega}_j \Big) dZ_{\omega}
\]
and the minimized error is given by $\|X_t-X^{\star}_t\|^2_\mathbb{H} =\int_{(-\pi, \pi] } \big(\lsum_{j>p(\omega)} \nu^{\omega}_j \big) d\omega.$
\end{remark}
\medskip
\noindent
{\bf Acknowledgements.}
This work has been supported in part by the Collaborative Research Center ``Statistical modeling of nonlinear dynamic processes'' (SFB 823, Project A1, C1, A7) of the German Research Foundation (DFG).
\section{Introduction}
Respondent-level data, also known as microdata, have been widely available in public databases and are essential for students, researchers, and corporate analysts to understand a variety of research questions. Such data are typically collected through surveys and censuses, after which the data holders disseminate these data to the public. Any data dissemination needs to follow legal and ethical guidelines, which are in place to protect the privacy and confidentiality of the respondents.
The privacy and confidentiality concerns of releasing microdata could impact different communities to various extents. Not surprisingly, youth is one of the most vulnerable groups when faced with privacy intrusions. According to the Future of Privacy Forum (FPF)\footnote{\urlstyle{same}\url{https://fpf.org/blog/future-of-privacy-forum-releases-new-youth-privacy-and-data-protection-infographic/}}, consequences of youth data disclosure are severe, as young people are more likely to encounter predators or become victims of bullying and harassment. Less visible risks include commercial exploitation through profiling and behavioral advertising \citep{park_vance_2021}. Policy-makers and legislators across the globe have striven to shield the privacy of data collected from youth; examples include the Children's Online Privacy Protection Act (COPPA)\footnote{\urlstyle{same}\url{https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa}} in the United States and the General Data Protection Regulation (GDPR)\footnote{\urlstyle{same}\url{https://gdpr-info.eu/}} in the EU.
In this paper, we provide a case study of protecting youth data using the synthetic data approach. Our case study focuses on a particularly high-risk and vulnerable database involving youth, the Youth Risk Behavior Survey (YRBS) in the United States. \par
\subsection{The YRBS Data}
The Youth Risk Behavior Surveillance System (YRBSS) was developed in 1990 by the U.S. Centers for Disease Control and Prevention (CDC) to monitor health behaviors that contribute markedly to leading causes of death, disability, and social problems among youth in the United States. The YRBS is the primary mechanism through which the CDC collects these data. The YRBS data have been extensively used by researchers and social activists to study youth behavior as well as to promote change. For example, \citet{reising_cygan_2019} provides a guide for school nurses to implement the YRBS, access results, and apply findings in their school communities, and \citet{underwood_brener_halpern-felsher_2020} discusses the strengths and weaknesses of the YRBS in tracking adolescent health behavior. \par
Given the nature of the questions asked in the YRBS, the responses are often sensitive: individuals are asked about their substance use, sexual behavior, and mental health conditions, among other things. It is important to stress that since the respondents are predominantly minors, disclosure of this sensitive information can cause legal, financial, and social consequences for the targeted minor, leading to imprisonment, detention, violence, bullying, or other types of physical and mental harm.
In addition, the risk of disclosure would discourage YRBS respondents from answering these survey questions truthfully, as they might be concerned about their privacy and the risk of being identified, resulting in potential reductions of the survey quality. Given these reasons, it is undoubtedly important to protect the privacy of the YRBS data before their public release. The publicly available YRBS data has undergone some primary privacy protections during the data collection stage, mainly through administering the surveys anonymously and voluntarily among the students. To the best of our knowledge, little has been done to protect privacy and confidentiality at the data processing stage according to the methodology guide of the YRBS\footnote{For a detailed methodology guide of the YRBS, see: \urlstyle{same}\url{https://www.cdc.gov/mmwr/pdf/rr/rr6201.pdf}.}. For the purpose of the case study, we download a sample from the publicly available source and treat it as the confidential data.\par
We retrieve the YRBS data from the YRBSS section of the CDC website. The district-level dataset of high school students contains 504,249 observations from multiple districts across the U.S. from 1991 to 2019. For illustration purposes, we primarily focus on the 2019 survey in New York City and Chicago.
The retrieved YRBS data contain variables such as respondent ID and sample site, demographic variables such as age, sex, and race, body mass index (BMI) variables, sexual minority variables, and the 2019 questionnaire and supplemental variables. We primarily focus on the variables that might present the biggest privacy concerns. Our selected variables are summarized in Table \ref{tab:varlist}.
\begin{table}[t]
\centering
\caption{YRBS categorical variable names, levels, and sensitive status.}
\label{tab:varlist}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{llc}
\hline
Variable name & Characteristics & Sensitive \\
\hline
City & New York City/Chicago & No\\
Age & 12 to 18 years old; seven levels & No \\
Sex & Male/female & No \\
Grade & 9th to 12th grade; four levels & No \\
Race & Seven categories & No \\
Obesity indicator & Yes/No (1 and 2)& Yes\\
Sexuality & Four categories & Yes \\
Ever experienced sexual violence & Yes/No (1 and 2) & Yes \\
Current tobacco use* & Yes/No (1 and 2) & Yes \\
Current alcohol use & Yes/No (1 and 2) & Yes \\
Current marijuana use & Yes/No (1 and 2) & Yes \\
Ever illicit drug use** & Yes/No (1 and 2) & Yes \\
Ever sexual intercourse & Yes/No (1 and 2) & Yes \\
\hline
*smoke cigarettes, electronic vapor, or cigars \\
**ever used cocaine, heroin, or methamphetamine
\end{tabular}}
\end{table}
Variables related to tobacco use and illicit drug use are created by combining some sub-categories, while the other variables remain the same format as in the YRBS. After removing missing values, we arrive at a sample containing $n = 5,949$ observations with $13$ variables. All variables are categorical. We deem variables related to substance use, sex, and violence sensitive and therefore to be synthesized for protection (all variables with ``Yes" in the ``Sensitive" column in Table \ref{tab:varlist}).\par
\subsection{The Synthetic Data Approach}
One approach to providing privacy protection for microdata is to generate synthetic data to be released in place of the confidential data \citep{rubin_1993, little_1993}.
Since its first proposal almost three decades ago, the field has witnessed a great deal of research effort to develop theories and models for releasing synthetic microdata. Given that a subset of our YRBS variables is deemed sensitive, we follow the partially synthetic data approach, where only sensitive variables are replaced by synthetic values while non-sensitive variables remain unchanged \citep{little_1993}. One way to generate partially synthetic data is to first fit Bayesian models to the confidential data to estimate the posterior distributions. One then simulates synthetic values for the sensitive variables from the posterior predictive distributions. With carefully designed Bayesian models, the resulting synthetic data can preserve important statistical characteristics of the confidential data, such as means, variances, and joint probability distributions. Moreover, they can protect the privacy of the confidential data by reducing the disclosure risks of the respondents, for instance by preventing intruders from identifying a particular individual or inferring the values of their sensitive variables. For a detailed overview of synthetic data, see \citet{drechsler_2011}. \par
Given the categorical nature of all of our YRBS variables, we adopt the DPMPM synthesizer, which has been shown effective for survey \citep{hu_reiter_wang_2014} and administrative \citep{drechsler_hu_2021} data. We use its implementation in the \texttt{NPBayesImputeCat} R package \citep{npbayesimputecat} to generate five partially synthetic YRBS datasets.
We next extensively evaluate the utility and disclosure risks of the resulting synthetic data and conclude that they provide a useful public release of the YRBS sample with sufficient privacy protection. \par
The remainder of this paper is organized as follows: Section \ref{sec:DPMPM} describes our adopted DPMPM synthesizer and our implementation details. Section \ref{sec:utility} evaluates the utility of the synthetic data, while Section \ref{sec:risk} evaluates the disclosure risks. We conclude the paper with some discussion and remarks in Section \ref{sec:conclusion}.\par
\section{The DPMPM Synthesis Model and Implementation}
\label{sec:DPMPM}
The aforementioned categorical nature of our YRBS data prompts us to adopt the Dirichlet Process mixture of products of multinomials (DPMPM) synthesis model. Compared to popular sequential synthesis models, such as classification and regression trees (CART) proposed by \citet{CART2005}, where each sensitive variable is synthesized from a univariate model, the DPMPM takes a joint modeling approach by specifying a joint multivariate distribution of the categorical variables. Works such as \citet{hu_reiter_wang_2014} and \citet{drechsler_hu_2021} have demonstrated its effectiveness in synthesizing survey and administrative data.
Suppose we have the sample $\mathbf{Y}$ with $n$ observations and $r$ unordered categorical variables, where each record $i$ is denoted as $\mathbf{Y}_i = (Y_{i1},\ldots,Y_{ir})$.
The DPMPM synthesis model assumes that each $\mathbf{Y}_i$ belongs to one of $K$ underlying latent classes. The latent classification is, by definition, unobserved and therefore requires estimation. Given the latent class assignment $z_i$ of record $\mathbf{Y}_i$, each categorical variable $j$, i.e., $Y_{ij}$, independently follows a multinomial distribution where $d_j$ is the number of categories in variable $j$ ($j = 1, ..., r$). Mathematically:
\begin{align}
Y_{ij} \mid z_i,\theta \stackrel{ind}{\sim} \text{Multinomial}(\theta_{z_i1}^{(j)}, ...,\theta_{z_id_j}^{(j)};1) \;\; \forall i, j, \label{eq:Y}\\
z_i \mid \bm{\pi} \sim \text{Multinomial}(\pi_1,...,\pi_K; 1) \;\; \forall i, \label{eq:z}
\end{align}
where $\bm{\pi}$ is the probability vector of the latent class assignment and $\bm{\theta}^{(j)}_{k}$ is the probability vector of the categories of variable $j$ for latent class $k$. One way to estimate the model parameters is to use the truncated stick-breaking representation of the Dirichlet process priors following \citet{sethuraman_1994}.
Marginalizing over the latent classes shows that the model induces a joint distribution over all $r$ categorical variables: the marginal probability of a record having a particular vector of values can be expressed as averaging over the latent classes,
\begin{align}
\text{Pr}(Y_{i1} = y_{i1}, \ldots, Y_{ir} = y_{ir} \mid \bm{\pi}, \bm{\theta}) = \sum^K_{k=1}\pi_k\prod_{j=1}^r\theta^{(j)}_{ky_{ij}}.
\end{align}
We implement the Markov chain Monte Carlo (MCMC) estimation process using the \texttt{NPBayesImputeCat} R package \citep{npbayesimputecat}. It uses a blocked Gibbs sampler to estimate the joint posterior distribution and provides posterior draws of all model parameters, from which synthetic data can be generated. We report the utility and disclosure results in the next sections based on $m = 5$ simulated synthetic datasets as the results are not sensitive to $m \geq 5$. \citet{RJ-2021-080} presents detailed instructions of using the \texttt{NPBayesImputeCat} R package for data synthesis. We include our R script below for interested readers.
\begin{small}
\begin{verbatim}
YRBS_syn <- NPBayesImputeCat::DPMPM_nozeros_syn(
  X = YRBS_data,   # confidential data; all variables stored as factors
  dj = dj,         # vector of the number of levels of each variable
  nrun = 10000,    # total number of MCMC iterations
  burn = 5000,     # burn-in iterations
  thin = 10,       # keep every 10th draw to reduce autocorrelation
  K = 80,          # upper bound on the number of latent classes
  aalpha = 0.25,   # shape of the Gamma prior for the DP concentration
  balpha = 0.25,   # rate of the Gamma prior for the DP concentration
  m = 5,           # number of synthetic datasets to generate
  vars = c("obesity","sexuality","sexual_violence","tobacco",
           "alcohol","marijuana","drug","sexual_contact"), # synthesized
  seed = 221,
  silent = TRUE)
\end{verbatim}
\end{small}
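The returned object contains, among other components, the generated synthetic datasets; a minimal sketch of how we access them (assuming the output structure documented in \citet{RJ-2021-080}):
\begin{small}
\begin{verbatim}
# List of the m = 5 partially synthetic datasets
syn_list <- YRBS_syn$syndata
head(syn_list[[1]])  # inspect the first synthetic dataset
\end{verbatim}
\end{small}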
\begin{comment}
Therefore, the estimation of the model requires the following set of parameters: a vector of probabilities $\pi$ and a collection of vectors of probabilities $\boldsymbol{\theta}$, one for each combination of variable $j$ and latent class $k$. The model uses the truncated stick-breaking representation of Dirichlet Process priors distributions for these parameters following \citet{sethuraman_1994}. The priors are given by:
\begin{align}
\pi_k &= V_k\prod_{l<K}(1-V_l) \;\; \text{for} \;\; k=1,..., K\\
V_k &\stackrel{iid}{\sim} \text{Beta}(1,\alpha) \;\; \text{for} \;\; k = 1, ... K-1, V_K=1,\\
\alpha &\sim \text{Gamma}(a_\alpha, b_\alpha),\\
\boldsymbol{\theta}^{(j)}_k &= (\theta^{(j)}_{k1},...,\theta^{(j)}_{kd_j})\sim \text{Dirichlet}(a^{(j)}_1,...,a^{(j)}_{d_j}) \;\; \forall j, k
\end{align}
The Dirichlet distribution is the conjugate prior of the multinomial distribution. Following \citet{hu_reiter_wang_2014}, the Dirichlet parameters $a_1^{(j)}, ..., a^{(j)}_{d_j}$ are set to 1 to correspond to uniform distributions. The Gamma parameters are set to $a_\alpha = b_\alpha = 0.25$, which is a weak prior that allows the data to dominate the distribution. The Markov chain Monte Carlo (MCMC) sampling procedure is implemented by a blocked Gibbs sampler to estimate the posterior distributions of all parameters. \par
\end{comment}
\begin{comment}
With posterior draws of $\bm{\pi}$ and the collection of $\bm{\theta}$, we follow Equation (\ref{eq:z}) to first sample a $z_i^*$ as the latent class assignment for record $i$. Next, we use $z_i^*$ and corresponding $\bm{\theta}_{z_i^*}^{(j)}$ and follow Equation (\ref{eq:Y}) to sample a synthetic value $Y_{ij}^*$ for record $i$ variable $j$. This process is repeated for each record $i$ so that we create a synthetic sample $\mathbf{Y}^*$.
\end{comment}
\begin{comment}
From the posterior distributions, we sample a value of $\alpha$, $\pi$, and $\theta$. $\pi$ will be used to sample the values of $z_i$ independently from (2). The synthetic record can then be generated using the sampled $\theta$ by sampling from independent multinomial distributions with probabilities $\theta_{k_i}^{(j)}$ for each $j$ given sampled $z_i$. The implementation of the synthetic process is made fairly convenient by the NPBayesImputeCat \citep{npbayesimputecat} package in R. The upper bound of the number of latent classes, $K$, is set to be $80$ (the actual estimated model only occupies around 35 latent classes). The MCMC process runs for 10,000 iterations, with burn-in of 5,000 iterations and storing only every 10th iteration to reduce the autocorrelation between successive draws (thinning = 10). Finally, $m$ synthetic datasets are generated. We find that the results are not sensitive to the choice of $m$ and we choose to report the ones based on $m=5$ for the rest of this paper.\par
\end{comment}
\section{Utility Evaluation and Results}
\label{sec:utility}
For synthetic data to be released, a key criterion is usefulness, i.e., they should preserve important characteristics of the confidential data.
Two types of utility are typically considered in the literature: global utility and analysis-specific utility. The former evaluates the closeness between the confidential and synthetic data distributions, while the latter evaluates whether synthetic data users can obtain inferences on the synthetic data that are similar to those obtained from the confidential data \citep{woo_reiter_oganian_karr_2009, snoke_raab_nowok_dibben_slavkovic_2018}. We consider a few metrics of each in our utility evaluation of the resulting synthetic YRBS data.
\subsection{Global Utility}
We evaluate the global utility of the synthetic data through propensity scores (pMSE) and the distribution of differences in relative frequencies for cross-tabulations. As the results show, both measurements indicate that our synthetic data preserve a high level of global utility.
\subsubsection{Propensity scores (pMSE)}
The propensity score measures the probability that an individual in a dataset is assigned to a specific treatment group given the individual's information on other variables. It is commonly used in causal inference to reduce bias from confounding variables when estimating the effect of an intervention in an observational study. \citet{woo_reiter_oganian_karr_2009} first proposed using it for measuring global utility in the case of synthetic data, and the methodology is further expanded by \citet{snoke_raab_nowok_dibben_slavkovic_2018}. In this context, the treatment is whether the data are synthesized, and the variables used to estimate the probability are all variables in the datasets.\par
The evaluation takes place for each of the $m$ synthetic datasets. First, we combine the confidential and the synthetic datasets into one. Assuming the confidential dataset has $n_c$ records and the synthetic dataset has $n_s$ records, we arrive at a concatenated dataset of dimension $(n_c+n_s)$-by-$r$, where $r$ is the number of variables. Next, we create an additional binary variable $S$ for each record indicating whether it belongs to the synthetic or confidential data, i.e., $S_i = 1$ if synthetic and $S_i = 0$ if confidential. With this setup, for each record, we can use the $r$ variables to predict the probability of $S_i$ taking value 1, which is the estimated propensity score, denoted as $\hat{p}_i$. In our case study, a logistic regression is used for the prediction of $\hat{p}_i$.\par
The propensity score mean-squared error, known as the pMSE, is computed as
\[\text{pMSE} = \frac{1}{n_c+n_s}\sum_{i=1}^{n_c+n_s}(\hat{p}_i-c)^2,\]
where $c$ is the proportion of units with synthetic data, i.e., $c = n_s / (n_s+n_c)$. In our case, for each of our $m = 5$ partially synthetic datasets, we compute the pMSE where $n = n_c = n_s$ and $c = 0.5$. As can be seen from its mathematical form, the pMSE is a measurement of how well a model can differentiate between the confidential and the synthetic dataset given all variables. It measures the deviation of the predicted probability from $c = 1/2$, i.e., how much more certain the model is at telling the difference between two datasets than a random guess. Therefore, the smaller the pMSE score, the poorer the model is at distinguishing the two datasets, thus the higher the utility. \par
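A minimal sketch of this computation, assuming \texttt{conf} and \texttt{synth} are data frames holding the confidential data and one synthetic dataset with identical columns (hypothetical object names):
\begin{small}
\begin{verbatim}
# Stack the two datasets and add the origin indicator S
combined <- rbind(cbind(conf, S = 0), cbind(synth, S = 1))
# Logistic regression of S on all other variables
fit <- glm(S ~ ., data = combined, family = binomial())
p_hat <- predict(fit, type = "response")
c0 <- nrow(synth) / nrow(combined)  # here c0 = 0.5
pMSE <- mean((p_hat - c0)^2)
\end{verbatim}
\end{small}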
The average pMSE score computed from our $m = 5$ synthetic datasets is $\mathbf{0.009}$, indicating high global utility of our synthetic data. However, a major limitation of the pMSE measurement is that it is model-dependent, i.e., the pMSE result depends on the model used for distinguishing the two datasets. The logistic regression is presumably a relatively ``weak" model, and more complex algorithms might do a better job in separating the two datasets. See \citet{snoke_raab_nowok_dibben_slavkovic_2018} for further discussion.
\subsubsection{Absolute deviation and differences in relative frequency }
For categorical data, \citet{drechsler_hu_2021} considered the distributions of differences in relative frequencies between the confidential data and synthetic data for various tabulations as a measurement of global utility. For any cross-tabulation of categorical variables, we compute the relative frequency of each cell entry as $c_{jv}^{(t)}$ and $s_{jv}^{(t)}$ ($c$ for confidential and $s$ for synthetic) for the $t$th cross-tabulation with the $j$th variable and $v$th category. The relative difference is then computed as
\begin{equation}
d^{(t)}_{jv} = \frac{s_{jv}^{(t)}-c_{jv}^{(t)}}{c_{jv}^{(t)}},
\end{equation}
obtaining a matrix $\bm{d}^{(t)}$ for each cross-tabulation $t$. The distribution of $\bm{d}^{(t)}$ for one-way cross-tabulations centers at 0 and ranges from $-2$ to $2$, while that for two-way cross-tabulations centers at 0 and ranges from $-5$ to $5$. Plots are included in Appendix \ref{density plots}.
We further consider $|s_{jv}^{(t)}-c_{jv}^{(t)}|$, the absolute deviation between the two datasets. The smaller the absolute deviation, the closer the two datasets, indicating high global utility. The average absolute deviations for one-way, two-way, and three-way cross-tabulations are $\mathbf{0.005}$, $\mathbf{0.006}$, and $\mathbf{0.005}$, respectively, suggesting high global utility.
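A short sketch of this computation for one-way tables, again assuming data frames \texttt{conf} and \texttt{synth} with identical factor columns (hypothetical names):
\begin{small}
\begin{verbatim}
# Relative differences d_jv for one variable v
rel_diff <- function(v) {
  c_tab <- prop.table(table(conf[[v]]))
  s_tab <- prop.table(table(synth[[v]]))
  (s_tab - c_tab) / c_tab
}
d_oneway <- unlist(lapply(names(conf), rel_diff))
# Absolute deviations |s_jv - c_jv| are computed analogously
\end{verbatim}
\end{small}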
\begin{comment}
\begin{table}[h!]
\caption{\label{tab:absolute deviation} Average absolute deviation for cross tabulations}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{lr}
& Average $|s_{jv}^{(t)}-c_{jv}^{(t)}|$\\
\hline
One-way tables & 0.005 \\
Two-way tables & 0.006 \\
Three-way tables & 0.005 \\
\hline
\end{tabular}}
\end{center}
\end{table}
\end{comment}
As with the pMSE metric, both the relative frequency difference and the absolute deviation metrics suggest a high level of global utility of our synthetic YRBS datasets.
\subsection{Analysis-specific Utility}
The analysis-specific utility measures are tailored to the analyses expected to be performed on the synthetic data. The expectation is that a data analyst would obtain similar inferences from the synthetic and the confidential data. To evaluate our synthetic YRBS data, two metrics of analysis-specific utility are considered: inference for a point estimate and inference for regression coefficients.\par
\subsubsection{Inference for a point estimate}
Since the synthesized variables are all categorical, the important point estimates are proportions. We consider the proportion of heterosexual students, a highly useful quantity to report, which has a point estimate of $\hat{p}_c = 0.817$ and a 95\% confidence interval of (0.807, 0.827) in the confidential data.
The point estimate and 95\% confidence interval for the $m = 5$ synthetic YRBS datasets can be obtained by using the combining rules for partially synthetic data \citep{drechsler_2011}. Specifically, the point estimate is the mean $\bar{q}_m$ of the point estimates from the $m = 5$ synthetic datasets, and the variance estimate is expressed as $T_p = b_m / m + \bar{v}_m$,
where $b_m$ is the cross-sample variance of the proportions $p^{(l)}$ over the samples $l = 1,\ldots,m$ and $\bar{v}_m$ is the mean of the $m$ sample variances. The point estimate for the $m = 5$ synthetic datasets is $\hat{p}_s = 0.815$ with a $95\%$ confidence interval of $(0.801, 0.830)$.
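A minimal sketch of these combining rules, assuming \texttt{est} and \texttt{var\_est} are length-$m$ vectors of the per-dataset point and variance estimates (hypothetical names):
\begin{small}
\begin{verbatim}
m <- length(est)
q_bar <- mean(est)       # combined point estimate
b_m <- var(est)          # cross-sample (between) variance
v_bar <- mean(var_est)   # average within-sample variance
T_p <- b_m / m + v_bar   # total variance estimate
# Approximate 95% interval (normal approximation)
ci <- q_bar + c(-1, 1) * qnorm(0.975) * sqrt(T_p)
\end{verbatim}
\end{small}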
To evaluate the closeness between the two confidence intervals, we compute the interval overlap metric described in \citet{drechsler_reiter_2009}:
\[I = \frac{U_i-L_i}{2(U_c-L_c)}+ \frac{U_i-L_i}{2(U_s-L_s)},\]
where $L_c, L_s$ denote the lower CI bounds of the confidential and synthetic datasets, $U_c, U_s$ denote the upper CI bounds of the two datasets, and $L_i = \max(L_s, L_c)$, $U_i = \min(U_s, U_c)$. The highest possible value of $I$ is 1, and our synthetic data yield an overlap of $\mathbf{0.845}$, indicating high utility for this particular point estimate. We compute the same metric for the proportions of the other synthesized variables, including tobacco use, alcohol, marijuana, sexual contact, and drug use. All show an interval overlap above 0.885, with the exceptions of marijuana (0.770) and drug use (0.310). The low overlap for drug use is due to the very small fraction of drug users. \par
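This metric is straightforward to compute; a short sketch with \texttt{L\_c}, \texttt{U\_c}, \texttt{L\_s}, \texttt{U\_s} the confidential and synthetic interval endpoints (hypothetical names):
\begin{small}
\begin{verbatim}
L_i <- max(L_c, L_s); U_i <- min(U_c, U_s)
I <- (U_i - L_i) / (2 * (U_c - L_c)) +
     (U_i - L_i) / (2 * (U_s - L_s))
\end{verbatim}
\end{small}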
\subsubsection{Inference for regression coefficients}
Similar to the inference for a point estimate, we can imagine a data analyst conducting regression analyses, using some variables to predict others. For example, one might use city, age, sex, and race to predict tobacco use with a logistic regression model. The point estimates and 95\% confidence intervals for selected regression coefficients from the confidential data and the synthetic data are obtained and visualized in Figure \ref{fig:reg_coef}. As before, we use the appropriate combining rules for the synthetic data and include interval overlap metrics in the plots.
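A sketch of this analysis on one synthetic dataset (assuming the variable names used in the synthesis script above are stored as factors, and the output structure noted earlier):
\begin{small}
\begin{verbatim}
# Logistic regression of tobacco use on demographics
fit_syn <- glm(tobacco ~ city + age + sex + race,
               data = YRBS_syn$syndata[[1]],
               family = binomial())
summary(fit_syn)$coefficients
\end{verbatim}
\end{small}
The same model is fit to each of the $m = 5$ synthetic datasets, and the estimates are pooled with the combining rules above.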
\begin{figure}[h!]
\centering
\caption{Point estimates, confidence intervals, and interval overlaps for selected regression coefficients in a logistic regression analysis.}
\includegraphics[width = \textwidth]{regression_coef_plot.png}
\label{fig:reg_coef}
\end{figure}
Evidently, the interval overlaps for all considered regression coefficients are extremely high, with the exception of the coefficient of city, indicating an overall high level of analysis-specific utility. We run similar regressions on other combinations of variables and obtain similar results.
In summary, our partially synthetic YRBS data preserve a high level of utility, both in terms of global and analysis-specific utility, across a series of metrics. We now turn to the evaluation of their disclosure risks.
\section{Disclosure Risk Evaluation and Results}
\label{sec:risk}
The primary objective of releasing synthetic data in place of confidential data is to provide privacy and confidentiality protection. Therefore, an important aspect of synthetic data evaluation is to measure the extent to which synthetic data can reduce disclosure risks. Only when the disclosure risks of the generated synthetic data are acceptable to the data disseminators can the synthetic data be released to the public.
We consider two types of disclosures: identification disclosure and attribute disclosure. As the names suggest, identification disclosure is when the intruder correctly identifies records of interest, and attribute disclosure is when the intruder correctly infers the true confidential values of the synthetic variables \citep{hu_2019}.
\subsection{Identification Disclosure}
We consider two approaches to evaluate identification disclosure risk: the matching-based approach and the record linkage approach. Both approaches show that our synthetic YRBS data significantly reduce the identification disclosure risks compared to the confidential YRBS data.
\subsubsection{Matching-based approach}
In the matching-based approach, we assume the intruder possesses some knowledge of a confidential record $i$ and tries to identify the individual associated with this record in the released synthetic data \citep{reiter_mitra_2009}. Specifically, we consider specific scenarios that an intruder might encounter and quantify the corresponding disclosure risks using the following three metrics: 1) expected match risk, the expected number of correct identity matches in the released synthetic data; 2) true match rate, the percentage of true and unique matches; and 3) false match rate, the percentage of unique matches that are false matches. Appendix \ref{key_quantities} includes detailed definitions of these three metrics.
In our synthetic YRBS, we assume the un-synthesized city, age, sex, grade, and race are variables available to the potential intruder. We compute the aforementioned three metrics for both the synthetic and the confidential data to evaluate the reduction of disclosure risks. For the synthetic data, we take the average of the metrics over the $m = 5$ synthetic datasets. The \texttt{IdentificationRiskCalculation} R package is used for these implementations \citep{hornby_and_hu}. Results are summarized in Table \ref{disclosure_risk1}.
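For transparency, a generic sketch of the expected match risk computation, assuming row-aligned data frames \texttt{conf} and \texttt{synth} and a character vector \texttt{keys} of the intruder-known variables (hypothetical names):
\begin{small}
\begin{verbatim}
exp_risk <- 0
for (i in seq_len(nrow(conf))) {
  match <- rep(TRUE, nrow(synth))
  for (k in keys) match <- match & (synth[[k]] == conf[i, k])
  c_i <- sum(match)  # size of record i's match set
  # record i contributes 1/c_i when its own row is in the set
  if (c_i > 0 && match[i]) exp_risk <- exp_risk + 1 / c_i
}
\end{verbatim}
\end{small}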
\begin{table}[h!]
\caption{Identification risk summaries based on the matching-based approach.}\label{disclosure_risk1}
\begin{center}
\begin{tabular}{lrr lrr}
& Confidential & Synthetic & & Confidential & Synthetic\\
\hline
Expected risk & 2234 & 186 & \hspace{0.5cm}False match rate & 0 & 0.891\\
True match rate & 0.257 & 0.012 & \hspace{0.5cm}Unique match & 1526 & 632 \\
\hline
\end{tabular}
\end{center}
\end{table}
Evidently, the expected risk and the true match rate have been reduced substantially (by factors of about 12 and 21, respectively) and the false match rate has increased significantly (from 0 to close to 90\%) through the synthesis process, suggesting a high level of identification disclosure risk reduction provided by our synthetic YRBS data.\par
\subsubsection{Record linkage approach}
For partially synthetic data, record linkage methods can be applied to link records in the synthetic dataset to the records in the confidential dataset. Based on a set of variables, called keys, a link between two records can be established, and we can evaluate identification risks in terms of true links and false links.\par
As with the matching-based approach, variables such as city, age, sex, grade, and race are considered as keys, i.e., the variables the intruder may use to establish the linkage, in our evaluation of the synthetic YRBS. For each record $i$ in the confidential YRBS, multiple linkages in the synthetic YRBS can be established, and the linkages are ranked by a weight estimated using the expectation-maximization algorithm of \citet{winkler_2000}. We use a greedy algorithm to search for the linkage with the highest weight for each record. The linkage establishment and greedy search are implemented with the \texttt{reclin} R package \citep{reclin}.
Similar to the matching-based approach, we calculate the percentages of true links and false links in both the synthetic and the confidential data for comparison. The confidential YRBS have a true linkage rate of $100\%$ and a false linkage rate of $0\%$, whereas the synthetic YRBS have a true linkage rate of $\mathbf{8.5\%}$ and a false linkage rate of $\mathbf{91.5\%}$. An 11-fold reduction in the true linkage percentage and an increase in the false linkage percentage from 0\% to 91.5\% suggest that the synthetic YRBS make it much more difficult for an intruder to establish true record links based on the knowledge they possess; our synthesis process has therefore significantly reduced identification disclosure risks.\par
\subsection{Attribute Disclosure Risk}
To evaluate attribute disclosure risk, we consider two methods: the correct attribution probability (CAP) and the classification-based approach. The results show that our synthetic YRBS provide a significant attribute disclosure risk reduction compared to the confidential YRBS.\par
\subsubsection{Correct Attribution Probability (CAP)}
The CAP measures the probability that an intruder can correctly predict the value of the target variable for an individual by using the empirical distribution of this variable among synthetic observations with the same key variables. In our evaluation of the synthetic YRBS, the key variables are city, age, sex, grade, and race, and the target variable is marijuana usage.
We follow the set-up in \citet{baillargeon_and_charest}. Let $\mathbf{Y}$ denote the confidential dataset and $y_{ij}$ represents the $j$-th variable of the $i$-th record. For a specific sensitive variable $l$, all possible values for this variable are the targets denoted as $T_1,...,T_G$, where $G$ is the number of levels of the target variable. The intruder attempts to predict the value of $y_{il}$ using some or all of $Y^{-l}$, the set of variables other than $l$. These variables are the keys, denoted as $K_1,...,K_H$. The CAP of record $y_0$ in confidential dataset $\mathbf{Y}$ with synthetic dataset $\mathbf{Z}$ is given as:
\begin{equation}
\text{CAP}_{y_0}(\mathbf{Z}) = \frac{\sum_{i=1}^n I[T(z_i)=T(y_0),K(z_i)=K(y_0)]}{\sum_{i=1}^n I[K(z_i)=K(y_0)]}.
\label{eq:CAP}
\end{equation}
Equation (\ref{eq:CAP}) represents the proportion of target variable matches in all the key variable matches for a particular sensitive variable $l$ and a particular record $y_0$. The CAP for a synthetic dataset can be computed by averaging the CAP over all records. The average CAP for the $m = 5$ synthetic YRBS datasets is $\mathbf{0.749}$, while the CAP computed from the confidential dataset is $0.753$.\par
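For concreteness, Equation (\ref{eq:CAP}) and its average over records can be computed as in the following Python sketch (a plain re-implementation with illustrative names; the convention of assigning CAP 0 when no key match exists is our assumption).
\begin{verbatim}
def average_cap(conf, syn, keys, target):
    """Average of Eq. (CAP) over all confidential records y0."""
    caps = []
    for i in range(len(conf)):
        y0 = conf.iloc[i]
        # synthetic records matching y0 on all key variables
        key_match = (syn[keys] == y0[keys]).all(axis=1)
        denom = int(key_match.sum())
        if denom == 0:
            caps.append(0.0)   # convention: no key match gives CAP 0
            continue
        num = int((key_match & (syn[target] == y0[target])).sum())
        caps.append(num / denom)
    return sum(caps) / len(caps)
\end{verbatim}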
Comparing 0.749 and 0.753 indicates that the average CAP does not reduce much from the synthesis process for the file as a whole. However, it is important to note that the CAP for each record could be changed by the synthesis process to different extents which cannot be captured by the average CAP. To visualize the change of CAP at the individual level, Figure \ref{fig:individual_CAP} plots the synthetic individual CAP versus the confidential individual CAP by marijuana status.
\begin{figure}[h!]
\begin{center}
\caption{Synthetic individual CAP versus confidential individual CAP given the marijuana variable.}\label{fig:individual_CAP}
\includegraphics[width = 0.8\textwidth]{CAP_plot_sc.png}
\end{center}
\end{figure}
Figure \ref{fig:individual_CAP} shows that most records fall on the 45 degree line, meaning that for these records there is no major difference in attribution probability before and after synthesis. It is also notable that most records with marijuana usage equal to 1, i.e., marijuana users, have a low CAP in both the confidential and synthetic data, indicating that these records are relatively safe: the true attribute value is hard to infer regardless of whether they are synthesized.\par
\subsubsection{Classification-based risk measure}
A weakness of the CAP measure is that it uses a simple model to predict the values of the target variable. With a classification model, more sophisticated algorithms can be deployed to predict the value of the target variable using a set of keys. In our evaluation of the synthetic YRBS, we adopt a random forest classifier to perform the task of predicting the value of marijuana use, the same task as in the CAP illustration above. We use city, age, sex, grade, obesity, and sexuality as predictors.\par
A random forest classifier fits a number of decision trees on various subsets of the given dataset and averages their predictions to improve predictive accuracy \citep{randomforest}. We use the synthetic data $\mathbf{Z}$ to train a model and test the model on the confidential data $\mathbf{Y}$ to evaluate the accuracy. For comparison, we also train a model using the confidential data and test it on the confidential data. The algorithm is implemented by the \texttt{randomForest} package in R.
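A minimal Python analogue of this train-on-synthetic, test-on-confidential protocol is sketched below using \texttt{scikit-learn} rather than the \texttt{randomForest} R package; given \texttt{conf} and \texttt{syn} DataFrames as before, column names are illustrative and categorical predictors are assumed to be integer-coded.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier

predictors = ["city", "age", "sex", "grade", "obesity", "sexuality"]

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(syn[predictors], syn["marijuana"])   # train on synthetic data Z
pred = rf.predict(conf[predictors])         # test on confidential data Y

# per-class classification error on the confidential records
for level in sorted(conf["marijuana"].unique()):
    mask = (conf["marijuana"] == level).to_numpy()
    print(level, (pred[mask] != level).mean())
\end{verbatim}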
The classification error for $\text{Marijuana}=1$ is $0.995$ on the confidential data and $\mathbf{0.988}$ on the synthetic data; the error for $\text{Marijuana}=2$ is $0$ on the confidential data and $\mathbf{0.004}$ on the synthetic data.
These results show that $\text{Marijuana} = 1$ is always difficult to predict: even when the model is trained on the confidential data, it performs poorly on these records. In fact, the error rate decreases when the model is trained on the synthetic data, meaning that the risk is potentially higher in the synthetic data. However, since the marijuana variable is highly skewed, and the random forest classifier has a random component so that each run builds a slightly different model, we cannot conclusively state whether the synthesis process has increased or reduced the attribute disclosure risks.
In summary, two metrics of identification disclosure risks indicate that our synthetic YRBS have substantially reduced such risks, while the results are less conclusive for attribute disclosure risk evaluation.
\section{Concluding remarks}
\label{sec:conclusion}
In conclusion, the respondent-level privacy and confidentiality in the YRBS data sample are well protected by the DPMPM synthesis model, especially in terms of identification disclosure risk reduction. At the same time, the synthetic YRBS preserve a high level of data utility, both in terms of global and analysis-specific utility with various metrics.
There are a few limitations to our case study. First, some of the utility and risk evaluation methods consider only a few scenarios, and some of the measurements, such as the inference for a regression coefficient and the classification-based approach, are model-dependent. However, it is admittedly infeasible to comprehensively consider all possible inferences and prediction models that a data analyst or an intruder might use. Second, the YRBS data are highly skewed and unbalanced, especially in some of the sensitive variables. Such skewness can be challenging for our synthesis model to capture, as well as for a data analyst or a hypothetical intruder to model. We believe this is the main reason that, for highly unbalanced variables, the utility and risk reduction results are not satisfactory.
Despite these limitations, our case study serves as a useful demonstration of the DPMPM synthesis model and illustrates how widely useful and applicable it can be. The model can be extended to the rest of the YRBS survey (those before 2019 and other than NYC and Chicago), as well as some other categorical data in general. We also believe our case study showcases a variety of utility and disclosure risk evaluation metrics in practice, which can be useful and beneficial to data disseminators who are considering the synthetic data approach for microdata release. \par
\bibliographystyle{natbib}
The determination of the location of an RF transmitter based on measurements from several receivers is a fundamental problem in
a number of applications. These measurements include altitude (ALT), time difference of arrival (TDOA), and frequency difference of arrival (FDOA)~\cite{Ho1997}. In fact, all of these measurements can be cast as polynomials
in the variables corresponding to the location of the transmitter. The problem of geolocation can then be solved by computing numerical approximations
of the solutions of polynomial systems, and it is simple to immediately provide lower bounds on the number of measurements needed to reduce the
solution set dimension to zero (i.e., so there are only finitely many potential locations). \\
Given a system of polynomial equations, the mathematical area called numerical algebraic geometry provides a range of tools and software for the solution of polynomial systems. Pairing this set of tools with the random sample consensus (RANSAC) method then yields a novel approach to RF transmitter geolocation. \\
The case of solving for emitter location using only TDOA measurements is simpler and has been well studied. Ho and Chan~\cite{Ho1997}
provided a first development of equations for using TDOA measurements to back out emitter location, coming very close to the polynomial
system development given in the next section. The more recent articles~\cite{Compagnoni2014,Compagnoni2016} provide an advanced
description of this TDOA-only setting, based largely on algebraic geometry. Various other approaches have been developed, and at least
one~\cite{Shuster} has made use of numerical algebraic geometry, though in a rather different context. RANSAC has also previously been
applied to the TDOA-only setting~\cite{Li2009}. \\
Although the methods developed in this paper can be used to locate an emitter using TDOA measurements only, FDOA measurements only, or a combination of both, we choose to focus on the FDOA-only case. Doppler resolution is higher than range resolution for signals with narrow-bandwidth and long pulse duration~\cite{Cheney2009,Mason2005}, making it desirable to solve for emitter location using FDOA alone in these cases. Source localization in the FDOA-only case, however, is far more nuanced, as the geometry associated with the FDOA measurements is more interesting and complicated~\cite{Cheney2017}. \\
For geolocation using FDOA alone in particular, there are often multiple possible emitter locations corresponding to observed measurements. This can cause problems for iterative methods that converge to a single solution~\cite{Mason2005}. In contrast, numerical algebraic geometry techniques will find all real, feasible solutions. Our method then uses RANSAC to help determine which of these solutions is most consistent with other data gathered.
In~\S\ref{s:Background} we formulate the TDOA- and FDOA-based source localization problem as polynomial systems and provide an introduction to methods from numerical algebraic geometry to be used for their solution. The proposed geolocation algorithm (FDOAR) incorporating numerical algebraic geometry techniques and the iterative process, RANSAC, is presented in~\S\ref{s:RANSAC} along with numerical results. A novel upper bound on feasible FDOA measurements for use in denoising data is presented in~\S\ref{s:FDOA_bounds}. In~\S\ref{s:Bounds} we present lower bounds on the number of receivers or measurements needed to reduce the solution set to only isolated points. Benefits and limitations of the FDOAR geolocation algorithm are discussed in~\S\ref{s:Disc}.
The primary contributions of this article are
\begin{itemize}
\item a development of the geolocation problem via TDOA and FDOA measurements as polynomial systems;
\item a novel upper bound on possible FDOA measurements;
\item lower bounds on the number of receivers or measurements needed to reduce the solution set to only points; and
\item a novel approach to geolocation pairing numerical algebraic geometry with RANSAC.
\end{itemize}
\section{Background}\label{s:Background}
\subsection{Polynomial Systems for Geolocation}\label{s:GeoPoly}
Using a setup similar to that in~\cite{Ho1997}, the relationship between TDOA measurements, FDOA measurements, and transmitter location can be represented as a set of polynomials. Consider a system of receivers, labeled $1,\hdots,N.$ Without loss of generality, the first receiver can be chosen as a reference receiver, such that the TDOA ($\tau_{i,1}$) and FDOA ($f_{i,1}$) can be calculated between receivers $i$ and 1, where $i=2,\hdots,N$, for a total of $N-1$ pairs of measurements. The problem can be cast as two- or three-dimensional. For generality, we choose to use a three-dimensional earth-centered, earth-fixed (ECEF) coordinate system. Thus, each receiver has known location $\mathbf{x}_i=[x_i,y_i,z_i]^T$ and velocity $\dot{\mathbf{x}}_i=[\dot{x}_i,\dot{y}_i,\dot{z}_i]^T.$ We wish to solve for the location of a radio-frequency emitter, $\mathbf{x}=[x,y,z]^T$.
\subsubsection{Time Difference of Arrival (TDOA)}
The amount of time it takes for the signal to travel from $\mathbf{x}$ to $\mathbf{x}_i$ is $\|\mathbf{x}-\mathbf{x}_i\|/c$, where $c$ is the speed of propagation. Thus the TDOA between receiver $i$ and receiver 1 is equivalent to
\begin{align*}
c\cdot\tau_{i,1} &= \|\mathbf{x}_i-\mathbf{x}\| - \|\mathbf{x}_1-\mathbf{x}\| \\
&= \sqrt{(x_i-x)^2+(y_i-y)^2+(z_i-z)^2} \\
& \qquad - \sqrt{(x_1-x)^2+(y_1-y)^2+(z_1-z)^2}.
\end{align*}
\noindent Considering all $N$ receivers, this system can be transformed into the set of polynomial equations~\cite{Ho1997}:
\begin{flalign*}
&(c\cdot \tau_{1,2})^2 + 2cr_1 \cdot \tau_{1,2}\\
& - (x_2^2 + y_2^2 + z_2^2) + (x_1^2 + y_1^2 + z_1^2) \\
& + 2\left[(x_2-x_1)x + (y_2-y_1)y + (z_2-z_1)z\right] = 0 \\ \\
&(c\cdot \tau_{1,3})^2 + 2cr_1 \cdot \tau_{1,3} \\
&- (x_3^2 + y_3^2 + z_3^2) + (x_1^2 + y_1^2 + z_1^2) \\
& + 2\left[(x_3-x_1)x + (y_3-y_1)y + (z_3-z_1)z\right] = 0 \\
& \qquad \qquad\qquad \qquad \vdots \\
&(c\cdot \tau_{1,N})^2 + 2cr_1 \cdot \tau_{1,N}\\
&- (x_N^2 + y_N^2 + z_N^2) + (x_1^2 + y_1^2 + z_1^2) \\
& + 2\left[(x_N-x_1)x + (y_N-y_1)y + (z_N-z_1)z\right] = 0 \\ \\
& r_1^2 - (x^2+y^2+z^2) - (x_1^2+y_1^2+z_1^2)\\
& \qquad \qquad \qquad \qquad \;+2 (x_1x + y_1y +z_1z) = 0,
\end{flalign*}
where the variable $r_1$, representing the distance from the emitter to receiver 1, is used to remove the square roots from the system.
\subsubsection{Frequency Difference of Arrival (FDOA)}\label{s:FDOA}
As developed in~\cite{Ho1997}, the FDOA between receiver $i$ and receiver 1 is,
\begin{align}
\label{FDOA1}
f_{i,1}= \frac{f_0}{c}\left[\dfrac{\dot{\mathbf{x}}_i^T(\mathbf{x}_i-\mathbf{x})}{\|\mathbf{x}_i-\mathbf{x}\|}-\dfrac{\dot{\mathbf{x}}_1^T(\mathbf{x}_1-\mathbf{x})}{\|\mathbf{x}_1-\mathbf{x}\|}\right],
\end{align}
where $f_0$ is the emitted frequency. Although more complicated than the TDOA case, the equations above can be converted to a polynomial system with the addition of further range variables, $r_i$. Absorbing the constant factor $f_0/c$ into the measurements (i.e., working with rescaled FDOA values $f_{i,1}$), the system becomes,
\begin{align*}
\tiny
&r_1 r_2 f_{1,2} \; \\
& - \; r_1\left[\dot{x}_2(x_2-x) + \dot{y}_2(y_2-y) + \dot{z}_2(z_2-z)\right] \\
&+ \; r_2\left[\dot{x}_1(x_1-x) + \dot{y}_1(y_1-y) + \dot{z}_1(z_1-z)\right] =0 \\
&r_1 r_3 f_{1,3} \; \\
& - \; r_1\left[\dot{x}_3(x_3-x) + \dot{y}_3(y_3-y) + \dot{z}_3(z_3-z)\right] \\
&+ \; r_3\left[\dot{x}_1(x_1-x) + \dot{y}_1(y_1-y) + \dot{z}_1(z_1-z)\right] =0 \\
&\qquad \qquad \qquad\qquad \; \; \vdots \\
&r_1 r_N f_{1,N} \; \\
&- \; r_1\left[\dot{x}_N(x_N-x) + \dot{y}_N(y_N-y) + \dot{z}_N(z_N-z)\right] \\ &+ \; r_N\left[\dot{x}_1(x_1-x) + \dot{y}_1(y_1-y) + \dot{z}_1(z_1-z)\right] =0 \\ \\
&r_1^2 - (x^2+y^2+z^2) - (x_1^2+y_1^2+z_1^2) \\
&\qquad \qquad \qquad \qquad \;+ 2 (x_1x + y_1y +z_1z) = 0 \\
&\qquad \qquad \qquad \qquad \; \;\vdots \\
&r_N^2 - (x^2+y^2+z^2) - (x_N^2+y_N^2+z_N^2) \\
&\qquad \qquad \qquad \qquad \;+ 2 (x_Nx + y_Ny +z_Nz) = 0.
\end{align*}
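For illustration, the following Python/\texttt{sympy} sketch assembles this system programmatically for given receiver data (the expanded displays above are recovered by expanding $r_i^2-\|\mathbf{x}-\mathbf{x}_i\|^2$); it is a construction sketch only, and not the solver input used in this paper.
\begin{verbatim}
import sympy as sp

def fdoa_system(recv, vel, fdoa):
    """Assemble the (rescaled) FDOA polynomial system for N receivers.

    recv[i], vel[i]: position and velocity of receiver i+1 (length-3);
    fdoa[i-1]: rescaled FDOA f_{1,i+1} between receivers 1 and i+1.
    """
    N = len(recv)
    x, y, z = sp.symbols("x y z")
    r = sp.symbols(f"r1:{N + 1}")           # range variables r_1,...,r_N
    X = sp.Matrix([x, y, z])
    x1, v1 = sp.Matrix(recv[0]), sp.Matrix(vel[0])
    eqs = []
    for i in range(1, N):                   # FDOA equations
        xi, vi = sp.Matrix(recv[i]), sp.Matrix(vel[i])
        eqs.append(r[0] * r[i] * fdoa[i - 1]
                   - r[0] * vi.dot(xi - X)
                   + r[i] * v1.dot(x1 - X))
    for i in range(N):                      # range-defining equations
        xi = sp.Matrix(recv[i])
        eqs.append(r[i]**2 - (X - xi).dot(X - xi))
    return eqs, (x, y, z) + tuple(r)
\end{verbatim}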
\subsection{Numerical Methods for Polynomial Systems}\label{s:NAG}
Rephrasing the above problems as systems of polynomial equations now opens the door to methods from algebraic geometry,
particularly numerical algebraic geometry. The core computational engine of this field is homotopy continuation. The idea is as
follows: Given a polynomial system of equations $\mathbf{f}(\mathbf{z}) = 0$ to be solved ($\mathbf{z}\in\mathbb C^N$), a related system
$\mathbf{g}(\mathbf{z})$ is constructed. For example, the {\em total degree start system} for a system $\mathbf{f}(\mathbf{z})$
is given by $g_i(\mathbf{z}) = z_i^{d_i}-1$, where $d_i$ is the degree of the polynomial $f_i(\mathbf{z})$. A homotopy function,
typically
$$\mathbf{H}(\mathbf{z},t) = t\ \mathbf{g}(\mathbf{z}) + (1-t)\ \mathbf{f}(\mathbf{z}),$$
between these two systems is then constructed so that $\mathbf{H}(\mathbf{z},1) = \mathbf{g}(\mathbf{z})$ and
$\mathbf{H}(\mathbf{z},0) = \mathbf{f}(\mathbf{z})$. Some basic algebraic geometry then guarantees that each isolated solution
of $\mathbf{f}(\mathbf{z})=0$ will be reached by at least one path that varies in $t$ and includes a solution of $\mathbf{g}(\mathbf{z})=0$.
Thus, to find all isolated solutions of $\mathbf{f}(\mathbf{z})=0$, it suffices to find all solutions of $\mathbf{g}(\mathbf{z})$ and use
numerical path-tracking methods~\cite{ag90} to move from $t=1$ to $t=0$. Predictor-corrector methods are the standard choice.
Extraneous paths will diverge, but this wasted computation time can be partially mitigated by terminating any path with norm above
some threshold or working in projective space instead of affine space.
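To fix ideas, the following Python sketch tracks a single path of $\mathbf{H}(\mathbf{z},t)$ with an Euler predictor and a Newton corrector at fixed step size; production solvers such as Bertini add adaptive step size and precision control as well as endgames on top of this basic scheme.
\begin{verbatim}
import numpy as np

def track(f, df, g, dg, z0, steps=500, newton_iters=5):
    """Track one path of H(z,t) = t*g(z) + (1-t)*f(z) from t=1 to t=0.

    f, g : C^n -> C^n with Jacobians df, dg; z0 satisfies g(z0) = 0.
    """
    z = np.asarray(z0, dtype=complex)
    t, dt = 1.0, 1.0 / steps
    for _ in range(steps):
        # predictor: H_z dz/dt = -(g(z) - f(z)), then step t -> t - dt
        Hz = t * dg(z) + (1 - t) * df(z)
        z = z - np.linalg.solve(Hz, g(z) - f(z)) * (-dt)
        t -= dt
        for _ in range(newton_iters):   # corrector: Newton on H(., t)
            Hz = t * dg(z) + (1 - t) * df(z)
            z = z - np.linalg.solve(Hz, t * g(z) + (1 - t) * f(z))
    return z
\end{verbatim}
Running this tracker from each solution of the total degree start system $g_i(\mathbf{z}) = z_i^{d_i}-1$ (tuples of roots of unity) recovers, with probability one, every finite isolated solution of $\mathbf{f}(\mathbf{z})=0$.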
This is only a very basic description of a procedure that is fundamental in dozens of methods.
For example, there are numerical methods for finding all complex positive-dimensional solution components (curves, surfaces, etc.),
for extracting real solutions in various circumstances, for making use of particular polynomial system structure to reduce run time,
and so on. For many further details, see~\cite{SW05,BertiniBook} and the references therein. For now,
it suffices to know a bit more about singular solutions, ill-conditioning, and parameter homotopies.
\subsubsection{Singularity}
Just as $(x-1)^2=0$ has a double root at $x=1$, isolated solutions of polynomial systems can have multiplicity greater than 1.
The Jacobian matrix (the matrix of all first partial derivatives of the system) is singular at such solutions, so these solutions are referred
to as singular. The basic methods of numerical algebraic geometry include so-called {\em endgames}, specialized techniques based on
Puiseux series expansions or the Cauchy integral formula, which can be used to accurately compute numerical approximations to
singular solutions of $\mathbf{f}(\mathbf{z})=0$. However, when tracking paths before $t=0$, paths can become very
close\footnote{Actual path crossing is a probability 0 event~\cite{SW05,BertiniBook}.}, causing the Jacobian matrix to become
ill-conditioned and resulting in poor path-tracking performance. This is mitigated in the software package Bertini~\cite{Bertini} via
step size control and adaptive precision techniques, but can still lead to various recognizable tracking failures~\cite{BertiniBook}.
\subsubsection{Parameter Homotopy}
Finally, in the special setting of repeatedly solving polynomial systems with the same monomial structure but varying coefficients,
there is a particular valuable tool called the {\em parameter homotopy}. The idea is as follows: Suppose we wish to find the solutions of a parameterized
polynomial system of equations, $\mathbf{f}({\mathbf z};\mathbf{p})=0$, at each of a large number $n$ of parameter values
$\mathbf{p}_i\in\mathcal P$, $i=1,\ldots,n$ within some Euclidean (or possibly more general) parameter space $\mathcal P$. We could
use the basic homotopy continuation mechanism described above, but there is a chance that much computational time will be wasted
tracking (possibly many) divergent paths for each $\mathbf{p}_i$. Instead, the idea of the parameter homotopy is to first solve
$\mathbf{f}(\mathbf{z};\mathbf{p_0})$ for some random complex $\mathbf{p}_0\in\mathcal P$, then start from only the finite solutions at
$\mathbf{p}_0$ to track to each $\mathbf{p}_i$. Once again, some basic algebraic geometry guarantees with probability 1 that the
number of finite, isolated solutions at $\mathbf{p}=\mathbf{p}_0$ will be the maximum number of finite, isolated solutions at any choice of $\mathbf{p}$.
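In its simplest form, the idea can be conveyed by moving the parameters along the segment from $\mathbf{p}_0$ to $\mathbf{p}_i$ and correcting the tracked solution as one goes; the corrector-only Python sketch below (reusing the imports of the previous listing) is a toy version of what Bertini implements far more robustly.
\begin{verbatim}
def param_track(f_p, df_p, p0, p1, z0, steps=500, newton_iters=5):
    """Carry a solution of f(z; p0) = 0 to one of f(z; p1) = 0 along
    the parameter segment p(t) = t*p0 + (1-t)*p1, t: 1 -> 0."""
    z, dt = np.asarray(z0, dtype=complex), 1.0 / steps
    t = 1.0
    for _ in range(steps):
        t -= dt
        p = t * p0 + (1 - t) * p1
        for _ in range(newton_iters):   # Newton corrector at frozen p
            z = z - np.linalg.solve(df_p(z, p), f_p(z, p))
    return z
\end{verbatim}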
\section{Numerical Algebraic Geometry and RANSAC for FDOA-based Geolocation}\label{s:RANSAC}
With a noiseless system, the numerical algebraic geometry methods above can be used to find all solutions to the FDOA system presented in~\S\ref{s:FDOA} to any prescribed numerical accuracy. Specifically, it would take only a single solve in a software package such as Bertini~\cite{Bertini} to obtain an emitter location. However, a couple of issues arise in real-world situations. First, noise and measurement error can plague FDOA calculations and receiver location and velocity estimates. Additionally, if the receivers are positioned in or near a singular configuration, computing the solution may be prohibitively expensive and the solution itself could be much more accurate in some coordinates than in others. The nonlinear nature of the problem implies that there will often be multiple real solutions, which translate to multiple potential emitter locations. A robust accompaniment for the numerical algebraic geometry methods above is an iterative process such as the RANdom SAmple Consensus (RANSAC) algorithm.
RANSAC, originally developed~\cite{ransac} for application to the location determination problem, is useful when one has data with outliers or corrupt data points. The algorithm works by choosing a few samples from a set of data, determining a model to fit the samples, then calculating how many of the remaining data points can be considered inliers with respect to that model, up to a predetermined tolerance. This process is repeated for a prescribed number of iterations, then the model with the most inliers is returned.
As noted above, using RANSAC for geolocation is not a new idea. In fact, Li et al. applied the algorithm to source location with TDOA in~\cite{Li2009}. This paper proposes a modification of RANSAC to solve for source location with the FDOA polynomial system, a problem that is now accessible due to the utilization of numerical algebraic geometry techniques.
The most notable benefit of using RANSAC for this problem is the ability to ``ignore'' noisy or corrupt data. Additionally, since many FDOA measurements are needed for the algorithm, it is natural to reformulate the polynomial system presented in~\S\ref{s:FDOA} to allow for measurements to be taken over multiple time steps. This reduces the number of receivers needed to a single pair, with each system composed of FDOA measurements from three separate time steps (see~\S\ref{s:Bounds} for why data from three time steps is needed).
The algorithm outlined below involves solving a system using the numerical algebraic geometry software, Bertini \cite{Bertini}, during each iteration. Since each system will be of the same form and change only in certain parameter values (location, velocity, FDOA measurements), the solve can be structured as a parameter homotopy~\cite{SW05,BertiniBook}. As discussed in \S\ref{s:NAG}, this allows for only necessary paths to be tracked, which provides faster run times. Additionally, when the solving of an FDOA system results in multiple real, feasible solutions, we have modified the algorithm to consider each solution separately. This ensures there are no missed solutions, as can often result from iterative geolocation methods that converge to a single solution~\cite{Mason2005}.
\subsection{FDOA-RANSAC (FDOAR) Algorithm Outline\\}
\begin{framed}
\noindent{\bf Input}:
\begin{itemize}
\item Locations, velocities, and corresponding FDOA measurement ($f_i$) for $n$ pairs of receivers (or $n$ timesteps of 1 pair of receivers).
\item Number of iterations to run algorithm ($maxiter$).
\item Inlier tolerance ($\varepsilon$).
\end{itemize}
\noindent{\bf Output}:
\begin{itemize}
\item Estimated transmitter location, $\mathbf{x}$.
\end{itemize}
\begin{enumerate}
\item Select three sample points from receiver data (FDOA measurements, receiver locations and velocities).
\item Solve for emitter location using Bertini.
\item Determine feasible solutions (must have positive range values and satisfy Prop. \ref{FDOAbound}).
\item For each feasible solution, determine the number of sample pairs that can be considered inliers.
\begin{enumerate}
\item For each pair of receivers, determine the theoretical FDOA measurement, $\hat{f}_i$, corresponding to the solution.
\begin{itemize}
\item If $|\hat{f}_i-f_i|<\varepsilon$, mark sample as an inlier.
\end{itemize}
\item If the number of inliers for the current solution is greater than that of the previously recorded best solution, record the current solution as the best source location estimate.
\end{enumerate}
\item Repeat for designated number of iterations and return transmitter location estimate.
\end{enumerate}
\end{framed}
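A condensed Python rendering of this loop is sketched below; \texttt{solve\_fdoa} stands in for the Bertini (parameter homotopy) solve of Steps 2--3, and each sample object is assumed to carry the fields \texttt{x1}, \texttt{x2}, \texttt{v1}, \texttt{v2} (receiver positions and velocities as \texttt{numpy} arrays) and \texttt{f} (the measured, rescaled FDOA).
\begin{verbatim}
import random
import numpy as np

def predict_fdoa(s, x):
    """Theoretical (rescaled) FDOA for receiver pair s at emitter x."""
    d1, d2 = s.x1 - x, s.x2 - x
    return s.v2 @ d2 / np.linalg.norm(d2) - s.v1 @ d1 / np.linalg.norm(d1)

def fdoar(samples, solve_fdoa, eps=0.03, maxiter=20):
    best_x, best_inliers = None, -1
    for _ in range(maxiter):
        triple = random.sample(samples, 3)     # Step 1: sample three pairs
        for x in solve_fdoa(triple):           # Steps 2-3: feasible solutions
            inliers = sum(abs(predict_fdoa(s, x) - s.f) < eps
                          for s in samples)    # Step 4: count inliers
            if inliers > best_inliers:
                best_x, best_inliers = x, inliers
    return best_x
\end{verbatim}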
\subsection{Numerical Performance}\label{ss:Results}
Numerical simulations were run as follows. Consider a
Cartesian cube of space, 100 m long on each side. For each numerical trial, a transmitter was placed at a random location, $\mathbf{x}$, in the cube. Locations and velocities for 40 pairs of receivers were also generated, with locations limited to the interior of the cube and velocities in the range $[-2,2]$ m/s in each $(x,y,z)$ direction. This is meant to simulate 40 time steps for a single pair of receivers and a stationary transmitter. For each pair of receivers, the FDOA was calculated according to Eq. \ref{FDOA1} and noise was added to simulate various levels of relative FDOA measurement error. We define this as
\begin{align*}
\small
\text{Relative FDOA Measurement Error }:=\dfrac{\sigma^2_{noise}}{\sigma^2_{FDOA}}\times 100\%,
\end{align*}
where $\sigma^2_{noise}$ and $\sigma^2_{FDOA}$ are the variance of the noise and variance of observed FDOA, respectively. The FDOAR algorithm was then run for 20 iterations, returning final transmitter estimate $\hat{\mathbf{x}}.$ The error for the trial was then calculated: $\|\hat{\mathbf{x}}-\mathbf{x}\|$ (m). Results are shown in Fig. \ref{numerr}. For each data point, this process was repeated 50 times and the median of the error was recorded.
Many of the worst performing trials above resulted from transmitters located near the edges of the Cartesian box. We hypothesize that this is the result of very few (or none) of the receiver pairs being located on the side of the transmitter closest to the edge of the box. This caused less information to be learned about the transmitter and resulted in a worse estimate. This is consistent with geolocation intuition and suggests that error values in Fig. \ref{numerr} would decrease if one could ensure that receivers view the emitter from a variety of angles.
\begin{figure}
\centering
\includegraphics[scale=0.33]{num_error.pdf}
\caption{Error in emitter location resulting from various levels of FDOA measurement error. For each data point, 50 instances of coupled RANSAC and Bertini were run, each having 20 iterations and $\varepsilon=0.03$.}
\label{numerr}
\end{figure}
\section{Note on Denoising}\label{s:FDOA_bounds}
Since one of the key contributions of this article is the use of RANSAC for denoising data, we
include here a brief result that allows us to immediately remove FDOA
measurements that are physically unrealizable due to measurement error or noise.
\begin{prop} \label{FDOAbound} The rescaled frequency difference of arrival between receivers $i$ and $j$ (cf.\ \S\ref{s:FDOA}), $f_{i,j}$, satisfies:
\begin{equation*}
\left|f_{i,j}\right|\leq \|\mathbf{v}_j\|+\|\mathbf{v}_i\|,
\end{equation*}
where $\mathbf{v}_i$ and $\mathbf{v}_j$ are the velocity vectors of receivers $i$ and $j$, respectively.
\end{prop}
\begin{IEEEproof}
\begin{align*}
&\left|f_{i,j}\right| = \left|\dfrac{\mathbf{v}_j\cdot (\mathbf{x}_j-\mathbf{x})}{\|{\mathbf{x}_j-\mathbf{x}}\|} -\dfrac{\mathbf{v}_i\cdot (\mathbf{x}_i-\mathbf{x})}{\|{\mathbf{x}_i-\mathbf{x}}\|}\right| \\[7pt]
&= \left|\dfrac{\|\mathbf{v}_j\|\|\mathbf{x}_j-\mathbf{x}\|\cos (\theta_j)}{\|{\mathbf{x}_j-\mathbf{x}}\|} -\dfrac{\|\mathbf{v}_i\|\|\mathbf{x}_i-\mathbf{x}\|\cos (\theta_i)}{\|{\mathbf{x}_i-\mathbf{x}}\|}\right| \\[7pt]
&= \left|\|\mathbf{v}_j\|\cos (\theta_j)-\|\mathbf{v}_i\|\cos (\theta_i)\right| \\[7pt]
&\leq \|\mathbf{v}_j\|+\|\mathbf{v}_i\|.
\end{align*}
\end{IEEEproof}
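The bound is also easy to confirm numerically on random configurations, for instance:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x, xi, xj = rng.normal(size=(3, 3))   # emitter and two receivers
    vi, vj = rng.normal(size=(2, 3))      # receiver velocities
    f = vj @ (xj - x) / np.linalg.norm(xj - x) \
        - vi @ (xi - x) / np.linalg.norm(xi - x)
    assert abs(f) <= np.linalg.norm(vi) + np.linalg.norm(vj) + 1e-12
\end{verbatim}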
It would also be interesting to consider denoising via projection to the manifold of
realizable FDOA measurements, similar to the use of projection in linear regression.
A similar approach was previously taken in the TDOA case~\cite{Compagnoni2017}.
We leave this for future work.
\section{Bounds on the Necessary Number of Measurements}\label{s:Bounds}
For systems of linear equations, it is trivial to predict the dimension of the solution set under the assumption
that the equations are linearly independent. This is much the same with polynomial systems, though the
range of degenerate cases is far more nuanced and complicated. With the formulation of the geolocation problem
as a system of polynomial equations in~\S\ref{s:GeoPoly}, it is easy to provide bounds on the minimum number of TDOA and FDOA
measurements\footnote{If we do not allow receivers to take measurements over multiple time steps, a similar table could be provided showing bounds on the number of receivers necessary. Here we refer to the number of measurements rather than the number of receivers for generality.}
needed in various scenarios to reduce the solution set to a finite set of points. The case is also considered where altitude of the emitter is known (ALT constraint). This is the content of Table 1.
\begin{center}
\begin{table}
\begin{tabular}{c|c|c}
& \# measurements (2D) & \# measurements (3D) \\
\hline
TDOA only & 2 & 3 \\
TDOA + ALT & - & 2 \\ \hline
FDOA only & 2 & 3 \\
FDOA + ALT & - & 2\\ \hline
TDOA + FDOA & 1 & 2 \\
TDOA + FDOA + ALT & - & 1
\end{tabular}
\caption{Minimum number of TDOA and FDOA measurements necessary to reduce set of potential transmitter locations to a finite number, for varying dimensions (2 or 3) and types of measurements being used.}
\end{table}
\end{center}
It is important to note that these bounds do not guarantee that there will be only finitely many solutions for every
set of measurements. As an extreme counterexample, consider the case of stacking all receivers at the same
point; the number of (identical) measurements in this case makes no difference.
It is also worth noting that an anomalous positive-dimensional component (with $r_1=0$) shows up in the
FDOA only case. However, this component is easily ignored as it is not physically feasible.
\section{Discussion}\label{s:Disc}
\subsection{Benefits of FDOAR}
We summarize a few of the primary benefits of our approach here:
\begin{enumerate}
\item Solving the geolocation systems using numerical algebraic geometry techniques finds all possible emitter locations. Coupling with RANSAC provides a way to determine which one of those locations best matches the rest of the data.
\item Any bad data from path failures, inaccuracies, measurement error, etc. is automatically ignored, assuming the
source of the errors is not implicit in the structure of the problem.
\item Our method uses FDOA measurements only, though it can be adapted to other measurement combinations.
\item Using multiple time steps, it is necessary to use only two receivers. Additionally, there is no need to designate a reference receiver, which could corrupt all data points if there are errors in its location or velocity.
\item When performing polynomial system solves at multiple points in parameter space, parameter
homotopies could improve efficiency.
\end{enumerate}
\subsection{Limitations}
Each path tracked when solving a polynomial system requires dozens, sometimes hundreds, of linear solves. As a result,
any polynomial systems approach will necessarily be slower than any linear approach. However, linearization necessarily
introduces inaccuracy to nonlinear problems, so the trade-off between speed and accuracy might lead different users to
use different approaches.
As with any RANSAC implementation, speed and accuracy are partly dependent upon the user's choice of the maximum number of iterations and the inlier tolerance. The optimal choice for these parameters can depend greatly on the specifics of the problem. Theoretical results exist that bound the maximum number of iterations with respect to the percentage of inliers present in the data~\cite{Urbancic2014}.
\section{Future Work}
The problem of transmitter geolocation is mathematically rich and practically valuable. As a result, there are many potential
avenues worthy of consideration.
Given a configuration of receivers and a generic set of measurements, it should be possible to decompose the space of
emitter locations into chambers corresponding to the number of physically realizable solutions of the corresponding
geolocation polynomial system. An analysis over some set of such configurations could then help in choosing ``good''
receiver configurations. Similarly, methods such as gradient descent homotopies~\cite{GDHom} could be useful
in finding the boundaries (called the {\em discriminant locus}) between these chambers.
As described above, it would be interesting to understand and make use of the semialgebraic set of physically
realizable FDOA measurements (see~\S\ref{s:FDOA_bounds}). Perhaps this would provide some intuition for the estimation of geolocation accuracy.
\section*{Acknowledgments}
Both authors were partially supported by NSF grant DMS--1719658 and AFOSR grant FA9550-14-1-0185. The authors wish to thank the PI of that AFOSR grant, Margaret Cheney, for introducing us to this problem and numerous stimulating discussions. We also wish to thank Jon Hauenstein for suggesting the use of RANSAC.
\bibliographystyle{abbrv}
\section{Introduction}
Electrical impedance tomography (EIT) is a low cost, noninvasive, radiation free and portable
imaging modality with various applications in medical imaging, geophysics, civil engineering and nondestructive testing.
In particular, it is an active field of research in medical imaging, where devices based on EIT are already used in practice, with applications to lung imaging, such as the diagnosis of pulmonary embolism \cite{MARTINS2019} and the monitoring of patients undergoing mechanical ventilation, as well as to breast imaging, acute cerebral stroke detection, and cardiac activity monitoring; we refer to the reviews \cite{Bera_2018,MR1955896} and the references therein.
Two mathematical models for EIT have been actively investigated over the last few decades.
The {\it continuum model} has been widely studied in the case where applied currents and voltage measurements are supposed to be known on the entire boundary.
This model is closely related to the {\it Calderón problem}, which has attracted the attention of a large community of mathematicians in the last decades; see \cite{Bera_2018,MR1955896}.
It consists in determining the uniqueness and stability properties of the conductivity reconstruction
when the full Dirichlet-to-Neumann map is known, which corresponds, roughly speaking, to the availability of an infinite amount of applied currents and boundary measurements.
Despite its usefulness, the continuum model is not realistic for applications: in the case of medical imaging, for instance, it does not take into account the fact that currents are applied through electrodes attached by small patches to the patient, and that voltage measurements are also performed through these electrodes.
Therefore, the applied currents and voltage measurements are available only on a subset of the boundary.
In the literature, this situation is referred to as {\it partial measurements} as opposed to {\it full measurements} for the standard Calderón problem.
This leads to the more realistic {\it electrode model} \cite{MR1174044}, which also takes into account the electro-chemical reaction occurring at the interface between the electrode and the skin.
As the field of EIT is growing more mature, the awareness of these restrictions has increased also among mathematicians.
As a consequence, the study of the continuum model with partial boundary data has attracted much attention in the recent years.
Uniqueness results with partial boundary data in dimension $n\geq 3$ were obtained in \cite{MR2262748}, in \cite{MR2299741} for $\C^2$-conductivities, and in \cite{MR2209749} for $W^{3/2+\delta,2n}$-conductivities with $\delta>0$.
Uniqueness results were extended to conductivities of class $\C^{1,\infty}(\overline{\Omega})\cap H^{3/2}(\Omega)$ and conductivities in $W^{1,\infty}(\Omega)\cap H^{3/2+\delta}(\Omega)$ with $0 <\delta < 1/2$ arbitrarily small but fixed in \cite{MR3551265}.
We refer to \cite{MR3221605} for a review of theoretical results on the Calderón problem with partial data.
Regarding numerical methods, sparsity priors are used to improve the reconstruction using partial data in \cite{MR3462483,MR3447196}.
D-bar methods in two dimensions were investigated in \cite{MR3642251,MR3626801} and resistor networks in \cite{MR2608623}.
Due to the small size of the electrodes compared to the rest of the boundary in many practical applications, the idea of modeling small electrodes by point
electrodes using Dirac measures is appealing from the mathematical standpoint.
This point of view has been introduced as a {\it point electrode model} and justified in \cite{MR2819201}; see also \cite{MR3400033,MR2553181,MR3023421}.
We also observe that mathematical models using point measurements are highly relevant for large-scale inverse problems such as full-waveform inversion where the dimensions of the receivers are several orders of magnitude smaller than the dimensions of the physical domain of the model; see \cite{Virieux2009}.
The problem of reconstructing conductivities presenting sharp interfaces in EIT, also known as the inclusion detection problem, has attracted significant interest in the last three decades, starting from the pioneering works \cite{MR873244,MR1017325}.
Several numerical methods have been developed for reconstructing discontinuous conductivities including the factorization method introduced in \cite{MR1776481,MR1662460}; see also the review \cite{MR3095385},
monotonicity-based shape reconstructions \cite{MR3621830,MR3628886,MR3126995}, the enclosure method for reconstructing the convex hull of a set of inclusions \cite{MR1694840,MR1776482}, the MUSIC algorithm for determining the locations of small inclusions \cite{MR2168949}, a nonlinear integral equation method \cite{MR2309659}, and topological derivative-based methods \cite{MR2888256,MR2517928,MR2536481,MR2886190}.
Shape optimization techniques, which are the basis of the present paper, have also been employed to tackle this problem: based on level set methods \cite{MR2132313,MR3535238}, for a polygonal partition of the domain \cite{MR3723652}, using second-order shape sensitivity \cite{MR2407028}, and using a single boundary measurement \cite{MR2329288, MR1607628}.
In this framework, the conductivity is assumed to be piecewise constant or piecewise smooth, and it is then convenient to reformulate the problem as a shape optimization problem \cite{MR1215733} in order to investigate the sensitivity with respect to perturbations of a trial interface.
This sensitivity analysis relies on the calculation of the {\it shape derivative}, which can be written either in a strong form, usually as a boundary integral, or in a weak form which often presents itself as a domain integral involving the derivative of the perturbation field.
The usefulness of the weak form of the shape derivative, often called {\it domain expression} or {\it distributed shape derivative}, has been known since the pioneering works \cite{MR800331,MR860040} but has been seldom used since then in comparison with the boundary expression.
A revival of the distributed shape derivative has been observed since \cite{MR2642680}, and
this approach has been further developed in the context of EIT and level set methods in \cite{MR3535238}, see also \cite{MR3660456}.
An important contribution of the present paper is to extend the framework developed in \cite{MR3535238} to the case of point measurements in EIT.
The main issue for shape functionals involving point evaluations is that one needs the continuity of the state, for which the usual $H^1$-regularity in two dimensions is insufficient.
Functionals with point evaluations and pointwise constraints have been studied intensively in the optimal control literature; see \cite{MR2551487,MR2583281}.
In particular, a convenient idea from optimal control is to use Gr\"oger's $W^{1}_p$-estimates \cite{MR990595,a_GRRE_1989a} with $p>2$ to obtain continuity of the state in two dimensions.
Here, we adapt this idea in the context of shape optimization and of the averaged adjoint method, in the spirit of \cite{MR3584578}.
We show that in general the shape derivative contains Dirac measures, and that the adjoint state is slightly less regular than $H^1$ due to the presence of Dirac measures on the right-hand side.
Another important contribution of this paper is to investigate the relations between the domain and boundary expressions of the shape derivative depending on the interface regularity, and the minimal regularity of the interface for which the boundary expression of the shape derivative can be obtained in the context of EIT with point measurements.
We start by recalling in Section \ref{sec:preliminaries} the $W^{1}_p$-estimates for mixed boundary value problems introduced in \cite{MR990595}.
We then formulate in Section \ref{sec:EIT_point} the shape optimization approach for the inverse problem of EIT and show how the averaged adjoint method can be adapted to the context of Banach spaces.
Then, we compute the distributed shape derivative and prove its validity for a conductivity inclusion which is only open.
When the inclusion is Lipschitz polygonal or $\C^1$, we also obtain the boundary expression of the shape derivative.
Finally, in Section \ref{sec:numerics} we explain the numerical algorithm based on the distributed shape derivative and we present a set of results showing the efficiency of the approach.
Introducing an error measure for the reconstruction, we also discuss the quality of reconstructions depending on the number of point measurements, applied boundary currents and noise level.
More details about the averaged adjoint method are given in an appendix for the sake of completeness.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Mixed boundary value problems in $W^1_p$}\label{sec:prel1}
In this section we recall the framework introduced in \cite{MR990595} for obtaining a $W^1_p$-estimate for solutions to
mixed boundary value problems for second order elliptic PDEs.
\begin{definition}[see \cite{MR990595,MR2551487}]\label{def1b}
Let $\mathcal{D}\subset\mathds{R}^2$ and $\Gamma\subset\partial\mathcal{D}$ be given. We say that $\mathcal{D}\cup\Gamma$ is
regular (in the sense of Gr\"oger) if $\mathcal{D}$ is a bounded Lipschitz domain, $\Gamma$ is a relatively
open part of the boundary ${\partial \D}$, $\Gamma_0 := {\partial \D}\setminus\Gamma$ has positive measure, and $\Gamma_0$ is a finite
union of closed and nondegenerate (i.e., not reduced to a single point) curved pieces of ${\partial \D}$.
\end{definition}
Let $\mathcal{D},\Gamma$ and $\Gamma_0$ be as in Definition \ref{def1b} and define for $d\geq 1$:
$$ \C^\infty_\Gamma (\mathcal{D},\mathds{R}^d): = \{ f|_{\mathcal{D}}\ | \ f\in\C^\infty(\mathds{R}^2,\mathds{R}^d),\, \operatorname{supp} f\cap\Gamma_0 = \emptyset\}.$$
In the scalar case, i.e. for $d=1$, we write $\C^\infty_\Gamma (\mathcal{D})$ instead of $\C^\infty_\Gamma (\mathcal{D},\mathds{R})$ and use a similar notation for the other function spaces.
We denote by $W^1_p(\mathcal{D})$, $1\le p \le \infty$, the Sobolev space of functions in $L^p(\mathcal{D})$ whose weak derivatives also belong to $L^p(\mathcal{D})$.
For $p,p'\geq 1$ satisfying $\frac{1}{p} + \frac{1}{p'} = 1$, we define the Sobolev space
$$W^1_{\Gamma,p}(\mathcal{D},\mathds{R}^d) := \overline{\C^\infty_\Gamma (\mathcal{D},\mathds{R}^d)}^{W^1_p}, $$
where $W^1_p$ stands for the usual norm in $W^{1,p}(\mathcal{D},\mathds{R}^d)$, and the dual space
$$W^{-1}_{\Gamma,p}(\mathcal{D},\mathds{R}^d) := (W^1_{\Gamma,p'}(\mathcal{D},\mathds{R}^d))^*. $$
We use the notation $\text{id}$ for the identity function in $\mathds{R}^2$, and $\mathds{I}$ for the $2\times 2$ identity matrix.
Let $2\leq q <\infty$ and $1< q'\leq 2$ satisfying $\frac{1}{q} + \frac{1}{q'} = 1$.
Let $\mathds{A} \in L^\infty(\mathcal{D})^{2\times 2}$ be a matrix-valued function satisfying for all $\eta,\theta\in \bbR^2$ and
$x\in \overline \mathcal{D}$:
\begin{align}\label{assump2}
\mathds{A}(x)\theta\cdot \theta &\ge m|\theta|^2 \text{ and } |\mathds{A}(x)\eta| \le M|\eta|, \text{ with } m>0 \text{ and }M>0,
\end{align}
where $|\cdot|$ denotes the Euclidean norm and $m\leq M$.
Introduce
\begin{align*}
a: W^1_{\Gamma,q}(\mathcal{D})\times W^1_{\Gamma,q'}(\mathcal{D}) & \to\mathds{R} \\
(v,w)& \mapsto \int_{\mathcal{D}} \mathds{A}\nabla v\cdot \nabla w.
\end{align*}
Then, define the corresponding operator
\begin{align}\label{Aq}
\begin{split}
\mathcal{A}_q: W^1_{\Gamma,q}(\mathcal{D}) &\to W^{-1}_{\Gamma,q}(\mathcal{D}), \\
v & \mapsto \mathcal{A}_q v := a(v,\cdot).
\end{split}
\end{align}
Let $\mathcal{P}$ be defined by, for $u,v\in W^1_{\Gamma,2}(\mathcal{D})$,
$$\langle \mathcal{P} u,v \rangle := \int_{\mathcal{D}} \nabla u\cdot \nabla v + uv.$$
By H\"older's inequality it follows that $\mathcal{P}: W^1_{\Gamma,p}(\mathcal{D})\to W^{-1}_{\Gamma,p}(\mathcal{D})$ is a well-defined and continuous operator for all $p\geq 2$.
We also introduce the constant
$$ M_p :=\sup\{ \| v\|_{W^1_p(\mathcal{D})}\ | \ v\in W^1_{\Gamma,p}(\mathcal{D}), \|\mathcal{P} v\|_{W^{-1}_{\Gamma,p}(\mathcal{D})} \leq 1 \}.$$
It is easily verified that $M_2 =1$.
Now we define the set of regular domains in the sense of Gr\"oger
$$ \Xi : = \{ (\mathcal{D},\Gamma)\ |\ \mathcal{D}\subset\mathds{R}^2,\Gamma\subset{\partial \D}, \text{ and }\mathcal{D}\cup\Gamma\text{ is regular}\}.$$
\begin{definition}
Denote by $R_q$, $2\leq q<\infty$, the set of regular domains $(\mathcal{D},\Gamma)\in\Xi$ for which $\mathcal{P}$ maps $W^1_{\Gamma,q}(\mathcal{D})$ onto $W^{-1}_{\Gamma,q}(\mathcal{D})$.
\end{definition}
Then we have the following results from \cite[Lemma 1]{a_GRRE_1989a}.
\begin{lemma}\label{lemma01}
Let $(\mathcal{D},\Gamma)\in R_q$ for some $q>2$.
Then $(\mathcal{D},\Gamma)\in R_p$ for $2\leq p\leq q$ and $M_p \leq M_q^\theta$ if $\frac{1}{p} = \frac{1-\theta}{2} + \frac{\theta}{q}$.
\end{lemma}
We can now state an adapted version of \cite[Theorem 1]{a_GRRE_1989a} which plays a key role in our investigations.
\begin{theorem}{\cite[Theorem 1]{a_GRRE_1989a}}\label{thm01}
Let $(\mathcal{D},\Gamma)\in R_{q_0}$ for some $q_0>2$.
Suppose that $\mathds{A}$ satisfies assumptions \eqref{assump2} for $q_0$ and let $\mathcal{A}_q$ be defined by \eqref{Aq}.
Then $\mathcal{A}_q: W^1_{\Gamma,q}(\mathcal{D}) \to W^{-1}_{\Gamma,q}(\mathcal{D})$ is an isomorphism provided that $q\in [2,q_0]$ and $M_q k<1$, where $k:= (1-m^2/M^2)^{1/2}$, and
\begin{equation}\label{Aq_iso}
\| \mathcal{A}_q^{-1}\|_{L(W^{-1}_{\Gamma,q}(\mathcal{D}),W^1_{\Gamma,q}(\mathcal{D}))} \leq c_q,
\end{equation}
where $c_q: = mM^{-2} M_q (1-M_q k)^{-1}$. Finally, $M_q k <1$ is satisfied if
$$\frac{1}{q}> \frac{1}{2} - \left(\frac{1}{2} - \frac{1}{q_0} \right) \frac{|\log k|}{\log M_{q_0}} . $$
\end{theorem}
\begin{remark}\label{rem2.5}
\begin{itemize}
\item If $(\mathcal{D},\Gamma)\in R_q$, then $M_q<\infty$.
\item For every regular $(\mathcal{D},\Gamma)\in\Xi$, there exists a $q_0>2$ so that $(\mathcal{D},\Gamma)\in R_{q_0}$; cf.\ \cite[Theorem 3]{MR990595}.
\item For sufficiently small $q>2$, the constant $c_q$ in \eqref{Aq_iso} can be chosen to be independent of $q$; see \cite[Corollary 5]{MR3584578}.
\end{itemize}
\end{remark}
We now explain how a particular case of the theory described above can be applied to our problem.
Let $\sigma\in L^\infty(\mathcal{D})$ satisfying pointwise a.e. $\overline{\sigma} \geq \sigma\geq \underline{\sigma}>0$ where $\overline{\sigma},\underline{\sigma}>0$ are constants. It is clear that $\mathds{A}:= \sigma \mathds{I}\in L^\infty(\mathcal{D})^{2\times 2}$ satisfies assumptions~\eqref{assump2}.
In view of Remark \ref{rem2.5}, there exists $q_0>2$ such that $(\mathcal{D},\Gamma)\in R_{q_0}$.
For $q\in [2,q_0]$, $f\in L^q(\mathcal{D})$ and $g\in L^\infty(\partial\mathcal{D})$, the functional
\[
\langle F,v\rangle := \int_\mathcal{D} fv + \int_{\Gamma} gv , \quad v\in W_{\Gamma,q'}^1(\mathcal{D})
\]
defines an element in $(W_{\Gamma,q'}^1(\mathcal{D}))^* = W^{-1}_{\Gamma,q}(\mathcal{D})$.
Therefore, it follows from Theorem~\ref{thm01} that there is a unique $u\in W^1_{\Gamma,q}(\mathcal{D})$ solution to
\[
\int_{\mathcal{D}} \sigma \nabla u\cdot \nabla v = \int_\mathcal{D} fv + \int_{\Gamma} gv \ \text{ for all } v\in W_{\Gamma,q'}^1(\mathcal{D}),
\]
provided $q\in ]2,q_0]$ is sufficiently close to $2$.
\subsection{Shape optimization framework}\label{sec:prel2}
In this section, we recall basic notions about first and second order Eulerian shape derivatives.
For $k\geq 0$ we define
\begin{align*}
\C^k_c(\mathcal{D},\mathds{R}^2) &:=\{V\in \C^k(\mathcal{D},\mathds{R}^2)\ |\ V\text{ has compact support in } \mathcal{D}\},
\end{align*}
and $\C^\infty_c(\mathcal{D},\mathds{R}^2)$ similarly, and we equip these spaces with their usual topologies; see \cite[1.56, pp. 19-20]{MR2424078}.
Consider a vector field $V\in \C^1_c(\mathcal{D},\mathds{R}^2)$ and the associated flow
$T_t:\mathcal{D}\rightarrow \mathcal{D}$, $t\in [0,t_0]$, defined for each $x_0\in \mathcal{D}$ as $T_t(x_0):=x(t)$, where $x:[0,t_0]\rightarrow \mathds{R}^2$ solves
\begin{align}\label{Vxt}
\begin{split}
\dot{x}(t)&= V(x(t)) \quad \text{ for } t\in [0,t_0],\quad x(0) =x_0.
\end{split}
\end{align}
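Numerically, $T_t(x_0)$ can be evaluated by integrating \eqref{Vxt} with any standard ODE solver; a minimal Python sketch (purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def flow(V, x0, t):
    """Evaluate T_t(x0) by integrating dx/ds = V(x) from s = 0 to s = t."""
    sol = solve_ivp(lambda s, x: V(x), (0.0, t), np.asarray(x0, float),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# example: the rotation field V(x) = (-x_2, x_1); T_t rotates by angle t
V = lambda x: np.array([-x[1], x[0]])
print(flow(V, [1.0, 0.0], np.pi / 2))   # approximately (0, 1)
\end{verbatim}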
Let $\mathds{P}(\D)$ be the set of all open sets compactly contained in $\mathcal{D}$, where $\mathcal{D}\subset \mathds{R}^2$ is assumed to be open and bounded.
For $\Omega\in \mathds{P}(\D)$,
we consider the family of perturbed domains
\begin{equation}\label{domain}
\Omega_t := T_t(\Omega).
\end{equation}
\begin{definition}[Shape derivative]\label{def1}
Let $J : \mathds{P}(\D) \rightarrow \mathds{R}$ be a shape functional.
\begin{itemize}
\item[(i)] The Eulerian semiderivative of $J$ at $\Omega$ in direction $V \in \C^1_c(\mathcal{D},\mathds{R}^2)$
is defined by, when the limit exists,
\begin{equation}
d J(\Omega)(V):= \lim_{t \searrow 0}\frac{J(\Omega_t)-J(\Omega)}{t}.
\end{equation}
\item[(ii)] $J$ is said to be \textit{shape differentiable} at $\Omega$ if it has a Eulerian semiderivative at $\Omega$ for all $V \in \C^\infty_c(\mathcal{D},\mathds{R}^2)$ and the mapping
\begin{align*}
d J(\Omega): \C^\infty_c(\mathcal{D},\mathds{R}^2) & \to \mathds{R},\; V \mapsto d J(\Omega)(V)
\end{align*}
is linear and continuous, in which case $d J(\Omega)(V)$ is called the \textit{Eulerian shape derivative} at $\Omega$, or simply \textit{shape derivative} at $\Omega$.
\end{itemize}
\end{definition}
\section{EIT with point measurements}\label{sec:EIT_point}
\subsection{Problem formulation}
Let $\mathcal{D}\cup\Gamma\in \Xi$ (see Definition~\ref{def1b}) and $\Omega\in\mathds{P}(\D)$.
Denote $\Omega^c: = \mathcal{D}\setminus \Omega$ and $\Omega^c_t: = T_t(\Omega^c)$.
In this section, $n$ denotes the outward unit normal vector to $\Omega$.
Introduce the conductivity $\sigma_\Omega = \sigma_1\chi_{\Omega} + \sigma_0\chi_{\Omega^c}$ with $(\sigma_0,\sigma_1)$ positive scalars, $\sigma_1 > \sigma_0$, and $f_\Omega =f_1\chi_{\Omega} + f_0\chi_{\Omega^c}$ where $f_0,f_1\in H^1(\mathcal{D})$.
Here, $\chi_\Omega$ denotes the characteristic function of $\Omega$.
Let $p>2$ and $1< p'< 2$ satisfying $\frac{1}{p} + \frac{1}{p'} = 1$.
In view of the development in Section \ref{sec:prel1}, $u\in W^1_{\Gamma,p}(\mathcal{D})$ is the solution to the mixed boundary value problem
\begin{equation}\label{E:var_form}
\int_\mathcal{D} \sigma_\Omega \nabla u \cdot \nabla v = \int_\mathcal{D} f_\Omega v + \int_{\Gamma} gv \quad \mbox{ for all }v\in W^{1}_{\Gamma,p'}(\mathcal{D}),
\end{equation}
with $g\in L^\infty({\partial \D})$. Observe that $u$ depends on $\Omega$ through $\sigma_\Omega$.
In EIT, $g$ represents an input, in this case an electric current applied on the boundary, and $u$ is the corresponding potential.
Then, measurements $h$ of the potential on a subset $\Gamma_h$ of $\overline{\mathcal{D}}$ are performed.
Given the Cauchy data $(g,h)$, the task is to find the best possible approximation of the unknown shape $\Omega$.
To obtain a better reconstruction, we apply several input currents $g_i$, $i = 1,\dots,I,$ and the corresponding measurements are denoted by $h_i$.
Assuming $(\sigma_0,\sigma_1)$ are known and denoting $u_i$ the solution of \eqref{E:var_form} with $g=g_i$,
the EIT problem becomes then:
\begin{align}\label{EIT-SO}
\begin{split}
\mbox{given }& \mbox{$\{(g_i,h_i)\}_{i=1}^I$, find $\Omega$ such that $u_i = h_i$ on $\Gamma_h$ for $i=1,\dots,I$.}
\end{split}
\end{align}
However, \eqref{EIT-SO} is idealized since in practice the measurements $h_i$ are corrupted by noise; therefore,
we cannot expect $u_i = h_i$ to be exactly achievable, but rather that $|u_i - h_i|$ should be minimized.
When $\Gamma_h$ is a manifold of one or two dimensions, a common approach is to minimize an appropriate cost functional such as
\begin{align}
\label{eit3.2} J(\Omega) &= \frac{1}{2}\sum_{i=1}^I\int_{\Gamma_h} (u _i - h_i)^2.
\end{align}
Another popular approach is to use a Kohn-Vogelius type functional; see \cite{MR3535238}.
In this paper we are interested in the case where $\Gamma_h = \{x_k\}_{k=1}^K\subset \overline{\mathcal{D}}$ is a finite set of points, i.e., we only have a finite collection of point measurements.
In this case, a Kohn-Vogelius type functional does not seem appropriate since we would need $h$ on all of $\partial\mathcal{D}$ for this approach.
The functional \eqref{eit3.2} on the other hand can be adapted to the case $\Gamma_h = \{x_k\}_{k=1}^K$ in the following way.
For $i=1,\dots,I$, assume that measurements $\{h_i(x_k)\}_{k=1}^K\in\mathds{R}^K$ are available.
For $\Omega\in\mathds{P}(\D)$, we consider the shape functional
\begin{equation}\label{E:cost_full}
J(\Omega) := \frac{1}{2}\sum_{i=1}^I \mu_i\sum_{k=1}^K \delta_{x_k}((u_i - h_i)^2) = \frac{1}{2}\sum_{i=1}^I \mu_i\sum_{k=1}^K (u_i(x_k)- h_i(x_k))^2,
\end{equation}
where $\delta_{x_k}: \C(\overline \mathcal{D})\to \mathds{R}$ is the Dirac measure concentrated at $x_k$ and $\mu_i$ are given constants.
Note that in view of the continuous embedding $W^1_{\Gamma,p}(\mathcal{D})\subset \C(\overline \mathcal{D})$ for $p>2$ in two dimensions, the point evaluation of $u_i$ in \eqref{E:cost_full} is well-defined.
Without loss of generality, we will compute the shape derivative for the simpler case $I=1$ and $\mu_1 = 1$, in which case the cost functional becomes
\begin{equation}\label{E:cost}
J(\Omega) = \frac{1}{2}\sum_{k=1}^K \delta_{x_k}((u - h)^2) = \frac{1}{2}\sum_{k=1}^K (u(x_k)- h(x_k))^2.
\end{equation}
The formula of the shape derivative in the general case \eqref{E:cost_full} can then be obtained by summation.
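Since $W^1_{\Gamma,p}(\mathcal{D})\subset \C(\overline{\mathcal{D}})$ for $p>2$, the point evaluations in \eqref{E:cost} are well-defined and can be computed directly from, e.g., a piecewise linear finite element approximation of $u$; a Python sketch (the triangulation, nodal values and names are illustrative assumptions):
\begin{verbatim}
import numpy as np
from matplotlib.tri import Triangulation, LinearTriInterpolator

def misfit(nodes, tris, u, xs, h):
    """J(Omega) = 0.5 * sum_k (u(x_k) - h(x_k))^2 for a P1 solution u.

    nodes: (n, 2) vertex coordinates, tris: (m, 3) connectivity,
    u: nodal values, xs: (K, 2) measurement points, h: (K,) data.
    """
    tri = Triangulation(nodes[:, 0], nodes[:, 1], tris)
    interp = LinearTriInterpolator(tri, u)
    uk = np.array([float(interp(p[0], p[1])) for p in xs])
    return 0.5 * np.sum((uk - h) ** 2)
\end{verbatim}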
\begin{figure}
\begin{center}
\input{domain_om.tex}
\end{center}
\caption{Partition $\mathcal{D}=\Omega\cup \Omega^c$. }\label{partition}
\end{figure}
\subsection{Shape derivative}
For $\Omega\in\mathds{P}(\D)$ and $V \in \C^1_c(\mathcal{D},\mathds{R}^2)$, define $\Omega_t$ as in \eqref{domain}.
Since $V$ has compact support in $\mathcal{D}$, we have $\Omega_t\subset\mathcal{D}$ for all $t\in [0,t_0]$.
Now we consider the Lagrangian $\mathcal{L}:\mathds{P}(\D)\times W^1_{\Gamma,p}(\mathcal{D})\times W^1_{\Gamma,p'}(\mathcal{D})\rightarrow \mathds{R}$ associated with the cost functional \eqref{E:cost} and the PDE constraint \eqref{E:var_form} defined by
\begin{align}
\label{G}
\begin{split}
\mathcal{L}(\Omega,\varphi,\psi) & := \frac{1}{2}\sum_{k=1}^K (\varphi(x_k) -h(x_k))^2
+ \int_\mathcal{D} \sigma_{\Omega} \nabla \varphi\cdot\nabla \psi - f_\Omega\psi - \int_{\Gamma} g\psi.
\end{split}
\end{align}
Aiming at applying the averaged adjoint method \cite{a_ST_2015a}, we introduce the {\it shape-Lagrangian} $G:[0,t_0]\times W^1_{\Gamma,p}(\mathcal{D})\times W^1_{\Gamma,p'}(\mathcal{D}) \rightarrow \mathds{R}$ as
\begin{align*}
G(t,\varphi,\psi) &:= \mathcal{L}(\Omega_t,\varphi\circ T_t^{-1},\psi\circ T_t^{-1})\\
&= \frac{1}{2}\sum_{k=1}^K (\varphi\circ T_t^{-1} - h)^2(x_k) + \int_{\mathcal{D}} \sigma_{\Omega_t}\nabla(\varphi\circ T_t^{-1})\cdot \nabla(\psi\circ T_t^{-1}) -f_{\Omega_t} \psi\circ T_t^{-1} - \int_{\Gamma} g\psi\circ T_t^{-1},
\end{align*}
see Appendix 1 for a detailed explanation.
Notice that for all $p\ge 1$ we have $\varphi \in W^1_{\Gamma,p}(\mathcal{D})$ if and only if $\varphi\circ T_t \in W^1_{\Gamma,p}(\mathcal{D})$; see \cite[Theorem 2.2.2, p. 52]{b_ZI_1989a}.
Observe that
\begin{align*}
\sigma_{\Omega_t}\circ T_t
& =\sigma_1\chi_{\Omega_t}\circ T_t + \sigma_0\chi_{\Omega^c_t}\circ T_t
= \sigma_1\chi_{\Omega} + \sigma_0\chi_{\Omega^c}= \sigma_\Omega,\\
f_{\Omega_t}\circ T_t
& =f_1\circ T_t\, \chi_{\Omega_t}\circ T_t
+f_0\circ T_t\, \chi_{\Omega^c_t}\circ T_t
=f_1\circ T_t\, \chi_{\Omega}
+f_0\circ T_t\, \chi_{\Omega^c},
\end{align*}
and, taking into account the Jacobian $\Det(D T_t)$ produced by the change of variables below, we introduce the function $f^t := \Det(D T_t)\left(f_1\circ T_t\, \chi_{\Omega} +f_0\circ T_t\, \chi_{\Omega^c}\right)$.
Using the fact that $T_t = \text{id}$ on $\partial\mathcal{D}$ and proceeding with the change of variables $x\mapsto T_t(x)$ inside the integrals in $G(t,\varphi,\psi)$, we obtain using the chain rule
\begin{equation}\label{G_reduced}
G(t,\varphi,\psi) = \frac{1}{2}\sum_{k=1}^K (\varphi\circ T_t^{-1} - h)^2(x_k) + \int_{\mathcal{D}} \sigma_\Omega A(t)\nabla\varphi\cdot \nabla\psi - f^t\psi - \int_{\Gamma} g\psi,
\end{equation}
where $A(t):= \Det(D T_t) D T_t^{-1}D T_t^{-\mathsf{T}}$.
For $t\in [0,t_0]$, let us define the perturbation $\mathcal{A}^t_q$ of $\mathcal{A}_q$ defined in \eqref{Aq} as follows:
\begin{align}\label{Aqt}
\begin{split}
\mathcal{A}^t_q: W^1_{\Gamma,q}(\mathcal{D}) &\to W^{-1}_{\Gamma,q}(\mathcal{D}), \\
v & \mapsto \left(w\mapsto \langle\mathcal{A}^t_q v,w\rangle: = \int_{\mathcal{D}} \sigma_\Omega A(t)\nabla v\cdot \nabla w \right).
\end{split}
\end{align}
By continuity of $t\mapsto A(t):[0,t_0] \to C(\overline{\mathcal{D}})^{2\times 2}$, for every $\epsilon>0$ there exists $\delta>0$ such that the following estimates hold (see \cite[Lemma 13]{MR3584578}):
\begin{align}
\label{A01} A(t)(x)\eta\cdot\eta & \geq (1-\epsilon) |\eta|^2\quad \text{ for all }\eta\in\mathds{R}^2 \text{ and all } (t,x)\in [0,\delta]\times\overline{\mathcal{D}},\\
\label{A02} |A(t)(x)| & \leq 1+\epsilon\quad \text{ for all } (t,x)\in [0,\delta]\times\overline{\mathcal{D}}.
\end{align}
The continuity properties \eqref{A01}-\eqref{A02} imply the following
perturbed version of Theorem~\ref{thm01}.
\begin{lemma}\label{lemma02}
For each $(\mathcal{D},\Gamma)\in \Xi$ there exists $q_0>2$, $\epsilon>0$ and $\delta>0$ so that for all $t\in [0,\delta]$ and all $q\in [2,q_0]$ satisfying $M_q k < 1$, where $k:= (1-m^2/M^2)^{1/2}<1$ with $m = \underline{\sigma}(1-\epsilon)$ and $M = \overline{\sigma}(1+\epsilon)$, the mapping $\mathcal{A}^t_q: W^1_{\Gamma,q}(\mathcal{D}) \to W^{-1}_{\Gamma,q}(\mathcal{D})$ defined by \eqref{Aqt} is an isomorphism.
Moreover, we have for all $t\in [0,\delta]$ that
\begin{equation}\label{Aq_iso_t}
\| (\mathcal{A}^t_q)^{-1} \|_{L(W^{-1}_{\Gamma,q}(\mathcal{D}),W^1_{\Gamma,q}(\mathcal{D}))} \leq c_q ,
\end{equation}
where $c_q: = mM^{-2} M_q (1-M_q k)^{-1}$ is independent of $t$.
Finally, $M_q k <1$ is satisfied if
\[
\frac{1}{q}> \frac{1}{2} - \left(\frac{1}{2} - \frac{1}{q_0} \right) \frac{|\log k|}{\log M_{q_0}} .
\]
\end{lemma}
\begin{proof}
We have $\overline{\sigma} \geq \sigma\geq \underline{\sigma}>0$ with $ \underline{\sigma}: = \min\{\sigma_0,\sigma_1\}$, $\overline{\sigma}: = \max\{\sigma_0,\sigma_1\}$ and $\underline{\sigma}< \overline{\sigma}$.
Let us choose $\epsilon<1$ and $\delta$ such that \eqref{A01}-\eqref{A02} are satisfied, and let $t\in [0,\delta]$.
In view of \eqref{A01},\eqref{A02} it immediately follows that $\mathds{A} = \sigma_\Omega A(t)\in L^\infty(\mathcal{D})^{2\times 2}$ satisfies assumptions~\eqref{assump2} with $m := \underline{\sigma}(1-\epsilon)$ and $M:= \overline\sigma(1+\epsilon)$.
Hence, the result follows directly from Theorem \ref{thm01}, since $M$ and $m$ are independent of $t$.
\end{proof}
The main statement of this section is the following theorem.
\begin{theorem}[distributed shape derivative]\label{T:shape}
Let $\mathcal{D}\cup\Gamma\subset\mathds{R}^2$ be a regular domain in the sense of Gr\"oger and $\Omega\in\mathds{P}(\D)$.
Assume that $\Gamma_h\cap{\partial \Om} =\emptyset$ and $f_\Omega =f_1\chi_{\Omega}+f_0\chi_{\Omega^c}$ where $f_0,f_1\in H^1(\mathcal{D})$.
Then the shape derivative of $J$ at $\Omega$ in direction $V\in\C^1_c(\mathcal{D},\mathds{R}^2)$
is given by
\begin{equation}
\label{T:tensor_shape_deriv}
dJ(\Omega)(V)
= {\mathbf{S}}_0(V) + \int_{\mathcal{D}} {\mathbf{S}}_1: D V,
\end{equation}
where ${\mathbf{S}}_1\in L^1(\mathcal{D})^{2\times 2}$ and ${\mathbf{S}}_0\in (\C^0(\overline{\mathcal{D}},\mathds{R}^2))^*$ are defined by
\begin{align}
\label{S1_1}\mathbf{S}_1 & = - 2 \sigma_\Omega \nabla u\odot\nabla p + (\sigma_\Omega\nabla u\cdot\nabla p -fp)\mathds{I},\\
\mathbf{S}_0(V) & = \mathbf{S}_0^s(V) + \int_\mathcal{D} \mathbf{S}_0^r \cdot V\\
\mathbf{S}_0^r & = - p\widetilde{\nabla} f \\
\mathbf{S}_0^s & = - \sum_{k=1}^K \bigg((u - h)\nabla u\bigg)(x_k)\delta_{x_k},
\end{align}
where $\nabla u\odot\nabla p := (\nabla u\otimes\nabla p + \nabla p\otimes\nabla u)/2$,
$\widetilde{\nabla} f := \nabla f_1 \, \chi_{\Omega} + \nabla f_0\, \chi_{\Omega^c}$ and $\mathbf{S}_0^r\in L^1(\mathcal{D},\mathds{R}^2)$.
Also, there exists $q>2$ such that the adjoint $p\in W^1_{\Gamma,q'}(\mathcal{D}) $ is the solution to
\begin{equation}
\int_\mathcal{D} \sigma_\Omega \nabla p \cdot \nabla \varphi = -\sum_{k=1}^K (u(x_k) -h(x_k))\varphi(x_k) \quad \mbox{ for all } \varphi\in W^1_{\Gamma,q}(\mathcal{D}).
\end{equation}
\end{theorem}
\begin{proof}
We employ the averaged adjoint approach of \cite{a_ST_2015a}; we refer to Appendix 1 for details about the method.
We closely follow the argumentation of \cite{MR3584578}.
Let us define the perturbed state $u^t\in W^1_{\Gamma,q}(\mathcal{D})$ as the solution of
\begin{equation}\label{eq:perturbed_state}
\int_\mathcal{D} \sigma_\Omega A(t) \nabla u^t \cdot \nabla \varphi
= \int_\mathcal{D} f^t \varphi
+ \int_{\Gamma} g\varphi \quad \text{ for all } \varphi \in W^1_{\Gamma,q'}(\mathcal{D}).
\end{equation}
The mapping $F_t : W^1_{\Gamma,q'}(\mathcal{D})\to \bbR$ defined by
\[
\langle F_t,v \rangle := \int_\mathcal{D} f^t v + \int_{\Gamma} gv \quad \text{ for } v\in W^1_{\Gamma,q'}(\mathcal{D})
\]
is well-defined and continuous.
Consequently, thanks to Lemma~\ref{lemma02} there is a unique solution to \eqref{eq:perturbed_state} in $W^1_{\Gamma,q}(\mathcal{D})$ for $q>2$ sufficiently close to $2$ and all $t\in[0,\delta]$.
Using \eqref{Aq_iso_t} we get
\begin{align*}
\| u^t\|_{W^1_{\Gamma,q}(\mathcal{D})} \leq c_q \| F_t \|_{W^{-1}_{\Gamma,q}(\mathcal{D})}
\leq C(\| f^t \|_{L^2(\mathcal{D})} + \| g \|_{L^\infty(\partial\mathcal{D})}).
\end{align*}
It follows that for some constant $C$ independent of $t$, we have
\begin{align}\label{006}
\| u^t\|_{W^1_{\Gamma,q}(\mathcal{D})} \leq C.
\end{align}
Following \eqref{averated_}, the {\it averaged adjoint equation} reads: find $p^t\in W^1_{\Gamma,q'}(\mathcal{D})$, such that
\begin{equation}\label{E:averaged1}
\int_0^1 d_\varphi G(t,su^t + (1-s)u^0,p^t;\varphi)\,ds =0 \quad \text{ for all } \varphi\in W^1_{\Gamma,q}(\mathcal{D}),
\end{equation}
which is equivalent to, using the fact that $A(t)^\mathsf{T} = A(t)$,
\begin{align}\label{E:averaged2}
\begin{split}
& \int_\mathcal{D} \sigma_\Omega A(t)\nabla p^t \cdot \nabla \varphi \\
& \hspace{1cm} = -\frac{1}{2}\sum_{k=1}^K (u^t\circ T_t^{-1}(x_k) + u^0\circ T_t^{-1}(x_k) - 2h(x_k))\varphi\circ T_t^{-1}(x_k) \quad \text{ for all } \varphi\in W^1_{\Gamma,q}(\mathcal{D}).
\end{split}
\end{align}
Let us introduce the adjoint operator
\begin{align}\label{Aqt_adjoint}
\begin{split}
(\mathcal{A}^t_q)^*: W^{-1}_{\Gamma,q}(\mathcal{D})^* = W^1_{\Gamma,q'}(\mathcal{D}) &\to W^1_{\Gamma,q}(\mathcal{D})^* = W^{-1}_{\Gamma,q'}(\mathcal{D}), \\
w & \mapsto \left(v\mapsto \langle(\mathcal{A}^t_q)^* w,v\rangle: = \langle w, \mathcal{A}^t_q v\rangle \right).
\end{split}
\end{align}
Using \eqref{Aqt} and the fact that $A(t)^\mathsf{T} = A(t)$ we get for $w\in W^1_{\Gamma,q'}(\mathcal{D})$ and $v\in W^1_{\Gamma,q}(\mathcal{D})$,
\[
\langle(\mathcal{A}^t_q)^* w,v\rangle = \int_{\mathcal{D}} \sigma_\Omega A(t)\nabla w\cdot \nabla v.
\]
Now, in view of Lemma \ref{lemma02} there exists $q>2$ and $\delta>0$ such that the mapping $\mathcal{A}^t_q: W^1_{\Gamma,q}(\mathcal{D}) \to W^{-1}_{\Gamma,q}(\mathcal{D})$ is an isomorphism for all $t\in [0,\delta]$.
Thus, the adjoint mapping $(\mathcal{A}^t_q)^*: W^1_{\Gamma,q'}(\mathcal{D}) \to W^{-1}_{\Gamma,q'}(\mathcal{D})$ is also an isomorphism.
Now the functional $R_t: W^1_{\Gamma,q}(\mathcal{D})\to \bbR$ defined by
\[
\langle R_t ,v \rangle := -\frac{1}{2}\sum_{k=1}^K (u^t\circ T_t^{-1}(x_k) + u^0\circ T_t^{-1}(x_k) - 2h(x_k))v\circ T_t^{-1}(x_k) \quad \text{ for } v\in W^1_{\Gamma,q}(\mathcal{D})
\]
is well-defined and continuous.
Therefore, since $(\mathcal{A}^t_q)^*$ is an isomorphism, the averaged adjoint equation $(\mathcal{A}^t_q)^* p^t = R_t$ has a unique solution $p^t\in W_{\Gamma,q'}^1(\mathcal{D})$.
Using the continuous embedding of $W^1_{\Gamma,q}(\mathcal{D})$ into the space of continuous functions $\C(\overline{\mathcal{D}})$ for $q>2$ in two dimensions, it also follows that
\begin{align*}
\|p^t\|_{W^1_{\Gamma,q'}(\mathcal{D})}
& \le C \max_{k\in\{1,\dots,K\}} |(u^t\circ T_t^{-1} + u^0\circ T_t^{-1}-2h )(x_k)|\\
& \le C \left(\|u^t\|_{W^1_{\Gamma,q}(\mathcal{D})} + \|u^0\|_{W^1_{\Gamma,q}(\mathcal{D})} +\max_{k\in\{1,\dots,K\}}|h(x_k)| \right) .
\end{align*}
Then using \eqref{006} we get, for some constant $C$ independent of $t$,
\begin{align}\label{007}
\|p^t\|_{W^1_{\Gamma,q'}(\mathcal{D})}\leq C.
\end{align}
With the estimate \eqref{007} we readily verify that $p^t\rightharpoonup p^0$ weakly in $W^1_{\Gamma,q'}(\mathcal{D})$ as $t\searrow 0$.
Using \eqref{eq:main_averaged} we have
\begin{equation}
\frac{G(t,u^t,p^t)-G(0,u^0,p^0)}{t}
= \frac{G(t,u^0,p^t)-G(0,u^0,p^t)}{t}
\end{equation}
and then in view of \eqref{G_reduced}
\begin{equation}\label{E:rhs_diff_G}
\begin{split}
\frac{G(t,u^0,p^t)-G(0,u^0,p^t)}{t} & = \frac{1}{2} \sum_{k=1}^K \frac{(u^0\circ T_t^{-1} - h)^2(x_k) - (u^0 - h)^2(x_k) }{t}\\
&\qquad + \int_\mathcal{D} \sigma_\Omega \frac{A(t)-\mathds{I}}{t} \nabla u^0\cdot \nabla p^t - \frac{f^t - f^0}{t} p^t.
\end{split}
\end{equation}
Using the assumption $\Gamma_h\cap{\partial \Om} =\emptyset$, we have for all $k=1,\dots,K$ that $x_k$ belongs either to $\Omega$, to $\mathcal{D}\setminus\overline{\Omega}$ or to $\partial\mathcal{D}$.
Assume first that $x_k$ belongs either to $\Omega$ or to $\mathcal{D}\setminus\overline{\Omega}$.
Since $\sigma_\Omega$ is constant in $\Omega$ and in $\mathcal{D}\setminus\overline{\Omega}$ and $f_0,f_1\in H^1(\mathcal{D})$, interior elliptic regularity yields $u\in\C^1(B(x_k,r_k))$ for sufficiently small $r_k$, where $B(x_k,r_k)$ denotes the open ball of center $x_k$ and radius $r_k$.
Thus, the first term on the right hand side of \eqref{E:rhs_diff_G} converges as $t\searrow 0$.
Now if $x_k\in\partial\mathcal{D}$, then $T_t(x_k)=x_k$ due to $V\in\C^1_c(\mathcal{D},\mathds{R}^2)$, and the first term on the right hand side of \eqref{E:rhs_diff_G} is equal to zero, so we obtain the same formula as in the case $x_k\in\mathcal{D}\setminus\overline{\Omega}$.
Also, using $V\in\C^1_c(\mathcal{D},\mathds{R}^2)$ we have the following convergence properties (see \cite[Lem. 3.1]{a_KAKUST_2018a} and \cite[Lemma 2.16]{phdKevin})
\begin{align}
\frac{A(t)-\mathds{I}}{t} \rightarrow A'(0) & := \Div(V)\,\mathds{I} - D V - D V^\mathsf{T} \quad \text{ strongly in } \C(\overline \mathcal{D})^{2\times 2}, \\
\frac{f^t - f^0}{t} \rightarrow \widetilde f'(0) & := f_\Omega\Div(V) + \widetilde\nabla f\cdot V \quad \text{ strongly in } L^2(\mathcal{D}),
\end{align}
and we conclude that the right hand side of \eqref{E:rhs_diff_G} converges to
\begin{equation}\label{E:derivative_proof}
- \sum_{k=1}^K (u^0 - h )(x_k) \nabla u^0 (x_k) \cdot V(x_k)
+ \int_\mathcal{D} \sigma_\Omega A'(0)\nabla u^0\cdot \nabla p^0 - \widetilde f'(0) p^0 .
\end{equation}
In view of \eqref{G_reduced} this shows
\begin{equation}
\lim_{t\searrow 0}\frac{G(t,u^t,p^t)-G(0,u^0,p^0)}{t} = \partial_t G(0,u^0,p^0),
\end{equation}
which shows that Assumption \ref{H1} is satisfied.
Using tensor calculus, it is then readily verified that \eqref{E:derivative_proof} can be brought into expression \eqref{T:tensor_shape_deriv}.
The regularity ${\mathbf{S}}_1\in L^1(\mathcal{D})^{2\times 2}$ is due to $u\in W^1_{\Gamma,q}(\mathcal{D})$ and $p\in W^1_{\Gamma,q'}(\mathcal{D})$, and the regularity of ${\mathbf{S}}_0^r$ is a consequence of the regularity of $p$ and $f_\Omega$.
\end{proof}
An interesting feature of Theorem \ref{T:shape} is that the distributed shape derivative exists even when $\Omega$ is only open.
Another relevant issue is to determine the minimal regularity of $\Omega$ for which we can obtain the boundary expression of the shape derivative.
The rest of this section is devoted to the study of this question.
We start with the following well-known result which describes the structure of the boundary expression of the shape derivative; see \cite[pp. 480-481]{MR2731611}.
\begin{theorem}[Zol\'esio's structure theorem]\label{thm:structure_theorem}
Let $\Omega$ be open with $\partial \Omega $ compact and of class $\C^{k+1}$, $k\geq 0$.
Assume $J$ has a Eulerian shape derivative at $\Omega$ and $d J(\Omega )$ is continuous for the $\C^k(\mathcal{D},\mathds{R}^d)$-topology.
Then, there exists a linear and continuous functional $l: \C^k(\partial \Omega )\rightarrow \mathds{R}$ such that
\begin{equation}
\label{volume}
d J(\Omega)(V)= l(V_{|\partial \Omega }\cdot n)\ \text{ for all } V\in \C^k_c(\mathcal{D},\mathds{R}^d).
\end{equation}
\end{theorem}
Theorem \ref{thm:structure_theorem} requires $\Omega$ to be at least $\C^1$. However, we show in Proposition \ref{tensor_relations} that even for Lipschitz domains one can obtain a boundary expression \eqref{158} for the shape derivative, although with a weaker structure than \eqref{volume}, since the tangential component of $V$ may be present in \eqref{158}.
Recall that a bounded domain is called Lipschitz if it is locally representable as the graph of a Lipschitz function.
In this case, it is well-known that the surface measure is well-defined on $\partial\Omega$ and there exists an outward pointing normal vector $n$ at almost every point on $\partial\Omega$; see \cite[Section 4.2, p. 127]{MR1158660}.
In the rest of the paper, the exponents $+$ and $-$ denote the restrictions of functions to $\Omega$ and to $\mathcal{D}\setminus\overline \Omega$, respectively, and the notation $\llbracket \phi\rrbracket : = \phi^+|_{\partial\Omega} - \phi^-|_{\partial\Omega} $ denotes the jump across the interface $\partial\Omega$ of a given function $\phi$.
\begin{proposition}\label{tensor_relations}
Suppose that $\Gamma_h\cap{\partial \Om} =\emptyset$, $\Omega\in\mathds{P}(\D)$ and $V \in \C^1_c(\mathcal{D}\setminus \Gamma_h,\mathds{R}^2)$. Then we have
\begin{align} \label{eq:equvilibrium_strong1}
\operatorname{div}(\mathbf{S}_1^+) &= (\mathbf{S}_0^r)^+ \quad \text{ a.e. in } \Omega\setminus \Gamma_h, \\
\label{eq:equvilibrium_strong2} \operatorname{div}(\mathbf{S}_1^-) &= (\mathbf{S}_0^r)^- \quad \text{ a.e. in } (\mathcal{D}\setminus\overline \Omega)\setminus \Gamma_h.
\end{align}
If $\mathbf{S}_1^+\in W^{1,1}(\Omega,\mathds{R}^{2\times 2})$ and $\mathbf{S}_1^-\in W^{1,1}(\mathcal{D}\setminus\overline{\Omega},\mathds{R}^{2\times 2})$, then
\begin{equation}\label{eq:first_order_tensor_2}
dJ(\Omega)(V) = \int_\Omega \operatorname{div}(\mathbf{S}_1^\mathsf{T}V)
+ \int_{\mathcal{D}\setminus \overline \Omega} \operatorname{div}(\mathbf{S}_1^\mathsf{T}V).
\end{equation}
If in addition $\Omega$ is Lipschitz, we also have the boundary expression
\begin{equation}\label{158}
dJ(\Omega)(V) = \int_{\partial \Omega} \llbracket \mathbf{S}_1 \rrbracket n \cdot V.
\end{equation}
If in addition $\Omega$ is of class $\C^1$, we obtain the boundary expression
\begin{equation}\label{eq:general_boundary_exp}
dJ(\Omega)(V) = \int_{\partial \Omega} (\llbracket \mathbf{S}_1 \rrbracket n\cdot n )\, V\cdot n.
\end{equation}
\end{proposition}
\begin{proof} In view of \cite[Theorem 2.2]{MR3535238}, if $V$ has compact support in $\Omega$ then the shape derivative vanishes.
Assume $V \in \C^{1}_c(\Omega\setminus \Gamma_h,\mathds{R}^2)$ and denote $U:=\operatorname{supp}V\subset \Omega\setminus \Gamma_h$; then $p$ is harmonic on $U$ and $u$ satisfies $-\sigma_\Omega\Delta u = f_\Omega$ on $U$, since $\sigma_\Omega$ is constant on $U$ and $U$ contains no measurement point.
In view of \eqref{S1_1} and the regularity of $f_\Omega$, this yields $\mathbf{S}_1\in L^1(U)$ and $\operatorname{div}(\mathbf{S}_1)\in L^1(U)$.
Thus, we can write the tensor relation $\operatorname{div}(\mathbf{S}_1^\mathsf{T} V) = \mathbf{S}_1 : DV + V\cdot \operatorname{div}(\mathbf{S}_1)$.
For such $V$ we also have $ {\mathbf{S}}^s_0(V) = 0$, so we obtain
\begin{align}
\notag dJ(\Omega)(V) & = {\mathbf{S}}^s_0(V) + \int_\mathcal{D} \mathbf{S}_1 : DV +\mathbf{S}_0^r\cdot V \\
\label{eq:77} & = \int_U \operatorname{div}(\mathbf{S}_1^\mathsf{T}V) +V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)
= 0\qquad \mbox{ for all }V \in \C^{1}_c(\Omega\setminus \Gamma_h,\mathds{R}^2).
\end{align}
Since $\operatorname{supp}V =U \subset \Omega\setminus \Gamma_h$, we can extend $\mathbf{S}_1^\mathsf{T}V$ and $V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)$ by zero on $\mathcal{B}$, where $\mathcal{B}$ is a sufficiently large open ball which contains $U$.
We keep the same notation for the extensions for simplicity.
Since the extension satisfies $\mathbf{S}_1^\mathsf{T}V\in W^{1,1}(\mathcal{B},\mathds{R}^2)$, using the divergence theorem (for instance \cite[Section 4.3, Theorem 1]{MR1158660}) in $\mathcal{B}$ we get
\begin{align*}
\int_U \operatorname{div}(\mathbf{S}_1^\mathsf{T}V) +V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)
& = \int_{\mathcal{B}} \operatorname{div}(\mathbf{S}_1^\mathsf{T}V) +V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)\\
&= \int_{\partial\mathcal{B}} (\mathbf{S}_1^\mathsf{T}V)\cdot n +\int_{\mathcal{B}}V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)\\
& = \int_{\Omega}V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1) = 0, \qquad\mbox{ for all }V\in \C^{1}_c(\Omega\setminus \Gamma_h,\mathds{R}^2),
\end{align*}
which proves \eqref{eq:equvilibrium_strong1}.
Then, we can prove \eqref{eq:equvilibrium_strong2} in a similar way by taking a vector $V\in \C^{1}_c((\mathcal{D}\setminus\overline \Omega)\setminus \Gamma_h,\mathds{R}^2)$.
Now let us assume that $V \in \C^{1}_c(\mathcal{D}\setminus \Gamma_h,\mathds{R}^2)$ and denote $U_2:=\operatorname{supp}V\subset \mathcal{D}\setminus \Gamma_h$.
By standard elliptic regularity, we have $u\in H^1(U_2)$ and $p\in H^1(U_2)$.
If we assume $\mathbf{S}_1^+\in W^{1,1}(\Omega,\mathds{R}^{2\times 2})$ and $\mathbf{S}_1^-\in W^{1,1}(\mathcal{D}\setminus\overline{\Omega},\mathds{R}^{2\times 2})$,
then using \eqref{eq:equvilibrium_strong1}-\eqref{eq:equvilibrium_strong2} we obtain
\begin{align*}
dJ(\Omega)(V) & = {\mathbf{S}}_0(V) + \int_\mathcal{D} \mathbf{S}_1 : DV +\mathbf{S}_0^r\cdot V \\
& = \int_{\Omega} \mathbf{S}_1 : DV +\mathbf{S}_0^r\cdot V + \int_{\mathcal{D}\setminus \overline \Omega} \mathbf{S}_1 : DV +\mathbf{S}_0^r\cdot V\\
& = \int_{\Omega} \operatorname{div}(\mathbf{S}_1^\mathsf{T}V) +V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)
+ \int_{\mathcal{D}\setminus \overline \Omega} \operatorname{div}(\mathbf{S}_1^\mathsf{T}V) +V\cdot( \mathbf{S}_0^r - \operatorname{div}\mathbf{S}_1)\\
& = \int_\Omega \operatorname{div}(\mathbf{S}_1^\mathsf{T}V)
+ \int_{\mathcal{D}\setminus \overline \Omega} \operatorname{div}(\mathbf{S}_1^\mathsf{T}V),
\end{align*}
which yields \eqref{eq:first_order_tensor_2}.
If in addition $\Omega$ is Lipschitz, applying the divergence theorem to \eqref{eq:first_order_tensor_2} we get \eqref{158}.
In view of \eqref{158}, we have that $dJ(\Omega )$ is continuous for the $\C^0(\mathcal{D} ,\mathds{R}^d)$-topology.
Thus, if $\Omega$ is of class $\C^1$, we can apply Theorem \ref{thm:structure_theorem} with $k=0$.
With $\Omega$ of class $\C^1$, we also have $n\in\C^0({\partial \Om},\mathds{R}^2)$ and $(V_{|\partial \Omega }\cdot n)n\in\C^0({\partial \Om},\mathds{R}^2)$.
Let $\hat V\in\C^0_c(\mathcal{D}\setminus \Gamma_h ,\mathds{R}^2)$ be an extension of $(V_{|\partial \Omega }\cdot n)n$; then using Theorem \ref{thm:structure_theorem} we obtain
\begin{align*}
dJ(\Omega)(V)
= l(V_{|\partial \Omega }\cdot n)
& = l(\hat V_{|\partial \Omega }\cdot n)
= dJ(\Omega)(\hat V) \\
= \int_{\partial \Omega} ((\mathbf{S}_1^+ - \mathbf{S}_1^-) n) \cdot\hat V
& = \int_{\partial \Omega} (\llbracket \mathbf{S}_1 \rrbracket n) \cdot ((V\cdot n) n)
= \int_{\partial \Omega} (\llbracket \mathbf{S}_1 \rrbracket n\cdot n) V\cdot n,
\end{align*}
which yields expression \eqref{eq:general_boundary_exp}.
\end{proof}
\begin{remark}
Proposition \ref{tensor_relations} is in fact valid for any shape functional whose distributed shape derivative can be written using a tensor expression of the type \eqref{T:tensor_shape_deriv}, and which satisfies the appropriate regularity assumptions.
Note that in general, one should not expect that the assumption $\mathbf{S}_1^+\in W^{1,1}(\Omega,\mathds{R}^{2\times 2})$ and $\mathbf{S}_1^-\in W^{1,1}(\mathcal{D}\setminus\overline{\Omega},\mathds{R}^{2\times 2})$ in Proposition \ref{tensor_relations} can be satisfied for any Lipschitz set $\Omega$.
For instance in the case of the Dirichlet Laplacian, one can actually build pathological Lipschitz domains for which $\mathbf{S}_1$ does not have such regularity; see
\cite[Corollary 3.2]{Costabel2019}.
However, these regularity assumptions for $\mathbf{S}_1^+,\mathbf{S}_1^-$ can be fulfilled for polygonal domains, as shown in Corollary \ref{cor:11}.
\end{remark}
\begin{corollary}\label{cor:11}
Suppose that $\Gamma_h\cap{\partial \Om} =\emptyset$, $f_0\in\C^\infty(\mathcal{D})$, $\Omega\in\mathds{P}(\D)$ and $V \in \C^{1}_c(\mathcal{D}\setminus \Gamma_h,\mathds{R}^2)$.
If $\Omega$ is Lipschitz polygonal or if $\Omega$ is of class $\C^1$, then we have
\begin{align}\label{bdr_expr}
dJ(\Omega)(V) = \int_{\partial \Omega} (\llbracket \sigma \partial_n u \partial_n p\rrbracket
+ \llbracket\sigma\rrbracket \nabla_{\partial \Omega} u\cdot\nabla_{\partial \Omega} p -\llbracket f_\Omega\rrbracket p) V\cdot n,
\end{align}
where $ \nabla_{\partial \Omega}$ denotes the tangential gradient on $\partial\Omega$.
\end{corollary}
\begin{proof}
In the case where $\Omega$ is of class $\C^1$, a quick calculation using \eqref{eq:general_boundary_exp} and \eqref{S1_1} yields \eqref{bdr_expr}.
In the case where $\Omega$ is polygonal, we can proceed in the following way.
Let $\widehat\mathcal{D}$ be a smooth open set such that $\overline{\Omega}\subset\widehat\mathcal{D}\subset\mathcal{D}$ and the boundaries of $\Omega$ and $\widehat\mathcal{D}$ are at a positive distance.
Since $f_0\in\C^\infty(\mathcal{D})$, using elliptic regularity we get that $u$ and $p$ are $\C^\infty$ on $\partial\widehat\mathcal{D}$.
Thus, $u|_{\widehat\mathcal{D}}$ and $p|_{\widehat\mathcal{D}}$ are also solutions of transmission problems defined in $\widehat\mathcal{D}$ with smooth inhomogeneous Dirichlet conditions on $\partial\widehat\mathcal{D}$, and consequently we are in the framework considered in \cite{MR1274152}.
Denote by $L$ the number of vertices of the polygon $\Omega$.
We apply \cite[Theorem 7.3]{MR1274152} in the case $k=0$, $m=1$ and for the regularity $W^{2,4/3}$.
This yields the decomposition $u|_{\widehat\mathcal{D}} = u_0 + \sum_{\ell=1}^{L} S_\ell$ with $u_0^+\in W^{2,4/3}(\Omega)$, $u_0^-\in W^{2,4/3}(\widehat\mathcal{D}\setminus\overline{\Omega})$, where the $S_\ell$ are singular functions supported in neighbourhoods of the vertices of $\Omega$.
Here $S_\ell$ is of the type $r_\ell^{\lambda_\ell} v_\ell(\theta_\ell)$, where $(r_\ell,\theta_\ell)$ are local polar coordinates at the vertex $\ell$ and $v_\ell(\theta_\ell)$ is a linear combination of $\sin(\lambda_\ell\theta_\ell)$ and $\cos(\lambda_\ell\theta_\ell)$.
It is shown in \cite[Theorem 8.1(ii)]{MR1713241} that $\lambda_\ell>1/2$ for all $\ell=1,\dots, L$.
Thus, we also obtain $\sum_{\ell=1}^{L} S_\ell^+\in W^{2,4/3}(\Omega)$ and $\sum_{\ell=1}^{L} S_\ell^-\in W^{2,4/3}(\widehat\mathcal{D}\setminus\overline{\Omega})$.
Proceeding in a similar way for $p$ and gathering the results, we obtain the regularity $u^+,p^+\in W^{2,4/3}(\Omega)$ and $u^-,p^-\in W^{2,4/3}(\widehat\mathcal{D}\setminus\overline{\Omega})$.
Then we have $\nabla(\nabla u\cdot\nabla p) = D^2 u\,\nabla p + D^2p\,\nabla u$ and using $(D^2u)^+,(D^2p)^+\in L^{4/3}(\Omega)$ and $(\nabla u)^+,(\nabla p)^+\in W^{1,4/3}(\Omega)\subset L^4(\Omega)$ and the same regularity on $\widehat\mathcal{D}\setminus\overline{\Omega}$, we obtain $\mathbf{S}_1^+\in W^{1,1}(\Omega,\mathds{R}^{2\times 2})$ and $\mathbf{S}_1^-\in W^{1,1}(\widehat\mathcal{D}\setminus\overline{\Omega},\mathds{R}^{2\times 2})$.
Then, using the fact that $V \in \C^{1}_c(\mathcal{D}\setminus \Gamma_h,\mathds{R}^2)$ we obtain in view of \eqref{158} of Proposition \ref{tensor_relations}
\begin{align*}
dJ(\Omega)(V)
& = \int_{\partial \Omega}
\llbracket \sigma \partial_n u\rrbracket \nabla_{\partial \Omega} u\cdot V
+ \llbracket \sigma \partial_n p\rrbracket \nabla_{\partial \Omega} p\cdot V
+(\llbracket \sigma \partial_n u \partial_n p\rrbracket
+ \llbracket\sigma\rrbracket \nabla_{\partial \Omega} u\cdot\nabla_{\partial \Omega} p
-\llbracket f_\Omega\rrbracket p) V\cdot n.
\end{align*}
Finally, using the fact that $\llbracket \sigma \partial_n u\rrbracket =0$ and $\llbracket \sigma \partial_n p\rrbracket = 0$ we obtain \eqref{bdr_expr}.
\end{proof}
\begin{remark}
Expressions similar to \eqref{bdr_expr} are known when $\Omega$ is at least $\C^1$, see \cite{MR3535238} and \cite{MR2329288}.
It is remarkable that one obtains the same expression \eqref{bdr_expr} when $\Omega$ is only Lipschitz polygonal.
Also, note that \eqref{bdr_expr} is similar to the formula obtained in \cite{MR3723652} for a polygonal inclusion in EIT, which was obtained in the framework of the perturbation of identity method.
In \cite{MR3723652}, an estimate of the singularity of the gradient in the neighbourhood of the vertices of the polygonal inclusion was used to obtain the boundary expression.
Here, we have used higher regularity of $u$ and $p$ in the subdomains $\Omega$ and $\widehat\mathcal{D}\setminus\overline{\Omega}$ to obtain \eqref{bdr_expr}.
The key idea of these two approaches is to control the singularity of the gradients of $u$ and $p$ near the vertices of the polygonal inclusion.
\end{remark}
\section{Numerical experiments}\label{sec:numerics}
We use the software package FEniCS for the implementation; see \cite{ans20553,langtangen2017solving,fenics:book}.
For the numerical tests the conductivity values are set to $\sigma_0 = 1$ and $\sigma_1 = 10$.
We choose $f_\Omega\equiv 0$, $\mathcal{D}=(0,1)\times(0,1)$ and
$$\Gamma = \partial\mathcal{D}\setminus ([0.4,0.6]\times \{0\}\cup [0.4,0.6]\times \{1\}).$$
The domain $\mathcal{D}$ is meshed using a regular grid of $128 \times 128$ elements.
For the measurement points we choose $\Gamma_h = \{x_k\}_{k=1}^{K}\subset\Gamma$.
Recall that no measurements are performed on $\Gamma_0=\partial\mathcal{D}\setminus\Gamma$ and that $u$ satisfies a homogeneous Dirichlet boundary condition on $\Gamma_0$.
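The following hedged sketch indicates how this discrete setup could be reproduced with legacy FEniCS (dolfin); the names are illustrative assumptions and do not correspond to the code actually used for the experiments.
\begin{verbatim}
from dolfin import *

mesh = UnitSquareMesh(128, 128)
Vh = FunctionSpace(mesh, "CG", 1)

def on_gamma0(x, on_boundary):
    # Gamma_0 = [0.4, 0.6] x ({0} and {1}): the homogeneous Dirichlet part
    return on_boundary and 0.4 <= x[0] <= 0.6 \
        and (near(x[1], 0.0) or near(x[1], 1.0))

bc = DirichletBC(Vh, Constant(0.0), on_gamma0)
\end{verbatim}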
Synthetic measurements $\{h_i(x_k)\}_{k=1}^K$ are obtained by evaluating at the points of $\Gamma_h$ the solution
of \eqref{E:var_form} corresponding to the ground truth domain $\Omega^\star$, $f_{\Omega^\star}\equiv 0$ and the currents $g_i$, $i = 1, \dots, I$.
To simulate noisy EIT data, each measurement $h_i$ is corrupted by adding Gaussian noise with mean zero and standard deviation $\delta \|h_i\|_{\infty}$, where $\delta$ is a parameter.
The noise level is then computed as
\begin{align}
\mathrm{noise} =& \frac{\sum_{i=1}^I \big(\sum_{k=1}^K |h_i(x_{k}) - \tilde{h}_i(x_k)|^2\big)^{1/2}}{\sum_{i=1}^I \big(\sum_{k=1}^K |h_i(x_k)|^2\big)^{1/2}},
\end{align}
where $h_i(x_k)$ and $\tilde{h}_i(x_k)$ are respectively the noiseless and noisy point measurements at $x_k$ corresponding to the current $g_i$.
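A possible NumPy implementation of this noise model and of the noise level is sketched below, under the assumption that the measurements are stored as one array of length $K$ per current; the function names are placeholders.
\begin{verbatim}
import numpy as np

def add_noise(h, delta, rng):
    # Gaussian noise, mean zero, standard deviation delta * ||h||_inf
    return h + rng.normal(0.0, delta * np.max(np.abs(h)), size=h.shape)

def noise_level(h_clean, h_noisy):
    # ratio of the summed l2-norms of the misfits over the I currents
    num = sum(np.linalg.norm(hc - hn) for hc, hn in zip(h_clean, h_noisy))
    den = sum(np.linalg.norm(hc) for hc in h_clean)
    return num / den
\end{verbatim}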
In the numerical tests, we use two different sets of fluxes, i.e. $I\in\{3,7\}$, to obtain measurements.
Denote by $\Gamma_{\rm{upper}}$, $\Gamma_{\rm{lower}}$, $\Gamma_{\rm{left}}$ and $\Gamma_{\rm{right}}$ the four sides of the square $\mathcal{D}$.
When $I=3$ we take
\begin{align*}
g_1 &= 1\mbox{ on }\Gamma_{\rm{left}}\cup\Gamma_{\rm{right}} \mbox{ and } g_1 = -1\mbox{ on }\Gamma_{\rm{upper}}\cup\Gamma_{\rm{lower}}, \\
g_2 &= 1\mbox{ on }\Gamma_{\rm{left}}\cup\Gamma_{\rm{upper}} \mbox{ and } g_2 = -1\mbox{ on }\Gamma_{\rm{right}}\cup\Gamma_{\rm{lower}},\\
g_3 &= 1\mbox{ on }\Gamma_{\rm{left}}\cup\Gamma_{\rm{lower}} \mbox{ and } g_3 = -1\mbox{ on }\Gamma_{\rm{right}}\cup\Gamma_{\rm{upper}}.
\end{align*}
When $I=7$ we take in addition a smooth approximation of the following piecewise constant function:
\begin{align*}
g_4 &= 1\mbox{ on }\Gamma_{\rm{left}}\cap \{x_2 > 0.5\},\
g_4 = -1\mbox{ on }\Gamma_{\rm{left}}\cap\{x_2 \leq 0.5\} \mbox{ and } g_4=0 \mbox{ otherwise},
\end{align*}
and $g_5,g_6,g_7$ are defined in a similar way on $\Gamma_{\rm{right}}$, $\Gamma_{\rm{upper}}$, $\Gamma_{\rm{lower}}$, respectively.
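In dolfin, a current such as $g_1$ could for instance be realized through a user-defined expression as in the following sketch (the class name is an assumption):
\begin{verbatim}
class G1(UserExpression):
    # +1 on the left and right sides, -1 on the upper and lower sides
    def eval(self, values, x):
        on_left_right = near(x[0], 0.0) or near(x[0], 1.0)
        values[0] = 1.0 if on_left_right else -1.0
    def value_shape(self):
        return ()
\end{verbatim}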
For the numerics we use the cost functional given by \eqref{E:cost_full}:
\begin{align}
\label{eit3.4} J(\Omega) & = \frac{1}{2}\sum_{i=1}^I \mu_i \sum_{k=1}^K (u_i(x_k) - h_i(x_k))^2,
\end{align}
where $u_i$ is the potential associated with the current $g_i$.
The weights $\mu_i$ associated with the current $g_i$ are chosen as the inverse of $\sum_{k=1}^K (u_i(x_k) - h_i(x_k))^2$ computed at the initial guess.
In this way, each term in the sum over $I$ is equal to $1$ at the first iteration, and the initial value of $J(\Omega)$ is equal to $I/2$.
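Continuing the sketches above, this normalization could be computed as follows, where \texttt{states\_init} denotes the states $u_i$ at the initial guess and all names are placeholders:
\begin{verbatim}
mu = [1.0 / sum((u_i(Point(*x)) - h) ** 2
                for x, h in zip(points, h_data[i]))
      for i, u_i in enumerate(states_init)]
\end{verbatim}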
To get a relatively smooth descent direction we solve the following partial differential equation: find $V\in H^1_0(\mathcal{D})^2$ such that
$$ \int_{\mathcal{D}} \alpha_1 DV : D\xi + \alpha_2 V \cdot\xi = - dJ(\Omega)(\xi)\mbox{ for all }\xi\in H^1_0(\mathcal{D})^2.$$
For the numerical tests, we chose $\alpha_1 = 0.3$ and $\alpha_2 = 0.7$.
To simplify the implementation, we use Dirichlet conditions on $\partial\mathcal{D}$ instead of the compact support condition $V \in \C^{1}_c(\mathcal{D}\setminus \Gamma_h,\mathds{R}^2)$ (see Section \ref{sec:prel2}).
Considering that $f_\Omega\equiv 0$ in $\mathcal{D}$, $V = 0$ on $\partial\mathcal{D}$ and that the points $\{x_k\}_{k=1}^{K}$ belong to $\Gamma$, in view of Theorem \ref{T:shape} we get $\mathbf{S}_0^s(V) =0$ which leads to the following equation for $V$:
\begin{align*}
\int_{\mathcal{D}} \alpha_1 DV : D\xi + \alpha_2 V \cdot\xi
& = - \int_{\mathcal{D}} \big(-2\sigma_\Omega (\nabla u \odot \nabla p) + (\sigma_\Omega \nabla u\cdot\nabla p ) \mathds{I}\big) : D\xi\quad \mbox{ for all }\xi\in H^1_0(\mathcal{D})^2.
\end{align*}
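A hedged dolfin sketch of this solve, with \texttt{u}, \texttt{p} and \texttt{sigma} assumed to be precomputed functions on the mesh introduced above, could read:
\begin{verbatim}
Wh = VectorFunctionSpace(mesh, "CG", 1)
Vt, xi = TrialFunction(Wh), TestFunction(Wh)
alpha1, alpha2 = Constant(0.3), Constant(0.7)

def odot(a, b):  # symmetrized outer product
    return 0.5 * (outer(a, b) + outer(b, a))

S1 = -2.0 * sigma * odot(grad(u), grad(p)) \
     + sigma * dot(grad(u), grad(p)) * Identity(2)
lhs = (alpha1 * inner(grad(Vt), grad(xi)) + alpha2 * dot(Vt, xi)) * dx
rhs = -inner(S1, grad(xi)) * dx
Vsol = Function(Wh)
solve(lhs == rhs, Vsol,
      DirichletBC(Wh, Constant((0.0, 0.0)), "on_boundary"))
\end{verbatim}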
The relative reconstruction error $E(\Omega^r)$ is defined as
$$ E(\Omega^r) := \frac{\displaystyle\int_{\mathcal{D}} |\chi_{\Omega^\star} - \chi_{\Omega^r}|}{\displaystyle\int_{\mathcal{D}} \chi_{\Omega^\star}},$$
where $\Omega^r$ is the set obtained in the last iteration of the minimization algorithm.
We use $E(\Omega^r)$ as a measure of the quality of the reconstructions.
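Given the characteristic functions of the ground truth and of the reconstruction as dolfin functions with values in $\{0,1\}$ (the names below are assumptions), $E(\Omega^r)$ can be assembled directly:
\begin{verbatim}
E = assemble(abs(chi_star - chi_rec) * dx) / assemble(chi_star * dx)
\end{verbatim}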
We present three numerical experiments.
In the first experiment, the ground truth consists of two ellipses and we use $I=3$ currents; see Figure \ref{fig:2ellipses}.
In the second experiment, the ground truth is a concave shape with one connected component and we use $I=3$ currents; see Figure \ref{fig:concave}.
In the third experiment, the ground truth consists of two ellipses and one ball and we use $I=7$ currents; see Figure \ref{fig:3ellipses}.
For each experiment, we study the influence of the point measurements patterns by comparing the reconstructions obtained using three different sets $\Gamma_h = \{x_k\}_{k=1}^{K}$ with $K\in\{16,34,70\}$.
The point measurements patterns and the corresponding reconstructions are presented in Figures \ref{2ellipses_nbpt}, \ref{concave_nbpt} and \ref{3ellipses_nbpt}, for the respective experiments.
We observe, as expected, that the reconstructions improve as $K$ becomes larger.
However, one obtains reasonable reconstructions in the case of the concave shape with $I=3$ currents and in the case of the two ellipses and ball with $I=7$ currents, even for $K=16$ points and in the presence of noise; see Figures \ref{concave_nbpt} and \ref{3ellipses_nbpt}.
In the case of the two ellipses, the deterioration of the reconstruction for $K=16$ points is much stronger compared to the case $K=70$.
This indicates that the number of currents $I=3$ is too low to reconstruct two ellipses with only $K=16$ points.
We conclude from these results that the number of applied currents is more critical than the number of point measurements for obtaining a good reconstruction.
For each experiment, we also study how the noise level affects the reconstruction depending on the number of point measurements.
The results are gathered in Tables \ref{fig:noise_influence_2ellipses}, \ref{fig:noise_influence_concave} and \ref{fig:noise_influence}, where the rows correspond to three different levels of noise, and the columns to three different numbers of points $K\in\{16,34,70\}$.
In the case of two ellipses (Table \ref{fig:noise_influence_2ellipses}), the reconstruction using $K=70$ is very robust with respect to noise, whereas it deteriorates considerably using $K=16$.
In the cases of the concave shape (Table \ref{fig:noise_influence_concave}) and of the two ellipses and ball (Table \ref{fig:noise_influence}), the degradations of the reconstructions when the noise becomes larger are of a similar order in terms of reconstruction error, independently of the value of $K$.
These results indicate that a larger number of points $K$ may improve the robustness of the reconstruction with respect to noise, mainly when the number $I$ of currents is low compared to the complexity of the ground truth.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_005_two_ellipses.tex}
\caption{point measurements pattern}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-2-2/GroundTruth.pdf}
\caption{ground truth}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-2-2/Initialization.pdf}
\caption{initialization (red)}
\end{subfigure}%
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-2-2/Reconstruction.pdf}
\caption{reconstruction}
\end{subfigure}
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-2-2/Difference.pdf}
\caption{reconstruction (red) and ground truth (black)}
\end{subfigure}
\caption{Reconstruction of two ellipses using $I=3$ currents and $K=70$ point measurements with $1.13\%$ noise.}\label{fig:2ellipses}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_02_two_ellipses.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_01_two_ellipses.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_005_two_ellipses.tex}
\end{subfigure}
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-0-2/Difference.pdf}
\caption{$K=16$ point measurements,\\ $1.18 \%$ noise, $54.00\%$ relative error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-1-2/Difference.pdf}
\caption{$K=34$ point measurements,\\ $1.18 \%$ noise, $27.13\%$ relative error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests1/1-1-2-2/Difference.pdf}
\caption{$K=70$ point measurements,\\ $1.13 \%$ noise, $17.19\%$ relative error}
\end{subfigure}
\caption{Reconstruction of two ellipses using $I=3$ currents and three different sets of point measurements shown in the first row.}\label{2ellipses_nbpt}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{cccc}
\hline
noise & $K=16$ points & $K=34$ points & $K=70$ points \\ \hline
$0\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $33.4\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-0-0/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $18.8\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-1-0/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $16.3\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-2-0/Difference.pdf}
\end{minipage}
\\ \hline
$0.51\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $29.7\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-0-1/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $17.5\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-1-1/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $19.8\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-2-1/Difference.pdf}
\end{minipage}
\\ \hline
$1.16\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $54.0\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-0-2/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $27.1\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-1-2/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $17.2\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests1/1-1-2-2/Difference.pdf}
\end{minipage}
\end{tabular}
\caption{Influence of noise and number of point measurements on the reconstruction of two ellipses using $I=3$ currents (the noise value is the average over the noise values for the three levels of point measurements).}\label{fig:noise_influence_2ellipses}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{concave_inclusion_h_02.tex}
\caption{point measurements pattern}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-1-1/GroundTruth.pdf}
\caption{ground truth}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-1-1/Initialization.pdf}
\caption{initialization (red)}
\end{subfigure}%
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-1-1/Reconstruction.pdf}
\caption{reconstruction}
\end{subfigure}
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-1-1/Difference.pdf}
\caption{reconstruction (red) and ground truth (black)}
\end{subfigure}
\caption{Reconstruction of a concave shape using $I=3$ currents and $K=34$ point measurements with $0.55\%$ noise.}\label{fig:concave}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{concave_inclusion_h_02.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{concave_inclusion_h_01.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{concave_inclusion_h_005.tex}
\end{subfigure}
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-0-1/Difference.pdf}
\caption{$K=16$ point measurements,\\ $1.03 \%$ noise, $7.53\%$ relative error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-1-1/Difference.pdf}
\caption{$K=34$ point measurements,\\ $0.73 \%$ noise, $4.77\%$ relative error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/3-1-2-1/Difference.pdf}
\caption{$K=70$ point measurements,\\ $0.71 \%$ noise, $5.37\%$ relative error}
\end{subfigure}
\caption{Reconstruction of a concave shape using $I=3$ currents and three different sets of point measurements shown in the first row.}\label{concave_nbpt}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{cccc}
\hline
noise & $K=16$ points & $K=34$ points & $K=70$ points \\ \hline
$0\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $8.40\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-0-0/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $5.84\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-1-0/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $4.75\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-2-0/Difference.pdf}
\end{minipage}
\\ \hline
$0.54\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $8.81\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-0-1/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $5.28\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-1-1/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $6.77\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-2-1/Difference.pdf}
\end{minipage}
\\ \hline
$1.12\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $13.90\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-0-2/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $8.28\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-1-2/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $10.03\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/3-1-2-2/Difference.pdf}
\end{minipage}
\end{tabular}
\caption{Influence of noise and number of point measurements on the reconstruction of a concave shape using $I=3$ currents (the noise value is the average over the noise values for the three levels of point measurements). }\label{fig:noise_influence_concave}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_005_three_ellipses.tex}
\caption{point measurements pattern}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-2-1/GroundTruth.pdf}
\caption{ground truth}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-2-1/Initialization.pdf}
\caption{initialization (red)}
\end{subfigure}%
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-2-1/Reconstruction.pdf}
\caption{reconstruction}
\end{subfigure}
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-2-1/Difference.pdf}
\caption{reconstruction (red) and ground truth (black)}
\end{subfigure}
\caption{Reconstruction of three ellipses using $I=7$ currents and $K=70$ point measurements with $0.63\%$ noise.}\label{fig:3ellipses}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_02_three_ellipses.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_01_three_ellipses.tex}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\input{domain_h_005_three_ellipses.tex}
\end{subfigure}
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-0-1/Difference.pdf}
\caption{$K=16$ point measurements,\\ $0.59 \%$ noise, $25.52\%$ relative error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-1-1/Difference.pdf}
\caption{$K=34$ point measurements,\\ $0.54 \%$ noise, $18.8\%$ relative error}
\end{subfigure}%
~
\begin{subfigure}[t]{0.334\textwidth}
\centering
\includegraphics[height=1.99in,width=1.99in]{./tests2/2-2-2-1/Difference.pdf}
\caption{$K=70$ point measurements,\\ $0.63 \%$ noise, $16.1\%$ relative error}
\end{subfigure}
\caption{Reconstruction of three ellipses using $I=7$ currents and different sets of point measurements shown in the first row.}\label{3ellipses_nbpt}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{cccc}
\hline
noise & $K=16$ points & $K=34$ points & $K=70$ points \\ \hline
$0\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $25.3\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-0-0/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $17.5\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-1-0/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $16.3\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-2-0/Difference.pdf}
\end{minipage}
\\ \hline
$0.59\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $25.5\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-0-1/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $18.8\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-1-1/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $16.1\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-2-1/Difference.pdf}
\end{minipage}
\\ \hline
$1.14\%$
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $38.4\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-0-2/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $38.9\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-1-2/Difference.pdf}
\end{minipage}
&
\begin{minipage}{0.25\textwidth}
\centering{\vspace{0.1cm}{\scriptsize error: $29.7\%$}}\\
\includegraphics[width=1\textwidth, height = 1\textwidth]{./tests2/2-2-2-2/Difference.pdf}
\end{minipage}
\end{tabular}
\caption{Influence of noise and number of point measurements on the reconstruction of three ellipses using $I=7$ currents (the noise value is the average over the noise values for the three levels of point measurements).}\label{fig:noise_influence}
\end{table}
\noindent{\bf Acknowledgements.} Yuri Flores Albuquerque and Antoine Laurain gratefully acknowledge support of the RCGI - Research Centre for Gas Innovation, hosted by the University of São Paulo (USP) and sponsored by FAPESP - S\~ao Paulo Research Foundation (2014/50279-4) and Shell Brasil. This research was carried out in association with the ongoing R\&D project registered as ANP 20714-2 - Desenvolvimento de t\'ecnicas num\'ericas e software para problemas de invers\~ao com aplica\c{c}\~oes em processamento s\'ismico (USP / Shell Brasil / ANP), sponsored by Shell Brasil under the ANP R\&D levy as ``Compromisso de Investimentos com Pesquisa e Desenvolvimento''.
Antoine Laurain gratefully acknowledges the support of FAPESP, process: 2016/24776-6 ``Otimiza\c{c}\~ao de forma e problemas de fronteira livre'', and of the Brazilian National Council for Scientific and Technological Development (Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico - CNPq) through the process: 408175/2018-4 ``Otimiza\c{c}\~ao de forma n\~ao suave e controle de problemas de fronteira livre'', and through the program ``Bolsa de Produtividade em Pesquisa - PQ 2015'', process: 304258/2018-0.
\section{Appendix 1: averaged adjoint method}\label{sec:appendix}
Let $t_0>0$ be given and $E = E(\mathcal{D}), F=F(\mathcal{D})$ be two Banach spaces, and consider a parameterization
$\Omega_t = T_t(\Omega)$ for $t\in [0,t_0]$ such that $T_t(\mathcal{D})=\mathcal{D}$, i.e. which leaves $\mathcal{D}$ globally invariant.
Our goal is to differentiate shape functions of the type
$J(\Omega_t)$ which can be written using a Lagrangian as $J(\Omega_t) = \mathcal{L}(\Omega_t, u^t,\hat\psi)$,
where $u^t\in E(\mathcal{D})$ and $\hat\psi\in F(\mathcal{D}) $.
The main appeal of the Lagrangian is that
we actually only need to compute the partial derivative with respect to $t$ of $\mathcal{L}(\Omega_t,\hat\varphi,\hat\psi)$ to compute the derivative of $J(\Omega_t)$; indeed, this is the main result of Theorem \ref{thm:sturm}.
In order to differentiate $\mathcal{L}(\Omega_t, \hat\varphi,\hat\psi)$, the change of coordinates $x\mapsto T_t(x)$ is used in the integrals.
In the process appear the pullbacks $\hat\varphi\circ T_t\in E(\mathcal{D})$ and $\hat\psi\circ T_t\in F(\mathcal{D})$ which depend on $t$.
The usual procedure in shape optimization to compensate for this effect is to use a reparameterization
$\mathcal{L}(\Omega_t, \Psi_t(\varphi), \Psi_t (\psi))$ instead of $\mathcal{L}(\Omega_t,\hat\varphi,\hat\psi)$, where
$\Psi_t$ is an appropriate bijection of
$E(\mathcal{D})$ and $F(\mathcal{D})$, and $\varphi\in E(\mathcal{D})$, $\psi\in F(\mathcal{D})$.
Now the change of variable in the integrals yields functions $\varphi$ and $\psi$ in the integrands, which are independent of $t$.
In this paper we take $E(\mathcal{D}) = W^1_{\Gamma,p}(\mathcal{D})$, $F(\mathcal{D}) = W^1_{\Gamma,p'}(\mathcal{D})$, and $\Psi_t(\psi) = \psi\circ T_t^{-1}$ is then a bijection of $E(\mathcal{D})$ and $F(\mathcal{D})$; see \cite[Theorem 2.2.2, p.52]{b_ZI_1989a}.
Thus we consider the so-called {\it shape-Lagrangian} $G:[0,t_0]\times E\times F \rightarrow \mathds{R}$ with
$$ G(t,\varphi,\psi):
=\mathcal{L}(\Omega_t,\varphi\circ T_t^{-1},\psi\circ T_t^{-1}).$$
The main result of this section, Theorem \ref{thm:sturm}, shows that in order to obtain the shape derivative of $\mathcal{L}$, it is enough to compute the partial derivative with respect to $t$ of $G$ while assigning the values $\varphi=u$ and $\psi=p$, where $u$ is the state and $p$ is the adjoint state.
The main ingredient is the introduction of the averaged adjoint equation described below.
Let us assume that for each $t\in [0,t_0]$ the equation
\begin{equation}\label{eq:state_G}
d_\psi G(t,u^t,0;\hat\psi) = 0\;\text{ for all } \hat\psi \in F
\end{equation}
admits a unique solution $u^t\in E$.
Further, we make the following assumptions for $G$.
\begin{assumption}
\label{amp:gateaux_diffbar_G}
\label{amp:affine-linear}
For every $(t,\psi)\in [0,t_0]\times F$
\begin{enumerate}
\item[(i)] $[0,1]\ni s\mapsto G(t,su^t + (1-s)u^0,\psi)$ is absolutely continuous.
\item[(ii)] $[0,1]\ni s\mapsto d_\varphi G(t,su^t+(1-s)u^0,\psi;\hat{\varphi})$ belongs to $L^1(0,1)$ for all $\hat{\varphi}\in E$.
\end{enumerate}
\end{assumption}
When Assumption \ref{amp:affine-linear} is satisfied, for $t\in [0,t_0]$ we introduce the \textit{averaged adjoint equation} associated with $u^t$ and $u^0$:
find $p^t\in F$ such that
\begin{equation}\label{averated_}
\int_0^1 d_\varphi G(t,su^t+(1-s)u^0,p^t;\hat{\varphi})\, ds =0 \quad \text{ for all } \hat{\varphi}\in E.
\end{equation}
In view of Assumption \ref{amp:affine-linear} we have
\begin{equation}
\label{eq:main_averaged}
G(t,u^t,p^t)-G(t,u^0,p^t) = \int_0^1 d_\varphi G(t,su^t+(1-s)u^0,p^t;u^t-u^0)\, ds =0\quad \text{ for all } t\in[0,t_0].
\end{equation}
We can now state the main result of this section.
\begin{assumption}\label{H1}
We assume that
$$ \lim_{t\searrow 0} \frac{G(t,u^0,p^t)-G(0,u^0,p^t)}{t}=\partial_tG(0,u^0,p^0).$$
\end{assumption}
\begin{theorem}
\label{thm:sturm}
Let Assumption \ref{amp:affine-linear} and Assumption \ref{H1} be satisfied and assume there exists a unique solution $p^t$ of the averaged adjoint equation \eqref{averated_}.
Then for all $\psi \in F$ we obtain
\begin{equation}\label{eq:dt_G_single}
{\frac{d}{dt}} J(\Omega_t) \Big|_{t=0} = {\frac{d}{dt}}(G(t,u^t,\psi))\Big|_{t=0}=\partial_t G(0,u^0,p^0).
\end{equation}
\end{theorem}
\section{Appendix 2: proof of Theorem \ref{thm01}}
For the convenience of the reader we write here the proof of Theorem~\ref{thm01}, which is essentially the same as the proof of \cite[Theorem 1]{a_GRRE_1989a}.
We recall from \cite{MR990595} that if $\mathcal{D}\cup \Gamma$ is regular in the sense of Gr\"oger, then the mapping $\mathcal{P}:W^1_{\Gamma,q}(\mathcal{D}) \to (W^1_{\Gamma,q'}(\mathcal{D}))^*$ defined by
$$\langle \mathcal{P} v,w\rangle := \int_\mathcal{D} \nabla v\cdot \nabla w + vw $$
is onto and hence
the inverse $\mathcal{P}^{-1}:(W^1_{\Gamma,q'}(\mathcal{D}))^*\to W^1_{\Gamma,q}(\mathcal{D})$ is well-defined.
\begin{proof}[Proof of Theorem~\ref{thm01}]
Let $f\in (W_{\Gamma,q'}^1(\mathcal{D}))^*$ be given.
As in \cite[Theorem 1]{a_GRRE_1989a} we define for $s>0$ the mapping
\begin{align*}
Q_f: W_{\Gamma,q}^1(\mathcal{D}) & \to W_{\Gamma,q}^1(\mathcal{D}),\\
u & \mapsto \mathcal{P}^{-1}(D^*BDu + sf),
\end{align*}
where $D:W_{\Gamma,2}^1(\mathcal{D})\to L^2(\mathcal{D},\bbR^3)$ is defined by $u\mapsto (u,\nabla u)$, $D^*: L^2(\mathcal{D},\bbR^3)^*\to (W_{\Gamma,2}^1(\mathcal{D}))^*$ is the adjoint of $D$ and $By:= y - s\widehat\mathds{A} y$ for $y=(y_0,y_1,y_2)\in L^2(\mathcal{D},\bbR^3)$ and $\widehat\mathds{A} y: = (0,\mathds{A} (y_1,y_2)^\mathsf{T})$.
We observe that $D^*BDu = D^*Du -s D^*(0,\mathds{A} \nabla u)$ and $D^*D=\mathcal{P}$, which yields
\begin{equation*}
Q_fu = u - s\mathcal{P}^{-1}( D^*(0,\mathds{A} \nabla u) - f).
\end{equation*}
If $Q_f$ has a fixed point in $W_{\Gamma,q}^1(\mathcal{D})$, then we obtain $D^*(0,\mathds{A} \nabla u) = f$ in $(W_{\Gamma,q'}^1(\mathcal{D}))^*$ which is equivalent to $\mathcal{A}_q u=f$ in $(W_{\Gamma,q'}^1(\mathcal{D}))^*$.
The proper choice of $s$ allows one to show that $Q_f$ is a contraction, and the result then follows
from Banach's fixed point theorem.
Note that $\|D\|_{L^2} = \|D^*\|_{L^2}=1$.
Then for all $v,w\in W_{\Gamma,q}^1(\mathcal{D})$ we have
\begin{equation} \label{E:GR_1}
\|Q_fv-Q_fw\|_{W^1_{q}} \le \|\mathcal{P}^{-1}\|_{L(W^{-1}_{\Gamma,q},W^1_{\Gamma,q})} \|D^*\|_{L^2} \|BD(v-w)\|_{L^q}.
\end{equation}
Now, using assumptions~\eqref{assump2} yields, for all $s>0$,
\begin{equation}\label{E:stackrel_By}
\begin{split}
|By(x)|^2 & = |y(x)|^2 - 2s \widehat\mathds{A}(x)y(x)\cdot y(x) + s^2 |\widehat\mathds{A}(x)y(x)|^2 \\
&\le |y(x)|^2 - 2ms|y(x)|^2 + s^2 M^2|y(x)|^2.
\end{split}
\end{equation}
Hence, choosing $s= m/M^2$ yields $|By(x)|\le k|y(x)|$ with $k:= (1-m^2/M^2)^{1/2}$ and thus
\begin{equation}\label{E:GR_2}
\|BD(v-w)\|_{L^q} \stackrel{\eqref{E:stackrel_By}}{\le} k\|D(v-w)\|_{L^q}.
\end{equation}
Combining \eqref{E:GR_1}, \eqref{E:GR_2}, $M_q = \|\mathcal{P}^{-1}\|_{L(W^{-1}_{\Gamma,q},W^1_{\Gamma,q})} $ and $\|D\|_{L^2} =1$ yields
\begin{equation*}
\|Q_fv-Q_fw\|_{W^1_{q}} \le k M_q\|v-w\|_{W^1_q}.
\end{equation*}
Since we have assumed that $k M_q<1$, it follows that $Q_f$ is a contraction.
Let $u$ and $v$ be the fixed points of $Q_f, Q_g$, respectively.
Then we have
\begin{align*}
\|u-v\|_{W^1_{q}} &= \|Q_f u-Q_g v\|_{W^1_{q}} \le k M_q \|u-v\|_{W^1_q} + \|Q_f v-Q_g v\|_{W^1_{q}}\\
&\leq k M_q\|u-v\|_{W^1_q} + s \|\mathcal{P}^{-1}\|_{L(W^{-1}_{\Gamma,q},W^1_{\Gamma,q})}\|f - g\|_{W^{-1}_{\Gamma,q}}.
\end{align*}
Using the definition of $M_q$ and rearranging, we obtain
\begin{align*}
(1-k M_q)\|u-v\|_{W^1_{q}} & \leq s M_q \|f - g\|_{W^{-1}_{\Gamma,q}},
\end{align*}
which proves \eqref{Aq_iso}.
By Lemma \ref{lemma01} the hypothesis $M_q k<1$ of Theorem \ref{thm01} is satisfied if $q>2$ is sufficiently close to $2$.
\end{proof}
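For readers who prefer a computational illustration, the following Python sketch runs the same contraction iteration in a finite-dimensional toy setting, with $\mathcal{P}$ replaced by the identity and $\mathds{A}$ a $2\times 2$ matrix satisfying the ellipticity bounds with constants $m$ and $M$. All numerical values are illustrative assumptions and are not part of the theorem.
\begin{verbatim}
import numpy as np

# Toy analogue of Q_f: P = identity, A a 2x2 matrix with
# A y . y >= m |y|^2 and |A y| <= M |y| (illustrative values).
A = np.array([[2.0, 0.5],
              [-0.5, 1.0]])
m = 1.0                         # smallest eigenvalue of the symmetric part
M = np.linalg.norm(A, 2)        # operator norm
s = m / M**2                    # the step size chosen in the proof
k = np.sqrt(1 - m**2 / M**2)    # contraction constant, k < 1

f = np.array([1.0, -2.0])
u = np.zeros(2)
for it in range(500):
    u_next = u - s * (A @ u - f)          # the map Q_f in this toy setting
    if np.linalg.norm(u_next - u) < 1e-12:
        break
    u = u_next

print(it, np.linalg.norm(A @ u - f))      # residual ~ 0: u solves A u = f
\end{verbatim}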
\bibliographystyle{abbrv}
\section{Introduction}
The measurement of isotopic compositions and delta values generally requires the separation of the element of interest from a complex matrix. For measurements by inductively coupled plasma mass spectrometry, the presence of a matrix can affect the measurement by changing the instrumental mass bias or suppressing the element of interest. Separation from the matrix is especially important when the matrix contains elements with interfering isotopes, in order to minimize corrections for isobaric interferences.
The mineral zircon, ZrSiO$_4$, has been a staple of geochemistry for decades, giving insights into the history of our planet\cite{Davis2003}. Zircon can remain a closed system over its lifetime, containing a chemical and isotopic fingerprint from the age when it was formed. In the early 1900s, scientists studying mass spectrometry recognized that zircon would be one of the best tools for use as a geochronometer. The discovery of the radioactivity of uranium and its eventual decay to lead would provide a tool to measure sample ages on a geological timescale, from thousands to billions of years. Zircon is especially suited to this measurement not only because of its resistance to chemical alteration, but also due to its high U concentration and the exclusion of lead from its structure during crystallization. Uranium-lead dating techniques\cite{Schaltegger2015} exploit this to measure the age of zircon minerals that are nearly as old as the Earth itself, providing critical constraints on early Earth evolution processes \cite{Harrison2009}.
Not only is zircon valuable as a geochronometer, but it also contains traces of the environment in which it was formed. Zircon provides the only way to understand the environment on Earth from $>$4 Ga, since no whole rocks have survived that long\cite{Whitehouse2017}. By comparing the elemental and isotopic compositions of ancient zircons to those of more recent zircons and their host rocks, an understanding of the early Earth can be deduced. The analysis of rare earth elements\cite{Whitehouse2002} along with oxygen and hafnium isotopic compositions\cite{Whitehouse2017} has been used to infer details about early Earth evolution processes such as crustal formation and the presence of water. Recently, advances in analytical techniques have allowed for isotopic composition measurements of elements found at low concentrations such as lithium, which provides additional constraints on early Earth evolution\cite{Ushikubo2008}.
Beyond geological applications, the nature and age of zircons make them an ideal system to study very long-lived nuclear processes such as double-beta ($\beta\beta$) decay. For example, the isotope $^{96}$Zr, which has a natural abundance of 2.80\%\cite{DeLaeter2003}, has been the subject of ongoing studies\cite{Heiskanen2007} as a $\beta\beta$-decay candidate: $^{96}\text{Zr} \rightarrow $ $^{96}\text{Mo}+2e^{-}+2\bar{\nu}$. The high energy of this decay, $Q=3356.097(86)$ keV\cite{Alanssari2016}, makes it one of the best candidates for the observation of neutrinoless $\beta\beta$-decay, the detection of which would prove the neutrino to be its own anti-particle. Two previous studies of the $\beta\beta$-decay half-life yielded significantly different results: $0.94(32)\cdot10^{19}$ years\cite{Wieser2001} and $2.4(3)\cdot10^{19}$ years \cite{Argyriades2010}. The first was performed by a geochemical measurement of zircon, while the second was a direct counting measurement performed at the Neutrino Ettore Majorana Observatory (NEMO). To understand this discrepancy, a more careful geochemical measurement is of utmost importance.
A geochemical measurement of the $\beta\beta$-decay half-life is performed by measuring the amount of the decay product $^{96}$Mo as a mass-independent excess relative to the natural isotopic composition of Mo. The measured excess is used along with the mass ratio of Mo to Zr in the zircon sample to determine the relative amount of daughter product with the following equation:
\begin{equation}
\frac{N_d}{N_0}=\frac{m_\text{Mo}}{m_\text{Zr}}\frac{A_w(\text{Zr})}{A_w(\text{Mo})}\frac{C(^{96}\text{Mo})}{C(^{96}\text{Zr})}\delta(^{96}\text{Mo})
\label{eq:ndn0}
\end{equation}
where $N_d$ and $N_0$ are the numbers of daughter $^{96}$Mo and parent $^{96}$Zr atoms respectively, $m_\text{X}$ are the total masses of Mo and Zr in the sample, $A_w(\text{X})$ are the atomic weights of each element, $C(^{96}\text{X})$ are the natural isotopic abundances of the respective isotopes, and $\delta(^{96}\text{Mo})$ is the measured excess of $^{96}$Mo due to the daughter product. The half-life $T_{1/2}$ is then determined from this ratio and the average age, $t$, of the zircons with the following equation:
\begin{equation}
T_{1/2}=\frac{-t \ln{2}}{\ln{(1-N_d/N_0)}}
\label{eq:halflife}
\end{equation}
There are therefore three values which must be measured to determine the half-life: the $^{96}$Mo excess, the $m_\text{Mo}/m_\text{Zr}$ ratio (Mo:Zr), and the age of the zircons in the sample. The Mo isotopic composition can be measured by mass spectrometry but requires the sample to be digested, and the Mo must be separated from the Zr to eliminate isobaric interferences from $^{92,94,96}$Zr. This is particularly challenging as the Zr content is $>$10$^7$ times that of the Mo content. The extreme Zr:Mo ratio also makes it difficult to measure the Mo:Zr ratio itself due to the limited dynamic range of the ICPMS, measured to be $<$10$^6$:1 for Mo and Zr. The Zr and Mo content cannot be measured simultaneously, so the Zr content is measured prior to the first ion exchange, while the Mo content is measured after most Zr is removed. Lastly, the sample age is determined by U-Pb dating using LA-ICPMS.
In this study we performed a meticulous analysis and optimization of the measurement techniques required to measure the Mo isotopic composition in zircon in order to perform a geochemical measurement of the half-life of the decay of $^{96}$Zr. Large quantities ($>$100 mg) of zircon had to be digested in order to detect the decay product $^{96}$Mo (on the order of $\sim$10 pg radiogenic Mo and $\sim$10 ng common Mo per gram of zircon). The zircon digestion and ion exchange chemistry were optimized to ensure maximum Mo recovery and separation from Zr. Suppression of Zr was critical to the analysis, and a target of less than 10\% m(Zr)/m(Mo) remaining was required. The mass spectrometry and data analysis procedures were optimized to obtain the lowest possible measurement uncertainty from limited samples while enabling a quantification of the sources of uncertainty. In particular, the corrections for background and isobaric interferences were improved.
\section{Samples, reagents and equipment}
Numerous zircon reference materials have been well-characterized by geochronologists and geochemists for use as calibration and validation reference materials\cite{Wiedenbeck1995,Black2004,Woodhead2005}. However, the limited availability of these reference materials and the large quantity of zircon required for testing and validation of the measurement method presented herein ($\sim$20 g used throughout experiments) preclude their use in this study. As such, separated zircon samples from the Yoganup Strand Line were obtained from Westralian Sands Limited (WSL) mineral sand mining operations at Capel, Western Australia. Detrital samples often contain zircons spanning a wide range of ages, and high-n detrital zircon geochronology by LA-ICPMS was used to characterize the detrital zircon population in the sand. Dates for 510 individual zircon grains were obtained for an aliquot of zircon (WSL5655) employing methods similar to those of Daniels et al.\cite{Daniels2017}. Individual zircon dates ranged from 150 Ma to nearly 3500 Ma and a mean age of 910(30) Ma was obtained for the sample.
Solutions were prepared with high purity reagents including Seastar\TM Baseline\regTM 47-51\% hydrofluoric acid (HF), Anachemia Environmental Grade Plus 32-35\% hydrochloric acid (HCl), and BDH Aristar\regTM Ultra 67-70\% nitric acid (HNO$_3$). Reagents were diluted with Milli-Q water purified to 18.2 M$\Omega\cdot$cm. Measurements were calibrated to dilutions of ICP standards: Specpure\regTM 1000 $\mu$g/g Zr and PlasmaCal\regTM 10000 $\mu$g/g Mo. All dilutions and other mass measurements were performed with a calibrated Mettler-Toledo AT201 analytical balance.
Digestions were performed in a custom HF resistant Parr Instrument Company Model 4746 high pressure acid digestion vessel constructed from the high nickel Alloy 400 (Monel\regTM). Ion exchanges were performed with Eichrom\regTM TEVA resin (50-100 $\mu$m) and Eichrom\regTM analytical grade cation exchange resin (50Wx8, 100-200 mesh). Mass spectrometry measurements were carried out on a Thermo Scientific Neptune\TM multi-collector inductively coupled plasma mass spectrometer (MC-ICPMS) equipped with 9 Faraday cups, a secondary electron multiplier, and Multi Ion Counting (MIC) detectors. Samples were introduced through an Elemental Scientific Apex-Q desolvating nebulizer with a 130 $\mu$L/min PFA nebulizer.
\section{Digestion of zirconium silicate samples}
The digestion of small zircon samples is used frequently for geochemical analysis \cite{Potts2015}, especially in zircon dating by TIMS \cite{Mattinson2005}. Zircon is resistant to most acids, requiring high temperature HF for digestion. Concentrated HF is sometimes mixed with other acids such as nitric, sulfuric or boric acid to assist with digestion of the silicate matrix. High temperature digestion has been demonstrated to achieve close to 100\% digestion at 200-250 $^\circ$C for $>$24 hours.\cite{Mattinson2005,Wiedenbeck1995}
The digestion of the sample for this project was particularly challenging due to the large amount ($>$100 mg) of zircon being processed. Large sample sizes were required in order to have a detectable amount of the $^{96}$Mo decay product. Further, the HF-digested sample had to be evaporated and fully re-dissolved in HCl for ion exchange separation. These requirements increased the likelihood that silicates would re-crystallize, which could interfere with the recovery of trace elements.
Tests of recovery from the digestion were performed with 0.1-1.0 g aliquots of WSL5655 zircon, in 10 mL of hydrofluoric acid. Prior to digestion, zircon samples were purified by S.G. Frantz\regTM magnetic mineral separation (non-magnetic fraction recovered at 1.8 A, 1$^\circ$), then washed in concentrated aqua regia overnight. This step minimized contamination from other minerals such as titanite (CaTiSiO$_5$), which was found to contain a large amount of Mo. Digestions were performed by loading the digestion vessel with the zircon and acid, then heating in an oven at 220 $^\circ$C for 3-4 days to achieve maximum dissolution. Zirconium recovery was tested by measuring the Zr signal intensity of a diluted (10000:1) sample against a 100 ppb Specpure\regTM zirconium standard with the Neptune MC-ICPMS.
Digesting in 10 mL of 48\% HF yielded recoveries between 50\% and 100\%, depending on the amount of zircon being processed. It was found that no more than 0.5 g of zircon could be successfully digested at once. Samples were then evaporated to dryness under a heat lamp and re-dissolved in 5 mL of 2.75 M HCl for ion exchange. Solutions that contained visible crystallized silicate had lower Zr recovery during re-dissolution than those that achieved full digestion. Pipetting the dissolved solution away from any crystallized silicate allowed for 95-100\% re-dissolution in HCl. The addition of nitric or boric acids was not found to improve the digestion or re-dissolution recovery. The addition of nitric acid was found to lower digestion recovery by diluting the HF acid.
Total Zr recovery of $>$75\% was achieved by digesting $\sim$0.5 g of zircon in 10 mL of 48\% HF acid at 215 $^\circ$C for 96 hours followed by evaporation of the silicate-free solution and re-dissolution in 2.75 M HCl, as shown in Table \ref{tab:Digest-Table}.
\begin{table}[t]
\centering
\begin{tabular}{llll}
\hline
& 5655-0 & 5655-1 & 5655-2 \\
\hline
\textbf{Digestion} & & & \\
Mass Zircon (mg) & 383.0 & 573.8 & 489.5 \\
Mass w/ HF (g) & 9.9756 & 12.6130 & 11.6684 \\
$\left[\text{Zr}\right]$ Expected (mg/g) & 19.2 & 22.75 & 20.98 \\
$\left[\text{Zr}\right]$ Measured (mg/g) & 20(1) & - & - \\
\hline
\textbf{Re-dissolution} & & & \\
Mass w/ HCl (g) & 12.2786 & 12.5062 & 10.7136 \\
$\left[\text{Zr}\right]$ Expected (mg/g) & 15.60 & 22.94 & 22.85 \\
$\left[\text{Zr}\right]$ Measured (mg/g) & 15.3(8) & 23(1) & 10.3(5) \\
\hline
\end{tabular}
\caption{Results of three digestions of WSL zircons. Sample 5655-0 demonstrated complete recovery during digestion, and samples 5655-0 and -1 demonstrated complete recovery during re-dissolution. Sample 5655-2 lost about half of the sample due to re-crystallization after digestion.}
\label{tab:Digest-Table}
\end{table}
\section{Ion exchange separation of Mo from Zr}
Purification of molybdenum from various matrices has been demonstrated to achieve $\sim$3-4 orders of magnitude reduction in contaminants using anion and cation ion exchange resins \cite{Siebert2001}. However, a much more robust method was needed for this work as molybdenum must be separated from 7-8 orders of magnitude more zirconium to minimize the isobaric interferences on $^{92,94,96}$Mo. Further, the chemistry blank must be minimized to avoid increasing the $<$50 ng natural Mo content, which would suppress the relative magnitude of the nuclear decay excess. Improved blanks were achieved by using TEVA resin (50-100 $\mu$m) for anion exchange as it was found to perform quite well with low column volumes and lower molarity acids. The resin was characterized by performing online elution measurements of a solution containing 1 ppm each of Mo and Zr in 0.25-3.0 M HCl. The solution was pumped at 130 $\mu$L/min through 0.5 mL of resin, then into the MC-ICPMS using an ESI Apex-Q desolvating nebulizer. The signal intensities for $^{91}$Zr and $^{98}$Mo were simultaneously measured and are shown in Figure \ref{fig:Elution-Curves}. Complete Mo retention was achieved at 2.0 M HCl while no evidence of increased Zr retention was observed up to 3.0 M HCl. It is important to note that Mo loaded with Zr is significantly delayed, even in 0.5 M HCl. This, combined with its broadened peak width, demonstrated that the transport of Mo is affected by the resin. This effect is mitigated by using 0.5 M HNO$_3$ during Mo elution, which significantly increases the elution rate, as seen after the dotted lines in Figure \ref{fig:Elution-Curves}.
The optimal concentration of HCl for the ion exchange was found to be 2.75 M, resulting in high Mo retention and low Zr retention on the column while allowing for a lowered acid concentration in case the acid is buffered during sample re-dissolution. Approximately 2 orders of magnitude reduction in Zr was achieved after 2000 s, corresponding to 5 mL of wash, with 3 orders achieved after 10 mL and diminishing returns after this point. Near-complete ($>$95\%) recovery of Mo was achieved with 5 mL of 0.5 M HNO$_3$. To achieve sufficient separation, 2-3 complete passes through fresh resin were necessary.
\begin{figure}[htp]
\centering
\includegraphics[width=0.55\linewidth]{Graphics/Elution-Curves-N.pdf}
\caption{\footnotesize{Log-scale elution curves for solution passed through TEVA ion exchange resin at 130 $\mu$L/min into the ICPMS. Solutions containing 1 ppm Zr and Mo in 0.5 M, 1.5 M, and 2.5 M HCl were introduced at the start, followed by a wash in HCl at the same concentration, marked by a solid line. At the dashed line 0.75 M HCl is introduced, and at the dotted line 0.5 M HNO$_3$ is introduced. Note that the Mo signal prior to the wash is due to tailing from the previous measurement.}}
\label{fig:Elution-Curves}
\end{figure}
In addition to the TEVA separation, a cation exchange resin was used to purify Mo from other elements, in particular iron which produces an interference on $^{96}$Mo due to $^{56}$Fe$^{40}$Ar$^+$. The interfering signal intensity was around 2.4 mV/ppm Fe, compared to 35 V/ppm Mo at $^{96}$Mo. This was enough to cause a significant non-radiogenic $^{96}$Mo excess in high Fe, low Mo samples. TEVA-exchanged samples were dried and re-dissolved in 0.5 M HCl. At this concentration, Fe and Zr are retained by the column while Mo is eluted. The solution was pumped through a 0.5 mL cation exchange column at 0.5 mL/min, and near-complete recovery ($>$95\%) was achieved with an additional 3-5 mL of 0.5 M HCl. This process was repeated several times to further minimize the Zr content. Finally, the purified Mo samples were dried and re-dissolved in 0.5 M HNO$_3$ for analysis by MC-ICPMS. Column blanks given as concentrations relative to the amount of solution passed through the column were measured to be less than 0.05 ng/g Zr and 0.01 ng/g Mo. Total analytical blank was measured to be $<$2 ng Mo, mostly coming from the acid digestion.
Four ion exchanges were performed on two separate digestions of the WSL zircons. Two ion exchanges using TEVA resin were performed to remove the bulk of the Zr from the sample as the extremely high concentration of Zr would quickly overload the cation columns leading to poor separation. Subsequently, two cation exchange separations were performed to capture the remaining Zr and other interfering elements such as Fe. As shown in Table \ref{tab:IonX-Table}, more than nine orders of magnitude reduction in Zr was achieved while retaining 63\% of the zircons' Mo. It was not possible to measure the initial Mo content prior to the ion exchange due to the very low concentration relative to Zr.
\begin{table*}[t]
\centering
\begin{tabular}{lllllll}
\hline
& Initial & TEVA-1 & TEVA-2 & CAT-1 & CAT-2 & Proportion Remaining \\
\hline
5655-1 Mass-Zr & 287 mg & 180 $\mu$g & 145 ng & 19 ng & 0.3 ng & $8.7\cdot10^{-10}$ \\
5655-1 Mass-Mo & - & 56 ng & 45 ng & 42 ng & 35 ng & 63\% \\
& & & & & & \\
5655-2 Mass-Zr & 110 mg & 59 $\mu$g & 43 ng & 15 ng & 0.1 ng & $9.1\cdot10^{-10}$ \\
5655-2 Mass-Mo & - & 260 ng & 225 ng & 239 ng & 165 ng & 63\% \\
\hline
\end{tabular}
\caption{Recoveries after four consecutive ion exchanges, two using TEVA resin and two using cation resin. Results demonstrate a 9 order of magnitude reduction in the Zr content while maintaining 63\% recovery of molybdenum. The large variation of Mo content was due to the accessory mineral titanite that was found to contain a much higher Mo concentration than the zircons. Uncertainties in mass measurement are 5\%.}
\label{tab:IonX-Table}
\end{table*}
\section{Mo isotopic composition measured by MC-ICPMS}
The low quantity of Mo recovered from the zircons, often $<$50 ng, required careful optimization to achieve high precision due to multiple isobaric interferences. The mass-independent excess of $^{96}$Mo was used to determine the amount of decay product from the $\beta\beta$-decay $^{96}$Zr $\rightarrow$ $^{96}$Mo. To measure this excess, mass-dependent fractionation, both natural and that introduced by the chemistry and instrument, must be corrected. Typically, this would be done by correcting to the accepted value of an interference-free isotope amount ratio such as n($^{97}$Mo)/n($^{95}$Mo). However, natural $^{238}$U contained in the zircons decays by spontaneous fission to $^{95,97,98,100}$Mo, affecting the expected isotope amount ratios. This leaves n($^{94}$Mo)/n($^{92}$Mo) as the only Mo isotope amount ratio unaffected by mass-independent fractionation. It was therefore critical that any remaining Zr be accurately measured to correct for interferences from $^{92,94,96}$Zr. Ruthenium was also monitored due to interferences from $^{96,98,100}$Ru.
Samples were naturally aspirated into an ESI Apex-Q desolvating nebulizer at an uptake rate of $\sim$130 $\mu$L/min. $^{92-98}$Mo isotopes were measured in cups L2-H3 on $10^{11}$ $\Omega$ resistors, while $^{90,91}$Zr isotopes were measured in cups L4 and L3 with $10^{12}$ $\Omega$ resistors. $^{99}$Ru was monitored on an ion counter (IC6). $^{100}$Mo could not be measured simultaneously due to the relative cup positions of the interferences and was not included in the analysis. Each sample was measured with 60 cycles of 2 second integrations. The integration time was chosen to limit the effect of occasional spikes in the background intensity of interfering zirconium isotopes from the desolvating nebulizer.
The measurement sessions included measurements of the Mo ICP standard diluted to 10-200 ppb, the Zr ICP standard diluted to 50-100 ppb, and mixtures of the two standards, which were used to verify the performance of the interference correction. All samples were aspirated in 0.5 M HNO$_3$, and all measurements were preceded by blank measurements of 0.5 M HNO$_3$ to correct for other constant spectral interferences, such as $^{40}$Ar$_2$$^{16}$O$^+$. Every few measurements, the 200 ppb Mo working standard was measured to provide an approximate correction for drift in instrumental mass bias. Once the performance of the laboratory standards had been verified, the purified Mo from zircons was measured at a concentration of $\sim$50 ppb in 1 mL of 0.5 M HNO$_3$.
\subsection{Data analysis}
A data analysis algorithm was developed using Mathematica\regTM to process the data. First, three filters were applied to the data: one to remove data with large spikes in Zr intensity, one to apply a 2$\sigma$ outlier test, and one to ensure the signal intensity was $>$75\% of the maximum. The latter maximizes sample efficiency: data collection can be started immediately when sample uptake begins, and cycles collected after the sample runs out are rejected. Next, a blank correction was applied by subtracting the average intensities of the HNO$_3$ blank measurements from the sample measurements, line by line.
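A minimal sketch of this filtering and blank-correction sequence is given below; the array layout, the spike threshold, and the single-pass form of the 2$\sigma$ test are illustrative assumptions rather than the actual Mathematica implementation.
\begin{verbatim}
import numpy as np

def filter_cycles(mo_int, zr_int, blank_mean):
    # mo_int: (n_cycles, n_isotopes) Mo intensities
    # zr_int: (n_cycles,) monitored Zr intensity
    # blank_mean: (n_isotopes,) averaged HNO3 blank intensities
    keep = np.ones(len(zr_int), dtype=bool)

    # (1) reject cycles with spikes in Zr intensity (5-sigma, illustrative)
    keep &= np.abs(zr_int - np.median(zr_int)) < 5 * np.std(zr_int)

    # (2) 2-sigma outlier test on the total Mo signal (single pass)
    total = mo_int.sum(axis=1)
    keep &= np.abs(total - total[keep].mean()) < 2 * total[keep].std()

    # (3) keep only cycles with >75% of the maximum signal
    keep &= total > 0.75 * total.max()

    # line-by-line blank subtraction
    return mo_int[keep] - blank_mean
\end{verbatim}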
The data were then corrected for Zr and Ru interferences using the average measured intensities at $^{90}$Zr and $^{99}$Ru for each sample. Ru content in the zircon Mo samples was $<$0.1\% relative to Mo. The Ru correction was applied using the IUPAC-published isotopic composition\cite{DeLaeter2003}, with an exponential fractionation correction applied to match the composition to the fractionation determined by standard-sample bracketing, e.g.:
\begin{equation}
r\left(\frac{^{96}\text{Ru}}{^{99}\text{Ru}}\right)_{\mathrm{cor}}=r\left(\frac{^{96}\text{Ru}}{^{99}\text{Ru}}\right)_{\mathrm{pub}}\left(\frac{m_{96}}{m_{99}}\right)^\alpha
\end{equation}
where $r_{\mathrm{cor}}$ and $r_{\mathrm{pub}}$ are the fractionation-corrected and IUPAC-published isotope abundance ratios, $m_x$ is the atomic mass of the $x$th isotope, and $\alpha$ is the fractionation exponent.
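In code, the correction is a one-line function; the sketch below applies it to the $^{96}$Ru/$^{99}$Ru ratio. The abundance and mass values are quoted from standard tables (assumptions, not from this work), and the value of $\alpha$ is purely illustrative.
\begin{verbatim}
def correct_ratio(r_pub, m_num, m_den, alpha):
    # exponential-law fractionation correction of a published ratio
    return r_pub * (m_num / m_den) ** alpha

# e.g. 96Ru/99Ru from IUPAC abundances (~5.54% and ~12.76%), masses in u
r96_99 = correct_ratio(0.0554 / 0.1276, 95.9076, 98.9059, alpha=-1.5)
\end{verbatim}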
The effect of the Zr interference correction required more careful analysis as the Zr content is as much as 10\% relative to Mo. As most of the Zr was removed by ion exchange, the remaining Zr was significantly fractionated, which affected the correction. The measured $^{91}$Zr/$^{90}$Zr of each sample was used to apply an exponential fractionation factor to the Zr correction with a limit set at $\alpha = \pm 0.2$ for the fractionation exponent. This limit, corresponding to a $^{91}$Zr/$^{90}$Zr deviation of $\pm 2\permil$, was included to restrict the size of the fractionation correction for samples with only background Zr levels, such as Mo ICP standards. Less than $\pm 1 \permil$ Zr fractionation was measured for samples containing Zr. The zirconium isotopic composition of the ICP standard was measured in the same session on the same cup configuration, which improved the accuracy of the Zr correction and allowed the $^{94}$Mo/$^{92}$Mo isotope ratio to be used to correct for mass-dependent fractionation.
\section{Measurement results}
To demonstrate the effectiveness of these techniques we look at typical results. A zircon sample from Westralian Sands Limited (WSL-5655) was measured along with a set of Mo and Zr standards and mixtures. Mo solutions with lower concentrations, and a synthetic mixture with higher Zr content, than the Mo recovered from the zircons were used to verify the accuracy of the data processing. As shown in Table \ref{tab:Delta-Table}, all measurements were within uncertainty of the initial lab standard measurement. Uncertainties ranged from tens to hundreds of ppm, and depended predominantly on the success of the Zr correction. Since the mass bias correction is based on the $^{94}$Mo/$^{92}$Mo ratio, an increased uncertainty in the $^{92,94}$Zr correction leads to a magnified uncertainty in the higher mass difference isotope ratios. This is most evident in the lowest concentration Mo standard measurement. It is also evident from the fractionation trends seen in Figure \ref{fig:Delta-plot} that the largest deviations are due to uncertainty in the mass bias correction.
\begin{table*}[t]
\centering
\begin{tabular}{llllllll}
\hline
Sample ID & $^{90}$Zr (V) & $^{95}$Mo (V) & $\delta^{95}$Mo & $\delta^{96}$Mo & $\delta^{97}$Mo & $\delta^{98}$Mo & N \\
\hline
Mo 200 ppb & 0.0006 & 7.4 & 3(10) & 8(12) & 2(13) & 6(17) & 5 \\
Mo 50 ppb & 0.0002 & 1.8 & -10(40) & -10(50) & -10(50) & -10(70) & 2 \\
Mo 10 ppb & 0.0002 & 0.4 & 10(100) & -10(120) & 60(140) & 40(170) & 1 \\
Mo 51 Zr 5 & 0.5516 & 1.9 & 0(50) & 50(60) & 0(80) & 10(100) & 2 \\
WSL-5655 (1) & 0.0045 & 0.9 & -10(40) & 40(50) & 50(60) & 20(70) & 3 \\
WSL-5655 (2) & 0.0011 & 3.2 & -1(23) & 16(26) & 13(29) & 0(40) & 3 \\
\hline
\end{tabular}
\caption{Delta Values ($^{X}$Mo/$^{92}$Mo, ppm) relative to initial Mo standard reference measurement. Fractionation is corrected to n($^{94}$Mo)/n($^{92}$Mo) of initial Mo ICP standard measurement. Note that all delta-values, including the high-Zr synthetic mixture are within 1$\sigma$ uncertainty of 0 ppm, demonstrating the efficacy of the zirconium correction.}
\label{tab:Delta-Table}
\end{table*}
\begin{figure}[htp]
\centering
\includegraphics[width=0.65\linewidth]{Graphics/Methods-DeltaPlot.pdf}
\caption{Measured isotopic compositions normalized to n($^{94}$Mo)/n($^{92}$Mo): Mo ICP standard (200 ppb - filled circle, 50 ppb - filled square, 10 ppb - filled diamond), mixture of 50 ppb Mo and 5 ppb Zr (filled triangle), and purified Mo from two WSL-5655 zircon samples (hollow circle and square). Error bars at 1$\sigma$ are included.}
\label{fig:Delta-plot}
\end{figure}
The results of these relatively high-Zr, low-Mo samples demonstrate the accuracy of the data processing algorithm. The Zr interference correction is successfully applied even when the Zr concentration is 10\% of the Mo concentration. The uncertainty is well characterized, with no results lying outside the expected uncertainty range around zero. This establishes confidence in the results of the two independently processed zircon samples of WSL-5655, which demonstrated no resolvable evidence of nuclear processes leading to mass-independent fractionation of any Mo isotopes.
\section{Limit on the half-life of $^{96}$Zr}
The half-life of the $\beta\beta$-decay of $^{96}$Zr can be determined based on the age of the measured zircon sample and the relative amount of the daughter product $^{96}$Mo compared to the parent $^{96}$Zr by applying Equations \ref{eq:ndn0} and \ref{eq:halflife}. The zircon sample was detrital, having a wide range of ages with a mean of 910(30) Ma based on N=510 measurements. The amount of Mo and Zr in the digested samples cannot be measured simultaneously by ICPMS as the smallest measurable Mo/Zr ratio was found to be 1:$10^6$. It was therefore required to measure the amounts separately, before and after the first ion exchange. This increased the uncertainty in this ratio to an estimated 20\% due to potential Mo loss during the ion exchange.
As the $^{96}$Mo excess for the zircons was found to be within 1$\sigma$ uncertainty of zero, only a lower limit can be placed on the half-life. As shown in Table \ref{tab:Halflife-Table}, despite the better precision of the WSL-5655 (2) measurement, its larger amount of common Mo lowers the limit that can be derived. The determined lower limit for the half-life of $^{96}$Zr is $T_{1/2} \geq 6.4\cdot10^{18}$ a.
\begin{table}[t]
\centering
\begin{tabular}{llll}
\hline
Sample & Mo/Zr & $\delta^{96}\text{Mo}$ & Half-life \\
& (ppm) & Upper limit (ppm) & Lower limit \\
\hline
WSL-5655 (1) & 0.19 & 90 & $6.4 \cdot 10^{18}$ a \\
WSL-5655 (2) & 2.4 & 42 & $1.1 \cdot 10^{17}$ a \\
\hline
\end{tabular}
\caption{Lower limit of $^{96}$Zr half-life based on uncertainty of $\delta(^{96}\text{Mo})$.}
\label{tab:Halflife-Table}
\end{table}
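The arithmetic from the measured quantities to the half-life limit is short enough to verify numerically. In the sketch below, the Mo:Zr ratio, the $\delta(^{96}\text{Mo})$ upper limit, and the mean age are taken from the tables and text above, while the atomic weights and natural abundances are standard reference values (assumptions, not reported in this work); the result reproduces the quoted limit to within rounding of the inputs.
\begin{verbatim}
import numpy as np

# WSL-5655 (1) inputs from the tables above
mo_zr = 0.19e-6      # Mo:Zr mass ratio (0.19 ppm)
d96   = 90e-6        # upper limit on delta(96Mo) (90 ppm)
t_age = 910e6        # mean zircon age in years

# standard reference values (assumed): atomic weights and abundances
Aw_Zr, Aw_Mo = 91.224, 95.95
C96Mo, C96Zr = 0.1668, 0.0280

nd_n0 = mo_zr * (Aw_Zr / Aw_Mo) * (C96Mo / C96Zr) * d96  # N_d/N_0
t_half = -t_age * np.log(2) / np.log(1 - nd_n0)          # half-life limit
print(f"T_1/2 >= {t_half:.1e} a")   # ~6e18 a, cf. 6.4e18 a quoted above
\end{verbatim}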
To directly measure the half-life of $^{96}$Zr, a large high-purity sample (a few grams) of older zircons with an age over 2 Ga would be required. The results of this study demonstrate that it will be possible to determine the half-life with precision similar to the previous direct counting measurement\cite{Argyriades2010}. This procedure eliminates the need for the assumptions used to estimate the non-radiogenic Mo content in zircon in the geochemical measurement by Wieser and DeLaeter\cite{Wieser2001}, which will allow an understanding of the discrepancy between these two measurements.
\section{Conclusions}
The techniques developed here enable high-precision measurements of trace Mo in ancient zircons: the chemical separation and data analysis allow measurements on very limited samples ($<50$ ng Mo from 0.5 g zircon) by reducing the Zr content by $>9$ orders of magnitude while retaining 63\% of the Mo. The flexibility of TEVA and cation resins, which combined have selectivity for most of the periodic table, allows these techniques to be applied to the measurement of other trace elements in zircon free of matrix effects and isobaric interferences. Further, the ``online'' calibration of the ion exchange elution allows for diagnostics and refinement of ion exchange procedures, optimizing the acid volumes required for separation to maximize sample recovery while minimizing the chemistry blank. The application of data filters and enhanced background and mass bias corrections has led to a significant improvement in measurement precision, often more than an order of magnitude over typical data evaluation outputs. These algorithms can be applied to any isotopic system, particularly when isobaric interferences are present and sample material is limited. Isotopic composition measurements of highly trace elements in ancient zircon will extend the application of state-of-the-art geochemical tracers to understanding early Earth evolution, long-lived nuclear processes, and cosmochemistry.
\begin{acknowledgments}
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Spinning motions of rigid bodies have been studied for centuries and
still are drawing interest in recent years, including the motions of
Euler's disks \cite{Moffatt2000}, spinning eggs \cite{Moffatt2002}, and
rolling rings \cite{Jalali2015}, to mention just a few. Also,
macroscopic systems which convert vibrations to rotations have been
studied in various contexts, such as a circular granular ratchet
\cite{Heckel2012}, and bouncing dumbbells, which show a cascade of
bifurcations \cite{Kubo2015}. Another interesting example of rigid body
dynamics which involves such oscillation-rotation coupling is a
rattleback, also called a celt or wobble stone, which is a
semi-elliptic spinning toy [Fig.~\ref{fig:notation}(a)]. It spins
smoothly when spun in one direction; however, when spun in the other
direction, it soon starts wobbling or rattling about its short axis and
stops spinning, then it starts to rotate in the opposite direction. One
who has studied classical mechanics must be amazed by this reversal of
spin, because it seems to violate angular momentum conservation, and
because chirality emerges from a seemingly symmetric object.
There are three requirements for a rattleback to show this reversal of
rotation: (1) the two principal curvatures of the lower surface should
be different, (2) the two horizontal principal moments of inertia
should also be different, and (3) the principal axes of inertia should
be misaligned to the principal directions of curvature. These
characteristics induce the coupling between the spinning motion and the
two oscillations: the pitching about the short horizontal axis and the
rolling about the long horizontal axis. The coupling is asymmetric,
i.e., the two oscillations produce torques around the spin axis with
opposite signs. This also means
that either the pitching or the rolling is excited depending on the
direction of the spinning. We will see that the spinning couples with
the pitching much more strongly than with the rolling; therefore, it
takes a much longer time for spin reversal in one direction than in the
other direction, which is why most rattlebacks reverse only one way
before dissipation stops them.
In the 1890s, a meteorologist, Walker, performed the first quantitative
analysis of the rattleback motion \cite{Walker1896}. Under the
assumptions that the rattleback does not slip at the contact point and
that the rate of spinning speed changes much slower than other time
scales, he linearized the equations of motion and showed that
either the pitching or the rolling becomes unstable depending on the
direction of the spin. More detailed analyses were performed by Bondi
\cite{Bondi1986}, and recently by Wakasugi \cite{WakasugiH23}. Case and
Jalal \cite{Case2014} derived the growth rate of instability at slow
spinning. Markeev \cite{Markeev1984}, Pascal \cite{Pascal1983}, and
Blackowiak et al. \cite{Blackowiak1997} obtained the equations of
the spin motion by extracting the slowly varying amplitudes of the fast
oscillations of the pitching and the rolling. Moffatt and Tokieda
\cite{MoffattTokieda2008} derived similar equations to those of Markeev
\cite{Markeev1984} and Pascal \cite{Pascal1983}, and pointed out the
analogy to the dynamo theory. Garcia and Hubbard
\cite{GarciaHubbard1988} obtained the expressions of the averaged
torques generated by the pure pitching and the rolling, and derived the
formula for spin reversal time.
As the first numerical study, Kane and Levinson \cite{Kane1982}
simulated the energy-conserving equations and showed that the rattleback
changes its spinning direction indefinitely for certain parameter values
and initial conditions. They also demonstrated the coupling between the
oscillations and the spinning by showing that it starts to rotate when
it begins with pure pitching or rolling, but the direction of the
rotation is different between pitching and rolling. Similar simulations
were performed by Lindberg and Longman independently
\cite{Lindberg1983}. Nanda \textit{et al.} simulated the spin resonance of the
rattleback on a vibrating base \cite{Nanda2016}.
Energy-conserving dynamical systems usually conserve the phase volume,
but the present rattleback dynamics does not explore the whole phase
volume with a given energy because of a non-holonomic constraint due to
the no-slip condition. Therefore, the Liouville theorem does not hold,
and such a system has been shown to behave much like dissipative
systems. Borisov and Mamaev in fact reported the existence of a ``strange
attractor'' for certain parameter values in the present system
\cite{Borisov2003}. The no-slip rattleback system has been actively
studied in the context of chaotic dynamics during the last decade
\cite{Borisov2006,Borisov2014}.
Effects of dissipation at the contact point have been investigated in
several works. Magnus \cite{Magnus1974} and Karapetyan
\cite{Karapetyan1981} incorporated a viscous type of friction force
proportional to the velocity. Takano \cite{Takano2014} determined the
conditions under which the reversal of rotation occurs with the viscous
dissipation. Garcia and Hubbard \cite{GarciaHubbard1988} simulated
equations with aerodynamic force, Coulomb friction in the spinning, and
dissipation due to slippage, then they compared the results with a real
rattleback. The dissipative rattleback models based on the contact
mechanics with Coulomb friction have been developed by
Zhuravlev and Klimov \cite{Zhuravlev2008} and Kudra and Awrejcewicz
\cite{Awrejcewicz2012, Kudra2013, Kudra2015}.
This paper is organized as follows. In the next section, we
reformulate the rattleback dynamics under the no-slip and no dissipation
condition in a physically transparent way. In the small-spin and
small-oscillation approximation, the dynamics is reduced to a
simplified three-variable dynamics. We then focus on the time required
for reversal, or what we call \textit{the time for reversal},
which is the most evident quantity that characterizes
rattlebacks, and obtain a concise expression for the Garcia-Hubbard
formula for the time for reversal \cite{GarciaHubbard1988}. In
Sec.~\ref{sec:simulation}, the results of the extensive numerical
simulations are presented for various model parameters and initial
conditions in order to examine the validity and the limitation of the
theory. Discussions and conclusion are given in
Sec.~\ref{sec:discussion} and Sec.~\ref{sec:conclusion}, respectively.
\section{Theory}
\label{sec:theory}
\subsection{Equations of motion}
\begin{figure}
\includegraphics[width=7cm]{1.pdf}
\caption{\label{fig:notation}(a) A commercially available rattleback made of plastic. (b) Notations of the rattleback. (c) A schematic illustration of the shell-dumbbell model.}
\end{figure}
We consider a rattleback as a rigid body, whose configuration can be
represented by the position of the center of mass G and the Euler
angles; both of them are obtained by integrating the velocity of the
center of mass $\bm{v}$ and the angular velocity $\bm{\omega}$ around it
\cite{Goldstein2002}.
We investigate the rattleback motion on a horizontal plane, assuming
that it is always in contact with the plane at a single point C without
slipping. We ignore dissipation; then the only forces that act on
the rattleback are the contact force $\bm{F}$ exerted by the plane
at C and the gravitational force $-Mg\bm{u}$, where $\bm{u}$ represents
the unit vertical vector pointing upward
[Fig.~\ref{fig:notation}(b)]. Therefore, the equations of motion are
given by
\begin{align}
\frac{d(M\bm{v})}{dt} &= \bm{F} - Mg\bm{u},\label{eq:em-1}\\
\frac{d(\hat{I}\bm{\omega})}{dt} &= \bm{r} \times \bm{F}, \label{eq:em-2}
\end{align}
where $M$ and $\hat{I}$ are the mass and the inertia tensor around G,
respectively, and $\bm{r}$ is the vector from G to the contact point C.
The contact force $\bm{F}$ is determined by the conditions of the
contact point; our assumptions are that (1) the rattleback is always in
contact at a point with the plane, and (2) there is no slip at the
contact point. The second constraint is represented by the relation
\begin{equation}
\bm{v} = \bm{r} \times \bm{\omega}.\label{eq:no-slip}
\end{equation}
Before formulating the constraint (1), we specify the co-ordinate
system. We employ the body-fixed co-ordinate with the origin being the
center of mass G, and the axes being the principal axes of inertia; the
$z$ axis is the one close to the spinning axis pointing downward, and
the $x$ and $y$ axes are taken such that $I_{xx} > I_{yy}$
(Fig. \ref{fig:coordinate}).
In this co-ordinate, the lower surface function of the rattleback is assumed to be given by
\begin{equation}
f(x,y,z) = 0,\label{eq:def-z}
\end{equation}
where
\begin{equation}
f(x,y,z) \equiv \frac{z}{a} - 1 + \frac{1}{2a^2}(x,\,y)\hat{R}(\xi)\hat{\Theta}\hat{R}^{-1}(\xi)\begin{pmatrix}x\\y\\\end{pmatrix},
\end{equation}
with
\begin{align}
\hat{R}(\xi) \equiv
\begin{pmatrix}
\cos\xi & -\sin\xi \\
\sin\xi & \cos\xi \\
\end{pmatrix}, \quad
\hat{\Theta} \equiv
\begin{pmatrix}
\theta & 0 \\
0 & \phi \\
\end{pmatrix}.
\end{align}
Here $a$ is the distance between G and the surface at $x=y=0$, and $\xi$ is the \textit{skew angle} by which the principal directions of curvature are rotated from the $x$-$y$ axes, which we choose as the principal axes of inertia (Fig. \ref{fig:coordinate}). $\theta/a$ and $\phi/a$ are the principal curvatures at the bottom, namely at $(0,0,a)^{t}$.
Now, we can formulate the contact point condition (1); the components of
the contact point vector $\bm{r}$ should satisfy Eq.~(\ref{eq:def-z}),
and the normal vector of the surface at C should be parallel to the
vertical vector $\bm{u}$. Thus we have
\begin{equation}
\bm{u} \parallel \nabla f,
\end{equation}
which gives the relation
\begin{equation}
\frac{\bm{r}_{\perp}}{a} = \frac{1}{u_{z}} \hat{R}(\xi)\hat{\Theta}^{-1}\hat{R}^{-1}(\xi)\bm{u}_{\perp} \label{eq:def-xy},
\end{equation}
where $\bm{a}_{\perp}$ represents the $x$ and $y$ components of a
vector $\bm{a}$ in the body-fixed co-ordinate.
Before we proceed, we introduce a dotted derivative of a vector $\bm{a}$
defined as the time derivative of the vector components in the
body-fixed co-ordinate. This is related to the time derivative by
\begin{equation}
\frac{d\bm{a}}{dt} = \dot{\bm{a}} + \bm{\omega} \times \bm{a}.
\end{equation}
Note that the vertical vector $\bm{u}$ does not depend on time, thus we have
\begin{equation}
\frac{d\bm{u}}{dt} = \dot{\bm{u}} + \bm{\omega} \times \bm{u} = \bm{0}. \label{eq:diff-u}
\end{equation}
These conditions, i.e., the no-slip condition (\ref{eq:no-slip}), the conditions of the contact point (\ref{eq:def-z}) and (\ref{eq:def-xy}), and the vertical vector condition (\ref{eq:diff-u}), close the equations of motion (\ref{eq:em-1}) and (\ref{eq:em-2}).
Following Garcia and Hubbard \cite{GarciaHubbard1988}, we describe the rattleback dynamics by $\bm{u}$ and $\bm{\omega}$.
The evolution of $\bm{\omega}$ is obtained as
\begin{multline}
\hat{I} \dot{\bm{\omega}} - M\bm{r} \times (\bm{r} \times \dot{\bm{\omega}})
= - \bm{\omega} \times (\hat{I}\bm{\omega}) \\
+ M\bm{r}\times(\dot{\bm{r}}\times \bm{\omega} + \bm{\omega}\times (\bm{r} \times \bm{\omega})) + Mg\bm{r}\times\bm{u} \label{eq:diff-omega}
\end{multline}
by eliminating the contact force $\bm{F}$ from the equations of motion (\ref{eq:em-1}) and (\ref{eq:em-2}), and using the no-slip condition (\ref{eq:no-slip}).
The state variables $\bm{u}$ and $\bm{\omega}$ can be determined by Eqs.~(\ref{eq:diff-u}) and (\ref{eq:diff-omega}) with the contact point conditions (\ref{eq:def-z}) and (\ref{eq:def-xy}).
\begin{figure}
\includegraphics[width=8cm]{2r.pdf}
\caption{(color online) \label{fig:coordinate}A body-fixed co-ordinate viewed from below. The dashed lines indicate the principal directions of curvature, rotated by $\xi$ from the principal axes of inertia (the $x$-$y$ axes).}
\end{figure}
The rattleback is characterized by the inertial parameters $M$,
$I_{xx}$, $I_{yy}$, $I_{zz}$, the geometrical parameters $\theta$,
$\phi$, $a$, and the skew angle $\xi$. For the stability of the
rattleback, both of the dimensionless curvatures $\theta$ and
$\phi$ should be smaller than $1$; without loss of generality, we assume
\begin{equation}
0 < \phi < \theta < 1,
\end{equation}
then, it is enough to consider
\begin{equation}
-\frac{\pi}{2} < \xi < 0,
\end{equation}
for the range of the skew angle $\xi$. The positive $\xi$ case can be obtained by the reflection with respect to the $x$-$z$ plane.
At this stage, we introduce the dimensionless inertial parameters
$\alpha$, $\beta$, and $\gamma$ for later use after Bondi
\cite{Bondi1986} as
\begin{equation}
\alpha \equiv \frac{I_{xx}}{Ma^{2}}+1, \ \beta \equiv \frac{I_{yy}}{Ma^{2}}+1, \ \gamma \equiv \frac{I_{zz}}{Ma^2},\label{eq:def-abg}
\end{equation}
which are dimensionless inertial moments around the contact
point C. Note that
\begin{equation}
\alpha > \beta > 1,
\end{equation}
because we have assumed $I_{xx} > I_{yy}$.
\subsection{Small amplitude approximation of oscillations under $\omega_{z}=0$}
\label{subsec:linearization}
We consider the oscillation modes in the case of no spinning
$\omega_{z} = 0$ in the small amplitude approximation, namely, in the
linear approximation in $|\omega_{x}|,\,|\omega_{y}|\ll\sqrt{g/a}$,
which leads to $|x|,\,|y| \ll a$, $|u_{x}|,\, |u_{y}| \ll 1 $, and
$u_{z} \approx -1$.
In this regime, the $x$ and $y$ components of Eq.~(\ref{eq:diff-u}) can
be linearized as
\begin{equation}
\dot{\bm{u}}_{\perp} \approx \hat{\varepsilon}\,\bm{\omega}_{\perp}, \quad
\hat{\varepsilon} \equiv
\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
\end{pmatrix} = \hat{R}(-\pi/2).\label{eq:lin-u}
\end{equation}
By using Eq.~(\ref{eq:def-xy}) with $u_{z} \approx -1$, Eq.~(\ref{eq:diff-omega}) can be linearized as
\begin{align}
\hat{J}\,\dot{\bm{\omega}}_{\perp} &\approx \frac{g}{a^2} (\bm{r}\times\bm{u})_{\perp}\notag\\
&= -\frac{g}{a}\hat{\varepsilon}\,[-\hat{R}(\xi)\hat{\Theta}^{-1}\hat{R}^{-1}(\xi) +1 ] \bm{u}_{\perp},\label{eq:lin-omg1}
\end{align}
with the inertial matrix
\begin{equation}
\hat{J} \equiv
\begin{pmatrix}
\alpha & 0 \\
0 & \beta \\
\end{pmatrix}.
\end{equation}
From the linearized equations (\ref{eq:lin-u}) and (\ref{eq:lin-omg1}), we obtain
\begin{equation}
\hat{J}\ddot{\bm{\omega}}_{\perp}= - \frac{g}{a}(\hat{\Gamma} -1) \bm{\omega}_{\perp},\label{eq:lin-omg2}
\end{equation}
where
\begin{equation}
\hat{\Gamma} \equiv \hat{R}(\xi +\pi/2) \hat{\Theta}^{-1}\hat{R}^{-1}(\xi+\pi/2).
\end{equation}
At this point, it is convenient to introduce the bra-ket notation for the row and column vector of $\bm{\omega}_{\perp}$ as $\bra{\omega_{\perp}}$ and $\ket{\omega_{\perp}}$, respectively. With this notation, Eq.~(\ref{eq:lin-omg2}) can be put in the form of
\begin{align}
\ket{\ddot{\tilde{\omega}}_{\perp}}= -\hat{H}\ket{\tilde{\omega}_{\perp}},
\label{eq:lin-omg3}
\end{align}
with
\begin{align}
\ket{\tilde{\omega}_{\perp}} \equiv \hat{J}^{1/2}\ket{\omega_{\perp}},
\quad \hat{H} \equiv \frac{g}{a}\hat{J}^{-1/2}(\hat{\Gamma} -1 )\hat{J}^{-1/2},
\label{eq:lin-omg4}
\end{align}
where $\hat{H}$ is symmetric. The eigenvalue equation
\begin{align}
\hat{H} \ket{\tilde{\omega}_{j}}= \omega_{j}^2 \ket{\tilde{\omega}_{j}}
\label{eq:def-omgpr}
\end{align}
determines the two oscillation modes with $j=p$ or $r$, whose frequencies are given by
\begin{equation}
\omega_{p,r}^2 = \frac{1}{2}\left[(H_{11}+H_{22})
\pm \sqrt{(H_{11}-H_{22})^2 + 4H_{12}^2}\right]
\label{eq:def-omega_pr}
\end{equation}
with
\begin{equation}
\omega_{p} \ge \omega_{r}.
\label{ineq:omega_pr}
\end{equation}
Here, $H_{ij}$ denotes the $ij$ component of $\hat H$.
The orthogonal condition for the eigenvectors
$\ket{\tilde{\omega}_{p}}$ and $\ket{\tilde{\omega}_{r}}$ can be
written using $\hat{\varepsilon}$ as
\begin{align}
\ket{\tilde{\omega}_{p}} &= \hat{\varepsilon} \ket{\tilde{\omega}_{r}},\quad
\ket{\tilde{\omega}_{r}} = -\hat{\varepsilon} \ket{\tilde{\omega}_{p}}, \\
\bra{\tilde{\omega}_{r}} &= \bra{\tilde{\omega}_{p}}\hat{\varepsilon}, \quad
\bra{\tilde{\omega}_{p}} = -\bra{\tilde{\omega}_{r}}\hat{\varepsilon}.
\end{align}
In the case of zero skew angle, $\xi=0$, we have
\begin{align}
\omega_p^2 &=\left({g\over a}\right){1/\phi-1\over\alpha}\equiv \omega_{p0}^2,
\label{def:omega_p0}
\\
\omega_r^2 &=\left({g\over a}\right){1/\theta-1\over\beta}\equiv \omega_{r0}^2,
\label{def:omega_r0}
\end{align}
and the eigenvectors $\ket{\omega_{p}}$ and $\ket{\omega_{r}}$ are
parallel to the $x$ and the $y$ axes, thus these modes correspond to the
pitching and the rolling oscillations, respectively. This correspondence
holds for $|\xi|\ll 1$ and $\omega_{p0}>\omega_{r0}$, as is the case for
typical rattleback parameters; this is the case we mainly discuss in the following
\footnote{Note that in the atypical case of
$\omega_{p0}<\omega_{r0}$, i.e. the pitching is slower than
the rolling, we have $\omega_p\approx\omega_{r0}$ and
$\omega_r\approx\omega_{p0}$ for $|\xi|\ll 1$ because $\omega_p>\omega_r$
by Eq.~(\ref{ineq:omega_pr}).}.
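Since $\hat H$ is a symmetric $2\times 2$ matrix, the mode frequencies are easy to evaluate numerically. The following Python sketch (all parameter values are illustrative assumptions, not fitted to a real rattleback) builds $\hat H$ and compares Eq.~(\ref{eq:def-omega_pr}) with the $\xi=0$ limits $\omega_{p0}$ and $\omega_{r0}$.
\begin{verbatim}
import numpy as np

g_over_a = 9.8 / 0.02      # g/a for a ~ 2 cm (illustrative)
alpha, beta = 5.0, 2.0     # dimensionless inertia, alpha > beta > 1
theta, phi = 0.8, 0.1      # dimensionless curvatures, phi < theta < 1
xi = -0.05                 # small negative skew angle

def R(x):
    return np.array([[np.cos(x), -np.sin(x)],
                     [np.sin(x),  np.cos(x)]])

Gamma = R(xi + np.pi/2) @ np.diag([1/theta, 1/phi]) @ R(xi + np.pi/2).T
Jmh = np.diag([alpha**-0.5, beta**-0.5])          # J^{-1/2}
H = g_over_a * Jmh @ (Gamma - np.eye(2)) @ Jmh

w2_p, w2_r = sorted(np.linalg.eigvalsh(H), reverse=True)
print(np.sqrt(w2_p), np.sqrt(w2_r))               # omega_p > omega_r
print(np.sqrt(g_over_a * (1/phi - 1) / alpha),    # omega_p0 (xi = 0)
      np.sqrt(g_over_a * (1/theta - 1) / beta))   # omega_r0 (xi = 0)
\end{verbatim}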
\subsection{Garcia and Hubbard's theory for the time for reversal}
\label{subsec:GHtheory}
Based on our formalism, it is quite straightforward to derive Garcia and Hubbard's formula for the reversal time of rotation.
\subsubsection{Asymmetric torque coefficients}
Due to the skewness, the pitching and the rolling are coupled with the
spinning motion. We examine this coupling in the case of
$\omega_{z} = 0$ by estimating the averaged torques around the vertical
axis caused by the pitching and the rolling oscillations. From
Eqs.~(\ref{eq:em-1}) and (\ref{eq:em-2}) and the no-slip condition
Eq.~(\ref{eq:no-slip}), the torque around $\bm{u}$ is given by
\begin{align}
T &\equiv \bm{u}\cdot(\bm{r} \times \bm{F})
\approx - Ma^2 [\dot{\bm{\omega}}_{\perp}\cdot\hat{\varepsilon}(\hat{\Gamma} - 1)\hat{\varepsilon} \,\bm{u}_{\perp}\,],\label{eq:ave-torque}
\end{align}
within the linear approximation in $\bm{\omega}_{\perp}$, $\bm{u}_{\perp}$, and $\bm{r}_{\perp}$ discussed in Sec.~\ref{subsec:linearization}.
We define the \textit{asymmetric torque coefficients} $K_{p} $ and $K_{r}$ for each mode by
\begin{equation}
-K_{p} \equiv \frac{\overline{T}_{p}}{\overline{E}_{p}}, \qquad
K_{r} \equiv \frac{\overline{T}_{r}}{\overline{E}_{r}},
\label{eq:def-kpr}
\end{equation}
where $\overline{T}_{j}\ (j=p\text{~or~}r)$ is the averaged torque over the oscillation period generated by each mode, and $\overline{E}_{j}$ is the corresponding averaged oscillation energy which can be estimated within the linear approximation as
\begin{align}
\overline{E} &\approx Ma^2 (\alpha \overline{\omega_{x}^2} + \beta \overline{\omega_{y}^2} ). \label{eq:ave-ene}
\end{align}
The minus sign for the definition of $K_{p}$ is inserted in order that both $K_{p}$ and $K_{r}$ should be positive for typical rattleback parameters as can be seen below. Note that the asymmetric torque coefficients are dimensionless.
From Eqs.~(\ref{eq:ave-torque}) and (\ref{eq:ave-ene}), $-K_{p}$ is given by
\begin{align}
-K_{p} &= \frac{\braket{\omega_{p}|\,\hat{\varepsilon}(\hat{\Gamma} - 1)\hat{\varepsilon}\hat{\varepsilon}\,|\omega_{p}}}{\braket{ \omega_{p}|\hat{J}|\omega_{p}}}\notag\\
&= -\frac{(a/g)\braket{\tilde{\omega}_{p}|\,\hat{J}^{-1/2}\hat{\varepsilon}\hat{J}^{1/2}\hat{H}\,|\tilde{\omega}_{p}}}{\braket{\tilde{\omega}_{p}|\tilde{\omega}_{p}}}\label{eq:kp-1}\\
&= - \omega_{p}^2 \,\frac{(a/g)\braket{\tilde{\omega}_{p}|\,\hat{J}^{-1/2}\hat{\varepsilon}\hat{J}^{1/2}\,|\tilde{\omega}_{p}}}{\braket{\tilde{\omega}_{p}|\tilde{\omega}_{p}}}.
\label{eq:kp-2}
\end{align}
In the same way, $K_{r}$ is given by
\begin{align}
K_{r} &= -\frac{(a/g)\braket{\tilde{\omega}_{r}|\hat{J}^{-1/2}\hat{\varepsilon}\hat{J}^{1/2}\hat{H}|\tilde{\omega}_{r}}}{\braket{\tilde{\omega}_{r}|\tilde{\omega}_{r}}}\label{eq:kr-1}\\
&= \omega_{r}^2 \,\frac{(a/g)\braket{\tilde{\omega}_{p}|(\hat{J}^{-1/2}\hat{\varepsilon}\hat{J}^{1/2})^{\dagger}|\tilde{\omega}_{p}}}{\braket{\tilde{\omega}_{p}|\tilde{\omega}_{p}}}.
\label{eq:kr-2}
\end{align}
Equations (\ref{eq:kp-1})--(\ref{eq:kr-2}) yield simple relations for $K_{p}$ and $K_{r}$ as
\begin{align}
\frac{K_{p}}{K_{r}} = \frac{\omega_{p}^2}{\omega_{r}^{2}} \label{eq:k-rat}
\end{align}
and
\begin{align}
K_{p} - K_{r}&= \frac{(a/g)}{\braket{\tilde{\omega}_{p}|\tilde{\omega}_{p}}}
\mathrm{Tr}\left[\hat{J}^{-1/2}\hat{\varepsilon}\hat{J}^{-1/2}\hat{H}\right]\notag\\
& = -\frac{1}{2}\sin(2\xi)\left(\frac{1}{\beta} - \frac{1}{\alpha}\right)\left(\frac{1}{\phi} - \frac{1}{\theta}\right). \label{eq:k-diff}
\end{align}
Equations (\ref{eq:k-rat}) and (\ref{eq:k-diff}) are enough to determine
\begin{align}
K_{p} = -\frac{1}{2}\sin(2\xi)\left(\frac{1}{\beta} - \frac{1}{\alpha}\right)\left(\frac{1}{\phi} - \frac{1}{\theta}\right) \frac{\omega_{p}^2}{\omega_{p}^2 - \omega_{r}^2},\label{eq:Kp}\\
K_{r} = -\frac{1}{2}\sin(2\xi)\left(\frac{1}{\beta} - \frac{1}{\alpha}\right)\left(\frac{1}{\phi} - \frac{1}{\theta}\right) \frac{\omega_{r}^2}{\omega_{p}^2 - \omega_{r}^2}\label{eq:Kr}.
\end{align}
Note that Eqs.~(\ref{eq:Kp}) and (\ref{eq:Kr}) are consistent with the
three requirements of rattlebacks: $\xi \neq 0$, $\alpha \neq \beta$,
and $\theta \neq \phi$.
Equations (\ref{eq:Kp}) and (\ref{eq:Kr}) are shown to be equivalent to the
corresponding expressions Eq.~(42a,b) in Garcia and Hubbard
\cite{GarciaHubbard1988}, although their expressions look quite involved.
These results also show that
\begin{equation}
K_{p}K_{r} > 0 \quad \text{and hence} \quad \overline{T}_{p}\overline{T}_{r} <0,
\end{equation}
namely, the torques generated by the pitching and the rolling always
have opposite signs to each other.
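Reusing the variables of the numerical sketch at the end of Sec.~\ref{subsec:linearization}, the coefficients can be evaluated directly from Eqs.~(\ref{eq:Kp}) and (\ref{eq:Kr}); with the illustrative parameters chosen there, both come out positive and satisfy $K_p/K_r=\omega_p^2/\omega_r^2$.
\begin{verbatim}
# asymmetric torque coefficients from the closed-form expressions above,
# reusing xi, alpha, beta, theta, phi, w2_p, w2_r (illustrative values)
pref = -0.5 * np.sin(2*xi) * (1/beta - 1/alpha) * (1/phi - 1/theta)
K_p = pref * w2_p / (w2_p - w2_r)
K_r = pref * w2_r / (w2_p - w2_r)
print(K_p, K_r, K_p / K_r, w2_p / w2_r)   # K_p, K_r > 0; ratios agree
\end{verbatim}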
\subsubsection{Typical rattleback parameters}
Typical rattleback parameters fall in the region that satisfies
the following two conditions:
(1) the skew angle is small,
\begin{equation}
|\xi| \ll 1,
\end{equation}
and (2) the pitch frequency is higher than the roll frequency. Under
these conditions, the modes $p$ and $r$ of Eq.~(\ref{eq:def-omgpr})
correspond to the pitching and the rolling oscillations respectively, and
\begin{equation}
\omega_{p}^2 \approx \omega_{p0}^2,\qquad
\omega_{r}^2 \approx \omega_{r0}^2
\label{eq:omg_pr-approx-omgpr0}
\end{equation}
in accord with the inequality (\ref{ineq:omega_pr}) \cite{Note1}.
From Eqs.~(\ref{eq:def-kpr}), (\ref{eq:Kp}), and (\ref{eq:Kr}), the signs
of the asymmetric torque coefficients and the averaged torques for
typical rattlebacks are given by
\begin{equation}
K_{p}>0 \quad \text{and}\quad K_{r}>0, \label{eq:sign-typ-k}
\end{equation}
and
\begin{equation}
\overline{T}_{p}<0 \quad \text{and}\quad \overline{T}_{r}>0,
\end{equation}
by noting $\xi<0$, $\alpha > \beta$, $\theta >\phi$.
The fact that $\omega_{p0}>\omega_{r0}$ for a typical rattleback
means that the shape factor, $1/\phi-1$ or $1/\theta-1$,
contributes much more than the inertial factor, $1/\alpha$
or $1/\beta$, in
Eqs.~(\ref{def:omega_p0}) and (\ref{def:omega_r0}), although these two factors compete, i.e.,
$1/\phi-1>1/\theta-1$ and $1/\alpha<1/\beta$. This is a
typical situation because the two curvatures of usual rattlebacks are
markedly different, i.e., $\phi \ll \theta < 1$ as can be seen in
Fig.~\ref{fig:notation}(c).
Moreover, we can show that the pitch frequency is always higher for an
ellipsoid with a uniform mass density whose surface is given by $x^2/c^2
+ y^2/b^2+ z^2/a^2 = 1\ (b^2 > c^2 > a^2)$. This also holds for a
semi-ellipsoid for $b^2 > c^2 > (5/8)a^2$, where the co-ordinate system
is the same as the ellipsoid.
\subsubsection{Time for reversal}
Now we study the time evolution of the \textit{spin} $n$ defined as the
vertical component of the angular velocity
\begin{equation}
n \equiv \bm{u}\cdot\bm{\omega},
\end{equation}
assuming that the expressions for the asymmetric torque coefficients,
$K_p$ and $K_r$, obtained above are valid even when $\omega_{z}\ne 0$.
We consider the quantities $\overline{n}$, $\overline{E}_p$, and
$\overline{E}_r$, averaged over the time scale much longer than the
oscillation periods, yet much shorter than the time scale for spin
change.
Then, these averaged quantities should follow the following evolution equations:
\begin{align}
I_{\textrm{eff}} \frac{d\overline{n}(t)}{dt} &= - K_{p} \overline{E}_{p}(t) + K_{r} \overline{E}_{r}(t) , \label{eq:diff-ne-1}\\
\frac{d \overline{E}_{p}(t)}{dt} & = K_{p} \overline{n}(t) \overline{E}_{p}(t),\label{eq:diff-ne-2}\\
\frac{d \overline{E}_{r}(t)}{dt} & = - K_{r} \overline{n}(t) \overline{E}_{r}(t) \label{eq:diff-ne-3}.
\end{align}
Here, $I_{\textrm{eff}}$ is the effective moment of inertia around $\bm{u}$ in
the presence of the oscillations, and is assumed to be constant; it should be close to $I_{zz}$.
As can be seen easily, the total energy $E_{\textrm{tot}}$ defined by
\begin{equation}
E_{\textrm{tot}} \equiv \frac{1}{2} I_{\textrm{eff}} \overline{n}(t)^2 + \overline{E}_{p}(t) + \overline E_{r}(t)
\end{equation}
is conserved. It can be seen that there is another invariant,
\begin{equation}
C_{I} \equiv \frac{1}{K_{p}}\ln \overline{E}_{p} + \frac{1}{K_{r}}\ln \overline{E}_{r},\label{eq:casimir}
\end{equation}
which has been discussed in connection with a Casimir invariant
\cite{MoffattTokieda2008, Yoshida2016}. With these two
conserved quantities, general solutions of the three-variable system
(\ref{eq:diff-ne-1})--(\ref{eq:diff-ne-3}) should be periodic.
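The averaged system (\ref{eq:diff-ne-1})--(\ref{eq:diff-ne-3}) is also easy to integrate directly. The sketch below uses illustrative values of $K_p$, $K_r$, and $I_{\textrm{eff}}$ (assumptions chosen only to mimic $K_p\gg K_r$) and verifies numerically that $E_{\textrm{tot}}$ and $C_I$ stay constant while $\overline n$ changes sign periodically.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

K_p, K_r, I_eff = 0.5, 0.05, 1.0     # illustrative, K_p >> K_r

def rhs(t, y):
    n, Ep, Er = y
    return [(-K_p * Ep + K_r * Er) / I_eff,   # spin
            K_p * n * Ep,                     # pitching energy
            -K_r * n * Er]                    # rolling energy

sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 1e-3, 1e-3],
                rtol=1e-10, atol=1e-13, max_step=0.1)

n, Ep, Er = sol.y
E_tot = 0.5 * I_eff * n**2 + Ep + Er        # conserved
C_I = np.log(Ep) / K_p + np.log(Er) / K_r   # Casimir-like invariant
print(np.ptp(E_tot), np.ptp(C_I))           # both ~ 0 (numerical error)
print(sol.t[np.argmax(n < 0)])              # first spin reversal
\end{verbatim}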
Let us consider the case where the spin is positive at $t=0$ and the sum of the oscillation energies is small compared to the spinning energy:
\begin{equation}
\overline{n}(0) \equiv n_i >0, \quad
\overline{E}_{p}(0) + \overline{E}_{r}(0) \ll \frac{1}{2} I_{\textrm{eff}} n_{i}^2.
\end{equation}
For a typical rattleback, the pitching develops and the rolling decays as long
as $\overline n>0$, as can be seen from Eqs.~(\ref{eq:sign-typ-k}), (\ref{eq:diff-ne-2}) and (\ref{eq:diff-ne-3}). Thus the rolling is irrelevant and can be ignored, i.e., we set $\overline E_r(t) = 0$, to estimate the time for reversal. Then we can derive the equation
\begin{equation}
\frac{d\overline{n}(t)}{dt} = -\frac{K_{p}}{2}\left(n_0^2- \overline{n}(t)^2\right) \label{eq:diff-n},
\end{equation}
where the constant $n_0>0$ is defined by
\begin{equation}
\frac{1}{2}I_{\textrm{eff}} n_{0}^2 \equiv E_{\textrm{tot}}.
\end{equation}
This can be easily solved as
\begin{equation}
\overline{n}(t) = n_{0}\frac{(n_{0} + n_{i})\exp(-n_{0}K_{p}t) - (n_{0} -n_{i}) }{(n_{0} + n_{i})\exp(-n_{0}K_{p}t) + (n_{0} -n_{i})},
\label{eq:gh-solution-p}
\end{equation}
and we obtain the time for reversal $t_{rGH+}$ for the $n_{i} > 0$ case as
\begin{equation}
t_{rGH+} = \frac{1}{n_0 K_p} \ln\left(\frac{n_{0}+n_{i}}{n_0 - n_i}\right),
\label{eq:trgh-p}
\end{equation}
by just setting $\overline{n}=0$ in Eq.~(\ref{eq:gh-solution-p}).
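Note that Eq.~(\ref{eq:trgh-p}) can equivalently be written as
$t_{rGH+} = \frac{2}{n_{0}K_{p}}\,\mathrm{artanh}(n_{i}/n_{0})$.
In particular, for a small initial spin, $n_{i}\ll n_{0}$, it reduces to
$t_{rGH+}\approx 2n_{i}/(n_{0}^{2}K_{p})$, while it diverges
logarithmically as $n_{i}\to n_{0}$, i.e., as the initial oscillation
energy tends to zero.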
Similarly, in the case of $n_i<0$, only the rolling develops and the pitching is irrelevant, thus we obtain $\overline{n}(t)$ and the time for reversal $t_{rGH-}$ as
\begin{equation}
\overline{n}(t) = -n_{0}\frac{(n_{0} + |n_{i}|)\exp(-n_{0}K_{r}t) - (n_{0} - |n_{i}|) }{(n_{0} + |n_{i}|)\exp(-n_{0}K_{r}t) + (n_{0} - |n_{i}|)}
\label{eq:gh-solution-m}
\end{equation}
and
\begin{equation}
t_{rGH-} =\frac{1}{n_0 K_r} \ln\left(\frac{n_0+|n_i|}{n_0 - |n_i|} \right). \label{eq:trgh-m}
\end{equation}
Equations (\ref{eq:trgh-p}) and (\ref{eq:trgh-m}) are Garcia-Hubbard formulas for the times for reversal \cite{GarciaHubbard1988}.
From the expressions of $K_{p}$ and $K_{r}$ given by Eqs.~(\ref{eq:Kp})
and (\ref{eq:Kr}), we immediately notice that (1) the time for reversal
is inversely proportional to the magnitude of the skew angle $\xi$ in the
small-skewness regime, and (2) the ratio of the times for reversal,
$t_{rGH-}/t_{rGH+}$, is simply given by the squared ratio of the pitch
frequency to the roll frequency, $\omega_{p}^2 / \omega_{r}^2$, provided
the initial values $n_{0}$ and $|n_{i}|$ are the same for both spin directions.
For a typical rattleback, $\omega_{p}^2 \gg \omega_{r}^2$, thus $t_{rGH+} \ll t_{rGH-}$, i.e., the time for reversal is much shorter in the case of $n_{i}>0$ than in the case of $n_{i}<0$. Thus we call the spin direction of $n_{i} > 0$ the \textit{unsteady direction} \cite{GarciaHubbard1988}, and that of $n_i <0$ the \textit{steady direction}.
In the small skewness regime, this ratio of the squared frequencies is estimated as
\begin{align}
\frac{\omega_{p}^2}{\omega_{r}^2} \approx \frac{\omega_{p0}^2}{\omega_{r0}^2}
= \frac{\beta}{\alpha}\frac{1/\phi - 1}{1/\theta -1}. \label{eq:ratomg}
\end{align}
This becomes especially large as $\theta$ approaches $1$ or as $\phi$ approaches 0, namely, as the smaller radius of principal curvature approaches $a$, or as the larger radius of principal curvature becomes much larger than $a$.
We remark that both of the inertial parameters $\alpha$ and $\beta$ are larger than $1$ by definition Eq.~(\ref{eq:def-abg}), and cannot be arbitrarily large for a typical rattleback.
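As a concrete illustration, inserting the parameter set GH of
Table~\ref{tab:parameters} below ($\alpha=13.04$, $\beta=1.522$,
$\theta=0.6429$, $\phi=0.0360$) into Eq.~(\ref{eq:ratomg}) gives
\begin{equation*}
\frac{\omega_{p}^{2}}{\omega_{r}^{2}}\approx
\frac{1.522}{13.04}\times\frac{1/0.0360-1}{1/0.6429-1}\approx 5.6,
\end{equation*}
consistent with the values $\omega_{p}\approx1.44\,\tilde{\omega}$ and
$\omega_{r}\approx0.602\,\tilde{\omega}$ quoted in
Sec.~\ref{sec:simulation}, for which $(\omega_{p}/\omega_{r})^{2}\approx5.7$.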
Let us consider these two limiting cases: $\phi\to 0$ and $\theta \to 1$ with $|\xi| \ll 1$. In the case of $\phi \to 0$,
\begin{equation}
K_{p} \to \infty,\quad
K_{r} \to (-\xi) \left(\frac{1}{\beta} - \frac{1}{\alpha}\right)\frac{\alpha}{\beta}\left(\frac{1}{\theta} - 1\right),
\end{equation}
thus the time for reversal $t_{rGH-}$ remains finite while $t_{rGH+}$ approaches $0$. In the case of $\theta \to 1$,
\begin{equation}
K_{p} \to (-\xi) \left(\frac{1}{\beta} - \frac{1}{\alpha}\right)\left(\frac{1}{\phi} - 1\right),\quad
K_{r} \to 0,
\end{equation}
and thus $t_{rGH+}$ remains finite while $t_{rGH-}$ diverges to infinity,
i.e., the negative spin rotation never reverses.
\section{Simulation}
\label{sec:simulation}
We perform numerical simulations for the times for the first spin reversal and compare them with Garcia-Hubbard formulas (\ref{eq:trgh-p}) and (\ref{eq:trgh-m}).
\subsection{Shell-dumbbell model}
To consider a rattleback whose inertial and geometrical parameters can be set separately, we construct a simple model of the rattleback, or the \textit{shell-dumbbell model}, which consists of a light shell and two dumbbells: the light shell defines the shape of the lower part of the rattleback and the dumbbells represent the masses and the moments of inertia. The shell is a paraboloid given by Eq.~(\ref{eq:def-z}). The dumbbells consist of couples of weights, $m_{x}/2$ and $m_{y}/2$, fixed at $(\pm r_{x},0,0)$ and $(0,\pm r_{y},0)$ in the body-fixed co-ordinate, respectively [Fig.~\ref{fig:notation}(c)]. Then the total mass is
\begin{equation}
M = m_{x} + m_{y}
\end{equation}
and the inertia tensor is diagonal with its principal moments
\begin{align}
I_{xx} &= m_{y}r_{y}^{2},\quad I_{yy} = m_{x}r_{x}^{2}, \\
I_{zz} &= m_{y}r_{y}^{2} + m_{x}r_{x}^{2}.
\end{align}
Note that the simple relation
\begin{equation}
I_{zz} = I_{xx} + I_{yy}
\end{equation}
holds for the shell-dumbbell model. We define
\begin{equation}
f_{sd} \equiv I_{yy}/I_{zz},
\end{equation}
then the dimensionless parameters $\alpha,\,\beta,$ and $\gamma$ defined by Eq.~(\ref{eq:def-abg}) are given by,
\begin{equation}
\gamma = I_{zz}/Ma^{2}, \, \alpha = (1-f_{sd})\gamma + 1, \, \beta = f_{sd}\gamma + 1.
\end{equation}
The parameter $f_{sd}$ satisfies $0<f_{sd}<0.5$, since we have assumed $\alpha > \beta$.
The shell-dumbbell model makes it easier to visualize an actual object represented by the model with a set of parameters, and is used in the following simulations for determining the parameter ranges.
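The mapping from the dumbbell masses and arm lengths to the dimensionless
parameters is elementary; the following minimal sketch (in Python; the
input values are illustrative, not those of Table~\ref{tab:parameters})
implements the relations above:
\begin{verbatim}
# Minimal sketch of the shell-dumbbell parameter mapping.
# The input values below are illustrative, not from the paper.
a = 1.0                   # unit of length
mx, my = 0.3, 0.7         # dumbbell masses (so M = 1)
rx, ry = 1.3, 4.0         # dumbbell arm lengths

M = mx + my
Ixx, Iyy = my * ry**2, mx * rx**2
Izz = Ixx + Iyy           # holds for the shell-dumbbell model

gamma = Izz / (M * a**2)
f_sd = Iyy / Izz
alpha = (1.0 - f_sd) * gamma + 1.0
beta = f_sd * gamma + 1.0
assert 0.0 < f_sd < 0.5   # equivalent to alpha > beta
print(gamma, f_sd, alpha, beta)
\end{verbatim}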
\subsection{Methods}
\label{subsec:method}
\begin{table*}
\caption{\label{tab:parameters} Two sets of parameters used in the simulations: GH used by Garcia and Hubbard \cite{GarciaHubbard1988} and SD for the present shell-dumbbell model. For SD, the parameter values are chosen randomly from the ranges shown, and averages and/or distributions of simulation results are presented.}
\begin{ruledtabular}
\begin{tabular}{lccccccc}
&$\gamma$ & $f_{sd}$ & $\alpha,\ \beta$ & $\theta$ & $\phi$ & $-\xi$ (deg) \\ \midrule
GH & $12.28$ &---& 13.04, 1.522 & 0.6429& 0.0360 & 1.72\\
SD & $[5,15]$ &$[0.05,0.15]$ &---& $[0.6,0.95]$ & [0.01,0.1]& (0,6] \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{figure}
\centering \includegraphics[width=7.5cm]{3r.pdf}
\caption{\label{fig:omega-dist}(color online) (a) Cumulative
distributions of the pitch and the roll frequencies for the parameter
set SD in Table~\ref{tab:parameters}; $\omega_{p} \text{~and~}
\omega_{r}$ of Eq.~(\ref{eq:def-omgpr}) and their zeroth order
approximation $\omega_{p0}$ and $\omega_{r0}$ by
Eqs.~(\ref{def:omega_p0}) and (\ref{def:omega_r0}) are shown. The
inset shows the cumulative distribution of $\omega_{p}/\omega_{r}$. The
number of samples is $10^{6}$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{4.pdf}
\caption{\label{fig:k-dist}(color online) (a) Cumulative distributions of the asymmetric torque coefficients $K_{p}$ and $K_{r}$ for SD (Table \ref{tab:parameters}). The number of samples is ${10}^{5}$. (b) A 2D color plot for the distribution of ($K_{p}$,\,$K_{r}$). The color code shown is in the logarithmic scale for the relative frequency $P(K_{p},\,K_{r})$, i.e., $-9 \le \log_{10}P(K_{p},\,K_{r}) \le 0$. The number of samples is $10^{8}$.}
\end{figure}
The equations of motion (\ref{eq:diff-u}) and (\ref{eq:diff-omega}) with the contact point conditions (\ref{eq:def-z}) and (\ref{eq:def-xy}) are numerically integrated by the fourth-order Runge-Kutta method with an initial condition $\bm{\omega}(0)$ and $\bm{u}(0)$.
In the simulations, we take
\begin{equation}
\bm{u}(0) = (0,0,-1)^{t} \label{eq:sim-ic-1}
\end{equation}
and specify $\bm{\omega}(0)$ as
\begin{equation}
\bm{\omega}(0) = (|\omega_{xy0}|\cos\psi,\ |\omega_{xy0}|\sin\psi,\ -n_{i}) \label{eq:sim-ic-2}
\end{equation}
in terms of $|\omega_{xy0}|$, $\psi$, and $n_{i}$. According to the simplified dynamics (\ref{eq:diff-ne-1})--(\ref{eq:diff-ne-3}), the irrelevant mode of oscillation does not sensitively affect the dynamics as long as the relevant mode exists and the initial spin energy is much larger than the initial oscillation energy. Thus we choose $\ket{\omega(0)}= (\omega_{x0}, \omega_{y0})^{t}$ in the direction of the relevant eigenmode,
\begin{equation}
\psi = \psi_{p} \text{ for } n_{i}>0,\quad\text{and}\quad\psi = \psi_{r} \text{ for } n_{i}<0, \label{eq:sim-ic-3}
\end{equation}
where $\psi_{p}$ and $\psi_{r}$ are the angles of the eigenvectors $\ket{\omega_{p}}$ and $\ket{\omega_{r}}$ from the $x$-axis, respectively.
Numerical results are presented in the unit system where $M$, $a$, and
\begin{equation}
\tilde{t} \equiv 1/\tilde{\omega} \equiv \sqrt{a/g}
\end{equation}
are used as the units of mass, length, and time, respectively. The size of the time step for the numerical integration is taken to be $0.002\,\tilde{t}$.
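A minimal sketch of this integration loop is given below (in Python;
\texttt{rattleback\_rhs} is a hypothetical placeholder for the right-hand
sides of Eqs.~(\ref{eq:diff-u}) and (\ref{eq:diff-omega}), which we do not
reproduce here):
\begin{verbatim}
import numpy as np

def rk4_step(f, y, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

def first_reversal_time(rattleback_rhs, u0, omega0,
                        dt=0.002, t_max=5.0e4):
    # State y = (u, omega); the spin is n = u . omega.
    y = np.concatenate([u0, omega0])
    n_init = np.dot(y[:3], y[3:])
    t = 0.0
    while t < t_max:
        y = rk4_step(rattleback_rhs, y, dt)
        t += dt
        if np.dot(y[:3], y[3:]) * n_init <= 0.0:
            return t          # first sign change of n
    return None               # no reversal within t_max
\end{verbatim}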
In the numerics, we determine the time for reversal $t_{r}$ as the time at which $n=\bm{\omega}\cdot\bm{u}$ becomes zero for the first time, and compare it with the Garcia-Hubbard formulas (\ref{eq:trgh-p}) and (\ref{eq:trgh-m}); $n_0$ is determined as
\begin{equation}
\frac{\gamma n_{0}^{2}}{2} = \frac{1}{2}(
\alpha\omega_{x0}^2 + \beta\omega_{y0}^2 + \gamma\omega_{z0}^2),
\label{eq:def-n0}
\end{equation}
assuming $I_{\mathrm{eff}} = I_{zz}$ at $t=0$.
Here the potential energy $U(\bm{u})$ is set to zero at $\bm{u} = (0,0,-1)^{t}$.
The parameters used in the simulations are listed in Table~\ref{tab:parameters}.
For the parameter set SD, the ranges are shown. When numerical results are plotted against $K_{p}$ or $K_{r}$, given by Eqs.~(\ref{eq:Kp}) or (\ref{eq:Kr}), respectively, sets of parameters are chosen randomly from the ranges until the resulting $K_{p}$ or $K_{r}$ falls within $\pm0.1\%$ of a target value.
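This selection procedure can be sketched as follows (in Python;
\texttt{asymmetric\_torque\_Kp} is a hypothetical placeholder for the
evaluation of Eq.~(\ref{eq:Kp}) from a parameter set):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Ranges of the parameter set SD (Table I); xi in degrees, xi < 0.
ranges = {"gamma": (5.0, 15.0), "f_sd": (0.05, 0.15),
          "theta": (0.6, 0.95), "phi": (0.01, 0.1),
          "xi_deg": (-6.0, 0.0)}

def draw_for_target_Kp(asymmetric_torque_Kp, target, tol=1e-3):
    # Redraw until the resulting K_p is within +/-0.1% of target.
    while True:
        p = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
        if abs(asymmetric_torque_Kp(p) / target - 1.0) < tol:
            return p
\end{verbatim}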
The ranges of SD are chosen to meet the following two conditions: (1)
$0<\phi \ll \theta <1$, $\beta < \alpha$, and $|\xi| \ll 1$ and (2) the
pitch frequency should be higher than the roll frequency. As argued
in Sec.~\ref{subsec:GHtheory}, usual rattlebacks such as the one in
Fig.~\ref{fig:notation}(a) satisfy these two
conditions. Figure \ref{fig:omega-dist} shows the cumulative distributions
for the eigenfrequencies $\omega_{p}$ and $\omega_{r}$, and their
approximate expressions $\omega_{p0}$ and $\omega_{r0}$ for the
parameter set SD; it shows $(\omega_{p}/\omega_{r}) > 1.3$ in accordance
with the condition (2).
The parameter set GH gives $K_{p} = 0.553$ and $K_{r}=0.0967$, and the
distributions of $K_{p}$ and $K_{r}$ for SD are shown in
Fig.~\ref{fig:k-dist}, where one can see $K_{p}\gg K_{r}$. From
Eq.~(\ref{eq:k-rat}), this corresponds to $\omega_{p}^2 \gg
\omega_{r}^2$, i.e., the pitch frequency is significantly faster than
the roll frequency. Consequently, the time for reversal is much shorter
for the unsteady direction $n_{i}>0$, where the pitching is induced,
than for the steady direction $n_{i}<0$, where the rolling is
induced. We denote the time for reversal for the unsteady direction as
$t_{ru}$ and that for the steady direction as $t_{rs}$ when we consider
a specific spinning direction.
\subsection{Results}
\subsubsection{General behavior for the parameter set GH}
\begin{figure}
\includegraphics[width=8cm]{5r.pdf}
\caption{\label{fig:gh-t-n} A typical spin evolution and the corresponding $\omega_{x}$ and $\omega_{y}$ for GH (Table~\ref{tab:parameters}). (a) The case of the initial spin in the unsteady direction. The initial condition is specified by Eqs.~(\ref{eq:sim-ic-1})--(\ref{eq:sim-ic-3}) with $n_{i} = 0.1\,\tilde{\omega}$ and $|\omega_{xy0}|=0.01\,\tilde{\omega}$. (b) The case of the initial spin in the steady direction with $n_{i} = -0.1\,\tilde{\omega}$ and $|\omega_{xy0}|=0.01\,\tilde{\omega}$. The dashed lines in (a-1) and (b-1) show Garcia and Hubbard's solution $\overline{n}(t)$ given by Eqs.~(\ref{eq:gh-solution-p}) and (\ref{eq:gh-solution-m}), respectively.}
\end{figure}
In Fig.~\ref{fig:gh-t-n} we show a typical simulation result of the time evolution of the spin $n(t)$ along with the angular velocities $\omega_{x}(t)$ and $\omega_{y}(t)$ for the parameter set GH (Table~\ref{tab:parameters}) in the case of the unsteady direction $n_i > 0$ (a), and the steady direction $n_i < 0$ (b).
Figure \ref{fig:gh-t-n}(a-1) shows that the spin $n$ changes its sign from positive to negative at $t_{ru}\approx 112\,\tilde{t}$, and Fig.~\ref{fig:gh-t-n}(b-1) shows the spin $n$ changes its sign from negative to positive at $t_{rs} \approx 810\,\tilde{t}$. Garcia and Hubbard's solutions $\overline{n}(t)$ of Eqs.~(\ref{eq:gh-solution-p}) and (\ref{eq:gh-solution-m}) are shown by the dashed lines in Figs.~\ref{fig:gh-t-n}(a-1) and (b-1), respectively; they are in good agreement with the numerical simulations.
The angular velocities $\omega_{x}$ and $\omega_{y}$ oscillate on a much
shorter time scale, and their amplitudes evolve differently depending on
the spin direction. In the case of Fig.~\ref{fig:gh-t-n}(a), where the
positive initial spin reverses to negative, the amplitude of
$\omega_{x}$ becomes large and reaches its maximum around $t_{ru}$; the
amplitude of $\omega_{y}$ also becomes large around both sides of
$t_{ru}$ but shows the local minimum at $t_{ru}$. Both $\omega_{x}$
and $\omega_{y}$ oscillate at the pitch frequency $\omega_{p} \approx
1.44\,\tilde{\omega}$. In the case of Fig.~\ref{fig:gh-t-n}(b) where the
negative spin reverses to positive, the situation is similar but the
amplitude of $\omega_{y}$ reaches its maximum around $t_{rs}$, and
$\omega_{x}$ and $\omega_{y}$ oscillate at the roll frequency
$\omega_{r} \approx 0.602\,\tilde{\omega}$.
These features can be understood based on the analysis in the previous
section as follows. The positive spin induces the pitching, which is
mainly represented by $\omega_{x}$ because the eigenvector of the pitching
$\ket{\omega_{p}}$ is nearly parallel to the $x$ axis, i.e., $\psi_{p}
\approx -17^{\circ}$. Likewise, the negative spin induces the rolling,
mainly represented by $\omega_{y}$, because $\psi_{r} \approx
88^{\circ}$. The local minima of the amplitude for $\omega_{y}$ in
Fig.~\ref{fig:gh-t-n}(a-3), or $\omega_{x}$ in
Fig.~\ref{fig:gh-t-n}(b-2), at the times for reversal are tricky to interpret; they
might mean that the eigenvector of the pitching (rolling) deviates
more from the $x$ axis ($y$ axis) for $\omega_{z} \neq 0$ than
that for $\omega_{z} = 0$; as a result, the pitching (rolling) mode has a
larger projection on the $y$ axis ($x$ axis) for $\omega_{z} \neq 0$.
Note that for given $|n_{i}|$, the maximum value of $\omega_{y}$ in
Fig.~\ref{fig:gh-t-n}(b-3) is larger than that of $\omega_{x}$ in
(a-2). This is due to $\alpha \gg \beta$; the oscillation energy
around zero spin should be the same for both cases, which gives $\alpha \overline{{\omega}_{x}^2}\approx \beta \overline{{\omega}_{y}^2}$ and thus
$\sqrt{\overline{\omega_{x}^2}} < \sqrt{\overline{\omega_{y}^2}}$.
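With the GH values $\alpha=13.04$ and $\beta=1.522$, this argument predicts
an amplitude ratio of roughly $\sqrt{\alpha/\beta}\approx2.9$ between
$\omega_{y}$ in Fig.~\ref{fig:gh-t-n}(b-3) and $\omega_{x}$ in
Fig.~\ref{fig:gh-t-n}(a-2).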
\subsubsection{Simulations with the parameter set SD}
We present detailed results of the simulations for the ranges of the parameters given by SD in Table \ref{tab:parameters}.
\paragraph{Unsteady initial spin direction $(n_{i}>0)$.}
In this case, the system behaves basically as we expect from the Garcia-Hubbard formula unless the initial spin or oscillation is too large.
Figure \ref{fig:kptr} shows the time for reversal $t_{ru}$ as a function of $K_{p}$ when spun in the unsteady direction. The results are plotted against $K_{p}$ by the procedure described in Sec.~\ref{subsec:method}.
When the initial spin satisfies $n_{i} \lesssim 0.2\,\tilde{\omega}$ with $|\omega_{xy0}|=0.001\tilde{\omega},\, 0.01\tilde{\omega}$, $t_{ru}$ is in good agreement with the Garcia-Hubbard formula $t_{rGH+}$ of Eq.~(\ref{eq:trgh-p}), i.e., almost inversely proportional to $K_{p}$ with small scatter around the average. For a given $n_i$, as the initial oscillation amplitude $|\omega_{xy0}|$ becomes large, the standard deviations of $t_{ru}$ become large, and the average of $t_{ru}$ deviates upward from the Garcia-Hubbard formula $t_{rGH+}$, which is derived with the small-amplitude approximation of $\omega_{x}$ and $\omega_{y}$. For larger $n_i$, $t_{rGH+}$ also underestimates $t_{ru}$, as already noted by Garcia and Hubbard \cite{GarciaHubbard1988} for the parameter set GH. The underestimation can also be seen in Fig.~\ref{fig:gh-t-n}(a-1), where one can see that Garcia and Hubbard's solution $\overline{n}(t)$ of Eq.~(\ref{eq:gh-solution-p}) changes its sign earlier than the simulation.
For $n_i \gtrsim 0.4\,\tilde{\omega}$, $t_{ru}$ deviates notably upward from the Garcia-Hubbard formula $t_{rGH+}$. As $n_i$ increases, the average of $t_{ru}$ increases and the standard deviations become large. Figure \ref{fig:kptr}(b) shows a typical spin evolution with $n_{i} = 0.5\,\tilde{\omega}$. The spin oscillates widely at the pitch frequency, which is qualitatively different from typical spin behaviors at small $n_{i}$ and from Garcia and Hubbard's solution $\overline{n}(t)$ of Eq.~(\ref{eq:gh-solution-p}) as in Fig.~\ref{fig:gh-t-n}(a-1). In this region, the Garcia-Hubbard formula is no longer valid.
\begin{figure}
\includegraphics[width=8.4cm]{6r.pdf} \caption{(color online)
\label{fig:kptr} (a) Time for reversal of the unsteady direction
$t_{ru}$ for the parameter set SD (Table~\ref{tab:parameters}) as a
function of the asymmetric torque coefficient $K_{p}$ in the logarithmic
scale. The error bars indicate one standard deviation of $1000$ samples
for each data point. The solid lines are $t_{rGH+}$ given by
Eq.~(\ref{eq:trgh-p}), calculated using the mean values of $n_{0}$. (b)
A typical spin evolution with $n_{i} = 0.5\,\tilde{\omega},\
|\omega_{xy0}|=0.01\,\tilde{\omega}$. The parameter set GH is used.}
\end{figure}
\paragraph{Steady initial spin direction $(n_{i}<0)$.}
\begin{figure*}
\includegraphics[width=16.8cm]{7r.pdf} \caption{(color online)
\label{fig:krtr}(a) Fractions of Types R, SS, and SW for the steady
direction for eight values of $K_{r}$ with various initial conditions
$|\omega_{xy0}| \text{ and } n_{i}$. Parameters are randomly chosen from
SD (Table~\ref{tab:parameters}). The number of the samples is 1000 for
each $K_{r}$. Filled triangles show the fractions of samples whose
$|n_{c1}|$ is smaller than $|n_{i}|$. (b) Typical spin evolutions of a
Type SS sample (b-1) and a Type SW sample (b-2), along
with an example of ``chaotic'' oscillation (b-3) found for $K_{r} = 0.0041$
with $n_{i}=-0.5\,\tilde{\omega}$.}
\end{figure*}
\begin{figure}
\includegraphics[width=8.4cm]{8r.pdf} \caption{(color online)
\label{fig:krtr-2} Time for reversal $t_{rs}$ for the steady direction
as a function of $K_{r}$ in the logarithmic scale. Each data point
represents the average with the standard deviation of Type R samples out
of $1000$ simulations from the parameter set SD
(Table~\ref{tab:parameters}).}
\end{figure}
Much more complicated phenomena are observed when spun in the steady
direction. When the initial spin $|n_i|$ is small enough, the spin
simply reverses as shown in Fig.~\ref{fig:gh-t-n}(b-1). We call this
simple reversal behavior Type R. For larger $|n_i|$, however, there
appear some cases where the spin never reverses; in such cases there are
two types of behaviors: steady spinning at $n_{ss}$ (Type SS), and spin
wobbling around $n_{w}$ ($n_{ss}<n_{w}<0$, Type SW). For Type SS
samples, $n_{ss}$ is slightly less than $n_{i}$,
i.e., $n_{ss}<n_{i}<0$, because small initial rolling decays and
its energy is converted to the spin energy. Typical spin evolutions of a
Type SS sample and a Type SW sample are shown in
Figs.~\ref{fig:krtr}(b-1) and (b-2).
Figure \ref{fig:krtr}(a) shows the $K_{r}$ dependence of the fractions of
Types R, SS, and SW for various initial conditions given by $n_{i}$ and
$|\omega_{xy0}|$. For each sample, we wait up to $t = 5t_{rGH-}$;
the spin evolution is classified as Type R if it reverses. If it does
not, the spin evolution is classified as Type SS if the initial rolling
amplitude decays monotonously, and classified as Type SW if the spin $n$
starts wobbling by the time $5t_{rGH-}$. The other samples, in
which the rolling grows slowly yet shows no visible spin change
by the time $5t_{rGH-}$, are labeled ``unclassified" in
Fig.~\ref{fig:krtr}. Such samples may show spin reversal or spin
wobbling if we take a much longer simulation time. Type SS appears for
$|n_i|\gtrsim 0.3\,\tilde{\omega}$ and its fraction increases as $|n_i|$
increases. The fraction is larger for smaller $K_{r}$ and smaller
$|\omega_{xy0}|$, i.e., $|\omega_{xy0}|=0.001\,\tilde{\omega}$. Type SW
appears for $|n_i| \gtrsim 0.1\,\tilde{\omega}$ and its fraction is also
larger for smaller $K_{r}$, but stays around $0.2$ for $|n_i| \gtrsim
0.4\,\tilde{\omega}$.
Figure \ref{fig:krtr-2} shows the $K_{r}$ dependence of $t_{rs}$
only for the samples of Type R, which shows a spin reversal
behavior. For small $|n_{i}| \lesssim 0.2\,\tilde{\omega}$ with
$|\omega_{xy0}|=0.01\,\tilde{\omega}, 0.001\,\tilde{\omega}$, $t_{rs}$
is in good agreement with the Garcia-Hubbard formula $t_{rGH-}$ of
Eq.~(\ref{eq:trgh-m}), and the average of $t_{rs}$ is almost
inversely proportional to $K_{r}$. As in the case of the unsteady
direction, the standard deviations of $t_{rs}$ become large, and the
average $t_{rs}$ deviates downward from $t_{rGH-}$ as initial
oscillation amplitude $|\omega_{xy0}|$ becomes large. Note that
$t_{rGH-}$ tends to overestimate $t_{rs}$, in contrast to the case of
the unsteady direction, where $t_{rGH+}$ underestimates $t_{ru}$. This
has also been noted by Garcia and Hubbard \cite{GarciaHubbard1988} for
the parameter set GH, and can be seen by Garcia and Hubbard's solution
$\overline{n}(t)$ in Fig.~\ref{fig:gh-t-n}(b-1). For $|n_{i}| \gtrsim
0.3\,\tilde{\omega}$, one may notice the standard deviations are large
for $K_{r}\ll 0.1$. In these cases, we find that some samples appear to
spin stably for quite a long time, i.e., several times
$t_{rGH-}$, and then abruptly start to reverse their sign. During the
time period $t<t_{rs}$, the rolling grows much more slowly than
predicted by the theory in Sec.~\ref{sec:theory}. Such samples
make both the average and the standard deviation large, as seen in
Fig.~\ref{fig:krtr-2}.
Next we consider the Type SS samples. There always exists a steady solution, $\bm{\omega}(0)=(0,0,\mathrm{const.})^{t}$ and $\bm{u}(0) = (0,0,-1)^{t}$, and Bondi \cite{Bondi1986} has shown that for the steady direction, this solution is linearly stable for $n < n_{c1}<0$, where $n_{c1} (<0)$ is given by
\begin{equation}
n_{c1}^2 \equiv \frac{g}{a}\frac{-(1-\theta)(1-\phi)}{2-(\theta+\phi) - (\alpha + \beta - \gamma)(\theta + \phi - 2\theta\phi)}. \label{eq:def-nc1}
\end{equation}
When the denominator of Eq.~(\ref{eq:def-nc1}) is positive, such a threshold does not actually exist, and the steady solution is always unstable. Note that $n_{c1}$ does not depend on $\xi$.
In Fig.~\ref{fig:krtr}, the filled triangles show the fraction of samples whose $|n_{c1}|$ is smaller than $|n_{i}|$, which should correspond to the fraction of Type SS. For $|\omega_{xy0}|=0.001\,\tilde{\omega}$, all samples whose $|n_{c1}|$ is smaller than $|n_{i}|$ actually show Type SS behavior and vice versa. On the other hand, for $|\omega_{xy0}|=0.1\,\tilde{\omega}$, there are some samples whose $|n_{c1}|$ is smaller than $|n_{i}|$ yet do not show Type SS behavior; for $n_i = -0.3 \,\tilde{\omega}$, there are only several Type SS samples out of 8000 samples, which cannot be seen in Fig.~\ref{fig:krtr}(a), and for $|n_i| \gtrsim 0.4\,\tilde{\omega}$, the fractions of Type SS for $|\omega_{xy0}|=0.1\,\tilde{\omega}$ are smaller than those for $|\omega_{xy0}|=0.001\,\tilde{\omega}$. This may be because $|\omega_{xy0}|=0.1\,\tilde{\omega}$ is not a small perturbation, and the spin may have escaped from the basin of attraction of the Type SS behavior.
Last we consider the Type SW samples. The time when the spin starts to
wobble roughly corresponds to $t_{rs}$ of Type R in
Fig.~\ref{fig:krtr-2}; the center of wobbling $n_{w}$ and its amplitude
vary from sample to sample. As in the case of Type R, there are some
samples which start to wobble only after several times $t_{rGH-}$
for $K_{r} \ll 0.1$. Wobbling behaviors of such samples are similar to
those which start wobbling around $t_{rGH-}$. We remark that there are
two qualitatively different Type SW behaviors. When $|n_i| \lesssim
0.4\,\tilde{\omega}$, the spin of a Type SW sample oscillates almost
periodically. However, when $n_i = -0.5\,\tilde{\omega}$ and $K_{r} \ll
0.1$, we find some samples that show ``chaotic'' oscillations, as in the
example shown in Fig.~\ref{fig:krtr}(b-3).
\section{Discussion}
\label{sec:discussion}
In the present work, we have studied the minimal model of the rattleback
dynamics, i.e., a spinning rigid body with a
no-slip contact, ignoring any form of dissipation.
We have reduced the original dynamics to
the simplified dynamics (\ref{eq:diff-ne-1})--(\ref{eq:diff-ne-3})
with the three variables. The assumptions and/or approximations used in
the derivation are (1) the amplitudes of the oscillations are small,
(2) the coupling between the spin and the oscillations does not depend
on the spin, and (3) the time scale for the spin change is much longer
than the oscillation periods. It is interesting to note that the last
assumption is apparently analogous to that used in the derivation of an
adiabatic invariant for some systems under slow change of an external
parameter if the spin variable is regarded as a slow parameter.
In the present case with this separation of time scales, the
dynamics conserves the ``Casimir invariant" $C_{I}$ of
Eq.~(\ref{eq:casimir}).
Our simplified dynamics can be compared with some previous
works. Based on Bondi's formulation \cite{Bondi1986}, Case and Jalal
obtained the growth rates $\delta_{x}$ and $\delta_{y}$ of the pitching
and the rolling amplitudes around the $x$ and $y$ axes, respectively, at a
small constant spin and small skewness \cite{Case2014}. Their results
can be expressed as
\begin{equation}
\delta_{x} = \frac{n}{2}K_{p},\label{eq:case-instab1}\quad
\delta_{y} = -\frac{n}{2}K_{r},
\end{equation}
using our notations. The factor $1/2$ comes from the choice of the variables; they chose the contact point co-ordinates, while we choose the oscillation energies, which are second order quantities of their variables.
Moffatt and Tokieda \cite{MoffattTokieda2008} obtained equations for the
oscillation amplitudes of pitching and rolling, $P$ and $R$, and the
spinning $S$ for small spin and skewness as
\begin{equation}
\frac{d}{d\tau}
\begin{pmatrix} P \\ R \\ S\end{pmatrix}
= \begin{pmatrix} R \\ \lambda P \\ 0\end{pmatrix}\times
\begin{pmatrix} P \\ R \\ S\end{pmatrix}
= \begin{pmatrix} \lambda PS \\ -RS \\ R^2 - \lambda P^2\end{pmatrix}
, \label{eq:toki-prs}
\end{equation}
where $\tau$ is rescaled time, and $\lambda$ is the squared ratio of the
pitch frequency to the roll frequency. Equation (\ref{eq:toki-prs}) is
equivalent with Eqs.~(\ref{eq:diff-ne-1})--(\ref{eq:diff-ne-3}); again
the difference comes from choice of the variables. The mathematical
structures of Eq.~(\ref{eq:toki-prs}) have been investigated recently in
more detail by Yoshida \textit{et al.} \cite{Yoshida2016} in
connection with the Casimir invariant and chaotic behavior of the
original dynamics.
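For the reader's convenience, one consistent identification between the two
formulations (up to constant rescalings) is $S\leftrightarrow\overline{n}$,
$P^{2}/2\leftrightarrow\overline{E}_{p}$,
$R^{2}/2\leftrightarrow\overline{E}_{r}$, together with
$K_{p}\leftrightarrow2\lambda$, $K_{r}\leftrightarrow2$, and
$I_{\textrm{eff}}\leftrightarrow1$; in particular,
$K_{p}/K_{r}\leftrightarrow\lambda$, in accordance with Eq.~(\ref{eq:k-rat}).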
\begin{figure}
\includegraphics[width=7.5cm]{9r.pdf}
\caption{\label{fig:periodic} Three types of spin behaviors after the
first reversal period in the small spin regime with $n_{i} =
0.1\tilde{\omega}$, $|\omega_{xy0}|=0.01\tilde{\omega}$. (a) A
quasi-periodic behavior with the parameter set GH ($\theta = 0.6429$),
(b) a chaotic behavior with $\theta = 0.82$, (c) a quasi-periodic
behavior with a period shorter than the first one with $\theta =
0.9$. All the other parameters for (b) and (c) are the same as GH.
The dashed lines show the spin evolutions for the corresponding
simplified dynamics, where $\overline{E}_p(0) = Ma^2[\alpha (|\omega_{xy0}|\cos\psi_{p})^2 + \beta (|\omega_{xy0}|\sin\psi_{p})^2]/2$, $\overline{E}_r(0) = 3\times10^{-5}Ma^2\tilde{\omega}^2$, and $\overline{n}(0) = n_{i}$.}
\end{figure}
After the first round of spin reversals, our simplified
dynamics (\ref{eq:diff-ne-1})--(\ref{eq:diff-ne-3}) repeats
itself and shows periodic
behavior, as does the dynamics studied by Moffatt and Tokieda,
Eq.~(\ref{eq:toki-prs}), because the system with only three variables has
two conserved quantities, i.e., the total energy and the Casimir
invariant. However, the Casimir invariant is only an approximate one in the
original dynamics, conserved only under the approximations
given at the beginning of this section. The Casimir ``invariant''
actually varies, and the original system shows aperiodic behaviors.
A few examples for longer time evolutions of spin $n(t)$ are
given in Fig.~\ref{fig:periodic} for the system with the parameter set
GH except for the curvature in the rolling direction $\theta=0.6429$
(a) for GH, $0.82$ (b), and $0.9$ (c) along with those by the
corresponding simplified dynamics.
The first example (a) almost shows a
periodic spin reversal behavior as is expected by the simplified
dynamics. It is, however, only quasi-periodic
with fluctuating periodicity.
The second example (b) does not show a periodic behavior;
the initial spin reversal till $t/\tilde t\approx 100$ is nearly the
same as in (a), but after the time of the second spin
reversal around $t/\tilde t\approx 3000$, it turns chaotic,
deviating from the simplified dynamics.
The third example (c) may look similar to (a) but is peculiar;
it shows a quasi-periodic behavior after the initial round of
spin reversals, and its periodicity is
much shorter than that by the simplified dynamics.
The simplified dynamics seems to work reasonably well for the case of
smaller $\theta$ in (a) but fails for larger $\theta$ close to $1$ in
(b) and (c).
This indicates that the approximations or assumptions
used to derive the simplified dynamics are not valid for
a larger curvature in the rolling direction $\theta$;
as the radius of curvature $1/\theta$ becomes small and approaches 1, i.e.,
the height of the center of mass, the restoring force for the rolling
oscillation becomes weak.
This should result in a rolling oscillation with a larger amplitude and
a slower frequency; thus the assumptions (1) and (3) given at the
beginning of this section may no longer be good enough.
The fact that the system shows a different behavior
after the first round of spin reversals is reminiscent of the existence of
attractors, which are normally prohibited in a conservative system by
the Liouville theorem. In the present system, however, the theorem is
invalidated by the non-holonomic constraint due to the no-slip condition
Eq.~(\ref{eq:no-slip}) \footnote{
The no-slip condition should be violated in the situations where the
ratio of the vertical and the inplane components of the contact force,
i.e., $F_{\parallel}\equiv\bm F\cdot\bm u$ and $F_{\perp}\equiv|\bm{F} -
(\bm{F}\cdot \bm{u})\bm u|$, exceeds the friction coefficient. The
ratio $F_\perp/F_\parallel$ becomes large when the angular momentum
around $\bm u$ changes. In the cases given in Fig.~\ref{fig:periodic},
its largest value is around 0.2.
}.
As mentioned already, the existence of strange attractors in
an energy conserving system with a non-holonomic constraint has
been studied by Borisov \textit{et al.} \cite{Borisov2014}, and chaotic behavior
in the rattleback system has been discussed in connection with the
Casimir invariant by Yoshida \textit{et al.} \cite{Yoshida2016}.
\section{Summary and conclusion}
\label{sec:conclusion}
We have performed a theoretical analysis and numerical
simulations on the minimal model of the rattleback. By reformulating Garcia
and Hubbard's theory \cite{GarciaHubbard1988}, we obtained concise
expressions for the asymmetric torque coefficients, Eqs.~(\ref{eq:Kp})
and (\ref{eq:Kr}), gave a compact proof of the fact that the pitching
and the rolling generate torques with opposite signs, and reduced
the original dynamics to the three-variable dynamics by a physically
transparent procedure.
Our expressions for the asymmetric torque coefficients are
equivalent to those by Garcia and Hubbard, but we explicitly elucidate
that the ratio of the two coefficients for the pitching and the rolling
oscillations is given by the squared ratio of their frequencies.
Since the pitching frequency is significantly higher than that of the
rolling for a typical rattleback, the time for reversal for one spin
direction (the unsteady direction) is much shorter than that for the other
direction (the steady direction); the spin reversal for the latter
direction is usually not observed in a real rattleback due to dissipation.
The simulations on the original dynamics for various parameter
sets demonstrate that the Garcia-Hubbard formulas for
the first spin reversal time, (\ref{eq:trgh-p}) and (\ref{eq:trgh-m}), are accurate in the case of
small initial spin and small oscillation for both the unsteady and the steady directions. The deviation from the formulas is especially large for the steady direction in the regime of fast initial spin and small $K_r$, where the
rattleback may not reverse and shows a variety of dynamics, which
includes steady spinning, and periodic and chaotic wobbling.
In conclusion, the rattleback is simple but shows fascinatingly rich
dynamics, and keeps attracting physicists' attention.
\bibliographystyle{apsrev}
\section{Introduction}\label{sec1}
In \cite{Hamilton}, R. Hamilton established the following gradient and
Laplacian estimates for a bounded positive solution to the heat equation
on closed manifolds with Ricci curvature bounded below.
\begin{theorem}[R. Hamilton \cite{Hamilton}]\label{ham1}
Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold
with Ricci curvature satisfying $Ric\geq-K$ for some constant
$K\geq 0$. Assume that $u$ is a positive solution to the heat
equation with $u\leq A$ on $M\times[0,T]$ for some constant
$A<\infty$, where $0<T<\infty$. Then
\begin{equation}\label{gradest1}
t\frac{|\nabla u|^2}{u^2}\leq(1+2Kt)\log\left(\frac{A}{u}\right).
\end{equation}
If we further assume $T\leq1$, then for $0\leq t\leq T$,
\begin{equation}\label{gradest2}
t\frac{\Delta u}{u}
\leq C(n,K)\left[1+\log\left(\frac{A}{u}\right)\right].
\end{equation}
\end{theorem}
For estimate \eqref{gradest2}, when $K=0$, by choosing the function
$\varphi=t$ in Hamilton's proof of Lemma 4.1 in \cite{Hamilton},
we easily confirm that the condition ``$T\leq1$" can be removed.
Hamilton's estimate \eqref{gradest1} shows that one can compare
two different points at the same time, whereas the well-known
Li-Yau gradient estimate \cite{[Li-Yau]} only allows comparisons
between different points at different times. In \cite{Kotsch},
B. Kotschwar generalized the gradient estimate \eqref{gradest1}
to the case of complete, noncompact Riemannian manifolds with
Ricci curvature bounded below. Using this generalization,
Kotschwar gave an estimate on the gradient of the heat
kernel for complete manifolds with nonnegative Ricci curvature.
Moreover, this estimate is sharp in the order of $t$ for the
heat kernel on $\mathbb{R}^n$.
Kotschwar's result can be used to prove the monotonicity of Ni's
entropy functional (stationary metric for Perelman's
$\mathcal{W}$-functional in \cite{[Perelman]}) in \cite{[Ni1]}
for the fundamental solution to the heat equation on complete,
noncompact manifolds. We would like to point out that, in the
course of justifying the monotonicity of Ni's entropy functional
on complete noncompact manifolds, one may need a noncompact
version of Hamilton's Laplacian estimate \eqref{gradest2}.
However, as far as we know, no one has generalized
the estimate \eqref{gradest2} to the complete noncompact case.
In \cite{[CCG]}, the authors only briefly sketched a proof of
the estimate \eqref{gradest2} for the noncompact case but
did not give any details. In this paper, we will provide a
fully detailed proof that Hamilton's Laplacian estimate
\eqref{gradest2} also holds for complete, noncompact
manifolds with nonnegative Ricci curvature. Precisely,
we show that
\begin{theorem}\label{the1}
Let $(M,g)$ be an $n$-dimensional complete noncompact Riemannian
manifold with nonnegative Ricci curvature. Suppose $u$ is a
smooth positive solution to the heat equation
\begin{equation}\label{weheat}
\frac{\partial u}{\partial t}-\Delta u=0,
\end{equation}
satisfying $u\leq A$ for some constant $A<\infty$ on
$M\times[0,T]$, where $0<T<\infty$.
Then
\begin{equation}\label{gradest4}
t\frac{\Delta u}{u}
\leq n+4\log\left(\frac{A}{u}\right)
\end{equation}
for all $x\in M$ and $0\leq t\leq T$.
\end{theorem}
The proof of Theorem \ref{the1} is similar to the arguments of
Kotschwar \cite{Kotsch}, which can be divided into two steps.
In the first step, we obtain a Bernstein-type estimate of
$\Delta u$, similar to the upper estimate of $|\nabla u|$
derived by Kotschwar \cite{Kotsch} on complete noncompact
manifolds. In the second step, using the upper estimates of
$\Delta u$ and $|\nabla u|$, we apply the maximum principle
for complete noncompact Riemannian manifolds, due to Karp-Li \cite{KaLi}
and Ni-Tam \cite{[Ni-Tam]}, to the quantity appearing in Hamilton's
Laplacian estimate. We remark that the a priori integral
bound needed for the application of the maximum principle on
complete noncompact manifolds has also been obtained in
\cite{[Gri]} and in \cite{Yu} in a more general setting.
As an application of Theorem \ref{the1}, we obtain the following
Laplacian estimate of the heat kernel on a complete noncompact
Riemannian manifold with nonnegative Ricci curvature.
\begin{theorem}\label{Kethe1}
Let $(M,g)$ be an $n$-dimensional complete noncompact Riemannian manifold
with nonnegative Ricci curvature, and $H(x,y,t)$ its heat kernel.
Then, for all $\delta>0$, there exists a constant $C=C(n,\delta)$
such that
\[
\frac{\Delta H}{H}(x,y,t)\leq\frac{2}{t}
\left[C+4\frac{d^2(x,y)}{(4-\delta)t}\right]
\]
for all $x,y\in M$ and $t>0$.
\end{theorem}
\begin{remark}
We would like to point out that Theorem \ref{Kethe1} is sharp
in the order of $t$ for the heat kernel on $\mathbb{R}^n$.
\end{remark}
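This can be seen by a direct computation: on $\mathbb{R}^{n}$ the heat
kernel is $H(x,y,t)=(4\pi t)^{-n/2}e^{-d^{2}(x,y)/(4t)}$, for which
\begin{equation*}
\frac{\Delta H}{H}(x,y,t)=\frac{d^{2}(x,y)}{4t^{2}}-\frac{n}{2t},
\end{equation*}
which matches the bound of Theorem \ref{Kethe1} both in the order of $t$
and in the quadratic growth in $d(x,y)$.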
The structure of this paper is organized as follows. In Section
\ref{sec2}, we derive Bernstein-type gradient estimates of the
Laplacian for solutions to the heat equation (see Theorem
\ref{theor2}). Our proof makes use of Shi's gradient estimates
\cite{[Shi]}, combining the classical cut-off function arguments.
In Section \ref{sect3}, we finish the proof of Theorem \ref{the1}
by using Theorem \ref{theor2}. In Section \ref{sect4}, we apply
Theorem \ref{the1} to the heat kernel and complete the proof
of Theorem \ref{Kethe1}.
\section{Bernstein-type estimates}\label{sec2}
In this section, we assume that $(M,g)$ is an $n$-dimensional
complete noncompact Riemannian manifold with the Ricci curvature
uniformly bounded below by $-K$ for some constant $K\geq 0$,
and suppose that $u$ is a smooth solution to the heat equation
\eqref{weheat} satisfying $|u|\leq A$ on some open $U\subset M$
for $0\leq t\leq T<\infty$. First we recall Kotschwar's
result in \cite{Kotsch}.
\begin{theorem}[Kotschwar \cite{Kotsch}]\label{Ber1}
Let $(M,g)$ be an $n$-dimensional complete noncompact Riemannian
manifold with $Ric\geq-K$ for some constant $K\geq 0$. Suppose
$u$ is a smooth solution to the heat equation \eqref{weheat}
satisfying $|u|\leq A$ on $B_p(2R)\times[0,T]$ for some
$p\in M^n$ and $A,R,T>0$. Then there exists a constant
$C=C(n,K)$ such that
\[
t|\nabla u|^2\leq CA^2\left[1+T\left(1+\frac{1}{R^2}\right)\right]
\]
holds on $B_p(R)\times[0,T]$.
\end{theorem}
\begin{remark}\label{rem1}
If $Ric\geq0$, from the proof of Theorem \ref{Ber1}
in \cite{Kotsch}, one sees that
\[
t|\nabla u|^2\leq C(n)A^2\left(1+\frac{T}{R^2}\right)
\]
on $B_p(R)\times[0,T]$.
\end{remark}
\begin{remark}
Letting $R\to \infty$ in the proof of Theorem \ref{Ber1},
one immediately sees that there exists a constant $C(n)$
such that
\begin{equation}\label{td1}
t|\nabla u|^2\leq C(n)A^2(1+KT)
\end{equation}
on $M^n\times[0,T]$.
\end{remark}
In the above, Kotschwar gave a first-derivative
estimate for solutions to the heat equation on
complete manifolds. Below we give an upper estimate of
$\Delta u$. Our proof is similar in spirit to the derivative
estimates due to Shi \cite{[Shi]} (see also \cite{Kotsch}). Let
\begin{equation}\label{deffor}
F(x,t):=(C+t|\nabla u|^2)t^2|\Delta u|^2,
\end{equation}
where the constant $C$ is to be chosen. The following lemma
is useful for proving Theorem \ref{theor2}.
\begin{lemma}\label{lemq3}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold with
$Ric\geq -K$ for some constant $K\geq 0$. If $0<u\leq A$ is a
solution to the heat equation \eqref{weheat} on $B_p(2R)\times[0,T]$
for some $p \in M$ and $A,R,T>0$, where $T\leq 1$, satisfying
\[
|\nabla u|^2\leq \frac{C_*}{t}
\]
for some constant $C_*$ on $B_p(R)\times(0,T]$, then
there exists a finite positive constant $c:=c(n,K,A,R)$ such that
\[
\frac{\partial F}{\partial t}
\leq \Delta F-\frac {c}{t}F^2+\frac{C_*^2}{t}
\]
on $B_p(R)\times(0,T]$.
\end{lemma}
\begin{remark}
The assumption $T\leq 1$ in Lemma \ref{lemq3} is only used
in \eqref{gjgj}. By Theorem \ref{Ber1}, one may choose
$C_*:=C(n,K)A^2\left[1+T\left(1+\frac{1}{R^2}\right)\right]$.
Also we can choose $c:=C^{-1}(n)\cdot C^{-2}_*$. Moreover,
if $R\to \infty$, then $\lim_{R\to \infty}c$ is a finite
positive constant. Note that here the constant $C(n,K)$
may be different from the one in Theorem \ref{Ber1}.
\end{remark}
\begin{remark}\label{rem26}
When $K=0$, from \eqref{gjgj}, we see that the assumption $T\leq 1$
can be replaced by $T<\infty$. In this case, from Remark
\ref{rem1}, one can choose $C_*:=C(n)A^2\left(1+\frac{T}{R^2}\right)$
and $c:=C^{-1}(n)\cdot C^{-2}_*$. If $R\to \infty$, then
$\lim_{R\to \infty}c$ still be a finite positive constant,
\textbf{independent} on $T$.
\end{remark}
\begin{proof}
First, the evolution formula for $t|\nabla u|^2$ is
\begin{equation*}
\begin{aligned}
\left(\frac{\partial}{\partial t}-\Delta \right)(t|\nabla u|^2)
&=-2t|\nabla\nabla u|^2-2t Ric(\nabla u,\nabla u)+|\nabla u|^2\\
&\leq-t|\nabla\nabla u|^2-\frac{t}{n}|\Delta u|^2-2t Ric(\nabla u,\nabla u)+|\nabla u|^2\\
&\leq-t|\nabla\nabla u|^2-\frac{t}{n}|\Delta u|^2+(2Kt+1)|\nabla u|^2,
\end{aligned}
\end{equation*}
where we used $Ric\geq -K$. Then we compute that
\[
\left(\frac{\partial}{\partial t}-\Delta\right)|\Delta u|^2
=-2|\nabla \Delta u|^2
\]
and hence
\[
\left(\frac{\partial}{\partial t}-\Delta\right)(t^2|\Delta u|^2)
=-2t^2|\nabla \Delta u|^2+2t|\Delta u|^2.
\]
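Here the identity for $|\Delta u|^{2}$ follows since $v:=\Delta u$ again
solves the heat equation (the Laplacian commutes with the heat operator),
so that
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right)v^{2}
=2v\left(\frac{\partial}{\partial t}-\Delta\right)v-2|\nabla v|^{2}
=-2|\nabla v|^{2}.
\end{equation*}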
By the assumption of this lemma, we can choose $C$ in \eqref{deffor}
such that $C=8C_*$, which implies that $8t|\nabla u|^2\leq C$.
Combining the above equations yields
\begin{equation*}
\begin{aligned}
\left(\frac{\partial}{\partial t}-\Delta\right)F
&=(C+t|\nabla u|^2)\left[\left(\frac{\partial}{\partial t}-\Delta\right)
(t^2|\Delta u|^2)\right]\\
&\quad+\left[\left(\frac{\partial}{\partial t}-\Delta\right)(C+t|\nabla u|^2)\right]t^2|\Delta u|^2\\
&\quad-2t^3\nabla(|\nabla u|^2)\cdot\nabla((\Delta u)^2)\\
&\leq(C+t|\nabla u|^2)\left(-2t^2|\nabla \Delta u|^2+2t|\Delta u|^2\right)\\
&\quad+\left[-t|\nabla\nabla u|^2-\frac{t}{n}|\Delta u|^2+(2Kt+1)|\nabla u|^2\right]t^2|\Delta u|^2\\
&\quad+8t^3|\nabla u||\nabla\nabla u|\cdot|\Delta u||\nabla \Delta u|\\
&\leq-18t^3|\nabla u|^2|\nabla \Delta u|^2-\frac{t^3}{n}(\Delta u)^4-t^3|\nabla\nabla u|^2(\Delta u)^2\\
&\quad+2t(\Delta u)^2(C+t|\nabla u|^2)+(2Kt+1)t^2|\nabla u|^2|\Delta u|^2\\
&\quad+t^3|\nabla\nabla u|^2\cdot|\Delta u|^2+16t^3|\nabla u|^2\cdot|\nabla \Delta u|^2,
\end{aligned}
\end{equation*}
where we used the Schwarz inequality. Since $t\leq T\leq1$, the above formula becomes
\begin{equation}
\begin{aligned}\label{gjgj}
\left(\frac{\partial}{\partial t}-\Delta\right)F
&\leq-\frac{t^3}{n}(\Delta u)^4+(2Kt+1)Ct|\Delta u|^2+4Ct(\Delta u)^2\\
&\leq-\frac{t^3}{n}(\Delta u)^4+(2K+1)Ct|\Delta u|^2+4Ct(\Delta u)^2\\
&\leq-\frac {c}{t}F^2+\frac{18n(1+K^2)C^2}{t},
\end{aligned}
\end{equation}
where in the last inequality, we used
\[
\frac{t^3}{2n}(\Delta u)^4+\frac{18n(1+K^2)C^2}{t}\geq C(2K+1)t|\Delta u|^2+4Ct(\Delta u)^2.
\]
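The latter is an instance of Young's inequality: writing, for instance,
\begin{equation*}
(2K+5)\,Ct(\Delta u)^{2}
=\left(\frac{(2K+5)C}{\sqrt{t}}\right)\left(t^{3/2}(\Delta u)^{2}\right)
\leq\frac{n(2K+5)^{2}}{2}\frac{C^{2}}{t}+\frac{t^{3}}{2n}(\Delta u)^{4},
\end{equation*}
and noting that $\frac{n}{2}(2K+5)^{2}\leq 18n(1+K^{2})$ for all $K\geq0$.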
Here $c:=c(n,K,A,R)$ depends on $n$, $K$, $A$ and $R$.
For example, we may choose $c:=C^{-1}(n)\cdot C^{-2}_*$.
Then the result follows.
\end{proof}
Using Lemma \ref{lemq3}, we prove the following Laplacian estimate
for the positive solution to the heat equation.
\begin{theorem}\label{theor2}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold with
$Ric\geq -K$ for some constant $K\geq 0$. Let $u$ be a positive
solution to the heat equation \eqref{weheat} with $u\leq A$
on $B_p(2R)\times(0,T]$ for some
$p\in M^n$ and $A,R,T>0$, where $A<\infty$ and $T\leq 1$.
Then there exists a constant $C=C(n,K)$ such that
\begin{equation}\label{td2}
t|\Delta u|\leq CA\left[1+T\left(1+\frac{1}{R^2}\right)\right]^{1/2}
\cdot\left(1+\frac{T}{R^2}\right)^{1/2}
\end{equation}
on $B_p(R)\times[0,T]$.
\end{theorem}
\begin{remark}\label{Laptd1}
When $K=0$, the assumption $T\leq 1$ can be replaced by
$T<\infty$. In this case, from Remark \ref{rem26},
estimate \eqref{td2} can be rewritten by a simple version
\begin{equation}\label{td2gen}
t|\Delta u|\leq C(n)A\left(1+\frac{T}{R^2}\right)
\end{equation}
on $B_p(R)\times[0,T]$. If we further let $R\to \infty$,
then
\begin{equation}\label{td3gen}
t|\Delta u|\leq C(n)A
\end{equation}
on $M\times[0,T]$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{theor2}]
As in \cite{[Li-Yau],[Cala]} (see also \cite{Kotsch} or \cite{Wu10}),
for any $p\in M$ and $R>0$,
we may choose a cut-off function with $\eta(x)=1$ on $B_p(R)$
and supported in $B_p(2R)$ satisfying
\begin{equation}
|\nabla \eta|^2\leq \frac{C_3}{R^2}\eta
\end{equation}
and
\begin{equation}
\Delta \eta\geq-\frac{C_3}{R^2}
\end{equation}
for some $C_3=C_3(n)>0$. Letting $G:=\eta F$, we compute that
\[
\left(\frac{\partial}{\partial t}-\Delta\right)G \leq
\frac{\eta}{t}(-cF^2+C^2_*)-\Delta\eta\cdot F-2\nabla \eta\cdot\nabla F,
\]
where $C_*=C(n,K)A^2\left[1+T\left(1+\frac{1}{R^2}\right)\right]$.
Assume that the function $G$ attains its positive maximum
at a point $(x_0,t_0)$ in $B_p(2R)\times(0,T]$ (if no such point
exists, then $G\equiv0$ and the estimate below is trivial). Then at $(x_0,t_0)$
we have
\[
\left(\frac{\partial}{\partial t}-\Delta\right)G\geq 0
\quad \mathrm{and}\quad 0=\nabla G=\eta \nabla F+F\nabla\eta.
\]
Therefore at $(x_0,t_0)$, we have
\begin{equation*}
\begin{aligned}
0&\leq\frac{1}{t}\left[-c(\eta F)^2+C^2_*\eta\right]-\Delta\eta\cdot (\eta F)+2F|\nabla\eta|^2\\
&\leq\frac{1}{t}[-cG^2+C^2_*\eta]+\frac{C_3}{R^2}G+2\frac{C_3}{R^2}G
\end{aligned}
\end{equation*}
and hence
\begin{equation}\label{guanji}
cG^2(x_0,t_0)\leq C^2_*+3\frac{C_3}{R^2}G(x_0,t_0)t_0.
\end{equation}
Since
\[
3\frac{C_3}{R^2}G(x_0,t_0)t_0\leq \frac c2G^2(x_0,t_0)+\frac{8C_3^2}{cR^4}t^2_0,
\]
inequality \eqref{guanji} implies that
\[
\frac c2G^2(x_0,t_0)\leq C^2_*+\frac{8C_3^2}{cR^4}t^2_0.
\]
Therefore for any $(x,t)\in B_p(R)\times[0,T]$,
\[
G^2(x,t)\leq G^2(x_0,t_0)\leq 2c^{-1}C^2_*+c^{-2}\frac{16C_3^2}{R^4}t^2_0.
\]
Since $c:=C^{-1}(n)\cdot C^{-2}_*$, we have that
\[
G^2(x,t)\leq C(n)C^4_*+\frac{C(n)}{R^4}T^2C^4_*
\]
for any $(x,t)\in B_p(R)\times[0,T]$. This implies
\[
G(x,t)\leq C(n)C^2_*+\frac{C(n)}{R^2}TC^2_*
\]
for any $(x,t)\in B_p(R)\times[0,T]$. By the definitions of
$C_*$ and $G$, we have
\[
8C_*t^2|\Delta u|^2\leq C(n)C^2_*+\frac{C(n)}{R^2}TC^2_*
\]
and therefore
\[
t^2|\Delta u|^2\leq C(n,K)A^2\left[1+T\left(1+\frac{1}{R^2}\right)\right]
\cdot\left(1+\frac{T}{R^2}\right)
\]
for any $(x,t)\in B_p(R)\times[0,T]$,
which completes the proof of the theorem.
\end{proof}
\section{Proof of Theorem \ref{the1}}\label{sect3}
In this section by using gradient and Laplacian estimates of
the previous section, we apply a maximum principle on complete
noncompact manifolds due originally to Karp and Li \cite{KaLi}
(see also Ni-Tam \cite{[Ni-Tam]}), to finish the proof of
Theorem \ref{the1}.
\begin{theorem}[Karp-Li \cite{KaLi} and Ni-Tam \cite{[Ni-Tam]}]\label{noncmax}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold. Suppose
$f(x,t)$ is a smooth function on $M\times[0,T]$, $0<T<\infty$, such that
\[
\left(\frac{\partial}{\partial t}-\Delta\right)f(x,t)\leq 0
\quad\mathrm{whenever}\quad f(x,t)\leq 0.
\]
Let $f_+(x,t):=\max\{f(x,t),0\}$. Assume that
\begin{equation}
\int^T_0\int_Me^{-ar^2(x)}f^2_+(x,t)d\mu dt<\infty
\end{equation}
for some constant $a>0$, where $r(x)$ is the distance to $x$ from
some fixed $p\in M$. If $f(x,0)\leq 0$ for all $x\in M$, then
$f(x,t)\leq 0$ for all $(x,t)\in M\times[0,T]$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{the1}]
We follow Hamilton's proof \cite{Hamilton} with a little modification.
We define $u_\epsilon=u+\epsilon$ satisfying
$\epsilon<u_\epsilon<A+\epsilon$ and the function
\begin{equation*}
\begin{aligned}
P(x,t)&:=t\left(\Delta u_\epsilon+\frac{|\nabla u_\epsilon|^2}{u_\epsilon}
\right)-u_\epsilon\left(n+4\log\frac{A}{u_\epsilon}\right).
\end{aligned}
\end{equation*}
By \cite{Hamilton}, we have
\begin{equation*}
\begin{aligned}
\left(\frac{\partial}{\partial t}-\Delta\right)P&\leq
-\frac{2t}{nu_\epsilon}\left(\Delta u_\epsilon-\frac{|\nabla u_\epsilon|^2}{u_\epsilon}
\right)^2+\left(\Delta u_\epsilon-\frac{|\nabla u_\epsilon|^2}{u_\epsilon}
\right)-2\frac{|\nabla u_\epsilon|^2}{u_\epsilon}.
\end{aligned}
\end{equation*}
By Hamilton's arguments, we easily have that
\[
\left(\frac{\partial}{\partial t}-\Delta\right)P\leq 0
\quad\mathrm{whenever}\quad P\geq 0.
\]
This fact can be obtained by the following three cases.
\begin{enumerate}
\item If $\Delta u_\epsilon\leq\frac{|\nabla u_\epsilon|^2}{u_\epsilon}$, then all three terms on the right-hand side are nonpositive, and we are done.
\item If $\frac{|\nabla u_\epsilon|^2}{u_\epsilon}\leq\Delta u_\epsilon\leq3\frac{|\nabla u_\epsilon|^2}{u_\epsilon}$, then $\Delta u_\epsilon-\frac{|\nabla u_\epsilon|^2}{u_\epsilon}\leq2\frac{|\nabla u_\epsilon|^2}{u_\epsilon}$, so the last two terms together are nonpositive, and we are also done.
\item If $3\frac{|\nabla u_\epsilon|^2}{u_\epsilon}\leq\Delta u_\epsilon$, then when
$P\geq 0$, we have
\[
2\left(\Delta u_\epsilon-\frac{|\nabla u_\epsilon|^2}{u_\epsilon}
\right)\geq\Delta u_\epsilon+\frac{|\nabla u_\epsilon|^2}{u_\epsilon}
\geq \frac{nu_\epsilon}{t}
\]
and hence the quadratic term dominates the linear one, so we are done.
\end{enumerate}
Now, obviously, we have
\[
P(x,0)<0.
\]
By our assumptions on $u_\epsilon$, we also have
\[
P_+(x,t)\leq t\left(\Delta u_\epsilon
+\frac{1}{\epsilon} |\nabla u_\epsilon|^2\right),
\]
where $P_+(x,t):=\max\{P(x,t),0\}$. Thus, using
estimates \eqref{td1} and \eqref{td3gen},
for any $p\in M^n$, and $T, R>0$, we have
\begin{equation*}
\begin{aligned}
\int^T_0\int_{B_p(R)}&e^{-r^2(x)}P^2_+(x,t)d\mu dt\\
&\leq\int^T_0\int_{B_p(R)}e^{-r^2(x)}\left[t
\left(\Delta u_\epsilon+\frac{1}{\epsilon} |\nabla u_\epsilon|^2\right)\right]^2d\mu dt\\
&\leq\left(C(n)A+\frac{C(n)A^2}{\epsilon}\right)^2\int^T_0\int_Me^{-r^2(x)}
d\mu dt.
\end{aligned}
\end{equation*}
Since $Ric\geq 0$, by the Bishop
volume comparison theorem, the integrals
\[
\int^T_0\int_{B_p(R)}e^{-r^2(x)}
d\mu dt
\]
are bounded uniformly in $R$. Hence, letting $R\to\infty$, we conclude that
\[
\int^T_0\int_Me^{-r^2(x)}d\mu dt<\infty
\]
and hence
\[
\int^T_0\int_{B_p(R)}e^{-r^2(x)}P^2_+(x,t)d\mu dt<\infty.
\]
By the maximum principle for complete noncompact manifolds
(Theorem \ref{noncmax}), we conclude that $P(x,t)\leq 0$ for all $t\leq T$.
Dropping the nonnegative term $t|\nabla u_\epsilon|^2/u_\epsilon$ and letting
$\epsilon\to0$ then yields \eqref{gradest4}, and
hence the conclusion of Theorem \ref{the1} follows.
\end{proof}
\section{Proof of Theorem \ref{Kethe1}}\label{sect4}
The proof of Theorem \ref{Kethe1} follows from that of
Theorem 1 in \cite{Kotsch} with little modification,
but is included for completeness.
\begin{proof}[Proof of Theorem \ref{Kethe1}]
Let $H(x,y,t)$ be the heat kernel of the heat equation on $(M,g)$.
For any $t>0$ and $y\in M$, we set $u(x,s):=H(x,y,s+t/2)$, and
then $u$ is a smooth, positive solution to the heat equation on
$M\times[0,t/2]$. By \cite{[Li-Yau]}, for any $\delta>0$, there exists a
constant $C_1=C_1(\delta)>0$ such that
\begin{equation}\label{keres7}
\frac{\exp\left(\frac{-d^2(x,y)}{(4-\delta)(s+t/2)}\right)}{C_1\mathrm{Vol}(B_y(\sqrt{s+t/2}))}
\leq u(x,s)\leq\frac{C_1}{\mathrm{Vol}(B_y(\sqrt{s+t/2}))}
\end{equation}
for all $x,y\in M$, and $s\geq 0$.
Letting
\[
A:=\frac{C_1}{\mathrm{Vol}(B_y(\sqrt{t/2}))},
\]
then the second inequality in \eqref{keres7} implies $u\leq A$ for all $x$ and
$s$. Since $Ric\geq0$, there exists a positive constant
$C_2:=C_2(n)$ such that
\[
\mathrm{Vol}(B_y(\sqrt{s+t/2}))\leq\mathrm{Vol}(B_y(\sqrt{t}))
\leq C_2\mathrm{Vol}(B_y(\sqrt{t/2}))
\]
for all $0\leq s\leq t/2$. Thus, by the first inequality in \eqref{keres7}
and Theorem \ref{the1}, we have
\[
s\frac{\Delta u}{u}
\leq n+4\log\left(\frac{A}{u}\right)
\leq n+4\log(C^2_1C_2)+\frac{4d^2(x,y)}{(4-\delta)(s+t/2)}
\]
on $M\times [0,t/2]$. Setting $C=n+4\log(C^2_1C_2)$ and evaluating at $s=t/2$,
we conclude from the above that
\[
(t/2)\frac{\Delta H}{H}(x,y,t)=(t/2)\frac{\Delta u}{u}(x,t/2)
\leq C+4\frac{d^2(x,y)}{(4-\delta)t}
\]
for all $x,y\in M$ and $t>0$.
\end{proof}
\section*{Acknowledgment}
The author would like to express his gratitude to the referee for careful readings
and many valuable suggestions.
\bibliographystyle{amsplain}
The order parameters and the interaction Hamiltonian in Eqs.~(2) and (3) of the main text are defined in the orbital basis. Here, we have paid particular attention to the SOC in the noninteracting Hamiltonian, which gives, say, spin up for the first two orbitals ($xz$, $yz$) and spin down for the $xy$ orbital at the same momentum. This is, e.g., the reason there are only three terms for the interaction $V$ in Fig.~\ref{fig5}(c).
For our interaction Hamiltonian, the order-parameter vertices are written by considering all possible contractions in the particle-hole channel. In this orbital basis, the self-consistent vertex for each order parameter is graphically expressed in Fig.~\ref{fig5}(b). A transformation from the orbital basis to the band basis [using the eigenvectors given in Eq.~(4) of the main text] gives
{\allowdisplaybreaks\begin{eqnarray}
\Gamma^{SDW}_{ii}(\Omega) &=& \left(U_{i}\eta_{{\bm k}_1,i}\eta_{{\bm k}_2,i}\gamma_{{\bm k}_3,i}\gamma_{{\bm k}_4,i} + \sum_{j\ne i} J_{ij}\eta_{{\bm k}_1,i}\gamma_{{\bm k}_3,j}\gamma_{{\bm k}_2,i}\eta_{{\bm k}_4,j}\right),\qquad i=1,2,3;\nonumber\\
\Gamma^{SODW}_{12}(\Omega) &=& \left(V_{12}\eta_{{\bm k}_1,1}\eta_{{\bm k}_2,1}\gamma_{{\bm k}_3,2}\gamma_{{\bm k}_4,2} + J^{\prime}_{12}\eta_{{\bm k}_1,1}\gamma_{{\bm k}_3,1}\gamma_{{\bm k}_2,2}\eta_{{\bm k}_4,2}\right);\nonumber\\
\Gamma^{ODW}_{i3}(\Omega) &=& \left[(V-J)_{i3}\eta_{{\bm k}_1,i}\eta_{{\bm k}_2,i}\gamma_{{\bm k}_3,3}\gamma_{{\bm k}_4,3}- J_{i3}\eta_{{\bm k}_1,i}\eta_{{\bm k}_3,3}\eta_{{\bm k}_2,i}\eta_{{\bm k}_4,3} \mp J^{\prime}_{i3}\eta_{{\bm k}_1,i}\gamma_{{\bm k}_3,i}\eta_{{\bm k}_2,3}\gamma_{{\bm k}_4,3}\right],~~ i=1,2.
\label{sceqn}
\end{eqnarray}}
With the definitions of the various dressed interactions ${\bar U}$, ${\bar V}$, ${\bar J}$, and ${\bar J}^{\prime}$, we obtain Eq.~(5) of the main text.
\begin{figure*}[t]
\rotatebox[origin=c]{0}{\includegraphics[width=0.7\columnwidth]{interaction_diagrams}}
\caption{(a) Diagram symbols for spin-polarized orbitals 1$\rightarrow yz$, 2$\rightarrow xz$, and 3$\rightarrow xy$. (b) Various possible order parameters in the particle-hole channel. (c) The interaction Hamiltonian in Eq.~(3) of the main text, written explicitly for different orbitals. Similar terms for the opposite spin follow analogously, with every single line replaced by a double line and vice versa. (d) Self-consistent order parameters given in Eq.~(\ref{sceqn}).} \label{fig5}
\end{figure*}
\section{RG equation in the band basis}
Similarly, the interaction Hamiltonian can also be transformed into the band basis as
{\allowdisplaybreaks\begin{eqnarray}
H_{I}&=&\sum_{{\bm k}_i}\left[\left(\sum_{i=1,2,3}U_{ii}\eta_{{\bm k}_1,i}\eta_{{\bm k}_2,i}\gamma_{{\bm k}_3,i}\gamma_{{\bm k}_4,i}
+ V_{12}\eta_{{\bm k}_1,1}\eta_{{\bm k}_2,1}\gamma_{{\bm k}_3,2}\gamma_{{\bm k}_4,2} \right.\right.
\nonumber\\
&&
\left. + \sum_{i=1,2}(V-J)_{i3}\eta_{{\bm k}_1,i}\eta_{{\bm k}_2,i}\gamma_{{\bm k}_3,3}\gamma_{{\bm k}_4,3} -J^{\prime}_{i3}\eta_{{\bm k}_1,i}\gamma_{{\bm k}_3,i}\eta_{{\bm k}_2,3}\gamma_{{\bm k}_4,3}\right)
\alpha^{\dag}_{{\bm k}_1}\alpha_{{\bm k}_2}\beta^{\dag}_{{\bm k}_3}\beta_{{\bm k}_4}
\nonumber\\
&&+\left(\sum_{i=1,2}V_{i3}\eta_{{\bm k}_1,i}\eta_{{\bm k}_2,i}\eta_{{\bm k}_3,3}\eta_{{\bm k}_4,3}
+ (V-J)_{12}\eta_{{\bm k}_1,1}\eta_{{\bm k}_2,1}\eta_{{\bm k}_3,2}\eta_{{\bm k}_4,2}\right)\alpha^{\dag}_{{\bm k}_1}\alpha_{{\bm k}_2}\alpha^{\dag}_{{\bm k}_3}\alpha_{{\bm k}_4}
\nonumber\\
&&+\left(\sum_{i=1,2}V_{i3}\gamma_{{\bm k}_1,i}\gamma_{{\bm k}_2,i}\gamma_{{\bm k}_3,3}\gamma_{{\bm k}_4,3}
+ (V-J)_{12}\gamma_{{\bm k}_1,1}\gamma_{{\bm k}_2,1}\gamma_{{\bm k}_3,2}\gamma_{{\bm k}_4,2}\right)\beta^{\dag}_{{\bm k}_1}\beta_{{\bm k}_2}\beta^{\dag}_{{\bm k}_3}\beta_{{\bm k}_4}\nonumber\\
&&+\left(J_{12}\eta_{{\bm k}_1,1}\gamma_{{\bm k}_3,2}\gamma_{{\bm k}_2,1}\eta_{{\bm k}_4,2}
+ J_{12}^{\prime}\eta_{{\bm k}_1,1}\gamma_{{\bm k}_3,1}\gamma_{{\bm k}_2,2}\eta_{{\bm k}_4,2}\right)\alpha^{\dag}_{{\bm k}_1}\beta^{\dag}_{{\bm k}_3}\beta_{{\bm k}_2}\alpha_{{\bm k}_4}
\left.+\sum_{i=1,2}J_{i3}\eta_{{\bm k}_1,i}\eta_{{\bm k}_3,3}\eta_{{\bm k}_2,i}\eta_{{\bm k}_4,3}\alpha^{\dag}_{{\bm k}_1}\alpha^{\dag}_{{\bm k}_3}\beta_{{\bm k}_2}\beta_{{\bm k}_4}\right]\nonumber\\
&=&\sum_{{\bm k}_i}\left[u_1
\alpha^{\dag}_{{\bm k}_1}\alpha_{{\bm k}_2}\beta^{\dag}_{{\bm k}_3}\beta_{{\bm k}_4}+ u_2 \alpha^{\dag}_{{\bm k}_1}\alpha_{{\bm k}_2}\alpha^{\dag}_{{\bm k}_3}\alpha_{{\bm k}_4} +u_3 \beta^{\dag}_{{\bm k}_1}\beta_{{\bm k}_2}\beta^{\dag}_{{\bm k}_3}\beta_{{\bm k}_4} \right.\left. + u_4\alpha^{\dag}_{{\bm k}_1}\beta^{\dag}_{{\bm k}_3}\beta_{{\bm k}_2}\alpha_{{\bm k}_4} + u_5 \alpha^{\dag}_{{\bm k}_1}\alpha^{\dag}_{{\bm k}_3}\beta_{{\bm k}_2}\beta_{{\bm k}_4}\right].
\label{eigenintH}
\end{eqnarray}}
\begin{figure*}
\rotatebox[origin=c]{0}{\includegraphics[width=0.8\columnwidth]{RG_diagrams}}
\caption{RG diagrams for the one-loop vertex renormalizations.} \label{fig6}
\end{figure*}
The definitions of the new interactions $u_{i=1-5}$ in the band basis are readily inferred from the equation above. The new interactions are represented in Fig.~\ref{fig6}(a). The RG equations follow from the diagrams in Fig.~\ref{fig6} and are given by the following coupled differential equations:
\begin{eqnarray}
{d{u}_1 \over d \ell}= 2(u_1^2+u_5u_5^*),\qquad {d{u}_{2,3}\over d \ell} = -2(u_{2,3}^2+u_5u_5^*),\qquad {d{u}_{4} \over d \ell}= 2u_4(u_1+u_4) - 4u_5u_5^*,\qquad {d{u}_{5}\over d \ell} = 2u_5(2u_1-u_2-u_3).
\label{RG}
\end{eqnarray}
The equations above have run-away flows (all $u_i$ diverge as the fixed point is approached). However, the phases of the system are determined by the ratios of these interactions. Assuming $u_5$ is real and nonvanishing, we can formulate the following RG equations for the ratios of the interactions:
{\allowdisplaybreaks\begin{eqnarray}
{d \over d \ell}\left({u_1\over u_5}\right)&=& {2\over u_5}\left(u_5^2-2u_1^2+u_1 u_2+u_1 u_3\right),\qquad {d \over d \ell}\left({u_2\over u_5}\right)= {2\over u_5}\left(-u_5^2-2u_1 u_2 +u_2 u_3\right),\nonumber\\
{d \over d \ell}\left({u_3\over u_5}\right)&=& {2\over u_5}\left(-u_5^2-2u_1 u_3 +u_2 u_3\right),\qquad {d \over d \ell}\left({u_4\over u_5}\right)= {2\over u_5}\left(-2 u_5^2+u_4^2-u_1 u_4 +u_2 u_3+u_2 u_4\right).\nonumber\\
\label{RG2}
\end{eqnarray}}
Setting the right-hand sides of the equations above to zero (corresponding to the fixed-point values), we find that the following fixed-point values are possible: ${u_1\over u_5}=\pm{1\over \sqrt{2\left(1+ \sqrt{2}\right)}}$, ${u_2\over u_5}={u_3\over u_5}=\mp{1\over \sqrt{1+ \sqrt{2}}}$, and ${u_4\over u_5}=\pm{1+\sqrt{2}\pm\sqrt{6\left(2+3\sqrt{2}\right)}\over 2 \sqrt{2\left(1+ \sqrt{2}\right)}}$. The numerical solution of the RG equations, however, indicates that only one of these possible values corresponds to a stable fixed point.
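For illustration, the run-away character of the flow and the approach of the ratios $u_i/u_5$ to their fixed-point values can be checked by direct numerical integration of Eqs.~(\ref{RG}). The following minimal Python sketch does this for real couplings; the initial values are generic weak-coupling placeholders and are not derived from the microscopic model.
\begin{verbatim}
# Minimal sketch: integrate the one-loop RG flow for real couplings
# and monitor the ratios u_i/u_5 close to the run-away scale.
import numpy as np
from scipy.integrate import solve_ivp

def rg_rhs(ell, u):
    u1, u2, u3, u4, u5 = u
    return [ 2.0*(u1**2 + u5**2),
            -2.0*(u2**2 + u5**2),
            -2.0*(u3**2 + u5**2),
             2.0*u4*(u1 + u4) - 4.0*u5**2,
             2.0*u5*(2.0*u1 - u2 - u3)]

u0 = [0.10, 0.10, 0.12, 0.05, 0.08]   # generic placeholder start

def blowup(ell, u):                   # stop before the divergence
    return np.max(np.abs(u)) - 1.0e3
blowup.terminal = True

sol = solve_ivp(rg_rhs, [0.0, 100.0], u0, events=blowup,
                rtol=1e-10, atol=1e-12)
u_end = sol.y[:, -1]
print("ratios u_i/u5 near the fixed point:", u_end / u_end[4])
\end{verbatim}
The printed ratios can then be compared with the analytic fixed-point values quoted above.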
\end{widetext}
\end{document}
\section{Introduction}
\label{sec:intro}
Core-collapse supernovae (CCSN) occur at the end of the life of massive stars ($M \gtrsim 8 - 10 \, \rm{M}_{\odot}$). In these violent events, the core of the star gravitationally collapses and triggers a shock wave, leading to the supernova explosion. Despite many decades of theoretical and numerical modeling, the detailed explosion mechanism is not yet fully understood. Simulations in spherical symmetry including detailed neutrino transport and general relativity fail to explode self-consistently, except for the lowest-mass core-collapse progenitors \citep{fischer10,huedepohl10}. There are many ongoing efforts using multi-dimensional fluid dynamics, magnetic fields, and rotation to
address various remaining open questions in core-collapse supernova theory (see, e.g., \citet{janka12,janka12b,burrows13}). Among those are also technical issues, for example the consequences of neutrino transport approximations, the convergence of simulation results, or the dependence of the simulation outcome on the dimensionality of the model. Awareness of this dependence is especially important because not all investigations can be performed in a computationally very expensive three-dimensional model. While sophisticated multi-dimensional models are needed for an accurate investigation of the explosion mechanism, they are currently too expensive for systematic studies that have to be based on a large number of progenitor models. But such a large number of simulations is required to address the following fundamental questions: What are the conditions for explosive nucleosynthesis as a function of progenitor properties? What is the connection between the progenitor model and the compact remnant? How do
these aspects relate to the explosion dynamics and energetics? The lack of readily calculable supernova simulations with self-consistent explosions is a problem for many related fields, in particular for predicting nucleosynthetic yields of supernovae. As we will continue to argue below, spherically symmetric models of the explosion of massive stars are still a pragmatic method to study large numbers of stellar progenitors, from the onset of the explosion up to several seconds after core bounce.
In the past, supernova nucleosynthesis predictions relied on artificially triggered explosions, either using a piston \citep[e.g.][]{ww95,Limongi2006,Chieffi2013} or a thermal energy bomb \citep[e.g.,][]{tnh96,Umeda2008}. For the piston model, the motion of a mass point is specified along a ballistic trajectory. For the thermal energy bomb, explosions are triggered by adding thermal energy to a mass zone. In both cases, additional energy is added to the system to trigger an explosion. In addition, the mass cut (bifurcation between the proto-neutron star (PNS) and the ejecta) and the explosion energy are free parameters which have to be constrained from the mass of the $^{56}$Ni ejecta. While these approaches are appropriate for the outer layers, where the nucleosynthesis mainly depends on the strength of the shock wave, they are clearly incorrect for the innermost layers. There, the conditions and the nucleosynthesis are directly related to the physics of collapse and bounce, and to the details of the
explosion mechanism.
Besides the piston and thermal bomb methods, another widely used way to artificially trigger explosions is the so-called ``neutrino light-bulb''. In this method, the PNS is excised and replaced with an inner boundary condition which contains an analytical prescription for the neutrino luminosities. The neutrino transport is replaced by neutrino absorption and emission terms in optically thin conditions. Suitable choices of the neutrino luminosities and energies can trigger neutrino-driven explosions \citep[e.g.,][]{Burrows1993,Yamasaki2005,iwakami08,iwakami09,yamamoto2013}.
The light-bulb method has also been used to investigate models with respect to their dimensionality. The transition from spherical symmetry (1D) to axisymmetry (2D) provides the new degree of freedom to bring cold accreting matter down to the neutrinospheres while matter in other directions can dwell longer in the gain region and efficiently be heated by neutrinos \citep{Herant94}. The standing accretion shock instability \citep[SASI, e.g.,][]{Blondin2003,Blondin2006,Scheck2008,iwakami09,Fernandez2010,Guilet2012} strongly contributes to this effect in 2D light-bulb models and leads to strong polar oscillations of expansion during the unfolding of the explosion \citep{Murphy2008}. It was first expected that the trend toward a smaller critical luminosity for successful explosions would continue as one goes from 2D to three-dimensional (3D) models \citep{Nordhaus2010,handy14}, but other studies pointed to the contrary \citep{Hanke2012,Couch2013a}. One has to keep in mind that a light-bulb approach might not include the full coupling between the accretion rate and the neutrino luminosity. However, recent models that derive the neutrino luminosity from a consistent evolution of the neutron star support the result that 2D models show faster explosions than 3D models \citep{Mueller.Janka.ea:2012, Bruenn.Mezzacappa.ea:2013, Dolence2013, takiwaki2014}. Most important for this work is a finding that is consistent with all of the above investigations: in 3D there is no preferred axis. The 3D degrees of freedom lead to a more efficient cascade of fluid instabilities to smaller scales. In spite of these vigorous fluid instabilities, the 3D models show in their overall evolution a more pronounced sphericity than the 2D models. Hence, their average conditions resemble more closely the shock expansion that would be obtained by an exploding 1D model.
In a 1D model with detailed Boltzmann neutrino transport two other methods to trigger explosions using neutrinos have been used \citep{cf06a,fischer10}.
These ``absorption methods'' aim at increasing the neutrino energy deposition in the heating region by mimicking the expected net effects of multi-dimensional simulations. In one case, the neutral-current scattering opacities on free nucleons are artificially decreased to values between 0.1 and 0.7 times the original values. This leads to increased diffusive neutrino fluxes in regions of very high density. The net results are a faster deleptonization of the PNS and higher neutrino luminosities in the heating region. In the other case, explosions are enforced by multiplying the reaction rates for neutrino absorption on free nucleons by a constant factor. To preserve detailed balance, the emission rates also have to be multiplied by the same factor. This reduces the timescale for neutrino heating and again results in a more efficient energy deposition in the heating region. However, the explosions obtained in this way were always weak.
Recently, \cite{Ugliano.Janka.ea:2012} have presented a more sophisticated light-bulb method to explode spherically symmetric models using neutrino energy deposition in post-shock layers. They use an approximate, grey neutrino transport and replace the innermost 1.1~M$_{\odot}$\xspace of the PNS by an inner boundary. The evolution
of the neutrino boundary luminosity
is based on an analytic cooling model of the PNS, which depends on a set of free parameters. These parameters are set by fitting observational properties of SN~1987A for progenitor masses around 20~M$_{\odot}$\xspace (see also \citet{ertl2015}).
Artificial supernova explosions have been obtained by other authors using a grey leakage scheme that includes neutrino heating via a parametrized charged-current absorption scheme \citep{OConnor2010} in spherically symmetric simulations \citep{OConnor.Ott:2011}.
In this paper, we report on a new approach, PUSH, for artificially triggering explosions of massive stars in spherical symmetry. In PUSH, we deposit a fraction of the luminosity of the heavy flavor neutrinos emitted by the PNS in the gain region to increase the neutrino heating efficiency. We ensure an accurate treatment of the electron fraction of the ejecta through a spectral neutrino transport scheme for $\nu_e$ and $\bar{\nu}_e$ and a detailed evolution of the PNS. We calibrate our new method by comparing the explosion energies and nucleosynthesis yields of different progenitor stars with observations of SN~1987A. This method provides a framework to study many important aspects of core-collapse supernovae for large sets of progenitors: explosive supernova nucleosynthesis, neutron-star remnant masses, explosion energies, and other aspects where full multi-dimensional simulations are still too expensive and traditional piston or thermal bomb models do not capture all the relevant physics. With PUSH we can investigate general tendencies and perform systematic parameter variations, providing complementary information to ``ab-initio'' multi-dimensional simulations.
The article is organized as follows:
Section~\ref{sec:method} describes our simulation framework, the stellar progenitor models, the new method PUSH, and our post-processing analysis.
In Section~\ref{sec:results}, we present a detailed exploration of the PUSH method and the results of fitting it to observables of SN~1987A.
We also analyze aspects of the supernova dynamics and progenitor dependency.
In Section~\ref{sec:discuss}, we discuss further implications of our results and also compare with other works from the literature.
A summary is given and conclusions are drawn in Section~\ref{sec:concl}.
\section{Method and input}
\label{sec:method}
\subsection{Hydrodynamics and neutrino transport}
We make use of the general relativistic hydrodynamics code AGILE in spherical symmetry \citep{Liebendoerfer.Agile}. For the stellar collapse, we apply the deleptonization scheme of \citet{Liebendoerfer.delept:2005}. For the neutrino transport, we employ the Isotropic Diffusion Source Approximation (IDSA) for the electron neutrinos $\nu_e$ and electron anti-neutrinos $\overline{\nu}_{e}$ \citep{Liebendoerfer.IDSA:2009}, and an Advanced Spectral Leakage scheme (ASL) for the heavy-lepton flavor neutrinos $\nu_x = \nu_{\mu}, \overline{\nu}_{\mu}, \nu_{\tau}, \overline{\nu}_{\tau}$ \citep{Perego2014}. We discretize the neutrino energy using 20 geometrically increasing energy bins, in the range $3 \, {\rm MeV} \leq E_{\nu} \leq 300\, {\rm MeV}$. The neutrino reactions included in the IDSA and ASL scheme are summarized in Table~\ref{tab:nu_interact}. They represent the minimal set of the most relevant weak processes in the post-bounce phase, particularly up to the onset of an explosion. Note that electron captures
on heavy nuclei and neutrino scattering on electrons, which are relevant in the collapse phase \citep[see, for example, ][]{Mezzacappa1993a,Mezzacappa1993c}, are not included explicitly in the form of reaction rates, but as part of the parameterized deleptonization scheme.
In the ASL scheme, we omit nucleon-nucleon bremsstrahlung, $N+N \leftrightarrow N+N +\nu_x+ \bar{\nu}_x$, where $N$ denotes any nucleon \citep[see, for example,][]{Hannestad98,Bartl14}. We have observed that its inclusion would overestimate $\mu$ and $\tau$ neutrino luminosities during the PNS cooling phase, due to the missing neutrino thermalization provided by inelastic scattering on electrons and positrons at the PNS surface. However, we have also tested that the omission of this process does not significantly change the $\mu$ and $\tau$ neutrino luminosities predicted by the ASL scheme before the explosion sets in. Thus, neglecting $N$-$N$ bremsstrahlung is only relevant for the cooling phase, where its omission improves the overall agreement with simulations obtained with detailed Boltzmann neutrino transport \citep[e.g.,][]{fischer10}.
The equation of state (EOS) of \citet{Hempel.SchaffnerBielich:2010} (HS) that we are using includes various light nuclei, such as alphas, deuterons, or tritons (details below). However, the inclusion of all neutrino reactions for this detailed nuclear composition would be beyond the standard approach implemented in current supernova simulations, where only scattering on alpha particles is typically included. In order not to neglect the contributions of the other light nuclei completely, we have added their mass fractions to the unbound nucleons. This is motivated by their very weak binding energies and, therefore, by the idea that they behave similarly to the unbound nucleon component.
In all our models we use 180 radial zones which include the progenitor star up to the helium shell. This corresponds to a radius of $R \approx (1.3-1.5) \times 10^{10}$~cm. With this setup, we model the collapse, bounce, and onset of the explosion. The grid of AGILE is adaptive, with more resolution where the gradients of the thermodynamic variables are steeper. Thus, in the post-bounce and explosion phases, the surface of the PNS and the shock are the better resolved regions. The simulations are run for a total time of 5~s, corresponding to $\gtrsim 4.6$~s after core bounce. At this time, the shock has not yet reached the external edge of our computational domain.
\begin{table}[h,t]
\begin{center}
\caption{Relevant neutrino reactions.\label{tab:nu_interact}}
\begin{tabular}{lll}
\tableline \tableline
Reactions & Treatment & Reference \\
\tableline
$e^- + p \leftrightarrow n + \nu_e$ & IDSA & a \\
$e^+ + n \leftrightarrow p + \overline{\nu}_e$ & IDSA & a \\
$N + \nu \leftrightarrow N + \nu$ & IDSA \& ASL & a \\
$(A,Z) + \nu \leftrightarrow (A,Z) + \nu$ & IDSA \& ASL & a \\
$e^- + e^+ \leftrightarrow \nu_{\mu,\tau} + \overline{\nu}_{\mu,\tau}$ & ASL & a, b \\
\hline
\end{tabular}
\tablecomments{Nucleons are denoted by $N$. The nucleon charged current rates are based on \citet{Bruenn85}, but effects of mean-field interactions \citep{reddy98,roberts12,martinez12,hempel14} are taken into account.}
\tablerefs{ (a) \citet{Bruenn85}; (b) \citet{Mezzacappa1993b}.}
\end{center}
\end{table}
\subsection{Equation of state and nuclear reactions}
\label{sec_eos}
For the high-density plasma in nuclear statistical equilibrium (NSE) the tabulated microphysical EOS HS(DD2) is used. This supernova EOS is based on the model of \citet{Hempel.SchaffnerBielich:2010}. It uses the DD2 parametrization for the nucleon interactions \citep{typel10}, the nuclear masses from \citet{audi2003}, and the \textit{Finite Range Droplet Model} \citep{moller95}. In total, 8140 nuclei are included, up to $Z=136$ and up to the neutron drip line. The HS(DD2) EOS was first introduced in \citet{fischer14}, where its characteristic properties were discussed and general EOS effects in core-collapse supernova simulations were investigated. \citet{fischer14} showed that the HS(DD2) EOS gives a better agreement with constraints from nuclear experiments and astrophysical observations than the commonly used EOSs of \citet{lattimer91} and \citet{shen98}. Furthermore, additional degrees of freedom, such as various light nuclei and a statistical ensemble of heavy nuclei, are taken into account. The nucleon mean-field potentials, which are used in the charged-current rates, have been calculated consistently \citep{hempel14}. The maximum mass of a cold neutron star for the HS(DD2) EOS is 2.42~M$_\odot$ \citep{fischer14}, which is well above the limits from \citet{demorest2010} and \citet{antoniadis2013}.
The EOS employed in our simulations includes an extension to non-NSE conditions. In the non-NSE regime the nuclear composition is described by 25 representative nuclei from neutrons and protons to iron-group nuclei. The chosen nuclei are the alpha-nuclei $^4$He, $^{12}$C, $^{16}$O, $^{20}$Ne, $^{24}$Mg, $^{28}$Si, $^{32}$S, $^{36}$Ar, $^{40}$Ca, $^{44}$Ti, $^{48}$Cr, $^{52}$Fe, $^{56}$Ni, complemented by $^{14}$N and the following asymmetric isotopes: $^{3}$He, $^{36}$S, $^{50}$Ti, $^{54}$Fe, $^{56}$Fe, $^{58}$Fe, $^{60}$Fe, $^{62}$Fe, and $^{62}$Ni. With these nuclei it is possible to achieve a mapping of the abundances from the progenitor calculations onto our simulations which is consistent with the provided electron fraction, i.e., maintaining charge neutrality. All the nuclear masses $M_i$ are taken from \citet{audi2003}. To advect the nuclear composition inside the adaptive grid, we implement the Consistent Multi-fluid Advection (CMA) method by \citet{plewa98}. For given abundances, the non-NSE EOS is
calculated based on the same underlying description used in the NSE regime \citep{Hempel.SchaffnerBielich:2010}, but with the following modifications: excited states of nuclei are neglected, excluded volume effects are not taken into account, and the nucleons are treated as non-interacting Maxwell-Boltzmann gases. Such a consistent description of the non-NSE and NSE phases prevents spurious effects at the transitions between the two regimes.
Outside of NSE, an approximate $\alpha$-network is used to follow the changes in composition. Explosive helium-, carbon-, neon-, and oxygen-burning are currently implemented in the simulation. Note that the thermal energy generation by the nuclear reactions is fully incorporated via the detailed non-NSE treatment. We do not have to calculate explicitly any energy liberation, but just the changes in the abundances. Within our relativistic treatment of the EOS (applied both in the non-NSE and NSE regime) energy conservation means that the specific internal energy $e_{\rm int}$ is not changed by nuclear reactions. This is due to the fact that $e_{\rm int}$ includes the specific rest mass energy $e_{\rm mass}$, where $e_{\rm mass}$ is given by the sum over the masses of all nuclei weighted with their yield $Y_i=X_i/A_i$,
\begin{eqnarray}
e_{\rm mass}=\sum_i Y_i M_i \; .
\end{eqnarray}
However, if we define the specific thermal energy $e_{\rm th}$ as
\begin{equation}
e_{\rm th} = e_{\rm int} - e_{\rm mass} ,
\label{eqn:e_th}
\end{equation}
the nuclear reactions will decrease the rest mass energy (i.e., increase the binding) and consequently increase the thermal energy. This treatment of the non-NSE EOS is consistent with the convention used in all high-density NSE EOSs.
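As a simple illustration of this bookkeeping, the following Python lines evaluate $e_{\rm mass}=\sum_i Y_i M_i$ and Equation~(\ref{eqn:e_th}) for a toy composition; the mass values are placeholder atomic masses and the example is not taken from our simulations.
\begin{verbatim}
# Minimal sketch of the energy bookkeeping: e_th = e_int - e_mass,
# with e_mass = sum_i Y_i M_i and Y_i = X_i / A_i.
M_U = 931.494                      # MeV per atomic mass unit
masses = {'he4': 4.002602*M_U,     # placeholder atomic masses [MeV]
          'ni56': 55.942128*M_U}   # (electron contributions cancel
A = {'he4': 4, 'ni56': 56}         #  per baryon in this comparison)

def e_mass(X):
    """Rest-mass energy per baryon [MeV] for mass fractions X."""
    return sum(X[i]/A[i]*masses[i] for i in X)

def e_th(e_int, X):
    """Thermal energy per baryon [MeV]: e_int minus rest masses."""
    return e_int - e_mass(X)

# Burning He-4 to Ni-56 at fixed e_int lowers e_mass and hence raises
# e_th by about 1.6 MeV per baryon (the released binding energy):
print(e_mass({'he4': 1.0}) - e_mass({'ni56': 1.0}))
\end{verbatim}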
Due to limitations of our approximate $\alpha$-network, and because we do not include any quasi-statistical equilibrium description, we apply a parameterized burning for temperatures between 0.3 and 0.4~MeV. A temperature-dependent burning timescale is introduced, which gradually transforms the initial non-NSE composition towards NSE. For temperatures of 0.4~MeV and above, the non-NSE phase always reaches a composition close to NSE and its thermodynamic properties become very similar to the NSE phase of the HS(DD2) EOS. Even though the two phases are based on the same input physics, small but unavoidable differences can remain, due to the limited set of nuclei considered in non-NSE. To ensure a smooth transition for all conditions, we have introduced a transition region as an additional means of thermodynamic stability. We have chosen a parameterization in terms of temperature and implement a linear transition in the temperature interval from 0.40~MeV to 0.44~MeV. We check that the basic thermodynamic
stability condition $ds/dT>0$ is always fulfilled.
\subsection{Initial models}
For this study, we use solar-metallicity, non-rotating stellar models from the stellar evolution code {\tt KEPLER} \citep{Woosley.Heger:2002}. Our set includes 16 pre-explosion models with zero-age main sequence (ZAMS) mass between 18.0~M$_{\odot}$ and 21.0~M$_{\odot}$ in increments of 0.2~M$_{\odot}$. These models have been selected to have ZAMS mass around 20~M$_{\odot}$, similar to the progenitor of SN~1987A \citep[e.g.,][]{PP.sn1987a:2007}. We label the models by their ZAMS mass. In Figure \ref{fig:prog_dens}, the density profiles of the progenitor models are shown. For each of them the compactness parameter $\xi_{M}$ is defined following \citet{OConnor.Ott:2011} by the ratio of a given mass $M$ and the radius $R(M)$ which encloses this mass:
\begin{equation}
\xi_{M} \equiv \frac{M/M_{\odot}}{R(M)/1000\mathrm{km}}.
\label{eq:compactness}
\end{equation}
Typically, either $\xi_{1.75}$ or $\xi_{2.5}$ is used. The compactness can be computed at the onset of collapse or at bounce, as suggested by \citet{OConnor.Ott:2011}. For our progenitors, the difference in the compactness parameter between these two moments is not significant for our discussions. Thus, for simplicity, in the following we will use $\xi_{1.75}$ computed at the onset of the collapse. The progenitor models considered here fall into two distinct families of compactness: low compactness ($\xi_{1.75} < 0.4$; LC models) and high compactness ($\xi_{1.75} > 0.45$; HC models), see Table \ref{tab:prog_compact}. Figure \ref{fig:prog_compact} shows the compactness as function of ZAMS mass for the progenitors of this study. The non-monotonic behavior is a result of the evolution before collapse.
The mass range between 19 and 21~M$_{\odot}$\xspace is particularly prone to variations of the compactness.
For a detailed discussion of the behavior of the compactness as function of ZAMS mass see \citet{sukhbold.woosley:2014}.
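For illustration, the compactness of Equation~(\ref{eq:compactness}) can be extracted from a progenitor profile with a few lines of code. In the following Python sketch the profile arrays are placeholders; a real application would read the enclosed mass and radius from the {\tt KEPLER} model files.
\begin{verbatim}
# Minimal sketch of the compactness: xi_M = (M/Msun)/(R(M)/1000 km).
import numpy as np

def compactness(m_enc, r, M=1.75):
    """m_enc: enclosed mass [Msun], ascending; r: radius [cm]."""
    R_M = np.interp(M, m_enc, r)   # radius enclosing the mass M
    return M / (R_M / 1.0e8)       # 1000 km = 1e8 cm

# Placeholder toy profile (monotonic, purely illustrative):
m_enc = np.linspace(0.1, 4.0, 400)
r = 1.0e7 * m_enc**3
print("xi_1.75 =", compactness(m_enc, r))
\end{verbatim}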
\begin{figure}[htp!]
\includegraphics[width=0.5\textwidth]{fig01_prog-dens-mass.eps}
\caption{Density profiles as a function of the enclosed mass for the progenitor models included in this study (ZAMS masses from 18.0~M$_{\odot}$\xspace to 21.0~M$_{\odot}$\xspace). HC models are shown in red, LC models are shown in blue. The vertical line in the inset is located at 1.75~M$_{\odot}$\xspace and indicates the mass at which the compactness parameter $\xi_{1.75}$ is determined (see Equation~\ref{eq:compactness}).
\label{fig:prog_dens}}
\end{figure}
\begin{figure}[htp!]
\includegraphics[width=0.5\textwidth]{fig02_compactness.eps}
\caption{Compactness $\xi_{1.75}$ as function of ZAMS mass for our pre-explosion models at the onset of collapse (green crosses) and at bounce (magenta pluses).
\label{fig:prog_compact}
}
\end{figure}
\begin{table*}[h,t]
\begin{center}
\caption{Progenitor properties \label{tab:prog_compact}}
\begin{tabular}{llllllllll}
\tableline \tableline
M$_{\rm ZAMS}$& \multicolumn{2}{c}{$\xi_{1.75}$} & \multicolumn{2}{c}{$\xi_{2.5}$} & M$_{\rm prog}$ & M$_{\rm Fe}$ & M$_{\rm CO}$ & M$_{\rm He}$ & M$_{\rm env}$\\
(M$_{\odot}$) & at collapse & at bounce & at collapse & at bounce & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$)\\
\tableline
18.2 & 0.37 & 0.380 & 0.173 & 0.173 & 14.58 & 1.399 & 4.174 & 5.395 & 9.186\\
18.6 & 0.365 & 0.375 & 0.170 & 0.170 & 14.85 & 1.407 & 4.317 & 5.540 & 9.313\\
18.8 & 0.357 & 0.366 & 0.166 & 0.166 & 15.05 & 1.399 & 4.390 & 5.613 & 9.435\\
19.6 & 0.282 & 0.288 & 0.118 & 0.117 & 13.37 & 1.461 & 4.959 & 6.243 & 7.125\\
19.8 & 0.334 & 0.341 & 0.135 & 0.135 & 14.54 & 1.438 & 4.867 & 6.112 & 8.428\\
20.0 & 0.283 & 0.287 & 0.125 & 0.125 & 14.73 & 1.456 & 4.960 & 6.215 & 8.517\\
20.2 & 0.238 & 0.241 & 0.104 & 0.104 & 14.47 & 1.458 & 5.069 & 6.342 & 8.125\\
\tableline
18.0 & 0.463 & 0.485 & 0.199 & 0.199 & 14.50 & 1.384 & 4.104 & 5.314 & 9.187\\
18.4 & 0.634 & 0.741 & 0.185 & 0.185 & 14.82 & 1.490 & 4.238 & 5.459 & 9.366\\
19.0 & 0.607 & 0.715 & 0.191 & 0.191 & 15.03 & 1.580 & 4.461 & 5.693 & 9.341\\
19.2 & 0.633 & 0.737 & 0.191 & 0.192 & 15.08 & 1.481 & 4.545 & 5.760 & 9.325\\
19.4 & 0.501 & 0.535 & 0.185 & 0.185 & 15.22 & 1.367 & 4.626 & 5.860 & 9.365\\
20.4 & 0.532 & 0.594 & 0.192 & 0.192 & 14.81 & 1.500 & 5.106 & 6.376 & 8.433\\
20.6 & 0.742 & 0.95 & 0.278 & 0.279 & 14.03 & 1.540 & 5.260 & 6.579 & 7.450\\
20.8 & 0.726 & 0.904 & 0.271 & 0.272 & 14.34 & 1.528 & 5.296 & 6.609 & 7.735\\
21.0 & 0.654 & 0.764 & 0.211 & 0.212 & 13.00 & 1.454 & 5.571 & 6.969 & 6.026\\
\tableline
\end{tabular}
\tablecomments{ZAMS mass, compactness $\xi_{1.75}$ and $\xi_{2.5}$ at the onset of collapse and at bounce, total progenitor mass at collapse (M$_{\rm prog}$), mass of the iron core (M$_{\rm Fe}$), carbon-oxygen core (M$_{\rm CO}$), and helium core (M$_{\rm He}$), and mass of the hydrogen-rich envelope (M$_{\rm env}$) at collapse, for all the progenitor models included in this study. The top part of the table includes the low-compactness progenitors (LC; $\xi_{1.75}<0.4$ at collapse), the bottom part includes the high-compactness progenitors (HC; $\xi_{1.75}>0.45$ at collapse).
}
\end{center}
\end{table*}
\subsection{The PUSH method}
\label{subsec:push}
\subsubsection{Rationale}
The goal of PUSH is to provide a computationally efficient framework to explode massive stars in spherical symmetry to study multiple aspects of core-collapse supernovae. The usage of a spectral transport scheme to compute the $\nu_e$ and $\bar{\nu}_e$ luminosities provides a more accurate evolution of $Y_e$ of the innermost ejecta, which is a crucial aspect for nucleosynthesis. The neutrino luminosities include the accretion contribution, as well as the luminosity coming from PNS mantle and core. The accretion luminosity depends not only on the accretion rate but also on the evolution of the mass and radius of the PNS, which is treated accurately and self-consistently in our models.
In order to trigger explosions in the otherwise non-exploding spherically symmetric simulations, we rely on the delayed neutrino-driven mechanism, which was first proposed by \citet{bethe_85}. Despite the lack of consensus and convergence of numerical results between different groups, recent multi-dimensional simulations of CCSNe have shown that convection, turbulence and SASI in the shocked layers increase the efficiency at which $\nu_e$ and $\bar{\nu}_e$ are absorbed inside the gain region, compared with spherically symmetric models \citep[see, for example,][]{janka96,Nordhaus2010,Hanke2012,Hanke2013,Dolence2013, Couch2013a,Melson2015}. This effect, together with the simultaneous increase in time that a fluid particle spends inside the gain region \citep[e.g.,][]{Murphy2008, handy14}, provides more favorable conditions for the development of an explosion. Moreover, according to multi-dimensional explosion models, the shock revival is followed by a phase where continued accretion and shock expansion coexist
over a time scale of $\gtrsim 1 \, {\rm s}$ \citep[e.g.,][]{Scheck2006,Marek2009,Bruenn2014,Melson2015}. During this phase, matter accreted through low-entropy downflows onto the PNS continues to power an accretion luminosity. The re-ejection of a fraction of this matter by neutrino heating accelerates the shock and increases the explosion energy. The length of this phase, the exact amount of injected energy, and its deposition rate are still uncertain.
Inspired by the increase of the net neutrino heating that a fluid element experiences due to the above mentioned multi-dimensional effects, PUSH provides a more efficient neutrino energy deposition inside the gain region in spherically symmetric models. However, unlike other methods that use electron flavor neutrinos to trigger artificial 1D explosions (see Section \ref{sec:intro}), in PUSH we deposit a fraction of the luminosity of the heavy flavor neutrinos ($\nu_x$'s) behind the shock to ultimately provide successful explosion conditions. This additional energy deposition is calibrated by comparing the explosion energies and nucleosynthesis yields obtained from our progenitor sample with observations of SN~1987A. This ensures that our artificially increased heating efficiency has an empirical foundation. Thus, we can make predictions in the sense of an effective model.
Despite the fact that $\nu_x$'s contribute only marginally to the energy deposition inside the gain region in self-consistent models \citep[see, for example,][]{bethe_85} and that they only show a weak dependence on the temporal variation of the accretion rate \citep[see, for example,][]{Liebendoerfer2004}, their usage presents a number of advantages for our purposes. They represent one of the largest energy reservoirs available, but they do not directly change the electron fraction $Y_e$ (unlike electron flavor neutrinos). This allows us to trigger an explosion in 1D simulations without modifying $\nu_e$ and $\bar{\nu}_e$ luminosities nor changing charged current reactions. The $\nu_x$ luminosities are calculated consistently within our model. They include dynamical feedback from the accretion history, progenitor properties of each individual model, and the cooling of the forming compact object. As shown by \citet{OConnor.Ott:2013} in broad progenitor studies, during the accretion phase that precedes the
shock revival, the properties of the $\nu_x$ spectral fluxes correlate significantly with the properties of $\nu_e$'s and $\bar{\nu}_e$'s. Unlike the electron (anti-)neutrino luminosities, that in spherically symmetric models decrease suddenly once the shock has been revived, $\nu_x$ luminosities are only marginally affected by the development of an explosion. This allows PUSH to continue injecting energy inside the expanding shock for a few hundreds of milliseconds after the explosion has set in. Moreover, since this energy injection is provided by the $\nu_x$ fluxes, it changes significantly between different progenitors and correlates with the $\nu_e$ and $\bar{\nu}_e$ accretion luminosities (at least, during the accretion phase).
\subsubsection{Implementation}
The additional energy deposition, which represents the main feature of PUSH, is achieved by introducing a local heating term, $Q^+_{\mathrm{push}} (t,r)$ (energy per unit mass and time), given by
\begin{equation}
Q^+_{\mathrm{push}} (t,r) = 4 \, \mathcal{G}(t) \int_0^{\infty} q^+_{\mathrm{push}}(r,E) \, dE ,
\label{eq:push_integral}
\end{equation}
where
\begin{equation}
q^+_{\mathrm{push}}(r,E) \equiv
\sigma_0 \;
\frac{1}{4 \, m_b}
\left( \frac{E}{m_e c^2} \right)^2
\frac{1}{4 \pi r^2}
\left( \frac{dL_{\nu_x}}{dE} \right)
\mathcal{F}(r,E) ,
\label{eq:push_qdot}
\end{equation}
with
\begin{equation}
\sigma_0 = \frac{4 G_F^2 \left(m_e c^2 \right)^2}{\pi \left( \hbar c \right)^4 } \approx 1.759 \times 10^{-44} {\rm cm^2}
\end{equation}
being the typical neutrino cross-section, $m_b \approx 1.674 \times 10^{-24}{\rm g}$ an average baryon mass,
and $(dL_{\nu_x}/dE)/(4 \pi r^2)$ the spectral energy flux for any single $\nu_x$ neutrino species with energy $E$.
Note that all four heavy neutrino flavors are treated identically by the ASL scheme, and contribute equally to $Q^+_{\mathrm{push}}$
(see the factor 4 appearing in Equation (\ref{eq:push_integral})).
The term $\mathcal{F}(r,E)$ in Equation (\ref{eq:push_qdot}) defines the spatial location where $Q^+_{\mathrm{push}} (t,r)$ is active:
\begin{equation}
\mathcal{F}(r,E) =
\left\{
\begin{array}{ll}
0 & \mbox{if} \quad ds/dr > 0 \quad \mbox{or} \quad \dot{e}_{\nu_e,\overline{\nu}_e} < 0 \\
\exp (- \tau_{\nu_e}(r,E)) & \mbox{otherwise} \\
\end{array}
\right. ,
\label{eq:push_F}
\end{equation}
where $\tau_{\nu_e}$ denotes the (radial) optical depth of the electron neutrinos, $s$ is the matter entropy and $\dot{e}_{\nu_e,\bar{\nu}_e}$ the net specific energy rate due to electron neutrinos and anti-neutrinos.
The two criteria above are a crucial ingredient in our description of triggering CCSN explosions:
PUSH is only active where electron neutrinos are heating ($\dot{e}_{\nu_e,\overline{\nu}_e} > 0$) and where neutrino-driven convection can occur ($ds/dr < 0$).
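To make the structure of Equations~(\ref{eq:push_integral})--(\ref{eq:push_F}) concrete, the following Python sketch evaluates $Q^+_{\rm push}$ on a discretized energy grid, mirroring the 20 geometrically spaced bins used in our transport scheme. The spectral flux and optical depth below are placeholders, not output of our simulations.
\begin{verbatim}
# Minimal sketch of Q+_push: energy deposition rate per unit mass from
# the nu_x spectral flux, active only where F(r,E) is nonzero.
import numpy as np

SIGMA0 = 1.759e-44                 # cm^2, reference cross section
M_B    = 1.674e-24                 # g, average baryon mass
ME_C2  = 0.511                     # MeV, electron rest energy

E  = np.geomspace(3.0, 300.0, 20)  # MeV, 20 geometric energy bins
dE = np.gradient(E)                # approximate bin widths

def Q_push(r, dL_dE, tau_nue, heating, ds_dr, G=1.0):
    """dL_dE: spectral luminosity of one nu_x species [erg/s/MeV]."""
    if ds_dr > 0.0 or not heating:  # F = 0: no convection / no heating
        return 0.0
    F = np.exp(-tau_nue)            # suppression at high optical depth
    q = (SIGMA0/(4.0*M_B) * (E/ME_C2)**2
         * dL_dE/(4.0*np.pi*r**2) * F)      # erg/s/g per unit energy
    return 4.0 * G * np.sum(q * dE)         # 4 heavy-lepton species

# Placeholder spectrum and optical depth, for illustration only:
dL_dE = 1.0e51 * np.exp(-E/10.0)
tau   = 0.1 * np.ones_like(E)
print(Q_push(1.5e7, dL_dE, tau, heating=True, ds_dr=-1.0))
\end{verbatim}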
The term $\mathcal{G}(t)$ in Equation~(\ref{eq:push_integral}) determines the temporal behavior of $Q^+_{\mathrm{push}} (t,r)$.
Its expression reads
\begin{equation}
\mathcal{G} (t) = k_{{\rm push}} \times
\left\{
\begin{array}{ll}
0 & t \leq t_{\rm{on}} \\
\left( t-t_{\rm on} \right)/t_{\rm rise} & t_{\rm on} < t \leq t_{\rm on} + t_{\rm rise} \\
1 & t_{\rm on} + t_{\rm rise}< t \leq t_{\rm off} \\
\left( t_{\rm off} + t_{\rm rise} - t \right)/t_{\rm rise} & t_{\rm off} < t \leq t_{\rm off} + t_{\rm rise} \\
0 & t > t_{\rm off} + t_{\rm rise}
\end{array}
\right. ,
\label{eq:push_G}
\end{equation}
and it is sketched in Figure \ref{fig:g_factor}.
Note that throughout the article we always measure the time relative to bounce, unless noted otherwise.
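For reference, the piecewise definition of $\mathcal{G}(t)$ translates directly into code; the short Python sketch below uses the fiducial parameter values quoted in Figure~\ref{fig:g_factor} and is purely illustrative.
\begin{verbatim}
# Minimal sketch of G(t): linear ramp-up over t_rise, plateau at
# k_push until t_off, then linear ramp-down over t_rise.
def G(t, k_push=3.0, t_on=0.08, t_rise=0.15, t_off=1.0):
    """Post-bounce time t in seconds; returns the PUSH prefactor."""
    if t <= t_on:
        return 0.0
    if t <= t_on + t_rise:
        return k_push * (t - t_on) / t_rise
    if t <= t_off:
        return k_push
    if t <= t_off + t_rise:
        return k_push * (t_off + t_rise - t) / t_rise
    return 0.0

print([round(G(t), 2) for t in (0.05, 0.10, 0.50, 1.05, 1.20)])
\end{verbatim}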
The {\em cumulative energy} deposited by PUSH, $E_{\rm push}$, can be calculated from the {\em energy deposition rate}
${\rm d}E_{\rm push}/{\rm d}t$ as
\begin{equation}
E_{\rm push}(t) = \int_{t_{\rm on}}^t \: \left( \frac{{\rm d}E_{\rm push}}{{\rm d}t} \right) \, {\rm d}t' =
\int_{t_{\rm on}}^t \: \left( \int_{V_{\rm gain}} \: Q_{\rm push}^+ \, \rho \, {\rm d}V \right) {\rm d}t' , \label{eq_epush}
\end{equation}
where $V_{\rm gain}$ is the volume of the gain region.
Both these quantities have to be distinguished from the corresponding energy and energy rate obtained by IDSA:
\begin{equation}
E_{\rm idsa}(t) = \int_{0}^{t} \: \left( \frac{{\rm d}E_{\rm idsa}}{{\rm d}t} \right) {\rm d}t' =
\int_{0}^{t} \: \left( \int_{V_{\rm gain}} \: \dot{e}_{\nu_e,\bar{\nu}_e} \, \rho \, {\rm d}V \right) {\rm d}t' . \label{eq_eidsa}
\end{equation}
\begin{figure}[htp!]
\includegraphics[width=0.5\textwidth]{fig03_push_g_factor.eps}
\caption{The function $\mathcal{G}(t)$ determines the temporal behavior of the heating due to PUSH.
The quantity $t_{\rm on}$ is robustly set by multi-dimensional models.
We consider a value of 80~ms in our calculations and a value of 120~ms for testing.
$t_{\rm rise}$ and $k_{\rm push}$ are set by our calibration procedure,
spanning a range from 50~ms to 250~ms, and from 0 (PUSH off) to $\sim$4, respectively.
Since we assume that the explosion takes place within the first second after core bounce,
we use $t_{\rm off} = 1 {\rm s}$.
\label{fig:g_factor}}
\end{figure}
The definition of $\mathcal{G}(t)$ introduces a set of (potentially) free parameters:
\begin{itemize}
\item
$k_{\rm push}$ is a global multiplication factor that controls directly the amount of extra heating provided by PUSH. The choices of $\sigma_0$ as reference cross-section and of the $\mu$ and $\tau$ neutrino luminosity as energy reservoir suggest $k_{\rm push} \gtrsim 1$.
\item
$t_{\rm on}$ sets the time at which PUSH starts to act. We relate $t_{\rm on}$ to the time when deviations from spherically symmetric behavior appear in multi-dimensional models. Matter convection in the gain region sets in once the advection time scale $\tau_{\rm adv}$ and the convective growth time scale $\tau_{\rm conv}$ satisfy $\tau_{\rm adv} / \tau_{\rm conv} \gtrsim 3$ \citep{Foglizzo2006}. For all the models we have explored, this happens around $t = 0.06-0.08 \, {\rm s}$. In the above estimates, $\tau_{\rm adv} = M_{\rm gain}/\dot{M}_{\rm shock}$, where $\dot{M}_{\rm shock}$ is the accretion rate at the shock and $M_{\rm gain}$ the mass in the gain region, and $\tau_{\rm conv} = f_{B-V}^{-1}$, where $f_{B-V}$ is the Brunt-V\"{a}is\"{a}l\"{a} frequency. Considering that $\tau_{\rm conv} \sim 4-5 \, {\rm ms}$, we expect $t_{\rm on} \sim 0.08-0.10 \, {\rm s}$, in agreement with recent multi-dimensional simulations \citep[see, for example, the 1D-2D comparison of the shock position in][]{Bruenn.Mezzacappa.ea:2013}. A schematic evaluation of this criterion is sketched after this list.
\item
$t_{\rm rise}$ defines the time scale over which $\mathcal{G}(t)$ increases from zero to $k_{\rm push}$\xspace. We connect $t_{\rm rise}$ with the time scale that characterizes the growth of the largest multi-dimensional perturbations between the shock radius ($R_{\rm shock}$) and the gain radius ($R_{\rm gain}$) \citep[e.g.][]{janka96}. \cite{Foglizzo2006} showed that convection in the gain region can be significantly stabilized by advection, especially if $\tau_{\rm adv} / \tau_{\rm conv}$ only marginally exceeds 3, and that the growth rate of the fastest growing mode is diminished. Thus, $t_{\rm rise} \gg \tau_{\rm conv}$. On the other hand, a lower limit to $t_{\rm rise}$ is represented by the overturn time scale, $\tau_{\rm overturn}$, defined as
\begin{equation}
\tau_{\rm overturn} \sim \frac{\pi (R_{\rm shock}-R_{\rm gain})}{\langle v \rangle_{\rm gain}} ,
\end{equation}
where $ \langle v \rangle_{\rm gain} $ is the average fluid velocity inside the gain region. In our simulations, we have found $\tau_{\rm overturn} \approx 0.05 \, {\rm s}$ around and after $t_{\rm on}$. In the case of a contracting shock, the SASI is also expected to develop around $0.2-0.3 \, {\rm s}$ after bounce \citep{Hanke2013}. Hence, we assume $0.05 \, {\rm s} \lesssim t_{\rm rise} \lesssim \left( 0.30 \, {\rm s} - t_{\rm on} \right) $.
\item
$t_{\rm off}$ sets the time after which PUSH starts to be switched off. We expect neutrino-driven explosions to develop for $t \lesssim 1$~s due to the fast decrease of the luminosities during the first seconds after core bounce. Hence, we fix $t_{\rm off} = 1$~s. PUSH is not switched off suddenly at the onset of the explosion, but rather starts decreasing naturally even before 1~s after core bounce due to the decreasing neutrino luminosities
and due to the rarefaction of the gain region above the PNS.
The subsequent injection of energy by neutrinos in the accelerating shock is qualitatively
consistent with multi-dimensional simulations, where accretion and explosion can coexist during the early stages of the shock expansion. The decrease of ${\rm d}E_{\rm push}/{\rm d}t$
on a time scale of a few hundreds of milliseconds after
the explosion has been launched makes our results largely independent of the choice of $t_{\rm off}$ for explosions happening not too close to it.
\end{itemize}
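As announced in the discussion of $t_{\rm on}$ above, the convection criterion of \citet{Foglizzo2006} can be evaluated schematically as follows. The time series in this Python sketch are placeholders for the corresponding quantities extracted from a simulation.
\begin{verbatim}
# Minimal sketch: estimate t_on as the first time with
# tau_adv/tau_conv >= 3, where tau_adv = M_gain/Mdot_shock and
# tau_conv = 1/f_BV. All input arrays are placeholders.
import numpy as np

t = np.linspace(0.0, 0.2, 201)           # s, post-bounce time
M_gain     = 0.01 + 0.05*t               # Msun in the gain region
Mdot_shock = 0.8*np.exp(-3.0*t) + 0.2    # Msun/s at the shock
f_BV       = 220.0*np.ones_like(t)       # 1/s, Brunt-Vaisala frequency

tau_adv  = M_gain / Mdot_shock           # advection time scale [s]
tau_conv = 1.0 / f_BV                    # convective growth time [s]

i_on = np.argmax(tau_adv / tau_conv >= 3.0)
print("estimated onset of convection: t = %.3f s" % t[i_on])
\end{verbatim}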
While $t_{\rm on}$ is relatively well constrained and $t_{\rm off}$ is robustly set, $t_{\rm rise}$ and especially $k_{\rm push}$ are still undefined. We will discuss their impact on the model and on the explosion properties in detail in Sections \ref{sec:method_kpush}-\ref{sec:method_kpush_trise}. Ultimately, we will fix them using a calibration procedure detailed in Section \ref{sec:method_sn1987a}.
\subsection{Post-processing analysis}
For the analysis of our results we determine several key quantities for each simulation. These quantities are obtained from a post-processing approach. We distinguish between the {\em explosion properties}, such as the explosion time, the mass cut, or the explosion energy, and the {\em nucleosynthesis yields}. The former are calculated from the hydrodynamics profiles. The latter are obtained from detailed nuclear network calculations for extrapolated trajectories.
\subsubsection{Accretion rates and explosion properties}
\label{sec:def_eexpl}
For the accretion process, we distinguish between the accretion rate at the shock front, $\dot{M}_{\rm shock} = {\rm d} M(R_{\rm shock}) / {\rm d} t$, and the accretion rate on the PNS, $\dot{M}_{\rm PNS} = {\rm d} M(R_{\rm PNS}) / {\rm d} t$. In these expressions, $M(R)$ is the baryonic mass enclosed in a radius $R$, $R_{\rm shock}$ is the shock radius, and $R_{\rm PNS}$ is the PNS radius that satisfies the condition $\rho(R_{\rm PNS})=10^{11}{\rm g \, cm^{-3}}$.
We consider the explosion time $t_{\rm expl}$ as the time when the shock reaches $500 \, {\rm km}$, measured with respect to core bounce (cf. \citealt{Ugliano.Janka.ea:2012}). In all our models, the velocity of matter at the shock front has turned positive at that radius and the explosion has been irreversibly launched. There is no unique definition of $t_{\rm expl}$ in the literature and some other studies (cf. \citealt{janka96,handy14}) use different definitions, e.g., the time when the explosion energy increases above $10^{48}$~erg. However, we do not expect that the different definitions give qualitatively different explosion times.
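Operationally, $t_{\rm expl}$ is read off the shock trajectory; a minimal Python sketch (with a placeholder trajectory) is:
\begin{verbatim}
# Minimal sketch: explosion time as the post-bounce time at which the
# shock radius first reaches 500 km (toy trajectory for illustration).
import numpy as np

t = np.linspace(0.0, 1.0, 1001)          # s, post-bounce time
R_shock = 1.5e7 * np.exp(2.5*t)          # cm, toy expanding shock

t_expl = t[np.argmax(R_shock >= 5.0e7)]  # 500 km = 5e7 cm
print("t_expl = %.3f s" % t_expl)
\end{verbatim}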
For the following discussion, we will use the total energy of the matter from a given mass shell $m_0$ up to the stellar surface:
\begin{equation}
E_{\mathrm{total}}(m_0,t)=-\int_M^{m_0} \: e_{\rm total}(m,t) \, {\rm d}m \;.
\label{eq:total energy}
\end{equation}
$M$ is the enclosed baryonic mass at the surface of the star and $m_0$ is a baryonic mass coordinate ($ 0 \leq m_0 \leq M$). $e_{\rm total}$ is the specific total energy, given by
\begin{equation}
e_{\rm total}=e_{\rm int}+e_{\rm kin}+e_{\rm grav} \; ,
\end{equation}
i.e., the sum of the (relativistic) internal, kinetic, and gravitational specific energies. For all these quantities we make use of the general-relativistic expressions in the laboratory frame \citep{fischer10}. The integral in Equation~(\ref{eq:total energy}) includes both the portion of the star evolved in the hydrodynamical simulation and the outer layers, which are considered as stationary profiles from the progenitor structure.
The explosion energy emerges from different physical contributions (see, for example, the appendix of \cite{Scheck2006} and the discussion in \cite{Ugliano.Janka.ea:2012}). In our model, we are taking into account:
(i) the total energy of the neutrino-heated matter that causes the shock revival;
(ii) the nuclear energy released by the recombination of nucleons and alpha particles into heavy nuclei at the transition to non-NSE conditions;
(iii) the total energy associated with the neutrino-driven wind developing after the explosion up to the end of the simulation;
(iv) the energy released by the explosive nuclear burning in the shock-heated ejecta; and
(v) the total (negative) energy of the outer stellar layers (also called the ``overburden'').
We are presently not taking into account the variation of the ejecta energy due to the appearance of late-time fallback. This is justified as long as the fallback represents only a small fraction of the total ejected mass.
To compute the explosion energy, we assume that the total energy of the ejecta with rest-masses subtracted eventually converts into kinetic energy of the expanding supernova remnant at $t \gg t_{\rm expl}$. The quantity $e_{\rm total}$ includes the rest mass contribution via $e_{\rm int}$, see Equation~(\ref{eqn:e_th}). Instead, if we want to calculate the explosion energy, we have to consider the thermal energy $e_{\rm th}$. Therefore, we define the specific explosion energy as
\begin{equation}
e_{\rm expl}=e_{\rm th}+e_{\rm kin}+e_{\rm grav} \; ,
\end{equation}
and the time- and mass-dependent explosion energy for the fixed mass domain between $m_0$ and $M$ as
\begin{equation}
H_{\mathrm{expl}}(m_0,t)=-\int_M^{m_0} \: e_{\rm expl}(m,t) \, {\rm d}m \;.
\label{eq:expl energy mt}
\end{equation}
This can be interpreted as the total energy of this region in a non-relativistic EOS approach, where rest masses are not included.
The actual explosion energy (still time-dependent) is given by
\begin{equation}
E_{\rm{expl}}(t) = H_{\rm{expl}}(m_{\rm cut}(t),t) \; ,
\label{eq:expl energy t}
\end{equation}
i.e., for the matter above the mass cut $m_{\rm cut}(t)$.
To identify the mass cut, we consider the expression suggested by Bruenn in \citet{fischer10}:
\begin{equation}
m_{\rm{cut}}(t) = m\left(\max(H_{\rm{expl}}(m,t))\right) \, ,
\label{eq:mass cut}
\end{equation}
where the maximum is evaluated outside the homologous core ($m \gtrsim 0.6$~M$_{\odot}$\xspace), which, due to the high compression, has large positive values of the specific explosion energy $e_{\rm expl}$ once the PNS has formed. In the outer stellar envelope, before the passage of the shock wave, $e_{\rm expl}$ is dominated by the negative gravitational contribution. However, it is positive in the neutrino-heated region and in the shocked region above it. Hence, the above definition of $m_{\rm cut}$ locates essentially the transition from gravitationally unbound to bound layers. The final mass cut is obtained for $t=t_{\rm final}$.
Our final simulation time $t_{\rm final} \gtrsim 4.6$~s is always much larger than the explosion time and, as we will show later, it allows $E_{\rm expl}(t)$ to saturate. Thus, we consider $E_{\rm{expl}}(t=t_{\rm final})$ as the ultimate explosion energy of our models. In the following, if we use $E_{\rm expl}$ without the time as argument, we mean this final explosion energy.
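The definitions in Equations~(\ref{eq:expl energy mt})--(\ref{eq:mass cut}) amount to a cumulative integral from the stellar surface inward, followed by a constrained maximum. The following Python sketch illustrates this on a placeholder profile of the specific explosion energy.
\begin{verbatim}
# Minimal sketch: H_expl(m0) by integrating e_expl from the surface
# inward, the mass cut as the location of its maximum outside the
# homologous core, and E_expl = H_expl(m_cut). Toy profile only.
import numpy as np

m = np.linspace(0.5, 15.0, 2000)       # Msun, enclosed baryonic mass
e_expl = np.where(m < 1.6, -1.0,       # bound below the mass cut,
                  0.02*np.exp(-(m - 1.6)))  # unbound, decaying above

dm = np.gradient(m)
# H_expl(m0) = - int_M^{m0} e_expl dm = int_{m0}^{M} e_expl dm
H_expl = np.cumsum((e_expl * dm)[::-1])[::-1]

outside_core = m > 0.6                 # exclude the homologous core
i_cut = np.argmax(np.where(outside_core, H_expl, -np.inf))
print("m_cut = %.3f Msun, E_expl = %.3f (toy units)"
      % (m[i_cut], H_expl[i_cut]))
\end{verbatim}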
\subsubsection{Nucleosynthesis yields}
\label{sec:winnet}
To predict the composition of the ejecta, we perform nucleosynthesis calculations using the full nuclear network {\sc Winnet} \citep{Winteler.ea:2012}. We include isotopes up to $^{211}$Eu covering the neutron-deficient as well as the neutron-rich side of the valley of $\beta$-stability. The reaction rates are the same as in \citet{Winteler.ea:2012}. They are based on experimentally known rates where available and predictions otherwise. The n-, p-, and alpha-captures are taken from \citet{Rauscher.FKT:2000}, who used known nuclear masses where available and the \textit{Finite Range Droplet Model} \citep{Moller.ea:1995} for unstable nuclei far from stability. The $\beta$-decay rates are from the nuclear database \textit{NuDat2}\footnote{http://www.nndc.bnl.gov/nudat2/}.
We divide the ejecta into different mass elements of $10^{-3}$~M$_{\odot}$\xspace each and follow the trajectory of each individual mass element. As we are mainly interested in the amounts of $^{56}$Ni, $^{57}$Ni, $^{58}$Ni, and $^{44}$Ti, we only consider the 340 innermost mass elements above the mass cut, corresponding to a total mass of $0.34$~M$_{\odot}$. The contribution of the outer mass elements to the production of those nuclei is negligible.
For $t<t_{\rm final}$, we use the temperature and density evolution from the hydrodynamical simulations as inputs for our network. For each mass element we start the nucleosynthesis post-processing when the temperature drops below 10~GK, using the NSE abundances (determined by the current electron fraction $Y_e$) as the initial composition. For mass elements that never reach 10~GK we start at the moment of bounce and use the abundances from the approximate $\alpha$-network at this point as the initial composition. Note that for all tracers the further evolution of $Y_e$ in the nucleosynthesis post-processing is determined inside the {\sc Winnet} network.
At the end of the simulations, i.e.\ $t=t_{\rm final}$, the temperature and density of the inner zones are still sufficiently high for nuclear reactions to occur ($T \approx 1$~GK and $\rho \approx 2.5 \times 10^3$~g~cm$^{-3}$). Therefore, we extrapolate the radius, density and temperature up to $t_{\rm end} = 100$~s using:
\begin{align}
r(t) &= r_{\rm final} + t v_{\rm final} \label{eq:extrapol_rad} \\
\rho(t) &= \rho_{\rm final} \left( \frac{t}{t_{\rm final}} \right)^{-3} \\
T(t) &= T[s_{\rm final},\rho(t),Y_e(t)] \label{eq:extrapol_t9},
\end{align}
where $r$ is the radial position, $v$ the radial velocity, $\rho$ the density, $T$ the temperature, $s$ the entropy per baryon, and $Y_e$ the electron fraction of the mass zone. The temperature is calculated at each timestep using the equation of state of \citet{Timmes.Swesty:2000}. The prescription in Equations (\ref{eq:extrapol_rad})--(\ref{eq:extrapol_t9}) corresponds to a free expansion for the density and an adiabatic expansion for the temperature (see, for example, \citet{korobkin2012}).
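The extrapolation of Equations~(\ref{eq:extrapol_rad})--(\ref{eq:extrapol_t9}) is straightforward to implement; in the Python sketch below the entropy-based temperature inversion is only a placeholder for a call to the EOS of \citet{Timmes.Swesty:2000}, and all trajectory values are illustrative.
\begin{verbatim}
# Minimal sketch of the trajectory extrapolation for t >= t_final:
# free expansion of radius and density, adiabatic temperature from
# T(s, rho, Ye). The EOS inversion below is only a placeholder.
import numpy as np

def extrapolate(t, t_final, r_f, v_f, rho_f, s_f, ye, eos_T):
    r   = r_f + t * v_f                # radius, as written above
    rho = rho_f * (t / t_final)**(-3)  # free expansion of the density
    T   = eos_T(s_f, rho, ye)          # adiabatic: fixed entropy
    return r, rho, T

# Placeholder inversion: T ~ rho^(2/3) at fixed entropy (ideal gas).
eos_T = lambda s, rho, ye: 1.0e9 * (rho / 2.5e3)**(2.0/3.0)

for t in (5.0, 10.0, 50.0, 100.0):
    r, rho, T = extrapolate(t, 5.0, 1.0e9, 1.0e9, 2.5e3, 10.0, 0.5,
                            eos_T)
    print("t=%6.1f s  r=%.2e cm  rho=%.2e g/cc  T=%.2e K"
          % (t, r, rho, T))
\end{verbatim}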
\section{Fitting and Results}
\label{sec:results}
To test the PUSH method, we perform a large number of runs where we vary the free parameters and explore their impact on the explosion properties. We also analyze in detail the basic features of the simulations and of the explosions in connection with the properties of the progenitor star. Finally, we fit the free parameters in the PUSH method to reproduce observed properties of SN~1987A for a progenitor star in the range 18-21~M$_{\odot}$\xspace.
\subsection{General effects of free parameter variations}
\subsubsection{$k_{\rm push}$}
\label{sec:method_kpush}
The parameter with the most intuitive and strongest impact on the explosion is $k_{\rm push}$\xspace. Its value directly affects the amount of extra heating which is provided by PUSH. As expected, larger values of $k_{\rm push}$\xspace (assuming all other parameters to be fixed) result in the explosion being more energetic and occurring earlier. In addition, a faster explosion implies a lower remnant mass, as there is less time for the accretion to add mass to the forming PNS.
Beyond these general trends with $k_{\rm push}$\xspace, the detailed behavior depends also on the compactness of the progenitor. For all 16~progenitor models in the 18-21~M$_{\odot}$\xspace ZAMS mass range, we have explored several PUSH models, varying $k_{\rm push}$ between 0.0 and 4.0 in increments of 0.5 but fixing $t_{\rm on} = 80$~ms and $t_{\rm rise} = 150$~ms. For $k_{\rm push}\leqslant 1$, none of the models explode and for $k_{\rm push} = 1.5$ only the lowest compactness models explode. Figure \ref{fig:kpush} shows the explosion energy, the explosion time and the (baryonic) remnant mass as function of the progenitor compactness for $k_{\rm push} = 1.5,2.0,3.0,4.0$.
A distinct behavior between low and high compactness models is seen. The LC models ($\xi_{1.75}<0.4$) result in slightly weaker and faster explosions, with less variability in the explosion energy and in the explosion time for different values of $k_{\rm push}$\xspace. Even for relatively large values of $k_{\rm push}$\xspace, the explosion energies remain below 1~Bethe (1~Bethe, abbreviated as 1~B, is equivalent to $10^{51}$~erg). On the other hand, the HC models ($\xi_{1.75}>0.45$) explode more energetically and later, with a larger variation in the explosion properties. In this case, for high enough values of $k_{\rm push}$\xspace ($\gtrsim 3.0$), explosion energies of $\gtrsim 1$~Bethe can be obtained.
The HC models also lead to a larger variability of the remnant masses, even though this effect is less pronounced than for the explosion time or energy. For the values of $k_{\rm push}$\xspace used here, we obtain (baryonic) remnant masses from approximately 1.4 to 1.9~M$_{\odot}$\xspace. The differences of LC and HC models will be investigated further in Section \ref{sec:hc and lc}.
There are three models with $0.37 \lesssim \xi_{1.75} \lesssim 0.50 $ (corresponding to ZAMS masses of 18.0 (HC), 18.2 (LC), and 19.4~M$_{\odot}$ (HC)) which do not follow the general trend. In particular, we find the threshold value of $k_{\rm push}$ for successful explosions to be higher for these models. A common feature of these three models is that they have the lowest Fe-core mass of all the models in our sample and the highest central densities at the onset of collapse.
The choice of $t_{\rm rise}$\xspace does not affect the observed trends with $k_{\rm push}$\xspace: similar behaviors are also seen for $50\, {\rm ms} \lesssim t_{\rm rise} \lesssim 250 \, {\rm ms}$.
\begin{figure}[htp!]
\includegraphics[width=0.49\textwidth]{fig04a_kpush_Eexpl.eps} \\
\includegraphics[width=0.49 \textwidth]{fig04b_kpush_texpl.eps} \\
\includegraphics[width=0.49\textwidth]{fig04c_kpush_mremn.eps}
\caption{Explosion energies (top), explosion times (middle), and (baryonic) remnant mass (bottom) as function of compactness for $k_{\rm push} = 1.5$, $2.0$, $3.0$, and $4.0$ and fixed $t_{\rm rise}=0.15$~s, for all progenitor models included in this study (ZAMS mass between 18.0 and 21.0~M$_{\odot}$\xspace). Non-exploding models are indicated with $E_{\rm expl}=-0.5$~B in the top panel and are omitted in the other panels.
\label{fig:kpush} }
\end{figure}
\subsubsection{$t_{\rm on}$}
\label{sec:method_ton}
To test the sensitivity of our method to the parameter $t_{\rm on}$, we compute models with $k_{\rm push} = 2.0$ and $t_{\rm rise} = 0.15 \, {\rm s}$ for a larger onset parameter, $t_{\rm on} = 120 \, {\rm ms}$. We compare the corresponding results with the ones obtained for $t_{\rm on} = 80 \, {\rm ms}$. As expected, the shock revival happens slightly later (with a temporal shift of $\sim 30 \, {\rm ms}$), the explosion energies are smaller (by $\sim 0.05$~B) and the remnant masses are marginally larger (by 0.08~M$_{\odot}$\xspace). However, all the qualitative behaviors described above, as well as the distinction between high and low compactness models, do not show any dependence on $t_{\rm on}$. In the following, we will always assume $t_{\rm on}=80 \, {\rm ms}$.
\subsubsection{$k_{\rm push}$ \& $t_{\rm rise}$}
\label{sec:method_kpush_trise}
In Sections \ref{sec:method_kpush} and \ref{sec:method_ton}, we have investigated the dependence of the model on the single parameters $k_{\rm push}$\xspace and $t_{\rm on}$. Now, we explore the role of $t_{\rm rise}$\xspace in combination with $k_{\rm push}$\xspace. For this, we approximately fix the explosion energy to the canonical value of $\sim 1$~B for the high compactness models (corresponding, for example, to the previously examined models with $k_{\rm push} = 3.0$ and $t_{\rm rise} = 150 \, {\rm ms}$), and investigate which other combinations of $k_{\rm push}$\xspace and $t_{\rm rise}$\xspace result in the desired explosion energy. We restrict our explorations to a subset of progenitor models (18.0~M$_{\odot}$\xspace, 18.6~M$_{\odot}$\xspace, 19.2~M$_{\odot}$\xspace, 19.4~M$_{\odot}$\xspace, 19.8~M$_{\odot}$\xspace, 20.0~M$_{\odot}$\xspace, 20.2~M$_{\odot}$\xspace and 20.6~M$_{\odot}$\xspace) that spans the $\xi_{1.75}$-range of all 16 progenitors. Figure~\ref{fig:kpush_trise} summarizes the explosion energies, explosion times, and remnant masses for various combinations of $k_{\rm push}$\xspace and $t_{\rm rise}$\xspace for progenitors of different compactness. The required constraint can be
obtained by several combinations of parameters, which lie on a curve in the $k_{\rm push}$\xspace-$t_{\rm rise}$\xspace plane. As a general result, a longer $t_{\rm rise}$\xspace requires a larger $k_{\rm push}$\xspace to obtain the same explosion energy. This can be understood from the different roles of the two parameters: while $k_{\rm push}$\xspace sets the maximum efficiency at which PUSH deposits energy from the reservoir represented by the $\nu_{\mu,\tau}$ luminosity, $t_{\rm rise}$\xspace sets the time scale over which the mechanism reaches this maximum. Together, they control the slope of $\mathcal{G}(t)$ in the rising phase (see Figure~\ref{fig:g_factor}). A model with a longer rise time reaches its maximum efficiency later, at which time the luminosities have already decreased and a part of the absorbed energy has been advected on the PNS or re-emitted in the form of neutrinos. To compensate for these effects, a larger $k_{\rm push}$\xspace is required for a longer $t_{\rm rise}$\xspace. This is seen in Figure~\ref{fig:energy trise}, where we plot the cumulative neutrino contribution $( E_{\rm push}+E_{\rm idsa}
)$ and its time derivative for four runs of the 18.0~M$_{\odot}$\xspace progenitor model, but with different combinations of $t_{\rm rise}$\xspace and $k_{\rm push}$\xspace. Runs with larger parameter values require PUSH to deposit more energy (see $ ( E_{\rm push}+E_{\rm idsa})$ at $t \approx t_{\rm expl}$), and the corresponding deposition rates are shifted towards later times. Moreover, for increasing values of $t_{\rm rise}$, the explosion time $t_{\rm expl}$ becomes larger, but the interval between $(t_{\rm on}+t_{\rm rise})$ and $t_{\rm expl}$ decreases. Despite the significant variation of $k_{\rm push}$ between different runs, the peak values of ${\rm d}( E_{\rm push}+E_{\rm idsa} )/{\rm d} t$ at the onset of the shock revival that precedes the explosion are very similar in all cases.
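To make the interplay between the two parameters concrete, the following minimal Python sketch evaluates a piecewise-linear ramp for $\mathcal{G}(t)$. The functional form is an assumption for illustration only; the actual definition of $\mathcal{G}(t)$ is the one shown in Figure~\ref{fig:g_factor}. The slope of the rising phase is $k_{\rm push}/t_{\rm rise}$, which makes explicit why a longer $t_{\rm rise}$ requires a larger $k_{\rm push}$ to reach a comparable heating rate around shock revival.
\begin{verbatim}
# Hedged sketch: piecewise-linear stand-in for the PUSH factor G(t).
# k_push sets the plateau height (maximum efficiency), t_rise the
# duration of the ramp; their ratio fixes the slope of the rise.
def g_factor(t, k_push, t_on=0.08, t_rise=0.15, t_off=1.0):
    if t < t_on:                      # PUSH not yet active
        return 0.0
    if t < t_on + t_rise:             # rising phase, slope k_push/t_rise
        return k_push * (t - t_on) / t_rise
    if t < t_off:                     # plateau at maximum efficiency
        return k_push
    if t < t_off + t_rise:            # linear switch-off
        return k_push * (1.0 - (t - t_off) / t_rise)
    return 0.0

# Two parameter pairs tuned to similar explosion energies: the longer
# ramp reaches its (higher) plateau later.
for k, tr in [(3.0, 0.15), (3.5, 0.20)]:
    print(k, tr, [round(g_factor(t, k, t_rise=tr), 2)
                  for t in (0.10, 0.20, 0.30)])
\end{verbatim}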
\begin{figure}[htp!]
\includegraphics[width=0.49\textwidth]{fig05a_kpush_trise_Eexpl.eps} \\
\includegraphics[width=0.49 \textwidth]{fig05b_kpush_trise_texpl.eps} \\
\includegraphics[width=0.49\textwidth]{fig05c_kpush_trise_mremn.eps}
\caption{Explosion energies (top), explosion times (middle), and (baryonic) remnant mass (bottom) as a function of compactness for pairs of $k_{\rm push}$\xspace and $t_{\rm rise}$\xspace, and for the progenitor models with ZAMS mass 18.0, 18.6, 19.2, 19.4, 19.8, 20.0, 20.2, and 20.6~M$_{\odot}$\xspace.
\label{fig:kpush_trise}
}
\end{figure}
\begin{figure}[htp!]
\includegraphics[width=0.35\textwidth,angle=-90]{fig06_energy_trise.eps}
\caption{Temporal evolution of the total neutrino energy contribution inside the gain region $( E_{\rm push} + E_{\rm idsa} )$ (solid lines) and of its time derivative (dashed lines), for four runs with the same ZAMS progenitor mass (18.0~M$_{\odot}$\xspace), but different combinations of PUSH parameters $t_{\rm rise}$\xspace and $k_{\rm push}$\xspace. For each run, the vertical lines correspond to $t = t_{\rm on}+t_{\rm rise}$ (long, dashed) and to $t_{\rm expl}$ (short, dot-dashed).
\label{fig:energy trise} }
\end{figure}
\subsubsection{$t_{\rm off}$}
Even though PUSH is active up to $t_{\rm off}+t_{\rm rise} \gtrsim 1 \, {\rm s}$, its energy deposition reduces progressively on a timescale of a few hundred milliseconds after the explosion has set in (see Figure~\ref{fig:energy trise}). This shows explicitly that the value of $t_{\rm off}$ does not have important consequences in our simulations, at least as long as we have typical explosion times well below one second. The observed decrease of the PUSH energy deposition rate after the launch of the explosion will be explained in Section~\ref{sec:hc and lc}.
\subsection{Contributions to the explosion energy}
\label{sec:eexpl_contr}
In the following, we discuss the contributions to and the sources of the explosion energy, i.e., we investigate how the explosion energy is generated. This is done in several steps: first, we take a closer look at the neutrino energy deposition. Then we show how it relates to the increase of the total energy of the ejected layers, and finally how this increase of the total energy transforms into the explosion energy. For this analysis, we have chosen the 19.2 and 20.0~M$_{\odot}$\xspace ZAMS mass progenitor models as representatives of the HC and LC samples, respectively. We consider their exploding models obtained with $t_{\rm on}=80$~ms, $t_{\rm rise}=150$~ms, and $k_{\rm push}=3.0$. A summary of the explosion properties can be found in Table \ref{tab:HC vs LC}.
\begin{table}[h,t]
\begin{center}
\caption{Explosion properties for two reference runs \label{tab:HC vs LC}}
\begin{tabular}{lccc}
\tableline \tableline
Quantity & & HC & LC \\
\tableline
ZAMS & (M$_{\odot}$\xspace) & 19.2 & 20.0 \\
$\xi_{1.75}$ & (-) & 0.637 & 0.283 \\
$t_{\rm on}$ & (ms) & \multicolumn{2}{c}{80} \\
$t_{\rm rise}$ & (ms) & \multicolumn{2}{c}{150} \\
$k_{\rm push}$ & (-) & \multicolumn{2}{c}{3.0} \\
$t_{\rm expl}$ & (ms) & 307 & 206 \\
$M_{\rm remn}$ & (M$_{\odot}$\xspace) & 1.713 & 1.469 \\
$E_{\rm expl}$ ($t_{\rm final}$) & (B) & 1.36 & 0.57 \\
$E_{\rm push}$ ($t_{\rm off} + t_{\rm rise}$) & (B) & 3.51 & 1.08 \\
$E_{\rm idsa}$ ($t_{\rm off} + t_{\rm rise}$) & (B) & 2.76 & 1.01 \\
$E_{\rm idsa}$ ($t_{\rm final}$) & (B) & 4.10 & 2.11 \\
\tableline
\end{tabular}
\tablecomments{These two runs are used to compare the HC and LC samples.}
\end{center}
\end{table}
The table shows that for both models neutrinos are required to deposit a net cumulative energy $( E_{\rm push}+E_{\rm idsa} )$ much larger than $E_{\rm expl}$ to revive the shock and to lead to an explosion that matches the expected energetics. For the two reference runs, when the PUSH contribution is switched off ($t = t_{\rm off}+t_{\rm rise}$), the cumulative deposited energy is $\sim 4$ times larger than $E_{\rm expl}$. This can also be inferred from Figure~\ref{fig:energy trise} for other runs. That ratio increases further up to $\sim 5.5$ at $t = t_{\rm final}$, due to the ongoing neutrino energy deposition at the PNS surface, which generates the $\nu$-driven wind. According to Equations~(\ref{eq_epush}) and (\ref{eq_eidsa}), $E_{\rm push}$ and $E_{\rm idsa}$ are the total energies which are deposited in the (time-dependent) gain region. This neutrino energy deposition increases the internal energy of the matter flowing through that region. However, since the advection timescale is much shorter than the explosion timescale, a large fraction of this energy is advected onto the PNS surface by the accreting mass before the explosion sets in, and hence does not contribute to the explosion energy. Only the energy deposited by neutrinos in the region above the final mass cut will eventually contribute to the explosion energy.
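As a concrete check, inserting the values from Table~\ref{tab:HC vs LC} at $t = t_{\rm off}+t_{\rm rise}$ gives
\begin{eqnarray}
\left.\frac{E_{\rm push}+E_{\rm idsa}}{E_{\rm expl}}\right|_{\rm HC} &=& \frac{3.51+2.76}{1.36} \approx 4.6 \, , \nonumber \\
\left.\frac{E_{\rm push}+E_{\rm idsa}}{E_{\rm expl}}\right|_{\rm LC} &=& \frac{1.08+1.01}{0.57} \approx 3.7 \, , \nonumber
\end{eqnarray}
while replacing $E_{\rm idsa}(t_{\rm off}+t_{\rm rise})$ with $E_{\rm idsa}(t_{\rm final})$ raises both ratios to $\approx 5.6$.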
\begin{figure*}[htp!]
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth,angle=-90]{fig07_hc_tot_expl_var_mcut.eps}
\includegraphics[width=0.35\textwidth,angle=-90]{fig07_lc_tot_expl_var_mcut.eps}
\end{tabular}
\caption{Time evolution of the time- and mass-integrated variation of the total energy $\Delta E_{\rm total}$ (thin dashed line), of the neutrino net deposition energy $E_{\nu}$ (dot-dashed line) and of the explosion energy for a fixed domain $\Delta H_{\rm expl}$ (thick dashed line), above $m_{ \rm cut}^{\rm fin} = m_{\rm cut}(t_{\rm final})$, for the HC (left) and for the LC (right) reference runs
reported in Table \ref{tab:HC vs LC}. The evolution of the time-dependent explosion energy, $\Delta E_{\rm expl}$, is also shown (solid line).
Both $\Delta H_{\rm expl}$ and
$\Delta E_{\rm expl}$ are computed with respect to $H_{\rm expl}(m_{\rm cut}^{\rm fin},t_{\rm initial})$. The difference between
$\Delta E_{\rm total}(m_{\rm cut}^{\rm fin},t)$ and $E_\nu$ represents the mechanical work, $E_{\rm mech}$; the difference between
$\Delta E_{\rm total}(m_{\rm cut}^{\rm fin},t)$ and $\Delta H_{\rm expl}(m_{\rm cut}^{\rm fin},t)$ represents the released rest-mass energy,
$-\Delta E_{\rm mass}$.
\label{fig:total energy contributions} }
\end{figure*}
To identify this \textit{relevant} neutrino contribution, in Figure~\ref{fig:total energy contributions} we show the time evolution of the integrated net neutrino energy deposition $E_{\nu}(m_{ \rm cut}^{\rm fin},t)$ within the domain above the fixed mass $m_{ \rm cut}^{\rm fin} = m_{\rm cut}(t_{\rm final})$. We choose $m_{ \rm cut}^{\rm fin}$ to include all the relevant energy contributions to the explosion energy, up to the end of our simulations. Despite the significant differences in magnitudes, the two models show overall similar evolutions. If we compare $E_{\nu}(m_{ \rm cut}^{\rm fin},t)$ at late times with $( E_{\rm push}(t_{\rm off}+t_{\rm rise})+E_{\rm idsa}(t_{\rm final}) )$ from Table \ref{tab:HC vs LC}, we see that it is significantly smaller. About two thirds of the energy originally deposited in the gain region is advected onto the PNS and hence does not contribute to the explosion energy.
In addition to the neutrino energy deposition, in Figure~\ref{fig:total energy contributions} we also show the variation of the total energy for the domain above $m_{ \rm cut}^{\rm fin}$, i.e., $\Delta E_{\rm total}(m_{ \rm cut}^{\rm fin},t) = E_{\rm total}(m_{ \rm cut}^{\rm fin},t) - E_{\rm total}(m_{ \rm cut}^{\rm fin}, t_{\rm initial})$, where $t_{\rm initial}$ is the time when we start our simulation from the stage of the progenitor star. The variation of the total energy can be separated into the net neutrino contribution and the mechanical work at the inner boundary, $\Delta E_{\rm total}=E_{\nu}+E_{\rm mech}$. We note that in our general relativistic approach the variation of the gravitational mass due to the intense neutrino emission from the PNS is consistently taken into account. It is visible in Figure~\ref{fig:total energy contributions} that the net deposition by neutrinos makes up the largest part of the change of the total energy. The transfer of mechanical energy $E_{\rm mech}$ is negative because of the expansion work performed by the inner boundary during the collapse and the PNS shrinking. However, it is significantly smaller in magnitude than $E_{\nu}$.
Next, we investigate the connection between the variation of the total energy and the explosion energy. In Figure~\ref{fig:total energy contributions}, we show the variation of the explosion energy above the fixed mass $m_{ \rm cut}^{\rm fin}$, i.e., $\Delta H_{\rm expl}(m_{ \rm cut}^{\rm fin},t)=H_{\rm expl}(m_{ \rm cut}^{\rm fin},t) - H_{\rm expl}(m_{ \rm cut}^{\rm fin}, t_{\rm initial})$, together with the variation of the time-dependent explosion energy relative to the same reference, $\Delta E_{\rm expl}(t) = E_{\rm expl}(t) - H_{\rm expl}(m_{ \rm cut}^{\rm fin}, t_{\rm initial})$. It is obvious from Equations~(\ref{eqn:e_th}), (\ref{eq:total energy}), and (\ref{eq:expl energy mt}) that the difference between $\Delta H_{\rm expl}(m_{ \rm cut}^{\rm fin},t)$ and $\Delta E_{\rm total}(m_{ \rm cut}^{\rm fin},t)$ is given by the variation of the integrated rest mass energy, $\Delta H_{\rm expl}(m_{ \rm cut}^{\rm fin},t)=\Delta E_{\rm total}(m_{ \rm cut}^{\rm fin},t)-\Delta E_{\rm mass}(m_{ \rm cut}^{\rm fin},t)$.
In Figure~\ref{fig:total energy contributions}, $-\Delta E_{\rm mass}(m_{ \rm cut}^{\rm fin},t)$ can thus be identified as the difference between the long-thin and the short-thick dashed lines. We find that the overall rest mass contribution to the final explosion energy is positive, but much smaller than the neutrino contribution. Figure~\ref{fig:total energy contributions} also makes evident the conceptual difference between $H_{\rm expl}$ and $E_{\rm expl}$, and, at the same time, shows that $H_{\rm expl}(m_{ \rm cut}^{\rm fin},t) \rightarrow E_{\rm expl}(t)$ for $t \rightarrow t_{\rm final}$, since we have chosen $m_{ \rm cut}^{\rm fin}=m_{\rm cut}(t_{\rm final})$. It also reveals that the explosion energy $E_{\rm expl}$ has practically saturated for $t \gtrsim 1 \, {\rm s}$, while $E_{\nu}$ (and, consequently, $\Delta E_{\rm total}$ and $\Delta H_{\rm expl}$) increases up to $t_{\rm final}$, when $m_{ \rm cut}^{\rm fin}$ is finally ejected. However, this energy provided by neutrinos is mostly spent to
unbind matter from
the PNS surface. Thus, the late $\nu$-driven wind, which persists for several seconds beyond 1~s, still increases $E_{\rm expl}$, but at a relatively small and decreasing rate.
To summarize, the variation of the explosion energy above $m_{ \rm cut}^{\rm fin}$ can be expressed as
\begin{eqnarray}
\Delta H_{\rm expl}(m_{ \rm cut}^{\rm fin},t)&=&\Delta E_{\rm total}(m_{ \rm cut}^{\rm fin},t)-\Delta E_{\rm mass}(m_{ \rm cut}^{\rm fin},t) \nonumber \\
&=& E_{\nu}(m_{ \rm cut}^{\rm fin},t)+E_{\rm mech}(m_{ \rm cut}^{\rm fin},t)-\Delta E_{\rm mass}(m_{ \rm cut}^{\rm fin},t) \; . \nonumber \\
&&
\label{eq_deltaeexp}
\end{eqnarray}
The quantity $-\Delta E_{\rm mass}$ is positive, but significantly smaller than $E_{\nu}(m_{ \rm cut}^{\rm fin},t)$. $E_{\rm mech}$ is negative and also smaller in magnitude than $E_{\nu}(m_{ \rm cut}^{\rm fin},t)$. Therefore, we conclude that in our models the explosion energy is mostly generated by the energy deposition of neutrinos in the eventually ejected layers, especially within the first second after bounce.
\begin{figure*}[htp!]
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth,angle=-90]{fig08_hc_energy_var.eps}
\includegraphics[width=0.35\textwidth,angle=-90]{fig08_lc_energy_var.eps}
\end{tabular}
\caption{Temporal evolution of the gravitational (thin solid), kinetic (long dashed), negative rest mass (short dashed), internal (dot-dashed), and explosion (solid thick) energies above $m_{\rm cut}^{\rm fin} = m_{\rm cut}(t_{\rm final})$, for the HC (left) and for the LC (right) reference runs, see Table \ref{tab:HC vs LC}. The internal and the rest mass energy are given with respect to the initial rest mass, $E_{\rm mass,0}=E_{\rm mass}(m_{\rm cut}^{\rm fin},t_{\rm initial})$. The difference between the internal energy and the rest mass energy represents the thermal energy.
\label{fig:explosion energy contributions} }
\end{figure*}
To give further insight, in Figure~\ref{fig:explosion energy contributions} we show the time evolution of all energies which contribute to the explosion energy together with the explosion energy itself, for both the HC (left panel) and the LC model (right panel). We present $E_{\rm int}(m_{ \rm cut}^{\rm fin},t)$, $-E_{\rm mass}(m_{ \rm cut}^{\rm fin},t)$, $E_{\rm grav}(m_{ \rm cut}^{\rm fin},t)$ and $E_{\rm kin}(m_{ \rm cut}^{\rm fin},t)$, which together give a complete decomposition of the explosion energy, i.e.,
\begin{eqnarray}
H_{\rm expl}(m_{ \rm cut}^{\rm fin},t)&=&E_{\rm kin}(m_{ \rm cut}^{\rm fin},t)+E_{\rm grav}(m_{ \rm cut}^{\rm fin},t)\nonumber \\
&&+E_{\rm int}(m_{ \rm cut}^{\rm fin},t)-E_{\rm mass}(m_{ \rm cut}^{\rm fin},t) \;.
\end{eqnarray}
In contrast to Figure~\ref{fig:total energy contributions}, we are now dealing with absolute values rather than variations. Gravitational energy initially dominates ($H_{\rm expl}(m_{ \rm cut}^{\rm fin},t) < 0$), meaning that the portion of the star above $m_{ \rm cut}^{\rm fin}$ is still gravitationally bound. The HC model is initially more bound than the LC model (for example, $H_{\rm expl}(m_{ \rm cut}^{\rm fin},t=0.1 \, {\rm s}) \approx -0.54 \, {\rm B}$, versus $H_{\rm expl}(m_{ \rm cut}^{\rm fin},t=0.1 \, {\rm s}) \approx -0.40 \, {\rm B}$, respectively). Before providing positive explosion energy, neutrinos have to compensate for this initial negative binding energy as well as for the negative $E_{\rm mech}$. This can be seen explicitly by expressing Equation (\ref{eq_deltaeexp}) as:
\begin{eqnarray}
H_{\rm expl}(m_{ \rm cut}^{\rm fin},t_{\rm final}) &\sim& H_{\rm expl}(m_{ \rm cut}^{\rm fin},t_{\rm initial}) + E_\nu(m_{ \rm cut}^{\rm fin}, t_{\rm final})\nonumber \\
&& + E_{\rm mech}(t_{\rm final}) \, ,
\end{eqnarray}
where we have neglected $\Delta E_{\rm mass}$.
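For readers who wish to reproduce this bookkeeping, the following minimal Python sketch evaluates the decomposition of $H_{\rm expl}$ above a fixed mass cut from discretized radial profiles; the array names are hypothetical placeholders for quantities extracted from a simulation snapshot.
\begin{verbatim}
import numpy as np

# Hedged sketch: energy decomposition above a fixed mass cut,
#   H_expl = E_kin + E_grav + E_int - E_mass,
# from shell masses dm and specific energies (hypothetical arrays).
def explosion_energy(dm, e_kin, e_grav, e_int, e_mass, m_cut):
    m = np.cumsum(dm)                  # enclosed baryonic mass per shell
    sel = m > m_cut                    # keep only shells above the cut
    E_kin  = np.sum(dm[sel] * e_kin[sel])
    E_grav = np.sum(dm[sel] * e_grav[sel])  # negative for bound shells
    E_int  = np.sum(dm[sel] * e_int[sel])
    E_mass = np.sum(dm[sel] * e_mass[sel])  # rest-mass contribution
    return E_kin + E_grav + E_int - E_mass
\end{verbatim}
Evaluating this decomposition at successive output times yields the curves shown in Figure~\ref{fig:explosion energy contributions}.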
In the following, we discuss the evolution of the relevant energies and, in particular, of the rest mass energy (see Section~\ref{sec_eos} for the description of the (non-)NSE EOS and of the related definitions of the internal, thermal and rest mass energies). The innermost part of the ejecta (corresponding to $\sim 0.15$~M$_{\odot}$\xspace and $\sim 0.07$~M$_{\odot}$\xspace above $m_{ \rm cut}^{\rm fin}$ for the 19.2~M$_{\odot}$\xspace and 20.0~M$_{\odot}$\xspace model, respectively) is initially composed of intermediate mass nuclei (mainly silicon and magnesium). In the first part of the evolution, during the gravitational collapse, no significant changes of $E_{\rm int}$ and $E_{\rm mass}$ are observed in Figure~\ref{fig:explosion energy contributions}. However, when this matter enters the shock, it is quickly photodissociated into neutrons, protons, and alpha particles. This process increases the rest mass energy, as is visible in Figure~\ref{fig:explosion energy contributions} between roughly 200 and 300~ms for the HC model and between 100 and 200~ms for
the LC model. At the same time, the release of gravitational energy of the still infalling matter and the dissipation of kinetic energy happening at the shock, together with the intense neutrino absorption on free nucleons, increase $E_{\rm int}$. Later, once neutrino heating has halted the collapse and started the explosion, the matter behind the expanding shock cools, and free neutrons and protons inside it recombine first into alpha particles and then into iron group nuclei. At the same time, fresh infalling layers are heated by the shock to temperatures above $0.44$~MeV, and silicon and magnesium are converted into heavier nuclei and alpha particles under NSE conditions, leading to an alpha-rich freeze-out from NSE. The production of alpha particles, which are less bound than the heavy nuclei initially present in the same layers, limits the amount of rest mass energy finally released. Thus, within a few hundred milliseconds after $t_{\rm expl}$, these recombination and burning processes liberate an amount of rest mass energy that is larger than, but comparable to, the energy spent by the shock to photodissociate the infalling matter during shock revival and early expansion. We have checked in post-processing that the full nucleosynthesis network {\sc Winnet} confirms these results.
\subsection{Explosion dynamics and the role of compactness}
\label{sec:hc and lc}
\begin{figure*}[htp!]
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth,angle=-90]{fig09_mdot_comp.eps} &
\includegraphics[width=0.35\textwidth,angle=-90]{fig09_radii_comp.eps}
\\
\includegraphics[width=0.35\textwidth,angle=-90]{fig09_luminosity_comp.eps} &
\includegraphics[width=0.35\textwidth,angle=-90]{fig09_meanE_comp.eps}
\end{tabular}
\caption{Temporal evolution of (a) the accretion rate at the PNS and at the shock, (b) the shock, the gain, and the PNS radii, (c) the neutrino luminosities, and (d) the neutrino mean energies, for all modeled neutrino flavors. In all panels, we present exploding runs for the 19.2~M$_{\odot}$\xspace (red lines) and then 20.0~M$_{\odot}$\xspace (blue lines) ZAMS mass models obtained with the PUSH parameters reported in Table \ref{tab:HC vs LC}. We also plot the corresponding non-exploding runs obtained by setting $k_{\rm push}=0$ for the 19.2~M$_{\odot}$\xspace (light red) and 20.0~M$_{\odot}$\xspace (light blue) ZAMS mass progenitor models.
\label{fig:compactness comparison}}
\end{figure*}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.35\textwidth,angle=-90]{fig10_energy_rate_comp.eps}
\end{center}
\caption{Temporal evolution of the neutrino energy deposition inside the gain region from $\nu_e$ and $\bar{\nu}_e$ (${\rm d}E_{\rm idsa}/{\rm d} t$, long-thin dashed lines), from PUSH (${\rm d}E_{\rm push}/{\rm d} t$, short-thick dashed lines), and their sum (solid lines). The 19.2~M$_{\odot}$\xspace (HC) ZAMS mass model is represented in red, while the 20.0~M$_{\odot}$\xspace (LC) ZAMS mass is in blue, with PUSH parameters reported in Table \ref{tab:HC vs LC}. The short colored vertical lines show the time of explosion.
\label{fig:energy derivaties} }
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.35\textwidth,angle=-90]{fig11_ram_comp.eps}
\end{center}
\caption{Temporal evolution of the ratio between the ram pressure above the shock and the thermal pressure below the shock. The 19.2~M$_{\odot}$\xspace (HC) ZAMS mass model is represented in red, while the 20.0~M$_{\odot}$\xspace (LC) ZAMS mass model is in blue. The PUSH parameters are reported in Table \ref{tab:HC vs LC}. Light red and light blue lines represent the corresponding
runs without PUSH ($k_{\rm push}=0$).
\label{fig:pressure ratio} }
\end{figure}
The distributions of the explosion energy and explosion time obtained with PUSH, as well as their variations in response to changes of the model parameters, suggest a possible distinction between high and low compactness progenitors. In the following, we investigate how basic properties of the models (e.g., the accretion history or the neutrino luminosities), ultimately connected with the compactness, relate to differences in the explosion process and properties. For a similar discussion in self-consistent 1D and 2D supernova simulations, see \citet{suwa14}. Again, we choose the 19.2 and 20.0~M$_{\odot}$\xspace ZAMS mass progenitor runs with $t_{\rm rise}=150 \, {\rm ms}$ and $k_{\rm push}=3.0$, as representatives of the HC and LC samples, respectively.
In Figure~\ref{fig:compactness comparison}, we show the temporal evolution of several quantities of interest for both the 19.2~M$_{\odot}$\xspace and 20.0~M$_{\odot}$\xspace models, with and without PUSH. The evolution before $t_{\rm on}$ follows the well known early shock dynamics in CCSNe (see, for example, \cite{Burrows1993}). In both models, a few tens of milliseconds after core bounce, the expanding shock turns into an accretion front, and the mantle between the PNS surface and the shock reaches a quasi-stationary state. In this accretion phase, $\dot{M}_{\rm shock}$ and $\dot{M}_{\rm PNS}$ are tightly coupled. However, the two different density profiles already affect the evolution of the shock. Since $\rho_{19.2}/\rho_{20.0} \gtrsim 1.2 $ outside the shock and up to a radius of $2 \times 10^{8}$~cm (while the infalling velocities of the unshocked matter are initially almost identical), $\dot{M}_{\rm shock}$ (and in turn also $\dot{M}_{\rm PNS}$) starts to differ between the two models around $t_{\rm pb} \approx 30$~ms.
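The argument can be quantified with the standard spherical mass flux; the following short Python sketch (with hypothetical, toy profile arrays) shows how a fixed density ratio at equal infall velocities maps directly onto the accretion-rate ratio.
\begin{verbatim}
import numpy as np

# Hedged sketch: spherical mass flux Mdot(r) = 4*pi*r^2*rho(r)*|v(r)|.
def mdot(r, rho, v):
    return 4.0 * np.pi * r**2 * rho * np.abs(v)

# Toy profiles: same infall speed, density ratio of 1.2 between the
# two progenitors, as quoted above for rho_19.2/rho_20.0.
r = np.logspace(7.0, 8.3, 50)          # radius in cm (hypothetical grid)
rho_lc = 1.0e6 * (r / 1.0e7) ** -2.0   # toy rho ~ r^-2, in g/cm^3
rho_hc = 1.2 * rho_lc
v = -1.0e9 * np.ones_like(r)           # identical infall velocity, cm/s
print(np.allclose(mdot(r, rho_hc, v) / mdot(r, rho_lc, v), 1.2))
\end{verbatim}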
The difference in the accretion rates has a series of immediate consequences. For the HC case,
(i) neutrino luminosities are larger (Figure~\ref{fig:compactness comparison}c);
(ii) the shock is subject to a larger ram pressure (i.e., a larger momentum transfer from the infalling mass flowing through the shock), and, as visible in the case without PUSH, shock stalling happens earlier and at a smaller radius (Figure~\ref{fig:compactness comparison}b);
(iii) the PNS mass grows faster. Since the mass of the PNS at bounce is almost identical for the two models ($M_{\rm PNS} \approx 0.63$~M$_{\odot}$\xspace), the deeper gravitational potential implied by (iii) increases the differences in the accretion rates even further by augmenting the ratio of the radial velocities inside the gain region between the two models (larger by 12--15\% at $t \approx t_{\rm on}$ in the 19.2~M$_{\odot}$\xspace case).
For $t>t_{\rm on}$, the differences between the two runs amplify as a result of the PUSH action. In the LC case, due to the lower accretion rate, a relatively small energy deposition by PUSH in the gain region (smaller than or comparable to the energy deposition by $\nu_e$ and $\bar{\nu}_e$ from IDSA, as visible in Figure~\ref{fig:energy derivaties}) is able to revive the shock expansion a few milliseconds after $t_{\rm on}$. Later, the increasing ${\rm d}E_{\rm push}/{\rm d}t$ triggers an explosion in a few tens of milliseconds, even before $\mathcal{G}(t)$ reaches its maximum (Figure~\ref{fig:compactness comparison}b).
In the HC case, the energy deposition by neutrinos is more intense from the beginning due to the larger neutrino luminosities and harder neutrino spectra (Figures~\ref{fig:compactness comparison}c and \ref{fig:compactness comparison}d) and due to the higher density inside the gain region. However, because of the larger accretion rate, the extra contribution provided by PUSH is initially only able to prevent the fast shock contraction observed in the model without PUSH. During this shock stalling phase, the accretion rate and the luminosity decrease, but only marginally and very similarly to the non-exploding case. When PUSH reaches its maximum energy deposition rate ($t \approx t_{\rm on} + t_{\rm rise}$), the shock revives and the explosion sets in (Figure~\ref{fig:compactness comparison}b).
In Figure~\ref{fig:pressure ratio}, we plot the ratio of the ram pressure just above the shock front ($P_{\rm ram}(R_{\rm shock}^+)=\rho v^2$ calculated at $R_{\rm shock}^{+} = R_{\rm shock} + 1 \, \rm{km}$) to the thermal pressure just inside it ($P_{\rm th}(R_{\rm shock}^-)$, where $R_{\rm shock}^{-} = R_{\rm shock} - 1 \, \rm{km}$). In the non-exploding runs (i.e., without PUSH), both these pressures decrease with time, but their ratio always stays well above unity. On the other hand, in runs with PUSH, the more efficient energy deposition by neutrinos slows the decrease of the thermal pressure inside the shock. The corresponding drop of the pressure ratio below unity marks the onset of the explosion.
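The diagnostic of Figure~\ref{fig:pressure ratio} can be written compactly as in the following Python sketch; the interpolation of hypothetical profile arrays at $R_{\rm shock} \pm 1$~km mirrors the definition given above.
\begin{verbatim}
import numpy as np

# Hedged sketch: ratio of the ram pressure just above the shock to the
# thermal pressure just below it; a drop below unity signals explosion.
def pressure_ratio(r, rho, v, p_th, r_shock, dr=1.0e5):  # dr = 1 km
    rho_up = np.interp(r_shock + dr, r, rho)
    v_up = np.interp(r_shock + dr, r, v)
    p_ram = rho_up * v_up**2           # ram pressure above the shock
    p_below = np.interp(r_shock - dr, r, p_th)
    return p_ram / p_below
\end{verbatim}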
In both runs, once the explosion has been launched, the density in the gain region decreases and the PUSH energy deposition rate reduces accordingly. The conversion from an accreting to an expanding shock front decouples $\dot{M}_{\rm shock}$ from $\dot{M}_{\rm PNS}$. The latter drops steeply, together with the accretion neutrino luminosities (Figures~\ref{fig:compactness comparison}a and \ref{fig:compactness comparison}c), while $\dot{M}_{\rm shock}$ decreases first but then stabilizes around an almost constant (slightly decreasing) value. In the case where the shock expansion velocity is much larger than the infalling matter velocity at $R_{\rm shock}$, $\dot{M}_{\rm shock}$ can be re-expressed as
\begin{equation}
\dot{M}_{\rm shock} \approx 4 \pi R_{\rm shock}^2 \, \rho(R_{\rm shock}) \, v_{\rm shock},
\end{equation}
where $v_{\rm shock} = {\rm d} R_{\rm shock} / {\rm d} t \propto R_{\rm shock}^{\delta}$. For $R>R_{\rm shock}$ we have, to a good approximation, $\rho(R) \propto R^{-2}$, and thus
\begin{equation}
\dot{M}_{\rm shock} \propto R_{\rm shock}^{\delta} .
\end{equation}
The stationary value of $\dot{M}_{\rm shock}$ implies that $\delta \approx 0$. Thus, after an initial exponential expansion, the shock velocity is almost constant during the first second after the explosion.
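The exponent $\delta$ can be estimated directly from a simulated shock trajectory by a power-law fit of $v_{\rm shock}$ against $R_{\rm shock}$; the Python sketch below (with a hypothetical trajectory) illustrates the procedure and recovers $\delta \approx 0$ for a constant-velocity shock.
\begin{verbatim}
import numpy as np

# Hedged sketch: estimate delta in v_shock ~ R_shock**delta from a
# shock trajectory R_shock(t); delta ~ 0 means constant shock velocity.
def fit_delta(t, r_shock):
    v = np.gradient(r_shock, t)        # v_shock = dR_shock/dt
    delta, _ = np.polyfit(np.log(r_shock), np.log(v), 1)
    return delta

t = np.linspace(0.4, 1.0, 100)         # time in s after bounce (toy)
print(round(fit_delta(t, 2.0e9 * t), 3))  # constant velocity -> ~0.0
\end{verbatim}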
Despite the greater difficulty in triggering an explosion, the HC model explodes more energetically than the LC model. According to the analysis performed in Section~\ref{sec:eexpl_contr}, the difference in the explosion energy between the HC and the LC model ultimately depends on the different amount of energy deposited by neutrinos. Since the high compactness model requires a larger energy deposition to overcome the ram pressure and the gravitational potential, the total energy of the corresponding ejecta (and in turn the explosion energy) is increased more substantially.
In addition, after the explosion has been triggered, the larger neutrino luminosities and densities that characterize the HC model inject more energy into the expanding shock compared with the LC model.
\subsection{Fitting of SN~1987A}
\label{sec:method_sn1987a}
The ultimate goal of core-collapse supernova simulations is to reproduce the properties observed in real supernovae. So far we have only focused on the dependence of dynamical features of the explosion (e.g., the explosion energy) on the parameter choices in the PUSH method. However, the ejected mass of radioactive nuclides (such as $^{56}$Ni) is an equally important property of the supernova explosion. Here, we describe how we calibrate the PUSH method by reproducing the explosion energy and mass of Ni ejecta of SN~1987A for a progenitor within the expected mass range for this supernova.
\subsubsection{Observational constraints from SN~1987A}
The analysis and the modeling of the observational properties of SN~1987A just after the luminosity peak have been the topic of a long series of works \citep[e.g.,][ and references therein]{Woosley1988,arnett89,Shigeyama1990,Kozma1998a,Kozma1998b,Blinnikov2000,Fransson.Kozma:2002,Utrobin2005,Seitenzahl2014}. They provide observational estimates for the explosion energy, the progenitor mass, and the ejected masses of $^{56}$Ni, $^{57}$Ni, $^{58}$Ni, and $^{44}$Ti, all of which carry rather large uncertainties.
In Table \ref{tab:sn1987a}, the values used for the calibration of the PUSH method are summarized.
The ZAMS progenitor mass is assumed to be between 18~M$_{\odot}$\xspace and 21~M$_{\odot}$\xspace, corresponding to typical values reported in the literature for the SN~1987A progenitor, see, e.g., \citet{Woosley1988,Shigeyama1990}. For the explosion energy we consider the estimate reported by \citet{Blinnikov2000}, $E_{\rm expl} = \left(1.1 \pm 0.3 \right) \times 10^{51}$~erg (for a detailed list of explosion energy estimates for SN~1987A, see for example Table~1 in \citet{handy14}). This value was obtained assuming $\sim$14.7~M$_{\odot}$\xspace of ejecta and a hydrogen-rich envelope of $\sim$10.3~M$_{\odot}$\xspace. The uncertainties in the progenitor properties and in the SN distance were taken into account in the error bar. The employed values of the total ejecta and of the hydrogen-rich envelope are compatible (within a 15\% tolerance) with a significant fraction of our progenitor candidates, especially for $M_{\rm ZAMS} < 19.6$~M$_{\odot}$\xspace (see Table~\ref{tab:prog_compact}, where the total ejecta can be estimated by subtracting $1.6$~M$_{\odot}$\xspace from the mass of the
star at the onset of the collapse). Explosion models with larger ejected mass (i.e., less compatible with our candidate sample) tend to have larger explosion energies (see, for example, \citet{Utrobin2005}). Finally, we consider the element abundances for $^{56,57}$Ni and $^{44}$Ti provided by \citet{Seitenzahl2014}, which were obtained from a least squares fit of the decay chains to the bolometric lightcurve. For $^{58}$Ni we use the value provided by \citet{Fransson.Kozma:2002}.
\begin{table}[h,t]
\caption{Observational properties of SN~1987A. \label{tab:sn1987a}}
\begin{center}
\begin{tabular}{lc}
\tableline \tableline
$E_{\rm expl}$ & $(1.1 \pm 0.3) \times 10^{51}$~erg \\
$m_{\rm prog}$ & 18-21 M$_{\odot}$\xspace\\
$m(^{56}{\rm Ni})$ & $(0.071 \pm 0.003)$~M$_{\odot}$ \\
$m(^{57}{\rm Ni})$ & $(0.0041 \pm 0.0018)$~M$_{\odot}$ \\
$m(^{58}{\rm Ni})$ & 0.006~M$_{\odot}$ \\
$m(^{44}{\rm Ti})$ & $(0.55 \pm 0.17) \times 10^{-4}$~M$_{\odot}$ \\
\tableline
\end{tabular}
\tablecomments{
The nucleosynthesis yields are taken from \cite{Seitenzahl2014} except for $^{58}$Ni which is taken from \cite{Fransson.Kozma:2002}. No error estimates were given for $^{58}$Ni. The explosion energy is adapted from \cite{Blinnikov2000}. For the progenitor range we chose typical values found in the literature, see e.g.\ \cite{Shigeyama1990,Woosley1988}.}
\end{center}
\end{table}
\subsubsection{Fitting procedure}
\label{sec: fit}
\begin{figure*}[htp!]
\includegraphics[width=0.5\textwidth]{fig12_ni56.eps}
\includegraphics[width=0.5\textwidth]{fig12_ni57.eps} \\
\includegraphics[width=0.5\textwidth]{fig12_ni58.eps}
\includegraphics[width=0.5\textwidth]{fig12_ti44.eps}
\caption{Ejected mass of $^{56}$Ni (top left), $^{57}$Ni (top right), $^{58}$Ni (bottom left), and $^{44}$Ti (bottom right) and explosion energy for four representative HC progenitor models. Five combinations of $k_{\rm push}$\xspace and $t_{\rm rise}$\xspace are shown, each with a different symbol. The error bar box represents the observational values from \cite{Seitenzahl2014} (for $^{56,57}$Ni and $^{44}$Ti) and from \citet{Fransson.Kozma:2002} (for $^{58}$Ni). No error bars are reported for $^{58}$Ni.
\label{fig:calibration_summary} }
\end{figure*}
\begin{figure*}[htp!]
\includegraphics[width=0.5\textwidth]{fig13_ni56.eps}
\includegraphics[width=0.5\textwidth]{fig13_ni57.eps} \\
\includegraphics[width=0.5\textwidth]{fig13_ni58.eps}
\includegraphics[width=0.5\textwidth]{fig13_ti44.eps}
\caption{Same as Figure~\ref{fig:calibration_summary}, but assuming 0.1~M$_{\odot}$ fallback. Note the different scale for $^{56}$Ni and $^{58}$Ni compared to Figure~\ref{fig:calibration_summary}.
\label{fig:calibration_summary_fb} }
\end{figure*}
We calibrate the PUSH method by finding a combination of progenitor mass, $k_{\rm push}$\xspace, and $t_{\rm rise}$\xspace which provides the best fit to all the observational quantities of SN~1987A mentioned above. The weight given to each quantity is related to its uncertainty. For example, due to its large uncertainty, the $^{44}$Ti mass does not provide a strong constraint on selecting the best fit.
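Our selection can be viewed as a weighted least-squares comparison; the Python sketch below illustrates the idea with the observables and uncertainties of Table~\ref{tab:sn1987a} (the model dictionaries are hypothetical, and $^{58}$Ni is left out because no uncertainty is quoted; as discussed below, $^{44}$Ti is eventually excluded as well).
\begin{verbatim}
# Hedged sketch: rank parameter combinations by a chi-square built
# from the SN 1987A observables; large error bars give small weights.
OBS   = {"E_expl": 1.1,  "Ni56": 0.071, "Ni57": 0.0041, "Ti44": 0.55e-4}
SIGMA = {"E_expl": 0.3,  "Ni56": 0.003, "Ni57": 0.0018, "Ti44": 0.17e-4}

def chi2(model):
    # model: predicted values for one (progenitor, k_push, t_rise) combo
    return sum(((model[k] - OBS[k]) / SIGMA[k]) ** 2 for k in OBS)

def best_fit(models):
    # models: {label: predictions}; returns the label minimizing chi2
    return min(models, key=lambda label: chi2(models[label]))
\end{verbatim}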
Figure~\ref{fig:calibration_summary} shows the explosion energy and ejected mass of $^{56}$Ni, $^{57}$Ni, $^{58}$Ni, and $^{44}$Ti for different cases of $k_{\rm push}$\xspace and $t_{\rm rise}$\xspace and for four selected HC progenitors used to calibrate the PUSH method. We do not consider the LC progenitors, because of their generally lower explosion energies, see Figure~\ref{fig:kpush}. The different cases of $k_{\rm push}$\xspace and $t_{\rm rise}$\xspace span a wide range of explosion energies around 1~Bethe. For all parameter combinations shown, at least one progenitor in the 18-21~M$_{\odot}$\xspace range fulfills the requirement of an explosion energy between 0.8~Bethe and 1.4~Bethe.
There is a roughly linear correlation between the explosion energy and the synthesized $^{56}$Ni mass. However, this correlation is not directly compatible with the observations, as the ejected $^{56}$Ni is systematically larger than expected (up to a factor of $\sim 2$ for models with an explosion energy around 1~Bethe). There is a weak trend that models with higher $t_{\rm rise}$ give lower nickel masses for a given explosion energy. Among the parameter combinations that produce robustly high explosion energies (i.e., $k_{\rm push} \geq 3$), $k_{\rm push}=3.5$ with the high value of $t_{\rm rise}$ of 200~ms gives the lowest $^{56}$Ni mass for similar explosion energies, though still much too high.
Our simulations can be reconciled with the observations by taking into account fallback from the initially unbound matter. Since we do not model the explosion long enough to see the development of the reverse shock and the appearance of the related fallback when the shock reaches the hydrogen-rich envelope, we have to impose it, removing some matter from the innermost ejecta\footnote{Note that we did not modify the explosion energy due to the fallback. This is based on the expectation that at the late time when fallback forms, the explosion energy is approximately equally distributed among the total ejected mass, which is about two orders of magnitude higher than our fallback mass.}. With a value of $\sim 0.1$~M$_{\odot}$\xspace we can match both the expected explosion energy and the $^{56}$Ni ejecta mass, see Figure~\ref{fig:calibration_summary_fb}. In this way, the final mass cut is fixed by observations. However, we point out that we are able to identify the amount of late-time fallback only because we also have the dynamical mass cut from our hydrodynamical simulations. This is not possible in other methods such as pistons or thermal bombs. Our value of $\sim 0.1$~M$_{\odot}$\xspace of fallback in SN~1987A will be further discussed and compared with other works in Section~\ref{sec_fallback}.
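Operationally, imposing fallback amounts to shifting the mass cut outward by the fallback mass and discarding the yields of the innermost tracers, while, following the footnote above, the explosion energy is left unchanged. A minimal Python sketch with hypothetical tracer arrays:
\begin{verbatim}
import numpy as np

# Hedged sketch: impose late-time fallback by removing the innermost
# m_fb of the ejecta; isotope yields are recomputed above the new cut.
def yield_with_fallback(dm, x_iso, m_fb=0.1):
    # dm: tracer masses ordered outward from the mass cut (in Msun)
    # x_iso: mass fraction of one isotope (e.g. 56Ni) per tracer
    m = np.cumsum(dm)
    keep = m > m_fb                    # tracers that remain unbound
    return np.sum(dm[keep] * x_iso[keep])   # ejected isotope mass
\end{verbatim}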
The observed yield of $^{56}$Ni provides a strong constraint on which parameter combination fits the data. From the observed yields of $^{57}$Ni and $^{58}$Ni, only the 18.0 and 19.4~M$_{\odot}$\xspace progenitors remain viable candidates. Without fallback our predicted $^{44}$Ti yields are compatible with the observed yields (see Figure~\ref{fig:calibration_summary}). However, if we include fallback (which is needed to explain the observed Ni yields), $^{44}$Ti becomes underproduced compared to the observed value. Since this behavior is common to all our models, we exclude the constraint given by $^{44}$Ti from our calibration procedure. From the considered parameter combinations, we obtained the best fit to SN~1987A for the 18.0~M$_{\odot}$ progenitor model with $k_{\rm push}=3.5$, $t_{\rm rise}=200$~ms, and a fallback of 0.1~M$_{\odot}$. These parameters are summarized in Table \ref{tab:bestfit}. In Figure~\ref{fig:fitting model}, we show the temporal evolution of the accretion rates, of the relevant radii, and of the neutrino luminosities and mean energies for our best fit model.
For comparison purposes, we also present the results obtained for the same model without PUSH. Note that in this non-exploding case the $\nu_e$ and $\bar{\nu}_e$ luminosities stay almost constant for several hundred milliseconds after core bounce, despite the decreasing accretion rate. This is due to the relatively slow variation of $\dot{M}_{\rm PNS}$ (for example, compared with the variation obtained in the 19.2 M$_{\odot}$\xspace model, Figure~\ref{fig:compactness comparison}) and due to the simultaneous increase of the PNS gravitational potential, proportional to $M_{\rm PNS}/R_{\rm PNS}$ \citep[see, for example,][]{Fischer2009}.
A summary of the most important results of the simulations using this parameter set for the different progenitors in the 18-21~M$_{\odot}$\xspace window is given in Table~\ref{tab:values}. For the remnant mass and for the $^{56}$Ni yields of our best-fit model, we provide both the values obtained with and without assuming a fallback of $0.1$~M$_{\odot}$\xspace.
\begin{figure*}[htp!]
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth,angle=-90]{fig14_mdot_bf.eps}
\includegraphics[width=0.35\textwidth,angle=-90]{fig14_radii_bf.eps}
\\
\includegraphics[width=0.35\textwidth,angle=-90]{fig14_luminosity_bf.eps}
\includegraphics[width=0.35\textwidth,angle=-90]{fig14_meanE_bf.eps}
\end{tabular}
\caption{Same as in Figure~\ref{fig:compactness comparison}, but for the SN~1987A best fit model: 18.0~M$_{\odot}$\xspace progenitor, with $t_{\rm on}=80$~ms, $t_{\rm rise}=200$~ms, and $k_{\rm push}=3.5$.
\label{fig:fitting model} }
\end{figure*}
\begin{table}[h,t]
\caption{Parameter values for best fit to SN~1987A. \label{tab:bestfit}}
\begin{center}
\begin{tabular}{cccc}
\tableline \tableline
$k_{\rm push}$\xspace & $t_{\rm rise}$\xspace & $t_{\rm on}$ & $t_{\rm off}$ \\
~(-) & (ms) & (ms) & (s) \\
\tableline
3.5 & 200 & 80 & 1 \\
\tableline
\end{tabular}
\tablecomments{We identified the 18.0~M$_{\odot}$\xspace model as the best-fitting progenitor, for which we additionally had to impose a late-time fallback of 0.1~M$_{\odot}$. }
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\caption{Summary of simulations for $k_{\rm push}=3.5 $ and $t_{\rm rise}=200$~ms. \label{tab:values} }
\begin{tabular}{cccccc}
\tableline \tableline
ZAMS & $E_{\mathrm{expl}}$ & $t_{\mathrm{expl}}$ & M$^B_{\mathrm{remnant}}$ & M$^G_{\mathrm{remnant}}$ & M($^{56}$Ni ) \\
(M$_{\odot}$) & (Bethe) & (s) & (M$_{\odot}$) & (M$_{\odot}$) & (M$_{\odot}$) \\
\tableline
18.0 & 1.092 & 0.304 & 1.563 & 1.416 & 0.158 \\
18.2 & 0.808 & 0.249 & 1.509 & 1.371 & 0.110 \\
18.4 & 1.358 & 0.318 & 1.728 & 1.549 & 0.144 \\
18.6 & 0.702 & 0.239 & 1.529 & 1.388 & 0.090 \\
18.8 & 0.721 & 0.236 & 1.522 & 1.382 & 0.093 \\
19.0 & 1.366 & 0.317 & 1.716 & 1.540 & 0.161 \\
19.2 & 1.356 & 0.318 & 1.724 & 1.546 & 0.152 \\
19.4 & 1.150 & 0.326 & 1.608 & 1.452 & 0.158 \\
19.6 & 0.371 & 0.230 & 1.584 & 1.433 & 0.040 \\
19.8 & 0.661 & 0.225 & 1.523 & 1.383 & 0.088 \\
20.0 & 0.613 & 0.222 & 1.474 & 1.342 & 0.085 \\
20.2 & 0.379 & 0.224 & 1.554 & 1.408 & 0.039 \\
20.4 & 0.743 & 0.263 & 1.674 & 1.506 & 0.094 \\
20.6 & 1.005 & 0.277 & 1.781 & 1.592 & 0.141 \\
20.8 & 0.959 & 0.277 & 1.764 & 1.578 & 0.135 \\
21.0 & 1.457 & 0.316 & 1.733 & 1.554 & 0.198 \\
\tableline
18.0 (fb) & 1.092 & 0.304 & 1.663 & 1.497 & 0.073 \\
\tableline
\end{tabular}
\tablecomments{For the model 18.0 (fb), which is our best fit to SN~1987A, we have included 0.1~M$_{\odot}$ of fallback, determined from observational constraints. See the text for more details. }
\end{center}
\end{table}
\subsection{Ni and Ti yields, progenitor dependence}
\label{sec_yields}
Figures~\ref{fig:calibration_summary} and \ref{fig:calibration_summary_fb} show that the composition of the ejecta is highly dependent on the progenitor model, especially for the amount of $^{57}$Ni and $^{58}$Ni ejected. Of the four HC progenitors shown, two (18.0~M$_{\odot}$ and 19.4~M$_{\odot}$) produce a fairly high amount of those isotopes, while the other two (19.2~M$_{\odot}$ and 20.6~M$_{\odot}$) do not reach the amount observed in SN~1987A. A thorough investigation of the composition profile of the ejecta reveals that $^{57}$Ni and $^{58}$Ni are mainly produced in the slightly neutron-rich layers ($Y_e < 0.5$), where the alpha-rich freeze-out leads to nuclei only one or two neutron units away from the $N=Z$ line. A comparison of the $Y_e$ and composition profiles for the 18.0~M$_{\odot}$ and the 20.6~M$_{\odot}$ progenitors is shown in Figure~\ref{fig:composition_ye_profiles}. For the 18.0~M$_{\odot}$ model, the mass cut is at 1.56~M$_{\odot}$ and a large part of the silicon shell is ejected. In
this shell, the initial matter composition is slightly neutron-rich (due to a small contribution from $^{56}$Fe) with $Y_e \simeq 0.498$ (dotted line in top left graph) and the conditions for the production of $^{57}$Ni and $^{58}$Ni are favorable. The increase in $Y_e$ around 1.9~M$_{\odot}$ marks the transition to the oxygen shell. The same transition for the 20.6~M$_{\odot}$ model happens around 1.74~M$_{\odot}$, i.e., inside the mass cut. Therefore, this model ejects less $^{57}$Ni and $^{58}$Ni (see also \citealt{thielemann90}). In all our models, $^{44}$Ti is produced within the innermost 0.15~M$_{\odot}$ of the ejecta (see Figure \ref{fig:composition_ye_profiles}). Since we assume 0.1~M$_{\odot}$ fallback onto the PNS, most of the synthesized $^{44}$Ti is not ejected in our simulations.
\begin{figure*}[htp!]
\includegraphics[width=0.5\textwidth]{fig15a_ye_profile_s18.0_2nd.eps}
\includegraphics[width=0.5\textwidth]{fig15b_ye_profile_s20.6.eps}
\includegraphics[width=0.5\textwidth]{fig15c_comp_s18.0_2nd.eps}
\includegraphics[width=0.5\textwidth]{fig15d_comp_s20.6_2nd.eps}
\caption{Electron fraction profiles (top) and nuclear compositions at 100~s
(bottom) above the mass cut for the 18.0~M$_{\odot}$ (left) and the 20.6~M$_{\odot}$ (right) progenitors with the parameters $k_{\rm push}=3.5$ and $t_{\rm rise}=200$~ms. The electron fraction is plotted for two different times in the network: the input values for the first timestep (``input'') and the value after post-processing (``final''). The dashed lines in all panels correspond to the alternative case, where $Y_e^{\rm hydro} (t=4.6~\rm s)$ is taken as the initial electron fraction in the network, whereas the solid lines represent the standard case (using $Y_e^{\rm hydro} (T=10~\rm GK)$).
\label{fig:composition_ye_profiles}}
\end{figure*}
\section{Implications and Discussion}
\label{sec:discuss}
\subsection{Sensitivities of nucleosynthesis yields}
\label{sec: ye dependence of nucleosynthesis}
While post-processing the ejecta trajectories for nucleosynthesis, $Y_e$ is evolved by the nuclear network independently of the hydrodynamical evolution. This leads to a discrepancy at later times between the electron fraction in the initial trajectory ($Y_e^{\rm hydro}$) and in the network ($Y_e^{\rm nuc}$). In order to estimate the possible error in our nucleosynthesis calculations arising from this discrepancy, we have performed reference calculations using $Y_e^{\rm hydro}(t=t_{\rm final})$ instead of $Y_e^{\rm hydro}(T=10\;{\rm GK})$ as a starting value for the network (see Section~\ref{sec:winnet}). The results are shown in Figure~\ref{fig:composition_ye_profiles} for two progenitors: 18.0~M$_{\odot}$ and 20.6~M$_{\odot}$. The label ``standard'' refers to the regular case which uses $Y_e^{\rm hydro}(T=10\;{\rm GK})$ as input. The calculation using $Y_e^{\rm hydro}(t=t_{\rm final})$ as input is labeled ``alternative'' and is represented by the dashed lines. The point in time at which the $Y_e$ profile is shown is indicated by the labels ``input'' (before the first timestep) and ``final'' (at $t=100$~s).
The corresponding nuclear compositions of the ejecta, each at the final calculation time of 100~s, are shown in the bottom panels. For the alternative
$Y_e$ profile of the 18.0~M$_{\odot}$ progenitor (top left) the minimum around 1.59~M$_{\odot}$ disappears, leading to an increase in $^{56}$Ni in this region at the expense of $^{57}$Ni and $^{58}$Ni (bottom left). For the 20.6~M$_{\odot}$ progenitor the situation is similar, with only a very small region just above 1.8~M$_{\odot}$ showing significant differences. In general, we observe that the uncertainties in $Y_e$ in our calculations are only present up to 0.05~M$_{\odot}$\xspace above the mass cut. The resulting uncertainties in the composition of the ejecta are very small or even nonexistent in the scenarios where we consider fallback.
The radioactive isotope $^{44}$Ti can be detected in supernovae and supernova remnants. Several groups have used different techniques to estimate the $^{44}$Ti yield \citep{chugai97,Fransson.Kozma:2002,jerkstrand11,larsson11,grebenev12,grefenstette14,Seitenzahl2014}. The inferred values span a broad range, $(0.5 - 4) \times 10^{-4}$~M$_{\odot}$\xspace. Traditional supernova nucleosynthesis calculations \citep[e.g.][]{tnh96,ww95} typically predict too low $^{44}$Ti yields. Only very few models predict high $^{44}$Ti yields: \citet{thielemann90} report $^{44}$Ti yields around $10^{-4}$~M$_{\odot}$\xspace and above in the best fits of their artificial SN explosions to SN~1987A. \citet{rauscher2002} argue that the yields of $^{56}$Ni and $^{44}$Ti are very sensitive to the ``final mass cut'' (as we have shown, too), which is often determined by fallback. Ejecta in a supernova may be subject to convective overturn. To account for this, we can assume homogeneous mixing in the inner layers up to the outer boundary of the silicon shell before
cutting off the fallback material (see, for example, \cite{Umeda2002} and references therein). For our best-fit model, the ejected $^{44}$Ti mass increases to $2.70~\times~10^{-5}$~M$_{\odot}$, if this prescription is applied. Comparing to the previous yield of $1.04~\times~10^{-5}$~M$_{\odot}$, we observe that the effect of homogeneous mixing is considerable, but not sufficient to match the observational values. The ejected $^{56-58}$Ni masses also show a slight increase. However, there are also uncertainties in the nuclear physics connected to the production and destruction of $^{44}$Ti. The final amount of produced $^{44}$Ti depends mainly on two reactions: $^{40}$Ca($\alpha,\gamma$)$^{44}$Ti and $^{44}$Ti($\alpha,p$)$^{47}$V. Recent measurements of the $^{44}$Ti($\alpha,p$)$^{47}$V reaction rate within the Gamow window concluded that it may be considerably smaller than previous theoretical predictions \citep{margerin2014}. In that study, an upper limit cross section is reported that is a factor of 2.2 smaller (at a confidence level of $68\%$) than the cross section we have used in our calculations. Using this smaller cross section for the $^{44}$Ti($\alpha,p$)$^{47}$V reaction, the yield of ejected $^{44}$Ti for our best-fit model (18.0~M$_{\odot}$ progenitor, $k_{\rm push}=3.5$, $t_{\rm rise}=200$~ms) rises to $1.49~\times~10^{-5}$~M$_{\odot}$ with fallback and $5.65~\times~10^{-5}$~M$_{\odot}$ without fallback. This corresponds to a relative increase of $43\%$ with fallback and $48\%$ without fallback. If we include both the new cross section and homogeneous mixing, the amount of $^{44}$Ti in the ejecta is $3.99~\times~10^{-5}$~M$_{\odot}$ including fallback. This value, however, is still below the expected value derived from observations, but within the error box.
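The homogeneous-mixing prescription used above can be sketched in the same spirit: mass fractions are averaged over the inner layers up to the outer boundary of the silicon shell, and only then is the fallback cut applied (hypothetical arrays; cf. \cite{Umeda2002}).
\begin{verbatim}
import numpy as np

# Hedged sketch: homogeneous mixing of the inner ejecta up to the
# outer edge of the Si shell, applied before the fallback cut.
def mixed_yield(dm, x_iso, m_si_edge, m_fb=0.1):
    m = np.cumsum(dm)                  # enclosed mass above the mass cut
    inner = m <= m_si_edge             # region assumed fully mixed
    x_mean = np.sum(dm[inner] * x_iso[inner]) / np.sum(dm[inner])
    x_mix = np.where(inner, x_mean, x_iso)
    keep = m > m_fb                    # remove fallback after mixing
    return np.sum(dm[keep] * x_mix[keep])
\end{verbatim}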
\subsection{Wind ejecta}
In the analysis of the nucleosynthesis yields above, we have used a mass resolution of 0.001~M$_{\odot}$\xspace for the tracers. This is too coarse to resolve the ejecta of the late neutrino-driven wind. Note that in our best-fit approach, where no mixing is assumed, none of the neutrino-driven wind is ejected because it is part of the fallback. Nevertheless, in the following we report briefly on the properties of the wind obtained by our detailed neutrino-transport scheme. For our best-fit model, the 18.0~M$_{\odot}$ progenitor, at $t_{\rm final}$ we find an electron fraction around 0.32, entropies up to 80~$k_{B}$ per baryon, and fast expansion velocities ($\sim 10^9$~cm/s). Similar conditions are also found for the other progenitors. They are not sufficient for a full r-process (see, for example, \citet{farouqi2010}). On the other hand, we have found that the entropy is still increasing and the electron fraction still decreasing in the further evolution. The high asymmetries are only obtained if we include the
nucleon mean-field interaction potentials in the neutrino charged-current rates \citep{martinez12}. However, they are much higher than found in other long-term simulations which also include these potentials \citep{roberts12,martinez12,martinez14}. This could be related to the missing neutrino-electron scattering in our neutrino transport, which is an important source of thermalization and down-scattering, especially for the high energy electron antineutrinos at late times, see \citet{fischer12}. More detailed comparisons are required to identify the origin of these differences; this will be addressed in a future study.
\subsection{Amount of fallback}
\label{sec_fallback}
To reconcile our models with the nucleosynthesis observables of SN~1987A we need to invoke 0.1~M$_{\odot}$\xspace of fallback (see Section \ref{sec: fit}). The variation in the amount of synthesized Ni isotopes between runs obtained with different PUSH parameters (Figure~\ref{fig:calibration_summary}) suggests that a smaller $t_{\rm rise}$ (and consequently smaller $k_{\rm push}$) could also be compatible with SN~1987A observables, if a larger fallback is assumed. On the one hand, assuming that $t_{\rm rise}$ ranges between 50 ms and 250 ms, the fallback for the 18.0~M$_{\odot}$\xspace model compatible with observations lies between 0.14~M$_{\odot}$\xspace (for $t_{\rm rise}=50$~ms) and 0.09~M$_{\odot}$\xspace (for $t_{\rm rise}=250$~ms). On the other hand, once the amount of fallback is fixed, the observed yields (especially of $^{56}$Ni) reduce the uncertainty in $t_{\rm rise}$ to $\lesssim 50$~ms.
Our choice of 0.1~M$_{\odot}$\xspace is compatible with the fallback obtained by \cite{Ugliano.Janka.ea:2012} in exploding spherically symmetric models for progenitor stars in the same ZAMS mass window. Moreover, \citet{chevalier89} estimated a total fallback around 0.1~M$_{\odot}$\xspace for SN~1987A, which is supposed to be an unusually high value compared to ``normal'' type II supernovae. Recent multi-dimensional numerical simulations by \citet{bernal13,fraija14} confirmed this scenario and furthermore showed that such a hypercritical accretion can lead to a submergence of the magnetic field, giving a natural explanation why the neutron star (possibly) born in SN~1987A has not been found yet.
\subsection{Compact Remnant of SN~1987A}
From the observational side, the compact remnant in SN~1987A is still obscure. From the neutrino signal (see, e.g., \citet{arnett89,koshiba92} and \citet{vissani14} for a recent detailed analysis) one can conclude that a PNS was formed and that it survived for at least about 12~s. The mass cut in our calibration run is located at an enclosed baryon mass of 1.56~M$_{\odot}$\xspace without fallback. If we include the 0.1~M$_{\odot}$\xspace of late-time fallback required to fit the observed nickel yields and the explosion energy, we have a final baryonic mass of 1.66~M$_{\odot}$\xspace. For the employed HS(DD2) EOS this corresponds to a gravitational mass of a cold neutron star of 1.42~M$_{\odot}$\xspace (without fallback) or 1.50~M$_{\odot}$\xspace (with fallback). The CCSN simulations with artificial explosions of \citet{thielemann90}, where a final kinetic energy of 1~Bethe was obtained by hand and where the mass cut was deduced from a $^{56}$Ni yield of $(0.07\pm 0.01)$~M$_{\odot}$\xspace, led to a similar baryonic mass of $(1.6 \pm 0.045)$~M$_{\odot}$\xspace. These authors also noted that uncertainties in the stellar models could increase this value to $1.7$~M$_{\odot}$\xspace, which would also be fully compatible with our result.
The prediction of the neutron star mass has important consequences. From the observations of \citet{demorest2010} and \citet{antoniadis2013} it follows that the maximum gravitational mass of neutron stars has to be above two solar masses. The maximum mass of the HS(DD2) EOS is 2.42~M$_{\odot}$\xspace, corresponding to a baryonic mass of 2.92~M$_{\odot}$\xspace. If the compact remnant in SN~1987A were a black hole, and not a neutron star, at least $\sim 0.5$~M$_{\odot}$\xspace of additional accreted mass would be required, if we just take the two-solar-mass limit. If we use the maximum baryonic mass of HS(DD2), we even have to accrete $\sim$1.3~M$_{\odot}$\xspace of additional material. Obviously, if such a huge amount of material were accreted onto the neutron star, our predictions for the explosion energy and the nucleosynthesis would no longer apply.
Nevertheless, we have the impression that it would be difficult to fit the SN~1987A observables and obtain a black hole as the compact remnant at the same time. For spherical fallback, it is certainly excluded. The only possibility could be a highly anisotropic explosion and aspherical accretion, which we cannot address with our study. Showing whether such a scenario can be realized remains a task for future multi-dimensional studies. In the 2D simulations of \citet{yamamoto2013}, the remnant mass decreases with the explosion energy, and an explosion energy above 1~Bethe would result in neutron stars below $\sim 2$~M$_{\odot}$\xspace baryonic mass. Note that \citet{kifonidis2006} already came to the same conclusion that the formation of a black hole in SN~1987A ``is quite unlikely'', based on 2D simulations with a 15~M$_{\odot}$\xspace progenitor.
Another possibility was proposed by \citet{chan2009}. These authors argued that the time delay of $\sim 5$~s observed for the neutrino signal by the IMB detector could be related to a collapse to a quark star. Due to the proposed faster neutrino cooling of quark stars, this would give a natural explanation why it has not been observed until today. Since our simulations also end around 5~s, we can make statements about the conditions under which the phase transition to quark matter would have taken place in SN~1987A, if the scenario of \citet{chan2009} were true. We find a central mass density of $4.56 \times 10^{14}$~g/cm$^3$, corresponding to $n_B^c=0.272$~fm$^{-3}$ or $n_B^c=1.83~n_B^0$, a temperature of 23.2~MeV, and an electron fraction of 0.24. Some simplified models for quark matter predict that the phase transition in symmetric matter is shifted to higher densities compared with supernova conditions \citep{fischer11}. Under that hypothesis, a phase transition around 2~$\rho_0$ and 20 MeV cannot be excluded.
A simpler explanation is given by the possibility that a pulsar in the SN~1987A remnant is simply not (yet) observable. \citet{oegelman2004,graves2005} showed that the non-detection of any compact remnant puts important limits on the magnetic field the NS can have (either unusually low or very high, in the realm of magnetars). Furthermore, for both cases (NS and BH) \citet{graves2005} put severe constraints on currently ongoing accretion scenarios, e.g., spherical accretion is almost ruled out. \citet{graves2005} conclude that ``it seems unlikely that the remnant of SN~1987A currently harbors a pulsar''. Our simulations would be in line with the option of a neutron star with a very low magnetic field or with a ``normal'' magnetic field which is still (partly) buried in the crust due to the late time fallback, similar to what is observed for neutron stars in binary systems. In this respect, recent high-resolution radio observations of the remnant indicate the presence of a compact source or a pulsar wind
nebula \citep{zanardo13,Zanardo2014}. Future observations will be able to clarify the nature of this emission.
\subsection{Correlations}
As a byproduct of exploring the 18-21~M$_{\odot}$\xspace window and the fitting procedure to SN~1987A we have found interesting correlations between different quantities, which we discuss here. In Figure~\ref{fig:correls_with-xi}, we plot the explosion energy, the explosion time, and the (baryonic) remnant mass as a function of the progenitor compactness. The results obtained with the calibrated runs indicate a general trend with progenitor compactness for $E_{\rm expl}$. The explosion time, $t_{\rm expl}$, is almost constant within each of the LC and HC groups, while the difference between the two groups is related to the different ways in which LC and HC models explode (discussed in Section~\ref{sec:hc and lc}). The remnant mass increases with compactness, as expected. Nevertheless, we notice significant deviations from the described trends: for $E_{\rm expl}$ and $t_{\rm expl}$ in the HC sample, and for $M_{\rm rem}$ mainly in the LC sample.
Figure~\ref{fig:correls_Eexpl-texpl} shows explosion times and explosion energies for all the exploding runs in our sample. We can identify a correlation between $t_{\rm expl}$ and $E_{\rm expl}$ for a given progenitor: the larger $t_{\rm expl}$, the lower $E_{\rm expl}$. This correlation is more pronounced for the HC models than for the LC models. It means that, in PUSH, the explosion cannot set in too late if the observed explosion energy is to be reproduced.
\begin{figure}[hbp!]
\includegraphics[width=0.5\textwidth]{fig16a_best_kpush_trise_Eexpl.eps}
\includegraphics[width=0.5 \textwidth]{fig16b_best_kpush_trise_texpl.eps}
\includegraphics[width=0.5\textwidth]{fig16c_best_kpush_trise_mremn.eps}
\caption{Explosion energies (top), explosion times (middle), and (baryonic) remnant mass (bottom) as function of compactness for the PUSH parameters of our best-fit model ($k_{\rm push}$\xspace $=3.3$ and $t_{\rm rise}=0.15$~s) for all progenitors in the 18-21~M$_{\odot}$\xspace window. HC models are denoted by a red cross, LC models by a blue plus. Our best-fit model for SN~1987A is highlighted by a filled triangle.
\label{fig:correls_with-xi} }
\end{figure}
\begin{figure}[htp!]
\includegraphics[width=0.5\textwidth]{fig17_texpl_Eexpl_mass.eps} \\
\caption{Explosion energy $E_{\rm expl}$ versus explosion time $t_{\rm expl}$ for all the progenitors in the 18-21~M$_{\odot}$\xspace range and for different combinations of $k_{\rm push}$\xspace and $t_{\rm rise}$; only the exploding models are included. HC models are indicated by a triangle, LC models by a circle. The best-fit model is indicated by a cross. The different colors distinguish different progenitors.
\label{fig:correls_Eexpl-texpl} }
\end{figure}
\subsection{Heating efficiency and residence time}
In the context of CCSNe, the heating efficiency $\eta$ is often defined as the ratio between the volume-integrated, net energy deposition inside the gain region and the sum of the $\nu_e$ and $\bar{\nu}_e$ luminosities at infinity:
\begin{equation}
\eta = \frac{\int_{V_{\rm gain}} \rho \, \dot{e}_{\nu_e,\bar{\nu}_e}{\rm d}V}{L_{\nu_{e}} + L_{\bar{\nu}_{e}}},
\end{equation}
see, e.g., \citet{Murphy2008,Marek2009,Mueller.Janka.ea:2012,Couch2014,Suwa2013}.
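As a simple numerical illustration of this definition (not part of our simulation pipeline), the following Python sketch evaluates $\eta$ on a toy discretization of the gain region; all shell data and luminosities are hypothetical placeholder values:
\begin{verbatim}
# Toy evaluation of the heating efficiency eta; all shell data and
# luminosities are hypothetical placeholders, not simulation output.
import numpy as np

rho  = np.array([1e9, 5e8, 2e8])         # shell density [g/cm^3]
edot = np.array([2e20, 1.5e20, 0.8e20])  # net nu_e+nubar_e heating [erg/g/s]
dV   = np.array([1e22, 3e22, 8e22])      # shell volume [cm^3]

L_nue, L_nuebar = 3.0e52, 2.8e52         # luminosities at infinity [erg/s]

# eta = (volume integral of rho*edot over the gain region)/(L_nue+L_nuebar)
eta = np.sum(rho * edot * dV) / (L_nue + L_nuebar)
print(f"eta = {eta:.3f}")                # ~0.1 for these placeholder values
\end{verbatim}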
In non-exploding, spherically symmetric simulations, $\eta$ usually rises within a few tens of milliseconds after core bounce and reaches its maximum of $\eta \sim 0.1$ at $t \approx 100 \, \rm{ms}$, when the shock approaches its maximum radial extension. As soon as the shock starts to recede and the volume of the gain region decreases, $\eta$ diminishes quickly to a few percent (see, for example, the long-dashed lines in Figure~\ref{fig:eta plot}).
\begin{figure}[htp!]
\includegraphics[width=0.35\textwidth,angle=-90]{fig18_eta_bf.eps}
\caption{Neutrino heating efficiency for the SN~1987A best fit model: 18.0~M$_{\odot}$\xspace model with $t_{\rm on}=80$~ms, $t_{\rm rise}=200$~ms, and $k_{\rm push}=3.5$. The solid lines represent the total efficiency (i.e., due to $\nu_e$ and $\bar{\nu}_e$ absorption and due to PUSH), the short-thick dashed lines the efficiency only due to $\nu_e$ and $\bar{\nu}_e$ absorption. For comparison, the heating efficiency of the corresponding non-exploding model ($k_{\rm push} = 0$) is also presented (long-thin dashed lines).
\label{fig:eta plot} }
\end{figure}
In multi-dimensional simulations, where the shock contraction is delayed or even absent, energy deposition is expected to be slightly more efficient ($\eta \sim $ 0.10 -- 0.15 at maximum) and to decrease more slowly, within a few hundred milliseconds after bounce or at the onset of an explosion \citep[see, for example,][]{Murphy2008,Mueller.Janka.ea:2012,Couch2013a,Couch2014}. These differences arise not only because the gain region does not contract, but also because neutrino-driven convection efficiently mixes low and high entropy matter between the neutrino cooling and heating regions below the shock front. Furthermore, convective motions and SASIs are expected to significantly increase the residence time of fluid particles inside the gain region, during which they are subject to intense neutrino heating \citep[see, e.g., ][]{Murphy2008,handy14}. Since the increase of the particle internal energy is given by the time integral of the energy absorption rate over the residence time, this translates to a larger energy variation \citep{handy14}.
In spherically symmetric models, the imposed radial motion does not allow for an increase of the residence time. This constraint limits the energy gain of a mass element traveling through the gain region. In models exploded using the light-bulb approximation, a large enough internal energy variation is provided by increasing the neutrino luminosity above a critical value, which depends on the mass accretion rate and on the dimensionality of the model
\citep[e.g.,][]{Burrows1993,Yamasaki2005,iwakami08,Murphy2008,iwakami09,Nordhaus2010,Hanke2012,Couch2013a,Dolence2013,handy14,suwa14}.
Since in our model the neutrino luminosities are uniquely determined by the cooling of the PNS and by the accretion rate history, we increase the energy gain by acting on the neutrino heating efficiency. This effect can be made visible by defining a heating efficiency $\eta_{\rm tot}$ that takes the PUSH contribution into account:
\begin{equation}
\eta_{\rm tot} = \eta + \eta_{\rm push} = \frac{\int_{V_{\rm gain}} \rho \, \left( \dot{e}_{\nu_e,\bar{\nu}_e} + \dot{Q}^{+}_{\rm push} \right){\rm d}V} {L_{\nu_{e}} + L_{\bar{\nu}_{e}}}.
\end{equation}
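A minimal sketch of this extended definition, reusing the toy shells from above and mocking up $\dot{Q}^{+}_{\rm push}$ as a simple multiple of the electron-flavor heating (an illustrative assumption, not the actual PUSH prescription), could read:
\begin{verbatim}
# Same toy shells as above, now including a PUSH-like term. The rate
# q_push is mocked up as a multiple of the electron-flavor heating;
# this is an illustrative assumption, not the actual PUSH prescription.
import numpy as np

rho   = np.array([1e9, 5e8, 2e8])         # g/cm^3
edot  = np.array([2e20, 1.5e20, 0.8e20])  # erg/g/s
dV    = np.array([1e22, 3e22, 8e22])      # cm^3
L_sum = 5.8e52                            # L_nue + L_nuebar [erg/s]

G      = 3.5                # PUSH factor, here fully risen to k_push
q_push = 0.1 * G * edot     # placeholder for Qdot^+_push [erg/g/s]

eta     = np.sum(rho * edot * dV) / L_sum
eta_tot = np.sum(rho * (edot + q_push) * dV) / L_sum
print(f"eta = {eta:.3f}, eta_tot = {eta_tot:.3f}")
\end{verbatim}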
In Figure~\ref{fig:eta plot}, we plot $\eta_{\rm tot}$ as a function of time for our SN~1987A calibration model, with PUSH ($k_{\rm push} = 3.5$) and without it ($k_{\rm push} = 0$). We first notice that the heating efficiency provided by $\nu_{e}$ and $\bar{\nu}_{e}$ can differ between exploding (short-thick dashed lines) and non-exploding models (long-thin dashed lines). In the case of the exploding model, PUSH provides an increasing contribution to $\eta_{\rm tot}$. It continues to increase steeply up to $t \approx t_{\rm on} + t_{\rm rise}$, but also later, up to $t \approx t_{\rm expl}$, due to the shock expansion preceding the explosion. Thus, the increasing heating efficiency in our spherically symmetric models can be interpreted as an effective way to include average residence times longer than the advection timescale.
\begin{figure}[htp!]
\includegraphics[width=0.35\textwidth,angle=-90]{fig19_eta_global.eps}
\caption{Average and maximum heating efficiencies, calculated between $t = t_{\rm on}$ and $t = t_{\rm expl}$ for the runs obtained with the fitted parameters, Table~\ref{tab:bestfit}, and plotted as a function of the progenitor compactness $\xi_{1.75}$. The black crosses and the red triangles refer to the average and the maximum efficiency due to $\nu_e$ and $\bar{\nu}_e$ ($\eta$), while the blue stars and the magenta squares to the average and the maximum total efficiency ($\eta_{\rm tot}$), including also the PUSH contribution.
\label{fig:av_eta}}
\end{figure}
In Figure~\ref{fig:av_eta}, we collect the average and the maximum heating efficiencies for all the models obtained with the set of parameters resulting from the fit procedure (Table~\ref{tab:bestfit}). Both the average and the maximum values are computed within the interval $ t_{\rm on} \le t \le t_{\rm expl}$. We plot them as a function of the compactness and we distinguish between $\eta$ and $\eta_{\rm tot}$. The maximum of $\eta$ is usually realized at $t \approx t_{\rm on}$, while the maximum of $\eta_{\rm tot}$ is reached around $t \approx t_{\rm expl}$ (see also Figure~\ref{fig:eta plot}). Since the explosion sets in later for HC models, when $t_{\rm expl} \gtrsim t_{\rm on} + t_{\rm rise}$, the PUSH factor $\mathcal{G}$ has time to rise to $k_{\rm push}$\xspace for these models. This increases not only the maximum but also the average $\eta_{\rm tot}$ compared with the LC cases. We notice that all four quantities show a correlation with $\xi_{1.75}$, although it is much weaker for $\eta$ than for $\eta_{\rm tot}$. Moreover, in the HC region, we recognise deviations from monotonic behavior which reproduce the irregularities already observed in the explosion properties.
\subsection{Alternative measures of the explosion energy}
In the following, we discuss, for comparison, alternative measures of the explosion energy used in the literature. We investigate their behavior at early simulation times and their general rate of convergence. The diagnostic energy $E^+(t)$, see e.g.~\citet{Bruenn.Mezzacappa.ea:2013}, is given by the integral of the specific explosion energy $e_{\rm expl}$ over regions where it is positive (again, excluding the PNS core, see Section~\ref{sec:def_eexpl}). The quantity $E^+(t)$ is often used in multi-dimensional simulations as an estimate of the explosion energy at early simulation times, see e.g.~\citet{buras2006b,suwa2010,janka2012,Couch2014,takiwaki2014}.
The overburden $E_{\rm ov}(t)$, see \cite{Bruenn.Mezzacappa.ea:2013}, is given by the integral of the specific explosion energy of the still gravitationally bound regions between the expanding shock front and the surface of the progenitor star. If we define $E_{\rm ov}^+(t)$ as the sum of the overburden and of the diagnostic energy, we recover a measure of the explosion energy equivalent to the one defined in Equation~(\ref{eq:expl energy t}):
\begin{equation}
E_{\rm expl}(t)\equiv E_{\rm ov}^+(t)=E^+(t) + E_{\rm ov}(t).
\end{equation}
For long enough simulation times, all matter above the mass-cut should get positive specific explosion energies, and thus the overburden should approach zero and the diagnostic energy should become equal to the explosion energy $E_{\rm expl}(t)$.
Finally, an upper limit for the explosion energy is obtained by also taking into account the ``residual recombination energy'' $E_{\rm rec}(t)$ \citep{Bruenn.Mezzacappa.ea:2013}:
\begin{equation}
E_{\rm ov,r}^+(t)=E_{\rm ov}^+(t) + E_{\rm rec}(t) \, ,
\end{equation}
where $E_{\rm rec}(t)$ is the energy that would be released if all neutron-proton pairs and all \textsuperscript{4}He recombined to \textsuperscript{56}Ni in the regions of positive specific explosion energy. We call it \textit{residual} recombination energy to make clear that this is energy which is \textit{not} liberated in our simulations, in contrast to the energy of the recombination processes which we identified in Section~\ref{sec:eexpl_contr}.
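For illustration, the following sketch evaluates the three measures on a toy radial profile of the specific explosion energy above the mass cut; all numbers are placeholders, chosen only to give energies of order 0.1~Bethe:
\begin{verbatim}
# Toy radial profile of the specific explosion energy above the mass
# cut; all values are placeholders, not simulation output.
import numpy as np

e_expl = np.array([3e17, 1e17, -5e16, -2e16])  # [erg/g]; outer shells bound
dm     = np.array([1e33, 2e33, 5e33, 8e33])    # shell masses [g]

E_plus = np.sum(np.where(e_expl > 0, e_expl, 0.0) * dm)  # diagnostic E^+
E_ov   = np.sum(np.where(e_expl < 0, e_expl, 0.0) * dm)  # overburden (< 0)
E_expl = E_plus + E_ov                                   # E_ov^+ = E^+ + E_ov

E_rec = 2e49  # placeholder residual recombination energy [erg]
print(f"E^+    = {E_plus:.2e} erg")
print(f"E_ov   = {E_ov:.2e} erg")
print(f"E_expl = {E_expl:.2e} erg, upper limit {E_expl + E_rec:.2e} erg")
\end{verbatim}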
\label{sec:eexpl_evo}
\begin{figure}[htp!]
\includegraphics[angle=-90,width=0.5\textwidth]{fig20_expl_diagnostic.eps}
\caption{Temporal evolution of the diagnostic energy $E^+$, the explosion energy $E^+_{\rm ov}$, and the upper limit of the explosion energy also including the recombination energy $E_{\rm rec}$ for a HC (19.2~M$_{\odot}$\xspace progenitor) and a LC case (20.0~M$_{\odot}$\xspace progenitor), for PUSH parameters reported in Table~\ref{tab:HC vs LC}.
\label{fig:diagexplosion_energies}}
\end{figure}
In Figure \ref{fig:diagexplosion_energies}, we investigate the behavior of the diagnostic energy $E^+(t)$, and we compare it with our estimate of the explosion energy $E_{\rm expl}(t)\equiv E_{\rm ov}^+(t)$ and with its upper limit represented by $E^+_{\rm ov,r}(t)$. We want to emphasize that these quantites are obtained from mass integrals above the time-dependent mass-cut, in contrast to most of the energies investigated in Section~\ref{sec:eexpl_contr}, where a fixed mass domain was considered.
While $E_{\rm ov}^+(t)$ and $E^+_{\rm ov,r}(t)$ have already saturated to a constant value at $t\approx1.5$~s, even at $t\approx4.6$~s the diagnostic energy has not yet converged. $E_{\rm ov}^+(t)$ and $E^+_{\rm ov,r}(t)$ approach their asymptotic values from below, and any late-time increase ($t \gtrsim 1.5$~s) is due to the energy carried by the neutrino-driven wind ejected from the PNS surface. On the other hand, $E^+(t)$ reaches its maximum $\la 1$~s after $t_{\rm expl}$, when the neutrino absorption and the nuclear recombination have released most of their energy in the expanding shock wave, and then it decreases towards $E_{\rm ov}^+(t)$, since matter with negative total specific energy is accreted at the shock. The difference between $E_{\rm ov}^+(t)$ and $E^+(t)$ is mainly given by the gravitational binding energy of the stellar layers above the shock front. Thus, the rate of convergence of the diagnostic energy depends on the amount of gravitational binding energy contained in the outer envelope of the star and on the relative speed at which the shock propagates inside it. Since the gravitational binding energy of the outer layers is similar between the two explored models, the different rate of convergence depends mostly on the different expansion velocity of the shock wave, which is larger for the more energetic HC models.
\citet{yamamoto2013} found for a 15~M$_{\odot}$\xspace progenitor that the diagnostic energy saturates and thus reaches the asymptotic explosion energy already between 1 and 2~s post-bounce. This difference to what we find is related to the different progenitors used and, in particular, to the different binding energy of the outer envelopes, which is expected to be much smaller for a $15$~M$_{\odot}$\xspace progenitor than for a $\sim 20$~M$_{\odot}$\xspace progenitor \citep[see, for example, Figure 5 of][]{burrows13}. Nevertheless, we conclude that the diagnostic energy is in general (i.e., without further considerations) not suited to give an accurate estimate of the explosion energy at early times.
\subsection{Comparison with other works}
A similar fitting to the SN~1987A energetics has been done for multi-dimensional simulations (2D and 3D) using a light-bulb scheme for the neutrinos by \citet{handy14}. As initial conditions they used a post-collapse model based on the 15~M$_{\odot}$\xspace blue supergiant progenitor model of \citet{Woosley1988}. Even if they did not provide the corresponding compactness, the values of the accretion rate ($\sim 0.2-0.3$~M$_{\odot}$\xspace~s$^{-1}$) and of the electron neutrino luminosity ($\sim 1.8-3.5 \times 10^{51}$~erg~s$^{-1}$) at the onset of the explosion are more compatible with our LC models. In their fitting, only the diagnostic explosion energy $E^+$ was used, at a time of $t_{\rm pb} = 1.5$~s when it is expected to have saturated to $E_{\rm expl}$ (cf. \cite{yamamoto2013}), and no estimates for the nucleosynthesis yields were given. The time when the shock reaches 500~km (which corresponds for us to $t_{\rm expl}$) is significantly shorter in their models (90-140~ms after bounce), mainly due to the different extension and evolution of the shock during the first 100~ms after core bounce. A more detailed quantitative comparison (albeit limited by the different dimensionality of the two models) would require using a more similar progenitor. We note, however, that the advection timescale and the mass in the gain region in their models are larger than the corresponding values we have obtained in all our models, as expected from the larger average residence time resulting from multi-dimensional hydrodynamical effects.
\citet{Ugliano.Janka.ea:2012} also calibrated their spherically symmetric exploding models with the observational constraints from SN~1987A, and used progenitor models identical to the ones we have adopted \citep{Woosley.Heger:2002}. They also found that the remnant mass and the properties of the explosion exhibit a large variability inside the narrow 18-21~M$_{\odot}$\xspace ZAMS mass window (they even found some non-exploding models). However, they did not find any clear trend with progenitor compactness (for example, their calibration model is represented by the 19.8~M$_{\odot}$\xspace ZAMS mass progenitor, which belongs to the LC sample). The explosion timescales for models in the 18-21~M$_{\odot}$\xspace ZAMS mass interval are much longer in their case ($t_{\rm expl} \sim 0.3 - 1$~s), while their range for the explosion energy (0.6~--~1.6~Bethe) is relatively compatible with ours (0.4~--~1.6~Bethe). Clearly, all these discrepancies are related to the numerous differences between the two modeling approaches.
A possible relation between explosion properties and progenitor compactness was first pointed out by \citet{OConnor2010}, who searched for a minimum enhanced neutrino energy deposition in spherically symmetric models. Similarly to us, they found that more compact progenitors require a larger heating efficiency to explode. However, they did not investigate the explosion energy of their models. Moreover, they considered it unlikely that a model which requires $\eta \gtrsim 0.23$ ($\xi_{2.5} \gtrsim 0.45$) will explode in nature. In our analysis, we have interpreted a large neutrino heating efficiency in spherically symmetric models as an effective way to take into account longer residence times inside the gain region. We have pointed out that HC models, characterized by larger $\eta_{\rm tot}$, are required to obtain the observed properties of SN~1987A. However, these models still have $\xi_{2.5} < 0.45$, and our average heating efficiencies are below the critical value of \citet{OConnor2010}.
A clear correlation between explosion properties and progenitor compactness has been recently discussed by \citet{Nakamura2014}. They performed systematic 2D calculations of exploding CCSNe for a large variety of progenitors, using the IDSA to model $\nu_e$ and $\bar{\nu}_e$ transport. Due to computational limitations and due to the usage of only an NSE EOS, their simulations were limited to $\sim 1$~s after core bounce. Thus, they could not ensure the convergence of the diagnostic energy and could not directly compare their results with CCSN observables. However, they found trends with compactness similar to the ones we have found in our reduced sample.
Other authors have also compared the predicted explosion energy and Ni yield from their models to the observational constraints. For example, \citet{yamamoto2013}, using the neutrino light-bulb method to trigger explosions in spherical symmetry, found a similar trend between explosion energies and nickel masses as we found (see Table~\ref{tab:values}). They also compared to a thermal bomb model with similar explosion energies and mass cut, and found that the neutrino heating mechanism leads to systematically larger $^{56}$Ni yields. They related it to higher peak temperatures, which appear because a greater thermal energy is required to unbind the accreting envelope. They also concluded that the neutrino-driven mechanism is more similar to piston-driven models by comparing with \citet{young2007}. The problem of overproducing $^{56}$Ni is lessened in the 2D simulations of \citet{yamamoto2013} because of slightly lower peak temperatures and the occurrence of fallback.
The conclusions drawn in Section~\ref{sec:eexpl_contr} about the contributions of nuclear reactions to the explosion energy are somewhat opposite to what can be found in other works in the literature. For example, \citet{yamamoto2013} state that the contribution of the nuclear reactions to the explosion energy is comparable to or greater than that of neutrino heating. Furthermore, they identify the recombinations of nucleons into nuclei in NSE as the most important nuclear reactions. However, they also point out that this ``recombination energy eventually originates from neutrino heating''. We think that this aspect is crucial for understanding the global energetics. Indeed, if we had started the analysis presented in Figure~\ref{fig:explosion energy contributions} not at bounce but at $t_{\rm expl}$ we would also have identified a strong contribution from the nuclear reactions, given roughly by the difference between $-(E_{\rm mass}-E_{\rm mass,0})$ at $t_{\rm expl}$ (which is close to the minimum) and the
final value. However, as is clear from the figure, roughly the same amount of energy was actually taken from the thermal energy before $t_{\rm expl}$. The dominant net contribution to the explosion energy originates from neutrino heating, as is evident from Figure~\ref{fig:total energy contributions} and as we have discussed in detail in Section~\ref{sec:eexpl_contr}.
\section{Summary and Conclusions}
\label{sec:concl}
The investigation of the explosion mechanism of CCSNe, as well as accurate explorations of all the aspects related to it, is a long-standing but still fascinating problem.
Sophisticated multi-dimensional hydrodynamical simulations, possibly including detailed neutrino transport, microphysical EOS, magnetic fields, and aspherical properties of the progenitor structure, are ultimately required to address this problem.
The high computational costs of such models and the uncertainties in several necessary ingredients still motivate the usage of effective spherically symmetric models to perform extended progenitor studies.
In this work we have presented a new method, PUSH, for artificially triggering parametrized core-collapse supernova explosions of massive stars in spherical symmetry.
The method provides a robust and computationally affordable framework to study important aspects of core-collapse supernovae that require modeling of the explosion for several seconds after its onset for extended sets of progenitors.
Examples include the effects of the shock passage through the star, the neutron star mass distribution, the determination of the explosion energy, and explosive supernova nucleosynthesis. Here, we have focused on the exploration of basic explosion properties and on the calibration of PUSH by reproducing observables of SN~1987A. We considered progenitors in the ZAMS mass range of 18~--~21~M$_{\odot}$\xspace, which corresponds to typical values for the progenitor mass of SN~1987A \citep{Shigeyama1990}.
Unlike traditional methods (such as thermal bombs, pistons, or neutrino light-bulbs), our method does not require any external source of energy to trigger the explosion nor a modification of the charged-current neutrino reactions. Instead, the PUSH method taps a fraction of the energy from muon- and tau-neutrinos which are emitted by the PNS. This additional energy is deposited inside the gain region for a limited time after core bounce.
The introduction of a local heating term that is only active where electron-neutrinos are heating and where neutrino-driven convection can occur is inspired by qualitative properties of multi-dimensional CCSN simulations. We have two major free parameters, $t_{\rm rise}$, describing the temporal evolution of PUSH, and $k_{\rm push}$, controlling the strength. They are determined by comparing the outcome of our simulations with observations.
Our setup allows us to model the entire relevant domain, including the PNS and the ejecta.
In particular,
(i) the thermodynamic properties of matter both in NSE and non-NSE conditions are treated accurately;
(ii) the neutrino luminosities are directly related to the PNS evolution and to the mass accretion history;
(iii) the evolution of the electron fraction is followed by a radiative transport scheme for electron flavor neutrinos, which is important for the nucleosynthesis calculations.
We have studied the evolution of the explosion energy and how it is generated. The energy deposition by neutrinos is the main cause of the increase of the total energy of the ejecta and, thus, the main source of the explosion energy. The net nuclear binding energy released by the ejecta during the whole supernova (including both the initial endothermic photodissociation and the final exothermic explosive burning) is positive, but much smaller than the energy provided by neutrinos. Furthermore, we obtain an approximate convergence of the explosion energy typically only after 1 to 2 seconds and only if the full progenitor structure is taken into account. Vice-versa, we find that the so-called ``diagnostic energy'' is, in general, not suited to give an accurate estimate of the explosion energy at early times.
Our broad parameter exploration has revealed a distinction between high compactness ($\xi_{1.75}>0.45$) and low compactness ($\xi_{1.75}<0.45$) progenitor models for the ZAMS mass range of 18~--~21~M$_{\odot}$\xspace. The LC models tend to explode earlier, with lower explosion energy, and with a lower remnant mass. When the HC models explode, they tend to explode later, more energetically, and producing more massive remnants. This is due to different accretion histories of the LC and HC models. The HC models have larger accretion rates, which produce larger neutrino luminosities, (marginally) harder neutrino spectra, and a stronger ram pressure at the shock. In order to overcome this pressure a more intense neutrino energy deposition is required behind the shock.
And, once the explosion has been launched, a more intense energy deposition inside the expanding shock is observed.
Thus, HC models require more time to explode but the resulting explosions are more energetic.
The fitting of the PUSH parameters to observations of SN~1987A has led to several interesting conclusions. The requirement of an explosion energy around 1~Bethe has restricted our progenitor search to HC models. At the same time, our parameter space exploration has shown that a constraint on the explosion energy is equivalent to a tight correlation between the two most relevant PUSH parameters, $t_{\rm rise}$ and $k_{\rm push}$: if a certain explosion energy has to be achieved, a longer timescale for PUSH to reach its maximum efficiency ($t_{\rm rise}$) has to be compensated by a larger PUSH strength ($k_{\rm push}$). This degeneracy can be broken by including nucleosynthesis yields in the calibration of the free parameters.
We find an overproduction of $^{56}{\rm Ni}$ for runs with an explosion energy around and above 1~Bethe. This problem is observed for all the tested parameter choices and progenitors that provide a sufficiently high explosion energy. Thus, fallback is necessary in our models to reproduce the observed nucleosynthesis yields. This fallback could be associated with the formation of a reverse shock when the forward shock reaches the hydrogen shell. The relatively large amount of fallback that we use (0.1~M$_{\odot}$\xspace) is consistent with observational constraints from SN~1987A and with explicit calculations of the fallback for exploding models of $\sim 20$~M$_{\odot}$\xspace ZAMS mass progenitors \citep{chevalier89,Ugliano.Janka.ea:2012}.
The production of ${~}^{57-58}{\rm Ni}$ is sensitive to the electron fraction of the innermost ejecta. A final mass cut initially located inside the silicon shell can provide slightly neutron rich ejecta, corresponding to the conditions required to fit the ${~}^{57-58}{\rm Ni}$ yields of SN~1987A. We found that this is only possible for the 18.0 M$_{\odot}$\xspace and 19.4 M$_{\odot}$\xspace ZAMS mass progenitors, whereas for the other HC models, characterized by larger $\xi_{1.75}$, the mass cut is located inside the oxygen shell. The 18.0 M$_{\odot}$\xspace and 19.4 M$_{\odot}$\xspace ZAMS mass progenitors can explain the energetics and all nickel yields if fallback is included.
For $^{44}$Ti, in contrast, we find that it is underproduced. However, we have shown that uncertainties in the relevant nuclear reaction rates, together with mixing of the ejecta, can help reducing this discrepancy.
Our work has demonstrated that the progenitor structure and composition are of great importance for the nucleosynthesis yields. Recently, it has been pointed out that asphericities in the progenitor structure could aid the multi-dimensional neutrino-driven supernova mechanism \citep{Couch2013b,Mueller.Janka:2015,Couch.ea:2015}. For our work, the compositional changes induced by multi-dimensional effects in the progenitor evolution \citep{Arnett2015} would be of particular interest and could be the subject of future work. However, at present, databases with large sets of progenitors are only available for calculations that were done in spherical symmetry.
Finally, we have identified a progenitor (18.0~M$_{\odot}$\xspace ZAMS mass, compactness $\xi_{1.75} = 0.463$ at collapse) that fits the observables of SN~1987A for a suitable choice of the PUSH parameters ($t_{\rm on}=80$~ms, $t_{\rm rise}=200$~ms, and $k_{\rm push}=3.5$) and assuming 0.1~M$_{\odot}$\xspace of fallback. The associated explosion energy is $E_{\rm expl}=1.092$ Bethe, while $M({~}^{56}{\rm Ni})=0.073$ M$_{\odot}$\xspace. The formation of a BH in SN~1987A seems to be unlikely, since it would require a much larger fallback compared with our analysis and/or an extremely asymmetric explosion. Instead, we predict that in SN~1987A a neutron star with a baryonic mass of 1.66~M$_{\odot}$\xspace was born, corresponding to a gravitational mass of 1.50~M$_{\odot}$\xspace for a cold neutron star with our choice of the EOS. This will hopefully be probed by observations soon \citep{Zanardo2014}.
For our best-fit model of SN~1987A the explosion happens on a timescale of a few hundred milliseconds after core bounce. This timescale is consistent with the overall picture of a neutrino-driven supernova, and broadly compatible with the first results obtained in exploding, self-consistent, multi-dimensional simulations.
From exploring the progenitor range of 18~--~21~M$_{\odot}$\xspace ZAMS mass we found indications of a correlation between explosion properties and the compactness of the progenitor model. However, a more complete analysis will require the exploration of a larger set of progenitors with the PUSH method. This will be the topic of a forthcoming work. An extended study considering all possible progenitors for core-collapse supernovae in the mass range of 8~--~100~M$_{\odot}$\xspace will be relevant for several open questions in nuclear astrophysics, for example for the comparison of predicted to observed explosion energies, neutron-star remnant masses, and ejected $^{56}$Ni (see, e.g., \citet{Bruenn2014}) or for the prediction of complete nucleosynthesis yields of all elements which is a crucial input to galactic chemical evolution. A full progenitor study could also give more insight into the extent to which the compactness parameter affects the supernova energetics and nucleosynthesis.
\acknowledgments
The authors thank Marcella Ugliano for useful discussions and Tobias Fischer for useful comments to the manuscript.
The work of A.P. is supported by the Helmholtz-University Investigator grant No. VH-NG-825.
C.F.\ acknowledges support from the Department of Energy through an Early Career Award (DOE grant no.\ SC0010263) and through the Topical Collaboration in Nuclear Science ``Neutrinos and Nucleosynthesis in Hot and Dense Matter'' (DOE grant no.\ DE-SC0004786).
M.H., K.E., and M.E. acknowledge support from the Swiss National Science
Foundation (SNSF). Partial support comes from ``NewCompStar'', COST Action MP1304.
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n° 321263 - FISH. M.H. and F.K.T. are also grateful for participating in the ENSAR/THEXO project.
\bibliographystyle{apj}
\section{General Form of Boundary Conditions}
In \cite{PIERL2016,AP2017,GBC} the set of most general linear and local boundary conditions (GBC) was introduced as
\e \left(\begin{array}{cc} \#a_1 & \#b_1\\ \#a_2 & \#b_2\end{array}\right) \.\left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0\\0\end{array}\right), \l{GBC} \f
where $\#a_1, \#a_2, \#b_1$ and $\#b_2$ are four dimensionless vectors and $\eta_o=\sqrt{\mu_o/\epsilon_o}$. It is assumed for simplicity that the medium outside the boundary is isotropic with parameters $\epsilon_o,\mu_o$.
The general form of conditions \r{GBC} was arrived at through a process of generalizing known boundary conditions. Denoting vectors tangential to the boundary surface by $()_t$, the perfect electric conductor (PEC) and perfect magnetic conductor (PMC) conditions are respectively defined by
\e \#a_{1t}\.\#E=0,\ \ \ \ \#a_{2t}\.\#E=0,\ \ \ \ \#a_{1t}\times\#a_{2t}\not=0,\l{PEC} \f
and
\e \#b_{1t}\.\#H=0,\ \ \ \ \#b_{2t}\.\#H=0,\ \ \ \ \#b_{1t}\times\#b_{2t}\not=0. \l{PMC}\f
A generalization of these, the perfect electromagnetic conductor (PEMC), is defined by \cite{PEMC,253}
\e \#b_{1t}\.(\#H + M\#E)=0,\ \ \ \#b_{2t}\.(\#H+ M\#E)=0, \l{PEMC}\f
where $M$ is the PEMC admittance.
Conditions \r{PEC} -- \r{PEMC} are special cases of the impedance-boundary conditions \cite{Methods} defined by,
\e \left(\begin{array}{cc} \#a_{1t} & \#b_{1t}\\ \#a_{2t}& \#b_{2t}\end{array}\right) \.\left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right). \l{imp} \f
The soft-and-hard (SH) boundary \cite{SH,SH1}
\e \#a_t\.\#E=0,\ \ \ \ \#a_t\.\#H=0, \l{SH}\f
and the generalized soft-and-hard (GSH) boundary \cite{GSH},
\e \#a_t\.\#E=0,\ \ \ \ \#b_t\.\#H=0, \f
are other special cases of the impedance boundary.
As examples of boundaries that are not special cases of \r{imp}, the DB boundary is defined by \cite{Rumsey,DB,259}
\e \#n\.\#D= \epsilon_o\#n\.\#E=0,\ \ \ \ \#n\.\#B = \mu_o\#n\.\#H=0, \l{DB}\f
while the soft-and-hard/DB (SHDB) boundary \cite{SHDB} generalizes both the SH and the DB boundaries as
\e \left(\begin{array}{cc} \alpha\#n & \#a_t\\-\#a_t & \alpha\#n\end{array}\right) \.\left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0 \\ 0\end{array}\right). \l{SHDB}\f
A further generalization is the generalized soft-and-hard/DB (GSHDB) boundary \cite{GSHDB}, with
\e \left(\begin{array}{cc} a_{1n}\#n & \#b_{1t}\\ \#a_{2t}& b_{2n}\#n\end{array}\right)\.\left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right)=\left(\begin{array}{c} 0\\ 0\end{array}\right). \l{GSHDB}\f
A remarkable property of the GSHDB boundary and its special cases is that any plane wave can be decomposed in two parts reflecting from the GSHDB boundary as from respective PEC and PMC boundaries \cite{GSHDB}. The converse was shown in \cite{PIERL2016}: a GBC boundary required to have this property must actually equal a GSHDB boundary.
The form \r{GBC} of general boundary conditions is not unique, since the same boundary is defined by conditions obtained by multiplying the vector matrix by any scalar matrix,
\e \left(\begin{array}{cc} \#a_1 & \#b_1\\ \#a_2 & \#b_2\end{array}\right) \rightarrow \left(\begin{array}{cc} \alpha & \beta \\ \gamma & \delta\end{array}\right) \left(\begin{array}{cc} \#a_1 & \#b_1\\ \#a_2 & \#b_2\end{array}\right) , \f
with nonzero determinant $\alpha\delta-\beta\gamma\not=0$. A more unique form of general boundary conditions \r{GBC} can be written as
\e \#m\times\#E = \=W\.\eta_o\#H. \l{WH} \f
In fact, \r{WH} is equivalent to \r{GBC} for
\ea \#m &=& \#a_1\times\#a_2, \l{m}\\
\=W &=& \#a_1\#b_2-\#a_2\#b_1. \l{W}\fa
Here we must assume $\#m=\#a_1\times\#a_2\not=0$, which rules out a special case of \r{GBC}. The conditions of a boundary with $\#a_1\times\#a_2=0$ in \r{GBC} can be reduced to the form
\e \left(\begin{array}{cc} \#a & \#b_1\\ 0 & \#b\end{array}\right) \left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right), \l{m=0} \f
in terms of three vectors $\#a,\#b$ and $\#b_1$, see the Appendix. For the special case $\#b_1=0$, \r{m=0} is reduced to
\e \left(\begin{array}{cc} \#a & 0\\ 0 & \#b\end{array}\right) \left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right), \l{EH} \f
corresponding to what have been called conditions of the EH boundary in \cite{AP2017,GBC}.
From \r{WH} the dyadic $\=W$ must satisfy $\#m\.\=W=0$ and, hence, ${\rm det}\=W=0$. In the form \r{WH}, the general boundary conditions could be made unique by requiring an additional normalizing condition for the vector $\#m$. However, let us omit that for simplicity, whence the vector $\#m$ and the dyadic $\=W$ may be multiplied by an arbitrary scalar coefficient without changing the definition of the boundary.
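These two properties are easily checked numerically; in the following sketch, the four vectors are arbitrary examples and dyadics are represented as outer products:
\begin{verbatim}
# Check m.W = 0 and det W = 0 for m = a1 x a2, W = a1 b2 - a2 b1,
# with dyads represented as outer products; vectors are arbitrary.
import numpy as np

a1, a2 = np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.2])
b1, b2 = np.array([0.3, 0.1, 0.0]), np.array([0.0, 0.4, 1.0])

m = np.cross(a1, a2)
W = np.outer(a1, b2) - np.outer(a2, b1)

print(np.allclose(m @ W, 0.0))            # m . W = 0
print(np.isclose(np.linalg.det(W), 0.0))  # hence det W = 0
\end{verbatim}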
The form \r{WH} resembles that of the impedance boundary \r{imp}, which can alternatively be written as \cite{Methods}
\e \#E_t = \=Z_t\.(\#n\times\#H_t), \f
or, as
\e \#n\times\#E = -(\=Z_t\displaystyle{{}^\times}\llap{${}_\times$}\#n\#n)\.\#H, \f
in terms of what is normally called the impedance dyadic, $\=Z_t$. Because the vector $\#m$ is more general than the unit vector $\#n$, the form \r{WH} can be conceived as a generalization of the impedance-boundary conditions.
The dyadic $\=W$ can be decomposed in its symmetric and antisymmetric parts,
\e \=W = \=W_s + \=W_a, \f
satisfying
\e \=W{}_s^T = \=W_s,\ \ \ \=W{}_a^T = -\=W_a. \f
They are defined by
\e \=W_s = \frac{1}{2}( \#a_1\#b_2+\#b_2\#a_1-\#a_2\#b_1- \#b_1\#a_2), \l{Ws}\f
\e \=W_a= \frac{1}{2}( \#a_1\#b_2-\#b_2\#a_1-\#a_2\#b_1+ \#b_1\#a_2). \f
The antisymmetric part can be represented as \cite{Methods}
\e \=W_a = \#w\times\=I,\f
in terms of the vector
\e \#w = \frac{1}{2}(\#b_2\times\#a_1+\#a_2\times\#b_1). \l{w}\f
\section{Duality Transformation}
In its basic form, the duality transformation in electromagnetic theory makes use of the symmetry of the Maxwell equations. This allows interchanging electric and magnetic quantities, $\#E\rightarrow\#H$, $\#H\rightarrow-\#E$, while the total set of equations remains the same. In this form it was originally introduced by Heaviside \cite{Heaviside}. In its more complete form, it can be defined by \cite{Methods}
\e \left(\begin{array}{c} \#E_d \\ \eta_o\#H_d\end{array}\right) = \left(\begin{array}{cc} A & B\\ C & D\end{array}\right) \left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right), \l{dual}\f
where the four transformation parameters are assumed to satisfy
\e AD-BC=1. \f
The inverse transformation has the form
\e \left(\begin{array}{c} \#E \\ \eta_o\#H\end{array}\right) = \left(\begin{array}{cc} D& -B\\ -C & A\end{array}\right)\left(\begin{array}{c} \#E_d\\ \eta_o\#H_d\end{array}\right). \l{invdual}\f
In addition to electromagnetic fields, \r{dual} induces transformations of electromagnetic sources and of the parameters of electromagnetic media and boundaries. One can show that, requiring the simple isotropic medium with parameters $\epsilon_o,\mu_o$ to be invariant in the duality transformation, the parameters $A \cdots D$ must be chosen as \cite{GBC,234}
\e A=D=\cos\varphi,\ \ \ \ B=-C=\sin\varphi, \l{ADBC}\f
which leaves us with only one transformation parameter, $\varphi$. In the form \r{dual} with \r{ADBC}, the transformation was introduced by Larmor as the duality rotation \cite{Fushchich,Mignaco}.
Applying \r{invdual} and \r{ADBC}, the GBC conditions \r{GBC} are transformed to
\e \left(\begin{array}{cc} \#a_1 & \#b_1\\ \#a_2 & \#b_2\end{array}\right) \.\left(\begin{array}{cc} \cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{array}\right) \left(\begin{array}{c}\#E_d\\ \eta_o\#H_d\end{array}\right) = \left(\begin{array}{c} 0\\0\end{array}\right), \l{GBC2d} \f
which yields the set of dual boundary conditions,
\e \left(\begin{array}{cc} \#a_{1d} & \#b_{1d}\\ \#a_{2d} & \#b_{2d}\end{array}\right) \.\left(\begin{array}{c} \#E_d\\ \eta_o\#H_d\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right),\l{GDBd}\f
in terms of the dual set of vectors
\ea \#a_{1d} &=& \#a_1\cos\varphi + \#b_1\sin\varphi \l{a1d}\\
\#b_{1d} &=& -\#a_1\sin\varphi + \#b_1\cos\varphi \\
\#a_{2d} &=& \#a_2\cos\varphi +\#b_2\sin\varphi \\
\#b_{2d} &=& -\#a_2\sin\varphi + \#b_2\cos\varphi. \l{b2d}\fa
Applying the duality transformation to boundary conditions of the form \r{WH} yields
\e \#m_d\times\#E_d = \=W_d\.\eta_o\#H_d, \f
with
\ea \#m_d &=& \#a_{1d}\times\#a_{2d}\\
&=& (\#a_1\cos\varphi + \#b_1\sin\varphi)\times(\#a_2\cos\varphi + \#b_2\sin\varphi), \\
\=W_d &=& \#a_{1d}\#b_{2d}- \#a_{2d}\#b_{1d} \\
&=& (\#a_1\cos\varphi + \#b_1\sin\varphi)(\#b_2\cos\varphi -\#a_2\sin\varphi) \nonumber\\
&&-(\#a_2\cos\varphi + \#b_2\sin\varphi)(\#b_1\cos\varphi-\#a_1\sin\varphi). \fa
The dyadic $\=W_d$ can be expanded as
\e \=W_d = \=W\cos^2\varphi + \=W{}^T\sin^2\varphi + (\#a_1\times\#a_2- \#b_1\times\#b_2)\times\=I\ \cos\varphi\sin\varphi. \f
Remarkably, the symmetric part of $\=W_d$ equals that of $\=W$ in \r{Ws},
\e \=W_{ds} = \=W_s , \l{WdsWs}\f
while the antisymmetric part of $\=W_d$ written as
\e \=W_{da} = \#w_d\times\=I, \f
is related to \r{w} by
\e \#w_d = \#w\cos2\varphi+ \frac{1}{2}(\#a_1\times\#a_2- \#b_1\times\#b_2)\sin2\varphi.\f
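The invariance \r{WdsWs} of the symmetric part can be verified numerically; in the following sketch, the vectors and the angle $\varphi$ are arbitrary examples:
\begin{verbatim}
# Check that the symmetric part of W is invariant under the duality
# rotation (a1d ... b2d); vectors and phi are arbitrary examples.
import numpy as np

phi = 0.7
c, s = np.cos(phi), np.sin(phi)
a1, a2 = np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.2])
b1, b2 = np.array([0.3, 0.1, 0.0]), np.array([0.0, 0.4, 1.0])

a1d, b1d = c*a1 + s*b1, -s*a1 + c*b1
a2d, b2d = c*a2 + s*b2, -s*a2 + c*b2

W  = np.outer(a1,  b2)  - np.outer(a2,  b1)
Wd = np.outer(a1d, b2d) - np.outer(a2d, b1d)

sym = lambda X: 0.5 * (X + X.T)
print(np.allclose(sym(Wd), sym(W)))   # W_ds = W_s for any phi
\end{verbatim}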
\section{Self-Dual Boundary Conditions}
A number of special GBC boundaries have previously been shown to be self dual, i.e., invariant in the duality transformation. Such a property has engineering interest, because objects with certain geometric symmetry and self-dual boundary conditions appear invisible to monostatic radar \cite{AP09}. In fact, in such cases there is no back scattering from the object for any incident wave. In particular, the SHDB boundary and its special cases, the SH and DB boundaries, are known to be self dual \cite{GBC}. Also, two special cases of the PEMC boundary, with $M= \pm 1/j\eta_o$, have been shown to be self dual \cite{GBC}. The task of this paper is to define the most general class of GBC boundaries whose conditions are invariant in the duality transformation.
For a boundary defined by conditions of the form \r{WH} to be self dual, we must have the three conditions
\ea \#m_d&=&\alpha\#m, \l{mdAm}\\
\#w_d &=& \alpha\#w, \l{wdAw}\\
\=W_{ds} &=& \alpha\=W_s, \l{WsAWs} \fa
valid for some scalar $\alpha$. The conditions \r{mdAm}, \r{wdAw} and \r{WsAWs} can be respectively expanded as
\e \#a_1\times\#a_2(\cos^2\varphi-\alpha) + \#b_1\times\#b_2\sin^2\varphi + (\#a_1\times\#b_2-\#a_2\times\#b_1)\sin\varphi\cos\varphi =0. \l{1}\f
\e (\#a_1\times\#b_2-\#a_2\times\#b_1)(\cos2\varphi -\alpha) + (\#b_1\times\#b_2-\#a_1\times\#a_2)\sin2\varphi=0.\l{2} \f
\e (1-\alpha)(\#b_2\#a_1 + \#a_1\#b_2 -\#b_1\#a_2 - \#a_2\#b_1)=0. \l{3}\f
Because from \r{WdsWs} and \r{WsAWs} we have
\e (\alpha-1)\=W_s=0, \f
for the boundary to be self dual, either $\alpha=1$ or $\=W_s=0$ must be satisfied. Let us consider these two cases separately.
\subsection{Case 1, $\alpha=1$}
The condition \r{3} is now satisfied identically, while the conditions \r{1} and \r{2} become
\e (\#a_1\times\#a_2 - \#b_1\times\#b_2)\sin\varphi + (\#b_2\times\#a_1+\#a_2\times\#b_1)\cos\varphi =0, \l{1'}\f
\e -(\#b_2\times\#a_1 +\#a_2\times\#b_1)\sin\varphi + (\#a_1\times\#a_2- \#b_1\times\#b_2)\cos\varphi=0,\l{2'} \f
when excluding the identity transformation $\sin\varphi=0$. After successive elimination of the bracketed terms and a comparison with \r{m} and \r{w}, the conditions \r{1'} and \r{2'} yield
\ea \#b_1\times\#b_2 = \#a_1\times\#a_2&=& \#m, \l{b1b2m}\\
\#b_2\times\#a_1 +\#a_2\times\#b_1&=& 2\#w=0. \l{b2a1}\fa
Because of $\#w=0$, the case $\alpha=1$ requires that the dyadic $\=W$ be symmetric.
From \r{b1b2m} and \r{b2a1}, it follows that the four vectors $\#b_1,\#b_2,\#a_1,\#a_2$ must be coplanar. Assuming $\#a_1\times\#b_1\not=0$, we can expand
\e \left(\begin{array}{c} \#a_2\\ \#b_2\end{array}\right) = \left(\begin{array}{cc} A_1 & B_1\\ A_2 & B_2\end{array}\right) \left(\begin{array}{c} \#a_1 \\ \#b_1\end{array}\right) . \l{A1B1A2B2}\f
Substituting these in \r{b1b2m} and \r{b2a1}, we obtain the relations
\e A_2=-B_1,\ \ \ B_2=A_1. \f
The expansion \r{A1B1A2B2} now becomes
\e \left(\begin{array}{c} \#a_2\\ \#b_2\end{array}\right) = \left(\begin{array}{cc} A_1 & B_1\\ -B_1 & A_1\end{array}\right) \left(\begin{array}{c} \#a_1 \\ \#b_1\end{array}\right) , \f
whence we can write
\ea \#m &=& \#a_1\times\#a_2 = B_1\#a_1\times\#b_1,\\
\=W &=& \#a_1\#b_2-\#a_2\#b_1 = -B_1(\#a_1\#a_1+ \#b_1\#b_1), \fa
the latter of which has a symmetric form, as required. The boundary condition \r{WH} now becomes
\e (\#a_1\times \#b_1)\times\#E +(\#a_1\#a_1+ \#b_1\#b_1)\.\eta_o\#H=0.\f
Denoting for simplicity $\#a_1$ and $\#b_1$, respectively, by $\#a$ and $\#b$, this is equivalent to the special form of the GBC conditions \r{GBC},
\e \left(\begin{array}{cc} \#a & \#b\\ -\#b & \#a\end{array}\right) \left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right). \l{abcond}\f
To check the self-dual property of \r{abcond}, let us expand the dual boundary conditions \r{GDBd} in terms of \r{a1d} -- \r{b2d} as
$$ \left(\begin{array}{cc} \#a\cos\varphi + \#b\sin\varphi & -\#a\sin\varphi + \#b\cos\varphi \\
-\#b\cos\varphi +\#a\sin\varphi & \#b\sin\varphi + \#a\cos\varphi\end{array}\right) \left(\begin{array}{c} \#E_d\\ \eta_o\#H_d\end{array}\right) = $$
\e \left(\begin{array}{cc} \cos\varphi & -\sin\varphi\\ \sin\varphi & \cos\varphi\end{array}\right) \left(\begin{array}{cc} \#a & \#b\\ -\#b & \#a\end{array}\right) \left(\begin{array}{c} \#E_d\\ \eta_o\#H_d\end{array}\right) =\left(\begin{array}{c} 0\\ 0\end{array}\right). \f
Obviously, these are equivalent to the original boundary conditions \r{abcond}, whence the corresponding boundary is self dual for any parameter $\varphi$.
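The same check can be performed numerically by representing the condition matrix of \r{abcond} as a $2\times 6$ array acting on the stacked pair $(\#E,\eta_o\#H)$; in the sketch below, the vectors $\#a,\#b$ and the angle $\varphi$ are arbitrary examples:
\begin{verbatim}
# The condition matrix of the self-dual boundary, stored as a 2x6 array
# acting on the stacked pair (E, eta_o H); a, b, phi are examples.
import numpy as np

phi = 0.4
c, s = np.cos(phi), np.sin(phi)
a, b = np.array([0.2, 0.0, 1.0]), np.array([1.0, 0.5, 0.0])

M = np.block([[a, b], [-b, a]])    # rows: (a,b) and (-b,a) conditions
R = np.array([[c, -s], [s, c]])    # duality rotation of the field pair
Md = M @ np.kron(R, np.eye(3))     # condition matrix for the dual fields

print(np.allclose(Md, R @ M))      # dual conditions = R times originals
\end{verbatim}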
\subsection{Case 2, $\=W_s=0$}
For $\alpha\not=1$, the dyadic $\=W$ must be antisymmetric and it can be expressed as
\e \=W = \#w\times\=I. \f
Applying \r{m} and \r{W} we have
\e \#m\.\=W=0,\ \ \Rightarrow\ \ \#m\times\#w=0, \f
whence the two vectors must be linearly dependent,
\e \#w = \beta\#m. \f
In this case, the boundary condition \r{WH} becomes that of the generalized PEMC boundary \cite{GPEMC},
\e \#m\times(\#H+ M\#E)=0, \l{GPEMC}\f
with $M=-1/\beta\eta_o$. For $\#m=\#n$, \r{GPEMC} is equivalent to the PEMC boundary condition \r{PEMC}.
The dual boundary condition can be expanded as
\e\#m_d\times(\#E_d-\beta\eta_o\#H_d) = \alpha\#m\times((\cos\varphi+\beta\sin\varphi)\#E + (\sin\varphi-\beta\cos\varphi)\eta_o\#H)=0. \f
To be self-dual, this should be a multiple of
\e\#m\times(\#E-\beta\eta_o\#H), \f
which requires
\e \sin\varphi-\beta\cos\varphi = -\beta(\cos\varphi+\beta\sin\varphi),\ \ \ \Rightarrow\ \ \ \beta^2=-1. \f
This leaves us with the two possibilities,
\e \beta=\pm j. \f
In conclusion, in the case of antisymmetric dyadic $\=W$, the self-dual boundary condition must equal either of the two conditions
\e \#m\times(\#E \pm j\eta_o\#H)=0, \l{GPEMC1}\f
which can be called self-dual generalized PEMC boundaries with $M=\mp j/\eta_o$.
To check the self-dual property of this result, we can apply \r{invdual} to \r{GPEMC1}. In fact, the resulting condition
\e e^{\pm j\varphi}\#m\times(\#E_d \pm j\eta_o\#H_d)=0, \l{GPEMCd}\f
equals that of \r{GPEMC1} for dual fields.
\subsection{Case 3, $\#m=0$}
The representation \r{WH} is not valid for $\#m=\#a_1\times\#a_2=0$, in which case the boundary conditions are of the reduced form \r{m=0}. In such a case (see the Appendix), the self-dual condition requires that the vectors $\#b$ and $\#b_1$ be multiples of the vector $\#a$, whence the boundary conditions are reduced to the form
\e \left(\begin{array}{cc} \#a & 0\\ 0 & \#a\end{array}\right) \.\left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right). \l{SDEH} \f
A boundary defined by conditions \r{SDEH} can be called a self-dual EH boundary because \r{SDEH} is the self-dual special case of \r{EH}, the EH-boundary conditions. Because \r{SDEH} is also a special case of \r{abcond}, we can actually include Case 3 as a subcase in the broader class of Case 1 boundaries.
\section{Special Cases}
Let us consider some special cases of the self-dual boundary of Case 1, as defined by the conditions \r{abcond}.
\begin{itemize}
\item $\#a=a\#n,\ \#b= b\#n$
In this case, the conditions \r{abcond} are reduced to those of the DB boundary, \r{DB}.
\item $\#a=\#b=\#a_t$
This case corresponds to that of the SH boundary, \r{SH}.
\item $\#a=\#a_t$, $\#b=\#b_t$, with $\#a_t\times\#b_t=\#n$
The conditions \r{abcond} can now be written as
\e \#n\times\#E = -\eta_o(\#a_t\#a_t+ \#b_t\#b_t)\.\#H, \f
which has the form of impedance-boundary conditions, \r{imp}. The impedance dyadic is symmetric,
\e \=Z_t = \eta_o\#n\#n\displaystyle{{}^\times}\llap{${}_\times$}(\#a_t\#a_t+ \#b_t\#b_t),\f
and satisfies \cite{GBC}
\e {\rm det}_t\=Z_t = {\rm tr }\=Z{}_t^{(2)} = \eta_o^2. \f
The impedance dyadic consists of isotropic and anisotropic parts as \cite{GBC}
\e \=Z_t=Z_s\=I_t + \=Z_a, \f
with
\e \=I_t=\=I -\#n\#n, \f
and
\ea Z_s &=& \frac{1}{2}{\rm tr }\=Z_t= \frac{\eta_o}{2}(\#a_t\.\#a_t + \#b_t\.\#b_t), \\
\=Z_a &=& \=Z_t - Z_s\=I_t. \fa
An example of this type of self-dual surface is the perfect co-circular polarization reflector \cite{Jensen}.
\item $\#a=a_n\#n,\#b=\#b_t$
This case corresponds to that of the SHDB boundary, \r{SHDB}. A similar case is obtained for $\#a=\#a_t,\#b=b_n\#n$.
\item $\#a=a_n\#n + \#a_t$, $\#b= \#b_t$
Actually, the case in which one of the vectors $\#a$ and $\#b$ has only a tangential component equals, in general, the case \r{abcond}. In fact, successively eliminating $\#n\.\#E$ and $\#n\.\#H$ from \r{abcond}, the remaining equations, with redefined vectors $\#a$ and $\#b$, can be expressed in the form
\e \left(\begin{array}{cc} \#a & \#b_t\\ -\#b_t& \#a\end{array}\right) \.\left(\begin{array}{c} \#E\\ \eta_o\#H\end{array}\right)=\left(\begin{array}{c} 0\\ 0\end{array}\right), \l{SDBC}\f
where $\#b$ has no normal component.
\item $\#a=\#b$
This case corresponds to that of the self-dual EH boundary, defined by conditions of the form \r{SDEH}.
\end{itemize}
\section{Plane-Wave Reflection from Self-Dual Boundary}
Let us consider a plane wave incident to and reflecting from a boundary surface defined by the self-dual conditions \r{abcond}. The respective $\#k$ vectors
\ea \#k^i &=& -\#n k_n + \#k_t,\\
\#k^r &=& \#n k_n + \#k_t, \fa
satisfy
\e \#n\.\#k_t=0,\ \ \ \#k^i\.\#k^i=\#k^r\.\#k^r=k_o^2. \f
Following the analysis of \cite{GBC}, Sec.\ 5.4, we can express the reflected electric field in terms of the incident electric field as
\e \#E^r = \=R\.\#E^i, \l{ERE}\f
where the reflection dyadic has the form
\e \=R = \frac{1}{J^r}\#k^r\times(\#c_2^r\#c_1^i-\#c_1^r\#c_2^i), \l{R} \f
with
\ea \#c_1^{i,r} &=& \#k^{i,r}\times\#b_1 -k_o\#a_1= \#k^{i,r}\times\#b -k_o\#a, \\
\#c_2^{i,r} &=& \#k^{i,r}\times\#b_2 -k_o\#a_2= \#k^{i,r}\times\#a +k_o\#b, \fa
and
\e J^r = \#k^r\.\#c_1^r\times\#c_2^r. \l{Jr}\f
Now it is quite straightforward to show that, if the incident field is decomposed into two parts as
\e \#E^i = \#E_1^i+ \#E_2^i, \l{Ei12}\f
with the two parts defined by
\e \#E_1^i = E_1^i\#k^i\times\#c_1^i,\ \ \ \#E_2^i = E_2^i \#k^i\times\#c_2^i,\f
from \r{ERE} and \r{R}, the reflected field will be decomposed as
\e \#E^r = \#E_1^r + \#E_2^r, \f
with the parts defined by
\e \#E_1^r = E_1^r\#k^r\times\#c_1^r,\ \ \ \#E_2^r = E_2^r \#k^r\times\#c_2^r.\f
The four field vectors satisfy $\#c_1^i\.\#E_1^i=\#c_2^i\.\#E_2^i=0$ and $\#c_1^r\.\#E_1^r=\#c_2^r\.\#E_2^r=0$, while the scalar coefficients are obtained from the field vectors as
\e E_1^i= \frac{1}{J^i}\#c_2^i\.\#E^i,\ \ \ \ E_2^i = -\frac{1}{J^i}\#c_1^i\.\#E^i, \f
\e E_1^r= \frac{1}{J^r}\#c_2^r\.\#E^r,\ \ \ \ E_2^r = -\frac{1}{J^r}\#c_1^r\.\#E^r, \f
with
\e J^i = \#k^i\.\#c_1^i\times\#c_2^i. \f
Substituting \r{Ei12} and \r{R} in \r{ERE}, relations between the scalar field coefficients can be written as
\ea J^rE_1^r &=& -J^iE_1^i \l{E1rE1i}\\
J^rE_2^r &=& -J^iE_2^i.\l{E2rE2i} \fa
Thus, there is no cross coupling between waves 1 and 2 in reflection from the boundary, and the ratio of the scalar field coefficients is the same for both waves,
\e \frac{E_1^r}{E_1^i} = \frac{E_2^r}{E_2^i}= -\frac{J^i}{J^r}. \l{ref}\f
Here we have assumed $J^r\not=0$ and $J^i\not=0$.
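The decoupling \r{E1rE1i} and \r{E2rE2i} can be illustrated numerically as follows; the vectors $\#a,\#b$ and the tangential wave vector are arbitrary examples:
\begin{verbatim}
# Check of the reflection dyadic (R) and the decoupling of waves 1 and
# 2; the vectors a, b and the tangential wave vector k_t are examples.
import numpy as np

k_o = 1.0
n   = np.array([0.0, 0.0, 1.0])
k_t = np.array([0.3, 0.2, 0.0])
k_n = np.sqrt(k_o**2 - k_t @ k_t)
k_i, k_r = -n*k_n + k_t, n*k_n + k_t

a = np.array([0.2, 0.0, 1.0])
b = np.array([0.5, 0.3, 0.0])

c1 = lambda k: np.cross(k, b) - k_o*a
c2 = lambda k: np.cross(k, a) + k_o*b
Ji = k_i @ np.cross(c1(k_i), c2(k_i))
Jr = k_r @ np.cross(c1(k_r), c2(k_r))

def cross_op(v):  # matrix representation of v x ( . )
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

R = cross_op(k_r) @ (np.outer(c2(k_r), c1(k_i))
                     - np.outer(c1(k_r), c2(k_i))) / Jr

E1i = np.cross(k_i, c1(k_i))             # pure wave-1 incident field
print(np.allclose(R @ E1i, (-Ji/Jr) * np.cross(k_r, c1(k_r))))
\end{verbatim}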
In the case $J^i=0$, i.e., if the wave vector $\#k=\#k^i$ satisfies
\e J(\#k)= (\#k\times\#a)^2 + (\#k\times\#b)^2 + 2k_o\#k\.\#a\times\#b=0, \l{Jk} \f
from \r{E1rE1i} and \r{E2rE2i} it follows that $E_1^r=E_2^r=0$, that is, $\#E^r=0$. Thus, for such a wave vector, the incident wave $\#E^i,\#H^i$ satisfies the boundary conditions identically, i.e., it is matched to the boundary. Similarly, when the wave vector $\#k^r$ satisfies $J^r=0$, which equals \r{Jk} for $\#k=\#k^r$, the incident wave vanishes, $\#E^i=0$. In fact, from \r{R} it follows that, for $J^r=0$, the magnitude of the reflection dyadic becomes infinite, whence for finite $\#E^r$ we must have $\#E^i=0$. In this case the reflected wave $\#E^r,\#H^r$ is matched to the boundary. For any single matched plane wave it does not matter whether it is called ``incident'' or ``reflected''. Thus, the reflection coefficient \r{ref} may be either zero or infinite for the matched-wave cases. Equation \r{Jk} is called the dispersion equation for the matched waves of the boundary \cite{GBC}.
\section{Matched Waves for Self-Dual EH Boundary}
As a more concrete example, let us consider the self-dual EH boundary defined by \r{SDEH}, i.e., by \r{abcond} with $\#b=0$. In this case, we can write
\ea \#c_1^{i,r} &=& -k_o\#a, \\
\#c_2^{i,r} &=& \#k^{i,r}\times\#a, \fa
whence, from \r{Jr}, $J^r=-k_o(\#k^r\times\#a)^2$. The reflection dyadic \r{R} can be represented in the form
\e \=R = \frac{1}{(\#k^r\times\#a)^2}((\#k^r\times(\#k^r\times\#a))\#a -(\#k^r\times\#a)(\#k^i\times\#a)). \l{REH}\f
Let us now assume that $\#a=\#u$ is a real unit vector and that $(\#u,\#v,\#w)$ is a right-handed basis of real orthogonal unit vectors. Denoting
\e \#k = k_u\#u + \#k_\bot,\ \ \ \ \#k_\bot\.\#u=0, \f
the dispersion equation \r{Jk} now becomes
\e (\#k\times\#a)^2 = ((k_u\#u+ \#k_\bot)\times\#u)^2 =\#k_\bot\.\#k_\bot=0, \f
whence $\#k_\bot$ may be any circularly-polarized vector in the plane orthogonal to $\#u$. From
\e \#k\.\#k = k_u^2+ \#k_\bot\.\#k_\bot = k_u^2= k_o^2, \f
we obtain $k_u=k_o$ or $k_u=-k_o$.
Now any circularly-polarized vector orthogonal to $\#u$ can be represented as a multiple of one of the two circularly polarized vectors $\#u_+,\#u_-$ defined by \cite{Methods}
\e \#u_+ = \#v+ j\#w,\ \ \ \ \#u\times\#u_+ = -j\#u_+,\f
\e \#u_-= \#v- j\#w,\ \ \ \ \#u\times\#u_- = j\#u_-, \f
as
\e \#k_+{}_\bot = k_+\#u_+,\ \ \ \ \#k_-{}_\bot = k_-\#u_-. \f
Assuming $\#n\.\#u>0$, we can express the possible vectors $\#k^i$ satisfying $J^i=0$ by
\e \#k^i_+= -k_o\#u + k_+^i\#u_+,\ \ \ \ \#k^i_-= -k_o\#u + k_-^i\#u_-. \l{kipm} \f
Similarly, the possible vectors $\#k^r$ satisfying $J^r=0$ can be expressed as
\e \#k^r_+= k_o\#u + k_+^r\#u_+,\ \ \ \ \#k^r_-= k_o\#u + k_-^r\#u_-. \l{krpm}\f
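As a quick numerical check (with example choices for the triad and for the amplitude $k_+$), the complex wave vectors \r{kipm} indeed satisfy both the matched-wave condition $(\#k\times\#a)^2=0$ and the dispersion relation $\#k\.\#k=k_o^2$:
\begin{verbatim}
# Complex wave vector of (kipm) for the self-dual EH boundary (a = u):
# it satisfies (k x a)^2 = 0 and k.k = k_o^2; k_plus is an example value.
import numpy as np

k_o = 1.0
u, v, w = np.eye(3)            # right-handed orthonormal triad
u_plus = v + 1j*w              # circularly polarized vector
k_plus = 0.7                   # arbitrary amplitude choice
k_i = -k_o*u + k_plus*u_plus

ka = np.cross(k_i, u)
print(np.isclose(ka @ ka, 0.0))           # matched wave: (k x a)^2 = 0
print(np.isclose(k_i @ k_i, k_o**2))      # dispersion: k . k = k_o^2
\end{verbatim}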
We can now make use of the following relations between the fields at the boundary,
\ea \#E^r &=& \frac{1}{(\#k^r\times\#a)^2}\#k^r\times(\#k^r\times\#a\#a+\#a\#a\times\#k^i)\.\#E^i, \l{Er}\\
\#E^i &=& \frac{1}{(\#k^i\times\#a)^2}\#k^i\times(\#k^i\times\#a\#a+\#a\#a\times\#k^r)\.\#E^r. \l{Ei}\fa
The relation \r{Er} is obtained from \r{REH}, while \r{Ei} can be verified by eliminating $\#E^i$ from the two equations, which yields $\#E^r=\#E^r$.
The fields for the matched waves at the self-dual EH boundary can be found for the two cases from \r{Er} and \r{Ei} as follows:
\begin{enumerate}
\item For $J^i=0$ and $\#k^i=\#k^i_\pm= -k_o\#u + k_\pm^i\#u_\pm$, from \r{Er} we obtain $\#E^r_\pm=0$ for the incident fields
\ea \#E^i_\pm &=& \alpha_\pm^i \#u\times(\#u\times\#k^i_\pm) = E^i_\pm\#u_\pm \\
\eta_o\#H_\pm^i &=& \frac{1}{k_o}\#k_\pm^i\times\#E_\pm^i = -\#u\times E_\pm^i\#u_\pm = \pm j\#E_\pm^i. \fa
\item For $J^r=0$ and $\#k^r=\#k^r_\pm= k_o\#u + k_\pm^r\#u_\pm$, from \r{Ei} we obtain $\#E^i_\pm=0$ for the reflected fields
\ea \#E^r_\pm &=& \alpha_\pm^r \#u\times(\#u\times\#k^r_\pm) = E^r_\pm \#u_\pm\\
\eta_o\#H_\pm^r &=& \frac{1}{k_o}\#k_\pm^r\times\#E_\pm^r = \#u\times \#E^r_\pm =\mp j\#E_\pm^r. \fa
This "reflected" matched wave corresponds to the non-existing "incident" wave arriving at $\#k^i= (\=I-2\#n\#n)\.\#k^r$.
\end{enumerate}
In conclusion, the plane waves matched to the self-dual EH boundary are circularly polarized parallel to the plane orthogonal to $\#a=\#u$.
For $\#a=\#u=\#n$, the self-dual EH boundary is reduced to the DB boundary. It is known that the normally incident or reflected wave is matched to the DB boundary for any polarization \cite{GBC}.
\section{Reflection from Self-Dual EH Boundary}
Let us consider reflection from the self-dual EH boundary, defined by the conditions \r{SDEH}, applying the relation \r{Er}. Such a boundary has previously attracted research interest, and its realization in terms of a medium interface has been suggested \cite{Tedeschi}. The vector $\#a$ is assumed to be a real unit vector forming the angle $\alpha$ with the normal $\#n$ of the boundary, Figure~\ref{fig:geometria},
\e \#u = \#n\cos\alpha + \#u_1\sin\alpha. \f
Here, $\#u_1$ denotes a real unit vector tangential to the boundary and $\#u_2=\#n\times\#u_1$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1.0]
\draw[brown, line width=2pt] (-4,0) -- (5,0);
\node[black] at (4,.3) {\bf{SDEH}};
\draw[thick,green!30!black] [->] (0,0)--(0,2);
\node at (-1,1.5) {$\mathbf{k}^i$};
\node[blue] at (2.1,1.2) {$\mathbf{a}$};
\node at (-1.5,.5) {$\psi$};
\node at (.5,1) {$\alpha$};
\node[red!30!black] at (.4,2) {$\mathbf{n}$};
\draw[thick] [->] (-2,2)--(-.1,5/70);
\draw[blue, ultra thick] [->] (.05,.05)--(1.8,.9);
\draw[thick,dashed] [-] (-1,-0) .. controls (-.95,.3) .. (-.68,0.6);
\draw[thick,dashed] [-] (0,.7) .. controls (.27,.6) .. (.5,0.31);
\draw[thick][->] (0,0) -- (2,0);
\node at (2.2,.2) {$\mathbf{u}_1$};
\end{tikzpicture}
\caption{Plane wave incident on a self-dual EH boundary. The incident wave forms the elevation angle $\psi$ with the surface. $\#a$ is a real unit vector.}
\label{fig:geometria}
\end{figure}
Let us separate two main cases of incidence to the self-dual EH boundary: $\#k^i$ lies in a plane either parallel or perpendicular to the plane of $\#a$ and $\#n$.
\subsection{Parallel Incidence}
For the wave incident in the plane of $\#n$ and $\#a$ at an angle $\psi$, Figure~\ref{fig:geometria}, we can write
\ea \#k^i &=& k_o(-\#n\sin\psi + \#u_1 \cos\psi) = k_o\#u_2\times\#u_p^i, \\
\#k^r &=& k_o(\#n\sin\psi + \#u_1 \cos\psi)= k_o\#u_2\times\#u_p^r, \fa
where the two unit vectors
\ea \#u_p^i &=& \#n\cos\psi + \#u_1\sin\psi, \\
\#u_p^r &=& -\#n\cos\psi + \#u_1\sin\psi, \fa
satisfy
\e \#u_p^i\.\#k^i=\#u_p^i\.\#u_2=0,\ \ \ \#u_p^r\.\#k^r=\#u_p^r\.\#u_2=0. \f
Inserting these in \r{Er}, we obtain
\e \#E^r = -\frac{1}{\cos(\psi+\alpha)}\big(-\#u_p^r(\#n\cos\alpha +\#u_1\sin\alpha)+ \#u_2\#u_2\cos(\psi-\alpha)\big)\.\#E^i. \l{ErEi}\f
Expanding the incident and reflected fields as
\ea \#E^i&=& \#u_p^iE^i_p + \#u_2E^i_2,\l{Eip2} \\
\#E^r&=& \#u_p^rE^r_p + \#u_2E^r_2, \fa
in terms of their components $E_p^i,E_p^r$ and $E^i_2,E^r_2$, respectively parallel and perpendicular to the plane of $\#a$ and $\#k^i$, the expression for the reflected field \r{ErEi} takes the simple form
\e \#E^r = \frac{\cos(\psi-\alpha)}{\cos(\psi+\alpha)}(\#u_p^r E_p^i - \#u_2E_2^i). \f
Because the field magnitudes obey the relations
\e E_p^r = R_p E_p^i,\ \ \ \ E_2^r = R_2 E_2^i,\ \ \ \ R_p=-R_2=\frac{\cos(\psi-\alpha)}{\cos(\psi+\alpha)}, \l{129}\f
the reflection coefficient has the same magnitude for the two polarizations.
For $\#a=\#n$ we have $\alpha=0$, in which case the self-dual EH boundary equals the DB boundary. In this case, the parallel polarization is reflected as from the PMC boundary ($R_p=1$), and the perpendicular polarization as from the PEC boundary ($R_2=-1$).
For $\psi=\pi/2+\alpha$ we have $\#E^r=0$, and for $\psi=\pi/2-\alpha$ we have $\#E^i=0$; these correspond to the two cases of matched waves $J^i=0$ and $J^r=0$, respectively.
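As a quick numerical sanity check of \r{129} (not part of the derivation), the zero and the singularity of the reflection coefficient can be located with a few lines of Python, here with an arbitrary choice of $\alpha$:
\begin{verbatim}
import numpy as np

alpha = np.pi / 3                   # angle between a and n

def Rp(psi):                        # parallel-polarization coefficient (129)
    return np.cos(psi - alpha) / np.cos(psi + alpha)

print(Rp(np.pi/2 + alpha))          # ~0  : matched incident wave, E^r = 0
print(Rp(np.pi/2 - alpha - 1e-6))   # huge: matched reflected wave, R -> inf
print(Rp(0.0))                      # = 1 at grazing incidence for any alpha
\end{verbatim}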
\subsection{Perpendicular Incidence}
For the wave incident with $\#k^i$ in the plane normal to the vector $\#u_1$, we have
\ea \#k^i &=& k_o(-\#n\sin\psi + \#u_2 \cos\psi), \\
\#k^r &=& k_o(\#n\sin\psi + \#u_2 \cos\psi). \fa
Because the wave vectors satisfy the property
\e (\#k^i\times\#a)^2 = (\#k^r\times\#a)^2 = k_o^2(\cos^2\psi + \sin^2\psi\sin^2\alpha), \l{kikru}\f
there are no matched waves for real $\psi$ values if $\alpha$ has a nonzero real value.
Let us expand the two fields as
\ea \#E^i &=& (\#k^i\times\#a)A^i + \#k^i\times(\#k^i\times\#a)B^i,\l{EiAi} \\
\#E^r &=& (\#k^r\times\#a)A^r + \#k^r\times(\#k^r\times\#a)B^r. \l{ErAr}\fa
The coefficients can be found from
\ea A^{i,r} &=& \frac{\#k^{i,r}\times\#a}{(\#k^{i,r}\times\#a)^2}\.\#E^{i,r} = -\frac{k_o\#a\.\eta_o\#H^{i,r}}{(\#k^{i,r}\times\#a)^2}, \\
B^{i,r} &=& \frac{\#k^{i,r}\times(\#k^{i,r}\times\#a)}{k_o^2(\#k^{i,r}\times\#a)^2}\.\#E^{i,r} = -\frac{\#a\.\#E^{i,r}}{(\#k^{i,r}\times\#a)^2}. \fa
Substituting \r{EiAi} in \r{Er} and taking \r{kikru} into account, we can expand
\ea \#E^r &=& \frac{1}{(\#k^r\times\#a)^2}\#k^r\times(\#k^r\times\#a\#a-\#a\#k^i\times\#a)\.\#E^i \nonumber\\
&=&\frac{1}{(\#k^i\times\#a)^2}(\#k^r\times(\#k^r\times\#a)(-(\#k^i\times\#a)^2B^i)-(\#k^r\times\#a)(\#k^i\times\#a)^2A^i) \nonumber\\
&=& -(\#k^r\times\#a)A^i-\#k^r\times(\#k^r\times\#a)B^i. \l{Erexp} \fa
Comparing with the expansion \r{ErAr}, we finally obtain the simple relations between the field coefficients,
\e A^r=-A^i,\ \ \ \ B^r=-B^i. \l{ArAiBrBi}\f
These relations are independent of the polarization of the incident field $\#E^i$. Because we can write
\e \#E^r\.\#E^r = (\#k^r\times\#a)^2A^{i2} + k_o^2(\#k^r\times\#a)^2 B^{i2} =\#E^i\.\#E^i, \f
for real $\#k^i$, the magnitude of the reflected field equals that of the incident field for all angles of incidence. This effect is demonstrated in Figure \ref{fig:numerics}.
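The angle-independent reflection magnitude can also be checked numerically from the dyadic relation \r{Er}. A minimal Python sketch with $k_o=1$ and arbitrary angles and polarization coefficients:
\begin{verbatim}
import numpy as np

n, u1, u2 = np.eye(3)[2], np.eye(3)[0], np.eye(3)[1]  # u2 = n x u1
alpha, psi = np.pi/3, 0.7
a = np.cos(alpha)*n + np.sin(alpha)*u1

ki = -np.sin(psi)*n + np.cos(psi)*u2                  # perpendicular incidence
kr = +np.sin(psi)*n + np.cos(psi)*u2

def reflect(ki, kr, a, Ei):
    # E^r = k^r x (k^r x a a + a a x k^i).E^i / (k^r x a)^2
    kra = np.cross(kr, a)
    num = np.cross(kr, kra)*np.dot(a, Ei) + kra*np.dot(np.cross(a, ki), Ei)
    return num / np.dot(kra, kra)

Ei = 1.3*np.cross(ki, a) + 0.4*np.cross(ki, np.cross(ki, a))  # k^i.E^i = 0
Er = reflect(ki, kr, a, Ei)
print(np.dot(Er, Er) / np.dot(Ei, Ei))                # -> 1.0 for any psi
\end{verbatim}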
\subsection{Numerical Example}
As an example of the characteristics of a self-dual EH boundary, in the following, reflection amplitudes of plane waves are computed for different incidence angles and polarizations. The vector $\#a$ defining the boundary according to \r{SDEH} forms the angle $\alpha=\pi/3=60^\circ$ with the normal $\#n$, as shown in Figure~\ref{fig:geometria}.
Figure~\ref{fig:numerics} displays the magnitude of the reflection coefficient as a function of the elevation angle $\psi$, defined in Figure~\ref{fig:geometria}. The magnitude is computed for two planes of incidence: the plane spanned by $\mathbf n$ and $\mathbf a$ (the plane of Figure~\ref{fig:geometria}, solid blue line), and the plane perpendicular to it (the plane of $\#n$ and $\#n\times\#a$, dashed orange line).
\begin{figure}
\centerline{\includegraphics[width=5cm]{SD1.png}
\hspace{10pt}\includegraphics[width=6cm]{SD2.png}}
\caption{The magnitude of the reflection coefficient
$|R|$ (absolute and logarithmic) for a plane wave incident to a planar self-dual EH boundary
defined by a real vector $\mathbf a$ which forms the angle $\pi/3$ with the normal of the boundary.
Solid blue line: $\mathbf k^i$ lies in the plane parallel to the vectors $\mathbf n$ and $\mathbf a$ and makes the angle $\psi$ with the boundary (see Figure~\ref{fig:geometria}). Matched wave conditions appear for $\psi=5\pi/6$ (matched incident wave) and for $\psi=\pi/6$ (matched reflected wave). Dashed orange line: the reflection coefficient is $|R|=1$ for all incidence angles in the plane parallel to vectors
$\mathbf n$ and $\mathbf n \times\mathbf a$. The value $|R|$ is independent of the polarization of the wave in both cases.
}
\label{fig:numerics}
\end{figure}
The results show the extremely strong dependence of the response on the direction of the incident wave. While in the plane perpendicular to that of Figure~\ref{fig:geometria} the reflection satisfies $|R|=1$ (meaning that all power is reflected), the angular dependence in the $\mathbf n$-$\mathbf a$ plane contains a zero and an infinity. These are the two matched-wave cases. For the elevation angle $\psi= \pi/2+\alpha=5\pi/6$, the plane wave is incident from the direction of $\mathbf a$, whence $\#a\.\#E=\#a\.\#H=0$. Because the boundary conditions are satisfied, the wave is matched, and no reflection is generated. Likewise, for the case $\psi=\pi/2-\alpha=\pi/6$, the reflected wave is matched, leading to a singularity in the reflection coefficient.
As shown in Equations~\r{129} and \r{ArAiBrBi}, the magnitude of the reflection coefficients is independent of the polarization of the incident wave. For the dashed orange line with full reflection independently of elevation angle, the reflection coefficient is $R=-1$ for a perpendicularly (TE) polarized wave while for the parallel-polarized (TM) incidence, the reflection coefficient is $R=+1$. This behavior (PEC-like for TE-polarization and PMC-like for TM-polarization) is exactly the same as for a DB boundary \cite{DB,259}. It is worth noting that these two polarizations are the eigenpolarizations in reflection: for an arbitrary incident wave, the polarization changes in general but nevertheless the reflection amplitude remains at $|R|=1$.
In the other plane (the plane of Figure~\ref{fig:geometria}, solid blue curve), the reflection magnitude is likewise polarization-independent. In fact, the same observation applies to any incidence direction, not necessarily in these two planes: the magnitude of the reflection coefficient is independent of the polarization state of the wave.
\section{Conclusion}
It has been shown in this study that the possible self-dual electromagnetic boundaries, invariant in duality transformations, fall into two classes: Case 1, certain generalizations of the soft-and-hard/DB (SHDB) boundaries, defined by conditions of the form \r{abcond}, and Case 2, the self-dual generalized PEMC (GPEMC) boundaries, defined by conditions of the form \r{GPEMC}. Plane-wave reflection and matched waves associated with these boundaries have been analyzed, and numerical examples were computed for the self-dual EH boundary, which belongs to Case 1.
\section*{Appendix: Self-Dual EH Boundary}
Let us consider in more detail the condition
\e \#m = \#a_1\times\#a_2 =0, \f
which makes the representation \r{WH} invalid, and find the corresponding self-dual boundary conditions. Here we can distinguish three cases:
\begin{itemize}
\item $\#a_1=\#a_2=0$.
In this case, \r{GBC} yields
\e \#b_1\.\#H=\#b_2\.\#H=0, \f
i.e., conditions of the H-boundary \cite{GBC}, which are not self-dual.
\item $\#a_1\not=0$ and $\#a_2=0$ (the converse case can be handled similarly).
The boundary conditions \r{GBC} can then be written in the form (denoting $\#a=\#a_1$ and $\#b=\#b_2$)
\ea \#a\.\#E + \#b_1\.\eta_o\#H &=& 0, \l{b1H}\\
\#b\.\eta_o\#H &=& 0. \l{bH}\fa
\item $\alpha_1\#a_1+\alpha_2\#a_2=0$ with $\alpha_1\not=0,\ \alpha_2\not=0$.
In this case, after elimination, \r{GBC} can be reduced to the previous form \r{b1H} and \r{bH}.
\end{itemize}
The self-dual conditions for a boundary defined by \r{b1H} and \r{bH} can be found by requiring that the dual set of vectors \r{a1d} - \r{b2d} satisfy
\e \left(\begin{array}{cc} \#a\cos\varphi + \#b_1\sin\varphi & -\#a\sin\varphi + \#b_1\cos\varphi\\ \#b\sin\varphi & \#b\cos\varphi\end{array}\right) = \left(\begin{array}{cc} \alpha & \beta\\ \gamma & \delta\end{array}\right) \left(\begin{array}{cc} \#a & \#b_1\\ 0 & \#b \end{array}\right), \f
for some scalars $\alpha,\ldots,\delta$. These conditions can be written as
\ea \#a\cos\varphi + \#b_1\sin\varphi &=& \alpha\#a,\l{ab1} \\
\#b\sin\varphi &=& \gamma\#a, \l{bsga}\\
-\#a\sin\varphi + \#b_1\cos\varphi &=& \alpha\#b_1 + \beta\#b, \\
\#b\cos\varphi &=& \gamma\#b_1 + \delta\#b. \fa
Since $\sin\varphi=0$ corresponds to the identity transformation, and $\#b=0$ to incomplete boundary conditions, these possibilities can be neglected. From \r{bsga} and \r{ab1} we have, respectively,
\ea \#b&=&\frac{\gamma}{\sin\varphi}\#a, \l{bga}\\
\#b_1 &=& \frac{\alpha-\cos\varphi}{\sin\varphi}\#a. \fa
Since $\#b$ is a multiple of $\#a$, and $\#b_1$ is a multiple of $\#a$ or zero, in the self-dual case, the boundary conditions \r{b1H}, \r{bH} are reduced to the form \r{SDEH}, corresponding to those of the self-dual EH boundary.
\section{Introduction}
An important consideration when creating a convolutional neural network (CNN) model is the number of filters required at every layer. The Neocognitron implementation, for example, keeps an equal number of filters for each layer in the model \cite{fukushima1980neocognitron}. A very common practice has been to use a bipyramidal architecture: the number of filters across the different layers is usually increased as the size of the feature maps decreases. This pattern was first proposed in \cite{lecun1998gradient} with the introduction of LeNet and can be observed in a diverse set of models such as VGG\cite{simonyan2014very}, ResNet\cite{he2016deep} and MobileNet\cite{howard2017mobilenets}. Even models obtained from automatic model discovery, like NASNet \cite{zoph2017learning}, follow this principle, since neural architecture search methods are mainly formulated to search for layers and connections while the number of filters in each layer remains fixed. The motivation behind this progressive increase in the number of kernels is to compensate for a possible loss of representation caused by the spatial resolution reduction \cite{lecun1998gradient}. In practice it improves performance by keeping a constant number of operations in each layer \cite{chu2014analysis}. It remains unknown whether this pyramidal distribution of filters is also beneficial to aspects of model performance other than the number of operations.
\begin{figure*}
\begin{multicols}{5}
\includegraphics[width=0.9\linewidth]{filters_model_base}\par
\includegraphics[width=0.9\linewidth]{filters_model_uniform}\par
\includegraphics[width=0.9\linewidth]{filters_model_reverse-base}\par
\includegraphics[width=0.9\linewidth]{filters_model_quadratic}\par
\includegraphics[width=0.9\linewidth]{filters_model_negative-quadratic}\par
\end{multicols}
\caption{Filters per layer using the proposed templates for filter redistribution in a VGG style model. The base distribution, which is the original distribution, shows the common design of growing the number of filters as the resolution of the feature maps decreases in deeper layers. Although the total number of filters is kept constant after applying the templates, changes in the filter distribution induce different effects on performance and resource consumption.}
\label{fig:templates}
\end{figure*}
The contribution of this paper is to challenge the widely used design of increasing filters in convolutional neural models by applying a small set of diverse filter distributions, called templates, to existing network designs. Experimental evidence shows that simple changes to the pyramidal distribution of filters in convolutional network models lead to improvements in accuracy, number of parameters or memory footprint. We highlight that the most recent models, whose filter distributions have been tuned in more detail, show accuracy that is resilient to changes in the filter distribution, a phenomenon that requires further research and explanation.
Experiments in this document are exploratory. We use an equal number of filters in all templates without constraining the effects of the redistribution. We extend this work in \cite{izquierdo2021filter}, where templates are evaluated with more rigorous experiments that keep FLOPs similar to those of the original model and then compare resource consumption.
\section{Related Work}\label{chap:related}
The process of designing a neural network is a task that has largely been based on experience and experimentation, which consumes a lot of time and computational resources. Of note are reference models such as VGG\cite{simonyan2014very}, ResNet\cite{he2016deep}, Inception\cite{szegedy2015going} and MobileNet\cite{howard2017mobilenets} that have been developed with significant use of heuristics. Even with automatic methods, one key feature that has constantly been adopted is the manual selection of the number of filters in each layer of the final model. Filters are set in such a way that their number increases as the layers go deeper, differing from the original Neocognitron design\cite{fukushima1980neocognitron}.
With the increase in the use of Neural Networks, and particularly Convolutional Networks for computer vision problems, a mechanism to automatically find the best architecture has become a requirement in the field of Deep Learning. One of the biggest challenges in automatic architecture design is that the search space for CNN architectures is very large\cite{ren2020comprehensive}. Two fields have emerged from the problem: \textit{i)} neural architecture search (NAS), which develops mechanisms for searching for the best combination of layers\cite{zoph2018learning}, and \textit{ii)} channel number search (CNS), which looks for the best distribution of filters given an initial architecture\cite{dong2019network,wang2020revisiting}.
Pruning methods can be seen as a special case of CNS in which there is the assumption that the weights obtained at the end of the training process of the original model are important to the pruning method\cite{frankle2018lottery}.
In pruning methods, searching involves training models for several iterations to select the correct weights to remove \cite{frankle2018lottery,he2019filter,you2019gate}, or at least increasing the computation during training when performing joint training and search \cite{leclerc2018smallify,li2019learning}. In \cite{liu2018rethinking} it is suggested that the accuracy obtained by pruning techniques can be reached by removing filters to fit a certain resource budget and training from scratch.
Our work for finding an appropriate distribution of filters relates to \cite{gordon2018morphnet} in the sense that their method is not restricted to reducing filters but can also increase them to see if the changes are beneficial. Our approach differs, however, because it only requires the model to be trained in the final stage, after manually making predefined changes to the number of filters using the redistribution templates.
\section{Filter distribution templates}\label{chap:methods}
While most neural network architectures show an incremental distribution of filters, recent pruning methods such as \cite{gordon2018morphnet,leclerc2018smallify} have shown different filter distribution patterns emerging when reducing models like VGG, patterns that defy the notion of the pyramidal design as the best distribution for a model. This is a motivational insight into what other distributions can and should be considered when designing models. On one side, the combinatorial space of distributions makes this a challenging exploration; on the other, it importantly highlights the need to pursue such exploration if gains in accuracy and overall performance can be made.
In this work, rather than attempting to find the optimal filter distribution with expensive automatic pruning or growing techniques, we propose to first adjust the filters of a convolutional network model via a small number of pre-defined templates. These templates, such as those depicted in Figure~\ref{fig:templates}, are inspired by existing models that have already been found to perform well and are thus candidates that could be beneficial for model performance beyond the number of operations. Performance criteria such as accuracy, memory footprint and inference time are arguably as important as the number of operations required.
In particular, we adopt as one template a distribution with a fixed number of filters, as in the original Neocognitron design, but also other templates inspired by the patterns found in \cite{gordon2018morphnet}, where two behaviours are present in different blocks of the resulting ResNet101 model: 1) filters agglomerate in the centre of the block, and 2) filters are reduced in the centre of the block. A pattern with more filters in the centre of a VGG model is also shown in \cite{leclerc2018smallify,you2019gate}. Based on these observations we define the templates used in this work.
Different distributions with the same number of filters can lead to different numbers of parameters (e.g. weights) and different memory or computational requirements (e.g. GPU modules). In the toy example in Figure \ref{fig:templates_resource_change}, both models have the same number of filters but the one on the right has fewer parameters and lower compute requirements at the cost of a larger memory footprint. This example highlights the compromises that filter distributions can offer for the design and operation of network models.
\begin{figure}
\centering
\begin{multicols}{2}
\includegraphics[width=1.0\linewidth]{resource_changes_base}\par
\includegraphics[width=1.0\linewidth]{resource_changes_uniform}\par
\end{multicols}
\caption{A toy example showing how two different templates with the same number of filters produce a variety of effects on parameters, memory, inference time and flops. Layers (rectangles) contain, in total, an equal number of filters (circles) for both templates. Lines between filters represent parameters; red squares are by-channel feature maps, which reside in memory jointly with parameters. Flops are produced by shifting filters along feature maps. Inference time is affected by flops and by the number of transfers between memory and GPU modules, indicated by blue arrows and here limited to two simultaneous transfers. The diagram assumes filters of equal sizes and pooling between layers. Differences are scaled up in real models counting thousands of filters.}
\label{fig:templates_resource_change}
\end{figure}
We define a convolutional neural network base model as a set of numbered layers $\textsl{L}=1,... ,D+1$, each with $f_{l}$ filters in layer $l$, where $D+1$ is the final classification layer. The total number of filters that can be redistributed is given by $F = \sum_{l=1}^{D}f_{l}$. We want to test whether the common heuristic of distributing $F$ such that $f_{l+1} = 2f_{l}$ each time the feature map is halved is advantageous to the model over other distributions of $F$ when evaluating performance, memory footprint and inference time.
The number of filters in the final layer $D+1$ depends on the task and remains the same for all the templates; therefore it is not taken into account when computing the number of filters to redistribute. For architectures composed of blocks (e.g. Inception) we consider blocks as single layers and keep the number of filters within a block the same. As a result, a final Inception module marked with $f_{l}$ filters is set to that number of filters in each layer inside the module.
\textbf{Uniform Template}. The most simple distribution to evaluate is, as the original Neocognitron, an uniform distribution of filters. Computing the number of filters in an uniform distribution is straightforward, the new number in each layer is given by $f'_{l}= F / D \quad \forall l \in \left \{1,... ,D \right \}$.
\textbf{Reverse Template}. Another straight-forward transformation for the filter distribution adopted in this paper is reversing the number of filters in every layer. Our final model with this template is defined by the filters $f'_{l} = f_{D-l+1}$.
\textbf{Quadratic Template}. This distribution is characterised by a quadratic equation $f'_{l} = al^{2}+bl + c$ and consequently has a parabolic shape with the vertex at the middle layer. We set this layer to the minimal number of filters in the base model, $f_{min} = min\left ( f_{l} \right ), \; l \in \left \{1,... ,D \right \}$, so the number of filters is described by $f'_{D/2} = f_{min}$. We also require the maximum value to occur at both the initial and final convolutional layers, thus $f'_{1} = f'_{D}$.
To compute the new number of filters in each layer we solve the system of linear equations given by \textit{i)} the restriction of the total number of filters in $\sum_{l=1}^{D}\left ( f'_{l} \right ) = \sum_{l=1}^{D}\left ( al^{2}+bl + c \right ) = F$, that can be reduced to $\left ( \frac{D^{3}}{3} + \frac{D^{2}}{2} + \frac{D}{6} \right )a + \left ( \frac{D^{2}}{2} + \frac{D}{2} \right )b + Dc = F$, \textit{ii)} the equation produced by the value in the vertex $f'_{D/2} = \frac{D}{2}^{2}a+\frac{D}{2}b + c = f_{min}$ and \textit{iii)} the equality from the maximum values which reduces to $(D^2-1)a +(D-1)b = 0$.
\textbf{Negative Quadratic Template}. This is a parabola with the vertex at a maximum, that is, a negative quadratic curve. The equation is the same as for the previous template but the restrictions change. Instead of defining a value at the vertex, $f'_{l}$ at the initial and final convolutional layers is set to the minimal number of filters in the base model, $f'_{l} = f_{min}, \; l \in \left \{1,D \right \}$.
The number of filters in each layer is computed again with a system of equations specified by \textit{i)}, the restriction of the total number of filters as in the quadratic template, and the two points already known in the first and last convolutional layers defined by \textit{ii)} $a + b + c = f_{min}$ and \textit{iii)} $D^{2}a + Db + c = f_{min}$.
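For reference, the four templates can be computed directly from a base distribution by solving the small linear systems above. The following NumPy sketch is illustrative; the rounding of the quadratic profiles is simplified, so layer totals may deviate from the exact budget $F$ by a few filters.
\begin{verbatim}
import numpy as np

def uniform_template(filters):
    D, F = len(filters), sum(filters)
    return [F // D] * D              # F/D filters in every layer

def reverse_template(filters):
    return filters[::-1]

def _parabola(rows, rhs, D):
    a, b, c = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return [round(a*l**2 + b*l + c) for l in range(1, D + 1)]

def quadratic_template(filters):
    D, F, fmin = len(filters), sum(filters), min(filters)
    rows = [[D**3/3 + D**2/2 + D/6, D**2/2 + D/2, D],  # budget (i)
            [(D/2)**2,              D/2,          1],  # vertex value (ii)
            [D**2 - 1,              D - 1,        0]]  # f'_1 = f'_D (iii)
    return _parabola(rows, [F, fmin, 0], D)

def negative_quadratic_template(filters):
    D, F, fmin = len(filters), sum(filters), min(filters)
    rows = [[D**3/3 + D**2/2 + D/6, D**2/2 + D/2, D],  # budget (i)
            [1,                     1,            1],  # f'_1 = f_min (ii)
            [D**2,                  D,            1]]  # f'_D = f_min (iii)
    return _parabola(rows, [F, fmin, fmin], D)
\end{verbatim}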
\section{Models Comparison Under Size, Memory Footprint and Speed}\label{chap:experiments}
In this section we investigate the effects of applying different templates to the distribution of kernels in convolutional neural network models (VGG, ResNet, Inception and MobileNet). We compare models on the basis of size, memory and speed on three popular datasets for classification tasks.\\
\begin{table}
\caption{Model performances with the original distribution and four templates for the same number of filters, evaluated on the CIFAR-10, CIFAR-100 and Tiny-Imagenet datasets. After filter redistribution, models surpass the base accuracy. Results show the average of three repetitions.}
\footnotesize
\begin{tabularx}{\linewidth}{|c|C|C|C|C|C|}
\hline
& & \multicolumn{4}{c|}{Redistribution Templates} \\ \cline{3-6}
Model & Base & Rev Base & Unif & Quad & Neg Quad \\ \hline
\multicolumn{6}{|l|}{CIFAR-10} \\ \hline
VGG-19 & 93.52 & \textbf{94.40} & 94.24 & 94.18 & 94.21 \\ \cline{1-6}
ResNet-50 & 94.70 & 95.17 & 95.08 & 94.41 & \textbf{95.23} \\ \cline{1-6}
Inception & 94.84 & 94.60 & 94.82 & \textbf{94.86} & 94.77 \\ \cline{1-6}
MobileNet & 89.52 & \textbf{91.35} & 91.28 & 89.98 & 91.04 \\ \hline
\multicolumn{6}{|l|}{CIFAR-100} \\ \hline
VGG-19 & 71.92 & \textbf{74.65} & 74.03 & 73.55 & 74.05 \\ \cline{1-6}
ResNet-50 & \textbf{77.09} & 74.80 & 76.65 & 75.71 & 76.76 \\ \cline{1-6}
Inception & 78.03 & 77.78 & \textbf{78.12} & 77.67 & 76.65 \\ \cline{1-6}
MobileNet & 65.08 & 66.39 & \textbf{68.71} & 63.89 & 67.05 \\ \hline
\multicolumn{6}{|l|}{Tiny-Imagenet} \\ \hline
VGG-19 & 54.62 & 57.73 & 56.68 & 54.73 & \textbf{59.50} \\ \cline{1-6}
ResNet-50 & \textbf{61.52} & 53.67 & 60.97 & 59.77 & 60.12 \\ \cline{1-6}
Inception & 54.80 & 55.24 & 55.78 & 54.97 & \textbf{55.87} \\ \cline{1-6}
MobileNet & 56.29 & 51.40 & \textbf{58.11} & 53.37 & 55.76 \\ \hline
\end{tabularx}
\label{tab:templates_accuracy}
\end{table}
\begin{table*}[hbt!]
\caption{Parameters, memory and inference time for selected models when applying our templates while keeping the same number of filters, evaluated on the CIFAR-10 (black) and Tiny-Imagenet (blue) datasets. Models are normally optimised for fast GPU operation; therefore the original base distribution has a good effect on speed, but the redistribution of filters induced by our templates improves the models on the other metrics. Memory footprint reported by CUDA.}
\footnotesize
\begin{tabularx}{\textwidth}{|l|C|r|>{\color{blue}}r|r|>{\color{blue}}r|r|>{\color{blue}}r|r|>{\color{blue}}r|r|>{\color{blue}}r|}
\hline
& & \multicolumn{2}{C|}{} & \multicolumn{8}{c|}{Redistribution Templates} \\ \cline{5-12}
Resource & Model & \multicolumn{2}{C|}{Base} & \multicolumn{2}{C|}{Reverse Base} & \multicolumn{2}{C|}{Uniform} & \multicolumn{2}{C|}{Quadratic} & \multicolumn{2}{C|}{Negative Quadratic} \\ \hline
& VGG-19 & 20.0 & 25.0 & 20.0 & 20.6 & 16.0 & \textbf{19.3} & \textbf{15.8} & 20.7 & 20.0 & 20.6 \\ \cline{2-12}
Parameters & ResNet-50 & 23.5 & 23.9 & 23.1 & 23.1 & \textbf{12.9} & \textbf{13.0} & 19.0 & 19.3 & 33.0 & 33.0 \\ \cline{2-12}
(Millions) & Inception & \textbf{6.2} & 19.2 & 6.7 & \textbf{10.0} & \textbf{6.2} & 12.7 & 7.2 & 18.7 & 7.0 & 10.1 \\ \cline{2-12}
& MobileNet & 3.2 & 3.4 & \textbf{2.2} & \textbf{2.4} & \textbf{2.2} & \textbf{2.4} & 3.2 & 3.3 & 2.4 & 2.6 \\ \hline
Memory & VGG-19 & \textbf{1.3} & 1.5 & 2.6 & 10.0 & 4.4 & 4.8 & 2.0 & 6.8 & 1.4 & 3.8 \\ \cline{2-12}
Footprint & ResNet-50 & 3.1 & 5.0 & 11.5 & 10.1 & 4.1 & 9.6 & 7.9 & 7.5 & \textbf{3.0} & 9.8 \\ \cline{2-12}
(GB/batch) & Inception & \textbf{1.5} & 5.8 & 3.1 & 10.8 & 1.7 & 6.7 & 2.2 & 8.6 & 1.6 & 5.9 \\ \cline{2-12}
& MobileNet & 2.5 & 2.5 & 5.1 & 5.1 & 1.5 & 5.9 & 6.0 & 4.8 & \textbf{1.0} & \textbf{1.9} \\ \hline
Inference & VGG-19 & \textbf{3.0} & 4.9 & 8.2 & 4.1 & 5.3 & 4.2 & 7.5 & 4.6 & 7.3 & \textbf{3.5} \\ \cline{2-12}
Time & ResNet-50 & 46.4 & 13.3 & 61.0 & 12.8 & \textbf{23.4} & 12.8 & 59.0 & \textbf{11.0} & 47.6 & 29.9 \\ \cline{2-12}
(ms/batch) & Inception & 28.5 & 24.0 & 54.9 & 21.4 & 34.3 & 28.3 & 25.2 & \textbf{18.3} & \textbf{24.3} & 31.4 \\ \cline{2-12}
& MobileNet & \textbf{3.8} & 5.8 & 6.8 & 6.7 & 4.3 & 9.7 & 7.4 & 7.3 & 4.9 & \textbf{5.3} \\ \hline
\end{tabularx}
\label{tab:templates_parameters}
\end{table*}
\subsection*{Datasets and Models}
We trained over three datasets traditionally used for convolutional network evaluation: CIFAR-10, CIFAR-100 \cite{krizhevsky2009learning} and Tiny-Imagenet \cite{le2015tiny}. The first two datasets contain sets of 50,000 and 10,000 colour images for training and validation respectively, with a resolution of $32\times32$. Tiny-Imagenet is a reduced version of the original Imagenet dataset with only 200 classes and images with a resolution of $64\times64$ pixels.
We evaluate some of the most popular CNN models: VGG\cite{simonyan2014very}, ResNet\cite{he2016deep}, Inception\cite{szegedy2015going} and MobileNet\cite{howard2017mobilenets}; which represent some of the highest performing CNNs on the ImageNet challenge in previous years \cite{russakovsky2015imagenet}.
\subsection*{Implementation Details}
Models are fed with images augmented with the common techniques of padding, random cropping and horizontal flipping. Our experiments were run on an NVidia Titan X Pascal 12GB GPU with the batch size set to 128.
All convolutional models, with and without templates, were trained for 160 epochs under the same conditions. Therefore, there is some margin for improving the accuracy of each distribution by performing individual hyperparameter tuning \cite{mittal2020hyperstar,li2017hyperband}. We used stochastic gradient descent (SGD) with a weight decay of 1e-4, momentum of 0.9 and a scheduled learning rate starting at 0.1 for the first 80 epochs, 0.01 for the next 40 epochs and finally 0.001 for the remaining epochs.
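The learning-rate schedule reduces to the following sketch:
\begin{verbatim}
def learning_rate(epoch):
    # Step schedule used for all models and templates (160 epochs).
    if epoch < 80:
        return 0.1
    if epoch < 120:
        return 0.01
    return 0.001
\end{verbatim}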
\subsection*{Template effect over baseline models}
We conducted an experiment to test our proposed templates on the selected architectures. Table \ref{tab:templates_accuracy} shows VGG, Inception and MobileNet accuracies improving on all datasets when templates are applied. Being complex architectures, ResNet and Inception present the highest accuracy in general. A surprising finding is that in both models the difference in accuracy between templates is less than 2.3\% despite the drastic modifications the models undergo after the change of filter distribution. Models that share a sequential classical architecture, such as VGG and MobileNet, show a better improvement when using templates on Tiny-Imagenet. A remarkable accuracy improvement of 4.88 percentage points is achieved in VGG.
When analysing resource consumption (Table \ref{tab:templates_parameters}), we find that each template affects each model differently. The Reverse-Base, Uniform and Quadratic templates show some reductions in the number of parameters, while the Negative Quadratic template reduces the memory usage. Inference time is negatively affected by most of the templates. This is an expected result, as the original models are designed to perform well on the GPU. The Inception model shows an improvement in speed, with a 14\% reduction in inference time with respect to the base model while maintaining comparable accuracy. ResNet is able to reduce inference time by 49\% at the cost of slightly lower accuracy than the base model.
\section{Conclusions}\label{chap:conclusions}
The most common design of convolutional neural networks when choosing the distribution of the number of filters is to start with a few and then to increase the number in deeper layers. We challenged this design by evaluating some architectures with a varied set of distributions on the CIFAR and Tiny-Imagenet datasets. Our results suggest that this pyramidal distribution is not necessarily the best option for obtaining the highest accuracy or even the highest parameter efficiency.
Our experiments show that models with the same number of filters but different distributions, produced by our templates, improve accuracy by up to 4.8 points for some model-task pairs. In terms of resource consumption, they can obtain a competitive accuracy compared to the original models while using fewer resources: up to 56\% fewer parameters and a memory footprint up to 60\% smaller. Results also reveal an interesting behaviour in the evaluated models: a strong resilience to changes in filter distribution. The variation in accuracy for all models after applying templates is less than 5\% despite the considerable modifications to the distributions and therefore to the original design. Our work overall offers insights to model designers, both automated and manual, for constructing more efficient models by introducing the idea of new distributions of filters for neural network models, and helps gather data to build understanding of the design process for model-task pairs.\\
\subsubsection*{Acknowledgments}
This work was partially supported by CONACYT and the Secretar\'ia de Educaci\'on P\'ublica, M\'exico.
\clearpage
\clearpage
\bibliographystyle{ieee_fullname}
\section{Introduction}\label{Section:introduction}
\IEEEPARstart{M}{agnetic} resonance imaging (MRI) is a non-radioactive, non-invasive imaging technique that provides multi-contrast images with excellent soft-tissue contrast. It has become an indispensable tool in medical diagnosis. However, the long acquisition time is one of its prominent limitations; it may introduce motion artifacts into the images and may not be practical in some application scenarios \cite{2017_review_PI}.
To alleviate the prolonged acquisition time, researchers have made great efforts. Shortening the scan time by acquiring only a small subset of the k-space data has emerged as an effective and widely-used approach \cite{2007_MRM_Sparse_MRI,2007_PSF_Liang}. The sparse sampling technique enables a significant reduction of scan time, but results in undersampling artifacts in the images due to sub-Nyquist sampling. Therefore, reconstruction models with prior information are exploited to enable promising results. Sparsity, low-rank, and sparsity plus low-rank are commonly-used priors. For instance, the sparse MRI methods enforce the sparsity of images in a transform domain, e.g., total variation \cite{2007_MRI_TV}, wavelets \cite{2007_MRM_Sparse_MRI}, contourlets \cite{2010_xiaobo}, and adaptive sparse transforms \cite{2011_Ravishankar,2014_PANO,2016_TBME_Zhifang,2016_MIA_Lai,2016_Data_driven_Ravishankar,2016_pFISTA}. The low-rank prior stems from the linear correlations among multiple MRI images \cite{2007_PSF_Liang} and has been utilized in, for example, dynamic MRI \cite{2012_TMI_Zhao,2015_LR_sparse_MRI}, parameter imaging \cite{2015_mapping,2016_MORASA}, and high-dimensional MRI \cite{2015_MIA_Zhang,2016_high_dim_MR_tensor}.
The low-rank structured matrix prior has been shown to permit promising results \cite{2014_LORAKS,2014_MRM_Lustig,2016_ALOHA,2016_Off_the_grid,2020_xinlin}. For instance, simultaneous autocalibrating and k-space estimation \cite{2014_MRM_Lustig} utilizes the strong correlation among receiver coils to enable the low-rankness of a block Hankel matrix constructed from the multi-coil k-space data. The low-rank matrix modeling of local k-space neighborhoods (LORAKS) \cite{2014_LORAKS} assumes that the image of interest has limited support and/or a slowly varying phase, which leads to a block-Hankel-like matrix being low-rank. The annihilating filter-based low-rank Hankel matrix approach (ALOHA) \cite{2016_ALOHA} and Ongie's method \cite{2016_Off_the_grid} consider the sparsity of the signal in the transform domain. Similarly, the simultaneous two-directional low-rankness with SPIRiT (STDLR-SPIRiT) method \cite{2020_xinlin} exploits the low-rank prior and enforces the self-consistency of k-space data, providing state-of-the-art performance.
The structured low-rank methods lift the signal into a higher-dimensional space to achieve low-rankness. However, the lifting brings some limitations, such as large memory consumption and lengthy computational time. For four-coil $256 \times 256$ data, the dimension of the block Hankel matrix reaches $2116 \times 54756$ when the pencil parameter equals $23$. Researchers have established methods aiming to tackle this problem. The generalized iterative reweighted annihilating filter algorithm \cite{2017_GIRAF} utilizes the convolutional structure of the lifted matrix so that it can perform the computations on the original signal instead of the lifted matrix. This method dramatically reduces the complexity and shortens the computational time. Auto-calibrated LORAKS (AC-LORAKS) \cite{2015_AC-LORAKS} exploits the auto-calibration signal (ACS) to reduce the computational time. Though the two methods gain considerable acceleration, they take advantage of specific priors that may not be available in other low-rank approaches. Thus, fast low-rank Hankel MRI reconstruction is still highly desirable.
In this work, we attempt to establish a framework that permits fast and reliable low-rank reconstructions. A separable low-rank reconstruction method is proposed by enforcing the low-rankness of multiple small Hankel matrices built from each row and each column of the MRI data, avoiding the construction of the high-dimensional low-rank matrix and thereby requiring much less memory and allowing faster computation. In addition, the self-consistency and conjugate symmetry of k-space data are considered to further reduce the reconstruction error. Furthermore, parameter-dimension information is introduced into the formulation so that the original model can not only be applied to non-parameter parallel imaging but also be extended to parameter imaging. The proposed method is expected to effectively utilize the low-rank property to yield fast reconstruction while maintaining a low reconstruction error.
The rest of this paper is organized as follows. We introduce some notations in Section \ref{Section:notations}, and related work in Section \ref{Section:relatedWork}. Section \ref{Section:method} presents the proposed models and numerical algorithms for non-parameter and parameter parallel imaging reconstructions and demonstrates the reconstruction performance. Section \ref{Section:discussion} discusses some factors that may affect the reconstruction error and speed. The conclusions are finally drawn in Section \ref{Section:conclusion}.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.4in]{./eps/Fig_mindmap.png}
\caption{Mind map of this work.}
\label{fig_mindmap}
\end{figure}
\section{Notations} \label{Section:notations}
For easier reading, we first list some notations used throughout this paper in Table \ref{Table_notation}.
\begin{table}[htbp]
\centering
\caption{Notations used in this paper.}
\begin{tabular}{m{1.5cm}m{6.5cm}}
\toprule
Symbol & Quantity \\
\midrule
$\mathbf{K}$ & desired multi-coil k-space data, $M$ (rows) $\times$ $N$ (columns) $\times$ $J$ (coils)\\
$\mathbf{X}$ & desired multi-coil image, $M$ (rows) $\times$ $N$ (columns) $\times$ $J$ (coils)\\
$\mathbf{x}^{\text{row}}_{m,j}$ & vector of the $m$-th row and the $j$-th coil of $\mathbf{X}$\\
$\mathbf{x}^{\text{col}}_{n,j}$ & vector of the $n$-th column and the $j$-th coil of $\mathbf{X}$ \\
$\mathbf{X}^{\text{Parameter}}$ & desired image of parameter imaging , $M$ (FE) $\times$ $N$ (PE) $\times$ $L$ (P) $\times$ $J$ (coils)\\
$\mathbf{X}^{\text{PE-P}}_{m}$ & desired image at the $m$-th position on FE dimension of parameter imaging, $N$ (PE) $\times$ $L$ (P) $\times$ $J$ (coils)\\
\bottomrule
\end{tabular}%
\begin{tablenotes}[flushleft]
\footnotesize
\item Note: the FE is the abbreviation of frequency encoding, the PE is the abbreviation of phase encoding, the capital letter P denotes the parameter dimension.
\end{tablenotes}
\label{Table_notation}%
\end{table}
\section{Related Work} \label{Section:relatedWork}
The STDLR \cite{2020_xinlin} simultaneously enforced the low-rank constraint of k-space data along both horizontal and vertical directions as follows:
\begin{equation}
\underset{\mathbf{K}}{\mathop{\min }}{{\left\| \mathbf{BW}_{\bot }^{2\text{D}}\mathbf{K} \right\|}_{*}} \! + \! {{\left\| \mathbf{BW}_{=}^{2\text{D}}\mathbf{K} \right\|}_{*}} \! + \! \frac{\lambda }{2} \! \left\| \text{vec}\left( \mathbf{Y} \!-\! \mathbf{UK} \right) \right\|_{F}^{2},
\end{equation}
where $\mathbf{K}\in {{\mathbb{C}}^{M\times N\times J}}$ denotes the targeted k-space data, and $\mathbf{Y}\in {{\mathbb{C}}^{M\times N\times J}}$ the acquired multi-coil k-space data with zero-filling at unacquired positions. The operator $\mathbf{B}$ transfers multi-coil data into a cascaded block Hankel matrix; $\mathbf{W}_{=}^{2\text{D}}$ and $\mathbf{W}_{\bot }^{2\text{D}}$ are weighting operators that perform weighting on the k-space data of each coil image, whose weights are the Fourier transforms of filters in the horizontal and vertical directions; $\mathbf{U}$ represents an operator that undersamples data, and $\text{vec}\left( \cdot \right)$ means arranging a matrix or a tensor into a vector.
The STDLR explores the simultaneous horizontal and vertical directional low-rankness of the k-space data, reducing the image reconstruction errors. Compared to the state-of-the-art method ALOHA, STDLR achieves lower reconstruction errors \cite{2020_xinlin}. However, the block Hankel matrix becomes very large, leading to a long computational time and large memory requirements. Taking four-coil parallel imaging data as an example, STDLR requires $1014.2$ s to finish the reconstruction while the separable Hankel low-rank method takes only $40.8$ s (Fig. \ref{fig_method_2}).
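The size gap is straightforward to reproduce from the stated dimensions; a sketch of the arithmetic, assuming for illustration that both liftings use the same pencil parameter:
\begin{verbatim}
M = N = 256; J = 4; p = 23
stdlr_block = (p*p*J, (M - p + 1)*(N - p + 1))  # 2D block Hankel lifting
shlr_row    = ((N - p + 1), p*J)                # one SHLR row matrix
print(stdlr_block)                              # (2116, 54756)
print(shlr_row)                                 # (234, 92)
\end{verbatim}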
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.4in]{./eps/Fig_STDLR.png}
\caption{Reconstructions of a brain image using STDLR and SHLR. (a) The fully sampled SSOS image; (b-c) SSOS images of the results reconstructed by STDLR and SHLR, respectively; (d) the Cartesian undersampling pattern with a sampling rate of 0.34; (e-f) the reconstruction error distributions (12.5$\times$) corresponding to the above methods. Note: the RLNEs of (b-c) are 0.0611 and 0.0825, and the computational times of (b-c) are 1014.2 s and 40.8 s, respectively.}
\label{fig_method_2}
\end{figure}
\begin{figure*}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.8in]{./eps/Fig_flowchart.png}
\caption{Illustration of constructing Hankel matrices in (a) STDLR and (b) SHLR.}
\label{fig_flowchart}
\end{figure*}
\section{Proposed Method} \label{Section:method}
\subsection{Basic Model - SHLR}
In this work, we first propose a separable Hankel low-rank reconstruction method (SHLR) that enforces the low-rankness of Hankel matrices constructed from each row and each column as follows:
\begin{equation}
\begin{medsize}
\begin{aligned}
\textbf{(SHLR)} \quad \underset{\mathbf{X}}{\mathop{\min }} & \sum\limits_{m=1}^{M} {{{\left\| \mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X} \right\|}_{*}}} + \sum\limits_{n=1}^{N}{{{\left\| \mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}\mathbf{X} \right\|}_{*}}} \\
& \qquad \qquad \qquad \qquad \quad + \frac{\lambda }{2}\left\| \text{vec}\left( \mathbf{Y}-\mathbf{U}{{\mathbf{F}}^{\text{2D}}}\mathbf{X} \right) \right\|_{2}^{2},
\end{aligned}
\end{medsize}
\end{equation}
where $\mathbf{X}\in {{\mathbb{C}}^{M\times N\times J}}$ denotes the desired multi-coil image and ${{\mathbf{F}}^{\text{2D}}}$ is the 2D Fourier transform of each coil data. ${{\mathbf{P}}_{m}}$ and ${{\mathbf{Q}}_{n}}$ denote the operators that extract the $m$-th row ($m$-th vector on the first dimension) and the $n$-th column ($n$-th vector on the second dimension) from each coil data for $m=1,\cdots ,M$ and $n=1,\cdots ,N$. Here, we define $\mathbf{x}_{m,j}^{\text{row}}$ as the $m$-th row and $j$-th coil of $\mathbf{X}$, and $\mathbf{x}_{n,j}^{\text{col}}$ as the $n$-th column and $j$-th coil of $\mathbf{X}$. Then, we have ${{\mathbf{P}}_{m}}\mathbf{X}=\left[ \mathbf{x}_{m,1}^{\text{row}},\cdots ,\mathbf{x}_{m,j}^{\text{row}},\cdots ,\mathbf{x}_{m,J}^{\text{row}} \right]\in {{\mathbb{C}}^{N\times J}}$ and ${{\mathbf{Q}}_{n}}\mathbf{X}=\left[ \mathbf{x}_{n,1}^{\text{col}},\cdots ,\mathbf{x}_{n,j}^{\text{col}},\cdots ,\mathbf{x}_{n,J}^{\text{col}} \right]\in {{\mathbb{C}}^{M\times J}}$.
Fig. \ref{fig_flowchart} shows a graphical illustration of the notations used in constructing Hankel matrices. The operator ${{\mathbf{F}}^{1\text{D}}}$ denotes the 1D Fourier transform of a vector, $\mathbf{W}$ performs weighting on a vector with the weights obtained by applying the Fourier transform to a 1D sparse transform filter, and $\mathbf{H}$ converts a vector into a Hankel matrix. The tilde above an operator means that the corresponding operation is performed on each column vector of the matrix, that is, $\mathbf{\tilde{H}\tilde{W}}{{\mathbf{\tilde{F}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X}=\left[ \mathbf{HW}{{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{m,1}^{\text{row}},\cdots ,\mathbf{HW}{{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{m,J}^{\text{row}} \right]$ and $\mathbf{\tilde{H}\tilde{W}}{{\mathbf{\tilde{F}}}^{1\text{D}}}{{\mathbf{Q}}_{n}}\mathbf{X}=\left[ \mathbf{HW}{{\mathbf{F}}^{1\text{D}}}\mathbf{x}_{n,1}^{\text{col}},\cdots ,\mathbf{HW}{{\mathbf{F}}^{1\text{D}}}\mathbf{x}_{n,J}^{\text{col}} \right]$.
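To make the operators concrete, a minimal NumPy sketch of $\mathbf{HW}{{\mathbf{F}}^{\text{1D}}}$ acting on one row is given below; the pencil parameter \texttt{p} and filter \texttt{h} are illustrative placeholders, not values prescribed by the model:
\begin{verbatim}
import numpy as np

def hankel(x, p):
    # Operator H: map a length-N vector to an (N-p+1) x p Hankel matrix.
    return np.array([x[i:i + p] for i in range(len(x) - p + 1)])

def HWF(x_row, h, p):
    # Apply F^{1D} (FFT), the weighting W (FFT of the sparsifying
    # filter h), and the Hankel lifting H to one row of a coil image.
    k = np.fft.fft(x_row)            # 1D Fourier transform of the row
    w = np.fft.fft(h, n=len(x_row))  # weights from the 1D filter
    return hankel(w * k, p)
\end{verbatim}
The lifted matrix $\mathbf{\tilde{H}\tilde{W}}{{\mathbf{\tilde{F}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X}$ is then the horizontal concatenation of \texttt{HWF} applied to the $m$-th row of every coil.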
\subsection{Enhanced Model}
As mentioned above, building the block Hankel matrix requires huge memory consumption and thereby increases the computational complexity and time. To mitigate the problem, we propose to establish multiple small-size Hankel matrices by exploiting the low-rankness of each row and column vector of the signal. This significantly reduces memory consumption and makes the computation much faster.
However, constraining the low-rankness of each row and each column separately weakens the utilization of correlations between rows and columns, leading to an increase in reconstruction error, e.g., the relative $\ell_2$ norm error (RLNE) (Fig. \ref{fig_method_2} (e-f)). In order to improve the reconstruction while maintaining a fast reconstruction speed, we consider introducing other information to strengthen the constraints of the model. The following three kinds of information are exploited.
\subsubsection{Strengthen the correlation between rows and columns}
The proposed SHLR method makes little use of the correlation between rows and columns. A natural idea for improvement is to strengthen the exploitation of k-space neighborhood information. The SPIRiT is a good choice. SPIRiT \cite{2010_SPIRiT} is primarily based on the assumption that each k-space data point is a convolution of the multi-coil data in its k-space neighborhood, which implies that the SPIRiT constraint has the ability to utilize the linear correlation of adjacent rows and columns to reconstruct the MRI image. Therefore, the SHLR with SPIRiT constraint (SHLR-S) can be modeled as:
\begin{equation}
\begin{medsize}
\begin{aligned}
& \textbf{(SHLR-S)} \;\; \underset{\mathbf{X}}{\mathop{\min }} \sum\limits_{m=1}^{M}{{{\left\| \mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X} \right\|}_{*}}}+\sum\limits_{n=1}^{N}{{{\left\| \mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}\mathbf{X} \right\|}_{*}}}\\
&\qquad \qquad \ +\frac{\lambda }{2}\left\| \text{vec}\left( \mathbf{Y}-\mathbf{U}{{\mathbf{F}}^{\text{2D}}}\mathbf{X} \right) \right\|_{2}^{2}+\frac{{{\lambda }_{1}}}{2}\left\| \text{vec}\left( \mathbf{X}-\mathbf{GX} \right) \right\|_{2}^{2},
\end{aligned}
\end{medsize}
\end{equation}
where $\mathbf{G}$ denotes the SPIRiT operator in image domain.
\subsubsection{Strengthen the low-rankness within each row (or column)}
Another way to improve the reconstruction is to strengthen the low-rankness within each row and each column. The smooth phase of MRI images is often used as prior information in MRI reconstruction \cite{1991_PF,2012_SmoothPhase,2014_LORAKS}. By generating virtual coils containing the conjugate symmetry, the smooth MRI image phase information can be incorporated into the reconstruction problem \cite{2009_VC}. The conjugate symmetry of k-space has been applied to Hankel low-rank reconstruction \cite{2020_Hankel_VC}. This property can easily be introduced into the SHLR reconstruction model as:
\begin{equation}
\begin{medsize}
\begin{aligned}
{{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X} & =\left[ \mathbf{HW}{{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{m,1}^{\text{row}},\cdots ,\mathbf{HW}{{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{m,J}^{\text{row}}, \right.\\
& \left. \mathbf{HW}{{\left( {{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{m,1}^{\text{row}} \right)}^{\dagger }},\cdots ,\mathbf{HW}{{\left( {{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{m,J}^{\text{row}} \right)}^{\dagger }} \right], \\
{{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}\mathbf{X} & =\left[ \mathbf{HW}{{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{n,1}^{\text{col}},\cdots ,\mathbf{HW}{{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{n,J}^{\text{col}}, \right.\\
& \left. \mathbf{HW}{{\left( {{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{n,1}^{\text{col}} \right)}^{\dagger }},\cdots ,\mathbf{HW}{{\left( {{\mathbf{F}}^{\text{1D}}}\mathbf{x}_{n,J}^{\text{col}} \right)}^{\dagger }} \right].
\end{aligned}
\end{medsize}
\end{equation}
where the superscript $\dagger$ represents the operation of taking the conjugate and flipping the vector along the center. Therefore, we can formulate the SHLR with virtual coil (SHLR-V) as:
{\small
\begin{multline}
\textbf{(SHLR-V)} \quad \quad \underset{\mathbf{X}}{\mathop{\min }} \frac{\lambda }{2}\left\| \text{vec}\left( \mathbf{Y}-\mathbf{U}{{\mathbf{F}}^{\text{2D}}}\mathbf{X} \right) \right\|_{2}^{2} + \\
\sum\limits_{m=1}^{M}{{{\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X} \right\|}_{*}}}+\sum\limits_{n=1}^{N}{{{\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}\mathbf{X} \right\|}_{*}}}.
\end{multline} }
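A sketch of the virtual-coil lifting ${{\mathbf{\tilde{H}}}_{\text{vc}}}\mathbf{\tilde{W}}{{\mathbf{\tilde{F}}}^{\text{1D}}}$ follows, reusing the \texttt{hankel} helper from the previous subsection; the exact index bookkeeping of the conjugate flip depends on the k-space grid convention and is simplified here:
\begin{verbatim}
import numpy as np

def hankel(x, p):                    # as in the earlier sketch
    return np.array([x[i:i + p] for i in range(len(x) - p + 1)])

def dagger(k):
    # Conjugate symmetry: conjugate and flip the vector about its center.
    return np.conj(k[::-1])

def HWF_vc(rows, h, p):
    # Stack the Hankel blocks of each coil row and of its virtual-coil
    # (conjugate-symmetric) counterpart, i.e. the operator H_vc W F^{1D}.
    blocks = []
    for x in rows:                   # rows: one vector per coil
        k = np.fft.fft(x)
        w = np.fft.fft(h, n=len(x))
        blocks.append(hankel(w * k, p))
        blocks.append(hankel(w * dagger(k), p))
    return np.hstack(blocks)
\end{verbatim}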
We introduce both the SPIRiT and virtual coil information into the basic SHLR model, aiming to further reduce the reconstruction errors. We can achieve better reconstruction results with the optimization:
\begin{equation}\label{(SHLR-SV_model)}
\begin{medsize}
\begin{aligned}
\textbf{(SHLR-SV)} \; \underset{\mathbf{X}}{\mathop{\min }}\frac{\lambda }{2}\left\| \text{vec} \! \left( \! \mathbf{Y} \! - \! \mathbf{U}{{\mathbf{F}}^{\text{2D}}}\mathbf{X} \! \right) \! \right\|_{2}^{2} \! + \! \frac{{{\lambda }_{1}}}{2} \! \left\| \text{vec}\left( \! \mathbf{X} \! - \! \mathbf{GX} \right) \right\|_{2}^{2}\\
+\sum\limits_{m=1}^{M}{{{\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X} \right\|}_{*}}} \! + \! \sum\limits_{n=1}^{N}{{{\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}\mathbf{X} \right\|}_{*}}}.
\end{aligned}
\end{medsize}
\end{equation}
As shown in Fig. \ref{fig_method_1}, the SHLR-SV, which is equipped with both the SPIRiT and virtual coil information, produces the image with the lowest error (Fig. \ref{fig_method_1} (f)), while the basic SHLR and the SHLR with only one extra information enforcement yield results with obvious errors (Fig. \ref{fig_method_1} (b-c) and (e)). The results indicate the effectiveness of utilizing the two kinds of information in parallel imaging.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.4in]{./eps/Fig_SHLR.png}
\caption{Reconstruction of a brain image using SHLR and its variants. (a) The fully sampled SSOS image; (b, c, e, f) the reconstruction errors (12.5$\times$) of SHLR, SHLR-S, SHLR-V, and SHLR-SV, respectively; (d) the Cartesian undersampling pattern with a sampling rate of 0.34. Note: the RLNEs of (b, c, e, f) are 0.0825, 0.0527, 0.0664, and 0.0426; and the corresponding computational times are 40.8 s, 84.6 s, 60.2 s, and 97.6 s.}
\label{fig_method_1}
\end{figure}
\subsubsection{Parameter-dimensional information}
Extra information can be incorporated to improve reconstructions of multi-dimensional MRI, e.g., information from parameter imaging.
Conventional parameter imaging acquires a series of images using different parameters. The image intensity variation along the parameter dimension is then used in data fitting to estimate the intrinsic tissue parameters, such as the longitudinal relaxation time (T1) and the transverse relaxation time (T2). The tissue parameters are important in diagnosing diseases of the nervous system, musculoskeletal system, liver, and myocardium \cite{2008_mapping,2010_mapping,2011_mapping,2011_Feng_mapping,2014_Bo_parameter,2015_mapping}. It is worth mentioning that the signal along the parameter dimension can be modeled as a mono- or few-exponential decay \cite{2010_mapping,2014_Bo_parameter,2016_ALOHA_mapping}, which indicates the low-rankness of the signal \cite{2016_MORASA,2016_ALOHA_mapping,2017_Shah}. The low-rank property along the parameter dimension can be utilized to enhance the reconstructions.
\begin{figure}[!htb]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.4in]{./eps/Fig_flowchart_mapping}
\caption{Graphical illustration of proposed parameter imaging reconstruction. (a) the diagrammatic sketch of parameter imaging data; (b) the undersampling patterns; (c) constructing the Hankel matrices of proposed method SHLR-VP.}
\label{fig_flowchart_mapping}
\end{figure}
Motivated by ALOHA \cite{2016_ALOHA_mapping}, we reconstruct images along the frequency encoding dimension one-by-one (Fig. \ref{fig_flowchart_mapping}). The difference between ALOHA and the proposed method is that ALOHA constructs a block Hankel matrix from the data on the phase encoding-parameter (PE-P) plane, while the proposed method converts rows or columns of the phase encoding-parameter plane into Hankel matrices separately. The separable low-rank Hankel matrix strategy reduces the reconstruction time to a certain extent (Fig. \ref{fig_SHLR-P}).
Let ${{\mathbf{X}}^\text{Parameter}}\in {{\mathbb{C}}^{M\times N\times L\times J}}$ denote the desired parameter imaging MRI image. Denote $\mathbf{X}_{m}^{\text{PE-P}}\in {{\mathbb{C}}^{N\times L\times J}}$ as the image at the $m$-th position on the frequency encoding dimension of $\mathbf{X}^\text{Parameter}$. Let $\mathbf{Y}^\text{Parameter}$ denote the acquired k-space data of parameter imaging, with zero-filling at unacquired positions and a 1D inverse Fourier transform along the frequency encoding dimension. Let $\mathbf{Y}_{m}^{\text{PE-P}}$ represent the data at the $m$-th position on the frequency encoding dimension of $\mathbf{Y}^\text{Parameter}$. ${{\mathbf{F}}^{\text{PE}}}$ denotes the 1D Fourier transform along the phase encoding dimension.
The reconstruction model for parameter imaging is as follows
\begin{equation}
\begin{medsize}
\begin{aligned}
\textbf{(SHLR-P)}& \quad \underset{\mathbf{X}_{m}^{\text{PE-P}}}{\mathop{\min }}\frac{\lambda }{2}\left\| \text{vec}\left( \mathbf{Y}_{m}^{\text{PE-P}}-\mathbf{U}{{\mathbf{F}}^{\text{PE}}}\mathbf{X}_{m}^{\text{PE-P}} \right) \right\|_{2}^{2} + \\
& \sum\limits_{l=1}^{L}{{{\left\| {{{\mathbf{\tilde{H}}}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}_{m}^{\text{PE-P}} \right\|}_{*}}}+{{\lambda }_{2}}\sum\limits_{n=1}^{N}{{{\left\| \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}_{m}^{\text{PE-P}} \right\|}_{*}}}.
\end{aligned}
\end{medsize}
\end{equation}
To further reduce the reconstruction error, we establish a reconstruction approach that takes advantage of both virtual coil and parameter-dimension information. Notice that SPIRiT exploits the correlation among images of different coils, so it is not appropriate for the recovery of PE-P plane images. Thus, the proposed method stems from the basic model and introduces two priors, the smooth phase of the image and the low-rankness in the parameter dimension:
\begin{equation}\label{(SHLR-P_model)}
\begin{medsize}
\begin{aligned}
\textbf{(SHLR}&\textbf{-VP)} \quad \underset{\mathbf{X}_{m}^{\text{PE-P}}}{\mathop{\min }}\frac{\lambda }{2}\left\| \text{vec}\left( \mathbf{Y}_{m}^{\text{PE-P}}-\mathbf{U}{{\mathbf{F}}^{\text{PE}}}\mathbf{X}_{m}^{\text{PE-P}} \right) \right\|_{2}^{2} + \\
& \sum\limits_{l=1}^{L}{{{\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}_{m}^{\text{PE-P}} \right\|}_{*}}}+{{\lambda }_{2}}\sum\limits_{n=1}^{N}{{{\left\| \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}_{m}^{\text{PE-P}} \right\|}_{*}}}.
\end{aligned}
\end{medsize}
\end{equation}
As shown in Fig. \ref{fig_SHLR-P}, the SHLR-VP, which utilizes the virtual coil, achieves the lowest error, and the computational time of the proposed SHLR-VP is reduced to $2/3$ of that of ALOHA. The results demonstrate the advantages of the proposed method in terms of low reconstruction error and fast reconstruction speed for parameter imaging.
\begin{figure}[!htb]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=2.8in]{./eps/Fig_SHLR-P}
\caption{Reconstruction on T2 mapping using ALOHA and the proposed method with the reduction factor R=8. (a) The T2 map estimated from fully sampled data; (b-d) the T2 map error distributions (6$\times$) of ALOHA, SHLR-P, and SHLR-VP, respectively. Note: the RLNEs of (b-d) are 0.1262, 0.1174, and 0.1029, and the computational times of (b-d) are 309 s, 108 s, and 197 s, respectively.}
\label{fig_SHLR-P}
\end{figure}
\begin{figure*}[!htb]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.4in]{./eps/Fig_2D_uniform}
\caption{Parallel imaging reconstruction results under the uniform undersampling pattern. (a) An SSOS image of fully sampled data; (b-e) SSOS images of results reconstructed by $\ell_1$-SPIRiT, AC-LORAKS, STDLR-SPIRiT and SHLR-SV, respectively; (f) the uniform undersampling pattern with reduction factor R=6 and 20 ACS lines; (g-j) the reconstruction error distributions (10$\times$) corresponding to the reconstructed images above them.}
\label{fig_2D_uniform}
\end{figure*}
\subsection{Numerical Algorithm}
We adopted the alternating direction method of multipliers (ADMM) \cite{2011_ADMM} to solve both the proposed SHLR-SV model in Eq. \eqref{(SHLR-SV_model)} and the SHLR-VP model in Eq. \eqref{(SHLR-P_model)}.
\subsubsection{SHLR-SV}
The augmented Lagrangian form of Eq. \eqref{(SHLR-SV_model)} is
\begin{equation} \label{(SHLR-SV_ALF)}
\begin{medsize}
\begin{aligned}
& \underset{\mathbf{D}_{m}^{\text{row}},\mathbf{D}_{n}^{\text{col}}}{\mathop{\max }}\underset{\mathbf{X},\mathbf{Z}_{m}^{\text{row}},\mathbf{Z}_{n}^{\text{col}}}{\mathop{\min }} \frac{\lambda }{2} \! \left\| \text{vec} \! \left( \! \mathbf{Y} \! -\! \mathbf{U}{{\mathbf{F}}^{\text{2D}}}\mathbf{X} \! \right) \right\|_{2}^{2} \! +\! \frac{{{\lambda }_{1}}}{2} \! \left\| \text{vec}\left( \mathbf{X} \! -\! \mathbf{GX} \right) \right\|_{2}^{2} \\
& \qquad \! + \! \sum\limits_{m=1}^{M}{\left( {{\left\| \mathbf{Z}_{m}^{\text{row}} \right\|}_{*}} \! + \! \frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}\mathbf{X} \! - \! \mathbf{Z}_{m}^{\text{row}} \! + \! \frac{\mathbf{D}{{_{m}^{\text{row}}}}}{\beta } \right\|_{F}^{2} \right)} \\
& \qquad \! + \! \sum\limits_{n=1}^{N}{\left( {{\left\| \mathbf{Z}_{n}^{\text{col}} \right\|}_{*}} \! + \! \frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}\mathbf{X}-\mathbf{Z}_{n}^{col} \! + \! \frac{\mathbf{D}{{_{n}^{\text{col}}}}}{\beta } \right\|_{F}^{2} \right)} \\
\end{aligned}
\end{medsize}
\end{equation}
where $\mathbf{D}_{m}^{\text{row}}$ and $\mathbf{D}_{n}^{\text{col}}$ are the Lagrange multipliers and $\beta$ is the penalty parameter.
The solution of Eq. \eqref{(SHLR-SV_ALF)} can be obtained by alternately solving the following sub-problems:
\begin{equation} \label{(SHLR-SV_X)}
\begin{medsize}
\begin{aligned}
&{{\mathbf{X}}^{\left( k+1 \right)}} \! = \! \underset{\mathbf{X}}{\mathop{\min }} \!
\sum\limits_{m=1}^{M} \! {\frac{\beta }{2} \! \left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}{{\mathbf{X}}^{\left( k \right)}} \!\! - \! \mathbf{Z}{{_{m}^{\text{row}}}^{\left( k \right)}} \!\! +\!\! \frac{\mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2}} \\
& \! + \! \sum\limits_{n=1}^{N}{\frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}{{\mathbf{X}}^{\left( k \right)}} \! - \! \mathbf{Z}{{_{n}^{col}}^{\left( k \right)}} \! + \! \frac{\mathbf{D}{{_{n}^{\text{col}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2}}\\
& \! + \! \frac{\lambda }{2} \! \left\| \text{vec}\left( \mathbf{Y} \! - \! \mathbf{U}{{\mathbf{F}}^{\text{2D}}}{{\mathbf{X}}^{\left( k \right)}} \right) \right\|_{2}^{2} \! + \! \frac{{{\lambda }_{1}}}{2} \! \left\| \text{vec}\left( {{\mathbf{X}}^{\left( k \right)}} \! - \! \mathbf{G}{{\mathbf{X}}^{\left( k \right)}} \right) \right\|_{2}^{2},\\
\end{aligned}
\end{medsize}
\end{equation}
\begin{equation} \label{(SHLR-SV_Zrow)}
\begin{medsize}
\begin{aligned}
\mathbf{Z}{{_{m}^{\text{row}}}^{\left( k+1 \right)}} & = \underset{\mathbf{Z}_{m}^{\text{row}}}{\mathop{\min }}\,{{\left\| \mathbf{Z}{{_{m}^{\text{row}}}^{\left( k \right)}} \right\|}_{*}} \\
& + \frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{P}}_{m}}{{\mathbf{X}}^{\left( k+1 \right)}} \! - \! \mathbf{Z}{{_{m}^{\text{row}}}^{\left( k \right)}} \! + \! \frac{\mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2},
\end{aligned}
\end{medsize}
\end{equation}
\begin{equation} \label{(SHLR-SV_Zcol)}
\begin{medsize}
\begin{aligned}
\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k+1 \right)}} & = \underset{\mathbf{Z}_{n}^{\text{col}}}{\mathop{\min }}\,{{\left\| \mathbf{Z}{{_{n}^{\text{col}}}^{\left( k \right)}} \right\|}_{*}}\\
& +\frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{\text{1D}}}{{\mathbf{Q}}_{n}}{{\mathbf{X}}^{\left( k+1 \right)}}-\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k \right)}}+\frac{\mathbf{D}{{_{n}^{\text{col}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2},
\end{aligned}
\end{medsize}
\end{equation}
\begin{equation} \label{(SHLR-SV_Drow)}
\begin{medsize}
\mathbf{D}{{_{m}^{\text{row}}}^{\left( k+1 \right)}} \!=\! \mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}} \!+\! \tau \! \left( {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{P}}_{m}}{{\mathbf{X}}^{\left( k+1 \right)}}-\mathbf{Z}{{_{m}^{\text{row}}}^{\left( k+1 \right)}} \! \right),
\end{medsize}
\end{equation}
\begin{equation} \label{(SHLR-SV_Dcol)}
\begin{medsize}
\mathbf{D}{{_{n}^{\text{col}}}^{\left( k+1 \right)}}=\mathbf{D}{{_{n}^{\text{col}}}^{\left( k \right)}}+\tau \left( {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{n}}{{\mathbf{X}}^{\left( k+1 \right)}}-\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k+1 \right)}} \right),
\end{medsize}
\end{equation}
where $\tau$ is the step size.
For fixed $\mathbf{Z}{{_{m}^{\text{row}}}^{\left( k \right)}}$, $\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k \right)}}$, $\mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}}$, and $\mathbf{D}{{_{n}^{\text{col}}}^{\left( k \right)}}$, ${{\mathbf{X}}^{\left( k+1 \right)}}$ has a closed-form solution given by
\begin{equation} \label{(SHLR-SV_X_solution)}
\begin{medsize}
\begin{aligned}
& {\mathbf{X}}^{\left( k+1 \right)} \!\! = \! \left( \! \lambda {{\mathbf{F}}^{\text{2D,*}}}{{\mathbf{U}}^{*}}\mathbf{U}{{\mathbf{F}}^{\text{2D}}} \!\! +\! \beta \!\! \sum\limits_{m=1}^{M} \!\! {\mathbf{P}_{m}^{*}{{{\mathbf{\tilde{F}}}}^{1\text{D,*}}}{{{\mathbf{\tilde{W}}}}^{*}}\mathbf{\tilde{H}}_{\text{vc}}^{*}{{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{P}}_{m}}} \right.\\
& + {{\left. {{\lambda }_{1}}{{\left( \mathbf{G} \! - \! \mathbf{I} \right)}^{*}} \!\! \left( \mathbf{G} \! - \! \mathbf{I} \right) \! + \!\beta \! \sum\limits_{n=1}^{N}{\mathbf{Q}_{n}^{*}{{{\mathbf{\tilde{F}}}}^{1\text{D,*}}}{{{\mathbf{\tilde{W}}}}^{*}}\mathbf{\tilde{H}}_{\text{vc}}^{*}{{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{n}}} \right)}^{-1}} \\
& \left ( \lambda {{\mathbf{F}}^{\text{2D,*}}}{{\mathbf{U}}^{*}}\mathbf{Y} \!+\!\beta \!\! \sum\limits_{m=1}^{M}{\mathbf{P}_{m}^{*}{{{\mathbf{\tilde{F}}}}^{1\text{D,*}}}{{{\mathbf{\tilde{W}}}}^{*}}\mathbf{\tilde{H}}_{\text{vc}}^{*}\left( \mathbf{Z}{{_{m}^{\text{row}}}^{\left( k \right)}}-\frac{\mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}}}{\beta } \right)} \right . \\
& \left . + \beta \sum\limits_{n=1}^{N}{\mathbf{Q}_{n}^{*}{{{\mathbf{\tilde{F}}}}^{1\text{D,*}}}{{{\mathbf{\tilde{W}}}}^{*}}\mathbf{\tilde{H}}_{\text{vc}}^{*}\left( \mathbf{Z}{{_{n}^{\text{col}}}^{\left( k \right)}}-\frac{\mathbf{D}{{_{n}^{\text{col}}}^{\left( k \right)}}}{\beta } \right)} \right ),
\end{aligned}
\end{medsize}
\end{equation}
where the superscript $*$ denotes the adjoint operator. Here, for any matrix $\mathbf{A}$, $\text{vec}^{-1}\left( \text{vec} \left( \mathbf{A} \right) \right) = \mathbf{A}$; therefore, we omit writing $\text{vec}^{-1}\left( \text{vec} \left( \cdot \right) \right)$ for simplicity in the following.
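It is also worth noting that, for the standard (unweighted) Hankel lifting, ${{\mathbf{\tilde{H}}}^{*}}\mathbf{\tilde{H}}$ is a diagonal counting operator: each entry of the underlying signal is simply multiplied by the number of times it appears in the Hankel matrix. Under this assumption, all operators entering the inverse in Eq. \eqref{(SHLR-SV_X_solution)} are data-independent and can be precomputed once, which keeps the per-iteration cost of the $\mathbf{X}$ update low.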
For fixed ${{\mathbf{X}}^{\left( k+1 \right)}}$ and $\mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}}$, $\mathbf{Z}{{_{m}^{\text{row}}}^{\left( k+1 \right)}}$ is obtained by
\begin{equation} \label{(SHLR-SV_Zrow_solution)}
\begin{medsize}
\mathbf{Z}{{_{m}^{\text{row}}}^{\left( k+1 \right)}}={{S}_{1/\beta }}\left( {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{P}}_{m}}{{\mathbf{X}}^{\left( k+1 \right)}}+\frac{\mathbf{D}{{_{m}^{\text{row}}}^{\left( k \right)}}}{\beta } \right),
\end{medsize}
\end{equation}
where ${{S}_{1/\beta }}\left( \cdot \right)$ denotes the singular value thresholding operator with threshold $1/\beta $.
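Explicitly, writing the singular value decomposition of a matrix $\mathbf{A}$ as $\mathbf{A}={{\mathbf{U}}_{\mathbf{A}}}\mathbf{\Sigma}\mathbf{V}_{\mathbf{A}}^{*}$ with singular values ${{\sigma }_{i}}$, the operator acts as
\begin{equation}
{{S}_{1/\beta }}\left( \mathbf{A} \right)={{\mathbf{U}}_{\mathbf{A}}}\,\text{diag}\left( \max \left( {{\sigma }_{i}}-1/\beta ,0 \right) \right)\mathbf{V}_{\mathbf{A}}^{*},
\end{equation}
i.e., all singular values are shrunk by $1/\beta$ and truncated at zero.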
Similarly, $\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k+1 \right)}}$ can be obtained by
\begin{equation} \label{(SHLR-SV_Zcol_solution)}
\begin{medsize}
\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k+1 \right)}}={{S}_{\frac{1}{\beta }}}\left( {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{n}}{{\mathbf{X}}^{\left( k+1 \right)}}+\frac{\mathbf{D}{{_{n}^{\text{col}}}^{\left( k \right)}}}{\beta } \right).
\end{medsize}
\end{equation}
The numerical algorithm for SHLR-SV is summarized in Supplementary Material S1.
\subsubsection{SHLR-VP}
Owing to space limitations, please refer to Supplementary Material S2 for the detailed derivations for solving the SHLR-VP model.
\section{Results}\label{Section:results}
\begin{figure*}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.4in]{./eps/Fig_2D_random_V1}
\caption{Parallel imaging reconstruction results and errors under the Cartesian undersampling pattern. (a) An SSOS image of fully sampled data; (b-e) SSOS images of results reconstructed by $\ell_1$-SPIRiT, AC-LORAKS, STDLR-SPIRiT and SHLR-SV, respectively; (f) the Cartesian undersampling pattern with a sampling rate of 0.34; (g-j) the reconstruction error distributions (12.5$\times$) corresponding to the reconstructed images above them.}
\label{fig_2D_random}
\end{figure*}
In this section, we evaluate the two proposed approaches, SHLR-SV and SHLR-VP, using \textit{in vivo} data from different MRI scanners.
In all experiments, the multi-coil images are combined by the square root of sum of squares (SSOS). We adopted the RLNE \cite{2021_MIA_xinlin} and the mean structural similarity index measure (MSSIM) \cite{2004_MSSIM} as objective criteria to quantify the reconstruction performance. The lower the RLNE, the higher the consistency between the fully sampled and reconstructed images. A higher MSSIM indicates better detail preservation in the reconstruction. The calculation of RLNE and MSSIM can be found in the Supplementary Material.
\subsection{Parallel Imaging}
In this subsection, we assess the reconstruction results and running time of SHLR-SV and compare them with three state-of-the-art reconstruction approaches: $\ell_1$-SPIRiT \cite{2010_SPIRiT}, AC-LORAKS \cite{2015_AC-LORAKS}, and STDLR-SPIRiT \cite{2020_xinlin}. These three methods, like SHLR-SV, require ACS for reconstruction. More specifically, $\ell_1$-SPIRiT reconstructs images by exploiting kernel estimation and the sparsity of the image in a transform domain. Both AC-LORAKS and STDLR-SPIRiT utilize the low-rankness of a block Hankel matrix (or structured matrix) of k-space to recover the MRI image. In addition, AC-LORAKS utilizes the ACS to reduce the algorithm complexity \cite{2015_AC-LORAKS}.
The codes of $\ell_1$-SPIRiT are shared online by Dr. Michael Lustig \cite{code_SPIRiT}, and the codes of AC-LORAKS \cite{code_LORAKS} are shared on Dr. Justin P. Haldar's website. Here, we adopted the S-based AC-LORAKS with virtual coil, which makes use of phase constraints; we chose the S-based variant because LORAKS with the phase constraint provides results with lower error than the other constraints \cite{2014_LORAKS}. The parameters of all compared methods are optimized to obtain the lowest RLNE. For the proposed methods, the effect of the parameter settings is discussed in Supplementary Material S5.
Two brain datasets acquired from healthy volunteers are adopted in the experiments. The dataset depicted in Fig. \ref{fig_2D_uniform} (a) is obtained from a 3T SIEMENS MRI scanner (Siemens Healthcare, Erlangen, Germany) equipped with 32 coils using a T2-weighted turbo spin echo sequence (matrix size = $256 \times 256$, TR/TE = $3000 / 66$ ms, FOV = $200$ mm $\times$ $200$ mm, slice thickness = $5$ mm). Eight virtual coils are compressed from the acquired data of 32 coils \cite{2013_Coil_Compression} to reduce the computational complexity. The other dataset, shown in Fig. \ref{fig_2D_random} (a), is acquired from a 3T SIEMENS Trio whole-body scanner (Siemens Healthcare, Erlangen, Germany) equipped with 32 coils using a 2D T2-weighted turbo spin echo sequence (matrix size = $256 \times 256$, TR/TE = $6100 / 99$ ms, FOV = $220$ mm $\times$ $220$ mm, slice thickness = $3$ mm). Four virtual coils are compressed from the acquired data of 32 coils \cite{2013_Coil_Compression}.
We first test the proposed method for parallel imaging using the uniform undersampling pattern, which is widely used in commercial MRI scanners. Both $\ell_1$-SPIRiT (Fig. \ref{fig_2D_uniform} (b)) and AC-LORAKS (Fig. \ref{fig_2D_uniform} (c)) exhibit strong undersampling artifacts. STDLR-SPIRiT (Fig. \ref{fig_2D_uniform} (d)) and SHLR-SV (Fig. \ref{fig_2D_uniform} (e)) show good artifact removal. It is worth noting that SHLR-SV provides the image with lower error than STDLR-SPIRiT.
We then test the proposed method using non-uniform undersampling patterns, including 1D Cartesian and 2D random undersampling patterns. In the reconstruction under the Cartesian undersampling pattern, AC-LORAKS (Fig. \ref{fig_2D_random} (c)) yields results exhibiting obvious artifacts inside the brain area, whereas ringing artifacts remain in the reconstructed image of $\ell_1$-SPIRiT (Fig. \ref{fig_2D_random} (b)). Both STDLR-SPIRiT (Fig. \ref{fig_2D_random} (d)) and SHLR-SV (Fig. \ref{fig_2D_random} (e)) provide images with good artifact suppression; however, the reconstruction error of STDLR-SPIRiT appears slightly larger than that of SHLR-SV, especially in the skull area.
\begin{figure}[!htb]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.4in]{./eps/Fig_2D_PF_V1}
\caption{Parallel imaging reconstruction results using the undersampling pattern with partial Fourier. (a) An SSOS image of fully sampled data; (b, c, e, f) reconstruction errors (10$\times$) introduced by $\ell_1$-SPIRiT, AC-LORAKS, STDLR-SPIRiT and SHLR-SV, respectively; (d) the Cartesian undersampling pattern with 3/4 PF (total sampling rate 0.30).}
\label{fig_2D_PF}
\end{figure}
\begin{table}[htbp]
\centering
\caption{RLNE/MSSIM for parallel imaging reconstructions.}
\begin{tabular}{m{0.7cm}<{\centering}m{1.35cm}<{\centering}m{1.6cm}<{\centering}m{1.85cm}<{\centering}m{1.25cm}<{\centering}}
\toprule
Image & $\ell_1$-SPIRiT & AC-LORAKS & STDLR-SPIRiT & SHLR-SV \\
\midrule
Fig. \ref{fig_2D_uniform} & 0.0688/0.9637 & 0.0709/0.9386 & 0.0488/0.9810 & \textbf{0.0482}/\textbf{0.9822} \\
Fig. \ref{fig_2D_random} & 0.0689/0.9856 & 0.0596/0.9806 & 0.0518/0.9918 & \textbf{0.0426}/\textbf{0.9934} \\
Fig. \ref{fig_2D_PF} & 0.1028/0.9751 & 0.0935/0.9543 & 0.0914/0.9833 & \textbf{0.0639}/\textbf{0.9882} \\
\bottomrule
\end{tabular}
\label{Table_2D}%
\end{table}%
In addition, SHLR-SV allows promising reconstructions on undersampling patterns with partial Fourier (PF), which is commonly adopted in commercial equipment for further acceleration \cite{1991_PF}, indicating robustness to different sampling patterns. As shown in Fig. \ref{fig_2D_PF}, in the case with PF, the reconstruction errors of the three compared approaches, including STDLR-SPIRiT, increase noticeably, whereas SHLR-SV still yields good reconstructions with low error. Meanwhile, the proposed SHLR-SV outperforms the other methods in terms of RLNE and MSSIM, indicating an excellent ability of artifact suppression and detail preservation.
We include more data and sampling patterns to further evaluate the proposed SHLR-SV approach; detailed results can be found in Supplementary Material S3. Similar phenomena are observed in these results, indicating that SHLR-SV is robust to data from different subjects, and also to different sampling patterns. It can be seen that SHLR-SV outperforms $\ell_1$-SPIRiT and AC-LORAKS, and shows results comparable to STDLR-SPIRiT. For the patterns with PF, SHLR-SV gains a slight improvement over STDLR-SPIRiT with respect to RLNE and MSSIM. Moreover, the computational time of the proposed method is significantly reduced compared to that of STDLR-SPIRiT ($8 \times$ faster). Overall, the proposed method provides faithful reconstructions within an acceptable computational time, and thus has the potential to be used in clinical applications.
\begin{figure*}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=5.6in]{./eps/Fig_mapping_V1}
\caption{T2 mapping reconstruction results and errors under two reduction factors. (a) and (h): the T2 map estimated from fully sampled data; (b-d) T2 maps of results reconstructed at reduction factor R=6 by MORASA, ALOHA, and SHLR-VP, respectively; (i-k) T2 maps of results reconstructed at R=8 by MORASA, ALOHA, and SHLR-VP, respectively; (e-g) and (l-n) the reconstruction error distributions (6$\times$) corresponding to the reconstructed images above them. Note: the RLNE/MSSIM of (b-d) are 0.0978/0.9891, 0.0959/0.9883, and \textbf{0.0882}/\textbf{0.9909}, respectively, and the RLNE/MSSIM of (i-k) are 0.1262/0.9831, 0.1386/0.9775, and \textbf{0.1029}/\textbf{0.9872}, respectively.}
\label{fig_mapping}
\end{figure*}
\subsection{Parameter Imaging}
In this subsection, we validate the proposed SHLR-VP method using acquired \textit{in vivo} T2 mapping data.
We compare the proposed SHLR-VP method with two state-of-the-art mapping reconstruction methods: MORASA \cite{2016_MORASA} and ALOHA \cite{2016_ALOHA_mapping}. Both methods utilize the exponential property of the signal in the parameter dimension. MORASA models the time-domain signal as an exponential function and enforces the sparsity of the image at each time point. ALOHA makes use of the annihilation relationship of the data and lifts the signal into a higher dimensional space to ensure low-rankness. The codes of MORASA are shared by Dr. Xi Peng, and the codes of ALOHA \cite{code_ALOHA} are shared on Dr. Jong Chul Ye's website.
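The rationale for exploiting the parameter dimension is the classical link between exponentials and Hankel rank: if the signal at a voxel follows a mono-exponential decay $s\left( \text{TE} \right)={{s}_{0}}\exp \left( -\text{TE}/{{\text{T}}_{2}} \right)$, then, for uniformly spaced echoes, the samples form a geometric sequence ${{s}_{l}}={{s}_{0}}{{r}^{l}}$ with $r=\exp \left( -\Delta \text{TE}/{{\text{T}}_{2}} \right)$, and the Hankel matrix constructed along the parameter dimension has rank one (and rank $k$ for a superposition of $k$ exponentials).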
A fully sampled brain dataset was acquired from a healthy volunteer on a 3T 12-coil MRI scanner (Siemens Healthcare, Erlangen, Germany) using a turbo spin echo sequence (TR = $4000$ ms, $15$ TEs from $8.8$ to $132$ ms with $8.8$ ms spacing, FOV = $200$ mm $\times$ $200$ mm, matrix size = $192 \times 192$, slice thickness = $3$ mm). To estimate the T2 map, a standard nonlinear least-squares fitting was performed in the selected region-of-interest on a pixel-by-pixel basis. T2 relaxation values outside the reasonable range were excluded from the T2 map.
We performed reconstructions under two different reduction factors, $R = 6$ and $R = 8$. In both cases, the MORASA maps exhibit apparent string-like artifacts (Fig. \ref{fig_mapping} (b) and (i)). Compared to ALOHA, SHLR-VP produces mapping results with better accuracy: as can be seen from the images, its mapping error is obviously smaller than that of ALOHA. The RLNE and MSSIM also indicate that SHLR-VP preserves the image structure better than the compared methods.
In addition, we validated the proposed method on other datasets and on sampling patterns with PF. Please refer to Supplementary Material S4 for detailed results. The results demonstrate that the proposed SHLR-VP yields reconstructions with the lowest error in both the reconstructed images and the quantitative T2 maps.
\section{Discussions}\label{Section:discussion}
\subsection{The Number of ACS}
The ACS is crucial for traditional parallel reconstruction methods. These methods estimate sensitivity maps or kernels from the ACS; once the number of ACS lines is limited, the estimation becomes inaccurate, leading to reconstruction failures \cite{2014_MRM_Lustig}. The four parallel imaging methods compared here require ACS to reconstruct images; thus, in this subsection we discuss the effect of the number of ACS lines on the reconstructed results.
The results shown in Fig. \ref{fig_ACS} indicate that the proposed SHLR-SV is robust to the number of ACS lines. When the ACS lines are relatively sufficient ($>14$ lines), all compared methods provide reconstructions with relatively low error; in particular, the proposed SHLR-SV achieves the lowest RLNE and the highest MSSIM. However, when the number of ACS lines is extremely low ($=8$), $\ell_1$-SPIRiT and AC-LORAKS fail to produce satisfying results (the RLNE approaches $0.2$). In contrast, STDLR-SPIRiT and SHLR-SV still work well, offering an RLNE of approximately 0.07 even with only $8$ ACS lines. This observation demonstrates that the proposed SHLR-SV inherits the robustness of structured low-rank methods.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.4in]{./eps/Fig_ACS.png}
\caption{The variation trends of RLNE (a) and MSSIM (b) versus the number of ACS lines under parallel imaging reconstruction. This experiment is carried out with a series of Gaussian Cartesian sampling patterns but all at the same sampling rate of 0.34. The difference between each sampling pattern is the number of ACS lines. Note: all experiments are conducted on the brain data shown in Fig. \ref{fig_2D_random} (a). All methods are performed with the same parameters respectively, with which these methods provide the lowest RLNE when ACS lines equal 22.}
\label{fig_ACS}
\end{figure}
\subsection{Computational Time}
All reconstruction software was run on a CentOS 7 computation server with two 3.5 GHz Intel Xeon CPUs and 112 GB RAM. The computational time was obtained by averaging the reconstruction time of $10$ Monte Carlo tests. All compared approaches were implemented in MATLAB.
As shown in Fig. \ref{fig_runtime}, the running time of SHLR-SV is reduced to $1/8$ of that of STDLR-SPIRiT, which substantially alleviates the burden of the lengthy reconstruction time of STDLR-SPIRiT. Moreover, the proposed method runs somewhat slower than $\ell_1$-SPIRiT and AC-LORAKS; however, the additional computational cost of SHLR-SV should be acceptable considering its improvement in image reconstruction.
The computational time of parameter imaging reconstruction is also depicted in Fig. \ref{fig_runtime}. It shows that the proposed SHLR-VP reconstructs data faster than MORASA and ALOHA. Besides, it is worth noting that the runtime of the proposed method can be further reduced with GPU or MEX/C implementations.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=3.2in]{./eps/Fig_runtime.png}
\caption{The computational time of different methods in parallel imaging and parameter imaging. Note: parallel imaging reconstructions are conducted on the brain data shown in Fig. \ref{fig_2D_random} (a) under the Gaussian Cartesian undersampling pattern shown in Fig. \ref{fig_2D_random} (f), and parameter imaging reconstructions are conducted on the data shown in Fig. \ref{fig_mapping} with the reduction factor R=8.}
\label{fig_runtime}
\end{figure}
\section{Conclusion}\label{Section:conclusion}
In this work, we attempted to alleviate the huge computational complexity and lengthy reconstruction time of state-of-the-art structured low-rank approaches. To reduce the computational time, we proposed a separable Hankel low-rank reconstruction method, named SHLR, that enforces the Hankel low-rankness of each row and each column of the signal of interest. Although this way of enforcing low-rankness is sub-optimal compared with block Hankel methods, it holds the advantage of a swift runtime. To enhance the performance of SHLR, we introduced prior information. For parallel imaging, we explored the correlation between rows/columns and the virtual coil technique, and proposed the SHLR-SV model. Regarding parameter imaging, we assumed that the signal intensity along the parameter dimension varies exponentially; we enforced this exponential property together with the virtual coil technique and proposed a model named SHLR-VP for accelerating parameter imaging.
Experimental \textit{in vivo} results showed that the two proposed approaches achieve better results than the state-of-the-art methods. Notably, the proposed methods allow reconstructions with smaller errors at a faster reconstruction speed.
\bibliographystyle{IEEEtran}
\section{Numerical algorithm for SHLR-SV}
The numerical algorithm for SHLR-SV is summarized in Algorithm \ref{alg:1}.
\begin{algorithm}[htb]
\footnotesize
\caption{MRI reconstruction with SHLR-SV.}\label{alg:1}
\hspace*{0.02in}{\bf{Input:}} $\mathbf{Y}$, $\mathbf{U}$, $\mathbf{G}$, $\lambda $, ${{\lambda }_{1}}$, $\beta $, $\tau $.\\
\hspace*{0.02in}{\bf{Initialization:}} $\mathbf{Z}_{m}^{\text{row}}=\mathbf{Z}_{n}^{\text{col}}=\mathbf{0}$, $\mathbf{D}_{m}^{\text{row}}=\mathbf{D}_{n}^{\text{col}}=\mathbf{1}$, and $k=1$.\\
\hspace*{0.02in}{\bf{Output:}} ${\mathbf{X}}$.
\begin{algorithmic}[1]
\WHILE{ $k\le 50$ and $ \left\| {{\mathbf{X}}^{\left( k+1 \right)}}-{{\mathbf{X}}^{\left( k \right)}}\ \right\|_{F}^{2}\ /\left\| {{\mathbf{X}}^{\left( k \right)}}\ \right\|_{F}^{2}\ge {{10}^{-6}}$ }
\STATE
Update ${{\mathbf{X}}^{\left( k+1 \right)}}$ by solving Eq. (15);
\STATE
Update $\mathbf{Z}{{_{m}^{\text{row}}}^{\left( k+1 \right)}}$ and $\mathbf{Z}{{_{n}^{\text{col}}}^{\left( k+1 \right)}}$ by using Eq. (16) and Eq. (17) in the main text;
\STATE
Update multiplier $\mathbf{D}{{_{m}^{\text{row}}}^{\left( k+1 \right)}}$ and $\mathbf{D}{{_{n}^{\text{col}}}^{\left( k+1 \right)}}$ by using Eq. (13) and Eq. (14) in the main text;
\STATE $ k = k + 1 $;
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Derivations and numerical algorithm for SHLR-VP}
The augmented Lagrangian form of Eq. (8) in the main text is
\begin{equation} \label{(SHLR-P_ALF)}
\begin{medsize}
\begin{aligned}
\underset{\mathbf{D}_{n}^{\text{PE}},\mathbf{D}_{l}^{\text{P}}}{\mathop{\max }}\,\underset{\mathbf{X}_{m}^{\text{PE-P}},\mathbf{Z}_{n}^{\text{PE}},\mathbf{Z}_{l}^{\text{P}}}{\mathop{\min }} & \frac{\lambda }{2}\left\| \text{vec}\left( \mathbf{Y}_{m}^{\text{PE-P}}-\mathbf{U}{{\mathbf{F}}^{\text{PE}}}\mathbf{X}_{m}^{\text{PE-P}} \right) \right\|_{2}^{2} +
\sum\limits_{n=1}^{N}{\left( {{\lambda }_{2}}{{\left\| \mathbf{Z}_{n}^{\text{PE}} \right\|}_{*}} \! + \! \left\langle \mathbf{D}_{n}^{\text{PE}},\mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}_{m}^{\text{PE-P}} \right\rangle \! + \! \frac{\beta }{2}\left\| \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}_{m}^{\text{PE-P}} \! - \! \mathbf{Z}_{n}^{\text{PE}} \right\|_{F}^{2} \right)} \\
& + \sum\limits_{l=1}^{L}{\left( {{\left\| \mathbf{Z}_{l}^{\text{P}} \right\|}_{*}} \! + \! \left\langle \mathbf{D}_{l}^{\text{P}},{{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}_{m}^{\text{PE-P}} \right\rangle \! + \! \frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}_{m}^{\text{PE-P}} \!- \! \mathbf{Z}_{l}^{\text{P}} \right\|_{F}^{2} \right)}.
\end{aligned}
\end{medsize}
\end{equation}
The solution of Eq. \eqref{(SHLR-P_ALF)} is obtained by alternately solving the following sub-problems:
\begin{equation} \label{(SHLR-P_X)}
\begin{medsize}
\begin{aligned}
\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}} = & \arg \underset{\mathbf{X}_{m}^{\text{PE-P}}}{\mathop{\min }}\, \frac{\lambda }{2} \left\| \text{vec}\left( \mathbf{Y}_{m}^{\text{PE-P}}-\mathbf{U}{{\mathbf{F}}^{\text{PE}}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k \right)}} \right) \right\|_{2}^{2}
+\sum\limits_{n=1}^{N}{\frac{\beta }{2}\left\| \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k \right)}}-\mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k \right)}}+\frac{\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k \right)}}}{\beta} \right\|_{F}^{2}} \\
& +\sum\limits_{l=1}^{L}{\frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k \right)}}-\mathbf{Z}{{_{l}^{\text{P}}}^{\left( k \right)}}+\frac{\mathbf{D}{{_{l}^{\text{P}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2}},
\end{aligned}
\end{medsize}
\end{equation}
\begin{equation} \label{(SHLR-P_Zpe)}
\mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k+1 \right)}} = \arg \underset{\mathbf{Z}_{n}^{\text{PE}}}{\mathop{\min }}\,{{\lambda }_{2}}{{\left\| \mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k \right)}} \right\|}_{*}} + \frac{\beta }{2}\left\| \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}} - \mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k \right)}} + \frac{\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2},
\end{equation}
\begin{equation} \label{(SHLR-P_Zt)}
\mathbf{Z}{{_{l}^{\text{P}}}^{\left( k+1 \right)}} = \arg \underset{\mathbf{Z}_{l}^{\text{P}}}{\mathop{\min }}\,{{\left\| \mathbf{Z}{{_{l}^{\text{P}}}^{\left( k \right)}} \right\|}_{*}} + \frac{\beta }{2}\left\| {{{\mathbf{\tilde{H}}}}_{\text{vc}}}\mathbf{\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}} - \mathbf{Z}{{_{l}^{\text{P}}}^{\left( k \right)}} + \frac{\mathbf{D}{{_{l}^{\text{P}}}^{\left( k \right)}}}{\beta } \right\|_{F}^{2},
\end{equation}
\begin{equation} \label{(SHLR-P_Dpe)}
\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k+1 \right)}}=\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k \right)}}+\tau \left( \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}}-\mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k+1 \right)}} \right),
\end{equation}
\begin{equation} \label{(SHLR-P_Dt)}
\mathbf{D}{{_{l}^{\text{P}}}^{\left( k+1 \right)}}=\mathbf{D}{{_{l}^{\text{P}}}^{\left( k \right)}}+\tau \left( \mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}}-\mathbf{Z}{{_{l}^{\text{P}}}^{\left( k+1 \right)}} \right).
\end{equation}
The solution of Eq. \eqref{(SHLR-P_X)} is given by
\begin{equation} \label{(SHLR-P_X_solution)}
\begin{medsize}
\begin{aligned}
\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}} = &\left( \lambda {{\mathbf{F}}^{\text{PE,*}}}{{\mathbf{U}}^{*}}\mathbf{UF}^{\text{PE}} + \beta \sum\limits_{n=1}^{N}{\mathbf{P}_{n}^{*}{{{\mathbf{\tilde{H}}}}^{*}}\mathbf{\tilde{H}}{{\mathbf{P}}_{n}}} + \beta \sum\limits_{l=1}^{L}{\mathbf{Q}_{l}^{*}{{{\mathbf{\tilde{F}}}}^{1\text{D,*}}}{{{\mathbf{\tilde{W}}}}^{*}}{{{\mathbf{\tilde{H}}}}^{*}}\mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}} \right)^{-1} \\
& \left ( \lambda {{\mathbf{F}}^{\text{PE,*}}}{{\mathbf{U}}^{*}}\mathbf{Y}_{m}^{\text{PE-P}}+\beta \sum\limits_{n=1}^{N}{\mathbf{P}_{n}^{*}{{{\mathbf{\tilde{H}}}}^{*}}\left( \mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k \right)}}-\frac{\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k \right)}}}{\beta } \right)} + \beta \sum\limits_{l=1}^{L}{\mathbf{Q}_{l}^{*}{{{\mathbf{\tilde{F}}}}^{1\text{D,*}}}{{{\mathbf{\tilde{W}}}}^{*}}{{{\mathbf{\tilde{H}}}}^{*}}\left( \mathbf{Z}{{_{l}^{\text{P}}}^{\left( k \right)}}-\frac{\mathbf{D}{{_{l}^{\text{P}}}^{\left( k \right)}}}{\beta } \right)} \right ).
\end{aligned}
\end{medsize}
\end{equation}
The solution of Eq. \eqref{(SHLR-P_Zpe)} can be obtained by
\begin{equation} \label{(SHLR-P_Zpe_solution)}
\mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k+1 \right)}} = {{S}_{{{\lambda }_{2}}/\beta }}\left( \mathbf{\tilde{H}}{{\mathbf{P}}_{n}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}}+\frac{\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k \right)}}}{\beta } \right).
\end{equation}
Similarly, Eq. \eqref{(SHLR-P_Zt)} can be solved by
\begin{equation} \label{(SHLR-P_Zt_solution)}
\mathbf{Z}{{_{l}^{\text{P}}}^{\left( k+1 \right)}}={{S}_{1/\beta }}\left( \mathbf{\tilde{H}\tilde{W}}{{{\mathbf{\tilde{F}}}}^{1\text{D}}}{{\mathbf{Q}}_{l}}\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}}+\frac{\mathbf{D}{{_{l}^{\text{P}}}^{\left( k \right)}}}{\beta } \right).
\end{equation}
The numerical algorithm for SHLR-VP is summarized in Algorithm \ref{alg:2}.
\begin{algorithm}[htb]
\footnotesize
\caption{Parameter imaging reconstruction with SHLR-VP.}\label{alg:2}
\hspace*{0.02in}{\bf{Input:}} ${{\mathbf{Y}}^{\text{Parameter}}}$, $\mathbf{U}$, $\lambda $, ${{\lambda }_{2}}$, $\beta $, $\tau $.\\
\hspace*{0.02in}{\bf{Output:}} ${\mathbf{X}^\text{Parameter}}$.
\begin{algorithmic}[1]
\WHILE{ $m \le M$ }
\STATE
{\bf{Initialization:}} $\mathbf{Z}_{n}^{\text{PE}}=\mathbf{Z}_{l}^{\text{P}}=\mathbf{0}$, $\mathbf{D}_{n}^{\text{PE}}=\mathbf{D}_{l}^{\text{P}}=\mathbf{1}$, and $k=1$;
\WHILE{ $k\le 100$ and $\left\| \mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}}-\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k \right)}} \right\|_{F}^{2}/\left\| \mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k \right)}} \right\|_{F}^{2}\ge {{10}^{-6}}$ }
\STATE
Update $\mathbf{X}{{_{m}^{\text{PE-P}}}^{\left( k+1 \right)}}$ by solving equation \eqref{(SHLR-P_X_solution)};
\STATE
Update $\mathbf{Z}{{_{n}^{\text{PE}}}^{\left( k+1 \right)}}$ and $\mathbf{Z}{{_{l}^{\text{P}}}^{\left( k+1 \right)}}$ by using \eqref{(SHLR-P_Zpe_solution)} and \eqref{(SHLR-P_Zt_solution)};
\STATE
Update multiplier $\mathbf{D}{{_{n}^{\text{PE}}}^{\left( k+1 \right)}}$ and $\mathbf{D}{{_{l}^{\text{P}}}^{\left( k+1 \right)}}$ by using \eqref{(SHLR-P_Dpe)} and \eqref{(SHLR-P_Dt)};
\STATE $ k = k + 1 $;
\ENDWHILE
\STATE $ m = m + 1 $;
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{More experimental results for parallel imaging}
In this section, more undersampling patterns and brain images are adopted for further validation in parallel imaging.
We adopted RLNE and MSSIM as evaluation criteria.
The RLNE is calculated as
\begin{equation}
\text{RLNE}=\frac{{{\left\| \mathbf{x}-\mathbf{\hat{x}} \right\|}_{2}}}{{{\left\| \mathbf{x} \right\|}_{2}}},
\end{equation}
where $\mathbf{x}$ and $\hat{\mathbf{x}}$ denote the vectorized fully sampled and reconstructed SSOS images, respectively.
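For intuition, since the RLNE is a relative $\ell_2$ error, an RLNE of $0.05$ means that the error energy amounts to ${{0.05}^{2}}=0.25\%$ of the energy of the fully sampled image.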
The mean measure of structural similarity (MSSIM) of two images $\mathbf{A}$ and $\mathbf{B}$ is calculated by
\begin{equation}
\text{MSSIM} \left( \mathbf{A},\mathbf{B} \right) = \frac{1}{R} \sum\limits_{i=1}^{R}{\frac{\left( 2{{\mu }_{{{\mathbf{a}}_{i}}}}{{\mu }_{{{\mathbf{b}}_{i}}}} + {{C}_{1}} \right) \left( 2{{\sigma }_{{{\mathbf{a}}_{i}}{{\mathbf{b}}_{i}}}} + {{C}_{2}} \right)}{\left( \mu _{{{\mathbf{a}}_{i}}}^{2} + \mu _{{{\mathbf{b}}_{i}}}^{2} + {{C}_{1}} \right) \left( \sigma _{{{\mathbf{a}}_{i}}}^{2} + \sigma _{{{\mathbf{b}}_{i}}}^{2} + {{C}_{2}} \right)}},
\end{equation}%
where $\mathbf{A}$ and $\mathbf{B}$ represent the fully sampled and reconstructed SSOS images, respectively; ${{\mu }_{{{\mathbf{a}}_{i}}}}$, ${{\mu }_{{{\mathbf{b}}_{i}}}}$, ${{\sigma }_{{{\mathbf{a}}_{i}}}}$, ${{\sigma }_{{{\mathbf{b}}_{i}}}}$ and ${{\sigma }_{{{\mathbf{a}}_{i}}{{\mathbf{b}}_{i}}}}$ denote the means, standard deviations and covariance of the local windows ${{\mathbf{a}}_{i}}$ and ${{\mathbf{b}}_{i}}$, respectively; and $R$ denotes the number of local windows. The constants ${{C}_{1}}$ and ${{C}_{2}}$ are introduced to avoid instability when $\mu _{{{\mathbf{a}}_{i}}}^{2}+\mu _{{{\mathbf{b}}_{i}}}^{2}$ or $\sigma _{{{\mathbf{a}}_{i}}}^{2}+\sigma _{{{\mathbf{b}}_{i}}}^{2}$ is close to zero.
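In the standard SSIM setting \cite{2004_MSSIM}, these constants are typically chosen as ${{C}_{1}}={{\left( {{K}_{1}}L \right)}^{2}}$ and ${{C}_{2}}={{\left( {{K}_{2}}L \right)}^{2}}$, with ${{K}_{1}}=0.01$, ${{K}_{2}}=0.03$, and $L$ the dynamic range of the pixel values.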
Here, we first validate the proposed method under a 2D random undersampling pattern. 2D random undersampling patterns, which have stronger incoherence, are adopted here to simulate 3D imaging with two phase encodings. As shown in Figs. \ref{fig_SM_2D_random} (b-e), all four approaches reconstruct artifact-free images. The reconstructed image of AC-LORAKS appears relatively noisier, but its error is randomly distributed inside the skull, which is preferred in clinical applications. The errors of $\ell_1$-SPIRiT, STDLR-SPIRiT and SHLR-SV have reached a promisingly low level, assuring reliable reconstructions. It can also be seen that SHLR-SV and STDLR-SPIRiT provide slightly better reconstructions than $\ell_1$-SPIRiT in terms of RLNE and MSSIM.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.4in]{./eps/Fig_SM_2D_random}
\caption{Parallel imaging reconstruction results and errors under the 2D random undersampling pattern. (a) An SSOS image of fully sampled data; (b-e) SSOS images of results reconstructed by $\ell_1$-SPIRiT, AC-LORAKS, STDLR-SPIRiT and SHLR-SV, respectively; (f) the 2D random undersampling pattern with a sampling rate of $0.18$; (g-j) the reconstruction error distributions ($12.5 \times$) corresponding to the reconstructed images above them. Note: the RLNE/MSSIM of (b-e) are $0.0537/0.9878$, $0.0595/0.9767$, $0.0413/ \mathbf{0.9926}$, $\mathbf{0.0396}/0.9925$, respectively.}
\label{fig_SM_2D_random}
\end{figure}
Similarly, SHLR-SV outperforms the compared methods under 2D random undersampling with PF. As shown in Fig. \ref{fig_SM_2D_PF} (g-j), the proposed method yields the reconstruction with the lowest error distribution. The metrics listed in Table \ref{Table_2D_table} also demonstrate the superiority of the proposed method.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.4in]{./eps/Fig_SM_2D_PF}
\caption{Parallel imaging reconstruction results and errors under the 2D random undersampling pattern with PF. (a) An SSOS image of fully sampled data; (b-e) SSOS images of results reconstructed by $\ell_1$-SPIRiT, AC-LORAKS, STDLR-SPIRiT and SHLR-SV, respectively; (f) the 2D random undersampling pattern with $4/5$ PF (total sampling rate $0.17$); (g-j) the reconstruction error distributions ($10 \times$) corresponding to the reconstructed images above them. Note: the RLNE/MSSIM of (b-e) are $0.0493/0.9874$, $0.0489/0.9743$, $0.0420/0.9908$, $\mathbf{0.0362}/ \mathbf{0.9916}$, respectively.}
\label{fig_SM_2D_PF}
\end{figure}
Besides, we adopted two more \textit{in vivo} brain datasets under six different undersampling patterns to demonstrate the performance of the proposed method (shown in Fig. \ref{fig_2D_dataset}). The two brain datasets were acquired from healthy volunteers. The first dataset, shown in Fig. \ref{fig_2D_dataset} (a), is acquired from a 3T SIEMENS MRI scanner (Siemens Healthcare, Erlangen, Germany) equipped with 32 coils using a T2-weighted turbo spin echo sequence (matrix size = $256 \times 256$, TR/TE = $6100 / 99$ ms, FOV = $220$ mm $\times$ $220$ mm, slice thickness = $3$ mm). Four virtual coils are compressed from the acquired data of 32 coils \cite{2013_Coil_Compression}. The other dataset, shown in Fig. \ref{fig_2D_dataset} (e), is obtained from a 3T SIEMENS MRI scanner (Siemens Healthcare, Erlangen, Germany) equipped with 32 coils using a 2D T1-weighted FLAIR sequence (matrix size = $256 \times 256$, TR/TE = $3900 / 9.3$ ms, FOV = $200$ mm $\times$ $200$ mm, slice thickness = $5$ mm). Eight virtual coils are compressed from the acquired data of 32 coils \cite{2013_Coil_Compression}.
The quality metric of reconstruction results are listed in Table \ref{Table_2D_table}. The same phenomenon as that in the main text can be observed. The proposed method in parallel imaging, SHLR-SV, is superior to $\ell_1$-SPIRiT and AC-LORAKS, and comparable to the STDLR-SPIRiT, in terms of RLNE and MSSIM. When it turns to undersampling patterns with PF, SHLR-SV performs slightly better results than STDLR-SPIRiT does.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=4.5in]{./eps/Fig_2D_dataset}
\caption{More parallel imaging brain images and undersampling patterns. (a) and (e) Two different brain images; (b) the uniform undersampling pattern with $R=6$ and $20$ ACS lines; (c) the Cartesian undersampling pattern with a sampling rate of 0.34; (d) the 2D random undersampling pattern with a sampling rate of 0.18; (f) the $R=6$ uniform undersampling pattern with $20$ ACS lines and $3/4$ PF (total sampling rate $0.20$); (g) the Cartesian undersampling pattern with $3/4$ PF (total sampling rate $0.30$); (h) the 2D random undersampling pattern with $4/5$ PF (total sampling rate $0.17$).}
\label{fig_2D_dataset}
\end{figure}
\begin{table}[htbp]
\centering
\caption{RLNE/MSSIM results for brains in Fig. \ref{fig_2D_dataset} using different undersampling patterns.}
\begin{tabular}{m{1.5cm}<{\centering}m{1.5cm}<{\centering}m{2.5cm}<{\centering}m{2.5cm}<{\centering}m{2.5cm}<{\centering}m{2.5cm}<{\centering}}
\toprule
Image & Pattern & $\ell_1$-SPIRiT & AC-LORAKS & STDLR-SPIRiT & SHLR-SV \\
\midrule
\multicolumn{1}{c}{\multirow{6}[2]{*}{Fig. \ref{fig_2D_dataset} (a)}} & Fig. \ref{fig_2D_dataset} (b) & 0.1062/0.9608 & 0.0710/0.9747 & 0.0713/0.9794 & \textbf{0.0611}/\textbf{0.9863} \\
& Fig. \ref{fig_2D_dataset} (c) & 0.0810/0.9794 & 0.0675/0.9793 & 0.0599/0.9883 & \textbf{0.0531}/\textbf{0.9901} \\
& Fig. \ref{fig_2D_dataset} (d) & 0.0704/0.9800 & 0.0698/0.9732 & 0.0551/0.9871 & \textbf{0.0530}/\textbf{0.9870} \\
& Fig. \ref{fig_2D_dataset} (f) & 0.1300/0.9508 & 0.0858/0.9655 & 0.0977/0.9721 & \textbf{0.0699}/\textbf{0.9832} \\
& Fig. \ref{fig_2D_dataset} (g) & 0.1101/0.9688 & 0.0920/0.9629 & 0.0884/0.9809 & \textbf{0.0698}/\textbf{0.9849} \\
& Fig. \ref{fig_2D_dataset} (h) & 0.0906/0.9870 & 0.0729/0.9730 & 0.0797/0.9840 & \textbf{0.0570}/\textbf{0.9871} \\
\midrule
\multicolumn{1}{c}{\multirow{6}[2]{*}{Fig. \ref{fig_2D_dataset} (e)}} & Fig. \ref{fig_2D_dataset} (b) & 0.0722/0.9578 & 0.0613/0.9446 & \textbf{0.0425}/0.9811 & 0.0499/\textbf{0.9820} \\
& Fig. \ref{fig_2D_dataset} (c) & 0.0397/0.9861 & 0.0410/0.9753 & 0.0298/0.9915 & \textbf{0.0297}/\textbf{0.9918} \\
& Fig. \ref{fig_2D_dataset} (d) & 0.0292/0.9897 & 0.0412/0.9619 & \textbf{0.0246}/0.9920 & 0.0254/\textbf{0.9923} \\
& Fig. \ref{fig_2D_dataset} (f) & 0.0804/0.9505 & 0.0652/0.9402 & 0.0493/0.9797 & \textbf{0.0483}/\textbf{0.9809} \\
& Fig. \ref{fig_2D_dataset} (g) & 0.0506/0.9823 & 0.0503/0.9698 & 0.0387/0.9895 & \textbf{0.0365}/\textbf{0.9897} \\
& Fig. \ref{fig_2D_dataset} (h) & 0.0419/0.9870 & 0.0401/0.9777 & 0.0363/0.9911 & \textbf{0.0308}/\textbf{0.9920} \\
\bottomrule
\end{tabular}%
\label{Table_2D_table}%
\end{table}%
\newpage
\section{More experimental results for parameter imaging}
In this section, we adopted two \textit{in vivo} T2 mapping datasets to verify the performance of the proposed method.
We first test the compared methods under the sampling pattern with partial Fourier. The dataset used here was fully sampled on a 3T SIEMENS MRI scanner (Siemens Healthcare, Erlangen, Germany) with 20 coils using a turbo spin echo sequence (TR = $4000$ ms, $15$ TEs of $8.2$, $16$, $25$, $33$, $41$, $49$, $58$, $66$, $74$, $82$, $91$, $99$, $107$, $115$, and $124$ ms, FOV = $220$ mm $\times$ $220$ mm, matrix size = $192 \times 192$, slice thickness = $4$ mm). We removed 4 coils of data with strong noise; finally, 16 coils of data were used for the reconstructions.
Looking at the reconstructed image at the $9$-th echo, we find that the MORASA image has obvious sampling artifacts (Fig. \ref{fig_mapping_PF} (b) and (e)). The artifacts can still be seen in its mapping results (Fig. \ref{fig_mapping_PF} (i) and (l)). Both ALOHA and SHLR-VP remove the sampling artifacts promisingly (Fig. \ref{fig_mapping_PF} (c) and (d)). However, a significant difference can still be seen between their error images (Fig. \ref{fig_mapping_PF} (f) and (g)): the SHLR-VP error is much smaller than that of ALOHA, and the associated mapping result of SHLR-VP is more consistent with the fully sampled reference (please see the zoom-in images in Fig. \ref{fig_mapping_PF} (j) and (k)).
\begin{figure*}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.0in]{./eps/Fig_mapping_PF}
\caption{T2 mapping reconstruction results and errors under $R=8$ undersampling pattern with PF. (a) The ($9$-th echo) images of fully sampled data; (b-d) reconstructed images of MORASA, ALOHA, and SHLR-VP, respectively; (e-g)) the reconstruction error distribution ($8 \times$) corresponding to reconstructed image above them; (h) T2 map of fully sampled data; (i-k) T2 maps of reconstructed results at R=8 by MORASA, ALOHA, and SHLR-VP, respectively; (l-n) the reconstruction error distribution ($10 \times$) corresponding to reconstructed image above them. Note: the average RLNE/MSSIM of reconstructed images of MORASA, ALOHA, and SHLR-VP are $0.0967/0.9577$, $0.0599/0.9807$, and $\mathbf{0.0563} / \mathbf{0.9828}$, respectively. The RLNE/MSSIM of reconstructed T2 maps of MORASA, ALOHA, and SHLR-VP are $0.1404/0.9794$, $0.0983/0.9889$, and $\mathbf{0.0966} / \mathbf{0.9904}$, respectively.}
\label{fig_mapping_PF}
\end{figure*}
Besides, we conducted experiments on undersampling patterns with different acceleration factors and with/without PF. Another slice of the T2 mapping data shown in Fig. 10 in the main text was adopted here. The results are summarized in Table \ref{Table_mapping_table}. As shown in Table \ref{Table_mapping_table}, the proposed method for parameter imaging, SHLR-VP, yields reconstructions with lower error and better structural preservation than the compared methods in both the reconstructed images and the estimated T2 maps. Especially when the acceleration factor is higher ($R=8$) or when the undersampling patterns include PF, the improvement gained by SHLR-VP is more obvious.
\begin{table}[htbp]
\centering
\caption{RLNE/MSSIM of parameter imaging reconstruction under different undersampling patterns.}
\begin{tabular}{m{3cm}<{\centering}m{3cm}<{\centering}m{3cm}<{\centering}m{3cm}<{\centering}}
\toprule
Pattern & MORASA & ALOHA & SHLR-VP \\
\midrule
\multicolumn{4}{p{12cm}}{Average RLNE/MSSIM of reconstructed images} \\
$R=6$ & 0.0730/0.9844 & 0.0621/0.9819 & \textbf{0.0611}/\textbf{0.9882} \\
$R=8$ & 0.0840/0.9812 & 0.0793/0.9775 & \textbf{0.0718}/\textbf{0.9843} \\
$R=6$ with PF & 0.0840/0.9793 & 0.0776/0.9752 & \textbf{0.0722}/\textbf{0.9836} \\
$R=8$ with PF & 0.0967/0.9718 & 0.0929/0.9698 & \textbf{0.0815}/\textbf{0.9802} \\
\midrule
\multicolumn{4}{p{12cm}}{RLNE/MSSIM of reconstructed T2 maps} \\
$R=6$ & 0.1080/0.9881 & \textbf{0.0917}/0.9897 & 0.0921/\textbf{0.9909} \\
$R=8$ & 0.1182/0.9852 & 0.1201/0.9836 & \textbf{0.1077}/\textbf{0.9873} \\
$R=6$ with PF & 0.1309/0.9830 & 0.1370/0.9790 & \textbf{0.1113}/\textbf{0.9861} \\
$R=8$ with PF & 0.1423/0.9783 & 0.1458/0.9762 & \textbf{0.1147}/\textbf{0.9854} \\
\bottomrule
\end{tabular}%
\label{Table_mapping_table}%
\end{table}%
\newpage
\section{Parameter selection of the proposed approaches}
In this section, the effect of the parameter settings of the two proposed methods is discussed, including the regularization parameters ${{\lambda }}$ and ${{\lambda }_{1}}$ for parallel imaging and ${{\lambda }}$ and ${{\lambda }_{2}}$ for parameter imaging.
The brain image in Fig. 8 (a) and the sampling pattern in Fig. 8 (f) have been used to perform the parallel imaging reconstructions, and the T2 mapping data shown in Fig. 10 with the $R=8$ reduction factor have been adopted for the parameter imaging reconstruction.
The reconstruction errors versus different values of the regularization parameters of SHLR-SV are shown in Fig. \ref{fig_lambda} (a). As can be seen, there exists a wide range of ${{\lambda_{1}}}$ (${{10}^{0}}\le {{\lambda}_{1}}\le {{10}^{4}}$) and ${{\lambda}}$ (${{10}^{3}}\le {{\lambda}}\le {{10}^{6}}$) leading to relatively low reconstruction errors, whereas too small or too large values of ${{\lambda}}$ and ${{\lambda}_{1}}$ produce higher reconstruction errors. A similar trend is observed in the parameter imaging results (Fig. \ref{fig_lambda} (b)), but the ranges of $\lambda$ and $\lambda_{2}$ are smaller: there exists a range of ${{\lambda_{2}}}$ ($1 \le {{\lambda}_{2}}\le 5$) and ${{\lambda}}$ (${{10}^{3}}\le {{\lambda}}\le {{10}^{5}}$) leading to relatively low reconstruction errors.
\begin{figure}[htbp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=6.0in]{./eps/Fig_lambda.png}
\caption{RLNEs versus regularization parameters of SHLR-SV and SHLR-VP. (a) RLNEs versus $\lambda$ and ${\lambda}_{1}$ in SHLR-SV, (b) RLNEs versus ${\lambda}$ and ${\lambda }_{2}$ in SHLR-VP.}
\label{fig_lambda}
\end{figure}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
Constructing meta-stable de Sitter (dS) vacua in string theory is undoubtedly one of the most challenging problems in contemporary physics that still remains unsolved (see e.g. the arguments in \cite{Danielsson:2018ztv,Obied:2018sgi,vanbeest2021lectures}). In \cite{Banerjee:2018qey} it was proposed that a dS universe can be realized in a string theory setting through a braneworld scenario similar to a Randall-Sundrum (RS) or Karch-Randall (KR) set up \cite{RS1, RS2, KR}. The brane corresponds to an expanding Coleman-de Luccia (CL) bubble of a true vacuum inside a false, unstable, five-dimensional anti de Sitter ($\text{AdS}$) vacuum. The motivation for this model comes from the ubiquity of AdS vacua in string theory which, if non-supersymmetric, are believed to be unstable as a consequence of the weak gravity conjecture \cite{ooguri2017nonsupersymmetric,freivogel2016vacua,2016mtxDanielsson}. The dark bubble scenario thus emerges naturally whenever there is a codimension one brane present in the theory that can mediate the decay of the false $\text{AdS}_5$. Its advantage is twofold: it evades the swampland constraints of constructing `fundamental' dS vacua based on pure compactifications of string theory \cite{Banjeree2021a} and, at the same time, it provides a concrete realisation of a braneworld in string theory with an inside-outside construction (in contrast to the inside-inside structure of RS or KR models). In \cite{Koga:2019yzj,Koga:2020jok,Basile:2021vxh,BasileNoGos} the physics of nucleation in $\text{AdS}_5$ was further studied, while \cite{Berglund2021} discusses a concrete realization of the dark bubble through a resolved Calabi-Yau conifold.
The proposed model realises an effective dS vacuum with lower dimensional observers being confined to the bubble boundary, where they perceive an expanding FLRW cosmology with a positive cosmological constant (CC). This model was studied in \cite{Banerjee:2018qey,Banerjee:2019fzz, Banerjee:2020wov, Banerjee:2020wix,Danielsson:2021tyb}. The power of this dark bubble model is that all features in the 4D cosmology will find an interpretation in the bulk:
\begin{itemize}
\item Four-dimensional gravity arises as an effective description. Indeed, it was shown in \cite{Banerjee:2019fzz} how the 4D Einstein equation follows from the junction condition across the brane. The brane geometry is sourced by the energy-momentum tensor of the brane itself (which for empty branes acts as a cosmological constant) and by contributions from the higher dimensional geometry.
\item The value of the four-dimensional cosmological constant is set by the differences amongst the AdS scales on the inside and outside of the bubble and by the brane tension. Its positivity is guaranteed by the occurrence of a nucleation event. To construct a phenomenological 4D cosmology with a small vacuum energy, one requires a modest hierarchy where the AdS scales are smaller than the 5D Planck scale.
\item The familiar dust and radiation components of the 4D cosmic fluid correspond to, respectively, stretched strings and matter in the bulk \cite{Banerjee:2019fzz}.
\item What a lower-dimensional observer would call `the Big Bang' has the bulk interpretation of a well-understood nucleation event \`a la Brown-Teitelboim (BT) \cite{Brown:1988kg}. From the higher dimensional perspective, the Big Bang does therefore not appear as a singularity. This was explored in \cite{Danielsson:2021tyb}.
\end{itemize}
The higher-dimensional interpretation of the Big Bang is particularly interesting as it provides a connection with quantum cosmology. In this inherently 4D framework, one uses canonical quantisation to derive the Wheeler-DeWitt (WdW) equation
\begin{equation}
\mathcal{H}\Psi[g, \phi] = 0, \label{eq: WdW}
\end{equation}
where $\mathcal{H}$ is the Hamiltonian and $\Psi[g, \phi ] $ is the wave function of the universe defined on the space of all three-geometries $g$, as well as on the space of all matter fields $\phi$ that are present. A common prediction of quantum cosmology is that a spherical universe can spontaneously nucleate out of `nothing', similar to the nucleation event in our dark bubble model. Importantly, any attempt to solve the Schr\"odinger-like equation \eqref{eq: WdW} requires the additional input of boundary conditions, which is a highly non-trivial issue. Two natural choices are Vilenkin's tunneling proposal \cite{VILENKIN198225, VILENKIN1984} and the no-boundary proposal of Hartle and Hawking \cite{HartleHawking}. From the 4D perspective of quantum cosmology neither one seems to be physically favoured\footnote{From a perspective based on Swampland criteria, it is only Vilenkin's proposal that could be realized in string theory \cite{BasileNoGos}.}. In our opinion this should not come as a surprise: boundary conditions should be able to account for the UV behaviour of gravity (for an example see e.g. \cite{Hertog:2021jyd}), but this is not encapsulated by any GR inspired Hamiltonian $\mathcal{H}$. The dark bubble model has the advantage of providing a UV completion of 4D gravity on the bubble making it possible to explore the issue of boundary conditions. Keeping the scale factor as the only dynamical variable, it was shown in \cite{Danielsson:2021tyb} that the BT amplitude in 5D perfectly matches Vilenkin's tunneling amplitude in 4D quantum cosmology.
This tunneling wave function has recently been the subject of debate when perturbations about a homogeneous and isotropic background were included. The most natural fluctuations to consider from the point of view of the dark bubble are of the gravitational type, and we refrain from introducing any other kinds of fields. The 4D Hamiltonian is then of the form
\begin{equation}
\mathcal{H} = \frac{\kappa_4}{24 \pi^2 a}\frac{\partial^2}{\partial a^2} - \frac{\kappa_4}{2a^3}\frac{\partial^2}{\partial h_n^2} - \frac{6 \pi^2}{\kappa_4}\left( a-\frac{\Lambda_4}{3}a^3\right) + \frac{1}{\kappa_4}a (n^2-1)h_n^2,
\end{equation}
where $a$ is the scale factor and $h_n$ represents a specific transverse tracefree tensor mode of $S^3$ labelled by three quantum numbers $(n,\ell, m)$. We have defined $\kappa_4 = 8\pi G_4$. For notational simplicity, mode indices will generically be suppressed. In \cite{Feldbrugge2017, feldbrugge2018inconsistencies, PhysRevD.95.103508,PhysRevD.96.043505,PhysRevLett.121.081302} it was argued that such perturbations are unbounded and would ultimately destroy Vilenkin's quantum cosmology. However, in \cite{Vilenkin2018, Vilenkin:2018oja, DiTucci2019,PhysRevD.100.043544} it was argued that this problem can be avoided by imposing suitable boundary conditions on the perturbation amplitudes. Our embedding of Vilenkin into the dark bubble also suggests that such an instability cannot be present, since there is no reason why the nucleation event in AdS would be unphysical.
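Before turning to the perturbations, it is useful to record the homogeneous limit for later reference. Setting $h_n=0$ (and ignoring operator-ordering ambiguities), the constraint $\mathcal{H}\Psi=0$ reduces to
\begin{equation}
\left[\frac{\partial^2}{\partial a^2} - \left(\frac{12 \pi^2}{\kappa_4}\right)^2 a^2\left(1-\frac{\Lambda_4}{3}a^2\right)\right]\Psi(a) = 0,
\end{equation}
with a classically forbidden region $0<a<\sqrt{3/\Lambda_4}$. The WKB under-barrier factor of the decaying branch is $\exp\left(-\frac{12\pi^2}{\kappa_4}\int_0^{\sqrt{3/\Lambda_4}} a\sqrt{1-\Lambda_4 a^2/3}\,\mathrm{d}a\right)=\exp\left(-12\pi^2/(\kappa_4\Lambda_4)\right)$, reproducing, up to prefactors, the familiar tunneling suppression $|\Psi|^2\sim \exp\left(-24\pi^2/(\kappa_4\Lambda_4)\right)=\exp\left(-3\pi/(G_4\Lambda_4)\right)$.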
In this paper we will construct the uplift of 4D gravitational waves to 5D ones. This is necessary if we want to map a mini-superspace model of such perturbations onto a dark bubble nucleation event. For the purpose of writing down the junction condition, one is obliged to find the backreaction of the waves on the bulk geometry. This is a highly non-trivial task that we will tackle perturbatively in the metric
\begin{equation}\label{perturbedMetric}
g_{\mu\nu} = g_{\mu\nu}^{(0)} + \xi g_{\mu\nu}^{(1)} + \xi^2 g_{\mu\nu}^{(2)} + \mathcal{O}(\xi^3),
\end{equation}
where $\xi$ is a formal expansion parameter. The conventional procedure is that one solves the Einstein equation order by order in $\xi$. Plugging the previous expression into the Einstein equation, one finds:
\begin{equation}\label{pertubedEinstein}
G_{\mu\nu} +\Lambda g_{\mu\nu}= \left(G_{\mu\nu}^{(0)}[g^{(0)}] + \Lambda g_{\mu\nu}^{(0)}\right)+\xi\left(G_{\mu\nu}^{(1)}[g^{(1)}] + \Lambda g_{\mu\nu}^{(1)}\right)+\xi^2\left(G_{\mu\nu}^{(2)}[g^{(1)}]+G_{\mu\nu}^{(1)}[g^{(2)}] + \Lambda g_{\mu\nu}^{(2)}\right) + \mathcal{O}(\xi^3) = 0,
\end{equation}
where $G^{(i)}_{\mu\nu}[g^{(j)}]$ denotes the $i$-th order variation of the Einstein tensor evaluated on the $j$-th order metric perturbation. Since the $i$-th variation is $i$-linear in its argument, this is formally a quantity of order $i\cdot j$ in $\xi$.
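The bookkeeping of this expansion can be illustrated with a scalar toy model. In the sympy sketch below (ours; the operator $g''+g^2$ is merely a schematic stand-in for the Einstein tensor, not the actual field equation) the nonlinear equation is collected order by order in $\xi$, showing how the square of the first-order solution ends up as an effective source in the second-order equation, in the same way as in \eqref{eq: Second order Einstein equation} below.
\begin{verbatim}
import sympy as sp

x, xi = sp.symbols('x xi')
g0, g1, g2 = (sp.Function(f'g{i}')(x) for i in range(3))
g = g0 + xi*g1 + xi**2*g2

N = sp.expand(sp.diff(g, x, 2) + g**2)   # schematic nonlinear field equation
for order in range(3):
    print(order, N.coeff(xi, order))
# order 0: g0'' + g0**2                 (background equation)
# order 1: g1'' + 2*g0*g1               (linear in g1: the wave equation)
# order 2: g2'' + 2*g0*g2 + g1**2       (g1**2 acts as the effective source)
\end{verbatim}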
At zeroth order, the Einstein equation simply yields the background geometry $g_{\mu\nu}^{(0)}$. In the dark bubble scenario this could, for instance, correspond to a gas of strings inside a Schwarzschild-AdS space. We will, for simplicity, consider a background of pure AdS.
Gravitational waves (GW) appear at first order in $\xi$ through the linearized Einstein equation (the GW equation), which schematically can be understood as solutions to
\begin{equation}
G_{\mu\nu}^{(1)}[g^{(1)}] + \Lambda g_{\mu\nu}^{(1)}=0.
\end{equation}
The GW in the dark bubble model must satisfy specific requirements given that the 5D bulk induces a 4D metric on the dark bubble constrained by the junction conditions.
The second order Einstein equation can now be written as
\begin{equation}
G_{\mu\nu}^{(1)}[g^{(2)}] + \Lambda g_{\mu\nu}^{(2)} = -G_{\mu\nu}^{(2)}[g^{(1)}]. \label{eq: Second order Einstein equation}
\end{equation}
This can in principle be solved to give $g^{(2)}$. This tells us how the geometry reacts to the presence of the gravitational wave $g^{(1)}$. The RHS can be interpreted as an effective energy-momentum tensor $\langle T_{\mu\nu}\rangle\equiv -\kappa_D^{-1}\langle G_{\mu\nu}^{(2)}[g^{(1)}]\rangle$, where $\kappa_D$ is the gravitational constant in $D$ dimensions. This energy-momentum tensor is quadratic in $g^{(1)}$. The angular bracket $\langle\cdot\rangle$ denotes an averaging procedure over several wavelengths that is required for a proper interpretation, see e.g. \cite{Isaacson1968}. Observe the overall minus sign in the definition of this energy-momentum tensor $\langle T_{\mu\nu}\rangle$. We will see the fundamental relevance of this sign's presence throughout this paper. This effective energy-momentum term can also be captured by a backreacted background metric. This fact will be of great importance as, once the backreaction is accounted for, the junction condition will dictate how gravitational waves in the bulk will affect the evolution of the four-dimensional bubble.
The outline of the paper is as follows. In section \ref{section: review} we review the dark bubble model and introduce the tools that we need. In section \ref{section: 4Dwaves} we discuss 4D gravitational waves in an expanding universe. In section \ref{section: 5Dwaves} we perform the uplift into the 5D bulk, and perform consistency checks between 4D and 5D. This interplay between the bulk and boundary features will be examined in section \ref{section: Duality}. Finally, we discuss the importance and interpretation of our results.
\section{Review of the dark bubble model}\label{section: review}
\subsection{Friedmann cosmology}
It was proposed in \cite{Banerjee:2018qey} that a $\text{dS}_4$ cosmology can be obtained as the induced 4D metric on a co-dimension one bubble in $\text{AdS}_{5}$. The 5D bulk geometries inside and outside the bubble correspond to $\text{AdS}_{5}$ vacua
\begin{equation}
\textrm{d} s^2_\pm = g^\pm_{\mu\nu}\textrm{d} x^\mu\textrm{d} x^\nu = -f_\pm(z) \textrm{d} t^2_\pm + \frac{\textrm{d} z^2}{f_\pm(z)} + z^2\textrm{d} \Omega_3^2, \label{eq1: bulk metric}
\end{equation}
where $-(+)$ refers to the inside (outside) of the bubble, $\textrm{d}\Omega_3^2=\gamma_{ij}\textrm{d} x^i\textrm{d} x^j$ is the metric on $S^3$, and $f_\pm$ is for pure $\text{AdS}_5$ given by
\begin{equation}
f_\pm(z) = 1+k_\pm^2z^2.
\end{equation}
In the following, we will omit the $\pm$ subscript for notational simplicity. The constant $k$ defines the $\text{AdS}_5$ scale (i.e. $L_{\rm AdS} = 1/k$) and the 5D cosmological constant is given by $\Lambda_5= -6k^2$. A false (outside) $\text{AdS}_5^+$ vacuum can decay to a true (inside) $\text{AdS}_5^-$ vacuum via the nucleation of a spherical Brown-Teitelboim (BT) instanton \cite{Brown:1988kg}, provided $k_->k_+$. Once nucleated, the bubble expands rapidly, thereby eating all of $\text{AdS}_5^+$ in a finite time. The bubble can be described by specifying its radius $z=a(\tau)$, where $\tau$ is some time parameter on the bubble. We will assume that the bubble is sufficiently large: $ka\gg1$. The induced metric on the bubble wall is exactly of the FLRW form
\begin{equation}
\textrm{d} s^{2}_{\rm ind} = - N^2(\tau)\textrm{d}\tau^{2} + a(\tau)^{2} \textrm{d} \Omega_{3}^{2},\label{eq: FLRW}
\end{equation}
where a lapse function $N$ has been introduced to make time reparametrization invariance manifest. The relation between bulk time $t$ and brane time $\tau$ is given by
\begin{equation}
N^2(\tau) = f(a)\dot{t}^2-\frac{\dot{a}^2}{f(a)}, \label{eq: bulk vs brane time}
\end{equation}
where a dot denotes a $\tau$-derivative.
The expansion of the bubble is governed by Israel's junction conditions:
\begin{equation}
\sigma = \frac{3}{\kappa_{5}}\left(\sqrt{\frac{f_-(a)}{a^2}+\frac{\dot{a}^2}{N^2a^2}}-\sqrt{\frac{f_+(a)}{a^2}+\frac{\dot{a}^2}{N^2a^2}}\right), \label{eq: junction condition}
\end{equation}
where $\sigma$ corresponds to the tension of the bubble wall. By expanding the square root, the first Friedmann equation can be extracted
\begin{equation}
\frac{1}{N^2}\left(\frac{\dot{a}}{a}\right)^2 = \frac{\kappa_4}{3}\rho_\Lambda - \frac{1}{a^2},
\end{equation}
where the 4D gravitational constant is identified with
\begin{equation}
\kappa_4=\frac{2k_-k_+}{k_--k_+}\kappa_5,
\end{equation}
and where the 4D cosmological constant is determined by $ \rho_\Lambda = \sigma_{\rm cr} -\sigma$, with $\sigma_{\rm cr}$ the critical brane tension at which the bubble remains static
\begin{equation}
\sigma_{\rm cr} = \frac{3}{\kappa_{5}} (k_{-}-k_{+}).
\end{equation}
The Friedmann equation only admits real solutions if $\sigma<\sigma_{\rm cr}$. From the 5D perspective, this means that bubbles with a tension greater than $\sigma_{\rm cr}$ simply cannot nucleate.
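The expansion leading to the Friedmann equation can be automated. In the sympy sketch below (ours; $\epsilon$ is a bookkeeping parameter tagging the terms suppressed by $1/(ka)^2$, and we set $N=1$) the junction condition is expanded to first order, recovering the Friedmann equation together with the identification of $\kappa_4$.
\begin{verbatim}
import sympy as sp

a, adot, sig, kap5, eps = sp.symbols('a adot sigma kappa_5 epsilon',
                                     positive=True)
km, kp = sp.symbols('k_minus k_plus', positive=True)

s = (1 + adot**2)/a**2                 # small compared with k^2 when ka >> 1
lhs = 3/kap5*(sp.sqrt(km**2 + eps*s) - sp.sqrt(kp**2 + eps*s))
lhs = sp.series(lhs, eps, 0, 2).removeO().subs(eps, 1)

H2 = sp.solve(sp.Eq(sig, lhs), adot**2)[0]/a**2    # (adot/a)^2
kap4 = 2*km*kp/(km - kp)*kap5
rho_L = 3*(km - kp)/kap5 - sig                     # sigma_cr - sigma
print(sp.simplify(H2 - (kap4*rho_L/3 - 1/a**2)))   # -> 0
\end{verbatim}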
For a more general bulk metric that corresponds to a gas of strings in a Schwarzschild-AdS space, the function $f$ is given by
\begin{equation}
f(r) = 1 + k^2r^2 - \frac{\kappa_5M}{3\pi^2 r^2}-\frac{\kappa_5\alpha}{4\pi r}.
\end{equation}
Through the junction condition \eqref{eq: junction condition}, one can identify several different contributions to the Friedmann equation
\begin{equation}
\frac{1}{N^2}\left(\frac{\dot{a}}{a}\right)^2 = - \frac{1}{a^2} + \frac{\kappa_4}{3}\rho_{\Lambda} + \frac{\kappa_4}{3}\rho_{\rm r}a^{-4}+\frac{\kappa_4}{3}\rho_{\rm m}a^{-3}. \label{eq0: Friedmann}
\end{equation}
The vacuum energy $\rho_\Lambda$, the radiation density $\rho_{\rm r}$ and the matter density $\rho_{\rm m}$ find their origin in the bulk geometry
\begin{align}
\rho_\Lambda \approx \sigma_{\rm cr}-\sigma&& \rho_{\rm r} \approx \frac{1}{2\pi^2}\left(\frac{M_+}{k_+}-\frac{M_-}{k_-}\right) &&\rho_{\rm m} \approx \frac{3}{8\pi}\left(\frac{\alpha_+}{k_+}-\frac{\alpha_-}{k_-}\right). \label{eq: friedmann components}
\end{align}
We conclude that a bulk black hole with mass $M$ gives rise to radiation in the 4D world, while a gas of stretched strings with average density $\alpha$ gives rise to dust.
\subsection{The general case}
In the following we will use Greek indices when referring to the five-dimensional (bulk) geometry and Latin indices for quantities associated with the induced geometry. In general, the different bulk metrics across the bubble's wall cause the presence of an energy-momentum tensor $S^{a}_{b}$ on the brane. This is captured by the second Israel junction condition:
\begin{equation}
\kappa_{5}S_{ab} = \left.\left[K_{ab}-Kh_{ab}\right]\right|^-_+, \label{eq: junction condition 2}
\end{equation}
where $[A]_+^- = A_--A_+$, $K_{ab} = \nabla_{\beta} n_{\alpha} \: e^{\alpha}_{a}\: e^{\beta}_{b}$, with $n_{\alpha}$ being a unit normal vector\footnote{Pointing in the direction where the bubble's volume increases.} to the wall and $e_{a}^{\alpha}$ its tangent vector. $h_{ab}$ is the induced metric on the wall. $K_{ab}$ (with trace $K$) represents the extrinsic curvature, which carries information about the bubble's embedding in the bulk geometry. For the sake of simplicity, we will consider the case where the wall is a simple empty brane with $S_{ab}=-\sigma h_{ab}$, where $\sigma$ is the brane tension.
To extract the 4D Einstein equations in the general case, one can make use of the Gauss-Codazzi equation
\begin{equation}
R^{(5)}_{\alpha\beta\gamma\delta}e^\alpha_a e^\beta_b e^\gamma_c e^\delta_d = R^{(4)}_{abcd} + K_{ad}K_{bc}-K_{ac}K_{bd}, \label{GaussCodazzi}
\end{equation}
which connects the extrinsic curvature $K_{ab}$ and the intrinsic curvature of the brane to the projected bulk curvature. Inserting the Gauss-Codazzi equation (and contractions thereof) into the junction condition \eqref{eq: junction condition 2} eliminates the extrinsic curvature in favor of the energy-momentum tensor. Eventually one finds
\begin{equation}
G_{ab}^{(4)} = \left(\kappa_4\sigma-3k_+k_-\right)h_{ab} + \frac{k_+k_-}{k_--k_+}\left[\frac{\mathcal{J}_{ab}^+}{k_+}-\frac{\mathcal{J}_{ab}^-}{k_-}-\frac{1}{2}\left(\frac{\mathcal{J}^+}{k_+}-\frac{\mathcal{J}^-}{k_-}\right)h_{ab}\right] + \mathcal{O}\left((\kappa_4\Lambda_4)^2\right), \label{ProjectedEinsteinEqs}
\end{equation}
where $\mathcal{J}_{ab}$ is a tensor defined by
\begin{equation}
\mathcal{J}_{ab} = R^{(5)}_{\alpha\beta\gamma\delta}e^\alpha_a e^\beta_b e^\gamma_c e^\delta_d h^{cd}.
\end{equation}
Thus the four-dimensional geometry is sourced by the bulk geometry through the tensor $\mathcal{J}_{ab}$. One can verify that this expression reproduces the FLRW case reviewed in the previous section. Note that even in pure $\text{AdS}_5$, $\mathcal{J}_{ab}$ has a contribution $-3k^2 h_{ab}$, which yields a net cosmological constant given by
\begin{equation}
\Lambda_4 =6 k_+ k_- - \kappa_4 \sigma = \kappa_4 \left(\frac{3}{\kappa_5}(k_- -k_+ ) - \sigma \right).
\end{equation}
In the projected Einstein equation above we see how the bulk geometry induces matter in the effective 4D theory, which then sources the 4D Einstein equations. In \cite{Banerjee:2019fzz} it was shown how a localized matter source in 4D, such as a massive particle, is uplifted into a string that stretches into the bulk, similar to the hanging strings representing quarks in holography. Contrary to Randall-Sundrum \cite{RS1,RS2} or Karch-Randall \cite{KR} models, neither gravity nor matter is localized to the brane but extends holographically into the bulk. It was shown in \cite{Banerjee:2020wov} how the gravitational attraction between two stretched strings in the bulk projects down to the gravitational attraction between two point particles in 4D.
In the rest of the paper we will further study the interplay between 5D and 4D. In particular, we will discuss gravitational waves and their backreaction on the metric. In the dark bubble model the AdS scale $k$ is assumed to be a UV scale that is somewhere between the scales of particle physics and the Planck scale. In the present paper we will for simplicity focus on regimes where the Hubble scale $H$ is much smaller than any such UV-scale, i.e. $H \ll k$. As we will briefly mention, it is in principle possible to relax this assumption.
\section{Gravitational waves in a 4D expanding universe}\label{section: 4Dwaves}
In this section we will review gravitational waves in an expanding FLRW cosmology with a flat or spherical topology. When the wavelength is sufficiently small, waves in a flat universe serve as a proxy for those in a spherical universe. Indeed, high-frequency waves only probe small regions and do not feel the curvature at larger scales. Gravitational waves are described by transverse-tracefree (TT) perturbations to the metric \eqref{eq: FLRW}. In the conformal time gauge, these are\footnote{Note that in our conventions the coordinates $(\eta,x^i)$ are dimensionless and $a$ has a dimension of length.}
\begin{equation}
\textrm{d} s^2=a^2(\eta)\left[-\textrm{d}\eta^2+\left(\gamma_{ij}+\xi h_{ij}(\eta,x)\right)\textrm{d} x^i\textrm{d} x^j\right], \label{eq: perturbed FLRW}
\end{equation}
where $\gamma_{ij}$ is the metric on a spatial slice in $x^i$-coordinates and $h_{ij}$ is transverse and tracefree. In the following we will, for simplicity, ignore contributions from matter and radiation and consider a pure 4D dS cosmology with positive cosmological constant $\Lambda_4$ only. We then have that the 4D Hubble constant is given by $H^2 = \kappa_4 \rho_\Lambda/3 = \Lambda_4/3$.
\subsection{Flat universe}\label{Flat universe}
The scale factor for a flat dS universe is $a(\eta) = -1/(H\eta)$ with $-\infty<\eta<0$. For concreteness, we will consider a GW travelling in the $x_1$ direction\footnote{It is a trivial modification to the given analysis to consider a wave in any other direction.} with either a $+$ or $\times$ polarization. The perturbation can be expanded into harmonics on the spatial manifold. The first-order Einstein equation then yields a wave equation for each mode separately. For a single mode $h_{\rm 4D}(\eta,x_1) = e^{iqx_1}h_{\rm 4D}(\eta)$, labelled by some continuous wave number $q$, one finds
\begin{equation}
\frac{\textrm{d}^2h_{\rm 4D}}{\textrm{d}\eta^2} + 2\mathcal{H}\frac{\textrm{d} h_{\rm 4D}}{\textrm{d} \eta}+q^2h_{\rm 4D} = 0,
\end{equation}
where $\mathcal{H} = -1/\eta$ is the conformal Hubble rate. Solutions are easily found and given by
\begin{equation}
h_{\rm 4D}(\eta) = -\eta\cos(q\eta + \phi_0)+\frac{1}{q}\sin(q\eta+\phi_0),
\label{fourdimensionalsolu}
\end{equation}
where $\phi_0$ is an arbitrary phase. The wave $h_{\rm 4D}$ freezes out to a constant at late times (which happens to be zero if $\phi_0=0$).
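This solution is easy to verify symbolically. The following sympy sketch (ours) confirms that \eqref{fourdimensionalsolu} solves the mode equation with $\mathcal{H}=-1/\eta$, and that the frozen late-time amplitude is $\sin\phi_0/q$.
\begin{verbatim}
import sympy as sp

eta, q, phi0 = sp.symbols('eta q phi0', real=True)
h = -eta*sp.cos(q*eta + phi0) + sp.sin(q*eta + phi0)/q
Hc = -1/eta                        # conformal Hubble rate, a = -1/(H eta)

residual = sp.diff(h, eta, 2) + 2*Hc*sp.diff(h, eta) + q**2*h
print(sp.simplify(residual))       # -> 0
print(sp.limit(h, eta, 0, '-'))    # -> sin(phi0)/q, the frozen amplitude
\end{verbatim}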
\subsection{Closed universe}\label{Closed universe}
A spherical dS universe is a bouncing cosmology with scale factor $a(\eta) = -1/(H\sin\eta)$ with $-\pi/2\leqslant\eta<0$. The moment $\eta= -\pi/2$ is the bounce, which coincides with the moment of nucleation. In a similar vein, one can expand the perturbation in TT harmonics\footnote{We adopt the convention that $S^{3}$ harmonics $Y_{ij}$ satisfy $\triangle Y_{ij} = -(n^2-3)Y_{ij}$ with $n\geqslant3$ with $\triangle$ the Laplacian on $S^3$. Note that mode indices are suppressed. See e.g. \cite{Gerlach, Lindblom}.} on $S^3$. For a single mode $h_{ij}(\eta, x) = h_{\rm 4D}(\eta)Y_{ij}(x)$, labelled by some discrete wave number $n$, the GW equation is
\begin{equation}
\frac{\textrm{d}^2h_{\rm 4D}}{\textrm{d}\eta^2} + 2\mathcal{H}\frac{\textrm{d} h_{\rm 4D}}{\textrm{d} \eta}+(n^2-1)h_{\rm 4D} = 0, \label{eq: Hawking eq}
\end{equation}
where a prime denotes a derivative with respect to conformal time and $\mathcal{H}=a'/a=-\cot\eta$ is the conformal Hubble parameter. It is useful to redefine the time coordinate as
\begin{equation}
v = \cos\eta,\qquad\qquad \text{with }\:v\:\in\: [0,1). \label{eq: v}
\end{equation}
The GW equation \eqref{eq: Hawking eq} then becomes
\begin{equation}
\left(1-v^2\right)\frac{\textrm{d}^2h_{\rm 4D}}{\textrm{d} v^2}+ v\frac{\textrm{d} h_{\rm 4D}}{\textrm{d} v}+(n^2-1)h_{\rm 4D} = 0.
\end{equation}
With three regular singular points ($v=-1,+1,\infty$), it is well-known that this differential equation can be converted to the hypergeometric kind. The solutions are thus given in terms of these functions:
\begin{subequations}
\begin{align}
&h_{\rm 4D}(v)= \:_2F_1\left(-\frac{n+1}{2},\frac{n-1}{2},-\frac{1}{2};1-v^2\right),\\
&\tilde{h}_{\rm 4D}(v) = \left(1-v^2\right)^{3/2}\:_{2}F_1\left(1-\frac{n}{2},1+\frac{n}{2},\frac{5}{2};1-v^2\right).
\end{align}%
\end{subequations}
At late times when $v\to 1$, $h$ freezes out to a constant value while $\tilde{h}$ decays completely. Since $n$ is an integer, these hypergeometrics take a simpler form and can be rewritten in terms of the Chebyshev polynomials
\begin{subequations}
\begin{align}
&h_{\rm 4D}(v) = vT_n\left(v\right) - \frac{n}{n+1}T_{n+1}\left(v\right),\\
&\tilde{h}_{\rm 4D}(v) = \sqrt{1-v^2} \left[vU_{n-1}\left(v\right)-\frac{n}{n+1}U_n\left(v\right)\right],
\end{align}%
where $T_n$ and $U_n$ are the Chebyshev polynomials of the first and second kind respectively. One can also simplify the Chebyshev polynomials:
\begin{align}
&h_{\rm 4D}(v) = \frac{1}{n+1}\cos\left((n+1)\eta\right) + \sin\eta\sin(n\eta),\\
&\tilde{h}_{\rm 4D}(v) = \frac{1}{n+1}\sin\left((n+1)\eta\right) - \sin\eta\cos(n\eta),
\end{align}\label{eq: 4D GW}%
\end{subequations}%
where we used that
\begin{align}
T_n(v) =\cos(n\eta), && \sqrt{1-v^2}U_n(v) =\sin\left((n+1)\eta\right). \label{eq: Chebyshev}
\end{align}
At late times, high-frequency GWs (large $n$) reduce to \eqref{fourdimensionalsolu} upon the identification $q=\sqrt{n^2-1}\approx n$. This requires taking the limit $\eta\to0$ while keeping $n\eta$ finite. Note that $h_{\rm 4D}$ reduces to a wave with phase $\phi_0=\pi/2$ (and hence a non-zero frozen amplitude), while $\tilde{h}_{\rm 4D}$ will have the phase $\phi_0=0$. Physically, this limit corresponds to a late-time observer being able to see gravitational fluctuations within their Hubble radius.
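These closed-universe solutions are straightforward to verify. The sympy sketch below (ours) checks that the Chebyshev form of $h_{\rm 4D}$ solves the mode equation in the $v$ coordinate for the first few tensor modes; the check for $\tilde{h}_{\rm 4D}$ is identical in structure.
\begin{verbatim}
import sympy as sp

v = sp.symbols('v')
for n in range(3, 9):                  # tensor modes on S^3 have n >= 3
    h = v*sp.chebyshevt(n, v) - sp.Rational(n, n + 1)*sp.chebyshevt(n + 1, v)
    res = (1 - v**2)*sp.diff(h, v, 2) + v*sp.diff(h, v) + (n**2 - 1)*h
    assert sp.expand(res) == 0
print("mode equation satisfied for n = 3,...,8")
\end{verbatim}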
\subsection{Energy-momentum tensor}\label{Energy-momentum tensor}
The presence of these GWs affects the background by sourcing an effective energy-momentum tensor as explained in the introduction. Let us first consider this backreaction from a pure 4D perspective. The purpose of this paper is then to explain how this can also be found from the 5D treatment. In particular, GWs in the bulk should backreact on the bubble in the same manner. For phenomenological reasons, we will be interested in the limiting case where the bubble appears to be flat (large wave-number and late time) and where radiation is distributed homogeneously and isotropically throughout the universe. We will therefore restrict this analysis to the waves found in section \ref{Flat universe}.
By following the averaging procedure, i.e.\ integrating over the coordinate $x_1$, one finds the energy-momentum tensor
\begin{equation}
\langle \tensor{T}{^a_b}\rangle = \frac{H^2\eta^2}{8\kappa_4}\begin{pmatrix} 7 - 2q^2\eta^2 & \pm2q^2\eta^2 & 0 & 0\\ \mp2q^2\eta^2 & 5+2q^2\eta^2 & 0 & 0\\0&0&1&0\\0&0&0&1 \end{pmatrix},
\end{equation}
where the $\pm$ sign represents waves travelling in opposite $x_1$-directions. A uniform background of gravitational radiation is realised by averaging this tensor over waves travelling in all possible $(x_1,x_2,x_3)$-directions with different polarizations. The energy-momentum tensor that describes a uniform background of gravitational radiation is thus given, in terms of the scale factor, by
\begin{equation}
\langle \tensor{T}{^a_b}\rangle_{\rm iso} = \frac{7}{8\kappa_4}\frac{1}{a^2}\begin{pmatrix}1&0&0&0\\0&\frac{1}{3}&0&0\\0&0&\frac{1}{3}&0\\0&0&0&\frac{1}{3} \end{pmatrix}+\frac{q^2}{4\kappa_4H^2}\frac{1}{a^4}\begin{pmatrix}-1&0&0&0\\0&\frac{1}{3}&0&0\\0&0&\frac{1}{3}&0\\0&0&0&\frac{1}{3}\end{pmatrix}. \label{eq: EM tensor 4D}
\end{equation}
The first term corresponds to a form of energy with equation of state $p=-\rho/3$ whose energy dilutes as $\rho \sim 1/a^2$. This behaves like curvature in the Friedmann equation. The second term has the equation of state $p=\rho/3$ and a dilution $\rho \sim 1/a^4$, which corresponds to radiation. When the wavelength of a gravitational wave is larger than the horizon, the wave becomes frozen and the curvature component is all that remains.
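The isotropization step can be reproduced mechanically. In the sympy sketch below (ours; we assume that the displayed single-wave tensor, already averaged over polarizations, permutes trivially among the three axes) the tensor is averaged over the propagation axes and both directions and rewritten in terms of the scale factor, reproducing \eqref{eq: EM tensor 4D}.
\begin{verbatim}
import sympy as sp

eta, q, H, kap4, a = sp.symbols('eta q H kappa_4 a', positive=True)

def T_single(axis, sign):
    """Single-wave tensor, travelling along `axis` in direction `sign`."""
    M = sp.zeros(4, 4)
    M[0, 0] = 7 - 2*q**2*eta**2
    for i in (1, 2, 3):
        M[i, i] = 1
    M[axis, axis] = 5 + 2*q**2*eta**2
    M[0, axis] = sign*2*q**2*eta**2
    M[axis, 0] = -sign*2*q**2*eta**2
    return H**2*eta**2/(8*kap4)*M

Tiso = sum((T_single(ax, s) for ax in (1, 2, 3) for s in (1, -1)),
           sp.zeros(4, 4))/6
Tiso = sp.simplify(Tiso.subs(eta, -1/(H*a)))    # a(eta) = -1/(H eta)
print(Tiso)
# diagonal: 7/(8 a^2) - q^2/(4 H^2 a^4), then three copies of
# 7/(24 a^2) + q^2/(12 H^2 a^4), all divided by kappa_4, as in the text
\end{verbatim}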
\subsection{Backreaction}\label{4D backreaction}
To address the problem of backreaction, we will make an Ansatz for what the backreacted geometry should look like. In particular, the geometry will now be sourced by a perturbative amount of radiation and curvature \eqref{eq: EM tensor 4D} without any spatial anisotropies or inhomogeneities. We therefore take the backreacted geometry to be of the form
\begin{equation}
\textrm{d} s^2_{\rm back}= \left(g_{\mu\nu}^{(0)}+g_{\mu\nu}^{(2)}\right)\textrm{d} x^\mu\textrm{d} x^\nu = a^2(\eta)\left[-\left(1+\xi^2Q(\eta)\right)\textrm{d}\eta^2+\gamma_{ij}\textrm{d} x^i\textrm{d} x^j\right]. \label{eq: backreaction 4D}
\end{equation}
The second order Einstein equation then implies
\begin{equation}
Q(\eta) = \frac{7}{24}\eta^2 - \frac{1}{12}q^2\eta^4.
\end{equation}
By redefining the time coordinate as
\begin{equation}
\textrm{d}\chi = \sqrt{1+\xi^2 Q(\eta)}\textrm{d}\eta \approx \left(1+\frac{1}{2}\xi^2Q(\eta)\right)\textrm{d}\eta,
\end{equation}
the metric \eqref{eq: backreaction 4D} is of the FLRW form. Expanding in small $\xi$ one finds
\begin{equation}
\eta \approx \chi + \xi^2\left(-\frac{7}{144}\chi^3+\frac{1}{120}q^2\chi^5\right).
\end{equation}
By computing the Hubble rate for small $\xi$, one easily recognises contributions from curvature and radiation beyond the dominant cosmological constant
\begin{equation}
\left(\frac{1}{a^2}\frac{\textrm{d} a}{\textrm{d}\chi}\right)^2 \approx H^2 + \xi^2\left(-\frac{7H^2}{24}\chi^2+\frac{H^2q^2}{12}\chi^4\right) \approx H^2 + \xi^2\left(-\frac{7}{24}\frac{1}{a^2}+\frac{q^2}{12H^2}\frac{1}{a^4}\right). \label{eq: backreaction 4D Friedmann}
\end{equation}
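The chain of substitutions above is easily verified symbolically. The following sympy sketch (ours) integrates the time redefinition to $\mathcal{O}(\xi^2)$ and confirms that the Hubble rate acquires exactly the curvature and radiation corrections of \eqref{eq: backreaction 4D Friedmann}.
\begin{verbatim}
import sympy as sp

eta, xi, q, H = sp.symbols('eta xi q H', positive=True)
Q = sp.Rational(7, 24)*eta**2 - sp.Rational(1, 12)*q**2*eta**4
a = -1/(H*eta)                              # flat dS scale factor (eta < 0)

chi = eta + xi**2*sp.integrate(Q/2, eta)    # d(chi) = (1 + xi^2 Q/2) d(eta)
dadchi = sp.diff(a, eta)/sp.diff(chi, eta)
H2 = sp.series((dadchi/a**2)**2, xi, 0, 3).removeO()

target = H**2 + xi**2*(-sp.Rational(7, 24)*H**2*eta**2
                       + sp.Rational(1, 12)*q**2*H**2*eta**4)
print(sp.simplify(sp.expand(H2) - target))  # -> 0; note eta^2 = 1/(H a)^2
\end{verbatim}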
\section{Uplifting gravitational waves to the bulk}\label{section: 5Dwaves}
We are interested in perturbations propagating in the $\text{AdS}_5$ bulk that correspond to the GW on the brane that were found in the previous section. Clearly, there are different possible fluctuations in the $\text{AdS}_5$ geometry; the ones relevant for the present discussion are TT perturbations to the $S^3$ that enter the metric as\footnote{Note that in our conventions the coordinates $t$ and $z$ have the dimension of length and the coordinates $x^i$ are --still-- dimensionless.}
\begin{equation}\label{5Dbackgroundplusgw}
\textrm{d} s^2 = -f(z)\textrm{d} t^2 + \frac{\textrm{d} z^2}{f(z)} + z^2\left(\gamma_{ij}+\xi h_{ij}(t,z,x)\right)\textrm{d} x^i\textrm{d} x^j,
\end{equation}
where $h_{ij}$ is transverse and tracefree. It is easily checked that the induced metric on the brane in conformal coordinates then corresponds to \eqref{eq: perturbed FLRW}.
\subsection{Finding the 5D wave}\label{Finding the 5D wave}
As before, the TT perturbations can be decomposed into $S^3$ harmonics, and for a single mode $h_{ij}=h_{\rm 5D}(t,z)Y_{ij}$ one finds the GW equation:
\begin{equation}
\frac{\partial^2h}{\partial t^2}-f^2\frac{\partial^2h}{\partial z^2}-\frac{f}{z}\left(2+4k^2z^2+f\right)\frac{\partial h}{\partial z} + \frac{n^2-1}{z^2}fh=0,
\end{equation}
where we assume empty $\text{AdS}_5$ with $f(z)=k^2 z^2 +1$. This determines the evolution of a gravitational wave throughout the $\text{AdS}_5$ bulk. It is useful to work with a coordinate that provides a 5D uplift of \eqref{eq: v}
\begin{equation}
w = \cos(k t) = \frac{\cos\eta}{\sqrt{1+\left(\frac{H}{k}\right)^2\sin^2\eta}} = \cos\eta + \mathcal{O}\left(\left(\tfrac{H}{k}\right)^2\sin^2\eta\right),
\end{equation}
where the relation between bulk time and conformal time on the brane \eqref{eq: bulk vs brane time} has been used. This means that in the relevant limit for 4D GR where $H/k\ll1$, one has $w\approx v$. In particular,
\begin{equation}
kt=\eta + \mathcal{O}\left(\left(\tfrac{H}{k}\right)^2\sin(2\eta)\right).
\end{equation}
Note that this relation is only meaningful once the bubble has nucleated. This occurs at $\eta=-\pi/2$ as alluded to in the 4D treatment. It corresponds to a bulk time $t\approx -\pi/(2k)$. The bulk time in principle has the full range $-\infty<t<+\infty$. However, one has to take into account the composed inside-outside geometry and the non-eternity of the bubble. The time range of the outside geometry is $-\infty<t_+<0$, where the limit $t_+\to0$ corresponds to the bubble having eaten all of $\text{AdS}_5^+$. The inside geometry is only present once a bubble has nucleated, after which it persists forever. Therefore $-\pi/(2k)<t_-<+\infty$.
The GW equation in the bulk is given by
\begin{equation}
k^2\left[(1-w^2)\frac{\partial^2 h}{\partial w^2}-w\frac{\partial h}{\partial w}\right]-f^2\frac{\partial^2h}{\partial z^2}-\frac{f}{z}\left(2+4k^2z^2+f\right)\frac{\partial h}{\partial z} + \frac{n^2-1}{z^2}fh=0. \label{eq: 5D GW equation}
\end{equation}
This equation needs to be supplemented by suitable boundary conditions. In particular, when $h$ is restricted to the brane (this will be called the `induced wave' $h_{\rm ind}$), we require that $h_{\rm ind}$ coincides with the 4D GW \eqref{eq: 4D GW} found before to leading order in $H/k$. This amounts to imposing the boundary conditions at the location of the bubble $z=a(w)$, assuming $ka\gg1$,
\begin{align}
h_{\rm ind}(w) \equiv h_{\rm 5D}\left(w,\tfrac{1}{H\sqrt{1-w^2}}\right) = h_{\rm 4D}(v) + \mathcal{O}\left(\left(\tfrac{H}{k}\right)^2\right), && \lim_{z\to0}h_{\rm 5D}(w,z) = 0.
\end{align}
The last condition is the requirement that there are no sources inside the bubble. Note that, in principle, there are two different waves: inside and outside the bubble. It would therefore seem natural to insist that \textit{only} the inside wave decays and \textit{only} the outside wave does not blow up as $z\to\infty$. However, note that their evolution is governed by the same wave equation (up to a difference in $k$) and that the boundary condition at the location of the brane must be imposed for both of them. This boundary condition uniquely fixes the inside \textit{and} the outside wave. This means that if the outside wave were extrapolated in the would-be limit $z\to0$, it would still vanish.
As far as this uplift is concerned, one may verify that the following meet the requirements
\begin{subequations}
\begin{align}
&h_{\rm 5D}(w,z) = \frac{(k z)^{n-1}}{\left(1+k^2z^2\right)^{\frac{n-1}{2}}}\quad\left[wT_n\left(w\right)-\frac{n\left(n+1+2k^2z^2\right)}{2(n+1)\left(1+k^2z^2\right)}T_{n+1}\left(w\right)\right],\\
&\tilde{h}_{\rm 5D}(w,z) = \frac{(kz)^{n-1}\sqrt{1-w^2}}{\left(1+k^2z^2\right)^{\frac{n-1}{2}}}\left[wU_{n-1}\left(w\right)-\frac{n\left(n+1+2k^2z^2\right)}{2(n+1)\left(1+k^2z^2\right)}U_n\left(w\right)\right].
\end{align}%
By using \eqref{eq: Chebyshev}, these uplifted waves can be written as
\begin{align}
&h_{\rm 5D}(t,z) = \frac{(k z)^{n-1}}{\left(1+k^2z^2\right)^{\frac{n-1}{2}}}\left[\frac{\frac{1}{2}(1+n)(2-n)+k^2z^2}{(n+1)\left(1+k^2z^2\right)}\cos\left((n+1)kt\right)+\sin(kt)\sin(nkt)\right],\\
&\tilde{h}_{\rm 5D}(t,z) = \frac{(kz)^{n-1}}{\left(1+k^2z^2\right)^{\frac{n-1}{2}}}\left[\frac{\frac{1}{2}(1+n)(2-n)+k^2z^2}{(n+1)\left(1+k^2z^2\right)}\sin\left((n+1)kt\right)-\sin(kt)\cos(nkt)\right].
\end{align}
\end{subequations}
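Since these expressions are fully explicit, checking them is mechanical. The following sympy sketch (ours; it may take a few seconds to run) inserts $h_{\rm 5D}$ into the $(t,z)$ form of the bulk wave equation for the first few modes and confirms that the residual vanishes identically; the check for $\tilde{h}_{\rm 5D}$ is analogous.
\begin{verbatim}
import sympy as sp

t, z, k = sp.symbols('t z k', positive=True)
f = 1 + k**2*z**2
for n in range(3, 6):
    A = (sp.Rational(1, 2)*(1 + n)*(2 - n) + k**2*z**2)/((n + 1)*f)
    h = (k*z)**(n - 1)/f**sp.Rational(n - 1, 2) \
        * (A*sp.cos((n + 1)*k*t) + sp.sin(k*t)*sp.sin(n*k*t))
    pde = (sp.diff(h, t, 2) - f**2*sp.diff(h, z, 2)
           - f/z*(2 + 4*k**2*z**2 + f)*sp.diff(h, z)
           + (n**2 - 1)*f*h/z**2)
    print(n, sp.simplify(pde))        # -> 0 for each mode
\end{verbatim}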
Just as for 4D gravitational waves, one might be tempted to take the large $n$, late time ($t\to0$) limit to find an uplift of the GW in a flat universe \eqref{fourdimensionalsolu}. However, one must also take into account that, to reach this flat limit, the wave must be considered near the bubble. This means it is also required to take the limit in which $kz$ is large; the wave cannot probe the curvature at large scales. The following waves thus represent the correct uplift of \eqref{fourdimensionalsolu}, with phases $\phi_0=\pi/2$ and $\phi_0=0$ respectively, under the identification $q=\sqrt{n^2-1}\approx n$,
\begin{subequations}
\begin{align}
&h_{\rm 5D}(t,z) = \frac{-\frac{1}{2}n^2+k^2z^2}{nk^2z^2}\cos\left(nkt\right) + kt\sin(knt),\\
&\tilde{h}_{\rm 5D}(t,z) = \frac{-\frac{1}{2}n^2+k^2z^2}{nk^2z^2}\sin\left(nkt\right) - kt\cos(knt).
\end{align}\label{eq: flat 5D waves}%
\end{subequations}%
For large $n$, there are interesting corrections beyond the 4D wave of general relativity at high momentum. At fixed $kz$, there is a competition between $n$ and $kz$ in the first term. The first modification to the 4D wave comes when $kz\sim n$. This translates into $p \sim n/z \sim k$, where $p$ is the proper momentum (or energy) of the wave. We therefore conclude that the AdS-scale $k$ represents a UV-scale where new physics is introduced beyond 4D gravity on the brane.
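The flat-limit reduction can also be checked numerically. The sketch below (ours; the parameter values are arbitrary, chosen so that $kz\gg n\gg1$ and $|kt|\ll1$ with $nkt$ of order unity) compares the exact uplift with \eqref{eq: flat 5D waves} close to the brane.
\begin{verbatim}
import numpy as np

n, k, z = 400, 1.0, 4000.0           # kz >> n >> 1, probing close to the brane
t = np.linspace(-2e-3, -1e-4, 200)   # late times: |kt| << 1, n*k*t = O(1)

f = 1 + (k*z)**2
pref = np.exp(0.5*(n - 1)*np.log1p(-1/f))  # ((kz)^2/f)^((n-1)/2), no overflow
A = (0.5*(1 + n)*(2 - n) + (k*z)**2)/((n + 1)*f)
h_exact = pref*(A*np.cos((n + 1)*k*t) + np.sin(k*t)*np.sin(n*k*t))
h_flat = ((-0.5*n**2 + (k*z)**2)/(n*(k*z)**2)*np.cos(n*k*t)
          + k*t*np.sin(n*k*t))

print(np.max(np.abs(h_exact - h_flat)), np.max(np.abs(h_exact)))
# the difference is suppressed by ~|kt| relative to the wave itself
\end{verbatim}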
\subsection{Energy-momentum tensor in the bulk}\label{energy-momentum tensor in the bulk}
In the same spirit as in subsection \ref{Energy-momentum tensor}, we will compute the energy-momentum tensor associated with the waves found before in the flat limit. These waves are described by \eqref{eq: flat 5D waves}. One aims to obtain an isotropic tensor $\langle\tensor{T}{_{\mu}_{\nu}}\rangle_{\rm iso}$ by an average over several wavelengths, polarizations and propagation directions; this superposition of waves represents uniform gravitational radiation filling the bulk geometry. This isotropic stress tensor consists of three identifiable pieces (respectively curvature, radiation, and flux):
\begin{subequations}
\begin{equation}
\langle\tensor{T}{^\mu_\nu}\rangle_{\rm iso} = \langle\tensor{T}{^\mu_\nu}\rangle_{\rm c}+ \langle\tensor{T}{^\mu_\nu}\rangle_{\rm r}+ \langle\tensor{T}{^\mu_\nu}\rangle_{\rm f},
\end{equation}
where each component is given by
\begin{align}
&\begin{aligned}\langle\tensor{T}{^\mu_\nu}\rangle_{\rm r} = \frac{k^2n^2t^2}{4\kappa_5z^2}\begin{pmatrix}-1&0&0&0&0\\0&\frac{1}{3}&0&0&0\\0&0&\frac{1}{3}&0&0\\0&0&0&\frac{1}{3}&0\\0&0&0&0&0 \end{pmatrix},
&&
\langle\tensor{T}{^\mu_\nu}\rangle_{\rm f} = \frac{n^2}{8\kappa_5z^2}\begin{pmatrix}0&0&0&0&-\frac{2t}{k^2z^3}\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\2k^2tz&0&0&0&0 \end{pmatrix},\end{aligned}\\
\nonumber\\
\nonumber&\langle\tensor{T}{^\mu_\nu}\rangle_{\rm c} = \frac{1}{8\kappa_5z^2}\left(7-\frac{n^4}{2k^4z^4}\right)\begin{psmallmatrix}1&0&0&0&0\\0&\frac{1}{3}+\mathcal{O}\left(\frac{n^2}{k^2z^2}\right)&0&0&0\\0&0&\frac{1}{3}+\mathcal{O}\left(\frac{n^2}{k^2z^2}\right)&0&0\\0&0&0&\frac{1}{3}+\mathcal{O}\left(\frac{n^2}{k^2z^2}\right)&0\\0&0&0&0&1+\mathcal{O}\left(\frac{n^2}{k^2z^2}\right) \end{psmallmatrix}.%
\end{align}\label{isotropicEnergyStressTensorSecondOrder5D}%
\end{subequations}%
Recalling that the momentum of the wave is $p\sim n/z$, we see how the UV corrections we commented on earlier enter the curvature contribution but not the radiation piece. The flux contribution shows how these GWs represent a net flow of energy in the positive $z$ direction. We will comment more on this nice feature in section \ref{Discussion and conclusion}.
\subsection{Backreaction}\label{backreaction}
Along lines similar to section \ref{4D backreaction}, we will compute the response of the bulk geometry due to the presence of the GWs. We will again limit ourselves to the case where the brane appears flat, such that \eqref{eq: flat 5D waves} are the appropriate waves to use. The presence of the energy-momentum tensor \eqref{isotropicEnergyStressTensorSecondOrder5D} will generate a deformation of the bulk geometry, which can be described by a backreacted metric accounting for it. This backreaction is determined by the second order Einstein equation \eqref{eq: Second order Einstein equation}. To keep the calculation tractable, the GW background was made isotropic in \eqref{isotropicEnergyStressTensorSecondOrder5D}. To continue approaching this problem from the simplest perspective, it is therefore convenient to adopt the global coordinate system for the backreacted bulk geometry. We will make the Ansatz
\begin{equation}\label{backreacted5dmetric}
\begin{aligned}
\textrm{d} s^2_{\rm back} &= \left(g_{\mu\nu}^{(0)}+ \xi^{2}\:g_{\mu\nu}^{(2)}\right)\textrm{d} x^\mu\textrm{d} x^\nu \\
&\approx -\left[1+k^2z^2+\xi^2\left(q_1-q_2k^2t^2\right)\right]\textrm{d} t^2 + \frac{\textrm{d} z^2}{1+k^2z^2+\xi^{2}\left(q_1-q_3k^2t^2\right)}+z^2\textrm{d}\Omega_3^2.
\end{aligned}
\end{equation}
The set of coefficients $\{q_i\}$ will be determined later. It is easy to see that in the $\xi\to 0$ limit, we recover the $\text{AdS}_5$ background. The backreaction piece $g_{\mu\nu}^{(2)}$ is given by the $\xi^2$ coefficient in the small $\xi$ expansion, in particular
\begin{align}
g_{tt}^{(2)} = q_1-q_2k^2t^2, && g_{zz}^{(2)} = \frac{q_3k^2t^2 -q_{1}}{(1+k^2z^2)^2}.
\end{align}
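As a quick consistency check (ours), the $g_{zz}^{(2)}$ component quoted above indeed follows from expanding the Ansatz \eqref{backreacted5dmetric} to second order in $\xi$, e.g.\ with sympy:
\begin{verbatim}
import sympy as sp

t, z, k, xi, q1, q3 = sp.symbols('t z k xi q_1 q_3')
f = 1 + k**2*z**2
gzz = 1/(f + xi**2*(q1 - q3*k**2*t**2))
print(sp.series(gzz, xi, 0, 3))
# -> 1/f + xi**2*(q3*k**2*t**2 - q1)/f**2 + O(xi**3)
\end{verbatim}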
In order to fix the value of the coefficients $\{q_i\}$, one needs to compute the second order Einstein tensor and solve the second order Einstein equation. This yields
\begin{equation}
q_{1} = -\frac{7}{24},\quad \quad q_{2} = -\frac{q^{2}}{6},\quad \quad q_{3} = \frac{q_{2}}{2} = -\frac{q^{2}}{12}. \label{fixingq}
\end{equation}
This change in the bulk geometry will affect the evolution of the bubble wall at $z=a(\eta)$ through the junction condition \eqref{eq: junction condition}. By computing the extrinsic curvature, one finds (in the large $k$ limit) that the brane's energy-momentum tensor can be written as
\begin{equation}
\begin{aligned}
\kappa_4\tensor{S}{^a_b}= -(\kappa_4\: \sigma + \Lambda_{4})\delta^{a}_{b} &+ \delta^{a}_{b}\left(3 H^{2} + \frac{1}{a^{2}}\right) + \frac{2}{a^{2}}\delta^{a}_{0}\delta^{0}_{b} +\\ &+\xi^2 \left[\frac{q_{1}}{a^{2}} (\delta^{a}_{i}\delta^{i}_{b} + 3\: \delta^{a}_{0}\delta^{0}_{b}) + \frac{2q_{2}}{H^{2}a^{4}}\delta^{a}_{i}\delta^{i}_{b} - \frac{3q_{3}}{H^{2}a^{4}}\left(\delta^{a}_{0}\delta^{0}_{b}+\delta^{a}_{i}\delta^{i}_{b}\right)\right],
\end{aligned}\label{SabProject}
\end{equation}
where $\sigma$ corresponds to the tension of the brane. If we then impose the junction conditions using $S_{ab}=-\sigma h_{ab}$, we obtain the Friedmann equations. Alternatively, one can use the Gauss-Codazzi equation \eqref{GaussCodazzi}, and the projection of Einstein equations \eqref{ProjectedEinsteinEqs} as in \cite{Banerjee:2019fzz}, to obtain the same result in the form
\begin{equation}
\begin{aligned}
\tensor{G}{^{(4)}^a_b}=&-\Lambda_4 \delta^{a}_{b} + \xi^2 \left[\frac{q_{1}}{a^{2}} (\delta^{a}_{i}\delta^{i}_{b} + 3\: \delta^{a}_{0}\delta^{0}_{b}) + \frac{2q_{2}}{H^{2}a^{4}}\delta^{a}_{i}\delta^{i}_{b} - \frac{3q_{3}}{H^{2}a^{4}}\left(\delta^{a}_{0}\delta^{0}_{b}+\delta^{a}_{i}\delta^{i}_{b}\right)\right].
\end{aligned}\label{EnergyMomentumFromProjection}
\end{equation}
Covariant conservation $\nabla_a\tensor{S}{^a_b} = 0$ imposes the same relation between $q_{2}$ and $q_{3}$ that was found in \eqref{fixingq}. This constraint can also be verified by comparing the covariant derivative of the extrinsic curvature\footnote{In the same regime as previous expressions.} to the projection of the bulk energy-momentum tensor using
\begin{equation}
\nabla_{a} K_{b}^{a}-\partial_{b} K=G_{\mu \nu} \tensor{e}{^\mu_b} n^{\nu}.
\end{equation}
Note that this result agrees with \eqref{eq: backreaction 4D Friedmann} upon using \eqref{fixingq}. This implies that the junction conditions have taken care of the gravitational perturbation in the bulk, providing a clear connection between the bulk physics and the cosmology on the bubble.
\begin{figure}[t]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=0.75cm]
\clip(-2,-2.5) rectangle (11,5);
\draw (0.3,4.9) node[anchor=north west] {$g^{\rm 5D}$};
\draw [->,-{Computer Modern Rightarrow}, line width=0.3pt] (1.,4.) -- (4.,3.);
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt] (8.,4.) -- (5.,3.);
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt,dash pattern=on 1pt off 2pt] (1,4.4) -- (8,4.4);
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt,dash pattern=on 1pt off 2pt] (1,-0.4) -- (8,-0.4);
\draw (4.1,5) node[anchor=north west] {\ref{backreaction}};
\draw (4.1,0.2) node[anchor=north west] {\ref{4D backreaction}};
\draw (8.2,4.9) node[anchor=north west] {$g^{\rm 5D}_{\rm back}$};
\draw (4.05,3.3) node[anchor=north west] {$\langle T_{\mu\nu}\rangle$};
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt] (0.5,4) -- (0.5,0);
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt] (8.5,4) -- (8.5,0);
\draw (0.3,0.) node[anchor=north west] {$g^{\rm 4D}$};
\draw (8.2,0.) node[anchor=north west] {$g^{\rm 4D}_{\rm back}$};
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt] (1.,-0.8) -- (4.,-1.8);
\draw [->,-{Computer Modern Rightarrow},line width=0.3pt] (8.,-0.8) -- (5.,-1.8);
\draw (4.05,-1.5) node[anchor=north west] {$\langle T_{ab}\rangle$};
\draw (-0.5,2.5) node[anchor=north west] {\ref{Finding the 5D wave}};
\draw (1.9,-1.2) node[anchor=north west] {\rotatebox{-12}{\ref{Energy-momentum tensor}}};
\draw (1.9,3.6) node[anchor=north west] {\rotatebox{-12}{\ref{energy-momentum tensor in the bulk}}};
\draw (6.1,3.6) node[anchor=north west] {\rotatebox{14}{\ref{backreaction}}};
\path [line width=0.3pt] (9.2,4.4) edge (10,4.4);
\path [line width=0.3pt] (10,4.4) edge (10,-2);
\path [line width=0.3pt,-{Computer Modern Rightarrow}] (10,-2) edge (5,-2) ;
\node at (10,1.5)[right]{\ref{backreaction}};
\end{tikzpicture}
\caption{The section in which each computation is performed is indicated next to the corresponding arrow in the diagram. Horizontal arrows refer to backreaction, while vertical ones relate the bulk and boundary geometries. Diagonal arrows stand for general-relativity calculations. The rightmost line represents the Gauss-Codazzi projection onto the boundary.}
\label{diagram}
\end{figure}
Let us summarize what we have done. It could be useful to navigate through the conceptual diagram in figure \ref{diagram}. In the upper left entry we find the five-dimensional background metric plus the gravitational perturbation at first order. The wave extends from $z=0$, where it vanishes, through the interior of the bubble, across the bubble wall, and further out through the exterior of the bubble, where it remains finite. Computing the Einstein equations, averaged over all directions, yields an energy-momentum tensor of the form \eqref{isotropicEnergyStressTensorSecondOrder5D}. The same tensor is obtained if the effect sourced by these perturbations is accounted for through a ``backreacted'' bulk geometry at second order, as in \eqref{backreacted5dmetric}.
Restricting $g^{\rm 5D}$ to the boundary of the bubble (i.e.\ $z = -\tfrac{1}{H\eta}$), one can recover the induced four-dimensional metric of the expected form $g^{\rm 4D}$, in the conformal time gauge, as shown in \eqref{eq: perturbed FLRW}. This geometry contains a four-dimensional wave that solves the Einstein equations at first order in $\xi$. Solving these at second order, averaging over wavelengths and imposing an isotropic superposition, one finds an energy-momentum tensor of the form \eqref{eq: EM tensor 4D}. On the other side of the diagram (upper right corner), starting from the backreacted metric $g^{\rm 5D}_{\rm{back}}$, one can project down this corrected background through the junction conditions to source the Einstein equations in 4D, as shown in \eqref{ProjectedEinsteinEqs}. The tensor $\mathcal{J}_{ab}$ is of the same form on both sides of the brane (up to $k_{\pm}$), but there is a jump in the extrinsic curvature that will backreact on the induced 4D metric. The associated energy-momentum tensor is exactly the same as if one had solved the 4D Einstein equations directly at second order \eqref{eq: EM tensor 4D}, using the averaged 4D waves. This demonstrates the self-consistency of the dark bubble model. It is important to realize that it is {\it not} possible to geometrically just project $\langle T_{\mu\nu}\rangle$ from 5D to 4D to obtain $\langle T_{ab}\rangle$. The relation between the two involves the relation between the 5D and 4D gravitational constants, which is determined by the junction conditions.
\section{The duality between brane dynamics in 5D and Einstein gravity in 4D}\label{section: Duality}
\subsection{Two ways to look at brane dynamics}
Examining the 4D Einstein equation \eqref{ProjectedEinsteinEqs}, induced by the junction conditions, we note the presence of the tension $\sigma$ and how it acts as a {\it negative} energy density. This is also manifest in \eqref{eq: friedmann components}. This is physically correct: increasing $\sigma$ eventually brings the tension above its critical value, yielding a negative 4D cosmological constant that prevents the bubble from nucleating. But what do the fluctuations of the brane correspond to? These would seem to add energy to the tension, thus contributing as a {\it negative} energy density to the 4D energy-momentum tensor. This naively signals an instability. As we will argue, such fluctuations are already taken care of by the junction conditions.
To see this, one needs to recognise that there are two equivalent ways to describe the motion and fluctuations of the brane. From the 5D perspective one studies the brane equations of motion, as we will see below, which make sure that the backreaction on the 5D geometry is taken into account. It is physically clear that there are no instabilities in this system, beyond the accelerated expansion of the bubble itself, and that all other perturbations will cost energy. On the other hand, one can use the junction conditions \eqref{eq: junction condition 2} as described earlier, where the motion of the brane is captured by 4D Einstein gravity. Eventually, the result should be the same.
If one considers a 5D bulk geometry that induces matter with a positive energy density in 4D, it requires a response from the 4D Einstein tensor with the same positive sign to satisfy the Einstein equations. The brane will give rise to such a geometric contribution that enters into the Einstein equation through the 4D Einstein tensor. However, one can also view the brane as a contribution to the energy-momentum tensor by formally moving this geometric contribution to the other side of the Einstein equation, thereby picking up a sign. If it is to account for adding matter with a \textit{positive} energy density, the response should be a \textit{negative} energy density. This is precisely what an increase in the brane energy will accomplish. The sign only looks wrong if one, incorrectly, interprets the brane term to be associated with 4D matter. Instead, it is precisely this physical behaviour of the brane that is responsible for 4D gravity. In the case of vibrational modes in the bulk, the brane will start to vibrate and increase its energy as a response. These vibrations are, through the junction conditions, encoded into the 4D Einstein tensor and identical to the response of gravity to a matter source.
The simplest illustration of the two points of view can be found in pure FLRW. We start with the junction condition \eqref{eq: junction condition} in proper time rewritten as
\begin{equation}
\frac{\sigma}{2} \left(\sqrt{k_{-}^{2}+\frac{\dot{a}^{2}}{a^{2}}+\frac{\Delta_- (a)}{a^2}}+\sqrt{k_{+}^{2}+\frac{\dot{a}^{2}}{a^{2}}+\frac{\Delta_+ (a)}{a^2}}\right) = \frac{3}{2\kappa_{5}}\left(k_{-}^{2}-k_{+}^{2}+\frac{\Delta_- (a)-\Delta_+ (a)}{a^2}\right), \label{junction1}
\end{equation}
with the metric factor written as $f(r)=k^2 r^2 + \Delta (r)$. Here, we simply have $\Delta_\pm=1$ in pure AdS. Multiplying by the volume of the bubble, proportional to $a^4$, this can be interpreted as energy conservation, comparing the case with and without the bubble. The energy of the brane is simply set equal to the energy difference of the two vacua. Following BT, we note that the energy of the brane itself is given by the average of the energy obtained from the two sides of the brane. This is dictated by the junction conditions. This expression can be viewed as the integrated equations of motion of the brane. If we express the previously mentioned relation using bulk time (taking into account that the time coordinates are different on the two sides), this can be written as
\begin{equation}
\frac{\sigma}{2} \left( \frac{k_-^2 a^2 + \Delta_-(a)}{a\sqrt{k_-^2 a^2 + \Delta_-(a)-\frac{a'^2}{k_-^2 a^2 + \Delta_-(a)}}} + \frac{k_+^2 a^2 + \Delta_+(a)}{a\sqrt{k_+^2 a^2 + \Delta_+(a)-\frac{a'^2}{k_+^2 a^2 + \Delta_+(a)}}}\right)= \frac{3}{2\kappa_{5}}\left(k_{-}^{2}-k_{+}^{2}+\frac{\Delta_- (a)-\Delta_+ (a)}{a^2}\right),\label{junction2}
\end{equation}
where the prime denotes a derivative with respect to the bulk time. We recognize the relativistic energy of a brane in a curved background (compare with the energy of a relativistic particle, given by $\frac{m}{\sqrt{1-v^2}}$). Hence, we see how the junction conditions, interpreted as 4D gravity, are equivalent to the equations of motion for the transverse brane degree of freedom, and there are no issues with any wrong-sign kinetic terms.
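The algebraic step between the two forms can be confirmed directly. The sympy sketch below (ours; $f$ is treated as a single positive symbol, since its $a$-dependence plays no role in the identity) uses $1=f\dot{t}^2-\dot{a}^2/f$, i.e.\ $N=1$, and $a'=\dot{a}/\dot{t}$ to verify that each square-root term in \eqref{junction2} equals the corresponding one in \eqref{junction1}.
\begin{verbatim}
import sympy as sp

f, adot, a = sp.symbols('f adot a', positive=True)
tdot = sp.sqrt(f + adot**2)/f       # from 1 = f*tdot**2 - adot**2/f
aprime = adot/tdot                  # a' = da/dt = adot/tdot

lhs = f/(a*sp.sqrt(f - aprime**2/f))
rhs = sp.sqrt(f + adot**2)/a        # the square root appearing in (junction1)
print(sp.simplify(lhs - rhs))       # -> 0
\end{verbatim}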
\subsection{Gravitational fields}
The gravitational waves we have studied in the present paper provide a nice example of the same phenomenon. To see this, consider an oscillator exposed to a gravitational wave that functions as an antenna and absorbs energy. In particular, one can have a situation where the oscillator is in sync with the wave, and sits in an excited state. This is how one should think of the brane in the dark bubble model: in the presence of the 5D gravitational wave the brane becomes excited and this is how the 5D wave backreacts on the brane. Hence, there will be contributions to the energy of the brane corresponding to such excitations that will contribute {\it negatively} in the effective energy density in 4D for the simple reason that they {\it add} to the tension. However, these are precisely the terms needed to cancel the induced energy-momentum tensor in 4D from the 5D waves through the junction conditions \eqref{EnergyMomentumFromProjection}. In fact, this is how one can argue that those excitations have to be present.
On the other hand, from our review of the network of backreactions, it is clear how to interpret these terms. If we move them to the other side of Einstein's equations, they represent nothing else than the response of 4D gravity to the presence of the waves. The point is that the excitations of the brane can be viewed in two different and dual ways. You either track the detailed and time-dependent fluctuations of the brane, which will carry energy and show up through the non-trivial Einstein tensor in 4D. This balances the effect of the 5D wave to make sure the junction conditions are solved. Alternatively, you keep the brane in its original position, and let the degrees of freedom of the brane carry the energy.
\subsection{Gauge fields}
In string theory, this is not the whole story. T-duality requires the existence of further degrees of freedom in the form of gauge fields, which are described by the DBI-action. Their gauge potentials are related to coordinates parallel to the brane, which through T-duality correspond to transverse degrees of freedom of lower dimensional branes. According to T-duality, the number of such degrees of freedom should not change. When excited, they should add energy to the brane tension $\sigma$, and looking at the junction conditions it seems as if this should lead to a {\it negative} energy density in 4D. How can this be?
It is crucial to understand that the brane is not a probe. Adding gauge flux to the brane will force the brane to bend, and it will move in response. This will, in turn, affect the bulk geometry, which will give other contributions to the 4D effective energy-momentum tensor. As we will argue, the net contribution will always be {\it positive} in physically interesting cases.
An instructive example is the case of charged BPS-strings ending on the brane world. As shown in \cite{1998}, such strings can be described as spikes on the brane world, which act as sources of gauge flux within the brane. The transverse coordinate of the brane, which gives the shape of the spike, also determines the gauge potential and the electric field within the brane. The energy of this configuration is fully captured by the brane action. It is a divergent quantity, but by introducing a cutoff close to the spike one discovers that the energy is proportional to the string tension and the length of the spike up to the cutoff, i.e. the string. This is precisely what one would expect for a BPS-string.
In \cite{1998}, the case of a string going through the brane world was discussed. In particular, what would happen if the string were cut at the brane world, and the end points were to move away from each other? Since the string is BPS one would not expect any net force acting between the end points. The attractive electric force between the endpoints is cancelled by a repulsive force due to the scalar field describing the embedding. Similarly, if you consider two strings on the same side of the brane, pulling upwards, you would expect a repulsive electric force. This is, however, cancelled by an attractive force due to the embedding, in analogy with \cite{1998}. In fact, this attractive force is nothing else than the gravitational force as studied in \cite{Banerjee:2020wix} and \cite{Banerjee:2020wov}, where it is interpreted as 4D gravity.
The case of a D3 brane in the background of a stack of D3 branes (which is $\text{AdS}_5 \times S^5$ in the near horizon limit) was discussed in \cite{1999}. The spikes turn into strings with the energy carried by the brane. Similar results were obtained in \cite{2007} for a D5-brane in the same background. Both references considered the brane with the spike being a probe and neglected the backreaction on the bulk. For us the backreaction is crucial.
By adding a gauge field like in the case above, we get an increased energy density on the brane. One might worry that this will contribute with the wrong sign in the 4D Einstein equation. However, we must also take into account that the brane deforms into a spike, and that this change in shape backreacts on the bulk geometry. It is now very easy to see what will happen.
For simplicity, let us consider a distribution of such spikes. Relative to the unperturbed piece of the brane in between the spikes, there is a cloud of strings stretching outwards from the bubble (see figure \ref{fig: spikes}). The gravitational backreaction from the 5D bulk through the junction conditions leads to FLRW with dust, as discussed earlier in the review of the dark bubble model. The net effect of the spikes must therefore be a {\it positive} contribution to the energy density. Hence, our initial conclusion was wrong since we ignored the backreaction.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{Spikes.png}
\caption{Artistic representation of a distribution of spikes, whose tips turn into strings, as if pulling the brane upwards.}
\label{fig: spikes}
\end{figure}
Let us summarize what happened. We added matter to the brane, expecting the brane to sag down while yielding a contribution to the energy in 4D with the wrong sign. Instead, the coupling between the gauge field and the scalar forces the brane to bend upwards. This backreacts on the 5D geometry, yielding an extra contribution to the junction condition that pulls the brane up. The 4D interpretation is a net {\it positive} energy density.
It would be interesting to study the interplay between the gauge fields on the brane and the physics of the bulk - including 2-form fields to which the strings will couple - to find a 5D uplift of electromagnetic waves and Maxwell-Einstein theory in general. We leave this for future work.
\section{Discussion and conclusion}\label{Discussion and conclusion}
In this paper we have constructed the uplift of 4D gravitational waves into 5D in the dark bubble model of de Sitter cosmology. The waves extend outwards as well as inwards from the bubble. The waves remain finite everywhere and, in particular, go to zero at the center of the bubble.
The gravitational waves that we have constructed in 5D have an interesting time dependence. Let us, for clarity, focus on waves of high frequency compared to the size of the bubble (this is equivalent to ignoring the positive curvature of the dark bubble universe). In 4D, these reduce to the familiar gravitational waves in a flat universe that at late times exit the horizon and freeze. In 5D, as well as in 4D, there are oscillating terms with coefficients that are constant as well as linear in conformal time $\eta$. The constant piece is what remains after freezing as $\eta \rightarrow 0$. Averaging over many wavelengths, we find constant and quadratic pieces in conformal time in the expressions for the averaged energy density as well as in the backreacted metric.
Starting out at bulk time $t \sim \eta/k <0$, and increasing $t$, we see that the energy density decreases towards $t=0$, and then starts to increase again when $t$ goes positive. Physically, we have a cloud of radiation that is bound inside the AdS-throat that expands towards maximum dilution and then recollapses. This cloud is matched through the junction conditions to the 4D physics on top of an expanding bubble. Actually, the bubble will expand towards infinity eating up the full AdS as proper time goes to infinity, while conformal time $\eta$ and global bulk time $t$ go to zero. The recollapsing phase will therefore never occur in the cosmology we study. Clearly, one can envision scenarios with a 4D recollapsing cosmology that would correspond to a recollapsing cloud of radiation in the bulk. Viewed from the bulk, the time scale for the expansion is set by $1/k$ in global time. This is the same as that of oscillating geodesic motion in AdS. Due to the blueshift this translates into cosmological times on the brane world.
The wave that we have constructed is crucial for the applications to quantum cosmology that we initiated in \cite{Danielsson:2021tyb}. There, we studied the WdW equation for a mini-superspace containing only the scale factor, and showed that the dark bubble is a realization of Vilenkin's quantum cosmology. According to our interpretation, it is not quite a creation out of nothing, but a creation out of something, i.e., the already present $\text{AdS}_5$. We argued that our embedding of quantum cosmology into a higher dimension, and the interpretation of the act of creation as simply a CL transition, demonstrates the consistency of the model. The instabilities argued for in \cite{Feldbrugge2017, feldbrugge2018inconsistencies} can therefore not be there on physical grounds.
To make full contact with the Vilenkin version of quantum cosmology, we need to extend the mini-superspace to also include, e.g., gravitational perturbations. Formally, one could do this directly in 4D, trusting that the dark bubble fully reproduces 4D gravity. However, to make use of the higher dimensions to throw new light on the problem, we need the full 5D uplift. This is what we have achieved in the present paper.
The next step would be to use these waves to investigate the quantum vacuum and its regularization and renormalization starting in 5D. In \cite{Vilenkin:2018oja} a concern in \cite{feldbrugge2018inconsistencies} was addressed, where it was argued that backreaction from the scalars or gravitational modes would destroy the model. As explained in \cite{Vilenkin:2018oja}, this backreaction is nothing else than the vacuum energy of those modes. These contributions need to be taken into account regardless of whether you are studying quantum cosmology or not. Vilenkin introduces a cutoff at fixed proper momentum and obtains a finite result that is absorbed into the cosmological constant.
While this is a standard procedure, which you always need to invoke more or less implicitly when doing cosmology with quantum fields, it is not quite consistent. As reviewed in \cite{2019}, the regularized vacuum energy does not have the correct equation of state, and it is unclear how to treat it. We believe that the uplift to 5D, which is the subject of the present paper, can throw new light on the important problem of quantum contributions to the vacuum energy in an expanding universe. We hope to return to this question in future work.
Finally, we have also commented on the negative sign kinetic terms of brane excitations in the dark bubble model. This sign is a direct consequence of the dark bubble having an inside and an outside, contrary to RS. This changes the physics in a dramatic way. Using a few examples, in particular the gravitational waves, we have shown how the excitations should not be interpreted as 4D matter but instead as the 4D gravitational response mediated by the brane.
\newpage
\begin{acknowledgments}
We would like to thank Suvendu Giri and Thomas van Riet for comments on an earlier draft. DP would like to thank the Centre for Interdisciplinary Mathematics (CIM) for financial support. The work of RT is supported by the KU Leuven grant C16/16/005 - Horizons in hoge-energie-fysica.
\end{acknowledgments}
\section{INTRODUCTION\label{sec:INTRODUCTION}}
The Probability Density Function (PDF) approach has proved very useful
in studying the behavior of stochastic systems. Familiar examples
of its usage occur in the study of Brownian Motion \cite{Chandrasekhar43}
and in the kinetic theory of gases \cite{Chapman&Cowling}. In more
recent times it has been used extensively by Pope and others to model
turbulence \cite{Pope86} and turbulence related phenomena such as
combustion \cite{Pope91} and atmospheric dispersion \cite{MacInnes&Bracco92}.
This paper is about its application to particle transport in turbulent
gas-flows where it has been been developed and refined over a number
of years by numerous authors. It has been successfully applied to
a whole range of turbulent dispersed flow problems involving mixing
and dispersion as well as particle collisions and clustering in a
particle pair formulation of the approach. It has also formed a fundamental
basis for dealing with complex flows, being used to formulate the continuum
equations and constitutive relations for the dispersed phase in a manner
precisely analogous to the way the Maxwell-Boltzmann equation has been used
in the kinetic theory of gases. It has become an established technique for
studying dispersed flows, so much so that the method and its numerous
applications are the subject of a recent book \cite{ZaichikAliphenkovSinaiskibook}
and of a chapter in the recent Multiphase Flow Handbook
\cite{ReeksSimoninFede}.
There are currently two PDF approaches that have been used extensively
to describe the transport, mixing and collisions of small particles
in turbulent gas flows. The first approach referred to as the kinetic
approach, is based on a kinetic equation for the PDF $p(\ve x,\ve v,t)$
of the particle position $\ve x$ and velocity $\ve v$ at time $t$.
This equation is based on a particle equation of motion in which the
flow velocity along a particle trajectory is derived from a Gaussian
stochastic flow field. In the kinetic equation the particles' random
motion arising from this stochastic field is manifest as a diffusive
flux which is a linear combination of gradient diffusion in both $\ve x$
and $\ve v$. Transient spatio-temporal structures in the turbulence
give rise to an extra force due to clustering and preferential sweeping
of particles \cite{Maxey1987}.
In the second PDF approach an equation for the PDF $p(\ve x,\ve v,\ve u,t)$
is constructed, where $\ve u$ is the carrier flow velocity sampled
along particle paths. Thus, unlike the kinetic approach, the flow
velocity in this approach is retained in the particle phase space,
and is described by a model evolution equation. In particular, this
PDF model is based on a generalized Langevin model (GLM) (see Pope
\cite{Pope86}) where the velocity of the underlying carrier flow
measured along a particle trajectory is described by a generalized
Langevin equation. As such the associated PDF equation is described
by a Fokker-Planck equation. This GLM PDF equation has sometimes been
inappropriately referred to as the dynamic PDF equation \cite{minier15}
implying that it is a more general PDF approach from which the kinetic
equation can in general be derived. However, it is important to appreciate
that the kinetic equation is not a standard Fokker-Planck equation, since
it captures the non-Markovian features of the underlying flow velocities.
\\
\\
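For orientation, the modeled flow velocity in the GLM has, schematically, the generic structure of Pope's simplified Langevin model,
\begin{equation*}
d{U}_{s,i}=A_{i}\,dt+G_{ij}\bigl(U_{s,j}-\langle u_{f,j}\rangle\bigr)\,dt+\sqrt{C_{0}\,\epsilon_{f}}\,dW_{i},
\end{equation*}
where $A_{i}$ is a mean drift, $G_{ij}$ a relaxation matrix with timescales of the order of the fluid Lagrangian timescale, $\epsilon_{f}$ the turbulent dissipation rate, $C_{0}$ a model constant and $W_{i}$ a Wiener process; the particular coefficients vary between formulations and are not needed for the present discussion. It is the white-noise term $dW_{i}$ that endows the associated PDF equation with its Fokker-Planck structure.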
The problem of closure and the associated realizability and well posedness
of PDF equations are profoundly important in the study of stochastic
equations. So despite the successful application of the kinetic equation
to a whole range of problems, recent claims in the literature that this
equation is ill-posed and its solutions unrealizable are disturbing and
a serious concern.
The root cause of this concern is the non-positive definiteness of
the diffusion tensor associated with the phase space diffusion flux.
In particular, this tensor has both $+ve$ and $-ve$ eigenvalues,
implying that along the eigenvectors with a $-ve$ eigenvalue the
particle dispersion exhibits the properties of a backward diffusion
equation, leading to solutions with finite-time singularities. In fact
Minier and Profeta \cite{minier15}, following a detailed analysis
of the relative merits of the two PDF approaches, have concluded that
the kinetic equation is ill-posed and therefore an invalid description
of disperse two-phase flows (except in the limiting case for particles
with large Stokes numbers when the kinetic equation reduces to a Fokker-Planck
equation). This raises a number of issues and inconsistencies that
we wish to examine and resolve:
\begin{enumerate}
\item The closure of the diffusive terms in the kinetic equation is exact
for a Gaussian process for the aerodynamic driving forces in the particle
equation of motion. Notwithstanding any $-ve$ eigenvalues, such
dispersion processes are demonstrably forward rather than backward
in time with statistical moments that monotonically increase rather
than decrease with time. This behaviour is reflected in the analytic
solutions of the kinetic equation for particle dispersion in shear
flows in which the mean shear is linear and the turbulence is statistically
homogeneous and stationary (see Hyland et al \cite{hyland99}, Swailes
and Darbyshire \cite{Darbyshire_Swailes96}). In these generic flows,
there is exact correspondence of the analytical solution with a random
walk simulation using a Lagrangian particle tracking approach, solving
the individual particle equations of motion in the associated Gaussian
random flow field (a minimal sketch of such a simulation is given after
this list). See as an example the illustration in Figure \ref{fig:simple turbulent shear flow }.
\begin{figure}
\noindent \begin{centering}
\includegraphics[bb=0bp 200bp 595bp 700bp,scale=0.3]{analytic_simple_shear_2}
\par\end{centering}
\noindent \begin{centering}
Analytic solution
\par\end{centering}
\noindent \begin{centering}
\includegraphics[bb=0bp 200bp 595bp 700bp,scale=0.3]{sim_simple_shear_2}
\par\end{centering}
\noindent \begin{centering}
Random walk simulation\\
\par\end{centering}
\begin{centering}
\includegraphics[scale=0.4]{simple_shear_difffusion_coeffcients}
\par\end{centering}
\begin{centering}
Diffusion coefficients $D_{ij}$ as a function of time $t$, $\alpha$=
shear rate
\par\end{centering}
\noindent \centering{}\protect\caption{\label{fig:simple turbulent shear flow } Dispersion of an instantaneous
point source of particles in a simple shear flow. Comparison of the
analytic solution of the kinetic equation for the particle spatial
concentration and a random walk simulation based on Stokes drag with
a Gaussian process for the aerodynamic driving force. For more precise
details see \cite{Darbyshire_Swailes96,hyland99,ReeksSimoninFede} }
\end{figure}
\item In simple generic flows the GLM PDF equation is entirely consistent
with the kinetic equation, i.e. the kinetic equation is recoverable
from the GLM equations and has exactly the same solution for the same
mean flows and statistical correlations for the turbulent velocity
$\ve u$ along particle trajectories. They are both compatible with
a Gaussian process. The claim of ill-posedness of the kinetic equation
would therefore seem to contradict the well posedness associated with
the Fokker-Planck equation of the GLM.
\end{enumerate}
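As a concrete illustration of the random walk simulation referred to in point 1, the following minimal sketch (illustrative parameter values, with an Ornstein-Uhlenbeck model supplying the Gaussian fluctuating flow velocity along trajectories; it is not the code used to produce Figure \ref{fig:simple turbulent shear flow }) tracks particles from an instantaneous point source through a simple shear and accumulates the spatial covariances of the dispersing cloud:
\begin{verbatim}
import numpy as np

# Lagrangian particle tracking: dX/dt = V, dV/dt = (U_s - V)/tau_p,
# with the Gaussian fluctuation U_s' modelled as an Ornstein-Uhlenbeck
# process (autocorrelation u_rms^2 exp(-t/T_L)) and a mean simple
# shear <u_x> = alpha * y.  All parameter values are illustrative.
rng = np.random.default_rng(1)
n_p, n_t, dt = 50_000, 2000, 1e-3      # particles, steps, step size
tau_p, T_L, alpha, u_rms = 0.1, 1.0, 1.0, 1.0

X = np.zeros((n_p, 2))                 # point source at the origin
V = np.zeros((n_p, 2))                 # particle velocities
Us = u_rms * rng.standard_normal((n_p, 2))   # stationary OU start

for _ in range(n_t):
    # Ornstein-Uhlenbeck update of the fluctuation seen by particles
    Us += -Us * dt / T_L + u_rms * np.sqrt(2.0 * dt / T_L) \
          * rng.standard_normal((n_p, 2))
    u_mean = np.column_stack((alpha * X[:, 1], np.zeros(n_p)))
    V += (u_mean + Us - V) * dt / tau_p   # linear (Stokes) drag
    X += V * dt

# Spatial covariances, to be compared with the Gaussian analytic
# solution of the kinetic equation for the same flow
print(np.cov(X, rowvar=False))
\end{verbatim}
The moments grow smoothly and monotonically in time, mirroring the analytic solution; there is no hint of the finite-time singularities claimed to follow from the non-definiteness of the diffusion tensor.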
So the first objective of the analysis we present here is to show
that, despite the non-positive definiteness of the phase-space diffusion
tensor, this does not imply backward diffusion and the existence of
finite-time singularities: the kinetic equation is well posed
and has realizable solutions that are forward rather than backward
in time, consistent with a Gaussian process. We shall show that this
is intimately related to the non-Markovian nature of the kinetic equation,
and that the time evolution of the phase-space dispersion tensor from its
initial state and the coupling between phase-space variables are crucial
considerations. In the course of this analysis we will recall the stages
of the development of the kinetic equation and the important role played
by certain consistency and invariance principles which, taken together
with the other features determining well-posedness and realizability,
have not been properly understood or appreciated in previous analyses.\\
\\
Previous work has purported to show that the GLM is a more general
approach than the kinetic approach; that, in particular, the kinetic
equation can be derived from the GLM, and that the features of transport
and mixing in more general non-uniform inhomogeneous turbulent flows
implicit in the solutions of the kinetic equation are intrinsic to
the GLM. So the second objective of this analysis is to examine the
basis for this assertion. In the process, we provide a more balanced
appraisal of the benefits of both PDF approaches and point out the
limitations of the GLM that have been ignored in previous analyses.
We regard these limitations as areas for improvement of the GLM rather
than inherent deficiencies. Like all modeling approaches, each of the
two approaches considered has its strengths and weaknesses. A categorical
dismissal of one in preference to another in previous work would seem
misplaced. From a practical point of view this paper is more about how
one approach can support the other in solving dispersed flow problems.
\section{Ill-Posed Kinetic PDF Equations?}
In this section we examine in detail the previous analysis of Minier
\& Profeta (M\&P) \cite{minier15} that leads to the assertion of
ill-posedness of the kinetic equation. For ease of comparison we use
the same notation here and throughout the paper. Thus M\&P consider
particle phase-space trajectories $\ve Z_{p}(t)=(\ve X_{p}(t),\ve U_{p}(t))$
governed by
\begin{equation}
\dot{\ve X}_{p}=\ve U_{p},\hspace{2em}\dot{\ve U}_{p}=\frac{1}{\tau_{p}}(\ve U_{s}-\ve U_{p})+\ve F_{ext}.\label{pem1}
\end{equation}
Here $\ve U_{s}(t)$ represents the flow velocity at time $t$ sampled
along the trajectory $\ve X_{p}(t)$, and $\ve F_{ext}$ is an external
body force, e.g. gravity. In the kinetic modeling framework, $\ve U_{s}$
is derived via an underlying flow velocity field $\ve u_{f}(\ve x,t)$
which has both a mean $\langle\ve u_{f}\rangle$ and fluctuating (zero
mean component) $\ve u_{f}^{\prime}$. That is $\ve U_{s}=\ve u_{f}(\ve X_{p}(t),t)$.
Treating $\ve u_{f}(\ve x,t)$ as a Gaussian stochastic flow field,
and with the particle response time $\tau_{p}$ as a constant independent
of the particle Reynolds number (i.e. Stokes relaxation), the PDF $p(\ve z,t)$
defining the distribution of $\ve Z_{p}(t)$ then satisfies a transport
equation (the kinetic equation) which can be written compactly in
phase-space notation as
\begin{equation}
\partial_{t}p=-\partial_{\ve z}\cdot\ve ap\ +\ \tfrac{1}{2}\partial_{\ve z}\cdot\left(\partial_{\ve z}\cdot\ve Bp\right)\label{pde1}
\end{equation}
where $\ve z=(\ve x,\ve v)$ refers to the particle position and velocity
(in a fixed frame of reference) and
\begin{equation}
\ve a=\left(\ve v,\ve F_{ext}+\dfrac{1}{\tau_{p}}\bigl(\langle\ve u_{f}(\ve x,t)\rangle-\ve v\bigr)+\bm{\kappa}\right)\label{a}
\end{equation}
\begin{equation}
\ve B=\left(\begin{array}{c|c}
\bm{0} & \bm{\lambda}\\
\hline \bm{\lambda}^{\top} & \bm{\mu}+\bm{\mu}^{\top}
\end{array}\right)\label{B}
\end{equation}
$\ve\lambda$ and $\ve\mu$ are diffusion tensors that define gradient
dispersion separately in real space $(\ve x)$ and velocity space
$(\ve v)$ respectively. They are functions of time and depend on
the particle response to the carrier flow velocity fluctuations along
its trajectory. The specific forms for $\ve\lambda$ and $\ve\mu$,
together with the drift $\ve\kappa$, based on the LHDI closure scheme
\cite{Reeks92}, are
\begin{align*}
\ve\lambda & =\tau_{p}^{-2}\int_{0}^{t}\ve g^{T}(t-s)\cdot\langle\ve u_{f}^{\prime}(\ve x,\ve v,t\mid s)\ve u_{f}^{\prime}(\ve x,t)\rangle ds\\
\ve\mu & =\tau_{p}^{-2}\int_{0}^{t}\dot{\ve g}^{T}(t-s)\cdot\langle\ve u_{f}^{\prime}(\ve x,\ve v,t\mid s)\ve u_{f}^{\prime}(\ve x,t)\rangle ds\\
\ve\kappa & =-\tau_{p}^{-2}\int_{0}^{t}\ve g^{T}(t-s)\cdot\langle\ve u_{f}^{\prime}(\ve x,\ve v,t\mid s)\partial_{\ve x}\ve u_{f}^{\prime}(\ve x,t)\rangle ds
\end{align*}
where the particle response tensor $\ve g(t-s)$ has elements $g_{ij}(t\mid s)$
corresponding to the displacement at time $t$ in the $j$-direction
when $\tau_{p}^{-1}u_{f}^{\prime}$ is an impulsive
force $\delta(t-s)$ applied in the $i$-direction. In general $\ve g(t-s)$
depends upon the local straining and rotation of the flow. Note the
response tensor based on the Furutsu-Novikov closure scheme \cite{swailes97}
is slightly different in definition (see Bragg \& Swailes \cite{SwailesBragg2012}
for a discussion of the different closure schemes for the kinetic
equation). Following the analysis of M\&P, we consider the case for
dispersion of an instantaneous point source in statistically stationary
homogeneous and isotropic turbulence with a zero external force $\ve F_{ext}=0$,
in which case $\ve g(t)=(1-e^{-t/\tau_{p}})\:\ve I$ and
\begin{align*}
\ve\lambda & =\tau_{p}^{-2}\int_{0}^{t}(1-e^{-s/\tau_{p}})R(s)ds\:\ve I\\
\ve\mu & =\tau_{p}^{-2}\int_{0}^{t}e^{-s/\tau_{p}}R(s)ds\:\ve I\\
\ve\kappa & =0
\end{align*}
where $R(s)$ is the autocorrelation $\frac{1}{3}\left\langle \ve U_{s}^{\prime}(0)\cdot\ve U_{s}^{\prime}(s)\right\rangle $
of the flow velocity fluctuations $\ve U_{s}^{\prime}(s)$ measured along
a particle trajectory. Equations \eqref{pde1}, \eqref{a}, \eqref{B} correspond
to equations (65), (66), (67) in \cite{minier15}. M\&P claim that
equation \eqref{pde1} is ill-posed in the sense that solutions to
this can (will) exhibit unphysical behaviour except in special or,
to use their phrase, `lucky' cases. Specifically, they assert that
solutions $p$ of equation \eqref{pde1} will exhibit finite-time
singularities except for very special initial conditions, for example
with a Gaussian form. Their justification for this claim is based
on an analysis centered round the observation that $\ve B$ is not
positive-definite but possesses both negative and positive eigenvalues.
We show here that their analysis is incorrect.
Firstly we note that equation \eqref{pde1} is \emph{not} a model
for the PDF of $\ve Z_{p}(t)$, but describes precisely how this PDF
must evolve. There is an \emph{exact} correspondence between equation
\eqref{pde1} and the underlying equation of motion \eqref{pem1}.
This equivalence, \textit{i.e.}~the formal derivation of \eqref{pde1}
from \eqref{pem1}, is subject only to the requirement that the field
$\ve u_{f}(\ve x,t)$ is Gaussian. Then, notwithstanding the non-definiteness
of $\ve B$, equation \eqref{pde1} is an exact description of how
$p$, as determined by \eqref{pem1}, behaves. Contrary to previous
claims \cite{minier15} no Gaussian (or other) constraint is necessary
on the initial distribution $p^{0}(\ve z)$ of $\ve Z_{p}(0)$. Thus,
should solutions to \eqref{pde1} exhibit finite-time, or even asymptotic
($t\rightarrow\infty$), singularities when $p^{0}$ is non-Gaussian,
then this feature must be inherent in the system determined by \eqref{pem1}.
Either this singular behaviour is intrinsic to the system, or the
analysis upon which M\&P base their conclusion is incorrect.
To demonstrate that the non-definiteness of $\ve B$, coupled with
arbitrary initial conditions, does not lead to singular solutions
of equation \eqref{pde1} we note that the solution to this equation
can be written
\begin{equation}
p(t;\ve z)=\int\phi(t;\ve z,\ve z^{\prime})p^{0}(\ve z^{\prime})d\ve z^{\prime}\label{superposition}
\end{equation}
where $\phi(t;\ve z,\ve z^{\prime})$ is the fundamental solution
satisfying $\phi(0;\ve z,\ve z^{\prime})=\delta(\ve z-\ve z^{\prime})$.
Now consider the case when $\ve{U}_{s}^{\prime}(t)=\ve{u}_{f}^{\prime}(\ve{X}_{p},t)=\ve{u}_{f}(\ve{X}_{p},t)-\langle\ve{u}_{f}\rangle(\ve{X}_{p},t)$
is treated, \textit{ab initio}, as a Gaussian process. The structure
of equation \eqref{pde1} remains unchanged, except $\bm{\kappa}\equiv\bm{0}$
and $\bm{\lambda}$, $\bm{\mu}$ are independent of $\ve{Z}$ (but,
crucially, they will still depend on $t$). $\ve{B}$ still has negative
eigenvalues. With $\langle\ve u_{f}\rangle$ linear in $\ve x$ (and
$\ve{F}_{\mathrm{ext}}$ constant) the form of $\phi$ is well-documented,
both in general terms and for a number of specific linear flows \cite{reeks05,Darbyshire_Swailes96,hyland99}.
This solution is Gaussian, and it is straightforward to show that
it corresponds exactly, as it must, to the Gaussian form of $\ve{Z}_{p}$
determined by equation \eqref{pem1}. Thus, any singular behaviour
of the general solution $p$, defined by equation \eqref{superposition},
can only be a consequence of degeneracy in the Gaussian form of $\phi$,
and not the form of an arbitrary initial distribution $p^{0}$. Again,
should such degeneracy exist then it would be symptomatic of behaviour
determined by \eqref{pem1}, and not some artifact of the non-definiteness
of $\ve{B}$.\\
There are several flaws in the analysis upon which M\&P base their
claim of ill-posedness: To begin, they consider a form of equation
\eqref{pde1} in which $\ve{B}$ is taken as independent of time,
arguing that this corresponds to stationary isotropic turbulence.
This is not correct. $\ve{B}$ is intrinsically time dependent. This
dependence reflects the non-zero time correlations implicit in the
turbulent velocity field $\ve{u}_{f}$, and the consequent non-Markovian
nature of $\ve{Z}_{p}$. Moreover, and crucially, $\ve{B}(0)=\bm{0}$
unless the initial values $\ve{U}_{p}(0)$, $\ve{U}_{s}(0)$ are correlated.
A detailed analysis of this is given in \cite{hyland99}. So, even
when $\ve{B}\rightarrow\ve{B}^{\infty}$ (constant) as $t\rightarrow\infty$,
it is inappropriate to set $\ve{B}=\ve{B}^{\infty}$ in a formal analysis
of the time problem. Indeed, it is straightforward to show that the
fundamental solution $\phi$ breaks down for arbitrarily small $t$
when this inappropriate approximation is introduced.
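To make this time dependence explicit, consider the exponential autocorrelation $R(s)=\langle u^{\prime2}\rangle e^{-s/T_{L}}$ adopted for the calculations later in this paper. Writing $c=\tau_{p}^{-1}+T_{L}^{-1}$, the integrals appearing in the isotropic forms of $\bm\mu$ and $\bm\lambda$ evaluate to
\begin{align*}
\int_{0}^{t}e^{-s/\tau_{p}}R(s)\,ds & =\langle u^{\prime2}\rangle\,\frac{1-e^{-ct}}{c},\\
\int_{0}^{t}\bigl(1-e^{-s/\tau_{p}}\bigr)R(s)\,ds & =\langle u^{\prime2}\rangle\left[T_{L}\bigl(1-e^{-t/T_{L}}\bigr)-\frac{1-e^{-ct}}{c}\right].
\end{align*}
Both vanish at $t=0$, consistent with $\ve{B}(0)=\bm{0}$, and both increase monotonically to their asymptotic plateaus on the timescales $\tau_{p}$ and $T_{L}$; replacing them by their $t\rightarrow\infty$ values from the outset is precisely the inappropriate approximation referred to above.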
Of course, the non-definiteness of $\ve{B}$ is not altered by taking
this tensor to be $t$ dependent. The eigensolution based transformation
that M\&P introduce can still be invoked. Analogous to equation (71)
in \cite{minier15} we define trajectories $\widetilde{\ve{Z}}_{p}(t)$
with components $(\widetilde{Z}_{p1},\widetilde{Z}_{p2}$) in a transformed
phase space $\widetilde{\ve z}=\left(\widetilde{z}_{1},\widetilde{z}_{2}\right)$
with
\begin{equation}
\widetilde{\ve{Z}}_{p}(t)=\mathrm{P}^{\top}\cdot\ve{Z}_{p}(t)\label{eq:transform}
\end{equation}
where $\mathrm{P}(t)$ is the transformation matrix determined by
the (now time dependent) normalized eigenvectors of $\ve{B}$. Thus,
$\mathrm{P}^{\top}\cdot\mathrm{P}=\mathrm{I}$ and $\mathrm{P}^{\top}\cdot\ve{B}\cdot\mathrm{P}=\bm{\Lambda}=\mathrm{diag}(\omega_{i})$,
with $\omega_{i}$ the eigenvalues of $\ve{B}$. We note that, in
applying this to the 2D case considered by M\&P, it is sensible
to label the two eigenvalues such that $\omega_{1}<0$, $\omega_{2}>0$
since this gives $\mathrm{P}(0)=\mathrm{I}$. By neglecting the time
dependence in $\ve{B}$ M\&P missed this point and chose the opposite
ordering (see equation (69) in \cite{minier15}). Here we take $\omega_{1}<0$.
In using the transform given by equation \eqref{eq:transform} it
is important to note that equation \eqref{pem1} governing $\ve{Z}_{p}(t)$
is not to be interpreted as a stochastic differential equation driven
by a white-noise process, and equation \eqref{pde1} is not a corresponding
Fokker-Planck equation. Clearly this would be nonsense since $\ve{B}$
is not positive-definite. It is more transparent (and correct) to
note that equation \eqref{eq:transform} implies that the PDF $\widetilde{p}(\widetilde{\ve{z}},t)$
of $\widetilde{\ve{Z}}_{p}(t)$ is related to the PDF $p(\ve{z},t)$
of $\ve{Z}_{p}(t)$ by $\widetilde{p}\,\vert\mathrm{J}\vert=p$, where
$\mathrm{J}=\det\bigl[\mathrm{P}\bigr]$ is the Jacobian of the transform
$\widetilde{\ve{z}}=\mathrm{P}^{\top}\cdot\ve{z}$. Since $\mathrm{P}$
is orthogonal we have $\mathrm{J}=1$. The PDF equation for $\widetilde{p}$
is
\begin{equation}
\partial_{t}\widetilde{p}=-\partial_{\widetilde{\ve{z}}}\cdot\widehat{\ve{a}}\widetilde{p}\ +\ \tfrac{1}{2}\partial_{\widetilde{\ve{z}}}\cdot\left(\partial_{\widetilde{\ve{z}}}\cdot\bm{\Lambda}\widetilde{p}\right)\label{pde2}
\end{equation}
where $\widehat{\ve{a}}=\mathrm{P}^{\top}\cdot\widetilde{\ve{a}}+\mathrm{R}\cdot\widetilde{\ve{z}}$,
$\widetilde{\ve{a}}(\widetilde{\ve{z}},t)=\ve{a}(\ve{z},t)$, $\mathrm{R}=\dot{\mathrm{P}}^{\top}\cdot\mathrm{P}$.
This is analogous to equation (72) in \cite{minier15}, except these
authors have not included the time dependence in $\ve{B}$ and so
set $\dot{\mathrm{P}}=0$. We note that $\mathrm{R}$ represents a
rate of rotation matrix, $\mathrm{trace}(\mathrm{R})=0$. In the 2D
model considered, the authors integrate equation \eqref{pde2} over
$\widetilde{z}_{2}$ (corresponding to the transformed variable with
the positive eigenvalue $\omega_{2}$) to obtain (compare with equation
(74) in \cite{minier15})
\begin{equation}
\partial_{t}\widetilde{p}_{r}=-\partial_{\widetilde{\mathrm{z}}_{1}}\overline{\widehat{\mathrm{a}}}_{1}\widetilde{p}_{r}\ -\ \partial_{\widetilde{\mathrm{z}}_{1}}^{2}\tfrac{1}{2}\vert{\omega_{1}}\vert\widetilde{p}_{r}\label{pde3}
\end{equation}
where $\widetilde{p}_{r}$ is the PDF for $\widetilde{Z}_{p1}$ and
$\overline{\widehat{\mathrm{a}}}_{1}\widetilde{p}_{r}=\int\widehat{\mathrm{a}}_{1}\widetilde{p}\,d\widetilde{\mathrm{z}}_{2}$.
Based on the negative diffusion coefficient in equation \eqref{pde3}
M\&P seek to show that this equation and so also equation \eqref{pde1}
is ill-posed. Their argument fails to take into account that the conditional
average $\overline{\widehat{\mathrm{a}}}_{1}$ is a density weighted
average, i.e. its value at $\widetilde{z}_{1}$ depends upon the distribution
of $\widetilde{Z}_{p2}(t)$ at $\widetilde{z}_{1}$, which itself can be a function of $\widetilde{z}_{1}$.
For instance using a more explicit notation we may write
\begin{equation}
\overline{\widehat{\mathrm{a}}}_{1}\equiv\Big\langle\widehat{\mathrm{a}}_{1}(\widetilde{\mathrm{z}}_{1},\widetilde{\mathrm{Z}}_{p2}(t))\Big\rangle_{\widetilde{z}_{1}}\label{eq:abardef}
\end{equation}
where $\langle\cdot\rangle_{\widetilde{z}_{1}}$ denotes an ensemble
average conditioned on $\widetilde{\mathrm{Z}}_{p1}(t)=\widetilde{\mathrm{z}}_{1}$.
What equation (\ref{eq:abardef}) illustrates is that only a sub-set
of all trajectories $\widetilde{\mathrm{Z}}_{p2}(t)$ contribute to
$\overline{\widehat{\mathrm{a}}}_{1}$, namely those that are also
associated with $\widetilde{\mathrm{Z}}_{p1}(t)=\widetilde{\mathrm{z}}_{1}$.
The term $\overline{\widehat{\mathrm{a}}}_{1}$ is therefore affected
by coupling between $\widetilde{\mathrm{Z}}_{p1}(t)$ and $\widetilde{\mathrm{Z}}_{p2}(t)$.
Indeed, in the case where $\widetilde{\mathrm{Z}}_{p1}(t)$ and $\widetilde{\mathrm{Z}}_{p2}(t)$
are statistically decoupled, we have
\begin{equation}
\Big\langle\widehat{\mathrm{a}}_{1}(\widetilde{\mathrm{z}}_{1},\widetilde{\mathrm{Z}}_{p2}(t))\Big\rangle_{\widetilde{z}_{1}}=\Big\langle\widehat{\mathrm{a}}_{1}(\widetilde{\mathrm{z}}_{1},\widetilde{\mathrm{Z}}_{p2}(t))\Big\rangle,\label{abardef2}
\end{equation}
i.e. \emph{all} realizations of $\widetilde{\mathrm{Z}}_{p2}(t)$
would contribute to $\overline{\widehat{\mathrm{a}}}_{1}$. In this
case $\overline{\widehat{\mathrm{a}}}_{1}(\widetilde{z}_{1})$ is purely convective, as
M\&P have assumed. However, in general, $\widetilde{\mathrm{Z}}_{p1}(t)$
and $\widetilde{\mathrm{Z}}_{p2}(t)$ will be statistically coupled,
and as a consequence $\overline{\widehat{\mathrm{a}}}_{1}$ cannot
be treated as an arbitrary convective term. Indeed as we shall show
momentarily, the term $\overline{\widehat{\mathrm{a}}}_{1}$ is associated
with both convective and diffusive fluxes, and its diffusional contribution
offsets that associated with the negative eigenvalue.
By failing to appreciate this particular property of $\overline{\widehat{\mathrm{a}}}_{1}$,
M\&P \cite{minier15} have overlooked a fundamental property of the
particle dispersion process. That is in the dynamical system described
by equation (\ref{pem1}), the particle position and velocity are
not independent. This is reflected in the fixed-frame kinetic equation
(\ref{pde1}) through the term $\partial_{\ve x}\cdot\ve vp$, which couples the
spatial and velocity distributions of the particles. In the same way,
the distributions of the variables $\widetilde{Z}_{p1},\widetilde{Z}_{p2}$
are coupled in equation (\ref{pde3}). The implication of this coupling
is that fluctuations in particle velocity give rise to fluctuations
in particle position, in addition to the fluctuations in particle
position that arise directly from fluctuations in the fluid force
$\tau_{p}^{-1}\ve{U}_{s}$. In the moving frame it is the fluctuations
in $\widetilde{Z}_{p2}$ (with the +ve eigenvalue, $\omega_{2}$)
via the $+ve$ covariance between $\widetilde{Z}_{p1}$ and
$\widetilde{Z}_{p2}$, that overcomes the $-ve$ diffusion associated
with $\widetilde{Z}_{p1}$ (in the absence of the coupling). We note,
for instance, that in equation (\ref{pde1}), the particle flux $\ve vp$
integrated over all particle velocities is expressible as a net gradient
diffusion flux, $\overline{\ve v}p_{r}$, for which the long term ($t\rightarrow\infty$)
particle diffusion coefficient $\varepsilon(\infty)$ in statistically
stationary, homogeneous, isotropic turbulence is given by
\begin{equation}
\varepsilon(\infty)=\tau_{p}\left\{ \left\langle v^{2}(\infty)\right\rangle +\lambda(\infty)\right\} \label{eq:diffusion coefficient}
\end{equation}
where $\left\langle v^{2}(\infty)\right\rangle $ is the variance
of the particle velocity (which for a Gaussian process is given by
$(\tau_{p}/3)\mathrm{trace}(\bm{\mu}(\infty))$, see e.g equations
(78-79) in \cite{Reeks92}), and $\lambda=(1/3)\mathrm{trace}(\bm{\lambda})$.
This simple relationship clearly identifies the two sources of dispersion
independently, the first from fluctuations in the particle velocity
(the kinetic contribution) and the second term $\lambda(\infty)$
arising from fluctuations in $\tau_{p}^{-1}\ve{U}_{s}$ (the turbulent
aerodynamic force contribution). We refer to \cite{Reeks91PoF} for
a detailed analysis of how this relationship defines an equation of
state for the particle pressure and where $\left\langle v^{2}(\infty)\right\rangle$ and
$\lambda(\infty)$ are more correctly identified as the normal components
of stress tensors. We refer to \cite{Reeks83} on how a proper treatment
of the integrated flux terms in the kinetic equation in inhomogeneous
turbulence gives rise to \emph{turbophoresis}, an important mechanism
for particle deposition (in response to the unfounded criticism in both
\cite{minier15,MINIER20161} that the kinetic equation is inappropriate
for modeling particle deposition).
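To illustrate the decomposition in equation (\ref{eq:diffusion coefficient}), consider the exponential autocorrelation $R(s)=\langle u^{\prime2}\rangle e^{-s/T_{L}}$ used for the calculations below. The particle velocity variance is then the classical Tchen result $\left\langle v^{2}(\infty)\right\rangle =\langle u^{\prime2}\rangle T_{L}/(\tau_{p}+T_{L})$, and the $\tau_{p}$-independent long-time limit $\varepsilon(\infty)=\langle u^{\prime2}\rangle T_{L}$ (see \cite{Reeks1977}) then fixes the aerodynamic contribution, with $\lambda$ normalised as it appears in equation (\ref{eq:diffusion coefficient}):
\begin{equation*}
\lambda(\infty)=\frac{\varepsilon(\infty)}{\tau_{p}}-\left\langle v^{2}(\infty)\right\rangle =\frac{\langle u^{\prime2}\rangle T_{L}^{2}}{\tau_{p}(\tau_{p}+T_{L})}.
\end{equation*}
For $\tau_{p}\ll T_{L}$ the aerodynamic term dominates, while for $\tau_{p}\gg T_{L}$ the kinetic term does, although their sum gives the same $\varepsilon(\infty)$ in both limits.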
To demonstrate these features in a quantitative way we consider the
simple 2D case examined by M\&P in which $\langle\ve{u}_{f}\rangle=\bm{0}$,
and $\widetilde{\ve{Z}}_{p}(0)=\widetilde{\ve{z}}^{0}$ fixed. Then
$\widehat{\ve{\mathrm{a}}}$ is linear in $\widetilde{\ve{z}}$, and
$\overline{\widehat{\mathrm{a}}}_{1}\widetilde{p}_{r}$ involves $\overline{\widetilde{\mathrm{z}}}_{2}\widetilde{p}_{r}=\int{\widetilde{\mathrm{z}}}_{2}\widetilde{p}\,d\widetilde{\mathrm{z}}_{2}$.
This can be expressed in terms of convective and gradient diffusive
fluxes (see \cite{swailes97})
\begin{equation}
\overline{\widetilde{\mathrm{z}}}_{2}\widetilde{p}_{r}=\widetilde{\mathit{m}}_{2}\widetilde{p}_{r}-\widetilde{\theta}_{21}\partial_{\widetilde{\mathrm{z}}_{1}}\widetilde{p}_{r}\label{flux1}
\end{equation}
where $\widetilde{\mathit{m}}_{2}$, $\widetilde{\theta}_{21}$ are
components of $\langle\widetilde{\ve{Z}}_{p}\rangle=\widetilde{\ve{m}}=(\widetilde{\mathit{m}}_{1},\widetilde{\mathit{m}}_{2})$
and $\langle(\widetilde{\ve{Z}}_{p}-\widetilde{\ve{m}})(\widetilde{\ve{Z}}_{p}-\widetilde{\ve{m}})\rangle=\widetilde{\Theta}=(\widetilde{\theta}_{ij})$
satisfying
\begin{align}
\dot{\widetilde{\ve{m}}} & =\ve{\tilde{\Gamma}}\cdot\widetilde{\ve{m}}+\widetilde{\ve{k}}\label{m}\\
\dot{\widetilde{\Theta}} & =\ve{\tilde{\Gamma}}\cdot\widetilde{\mathrm{\Theta}}+\left(\ve{\tilde{\Gamma}}\cdot\widetilde{\mathrm{\Theta}}\right)^{\mathrm{T}}+\Lambda\label{Theta}
\end{align}
with $\widetilde{\ve{m}}(0)=\widetilde{\ve{z}}^{0}$, $\widetilde{\Theta}(0)=\bm{0}$.
Here $\ve{\tilde{\Gamma}}=\mathrm{P}^{\mathrm{T}}\cdot\mathbf{A}\cdot\mathrm{P}+\mathrm{R}$,
$\widetilde{\ve{k}}=\mathrm{P}^{\mathrm{T}}\cdot\ve{k}$ with $\ve{k}=(\bm{0},\ve{F}_{\mathrm{ext}})$
and $\mathrm{A}_{11}=\mathrm{A}_{21}=0$, $\mathrm{A}_{12}=1$, $\mathrm{A}_{22}=-1/\tau_{p}$.
Equations \eqref{flux1}, \eqref{m}, \eqref{Theta} allow equation
\eqref{pde3} to be written
\begin{equation}
\partial_{t}\widetilde{p}_{r}=-\partial_{\widetilde{\mathrm{z}}_{1}}\dot{\widetilde{\mathit{m}}}_{1}\widetilde{p}_{r}\ +\ \partial_{\widetilde{\mathrm{z}}_{1}}^{2}\tfrac{1}{2}\dot{\widetilde{\theta}}_{11}\widetilde{p}_{r}\label{pde4}
\end{equation}
The net diffusional effect is therefore determined by the particle
diffusion coefficient $\widetilde{D}_{1}(t)$ of the transformed variable
$\tilde{z}_{1}$ (associated with the negative eigenvalue $\omega_{1}$)
and given by
\begin{equation}
\widetilde{D}_{1}(t)=\tfrac{1}{2}\dot{\widetilde{\theta}}_{11}=(\ve{\tilde{\Gamma}}\cdot\widetilde{\Theta})_{11}-\frac{1}{2}\vert\omega_{1}\vert\label{eq:D11}
\end{equation}
This shows how the `anti-diffusion' associated with $\omega_{1}$
is offset by the contribution emerging from the flux $\overline{\widehat{\mathrm{a}}}_{1}\widetilde{p}_{r}$
associated with the coupling between $\widetilde{Z}_{p1}$ and $\widetilde{Z}_{p2}$
through their covariance $\widetilde{\theta}_{12}$ in equation (\ref{eq:D11}).
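This offset can also be verified numerically. The following minimal sketch (illustrative parameter values; not the code used for the figures below) integrates the covariance equations of the joint Gaussian system $(x,v,\mathrm{U}_{s}^{\prime})$ with an Ornstein-Uhlenbeck model for $\mathrm{U}_{s}^{\prime}$ (autocorrelation $e^{-t/T_{L}}$), reconstructs $\ve B(t)$ from the fixed-frame moment equations, and evaluates $\widetilde{D}_{1}=\tfrac{1}{2}\dot{\widetilde{\theta}}_{11}$, which remains non-negative throughout:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Joint Gaussian system (x, v, u): particle position and velocity
# coupled to an Ornstein-Uhlenbeck fluid velocity u along the path.
tau_p, T_L, u2 = 0.5, 1.0, 1.0         # illustrative values

A3 = np.array([[0.0, 1.0, 0.0],
               [0.0, -1.0/tau_p, 1.0/tau_p],
               [0.0, 0.0, -1.0/T_L]])
Q3 = np.diag([0.0, 0.0, 2.0*u2/T_L])   # white noise drives u only
A2 = A3[:2, :2]                        # phase-space drift for (x, v)

def rhs(t, c):                         # Lyapunov equation dC/dt
    C = c.reshape(3, 3)
    return (A3 @ C + C @ A3.T + Q3).ravel()

ts = np.linspace(0.0, 10.0, 4001)
sol = solve_ivp(rhs, (0.0, 10.0), np.diag([0.0, 0.0, u2]).ravel(),
                t_eval=ts, rtol=1e-10, atol=1e-12)

th11 = np.empty_like(ts)
for i in range(len(ts)):
    C = sol.y[:, i].reshape(3, 3)
    Theta = C[:2, :2]                  # fixed-frame covariances of (x, v)
    dTheta = (A3 @ C + C @ A3.T + Q3)[:2, :2]
    B = dTheta - A2 @ Theta - Theta @ A2.T   # = [[0, lam], [lam, 2 mu]]
    w, P = np.linalg.eigh(B)           # ascending: w[0] = omega_1 <= 0
    th11[i] = (P.T @ Theta @ P)[0, 0]  # moving-frame covariance

D1 = 0.5*np.gradient(th11, ts)         # D_1(t) = (1/2) d(theta~_11)/dt
print("min of D1 over time:", D1.min())   # remains non-negative
\end{verbatim}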
\begin{figure}[h]
\includegraphics[scale=0.5]{Figure_1} \protect\caption{Plots of the ratio $\frac{1}{2}\vert\omega_{1}\vert/(\ve{\tilde{\Gamma}}\cdot\widetilde{\Theta})_{11}$
in equation (\ref{eq:D11}) of the $-ve$ to $+ve$ contributions
to the particle diffusion coefficient $\widetilde{D}_{1}$ of the
transformed variable $\widetilde{Z}_{p1}$(with a $-ve$ eigenvalue)
in the moving frame of reference, as a function of time $t$ for a
range of values of the particle response time $\tau_{p}$. Both $t$
and $\tau_{p}$ are scaled on $T_{L}$, the Lagrangian integral timescale
of the carrier flow measured along a particle trajectory.}
{\label{figure-1}}
\end{figure}
Figure \ref{figure-1} demonstrates that $0\leqslant\frac{1}{2}\vert\omega_{1}\vert/(\ve{\tilde{\Gamma}}\cdot\widetilde{\Theta})_{11}\leqslant1$.
The plots, which show the time evolution of this ratio for a range
of values for $\tau_{p}$ (with $\ve{F}_{\mathrm{ext}}=\bm{0}$),
were obtained from closed form solutions of \eqref{Theta}. These
solutions are constructed by noting that $\widetilde{\ve\Theta}=\mathrm{P}^{\mathrm{T}}\cdot\ve\Theta\cdot\mathrm{P}$,
where the covariances $\ve\Theta=\langle{\ve{Z}}_{p}{\ve{Z}}_{p}\rangle$
in the fixed frame are governed by a set of equations analogous to
equation \eqref{Theta} \cite{Reeks91PoF}, which can be integrated
analytically. We refer to \cite{Reeks91PoF} where analytic solutions
are given for $\ve\Theta$ in terms of $\langle\mathrm{U}_{s}^{\prime}(0)\mathrm{U}_{s}^{\prime}(t)\rangle$,
the autocorrelation of the carrier flow velocity fluctuations sampled
along particle trajectories. The values of the $-ve$ to $+ve$ ratio
plotted in Figure \ref{figure-1} were obtained using an exponential
decay $\exp\left[-t/T_{L}\right]$ for this autocorrelation. For completeness
we also show in Figure \ref{fig:figure-2} for a similar range of
values of $\tau_{p}$, the evolution of the particle diffusion coefficient
$\widetilde{D}_{1}(t)$ in the moving frame of reference indicating
not only that $\widetilde{D}_{1}\geqslant0$, but also that it reaches
an asymptotic limit that is the same for all $\tau_{p}$. This is
is also true of the particle diffusion coefficient $\varepsilon(\infty)$
in the fixed frame of reference, equation (\ref{eq:diffusion coefficient}).
In particular in the normalised units used to express the values for
$\widetilde{D}_{1}$ in Figure \ref{fig:figure-2}, $\varepsilon(\infty)=1$.
This result is universally true for a particle equation of motion
involving the linear drag form in equation (\ref{pem1}) for statistically
stationary homogeneous isotropic turbulence (see \cite{Reeks1977}
where it is $T_{L}$ that depends on $\tau_{p}$). An evaluation of
the asymptotic form of $\langle{\mb{Z}}_{p}{\mb{Z}}_{p}\rangle$ which
is linear in $t$ in this limit shows that
\begin{equation}
\begin{array}{c}
\widetilde{D}_{1}(\infty)=1/(4-2\sqrt{2})\\
\widetilde{D}_{2}(\infty)=1/(4+2\sqrt{2})
\end{array}\label{eq:D1 asymptotic form}
\end{equation}
and is consistent with the forms for $\widetilde{D}_{1}(t)$ in Figure
\ref{fig:figure-2} obtained by solving a coupled set of equations
(\ref{Theta}) for $\widetilde{\ve\Theta}$. That the asymptotic result
in equation (\ref{eq:D1 asymptotic form}) agrees with the results
in Figure \ref{fig:figure-2} provides not only a check for the analytic
solutions used in Figure \ref{fig:figure-2}, but also a proof that
the $+ve$ contribution to $\widetilde{D}_{1}(t)$ will always outweigh
the $-ve$ contribution in equation (\ref{eq:D11}) (i.e. it applies
to all physically acceptable forms of the autocorrelation for $\mathrm{\ve U}_{s}$,
and not just the decaying exponential form of $\langle\mathrm{U}_{s}^{\prime}(0)\mathrm{U}_{s}^{\prime}(t)\rangle$
that we have chosen to obtain our analytical results).
This must be so for two reasons. Firstly the route involving a solution
of the kinetic equation in the fixed frame of reference and the linear
relationship between the fixed and transformed variables always ensures
a realizable Gaussian distribution for the transformed variables.
Secondly, in this calculation the realizability does not itself explicitly
involve or rely in any way on whether one of the eigenvalues $\omega_{i}$
is negative, or on any explicit form for $\langle\mathrm{U}_{s}^{\prime}(0)\mathrm{U}_{s}^{\prime}(t)\rangle$
we might choose, only that the transformation matrix $\ve P$ formed
from the normalised eigenvectors of the diffusion matrix exists and
is well behaved. However the second route via equation (\ref{Theta})
only ensures a realizable Gaussian process if the $+ve$ contribution
to $\widetilde{D}_{1}(t)$ exceeds the $-ve$ contribution. But since
the two methods of calculating $\widetilde{\Theta}$ are in the end
mathematically equivalent to one another, then the $+ve$ contribution
to $\widetilde{D}_{1}(t)$ must always exceed the $-ve$ contribution
in equation (\ref{eq:D11}).
We show the values of the moments $\left\langle \widetilde{Z}_{pi}\widetilde{Z}_{pj}\right\rangle $
in Figure \ref{fig:fig-3} appropriate for the Gaussian function solution
of the kinetic equation in the moving frame (see equation (87) in
\cite{Reeks91PoF}). There is of course no hint of a singularity in
Figure \ref{fig:fig-3}, all 3 moments being smoothly varying, monotonically
increasing in time and linear in time for $t/T_{L}\gg1$.
\begin{figure}
\includegraphics[scale=0.5]{Figure_2} \protect\caption{\label{fig:figure-2}Evolution of the particle diffusion coefficient
$\widetilde{D}_{1}(t)$ evaluated using equation (\ref{eq:D11}) in
the moving frame of reference for a range of values of $\tau_{p}$
(the particle response time normalised on the Lagrangian integral
time scale, $T_{L}$). Time is real time $t$ normalised on $T_{L}$.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.5]{Figure_3}\protect\caption{\label{fig:fig-3}Moments $\left\langle \widetilde{Z}_{pi}\widetilde{Z}_{pj}\right\rangle$
in the moving frame of reference, based on the moments $\left\langle \ve Z_{p}\ve Z_{p}\right\rangle$ for
$\tau_{p}/T_{L}$ in the fixed frame of reference, obtained as solutions of
the fixed-frame kinetic equation (\ref{pde1}) or equivalently by
evaluating $\langle{\ve{Z}}_{p}{\ve{Z}}_{p}\rangle$ from solutions
of the particle equation of motion (\ref{pem1}).}
\end{figure}
The results also illustrate the now obvious result that, at large
times, the two contributions to the diffusional transport are of the
same order in $t$. The claim in \cite{minier15} that equation \eqref{pde3}
reduces to the form of a backward heat equation because $\overline{\widehat{\mathrm{a}}}_{1}\widetilde{p}_{r}\rightarrow0$
as $t\rightarrow\infty$ is invalid. It fails to acknowledge that
$\omega_{1}\partial_{\widetilde{\mathrm{z}}_{1}}\widetilde{p}_{r}\rightarrow0$
at the same rate. \\
\\
Although we have now demonstrated that the transformed kinetic equation
is not ill-posed, we close this section with some comments on M\&P's
use of the Feynman-Kac formula (FKF) and the associated arguments
in \cite{minier15}. In \cite{minier15}, M\&P suggest that equation
(\ref{pde3}) has the structure of a (generalized) Backward Kolmogorov
Equation (BKE), that may be derived from FKF. Noting this, M\&P use
the FKF to construct the solution to equation (\ref{pde3}), using
the terminal condition $\widetilde{p_{r}}(\widetilde{\mathrm{z}}_{1},T)=\Psi(\widetilde{\mathrm{z}}_{1})$,
to obtain ($t\in[0,T]$)
\begin{align}
\widetilde{p_{r}}(\widetilde{\mathrm{z}}_{1},t)=\Bigg\langle\exp\Bigg[\int_{t}^{T}\partial_{\widetilde{\mathrm{z}}_{1}}\overline{\widehat{\mathrm{a}}}_{1}(\mathcal{X}(s),s)\,ds\Bigg]\Psi(\mathcal{X}(T))\Bigg\rangle_{\mathcal{X}(t)=\widetilde{\mathrm{z}}_{1}},\label{FKf}
\end{align}
where $\mathcal{X}(s)$ is a stochastic process defined through
\begin{align}
d\mathcal{X}(s)\equiv\overline{\widehat{\mathrm{a}}}_{1}(\mathcal{X}(s),s)ds+\sqrt{\vert\omega_{1}(s)\vert}dW(s),
\end{align}
and $W(s)$ is a Wiener process. M\&P argue that the solution (\ref{FKf})
implies that only ``special'' initial ($t=0$) conditions are permitted
when solving (\ref{pde3}) since (\ref{FKf}) specifies
\begin{align}
\widetilde{p_{r}}(\widetilde{\mathrm{z}}_{1},0)=\Bigg\langle\exp\Bigg[\int_{0}^{T}\partial_{\widetilde{\mathrm{z}}_{1}}\overline{\widehat{\mathrm{a}}}_{1}(\mathcal{X}(s),s)\,ds\Bigg]\Psi(\mathcal{X}(T))\Bigg\rangle_{\mathcal{X}(0)=\widetilde{\mathrm{z}}_{1}}.\label{FKfic}
\end{align}
From this they conclude that since equation (\ref{FKf}) only applies
for the ``special initial condition'' given by equation (\ref{FKfic}),
then equation (\ref{pde3}) ``is an unstable and ill-posed equation.''
This conclusion is clearly erroneous. Since the FKF employs a terminal
condition in solving the PDE, then provided the PDE is well-posed
as a terminal-value problem, the solution of the PDE at $t=0$ must
of necessity be unique and ``special''. For a well-posed, deterministic
PDE, there exists only one solution at $t=0$ that generates the specified
terminal condition at $t=T$, otherwise solutions to the PDE are not
unique!
If equation (\ref{pde3}) were truly a BKE, then it could indeed be
considered ill-posed since the BKE is in general ill-posed when solved
as a time-forward problem (and equation (\ref{pde3}) is to be solved
as a time-forward problem with a prescribed initial condition).
However, the important point is that although equation (\ref{pde3})
superficially appears to have the structure of a BKE, it cannot be
considered to be equivalent to a BKE for two reasons. First, as we
have already discussed, the term $\overline{\widehat{\mathrm{a}}}_{1}$
is not a general convection term, but has a specific form since it
is a functional of the solution of the equation (\ref{pde3}). This
is in part a manifestation of the fact that unlike the BKE, equation
(\ref{pde3}) is in fact derived from an underlying process that takes
place in a higher dimensional space (i.e the phase-space). Second,
equation (\ref{pde3}) is associated with a non-Markovian process,
whereas the BKE corresponds to a Markov process. The implication of
this is that equation (\ref{FKf}) cannot, at least formally, cover
the entire solution space of the PDE in equation (\ref{pde3}), since
equation (\ref{pde3}) admits solutions that correspond to non-Markov
trajectories in the space $\widetilde{\mathrm{z}}_{1}$, which equation
(\ref{FKf}) does not account for since it constructs solutions via
a conditional expectation over Markov trajectories. Therefore, in
the general case, the FKF cannot be used to say anything categorical
regarding the solutions to equation (\ref{pde3}).
\vspace{1em}
\section{KINETIC and GLM EQUATIONS}
\label{2}
It has been claimed in recent studies of PDF methods \cite{minier15,MINIER20161}
that the kinetic PDF is the marginal of the GLM PDF. This claim is
based on analysis that purports to show that the dispersion tensors
appearing in a kinetic PDF equation derived from the GLM PDF equation
are `strictly identical' to the corresponding tensors emerging directly
from the kinetic modeling approach. If this is so, the claim of ill-posedness
of the kinetic equation contradicts the well-posedness associated
with the Fokker-Planck equation of the GLM. Of course, as we have
just demonstrated, this claim of ill-posedness is ill-founded. Here
we consider the validity of the analysis presented in \cite{minier15}
to demonstrate how the kinetic equation can be derived from the GLM
PDF equation.
The analysis is based on the construction of a closure for $\langle\ve{u}_{s}\mathcal{P}\rangle$
where $\ve{u}_{s}(t;\ve{x})=\ve{U}_{s}(t)-\langle\ve{U}_{s}(t)\vert(\ve{X}_{p}(t)=\ve{x})\rangle$
and $\mathcal{P}(\ve{x},\ve{v},t)=\delta(\ve{X}_{p}(t)-\ve{x})\delta(\ve{U}_{p}(t)-\ve{v})=\delta(\ve{Z}_{p}(t)-\ve{z})$.
We make the simple observation that the ensemble $\langle\cdot\rangle$
to be considered in this closure involves \textbf{\emph{all}} realizations
of the system being considered. It is not, nor can it be interpreted
as an average over only those realizations in which the trajectories
$\ve{Z}_{p}$ satisfy the end-condition $\ve{Z}_{p}(t)=\ve{z}$. Indeed
this is why $\langle\ve{u}_{s}\mathcal{P}\rangle=\langle\ve{u}_{s}\rangle_{\mathbf{z}}\,p(\ve{z},t)$,
where $\langle\cdot\rangle_{\mathbf{z}}$ denotes an average based
on the sub-ensemble containing only those trajectories satisfying
this end-condition. Although self-evident, this point is missed in
the closure formulated in \cite{minier15}. This closure is constructed
by introducing paths $\bm{\omega}(s)=\bm{\omega}(s;\ve{z},t)$ such
that $(\bm{\omega}(t),\dot{\bm{\omega}}(t))=\ve{z}$. These paths
are used to partition particle trajectories; for a given path $\bm{\omega}(\cdot;\ve{z},t)$
define $\bm{\Omega}_{\bm{\omega}}=\{\ve{Z}_{p}:\ve{X}_{p}(s)=\bm{\omega}(s;\ve{z},t)\}$.
In \cite{minier15} a closure is then considered for the sub-ensemble
$\langle\ve{u}_{s}\mathcal{P}\rangle^{\bm{\Omega}_{\bm{\omega}}}$
over those trajectories in $\bm{\Omega}_{\bm{\omega}}$ (see equation
(39) in \cite{minier15}), and this closure is then integrated over
all paths $\bm{\omega}(\cdot;\ve{z},t)$. Thus, only trajectories
satisfying the specified end-condition $\ve{Z}_{p}(t)=\ve{z}$ have
been taken into account. This is wrong. Moreover, the form of the
closure for $\langle\ve{u}_{s}\mathcal{P}\rangle^{\bm{\Omega}_{\bm{\omega}}}$
is questionable. The Furutsu-Novikov formula is invoked;
the correct application of this should result in a closure framed
in terms of the two-time correlation tensor $\mathrm{C}(s,s^{\prime};\ve{z},t)=\langle\ve{u}^{\bm{\omega}}(s)\ve{u}^{\bm{\omega}}(s^{\prime})\rangle^{\ve{u}^{\bm{\omega}}}$
of the process $\ve{u}^{\bm{\omega}}(s)=\ve{u}_{s}(\bm{\omega}(s;\ve{z},t),s)$.
However, in \cite{minier15} this is conflated with another correlation,
namely
\begin{equation}
R(s,\ve{x};s^{\prime},\ve{x}^{\prime})=\langle\ve{u}_{s}(s;\ve{x})\ve{u}_{s}(s^{\prime};\ve{x}^{\prime})\rangle\label{R}
\end{equation}
Again, this is evidently wrong; $\mathrm{C}$ depends on a single
phase-space point, $\ve{z}$, whereas $R$ is defined in terms of
two points $\ve{x}$, $\ve{x}^{\prime}$ in configuration space. Not
only this, the ensembles over which these two correlation tensors
are constructed are different. Finally (and notwithstanding these
apparent oversights), even if the resulting forms of the dispersion
tensors emerging from the construction given in \cite{minier15} were
correct, it is incorrect to claim that these tensors are identical
to those appearing in the PDF equation of the kinetic model. In the
kinetic PDF equation the dispersion tensors are defined in terms of
the basic two-point, two-time correlation tensor of the underlying
fluctuations in the carrier flow velocity field, that is $\mathcal{R}(\ve{x},t;\ve{x}^{\prime},t^{\prime})=\langle\ve{u}_{f}^{\prime}(\ve{x},t)\ve{u}_{f}^{\prime}(\ve{x}^{\prime},t^{\prime})\rangle$.
This makes no reference to particle trajectories and, therefore, $\mathcal{R}$
cannot be deemed identical to $R$ defined by equation \eqref{R}.
\section{Limitations of the GLM for dispersed particle flows}
\label{3}
That the GLM is a model and not a fundamental theory of particle dispersion
in turbulent flows is not an issue of critical concern. Like all models it
has its advantages as well as its limitations. For instance an obvious
advantage is that the GLM PDF includes the flow velocity sampled along a
particle trajectory as an additional statistical variable, as well as the
particle velocity and position. So a solution of the PDF equation in
principle contains more information about the dispersion process than the
solution of the kinetic equation. Most noticeably Simonin and his
co-workers have used this PDF equation to formulate transport equations
for the density weighted mean flow velocity $\overline{\ve U_{s}}$ and the
particle-flow covariances, and obtained remarkably good agreement with
experimental measurement in numerous particle laden flows including jets
and vertical channel flows \cite{Simonin96b,ReeksSimoninFede}. Van Dijk \&
Swailes \cite{vanDijk_Swailes2012} solved this GLM PDF equation numerically
in the case of particle transport and deposition in a turbulent boundary
layer, showing the existence of singularities in the near wall particle
concentration. Reeks \cite{Reeks2005} solved this PDF equation for particle
dispersion in a simple shear and obtained valuable insights into the
influence of the shear on the fluid velocity correlations, as well as into
the dispersion in the streamwise direction, which showed a component of
contra-gradient diffusion \cite{Reeks2005}.\\
\\
Our aim here is to point out the limitations of the GLM for dispersed
gas-particle flows that have been ignored in previous analyses, especially
in \cite{minier15}, and so give a more balanced view of its strengths and
weaknesses when compared to the kinetic approach. We regard these
limitations as areas for improvement of the model rather than inherent
deficiencies. The advantage of models of this sort is that features
inherent in more fundamental approaches like the kinetic approach can be
included in an \emph{ad hoc} manner.\\
\\
Central to the formulation of the GLM PDF equations is the need to
model $\dot{\ve{U}}_{s}$ (equations (23), (24), (25) in \cite{minier15}).
By definition
\begin{equation}
\dot{\ve{U}}_{s}(t)=\Bigg(\frac{D\ve u_{f}}{Dt}-(\ve{u}_{f}-\ve{U}_{p})\cdot\partial_{\mathbf{x}}\ve{u}_{f}\Bigg)_{\bm{x}=\bm{X}_{p}(t)}\label{dotUs}
\end{equation}
with $D\ve{u}_{f}/Dt$ denoting the fluid acceleration field, and
$(\cdot)_{\bm{x}=\bm{X}_{p}(t)}$ denoting that the field variables
inside the parenthesis are evaluated at the particle position. Equation
(\ref{dotUs}) shows that the process $\dot{\ve{U}}_{s}(t)$ is fundamentally
connected to the properties of the underlying flow fields, and as
such is influenced by the spatio-temporal structure of those fields.
This is particularly important since it is known, for example, that
inertial particles interact with the topology of fluid velocity fields
in particular ways, with a preference to accumulate in the strain
dominated regions of the flow \cite{Maxey1987}. Equation (\ref{dotUs})
captures the way in which the process $\dot{\ve{U}}_{s}(t)$ is affected
by the properties of the underlying flow fields. However, in the GLM,
$\dot{\ve{U}}_{s}(t)$ is modeled using a Langevin equation, and as
such, the influence of the spatio-temporal structure of the underlying
fields on $\dot{\ve{U}}_{s}(t)$ is lost. This means then that the
GLM cannot properly capture the role of flow structure on inertial
particle dynamics in turbulent flows, which is known to be very important
for describing the spatial distributions of the particles. In contrast
to this, the kinetic model does capture the role of the spatio-temporal
structure of the flow on the particle motion. For example, the dispersion
tensors $\ve\lambda$, $\ve\mu$ and $\ve\kappa$ capture these effects
through their dependence on the two-point, two-time correlation tensor
of the fluid velocity field. \\
A second, related issue, concerns the handling of the term $(\ve{u}_{f}-\ve{U}_{p})\cdot\partial_{\ve{x}}\ve{u}_{f}$
in the GLM. The role of this term in (\ref{dotUs}) is that it captures
how the particle inertia causes the timescale of ${\ve{U}}_{s}(t)$
to deviate from the Lagrangian timescale of the fluid velocity. For
example, in the limit $\tau_{p}\to0$, one should recover $\dot{\bm{U}}_{s}=(D\ve{u}_{f}/Dt)_{\bm{x}=\bm{X}_{p}(t)}$,
while in the limit $\tau_{p}\to\infty$ (without body forces), one
should recover $\dot{\bm{U}}_{s}=(\partial_{t}\bm{u}_{f})_{\bm{x}=\bm{X}_{p}(t)}$.
In the former case, the timescale of $\bm{U}_{s}$ is the fluid Lagrangian
timescale, whereas in the latter case the timescale of $\bm{U}_{s}$
is the fluid Eulerian timescale. With body forces, e.g. gravity, the
timescale of $\bm{U}_{s}$ for inertial particles would also be affected
by the crossing trajectories effect \cite{Wells_Stock83}.
Conventionally, the term $(\ve{u}_{f}-\ve{U}_{p})\cdot\partial_{\ve{x}}\ve{u}_{f}$
is either neglected, such that the Langevin model relates to $\dot{\bm{U}}_{s}=(D\ve{u}_{f}/Dt)_{\bm{x}=\bm{X}_{p}(t)}$,
or else its effect is modeled by making the timescale in the Langevin
model a function of $\tau_{p}$. Both approaches are problematic:
the first because it neglects the effect of inertia on the timescale
which can be strong, the second because one then requires an additional
model for the timescale of ${\bm{U}}_{s}$ as a function of $\tau_{p}$.
In contrast, in the kinetic model, the role of inertia on ${\bm{U}}_{s}$
is formally accounted for, and is an intrinsic part of the model.
In particular, it is captured through the dependence of $\ve\mu$,
$\ve\lambda$ and $\ve\kappa$ on the correlation tensors of the fluid
velocity \emph{field} evaluated along the inertial particle trajectories.
Another implication of the GLM's use of a Langevin equation to describe
${\ve{U}}_{s}(t)$ is that, as is well known, it cannot accurately
describe the Lagrangian properties of the system in the short-time
`ballistic' limit. For example, the second-order Lagrangian structure
function $\langle\|{\ve{U}}_{s}(t+s)-{\ve{U}}_{s}(t)\|^{2}\rangle$
should grow as $s^{2}$ in the limit $s\to0$, whereas a Langevin
equation predicts that it grows linearly in $s$ in the limit $s\to0$.
Interestingly, this very fact has an important bearing on the claim of
exact correspondence of the PDF of the kinetic equation with
the marginal of the GLM PDF. Even aside from other issues, this claim
cannot be correct, since the kinetic model gives the correct short-time
behavior for $\langle\|{\ve{U}}_{s}(t+s)-{\ve{U}}_{s}(t)\|^{2}\rangle$,
allowing as it does for the general case where the fluid velocity field
is differentiable in time.
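The discrepancy is easily made explicit. For a stationary Ornstein-Uhlenbeck (Langevin) model of a single velocity component with variance $\langle u^{\prime2}\rangle$ and timescale $T_{L}$ (a minimal one-dimensional illustration, not a statement about any particular GLM calibration),
\begin{equation*}
\langle\bigl(U_{s}(t+s)-U_{s}(t)\bigr)^{2}\rangle=2\langle u^{\prime2}\rangle\bigl(1-e^{-s/T_{L}}\bigr)\simeq\frac{2\langle u^{\prime2}\rangle}{T_{L}}\,s\qquad(s\rightarrow0),
\end{equation*}
linear rather than quadratic in $s$, reflecting the non-differentiability of the white-noise driven process.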
In addition to these points, recent criticism of the kinetic equation
failed to appreciate or show any awareness of important consistency and
invariance principles that were important guidelines in the construction
of the kinetic equation and are highly relevant to the limitations and
generality of the GLM PDF equations. The first is that the kinetic
equation should generate the correct equation of state, i.e. the relation
between the equilibrium pressure associated with the correlated turbulent
motion of the particles and their mass density in homogeneous isotropic
statistically stationary turbulence. This can be obtained independently of
the kinetic equation by evaluating the Virial for the particle equation of
motion (see section II in \cite{Reeks91PoF}). This relates the kinematic
pressure $\hat{p}$ to the particle diffusion coefficient $\varepsilon$ via
the particle response time $\tau_{p}$, namely $\hat{p}=\varepsilon\tau_{p}^{-1}$.
The second important consideration is that the kinetic equation should
satisfy Random Galilean Transformation (RGT) invariance
\cite{Kraichnan65,Reeks92,Frisch95}. In the development of legitimate
closure schemes invariance to RGT is crucial to account for the transport
of the small scales of turbulence by the large scales and the
$E(k)\sim k^{-5/3}$ inertial range spectrum. Specifically, RGT means
applying to each realization of the carrier flow a translational velocity,
constant in space and time but varying randomly in value from one
realization to the next. In Kraichnan's traditional usage of RGT the
distribution of velocities is taken to be Gaussian for convenience.
Clearly the internal dynamics should be unaffected by this transformation,
and this should be reflected in the equations that describe the average
behavior of the resulting system. In the case of the kinetic equation the
terms that describe the dispersion due to the aerodynamic driving force
and that due to the translational velocity should be separate. When the
timescale of $\bm{U}_{s}$ is finite, RGT cannot be satisfied by a PDF
equation with the traditional Fokker-Planck structure. Indeed, RGT
invariance implies that the dispersion tensor $\ve B$ in equation
(\ref{pde1}) must have the form given in equation (\ref{B})
\cite{Reeks91PoF}, which is not satisfied in a PDF equation with the
Fokker-Planck structure (for which $\ve\lambda\equiv\bm{0}$).
This failure to preserve RGT invariance means failure to reproduce the
correct equation of state for the dispersed phase. In the case of the
kinetic equation, it implies that the dispersion tensor $\ve B$ in
Eq.~(\ref{pde1}) must have the form given in Eq.~(\ref{B}) for a Gaussian
non-white noise process. See \cite{Reeks91PoF} for the form of the
dispersion tensor $\ve B$ for a non-Gaussian process as a cumulant
expansion in particle-fluid velocity correlations. In the case of the GLM
equations the failure to preserve RGT invariance is associated with the
fluctuating stochastic acceleration field
$\dot{\ve U}_{s}^{\prime}=\dot{\ve U}_{s}-\langle\dot{\ve U}_{s}\rangle$
described by a Fokker-Planck equation. It is highlighted in the case of
short term dispersion in e.g. homogeneous stationary turbulence, where
the GLM predicts an exponential decay for the fluid velocity
autocorrelation (along particle trajectories) and with it a discontinuity
in the slope at $t=0$ and a consequent error in the short term dispersion
of $\mathcal{O}(t)$ as opposed to $\mathcal{O}(t^{2})$. Such a result
cannot arbitrarily be changed since the exponential autocorrelation is a
property of the white noise based GLM equation for all time.
This has some bearing on the equivalence of the two approaches, since the
kinetic approach does not have this limitation and correctly predicts the
short term diffusion. So whereas in the GLM the particle-flow correlations
are calculated and are an intrinsic part of the model, in the kinetic
equation these are prescribed or calculated using independent knowledge of
the statistics of the carrier flow field and a relationship between
Eulerian and Lagrangian correlations. As pointed out in \cite{Reeks2005}
in the case of dispersion in a simple shear flow, if the statistics of the
fluid velocity along a particle trajectory are assumed derivable from a
Gaussian process and the fluid velocity correlations as a function of time
are taken to be the same in either case, then the two approaches are
identical, but only then. Whilst in the kinetic equation one is free in
principle to choose whatever is physically acceptable for the
fluid-particle correlation, the problem remains one of calculating carrier
flow velocity correlations along particle trajectories, given the
underlying Eulerian statistics of the carrier flow velocity field.
\subsection*{The kinetic equation for non-linear drag}
In closing this section, we wish to address the numerous claims made
that the kinetic approach is limited in its application to situations
where the drag force is linear in the relative velocity between particle
and fluid. This is not correct. We refer the reader to Section III
in \cite{Reeks92} on the particle motion, which specifically deals
with the treatment of non-linear drag and how it is used to evaluate
the convective and dispersive terms in the kinetic equation. In particular
the mean and fluctuating aerodynamic driving force are expressed in
terms of the mean density-weighted particle velocity $\overline{\ve v}(\mathbf{x},t)$
and incorporated into the particle momentum equations by suitably
integrating the kinetic equation over all particle velocities. We
refer also to \cite{Reeks1980} where using the kinetic equation for
nonlinear drag, an evaluation is made of the long-term diffusion coefficient
for high-inertia particles in homogeneous, isotropic, statistically
stationary turbulence.
\section{Summary and Conclusions}
This paper is about well-posedness and realizability of the kinetic
equation and its relationship to the GLM equation for modeling the
transport of small particles in turbulent gas-flows. Previous analyses
\cite{minier15,MINIER20161} claim that the kinetic equation is ill-posed
and therefore \emph{invalid} as a PDF description of dispersed two-phase
flows. Specifically, it is asserted that the kinetic equation as given
in equation \eqref{pde1} has the properties of a backward heat equation
and as a consequence its solutions will in the course of time exhibit
finite-time singularities. The justification for this claim is based
on an analysis centered around the observation that the phase space
diffusion tensor $\ve{B}$ in equation (\ref{pde1}) is not positive-definite
but possesses both negative and positive eigenvalues. So we have examined
the validity of assumptions that lead to this conclusion and in particular
the form of the kinetic equation in a moving frame where the PDF $\widetilde{p}(\widetilde{z}_{1},\widetilde{z}_{2},t)$
refers to that of transformed variables $\widetilde{z}_{1},\widetilde{z}_{2}$
measured at time $t$ along the principal axes of $\ve B$ (see equation
(\ref{eq:transform})). Based on the negative diffusion coefficient
in the transformed PDF equation \eqref{pde3} for the marginal distribution
$\widetilde{p_{r}}(\widetilde{z}_{1},t)$, these studies seek to show
that this equation (and so also equation (\ref{pde1})) is ill-posed.
However, a fundamental error is made by assuming incorrectly that the
term $\overline{\widehat{\mathrm{a}}}_{1}$ in equation (\ref{pde3})
is wholly convective. In fact it is a density-weighted variable and,
because $\widetilde{z}_{1}$ and $\widetilde{z}_{2}$ for the particle
motion are coupled in phase space, $\overline{\widehat{\mathrm{a}}}_{1}$
has a gradient diffusive component with a $+ve$ diffusion coefficient
which offsets the component in equation (\ref{pde3}) with a $-ve$
diffusion coefficient. More particularly, we showed that the solution
is a Gaussian distribution with covariances that are the solutions
of a set of coupled equations in equations (\ref{m}), (\ref{Theta}).
Based on these solutions, the resultant convection-gradient diffusion
equation for $\widetilde{p}_{r}(\widetilde{z}_{1},t)$ is given by
equation (\ref{pde4}) with a diffusion coefficient $\widetilde{D}_{1}(t)$
given by the sum of the $+ve$ and $-ve$ contributions defined in
equations (\ref{eq:D11}). Using an exponentially decaying autocorrelation
of the fluid velocity measured along a particle trajectory, we obtained
analytic solutions for the $+ve$ and $-ve$ components of $\widetilde{D}_{1}$
which show that the $+ve$ component \textbf{\emph{always}} outweighs
the $-ve$ component and that $\widetilde{D}_{1}$ is crucially always
$+ve$. The corresponding $+ve$ values of $\widetilde{D}_{1}$ are
shown in Figure \ref{fig:figure-2} which indicate that $\widetilde{D}_{1}(t)$
approaches an asymptotic value that is independent of the particle
response time $\tau_{p}$, evident from the asymptotic expression given in
equation (\ref{eq:D1 asymptotic form}). Significantly we were able
to show that this was a general result for all realizable forms for
the flow velocity autocorrelation along particle trajectories and
as a consequence the kinetic equation is not ill-posed.
Finally, in the course of our examination of the analysis of ill-posedness,
we pointed out a number of issues with the use of the Feynman-Kac
formula (FKF). The application of the FKF to equation (\ref{pde3})
is problematic because equation (\ref{pde3}) is not really a Backward
Kolmogorov Equation. Furthermore, the claim that the FKF solution
to equation (\ref{pde3}) implies that the kinetic equation is only
solvable for special initial conditions is erroneous. The FKF employs
a terminal condition, and therefore there can be only one possible
``initial condition'', or else solutions to equation (\ref{pde3})
would not be unique.
Another important issue was the claim made in \cite{minier15} that
the kinetic equation can be derived from the GLM PDF equation, and
that the GLM is in fact a more general approach than the kinetic approach.
We showed that this is not the case: the assumptions made regarding
the averaging process lead again to a fundamental error in the closure
approximation that negates this claim.
In the final part of our analysis we sought to give a more balanced
appraisal of the benefits of both PDF approaches\textcolor{black}{{}
and in particular to point out the limitations of the GLM for gas-particle
flows that have previously been ignored. We regarded these limitations
to be areas for improvement of the GLM rather than inherent deficiencies.
As we pointed out, the value of models of this sort is that features
inherent in more fundamental approaches like the kinetic approach
can be included in an ad-hoc manner}. Nonetheless, there were terms
that were fundamental to the modeling like the fluctuating convective
strain rate contribution which had been ignored but which contained
valuable information on the relationships between Lagrangian and Eulerian
timescales and the dependence on particle inertia. We suggested how
additional features like particle clustering and drift in inhomogeneous
turbulent flows particularly in turbulent boundary layers might be
included in the model to make it more complete. This is one of the
ways that the kinetic approach can support the PDF dynamic model by
giving specific formulae for these additional features.
\bibliographystyle{plain}
\nocite{*}
\section{\label{sec:introduction}introduction}
\subsection{\label{sec:JTWPA}Josephson Travelling Wave Parametric Amplifiers}
Josephson junction (JJ) based parametric amplifiers (JPAs) \cite{Yurke_PRA_1989,yamamoto_2008_APL_flux_JPA,mutus_white_aip_2014_couple_JPA} have been used in recent years to provide quantum-limited noise performance for quantum optics experiments \cite{Mallet_Beltran_quantum_optics_JPA}, single microwave photon detection \cite{lehnert_single_mw_photon_JPA}, high-fidelity qubit readout for quantum information technologies \cite{siddiqi_hacohen_cqed_jpa,lin_aip_2013_singleshot_qubit}, as well as producing squeezed states \cite{Castellanos-Beltran2008}. These microwave small-signal amplifiers have been shown to exhibit large gain ($>20~$dB) \cite{beltran-reso,Zhou_Esteve_chalmers_APS_2014_High_gain_array} and to approach the quantum noise limit \cite{Teufel_NNano_2009}. Typically these amplifiers have utilised high-Q superconducting resonators, which have a limited bandwidth and dynamic range. Removing the resonant architecture and allowing non-linear interactions along a transmission line can increase both the dynamic range and bandwidth \cite{Eom2012_dynamic_range}. More recently, the Josephson Travelling Wave Parametric Amplifier (JTWPA), based on JJs embedded in a microwave transmission line, has been shown to provide large gain without the bandwidth limitation of the JPAs \cite{Macklin_Science_2015,White_APL_2015,Yaakobi_PRB_2013}.
Implementing unbiased JJs along the transmission line leads to a centrosymmetry of the system and four wave mixing (4WM), whereby the signal and idler frequencies are close to the frequency of the pump, $f_{\mathrm{s}}+f_{\mathrm{i}} = 2f_{\mathrm{p}}$. In this paper we focus primarily on the three wave mixing (3WM) scheme, $f_{\mathrm{s}}+f_{\mathrm{i}}=f_{\mathrm{p}}$, which shifts the pump frequency away from that of the signal and idler, allowing the pump to be filtered more easily from the signal. The 3WM regime also takes advantage of interactions that are inherently stronger than in the 4WM regime. In this regime the phase modulation effect and the signal gain are controlled independently, a process described in detail by Zorin \cite{Zorin_PRAppl_2016}. To access the 3WM regime, rf-SQUIDs are embedded in the transmission line and an externally applied magnetic field modifies the centrosymmetry of the circuit. The circuit as proposed in Ref.~\onlinecite{Zorin_PRAppl_2016} is shown in \Cref{fig:cell}.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{Fig1_Submission.png}
\caption{Circuit schematic showing 3 cells of an N-cell array of rf-SQUIDs implemented in a microwave transmission line design as proposed by Zorin \cite{Zorin_PRAppl_2016}. The blue dotted line highlights one of the repeating cell elements. An externally applied magnetic field is used to shift the operation of the JTWPA to the 3WM regime (in the WRspice simulations, flux coupling to the rf-SQUID is used to bias the JTWPA). Each cell consists of an rf-SQUID with geometric inductance $L_{\mathrm{g}}$ and a Josephson junction (Josephson inductance $L_{\mathrm{J}}$, and junction capacitance $C_{\mathrm{J}}$). Each cell has a capacitance to ground $C_{\mathrm{0}}$. The parameters of the circuit presented in this work are $I_{\mathrm{c}}=5\,\mu\mathrm{A}$ $C_{\mathrm{J}}=60\,\mathrm{fF}$, $C_{\mathrm{0}}=100\,\mathrm{fF}$, and $L_{\mathrm{g}}=57\,\mathrm{pH}$.
\label{fig:cell}}
\end{figure}
\section{\label{sec:method}Modelling the JTWPA}
\subsection{\label{sec:WRspice}WRspice Simulations}
In order to capture the full behaviour of the JTWPA, we use WRspice to simulate the circuit design shown in Fig.~\ref{fig:cell}. WRspice is a SPICE-like circuit simulator which includes a Josephson junction model \cite{WRspice}. Conventional analytical models describing three-wave mixers consider only three mixing tones, the pump $f_{\mathrm{p}}$, signal $f_{\mathrm{s}}$, and idler $f_{\mathrm{i}}$ \cite{Cullen_1960}. Using WRspice we observe that other mixing tones, especially the harmonics of the pump, are generated in the JTWPA. In this work we show that the generation of other mixing tones strongly reduces the signal gain that can be achieved. In WRspice we implement a 2000-cell version of the circuit shown in \Cref{fig:cell}. The rf-SQUIDs are flux-biased such that we operate in the 3WM regime. A strong ($I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx1.97\,\mu\mathrm{A}\approx-70\,\mathrm{dBm}$) pump current at $f_{\mathrm{p}} = 12~$GHz and a weak ($I^{\mathrm{rms}}_{\mathrm{s}}(0)\approx0.07\,\mu\mathrm{A}\approx-96\,\mathrm{dBm}$) signal current at $f_{\mathrm{s}} = 7.2~$GHz are input to the JTWPA at node 0. The values replicate those used as example parameters in the analytical model by Zorin \cite{Zorin_PRAppl_2016}. By performing an FFT of the current entering each node $n$, we observe the behaviour of all tones propagating along the amplifier. We observe wave mixing processes including generation of the idler tone ($f_{\mathrm{i}} = 4.8~$GHz) at the difference of the pump and signal tones. This wave mixing derives solely from the non-linear current-phase relation of the Josephson junction, $I=I_{\mathrm{c}}\sin(\phi)$, and demonstrates the ability of WRspice to model the non-linear behaviour of the system. \Cref{fig:WRsim1} shows a colourmap of the current at each node of the JTWPA as simulated by WRspice. Note that as well as the signal, pump, and idler, we observe significant generation of pump harmonics $f_{\mathrm{2p}}$, $f_{\mathrm{3p}}$, $f_{\mathrm{4p}}$, and $f_{\mathrm{5p}}$. In addition to the pump harmonics, we observe sum-frequency generation associated with the pump and the pump harmonics.
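As a sketch of this tone-extraction step (our illustration; the array name, sampling interval, and helper function are hypothetical, since WRspice itself only exports time-domain waveforms):
\begin{verbatim}
import numpy as np

def tone_amplitudes(i_node, dt, freqs):
    """RMS current of selected tones at every node, via an FFT.

    i_node: (n_nodes, n_samples) array of simulated node currents
    dt:     sampling interval of the transient simulation
    freqs:  tone frequencies to extract, e.g. [4.8e9, 7.2e9, 12e9, 24e9]
    """
    n = i_node.shape[1]
    window = np.hanning(n)                   # reduce spectral leakage
    spec = np.fft.rfft(i_node * window, axis=1)
    fbin = np.fft.rfftfreq(n, dt)
    amp = 2.0 * np.abs(spec) / window.sum()  # single-sided peak amplitude
    idx = [int(np.argmin(np.abs(fbin - f))) for f in freqs]
    return amp[:, idx] / np.sqrt(2.0)        # peak -> rms
\end{verbatim}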
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{Fig2_Submission.png}
\caption{Colourmap showing the current at each node of the JTWPA circuit, as simulated in WRspice. The colourmap shows the expected pump, signal, and idler tones as well as generation of additional tones. The harmonics of the pump (second-, third-, fourth- and fifth-harmonic generation shown) are clearly observed. Sum frequency generation (pump + idler, pump + signal, etc) are also observed. Pump harmonic terms labelled in red, pump-mediated sum-frequency generation labelled in blue.
\label{fig:WRsim1}}
\end{figure}
The signal tone is amplified along the JTWPA from an input amplitude of $I^{\mathrm{rms}}_{\mathrm{s}}(0)\approx0.10\,\mu\mathrm{A}$ to {$I^{\mathrm{rms}}_{\mathrm{s}}(399)\approx0.19\,\mu\mathrm{A}$}, representing a signal gain of 5.5 dB. This gain is less than a third of that predicted in Ref.~\onlinecite{Zorin_PRAppl_2016} for the same pump and signal input amplitudes and JTWPA length. We show here that the generation of the additional terms seen in the WRspice simulations accounts for most of the reduction in amplifier gain observed in WRspice when compared to the gain expected from the analytical theory described in Ref.~\onlinecite{Zorin_PRAppl_2016}. It is therefore clear that, for the given circuit parameters, additional tones must be taken into account in the analytical theory.
\subsection{\label{Numerical model}Extension of the Coupled Mode Equations}
To allow the analytical theory to capture more of the behaviour demonstrated by the JTWPA simulations we extend the coupled mode equations (CMEs) to include additional tones. The theory extension method is similar to that considered by Chaudhuri \textit{et al} for the 4WM case \cite{Chaudhuri_Gao_2015_4wm_CME_extend}. In \cref{tab:CME} the conventional theory as presented in Ref.~\onlinecite{Zorin_PRAppl_2016} is denoted as `CME-1' and includes the pump, signal and idler tones. Each further CME extension (CME-$k$) contains all pump-mediated mixing tones up to and including the $k^{\mathrm{th}}$-harmonic of the pump. Here we extend up to CME-5. The constituent tones of each CME set are shown in \cref{tab:CME}.
\begin{table}[h!]
\caption{\label{tab:CME}Tones included in each Coupled Mode Equation (CME) set. (CME-$k$) contains all pump-mediated mixing tones up to, and including the $k^{\mathrm{th}}$-harmonic of the pump.}
\begin{ruledtabular}
\begin{tabular}{lllll}
CME-1 & CME-2 & CME-3 & CME-4 & CME-5\\
\hline\\
$f_{\mathrm{i}}$ & $f_{\mathrm{i}}$ & $f_{\mathrm{i}}$ & $f_{\mathrm{i}}$ & $f_{\mathrm{i}}$\\
$f_{\mathrm{s}}$ & $f_{\mathrm{s}}$ & $f_{\mathrm{s}}$ & $f_{\mathrm{s}}$ & $f_{\mathrm{s}}$\\
$f_{\mathrm{p}}$ & $f_{\mathrm{p}}$ & $f_{\mathrm{p}}$ & $f_{\mathrm{p}}$ & $f_{\mathrm{p}}$\\
& $f_{\mathrm{p+i}}$ & $f_{\mathrm{p+i}}$ & $f_{\mathrm{p+i}}$ & $f_{\mathrm{p+i}}$\\
& $f_{\mathrm{p+s}}$ & $f_{\mathrm{p+s}}$ & $f_{\mathrm{p+s}}$ & $f_{\mathrm{p+s}}$\\
& $f_{\mathrm{2p}}$ & $f_{\mathrm{2p}}$ & $f_{\mathrm{2p}}$ & $f_{\mathrm{2p}}$\\
& & $f_{\mathrm{2p+i}}$ & $f_{\mathrm{2p+i}}$ & $f_{\mathrm{2p+i}}$\\
& & $f_{\mathrm{2p+s}}$ & $f_{\mathrm{2p+s}}$ & $f_{\mathrm{2p+s}}$\\
& & $f_{\mathrm{3p}}$ & $f_{\mathrm{3p}}$ & $f_{\mathrm{3p}}$\\
& & & $f_{\mathrm{3p+i}}$ & $f_{\mathrm{3p+i}}$\\
& & & $f_{\mathrm{3p+s}}$ & $f_{\mathrm{3p+s}}$\\
& & & $f_{\mathrm{4p}}$ & $f_{\mathrm{4p}}$\\
& & & & $f_{\mathrm{4p+i}}$\\
& & & & $f_{\mathrm{4p+s}}$\\
& & & & $f_{\mathrm{5p}}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
The inclusion of tones in the extended CMEs is described in detail below for the case of CME-2 (inclusion of the second harmonic of the pump, $f_\mathrm{2p}$, and the pump-mediated sum-frequency generations, $f_\mathrm{p+i}$ and $f_\mathrm{p+s}$). We introduce additional propagators $\partial A_{\mathrm{p+i}}/ \partial x$, $\partial A_{\mathrm{p+s}}/ \partial x$, and $\partial A_{\mathrm{2p}}/ \partial x$ in the allowed space of states $\Phi$ where,
\begin{align}
\Phi = \sum_{j=\mathrm{i,s,p,p+i,p+s,2p}}A_{j}(x)e^{i(k_{j} x-\omega_{j} t)} + c.c.,\label{eq:SHGstates}
\end{align}
where $A_{j}(x)$ is the amplitude at dimensionless coordinate $x$ along the JTWPA of the $j^{\mathrm{th}}$-tone in the space of states.
We treat these additional tones as generated tones in the same way as the idler, that is, $A_{\mathrm{p+i}}(0)=A_{\mathrm{p+s}}(0)=A_{\mathrm{2p}}(0)=A_{\mathrm{i}}(0) = 0$. We then idealise our SQUID-embedded transmission line to be purely non-centrosymmetric. This is the 3WM regime, where the coefficient of the cubic non-linearity $\gamma=0$. We follow the process outlined in Refs.~\onlinecite{Zorin_PRAppl_2016,Yaakobi_PRB_2013} to obtain the wave equation describing our transmission line of the form,
\begin{equation}
\begin{split}
\dfrac{\partial ^2 \Phi}{\partial x^2} &- \omega_0^{-2} \dfrac{\partial^2 \Phi}{\partial t^2} + \omega_{\mathrm{J}}^{-2}\dfrac{\partial^4\Phi}{\partial x^2 \partial t^2} \\&+ \beta\dfrac{\partial}{\partial x}\Big[\Big( \dfrac{\partial \Phi}{\partial x}\Big)^2 \Big]+ \cancelto{0}{\gamma\dfrac{\partial}{\partial x}\Big[\Big( \dfrac{\partial \Phi}{\partial x}\Big)^3 \Big]} = 0,
\label{eq:TLwave}
\end{split}
\end{equation}
where,
\begin{align*}
\omega_0 &= \frac{1}{\sqrt{L_{\mathrm{g}} C_0}},
\quad\text{and}\quad
\omega_{\mathrm{J}} = \frac{1}{\sqrt{L_{\mathrm{g}} C_{\mathrm{J}}}},
\end{align*}
and,
\begin{align*}
\beta &= \beta_{\mathrm{L}}\frac{1}{2}\sin(\phi_{\mathrm{dc}}), \quad\text{and}\quad
\beta_{\mathrm{L}} = \frac{2 \pi L_{\mathrm{g}} I_{\mathrm{c}}}{\Phi_0},
\end{align*}
where $\omega_{\mathrm{J}}$ is the plasma frequency and $\omega_{\mathrm{0}}$ is the cutoff frequency, with $L_{\mathrm{g}}$ the geometric inductance of the SQUID loop, $C_0$ the capacitance to ground of the line, $C_{\mathrm{J}}$ the junction capacitance, $I_{\mathrm{c}}$ the junction critical current, and $\Phi_0$ the magnetic flux quantum. By assuming the non-linear component of \Cref{eq:TLwave} acts as a perturbation to the super-linear equation,
\begin{align}
\dfrac{\partial ^2 \Phi}{\partial x^2} - \omega_0^{-2} \dfrac{\partial^2 \Phi}{\partial t^2} + \omega_{\mathrm{J}}^{-2}\dfrac{\partial^4\Phi}{\partial x^2 \partial t^2} = 0
\end{align}
we take the resulting super-linear dispersion solution,
\begin{align}
k(\omega) = \frac{\omega}{\omega_0\sqrt{1-\sfrac{\omega^2}{\omega_{\mathrm{J}}^2}}},
\label{eq:kw}
\end{align}
and the space of allowed states in \cref{eq:SHGstates} as a trial solution to generate the coupled mode equations CME-2.
For frequencies much lower than the junction plasma frequency, $\omega^{2}/\omega_{\mathrm{J}}^{2} \approx 0$, and \cref{eq:kw} simplifies to $k(\omega)\approx \omega/\omega_{0}$. We now construct a simple set of CMEs including the tones $f_{\mathrm{p+i}}$, $f_{\mathrm{p+s}}$, and $f_{\mathrm{2p}}$ to find,
\begin{widetext}
\begin{align}
\dfrac{dA_{\mathrm{i}}}{dx} &= \dfrac{\beta}{2}\Big(k_{\mathrm{p}}k_{\mathrm{s}}A_{\mathrm{p}}A_{\mathrm{s}}^*e^{i(k_{\mathrm{p}}-k_{\mathrm{s}})x} + k_{\mathrm{p}}k_{\mathrm{p+i}}A_{\mathrm{p+i}}A_{\mathrm{p}}^*e^{i(k_{\mathrm{p+i}}-k_{\mathrm{p}})x} + k_{\mathrm{2p}}k_{\mathrm{p+s}}A_{\mathrm{2p}}A_{\mathrm{p+s}}^*e^{i(k_{\mathrm{2p}}-k_{\mathrm{p+s}})x} \Big)e^{-ik_{\mathrm{i}}x},\label{eq:CME2-i} \\
\dfrac{dA_{\mathrm{s}}}{dx} &= \dfrac{\beta}{2}\Big(k_{\mathrm{p}}k_{\mathrm{i}}A_{\mathrm{p}}A_{\mathrm{i}}^*e^{i(k_{\mathrm{p}}-k_{\mathrm{i}})x} + k_{\mathrm{p}}k_{\mathrm{p+s}}A_{\mathrm{p+s}}A_{\mathrm{p}}^*e^{i(k_{\mathrm{p+s}}-k_{\mathrm{p}})x} + k_{\mathrm{2p}}k_{\mathrm{p+i}}A_{\mathrm{2p}}A_{\mathrm{p+i}}^*e^{i(k_{\mathrm{2p}}-k_{\mathrm{p+i}})x} \Big)e^{-ik_{\mathrm{s}}x},\label{eq:CME2-s} \\
\dfrac{dA_{\mathrm{p}}}{dx} &=
\begin{aligned}[t]
\dfrac{\beta}{2}\Big(&-k_{\mathrm{s}}k_{\mathrm{i}}A_{\mathrm{i}}A_{\mathrm{s}}e^{i(k_{\mathrm{s}}+k_{\mathrm{i}})x} + k_{\mathrm{p+s}}k_{\mathrm{s}}A_{\mathrm{p+s}}A_{\mathrm{s}}^*e^{i(k_{\mathrm{p+s}}-k_{\mathrm{s}})x} + k_{\mathrm{p+i}}k_{\mathrm{i}}A_{\mathrm{p+i}}A_{\mathrm{i}}^*e^{i(k_{\mathrm{p+i}}-k_{\mathrm{i}})x} \\ &
+ k_{\mathrm{2p}}k_{\mathrm{p}}A_{\mathrm{2p}}A_{\mathrm{p}}^*e^{i(k_{\mathrm{2p}}-k_{\mathrm{p}})x} \Big)e^{-ik_{\mathrm{p}}x},\label{eq:CME2-p}
\end{aligned}\\
\dfrac{dA_{\mathrm{p+i}}}{dx} &= \dfrac{\beta}{2}\Big(-k_{\mathrm{p}}k_{\mathrm{i}}A_{\mathrm{p}}A_{\mathrm{i}}e^{i(k_{\mathrm{p}}+k_{\mathrm{i}})x} + k_{\mathrm{2p}}k_{\mathrm{s}}A_{\mathrm{2p}}A_{\mathrm{s}}^*e^{i(k_{\mathrm{2p}}-k_{\mathrm{s}})x} \Big)e^{-ik_{\mathrm{p+i}}x},\label{eq:CME2-pi} \\
\dfrac{dA_{\mathrm{p+s}}}{dx} &= \dfrac{\beta}{2}\Big(-k_{\mathrm{p}}k_{\mathrm{s}}A_{\mathrm{p}}A_{\mathrm{s}}e^{i(k_{\mathrm{p}}+k_{\mathrm{s}})x} + k_{\mathrm{2p}}k_{\mathrm{i}}A_{\mathrm{2p}}A_{\mathrm{i}}^*e^{i(k_{\mathrm{2p}}-k_{\mathrm{i}})x} \Big)e^{-ik_{\mathrm{p+s}}x},\label{eq:CME2-ps} \\
\dfrac{dA_{\mathrm{2p}}}{dx} &=\dfrac{\beta}{2}\Big(-\frac{k_{\mathrm{p}}^2A_{\mathrm{p}}^2}{2}e^{i(k_{\mathrm{p}}+k_{\mathrm{p}})x}
- k_{\mathrm{p+i}}k_{\mathrm{s}}A_{\mathrm{p+i}}A_{\mathrm{s}}e^{i(k_{\mathrm{p+i}}+k_{\mathrm{s}})x} - k_{\mathrm{p+s}}k_{\mathrm{i}}A_{\mathrm{p+s}}A_{\mathrm{i}}e^{i(k_{\mathrm{p+s}}+k_{\mathrm{i}})x} \Big)e^{-ik_{\mathrm{2p}}x}.\label{eq:CME2-pp}
\end{align}
\end{widetext}
Neglecting all terms proportional to $A_{\mathrm{p+i}}$, $A_{\mathrm{p+s}}$, and $A_{\mathrm{2p}}$, as well as their derivatives shows that we recover the conventional CMEs used to describe the three wave parametric amplification,
\begin{align}
\dfrac{dA_{\mathrm{i}}}{dx} &= \dfrac{\beta}{2}\Big(k_{\mathrm{p}}k_{\mathrm{s}}A_{\mathrm{p}}A_{\mathrm{s}}^*e^{i(k_{\mathrm{p}}-k_{\mathrm{s}})x} \Big)e^{-ik_{\mathrm{i}}x} ,\label{eq:zi} \\
\dfrac{dA_{\mathrm{s}}}{dx} &= \dfrac{\beta}{2}\Big(k_{\mathrm{p}}k_{\mathrm{i}}A_{\mathrm{p}}A_{\mathrm{i}}^*e^{i(k_{\mathrm{p}}-k_{\mathrm{i}})x} \Big)e^{-ik_{\mathrm{s}}x},\label{eq:zS}\\
\dfrac{dA_{\mathrm{p}}}{dx} &=-\dfrac{\beta}{2}\Big(k_{\mathrm{s}}k_{\mathrm{i}}A_{\mathrm{s}}A_{\mathrm{i}}e^{i(k_{\mathrm{s}}+k_{\mathrm{i}})x} \Big)e^{-ik_{\mathrm{p}}x}\label{eq:zP}.
\end{align}
A similar set of extended equations is constructed for CMEs-3, 4, and 5 (see \crefrange{app:CME3}{app:CME5} for a full list of the equations). Each set of equations, CME-1 (\crefrange{eq:zi}{eq:zP}), CME-2 (\crefrange{eq:CME2-i}{eq:CME2-pp}), CME-3 (\crefrange{eq:CME3-p}{eq:CME3-2p+i}), CME-4 (\crefrange{eq:CME4-p}{eq:CME4-3p+i}), and CME-5 (\crefrange{eq:CME5-i}{eq:CME5-5p}), is solved numerically using the \texttt{ode45} function in MATLAB.
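For concreteness, the following is a minimal sketch of the CME-1 system (\crefrange{eq:zi}{eq:zP}) integrated in Python with SciPy's \texttt{solve\_ivp} in place of MATLAB's \texttt{ode45}; the circuit values follow \Cref{fig:cell}, the dispersionless limit $k=\omega/\omega_{0}$ is assumed, and the input currents are representative values rather than a reproduction of our MATLAB scripts.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Circuit parameters of Fig. 1, flux-biased at phi_dc = pi/2
Lg, C0, Ic, Phi0 = 57e-12, 100e-15, 5e-6, 2.067834e-15
beta_L = 2 * np.pi * Lg * Ic / Phi0
beta = 0.5 * beta_L                # beta = (beta_L / 2) * sin(phi_dc)
w0 = 1 / np.sqrt(Lg * C0)
Z = np.sqrt(Lg / C0)

fp, fs = 12e9, 7.2e9
fi = fp - fs
ki, ks, kp = 2 * np.pi * np.array([fi, fs, fp]) / w0   # k = w / w0

def amp(i_rms, w):    # invert I_rms = |A| w Lg Ic / (sqrt(2) beta_L Z)
    return i_rms * np.sqrt(2) * beta_L * Z / (w * Lg * Ic)

def cme1(x, A):       # three-tone 3WM coupled mode equations
    Ai, As, Ap = A
    dAi = 0.5*beta*kp*ks * Ap*np.conj(As) * np.exp(1j*(kp - ks - ki)*x)
    dAs = 0.5*beta*kp*ki * Ap*np.conj(Ai) * np.exp(1j*(kp - ki - ks)*x)
    dAp = -0.5*beta*ks*ki * As*Ai * np.exp(1j*(ks + ki - kp)*x)
    return [dAi, dAs, dAp]

A0 = np.array([0.0,                           # idler starts from zero
               amp(0.07e-6, 2*np.pi*fs),      # weak input signal
               amp(0.67e-6, 2*np.pi*fp)],     # reduced pump, cf. Fig. 4
              dtype=complex)
sol = solve_ivp(cme1, (0, 2000), A0, t_eval=np.arange(2001), rtol=1e-8)
gain_dB = 20 * np.log10(np.abs(sol.y[1]) / abs(A0[1]))
print(gain_dB[1175])  # CME-1 optimum is quoted near node 1175
\end{verbatim}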
\subsection{\label{sec:Comparison}Comparison of WRspice Simulations and Coupled Mode Equation Solutions}
In order to compare the WRspice simulations with the solutions to the Coupled Mode Equations it is necessary to relate the current $I(n)$ used in WRspice to the amplitude $A(x)$ used in the CMEs with the following relation,
\begin{equation}
I^{\mathrm{rms}}(n) =\lvert A(x)\rvert \frac{\omega L_{\mathrm{g}}I_{\mathrm{c}}}{\sqrt{2}\beta_{\mathrm{L}}Z},
\end{equation}
where $Z=\sqrt{L_{\mathrm{g}}/C_{0}}$ is the impedance of the line.
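For reference, the characteristic quantities entering this conversion follow directly from the circuit values of \Cref{fig:cell} (a quick numerical check of ours):
\begin{verbatim}
import numpy as np

Lg, C0, CJ, Ic = 57e-12, 100e-15, 60e-15, 5e-6
Phi0 = 2.067834e-15

Z = np.sqrt(Lg / C0)                     # line impedance, ~23.9 ohm
beta_L = 2 * np.pi * Lg * Ic / Phi0      # screening parameter, ~0.87
f0 = 1 / (2 * np.pi * np.sqrt(Lg * C0))  # cutoff frequency, ~67 GHz
fJ = 1 / (2 * np.pi * np.sqrt(Lg * CJ))  # plasma frequency, ~86 GHz
print(Z, beta_L, f0, fJ)
\end{verbatim}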
To compare the WRspice simulations results with the solutions to the CMEs we focus first on the interaction between the pump $f_{\mathrm{p}}$ and the second harmonic of the pump $f_{\mathrm{2p}}$. From the WRspice output shown in \cref{fig:WRsim1} it is clear that the $f_{\mathrm{2p}}$ tone is of large amplitude and thus the second harmonic generation of the pump is a dominant mixing mechanism not accounted for in the CME-1 theory. \cref{fig:SHG-fit} shows the solution to CME-5 for the $f_{\mathrm{p}}$ and $f_{\mathrm{2p}}$ tones compared to the WRspice output. The amplitude of both tones is well described by the CME-5 solutions up to node 250, beyond which there is significant disagreement.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{Fig3_Submission.png}
\caption{Comparison between the extended CME-5 and the WRspice simulations of the pump ($f_{\mathrm{p}}$) and second harmonic of the pump ($f_{\mathrm{2p}}$). $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx1.97\,\mu\mathrm{A}$. The amplitude of both tones measured at each node as simulated in WRspice are well described using CME-5 up to node 250.
\label{fig:SHG-fit}}
\end{figure}
There are a number of assumptions made in the original CME-1 theory (and carried through our CME extensions) that are now considered to ensure we are performing WRspice simulations in a regime in which these assumptions are broadly satisfied. The phase of the junction is set by a dc bias of $\varphi_{\mathrm{dc}}=\pi/2$ in order to operate in a purely non-centrosymmetric regime. The ac phase $\varphi_{\mathrm{ac}}$ is assumed to be small with respect to $\varphi_{\mathrm{dc}}$. \cref{fig:phase} shows that for high pump currents, approaching $2\,\mu\mathrm{A}$, $\varphi_{\mathrm{ac}}$ can no longer be considered to be small in comparison to $\varphi_{\mathrm{dc}}$. In addition, so-called optical rectification is absent in the CMEs. Optical rectification is a dc offset generated by all other tones. The consequence of significant optical rectification is a deviation from the optimal $\varphi_{\mathrm{dc}}=\pi/2$ bias point such that the device no longer operates in the purely non-centrosymmetric regime.
Note also that for such high input pump currents as shown in \cref{fig:WRsim1}, for which $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx1.97\,\mu\mathrm{A}$, pump harmonic generation up to $f_{\mathrm{7p}}$ is observed (not shown in figure). As we only extend the CMEs to CME-5, we choose to reduce the input pump current such that pump harmonics beyond $f_{\mathrm{5p}}$ are insignificant and the assumption that $\varphi_{\mathrm{ac}}$ is small compared to $\varphi_{\mathrm{dc}}$ is upheld. \cref{fig:phase} shows that reducing the pump power from $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx1.97\,\mu\mathrm{A}$ to $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx0.67\,\mu\mathrm{A}$ reduces the amplitude of $\varphi_{\mathrm{ac}}$, and maintains a bias point $\varphi_{\mathrm{dc}}=1.57\approx\pi/2$ (non-centrosymmetric regime).
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{Fig4_Submission.png}
\caption{Plots of WRspice junction phase between nodes 500 and 1000. Taken at $t=15\,\mathrm{ns}$. (a) $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx1.97\,\mu\mathrm{A}$ as per Ref \onlinecite{Zorin_PRAppl_2016}. Phase set to $\varphi_{\mathrm{dc}}=\pi/2$. Strong pump current causes large phase swing ($\approx\pm\pi/4$) and dc bias moves away from optimal position. (b) Pump current reduced to $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx1.40\,\mu\mathrm{A}$. Both phase swing and dc offset reduced. (c) Pump current used in simulations to investigate JTWPA signal gain $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx0.67\,\mu\mathrm{A}$. Minimal phase swing observed, dc bias position remaining at optimal position ($\varphi_{\mathrm{dc}}=\pi/2$).
\label{fig:phase}}
\end{figure}
\cref{fig:SHG_inc} shows the current of the pump and of the second harmonic of the pump along the JTWPA. The fit of CME-5 to the WRspice data is greatly improved with the pump power reduced to $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx0.67\,\mu\mathrm{A}$, and remains in agreement over more nodes. \cref{fig:SHG_inc} also shows that reducing the number of allowed states in the set of equations (i.e., CME-5 $\rightarrow$ CME-4 $\rightarrow$ CME-3 $\rightarrow$ CME-2 $\rightarrow$ CME-1) results in increasing deviation between the CME solutions and the WRspice output. These results show the risk of reducing the number of tones represented in the CME set. As the number of tones is reduced, the behaviour of the pump and the second harmonic of the pump is less well described. Indeed, \cref{fig:SHG_inc}(e) shows no depletion of the pump due to second harmonic generation: the CME-1 solutions do not capture the behaviour of the pump tone as simulated by WRspice.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{Fig5_Submission.png}
\caption{Pump, and the second harmonic of the pump currents as a function of node number. $I^{\mathrm{rms}}_{\mathrm{p}}(0)\approx0.67\,\mu\mathrm{A}$. The WRspice simulation of the pump and second harmonic of the pump are shown with dashed lines, and are the same for each panel. The {CME-$k$} solutions are shown with solid lines. Each panel shows a decreasing extension of CME. The agreement between the WRspice simulations and the CMEs reduces as the number of included tones in the CME are reduced. (a) CME-5, (b) CME-4, (c) CME-3, (d) CME-2, (e) CME-1.}
\label{fig:SHG_inc}
\end{figure}
\subsection{\label{Signal}Effect on Signal Gain}
\cref{fig:signal_gain}(a) shows the signal current at each node of the JTWPA for the WRspice simulations and for CME-1 to CME-5. It can be seen that the presence of additional tones in the CMEs leads to a reduction in gain. CME-5 and WRspice are in fair agreement and exhibit the least gain.
To quantify the reduction in gain observed as the CMEs are extended, we choose the optimal gain node of CME-1 ($n=1175$) and compare to the other CMEs and the WRspice simulation at this node. \cref{fig:signal_gain}(b) shows as the number of terms in the CMEs increase we capture more complex behaviour of the signal as well as the detrimental effect on the gain. WRspice includes all tones propagating along the JTWPA, as noted earlier, and shows an even lower gain than CME-5 at node $n=1175$.
\cref{fig:signal_gain}(a) also shows deamplification of the signal at the beginning of the JTWPA up to approximately node 300. We believe this deamplification is due to conservation of energy and the signal power dispersing into some of the other mixing tones. All tones, with the exception of the pump and the signal, are input to the equations with zero initial amplitude, and thus the power required to generate these tones must initially come from the pump and signal. It is observed that as the number of tones included in the CMEs increases, the number of nodes over which the signal deamplifies increases, though the gradient is unchanged.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{Fig6_Submission.png}
\caption{(a) Signal current at each node of the JTWPA circuit for the WRspice simulations and each CME extension. With increasing CME extension we see improved agreement between the CME theory and the WRspice simulations. The traditional analytical theory CME-1 predicts a maximum signal at node $n=1175$ corresponding to a gain of 20 dB. We calculate the JTWPA gain for each CME set from the current measured at this node. (b) Gain measured at $n=1175$ for each CME extension. The WRspice simulation result is shown with a horizontal dashed line (G = 8.9 dB). The measured gain from each CME is reduced as the number of equations in the CME set increases. The gain measured approaches that of the value calculated from the WRspice simulations.
\label{fig:signal_gain}}
\end{figure}
\section{Discussion and Conclusion}
Our extension of the CMEs shows that CME-1 (including only the pump, signal, and idler) is insufficient to capture the complex behaviour of the JTWPA. As we increase the number of terms in the CMEs we approach the behaviour and gain figures observed in WRspice simulations. We note that whilst good agreement between CME-5 and WRspice is achieved, there is still not full agreement. We now speculate below on the sources of the remaining discrepancy.
Only the quadratic term in the current-phase relation of the flux-biased SQUID is included in the formation of the CMEs. Inclusion of the quartic (and higher-order) terms may bring the WRspice and CME results into even better agreement. The dc offset generated by all other tones (optical rectification) is also not included in the CMEs, whilst a dc current is seen in the WRspice simulations for high pump currents. Finally, our choice of CME extensions is based on the WRspice results, which show large-amplitude pump harmonic and pump-mediated tones. Only these tones are included in the CME extensions we have presented in this work. Additional tones, including higher harmonics of the signal, may need consideration for improved agreement between WRspice and the CMEs.
We believe these results will have practical consequences for the design and operation of JTWPAs, in particular for considerations of measurement bandwidth, tone reflections, and optimisation procedures.
To conclude, we demonstrate that a simple consideration of only three tones is insufficient to describe the complex behaviour of the JTWPA. We have presented four further extensions of the coupled mode equations, increasing the number of interacting tones included with each extension. We also used WRspice to simulate the JTWPA and compared its output to that of the extended coupled mode equations. Each further extension of the CMEs agreed more accurately with the WRspice simulation.
We note that whilst good agreement between CME-5 and WRspice is achieved, there is still not full agreement, and we have discussed possible reasons for this. In order to design an amplifier and to obtain representative gain figures, all of the behaviour of the JTWPA should be included. In this regard WRspice should be considered as the most reliable design tool. Both the simulations and the extended CME analytical theory show clearly that the generation of pump harmonics and the pump-mediated sum-frequency generation terms must be considered when designing such a broadband device. In order to achieve the gains required for a usable JTWPA sufficient for quantum-limited amplification, engineering to suppress the pump harmonic generation may need to be implemented. Some of this engineering is already considered in the form of stop-band engineering \cite{White_APL_2015,Zorin_PRAppl_2016,Zorin_flux_drive}.
This work realises a simple, computationally inexpensive method for extending the CMEs to include propagators that have previously been neglected, and demonstrates the utility of WRspice for the simulation of non-linear superconducting circuits, in particular as a design tool for JTWPAs.
\begin{acknowledgments}
This project has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. This work is part of the Joint Research Project PARAWAVE, and we would like to thank members of the consortium, in particular R.~Dolata, M.~Khabipov, C.~Ki{\ss}ling, and A.~B.~Zorin, for useful discussions on the operation of the JTWPA. The work is partially supported by the UK Department of Business, Energy and Industrial Strategy (BEIS). We thank J.~Burnett and J.~C.~Gallop for critical review of the manuscript.
\end{acknowledgments}
\section{Introduction}
Although the hard X-Ray Background (XRB) was discovered before the
Cosmic Microwave Background (CMB), its origin is still not fully
understood.
At energies below 2 keV, the XRB has now been almost entirely
resolved into discrete sources. Most of these are AGN's but other
types of sources (e.g. clusters, narrow emission line galaxies)
may also contribute a significant fraction of the total flux
(Hasinger et al.~1998, McHardy et al.~1998).
The X-ray spectra of these sources are, for the most part,
too soft to account for the shape and total intensity of the XRB
above 2 keV, but absorption can be invoked to remedy this problem
and allow the full energy range of the XRB to be nicely
fitted by the right mixture of absorbed and unabsorbed AGN's
(Comastri et al.~1995). Such models have their limitations,
but even if the detailed nature of the hard X-ray
sources, and their relation to the soft ones, are not
fully understood yet, it is now well established
that the dominant part of the hard XRB also arises from
the integrated emission of discrete sources.
The alternative hypothesis of a cosmic hot gas
origin was ruled out by the observation of
the undistorted CMB spectrum (see Fabian \& Barcons
1992 for review).
In order to account for the total flux of the XRB
(local X-ray sources only produce a very small fraction
of it), X-ray sources must be found throughout
a large enough volume of the universe. This
makes them convenient tracers of the mass distribution
on scales intermediate between those probed by COBE in the CMB
($\sim 1000 $ Mpc), and those probed by optical and IRAS
redshift surveys ($\sim 100 $ Mpc).
The CMB fluctuations originate from redshift $z \sim 1000$
and are due to the Sachs-Wolfe effect on scales larger than
a few degrees. On the other hand, the fluctuations in the XRB
are due to fluctuations in the space density of
X-ray sources which are likely to be distributed
at $z \sim 1-5$. In terms of level
of anisotropy, the XRB is also intermediate between the CMB
fluctuations ($\sim 10^{-5}$ on angular scales of
degrees) and the galaxy density fluctuations (of the order
of unity on a scale of 8 $h^{-1}$ Mpc).
In this paper, we attempt a comparison between the hard band (2-10 keV)
XRB fluctuations seen in the HEAO1-A2 data and a range
of models. We measure the fluctuations in terms of
spherical harmonic coefficients, and make predictions
for the ensemble average of these coefficients using
a formalism presented by Lahav, Piran \& Treyer (1997)
(hereafter LPT97).
For related approaches to measurements
of the XRB fluctuations see Boughn, Crittenden \& Turok (1998)
and Carrera, Barcons \& Fabian (1997) and references therein.
The data analysis and the theoretical formalism are described
in Sections 2 and 3 respectively. Measurements and models are
compared in Section 4. We present our conclusions in Section 5.
For simplicity, we shall assume an Einstein-de Sitter
world geometry ($\Omega=1$, $\Lambda=0$).
We write the Hubble constant as $H_0=100~h~{\rm km/s/Mpc}$.
\section{The HEAO1-A2 Data Analysis}
Details of the HEAO1-A2 data analysis will be described in
a complementary paper (Scharf et al., in preparation).
This section summarizes the procedure.
We use the A2 counts from the
6 months following day 322 of 1977 in
the all-sky survey (c.f. Jahoda 1993). The data were provided
in rectangular ecliptic coordinates in approximately
$0.5 \times 0.25$ degree pixels (at ecliptic
equator), which considerably oversamples the $3 \times
1.5$ degree FWHM beam.
These data are then corrected for a small systematic
instrumental change from day $\sim 430$ onwards.
In this work, we further bin the data into groups of 12 by 12 pixels
(smaller resolution pixels are strongly correlated due to the
instrument beam) for all analyses. At the
ecliptic equator the pixel groups are therefore $6^{\circ}\times
3^{\circ}$. Masking (see below) is however performed initially on the
higher resolution data and the final pixel groups contain
the {\em mean} count rate of all non-zero `sub-pixels' and are
weighted according to their area.
It is difficult to unambiguously separate foreground (Galactic)
from background (extragalactic) information in the HEAO1 X-ray data.
The total number of resolved foreground and background sources
($|b|>20^{\circ}$) is small ($\sim 0.01$ deg$^{-2}$) and a
detailed model of possible large scale Galactic emission is
hard to determine. However, the Galactic 2-10 keV emission model
of Iwan et al.~(1982) predicts variations of no more than $3\%$
of the total flux due to smoothly distributed emission of Galactic
origin at latitudes $|b| >20^o$.
Studies in the soft bands ($<0.75$ keV) by ROSAT (Snowden 1996)
indicate that, at these lower energies, the picture is more complicated,
with Galactic emission at all scales.
In the present work, as a first step towards removing the
foreground, we construct a `mask' using a list of resolved and
identified Galactic X-ray sources (Piccinotti et al.~1982) and a
$|b|<20^{\circ}$ Galactic Plane mask. Regions of sizes varying from
$\sim 8^{\circ}$ to $12^{\circ}$ diameter are excised around
resolved sources, larger regions are removed around the Large and Small
Magellanic Clouds. A total of $\sim 23$\% of the raw all-sky
flux is removed by this `Galactic' mask.
The removal of bright extragalactic sources is also very important
in order to control shot-noise in the angular power estimates (LPT97).
We attempt to do this by further masking out all
61 extragalactic sources (AGN's and clusters) in the catalogue of
Piccinotti et al.~(1982), to a flux limit of $S_{cut}=3\times 10^{-11}$ erg
s$^{-1}$ cm$^{-2}$ (2-10 keV).
An additional $\sim 22$\% of the raw all-sky
flux is removed by this `extragalactic' mask
(the combination of large beam and cautiously generous
source excision
results in $\sim 50 $ deg$^2$ being removed per source).
The final unmasked area is therefore $\sim 55$\% of the sky
with an effective redshift of $\sim 0.02$ (approximately the median
redshift of the Piccinotti et al. sources).
Finally, the dipolar contribution to the anisotropy due to
the motion of the observer with respect to the XRB, the
Compton-Getting (CG) effect, is subtracted from the flux
(Boldt 1987, Jahoda 1993, LPT97).
The amplitude of this dipole is estimated from our observed
motion with respect to the CMB and the observed spectral index
of the hard XRB ($\alpha=0.4$).
We note that the raw HEAO1 dipole (Galactic sources and plane
removed), due to both the CG effect and large scale structure
(see LPT97), points in the direction $l \approx 330^o; b\approx 33^o$.
This can be compared with the CMB dipole (in the Local Group frame)
which points towards $l \approx 268^o; b\approx 27^o$
(based on COBE, Lineweaver et al. 1996). We shall further
discuss the HEAO1 dipole elsewhere (Scharf et al., in preparation).
The HEAO1 data are then expanded in spherical harmonics and
the harmonic coefficients determined (Scharf et al.~1992, LPT97).
\section{Modeling}
To model the large angular scale fluctuations in the XRB,
we follow the formalism proposed by LPT97
using the following new set of assumptions:
{\it (i)} X-ray light traces mass, and we assume linear,
epoch-{\it dependent} biasing between the spatial fluctuations
in the X-ray source distribution, $\delta_x$, and those
in the underlying mass distribution, $ \delta_M$:
$
\delta_x (z) = b_x(z) \delta_M (z).
$
We adopt the following prescription (Fry 1996) for the
time-dependence of the biasing parameter, which we
parametrize in terms
of the present-epoch parameter $b_x(0)$:
\begin{equation}
b_x(z)=b_x(0) +z [b_x(0)-1 ]
\end{equation}
This assumption is somewhat more realistic than
the time-independent bias parameter used by LPT97.
In Fry's model the galaxies are formed at an early epoch
$z_*$ in a biased way, then cluster with time under the influence
of gravity. Note that if $b_x(z_*) =1$ then $b_x(0)=1$.
However, if $b_x(z_*) >1$, biasing decreases with cosmic epoch
(see also Bagla 1998).
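As a concrete illustration: with the present-epoch value $b_x(0)=1.6$ derived below from the Galactic-mask harmonics, Eq.~(1) gives $b_x(1)=2.2$ and $b_x(3)=3.4$, i.e. in this prescription the X-ray sources were substantially more biased at the redshifts which dominate the XRB emission.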
{\it (ii)}
We assume an Einstein-de Sitter cosmology
($\Omega=1$, $\Lambda=0$),
but we use a phenomenological
low-density CDM model (with shape parameter $\Gamma=0.2$)
to represent
the present-day power-spectrum $P(k)\equiv {\sigma_8}^2 {\bar P}(k)$,
where $\sigma_8$ is
the present-epoch
normalization of the mass fluctuations in 8 $h^{-1}$ Mpc spheres.
In this case the
mass power-spectrum evolves according to linear
theory as $P (k ,z)\propto (1+z)^{-2}$.
For the X-ray light fluctuations, $\delta_x({\bf k},z)$,
the above assumptions translate into:
\begin{equation}
\langle \delta_x({\bf k}) \delta_x^*({\bf k'}) \rangle (z) =
(2 \pi)^3 \sigma_8^2 ~b_x^2~(z)~ {\bar P}(k)(1+z)^{-2}
\delta^{(3)}({\bf k} - {\bf k'}),
\end{equation}
where $\delta^{(3)}$ is the three-dimensional delta-function.
{\it (iii)} The X-ray intensity observed in the 2-10 keV energy
band originates from the integrated emission of discrete
X-ray sources out to some high redshift $z_{max}$.
We describe this population by its local luminosity
function $\phi_x(L)$ and spectral index $\alpha$,
and assume simple power-law evolution both in luminosity:
$L(z)\propto (1+z)^e$,
and in number density:
$\phi(L,z)\propto (1+z)^d$.
The local X-ray light density is:
\begin{equation}
\rho_0 = \int_0^{\infty} L\phi_x(L){\rm d}L,
\end{equation}
and the X-ray light density at redshift $z$
{\it observed} in the 2-10 keV energy range is:
\begin{equation}
\rho_x(z)= \rho_0 (1+z)^q
\end{equation}
where $q=d+e-\alpha+1$.
\medskip
We use the above assumptions to predict the ensemble average
of the spherical harmonic coefficients in the XRB.
The total predicted signal results in a large scale structure
component, reflecting the underlying mass distribution,
and a shot noise component due to the discreteness
of the sources (as opposed to the continuous mass distribution):
\begin{equation}
\langle |a_l^m|^2 \rangle_{model} =
\langle |a_l^m|^2 \rangle_{LSS} +\langle |a_l^m|^2 \rangle_{SN}.
\end{equation}
The shot noise term is:
\begin{equation}
\langle |a_l^m|^2 \rangle_{SN}
= {1 \over 4 \pi} \sum_{sources} S_i^2
=\int_0^{S_{cut}} S^2 N(S) {\rm d}S,
\end{equation}
where $N(S)$ is the differential number-flux relation
of the X-ray sources. Bright sources (brighter than a suitable
flux cutoff $S_{cut}$) must be removed to reduce the shot noise.
In turn, removing sources, albeit few and nearby,
will also reduce the large scale structure signal.
However, as we demonstrate below, the shot noise decreases faster
than the signal as more and more sources are removed.
In other words, the large scale structure signal-to-noise
increases when lowering the flux cutoff.
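To make this explicit (an aside of ours, exact only for Euclidean counts): inserting $N(S)=K\,S^{-5/2}$ into Eq.~(6) gives
\[
\langle |a_l^m|^2 \rangle_{SN} = K \int_0^{S_{cut}} S^{-1/2}\,{\rm d}S
= 2\,K\,S_{cut}^{1/2},
\]
so the rms shot noise falls off as $S_{cut}^{1/4}$, whereas the clustering signal, dominated by faint distant sources, is only weakly affected by the removal of the few bright ones.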
The large scale structure component can be written as
an integral over the power spectrum (LPT97):
\begin{equation}
\langle |a_l^m|^2 \rangle_{LSS} =
{(r_H~ \rho_0)^2 \over (2 \pi)^3}
\int k^2 {\bar P}(k) |\Psi_l(k)|^2 {\rm d}k,
\end{equation}
where $r_H= c/H_0$ is the Hubble radius and
the window function $\Psi_l$ contains the model
parameters:
\begin{equation}
\Psi_l(k) = \int_0^{z_{max}} \sigma_8 b_x(z)
(1+z)^{q - 9/2} j_l(k r_c) W_{cut}(z) {\rm d}z~.
\end{equation}
The function $W_{cut}(z)$ accounts for the removal of sources
brighter than $S_{cut}$:
\begin{equation}
W_{cut}(z)= {1\over \rho_0} \int_0^{L_{cut}(z)} L\phi_x(L){\rm d}L,
\end{equation}
where:
\begin{equation}
L_{cut}(z)=4\pi r_c^2(z)S_{cut}(1+z)^{\alpha+1-e}
\end{equation}
and $r_c(z)$ is the comoving radial distance.
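As an illustrative numerical sketch (ours), Eq.~(8) can be evaluated by direct quadrature; for simplicity we take the limit $W_{cut}=1$ (no bright-source removal) and use the Einstein-de Sitter comoving distance $r_c(z)=2 r_H (1-1/\sqrt{1+z})$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

r_H = 2998.0                   # c/H0 in h^-1 Mpc
sigma8, bx0 = 1.0, 1.0
q, zmax = 4.6, 1.3             # density evolution toy model

def r_c(z):                    # comoving distance, Einstein-de Sitter
    return 2.0 * r_H * (1.0 - 1.0 / np.sqrt(1.0 + z))

def bx(z):                     # epoch-dependent biasing, Eq. (1)
    return bx0 + z * (bx0 - 1.0)

def psi_l(k, l):               # window function of Eq. (8), W_cut = 1
    f = lambda z: sigma8 * bx(z) * (1.0 + z)**(q - 4.5) \
                  * spherical_jn(l, k * r_c(z))
    val, _ = quad(f, 0.0, zmax, limit=200)
    return val

print(psi_l(k=1.0 / 600.0, l=2))   # quadrupole near k^-1 ~ 600 h^-1 Mpc
\end{verbatim}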
For the monopole ($l=0$), we recover the `Olbers integral':
$ A_0= \langle |a_0^0|^2 \rangle^{1/2}_{LSS}= \bar I \sqrt {4 \pi}$, where
$\bar I$ is the mean total intensity of the XRB. In a flat universe
and for $q\ne2.5$, Eq.~4 implies:
\begin{equation}
{\bar I}
= { \rho_{0} r_H \over 4 \pi} \times
{(1+z_{max})^{q-2.5} -1 \over q-2.5} .
\end{equation}
The higher order multipoles
characterize the spatial fluctuations of the XRB on angular scales
$\sim\pi/l$.
In order to compare model expectations with HEAO1 observations,
we further convolve our predictions
with the foreground masks described above:
\begin{equation}
\langle |c_l^m|^2 \rangle =\sum_{l' m'} |W_{ll'}^{mm'}|^2
\langle |a_{l'm'}|^2 \rangle,
\end{equation}
where the $W_{ll'}^{mm'}$ tensor models the mask
(Peebles 1980, Scharf et al.~1992, Baleisis et al.~1998).
Finally, the masked harmonics and shot noise are
normalized over the monopole. We use the following notation:
\begin{equation}
C_{SN} = {\langle |c_l^m|^2 \rangle_{SN}^{1/2} \over A_0}
\end{equation}
for the shot noise, and for the full signal:
\begin{equation}
C_{l}= {(\langle |c_l^m|^2 \rangle_{LSS}+
\langle |c_l^m|^2 \rangle_{SN})^{1/2} \over A_0}.
\end{equation}
\section{Constraints on model parameters}
The local luminosity function in the 2-10 keV energy band
can be fitted by a double power-law function between
$\sim 10^{42}$ and $10^{48}~h^{-2}~{\rm ergs~s^{-1}}$
(Grossan 1997, Boyle et al.~1998).
The integrated emission of local sources in this range of luminosity
is: $ \rho_0 \approx 10^{39}~ h~ {\rm ergs~s^{-1}~Mpc^{-3}}$.
The total intensity of the 2-10 keV XRB is
${\bar I}= 5.2 \times 10^{-8} {\rm ~ergs~s^{-1}~cm^{-2}~sr^{-1}}$,
and its spectral index is $\alpha=0.4$ (Boldt 1987).
Boyle et al.~(1998) find evidence for strong cosmological
evolution matching a `pure' luminosity evolution model:
$L_x \propto (1+z)^e$ with $e \approx 2$ out to a redshift
of $\sim 2$, followed by a declining phase.
This scenario would have to hold to $z_{max}\approx 6.4$
in order to account for the total XRB intensity (Eq.~11),
and thus requires other processes or populations whose X-ray
emission would add to that currently observed.
On the other hand, Hasinger (1998) argues that
strong {\it number} density evolution:
$\phi(L,z)\propto (1+z)^d$ with $d \approx 4$,
provides a better fit to the ROSAT deep sky survey data,
implying that the whole XRB intensity should
be accounted for by $z_{max}\approx 1.3$.
New results from the Hamburg/ESO survey show that
QSO's keep evolving strongly to $z\sim 3$ and that none of
the above simple parameterizations is an acceptable representation
of the data (Wisotzki, private communication).
For simplicity however, we shall use the following two toy models
to bracket more realistic X-ray source evolution scenarios:
on the one hand, a `pure' luminosity evolution model with
$q=e-\alpha+1= 2.6$ (see Eq.~4) and $z_{max}=6.4$;
on the other hand, a `pure' density evolution model with
$q=d-\alpha+1= 4.6$ and $z_{max}=1.3$.
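The quoted redshift limits follow directly from Eq.~(11); a short numerical check (ours; with the rounded values of $\rho_0$ and $\bar I$ given above it recovers $z_{max}\approx 6.4$ and $1.3$ only to within that rounding):
\begin{verbatim}
import numpy as np

MPC_CM = 3.0857e24   # cm per Mpc
rho0 = 1e39          # h erg/s/Mpc^3 : local 2-10 keV emissivity
r_H = 2998.0         # h^-1 Mpc      : the h factors cancel below
I_bar = 5.2e-8       # erg/s/cm^2/sr : total 2-10 keV XRB intensity

def zmax(q):
    """Invert Eq. (11): redshift needed to build up the whole XRB."""
    olbers = I_bar * 4.0 * np.pi * MPC_CM**2 / (rho0 * r_H)
    return (1.0 + (q - 2.5) * olbers) ** (1.0 / (q - 2.5)) - 1.0

for q in (2.6, 4.6):  # luminosity / density evolution toy models
    print(f"q = {q}: z_max = {zmax(q):.1f}")
\end{verbatim}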
We compute the differential number counts relation for both models.
Both are in good agreement with the Euclidean curve,
$N(S) \propto S^{-2.5}$, derived from ASCA deep sky
observations to $S \sim 5\times 10^{-14}{\rm ergs~s^{-1}cm^{-2}}$
(e.g. Georgantopoulos et al.~1997).
At fainter fluxes both predicted log$N$-log$S$ relations slightly bend
down to $S\sim 5\times 10^{-16}{\rm ergs~s^{-1}cm^{-2}}$,
at which flux
the total intensity of the XRB is accounted for. From these number
counts, we derive the shot noise level as a function of
flux cutoff (Eq.~6).
\section{Results}
Figure 1 shows the normalized HEAO1 XRB
harmonics measured through the `Galactic' mask
(upper panel) and
through the full, foreground-removed mask
(lower panel) respectively.
The lower shot noise from bright source removal is immediately
apparent as a lowering in the overall harmonic amplitude.
The various lines represent our model predictions for the shot
noise and large scale structure signal, as described below.
Both evolution scenarios yield similar shot noise values within 5\%.
Masking induces the otherwise constant shot noise to decrease
slightly (by less than 10\%) towards the high $l$'s.
As the difference between the two evolution scenarios
and the gradient due to masking are negligible,
we have plotted the mean shot
noise value as one horizontal line on both panels in Fig.~1:
for $S_{cut}=3\times 10^{-10} {\rm ergs~s^{-1}~cm^{-2}}$
(i.e.~Galactic sources removed),
$C_{SN}\approx 1.1\times 10^{-3}$;
for $S_{cut}=3\times 10^{-11} {\rm ~ergs~s^{-1}cm^{-2}}$
(i.e. the flux limit of the Piccinotti et al.~1982 catalogue),
$C_{SN}\approx 5.2\times 10^{-4}$.
The predicted shot noise levels (masked and normalized)
are in very good agreement with the flattening of the
measured signal in both cases.
We verified this by Maximum Likelihood analysis
over the harmonic range $10 \leq l \leq 20$, ignoring the
clustering term and leaving the shot-noise level as
a free parameter. We find that the derived shot-noise
for both masks is within 10 \% of the one predicted from
the counts. We also attempted a Maximum Likelihood over the range
$1 \leq l \leq 20 $ with 2 free parameters:
$b_x(0)$ and the shot noise level $C_{SN}$, but the
2 parameters are strongly coupled.
As another independent measure of the shot-noise level
we have generated a `noise' map, randomly drawing fluxes out of the real
flux distribution using the $S_{cut}=3\times 10^{-11}$ data (i.e. with
both Galactic and extragalactic sources from the Piccinotti et al.~catalogue
removed; The expected CG effect is also removed, as described previously).
The noise map is then masked as in the data and the $C_l$'s determined.
The `1-$\sigma$' errors on the mean over 100
realisations are in excellent agreement with our shot noise
estimate independently derived from the source counts (Eq.~6).
The first harmonics, $l=1-3$, are well above the shot-noise
level on both panels, but higher order harmonics are just over
1-$\sigma$ away from the `noise' estimate.
(Note that the harmonic measurements are not independent,
due to `cross talk' introduced by the mask.)
Although contamination from Galactic emission or masking may
be non-negligible, it is nevertheless encouraging that the
shape of the harmonic spectrum over all $l's$ is
qualitatively in agreement with the prediction of
an extragalactic clustering signal.
All models for the spherical harmonic spectrum plotted in Fig.~1
assume a low density CDM model with shape parameter $\Gamma=0.2$
and normalization $\sigma_8=1.0$. The present-epoch bias
parameter $b_x(0)$ (Eq.~1) was left as a free parameter and
its optimal value derived from Maximum Likelihood over the range
$1 \leq l \leq 10$ (neglecting the mask, cf. Scharf et al. 1992).
Both evolution scenarios yield the same best fit values:
$b_x(0)=1.6$ for the Galactic mask, and $b_x(0)=1.0$ for the full mask.
To illustrate our estimate range,
predictions are plotted for these 2 values on $both$
panels (Galactic mask and full mask): upper lines are
for $b_x(0)=1.6$ and lower lines for $b_x(0)=1.0$.
The dotted lines represent the density evolution scenario
($q=4.6$) and the long-dashed lines
show the luminosity evolution scenario ($q=2.6$).
Assuming a standard CDM power spectrum yields $b_x(0) = 1.8$ and $1.2$
for Galactic and full masks, respectively. This is not surprising,
as low density CDM has more power on large scales than standard CDM.
For most of the models considered above,
the reduced $\chi^2$ is near unity,
suggesting acceptable fits.
However, the measured multipoles do show more curvature
as a function of $l$ than our models predict.
This may be explained by a number of reasons:
the low order multipoles measured in the XRB may result
from local (Galactic?) structures unaccounted for by the masks;
source clustering evolution may be significantly
stronger than the linear theory assumption we have made;
or else, the evolution parameters we have used
for the X-ray source population are overestimated,
at least on part of the redshift range.
Note also that the $b_x(0)=1$ models, which correspond to
constant biasing (see Eq.~1), are flatter than
the epoch-dependent biasing models. Therefore we expect
stronger bias evolution to improve the fit to
the data.
Figure 2 shows the signal-to-noise as a function of
$l$, assuming $b_x(0) =1$ and $\sigma_8=1$ (for the purpose of
illustration). In order to compare the above models with predictions
at fainter flux limits, we do not use the existing masks.
In the lower two panels, we show the signal-to-noise expected
if sources brighter than $S_{cut}=3\times 10^{-12}$ and
$3\times 10^{-13}{\rm ~ergs~s^{-1}cm^{-2}}$ respectively
(i.e. 1 and 2 magnitudes lower than the present data)
could be removed from the X-ray all-sky survey. We predict
the signal-to-noise to increase as
$S_{cut}$ decreases. The multipoles are also expected to
be detectable above shot noise for an increasing range
of $l$. As the present results suggest $b_x(0) \ge 1$,
the signal-to-noise ratios plotted here may be taken as lower limits.
Luminosity evolution and density evolution become increasingly
distinct as we remove fainter and fainter sources.
If sources evolve in luminosity, a given flux cutoff will
span a larger redshift range than if they don't or if they
only evolve in number, and therefore
a larger volume of space will be excluded from the analysis.
Hence a weaker signal-to-noise in the case of luminosity
evolution than in the case of density evolution.
We conclude that an X-ray all-sky survey in the hard band
(to minimize Galactic contamination)
resolving sources only one magnitude fainter than HEAO1,
is likely to reveal large scale fluctuations
in the background to significantly higher
order than the current data.
Figure 3 shows the amplitude of rms fluctuations,
$({ {\delta \rho} \over {\rho} })^2 \sim k^3 P(k) $,
derived at the
effective scale $k^{-1} \sim 600 h^{-1}$ Mpc probed by the XRB quadrupole
(cf. LPT97 Figure 1).
For the Galactic mask and either evolution model we find
$\sigma_8 b_x(0) \sim 1.8 $ and $1.6$ for standard and low density
CDM models, respectively
(marked by the top and
bottom crosses).
The fractional error on the XRB amplitudes (due to the
shot-noise of the X-ray sources) is about 30\%.
We see that the observed fluctuations in the XRB
are roughly as expected from interpolating between the
local galaxy surveys and the COBE CMB experiment.
The rms fluctuations
${ {\delta \rho} \over {\rho} }$
on a scale of $\sim 600 h^{-1}$Mpc
are less than 0.2 \%.
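As a crude order-of-magnitude check of this number (ours, purely illustrative, and not part of the analysis): take a scale-invariant spectrum $P(k)\propto k$, normalize its amplitude to unity at $k^{-1}=8\,h^{-1}$Mpc, and ignore the window-function integral that properly defines $\sigma_8$; the implied amplitude at $k^{-1}\sim 600\,h^{-1}$Mpc is then consistent with the quoted bound.
\begin{verbatim}
import numpy as np

# toy check of (delta rho / rho)^2 ~ k^3 P(k) with P(k) = A k,
# crudely normalized so that the amplitude is ~1 at k = 1/(8 Mpc/h)
k8, k600 = 1 / 8.0, 1 / 600.0
A = 1.0 / k8**4                      # from 1 ~ k8^3 * (A k8)
print(np.sqrt(k600**3 * A * k600))   # ~ (8/600)^2 ~ 1.8e-4, below 0.2%
\end{verbatim}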
Our estimate of the fluctuation amplitude derived from HEAO1 is used elsewhere
(Wu, Lahav \& Rees 1998) to show that the fractal
dimension of the universe is very close to 3 (to within $10^{-4}$)
on the very large scales.
This XRB measurement strongly supports the validity
of the Cosmological Principle (Peebles 1993).
\section{Discussion}
We report on the possible detection of low-order spherical harmonic
modes in the HEAO1 XRB map. Although one must be cautious about the
interpretation of the signal as being purely extragalactic,
it is encouraging that the measurements are in agreement with
{\it a priori} predictions. We find that the XRB fluctuations
on scales of a few hundred Mpcs are consistent with the
result of interpolating between fluctuations derived
from local galaxy surveys and those derived from the
COBE CMB measurements.
Various models for the matter density fluctuations and
the evolution of X-ray sources yield
a present-epoch biasing factor of typically $b_x(0) \sim 1-2$.
The present analysis allows for epoch-dependent biasing,
which seems to give a more reasonable fit than a time-independent
biasing model, although both schemes yield similar values of $b_x(0)$.
Regarding models of density fluctuations,
as expected the low density CDM model requires lower $b_x(0)$
than standard CDM, which has less power on large scales.
We note that our values for the local bias factor $b_x(0)$ are
smaller than those derived from the dipole anisotropy
of the local AGN distribution (Miyaji 1994)
and from HEAO1 assuming epoch-independent biasing
(Boughn et al. 1998). On the other hand, our results
are in rough agreement with Carrera, Fabian \& Barcons
(1997) who also find small values of $b_x(0)$,
although using quite different techniques and data sets,
and assuming epoch-independent biasing.
We predict that an X-ray all-sky survey
resolving sources a factor of 10 fainter than HEAO1,
may allow us to measure large scale fluctuations
in the XRB to order $l\sim 20$ ($\theta \sim \pi/20$).
The present data cannot be used with lower flux thresholds as our
method of eliminating sources also reduces sky coverage.
There are two experimental approaches which can allow a
similar analysis to be done while employing a lower flux threshold,
thus reducing the shot noise, and allowing a significant measurement
of large scale structure over a larger range of $l$ values.
Barcons et al. (1997) propose an experimental concept that maps the X-ray
sky with a collimated proportional counter, substantially similar to the
A2 experiment, but with a smaller field of view. Such an experiment can
mask individual sources with a smaller penalty in terms of sky coverage
(and signal) and can therefore mask a larger number, reaching a fainter
limiting flux. But it cannot identify sources an order of magnitude fainter
than our flux threshold by itself, and therefore would rely on an externally
generated catalogue such as that produced by the
ABRIXAS survey (Tr\"umper et al.~1998). The advantages of this approach are
the relatively small size and simplicity of the experiment.
The combination of ABRIXAS with the experiment proposed by
Barcons et al.~(1997)
can eliminate sources down to a flux threshold
$\sim 1 \times 10^{-12} {\rm erg}\,{\rm sec}^{-1}\,{\rm cm}^{-2}$.
For the expected density of sources (0.3 per square degree)
and solid angle of the beam for this experiment (1 square degree),
the fraction of
the sky that will be masked is comparable to the analysis presented here.
This experiment represents the limiting capability of a non-focussing
mission.
The time required to obtain a certain precision per pixel scales inversely
with the solid angle of the beam, and the fraction of the sky which is masked
out remains large.
Removing sources at fainter thresholds requires an imaging experiment capable
of identifying, and excluding, faint sources without removing an entire
square degree of sky coverage.
Due to the relative inefficiency of X-ray telescopes (the effective
area is typically $\le$ 25\% of the geometric collecting area above 2 keV
(Serlemitsos \& Soong 1996)) a large collecting area
(i.e. many telescopes) is
required to obtain the same precision in the measurement of surface
brightness.
A collection of imaging telescopes capable of measuring surface brightness
to 1\% per square degree requires nearly 3 times the geometric collecting
area of the proportional counter experiment, but is plausibly within the
constraints of a NASA Medium Explorer mission (Jahoda 1998). An additional
advantage of an imaging experiment is the simultaneous production of
a catalogue
of sources, useful in their own right as tracers of large scale structure.
An experiment capable of identifying sources as faint as
$3 \times 10^{-13} ~{\rm erg~cm^{-2}~sec^{-1}}$
in the 2-10 keV band would generate
an all-sky catalogue with $\ge 10^5$ hard X-ray selected sources.
Although `old' and not optimally suited for this analysis,
the current data have allowed us to demonstrate that
future X-ray missions stand a good chance of revealing
significant structure in the matter distribution.
Not only are we likely to finally understand the long-sought
sources of the hard X-ray background in the coming
years, but we may also be able to get a strong hold on the
underlying matter distribution in an otherwise little
explored range of scales.
\acknowledgments
The authors thank B. Boyle, M. Rees and K. Wu for
helpful discussions. We thank the referee, D. Helfand,
for a careful reading and constructive suggestions that have
improved this paper.
\section{Introduction}
In a recent paper \cite{BorFer} the authors introduced a Markov process on a system of interlacing particles. This model contains many parameters, creating a rich pool of interesting particular examples, and at the same time it has an integrable structure that allows for explicit computations. It is the purpose of this paper to study a special case of this model. The interest of this model lies in the fact that for large time the particles will fill a domain that has a tacnode on the boundary. A somewhat similar situation also occurs in the case of non-intersecting Brownian paths with multiple sources and sinks \cite{AFvM}. The local process at the tacnode in this model is not understood, although a conjecture is given in \cite{AFvM}. The integrability of the model we consider allows us to compute the local process around the tacnode. This is the main result of this paper.
We consider an evolution on particles that are placed on the grid
\begin{align}
\mathcal G =\left\{(x,m) \mid m=1,2,\ldots \quad x\in \mathbb{Z}+\frac{m+1}{2}\right\}.
\end{align}
Hence, if $(x,m) \in \mathcal G$, then $x$ takes integer values for odd values of $m$ and half-integer values for even values of $m$. At each horizontal $m$-section we put $m$ particles and denote their horizontal coordinates by $x_k^m$ for $k=1,\ldots, m$. The evolution is such that at each time the system of particles satisfies the interlacing condition
\begin{align}
x_{k-1}^{m}<x_{k-1}^{m-1}<x_k^{m} ,\qquad k=2,\ldots, m, \qquad m=2,\ldots
\end{align}
At time $t=0$ we put the particles at positions $x_k^m=-(m+1)/2+k$ as shown in Figure \ref{fig:particles}.
\begin{figure}[t]\begin{center}
\subfigure[]{\includegraphics[scale=0.35]{demo0}} \label{fig:init} \hspace{1cm}
\subfigure[]{ \includegraphics[scale=0.35]{demo1}} \label{fig:pointconfig}
\end{center}
\caption{(a) The initial condition and (b) an example of a point configuration after some time.} \label{fig:particles}
\end{figure}
The evolution of the particles is as follows: each particle has two independent exponential clocks, a left and a right clock respectively. If the right (left) clock rings, the particle attempts to jump to the right (left) by one. But in doing so it is forced to respect the interlacing condition according to the following two rules: if the right (respectively left) clock of the particle at $x_{k}^m$ rings, then
\begin{enumerate}
\item if $x_k^m=x_{k}^{m-1}-1/2$ (or $x_k^m=x_{k-1}^{m-1}+1/2$ in case the left clock rings) then it stays put.
\item otherwise it jumps to the right by one and so do all particles $x_{k+l}^{m+l}$ with $x_{k+l}^{m+l}=x_{k}^m+l/2$ for $l=1,2,\ldots$ (in case the left clock rings all particles $x_{k}^{m+l}$ with $x_{k}^{m+l}=x_k^m-l/2$ jump to the left for $l=1,2,\ldots$).
\end{enumerate}
Hence a particle that wants to jump is blocked by particles with lower $m$-index, but it pushes particles at a higher $m$-index.
Next we specify the rates of the exponential clocks. For odd $m$, particles jump to the right with rate $\varepsilon^{-1}>0$ and to the left with rate $\varepsilon$. For even $m$, the particles jump to the right with rate $\varepsilon$ and to the left with rate $\varepsilon^{-1}$. We are interested in the case where $\varepsilon$ is small. Hence the particles for odd $m$ predominantly try to jump to the right. For even $m$ they mostly jump to the left. However, they are still subject to the interlacing condition. Due to this condition, some particles that want to jump to the left (or right) are blocked, and even pushed to the right (or left), by particles at a lower level that want to travel right (or left).
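The dynamics is easy to simulate. The following Python sketch is ours and purely illustrative (it plays no role in what follows); the zero-based indexing, the event-driven loop and the chosen parameter values are our own conventions.
\begin{verbatim}
import random

def init(levels):
    # packed initial condition x_k^m = -(m+1)/2 + k; level m = i+1, k = j+1
    return [[-(i + 2) / 2 + (j + 1) for j in range(i + 1)]
            for i in range(levels)]

def blocked(x, i, j, step):
    # rule 1: the jump is suppressed by a neighbour one level below
    if step == 1:
        return j <= i - 1 and x[i][j] == x[i - 1][j] - 0.5
    return j >= 1 and x[i][j] == x[i - 1][j - 1] + 0.5

def jump(x, i, j, step):
    # rule 2: move by step (+1 right, -1 left) and push upwards recursively
    old = x[i][j]
    x[i][j] += step
    up = j + 1 if step == 1 else j
    if i + 1 < len(x) and x[i + 1][up] == old + step / 2:
        jump(x, i + 1, up, step)

def simulate(levels, t_max, eps):
    x, t = init(levels), 0.0
    # odd m (i even): right rate 1/eps, left rate eps; even m: reversed
    clocks = [(i, j, s, 1 / eps if (i % 2 == 0) == (s == 1) else eps)
              for i in range(levels) for j in range(i + 1) for s in (1, -1)]
    total = sum(r for *_, r in clocks)
    while t < t_max:
        t += random.expovariate(total)
        u, acc = random.uniform(0, total), 0.0
        for i, j, s, r in clocks:
            acc += r
            if u <= acc:
                if not blocked(x, i, j, s):
                    jump(x, i, j, s)
                break
    return x

print(simulate(levels=5, t_max=2.0, eps=0.1))
\end{verbatim}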
If we let time evolve we will see mainly two clouds of particles travelling to the left and to the right, respectively. See Figure \ref{fig:pointconfig} for a typical point configuration after some time. For large time we have the macroscopic picture as shown in Figure \ref{fig:domainD}. With high probability the particles will be distributed in a domain $\mathcal D$ contained in the upper half plane. This domain consists of two parts $\mathcal D_1$ and $\mathcal D_2$. In $\mathcal D_2$ the particles are still densely packed as in the initial configuration (which means the particles did not have the chance to jump yet). In $\mathcal D_1$ the particles have a density strictly less than one. The boundary of $\mathcal D_1$ is a smooth curve except for two cusp points. The cusps touch in the limit $\varepsilon\downarrow 0$.
After rescaling the time parameter, the process has a well-defined limit for $\varepsilon\downarrow 0$. In this limit the particles come in pairs. Indeed, by the interlacing condition some particles are blocked and even pushed by a particle at one level below that jumps in the reverse direction, thus forming a pair. The pairs are slanted to the right in the right half plane and slanted to the left in the left half plane, see also Figure \ref{fig:pointconfig}. The process for these pairs can be described in the following way: the process decouples in the sense that we have two independent processes, one in the upper left quadrant and the other in the upper right quadrant. The process in the right quadrant is equivalent to the process where particles can only jump to the right. The process at the left is just its reflected version (hence the particles only jump to the left). This process is analyzed in \cite{BorFer} and from their results we recover that the limiting domain has a tacnode.
By standard arguments we can compute the limiting mean density of the particles in all cases. This settles the macroscopic behavior for the particles at large time. At the local scale we retrieve the well-known universality classes.
First consider $\varepsilon>0$. If we zoom in at a point away from the boundary, then we find that the local correlations are governed by one of the extensions of the discrete sine process that fall into the class introduced in \cite{Bor}. If we zoom in at a point at the boundary, but not at the cusp points, then we obtain the Airy process (see \cite{PS} and \cite{F} for a review). The local correlations near the cusps are determined by the Pearcey process \cite{ABK,BH1,BH2,ORpearcey,TrW}. Since the proofs of these results follow from standard computations, they will be omitted. We do not need these statements for our main results.
In the case $\varepsilon=0$ we have a decoupled system. In addition to a discrete sine process and the Airy process we also obtain the local correlations around the tacnode, which are described by a process that is directly related to the GUE minor process (as we will prove).
The main result of this paper is the derivation of the process around the tacnode in the transition regime $\varepsilon \downarrow 0$: we obtain a process that, to the best of our knowledge, has not appeared in the literature before. Naturally, it interpolates between the Pearcey process and a process related to the GUE minor process.
In Section 2 we will state our main results and prove them in Section 3.
\section{Statement of results}
We start with some definitions. Let $\mathcal X$ be a discrete set. A point process on $\mathcal X$ is a probability measure on $2^\mathcal X$. A point process is completely determined by its correlation functions
\begin{equation}
\rho(X)=\mathop{{\rm Prob}} \{ Y \in 2^\mathcal X \mid X\subset Y\}.
\end{equation}
A point process is called determinantal, if there exists a kernel $K:\mathcal X\times \mathcal X\to \mathbb{C}$ such that
\begin{align}
\rho(X)=\det \left[K(x,y)\right] _{x,y\in X}.
\end{align}
For more details on determinantal point processes we refer to \cite{BorDet,HKPV,J,K,L,Sosh,Sosh2}. A determinantal point process is completely determined by its kernel.
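Concretely, for a finite subset $X$ of $\mathcal X$ the right-hand side is an ordinary finite determinant. The following Python lines (ours, with a toy rank-one kernel chosen purely for illustration) spell this out.
\begin{verbatim}
import numpy as np

def correlation(K, X):
    # rho(X) = det[ K(x, y) ]_{x, y in X}
    return np.linalg.det(np.array([[K(x, y) for y in X] for x in X]))

# toy kernel of a rank-one projection on {0,...,4}: the process has at
# most one particle, so all two-point correlations vanish
K = lambda x, y: 0.2
print(correlation(K, [3]))        # one-point correlation: 0.2
print(correlation(K, [1, 3]))     # 0.2 * 0.2 - 0.2 * 0.2 = 0.0
\end{verbatim}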
Now let us return to the evolution on the interlacing particle system as described in the introduction. By stopping the process at time $t$, we get a random collection of points on the grid $\mathcal G$. Hence, the Markov process at time $t$ defines a point process on $\mathcal G$. In \cite{BorFer}, the authors proved that this is in fact a determinantal point process on $\mathcal G$ with kernel $K$ given by
\begin{multline} \label{eq:kernel}
K(x_1,m_1;x_2,m_2)=
-\frac{\chi_{m_1<m_2}}{2\pi{\rm i}} \oint_{\Gamma_0} (1- \varepsilon w)^{[m_1/2]-[m_2/2]}(1- \varepsilon/w)^{[(m_1+1)/2]-[(m_2+1)/2]}w^{[x_1]-[x_2]} \frac{{\rm d}w}{w}\\
+
\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} {\rm d} w \oint_{\Gamma_{\varepsilon,\varepsilon^{-1}}}{\rm d}z\
\frac{
{\rm e}^{t(w+\frac{1}{w})}(1- \varepsilon w)^{[m_1/2]}( 1-\varepsilon/w)^{[(m_1+1)/2]}w^{[x_1]}
}
{{\rm e}^{t(z+\frac{1}{z})}(1- \varepsilon z)^{[m_2/2]}( 1-\varepsilon/z)^{[(m_2+1)/2]}z^{[x_2]}}
\frac{1}{z(w-z)},
\end{multline}
where $[x]$ is the largest integer less than $x$. Here $\Gamma_0$ is a contour that encircles the essential singularity $0$ but not the poles $\varepsilon$ and $\varepsilon^{-1}$. The contour $\Gamma_{\varepsilon,\varepsilon^{-1}}$ encircles $\varepsilon$ and $\varepsilon^{-1}$. Both $\Gamma_0$ and $\Gamma_{\varepsilon,\varepsilon^{-1}}$ have anti-clockwise orientation and do not intersect each other. Finally,
\begin{align}
\chi_{m_1<m_2}=\left\{\begin{array}{ll} 1, & \textrm{ if } m_1<m_2 \\ 0, & \textrm{otherwise.} \end{array}\right.
\end{align}
To be precise, the variables we use are different from \cite{BorFer}. We choose a symmetric picture since we have particles jumping both left and right, whereas the particular model that was analyzed in detail in \cite{BorFer} has particles jumping to the right only. Now \eqref{eq:kernel} is obtained by taking the kernel in \cite[Cor. 2.26]{BorFer}, substituting $y=[x]-[(m+1)/2]$, setting the $\alpha_l$'s for even values of $l$ to $\varepsilon$ and the other ones to $\varepsilon^{-1}$, and finally performing a conjugation by $(-\varepsilon)^{[(m_2+1)/2]-[(m_1+1)/2]}$, which does not affect the determinants in the correlation functions.\\
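Despite the essential singularity at the origin, the kernel is straightforward to evaluate numerically by applying the trapezoidal rule (which converges rapidly for periodic integrands) to circular realizations of $\Gamma_0$ and $\Gamma_{\varepsilon,\varepsilon^{-1}}$. The Python sketch below is ours and purely illustrative: the radii are ad hoc choices respecting the constraints on the contours, and no care is taken to avoid overflow for large $t$ or $m_j$.
\begin{verbatim}
import numpy as np

def G(w, t, m, x, eps):
    # building block of the integrand; [.] is realized by np.floor
    return (np.exp(t * (w + 1 / w)) * (1 - eps * w) ** np.floor(m / 2)
            * (1 - eps / w) ** np.floor((m + 1) / 2) * w ** np.floor(x))

def circle(c, r, N=600):
    z = c + r * np.exp(2j * np.pi * np.arange(N) / N)
    return z, (z - c) * 2j * np.pi / N          # dz = i (z - c) dtheta

def kernel(x1, m1, x2, m2, t, eps):
    w, dw = circle(0.0, 0.5 * eps)              # Gamma_0, inside |w| = eps
    z, dz = circle((eps + 1 / eps) / 2,         # encircles eps and 1/eps
                   (1 / eps - eps) / 2 + 0.3 * eps)   # but not the origin
    res = 0.0
    if m1 < m2:                                 # single-integral term
        res -= (G(w, t, m1, x1, eps) / G(w, t, m2, x2, eps) / w
                * dw).sum() / (2j * np.pi)
    Fw = G(w, t, m1, x1, eps)                   # double-integral term
    for zk, dzk in zip(z, dz):
        res += ((Fw / (zk * (w - zk)) * dw).sum()
                / G(zk, t, m2, x2, eps) * dzk) / (2j * np.pi) ** 2
    return res

# the diagonal value K(x, m; x, m) is the mean density at (x, m)
print(kernel(0.0, 3, 0.0, 3, t=0.5, eps=0.5))
\end{verbatim}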
Our first result is that for large time we obtain the limiting situation as described in the Introduction and shown in Figure \ref{fig:domainD}.
\begin{figure}[t]\begin{center}
\subfigure[]{\begin{overpic}[scale=0.33]{dom}
\put(88.75,50.62){\makebox(0,0)[cc]{$\mathcal D_1$}}
\put(49.75,80.62){\makebox(0,0)[cc]{$\mathcal D_2$}}
\thicklines
\put(-1,1.5){\line(1,0){102}}
\put(90,0){\vector(1,0){10}}
\put(95,-3){\makebox(0,0)[cc]{$x$}}
\put(10,80){\vector(0,1){10}}
\put(5,85){\makebox(0,0)[cc]{$m$}}
\end{overpic}}
\hspace{1cm}
\subfigure[]{
\begin{overpic}[scale=0.33]{dom0}
\put(92.75,51.62){\makebox(0,0)[cc]{$\mathcal D_1$}}
\put(49.75,80.62){\makebox(0,0)[cc]{$\mathcal D_2$}}
\thicklines
\put(-1,1.5){\line(1,0){102}}
\put(90,0){\vector(1,0){10}}
\put(95,-3){\makebox(0,0)[cc]{$x$}}
\put(10,80){\vector(0,1){10}}
\put(5,85){\makebox(0,0)[cc]{$m$}}
\end{overpic}}
\caption{The typical shape of the limiting domains $\mathcal D_1$ and $\mathcal D_2$. The case $\varepsilon>0$ at the left and $\varepsilon=0$ at the right. In $\mathcal D_2$ the particles are still in the initial configuration. Outside $\mathcal D_1$ and $\mathcal D_2$ there are no particles.} \label{fig:domainD}
\end{center}
\end{figure}
\begin{theorem} \label{th:macro}
Let $\mathbb H=\{z\in \mathbb{C} \mid \Im z>0\}$ and $F:\mathbb H \to \mathbb{C}$ given by
\begin{align}
F(z)=\tau (z+z^{-1})+\frac{\mu }{2} \log\left(1+\varepsilon^{2}-\varepsilon(z+z^{-1})\right)-\xi \log z.
\end{align}
Define $\mathcal D_1$ by
\begin{align}
\mathcal D_1=\{ (\xi,\mu)\in \mathbb{R}\times \mathbb{R}_+ \mid \exists z \in \mathbb H \quad F'(z)=0\}.
\end{align}
The boundary $\partial \mathcal D_1$ has two cusp points located at
\begin{align}
(0,\varepsilon+\varepsilon^{-1}\pm 2).
\end{align}
Set
\begin{align}
\left\{
\begin{array}{l}
t=\tau L\\
m=[\mu L]\\
x=[\xi L]
\end{array}
\right.\end{align}
then the limiting mean density is given by
\begin{align}
\lim_{L\to \infty} K(x, m, x,m)=
\left\{
\begin{array}{ll}
1, & (\xi,\mu)\in \mathcal D_2,\\
\frac{1}{\pi} \arg z(\xi,\mu) , & (\xi,\mu)\in \mathcal D_1,\\
0, & \textrm{ otherwise}.
\end{array}
\right.
\end{align}
where $z(\xi,\mu)$ is the unique solution in the upper half plane of the equation $F'(z)=0$.
\end{theorem}
The proof of this result follows from standard steepest descent analysis on the double integral formula for the kernel. Because the proof is standard, we will omit it in this paper. We do not use Theorem \ref{th:macro} in the sequel. See \cite{BorFer} for a proof of a similar statement in a comparable situation or \cite{O} for an exposition of the steepest descent technique on double integral formulas. \\
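Numerically, the density in Theorem \ref{th:macro} is easily evaluated: one finds the root of $F'$ in the upper half plane and takes $\frac{1}{\pi}\arg z$. The Python sketch below is ours; the parameter values and the starting point of the root finder are arbitrary choices, and for $(\xi,\mu)\notin\mathcal D_1$ the iteration converges to a real root, in which case the function returns \texttt{None}.
\begin{verbatim}
import mpmath as mp

def density(xi, mu, tau=1.0, eps=0.3):
    # F'(z) for F as in the theorem above
    Fp = lambda z: (tau * (1 - z**-2)
                    - 0.5 * mu * eps * (1 - z**-2)
                      / (1 + eps**2 - eps * (z + 1 / z))
                    - xi / z)
    z = mp.findroot(Fp, mp.mpc(0.2, 0.8))    # start in the upper half plane
    return float(mp.arg(z) / mp.pi) if z.imag > 1e-10 else None

print(density(0.3, 2.0))   # sample point; None if the root found is real
\end{verbatim}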
\begin{remark}
The boundary $\partial \mathcal D_1$ can be explicitly computed. Indeed, it is clear that
\begin{align}
\partial \mathcal D_1=\{(\xi,\mu)\in \mathbb{R}\times \mathbb{R}_+ \mid \exists z \in \mathbb{R} \quad F'(z)=0 \wedge F''(z)=0\}.
\end{align}
Now for each $z\in \mathbb{R}$ we have that
\begin{align}
\left\{
\begin{array}{l}
F'(z)=0\\
F''(z)=0
\end{array}\right.
\end{align}
is a system of equations that is linear in $\xi$ and $\mu$. So we can easily express $(\xi,\mu)$ as a function of $z\in \mathbb{R}$. In fact, it is easily checked that for each $z\in \mathbb{R} \setminus \{0,\varepsilon,\varepsilon^{-1}\}$ the corresponding $(\xi,\mu)$ satisfy $\xi\in\mathbb{R}$ and $\mu\geq 0$, so that $(\xi,\mu)$ is a point on $\partial \mathcal D_1$. Therefore, the closure of the image of $\mathbb{R}\setminus\{0,\varepsilon,\varepsilon^{-1}\}$ under this map $z\mapsto (\xi,\mu)$ gives the boundary. The cusp pointing down corresponds to $z=-1$ and the cusp pointing up to $z=1$. The boundary touches the $x$-axis at $z=\varepsilon,\varepsilon^{-1}$.
\end{remark}
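The computation indicated in the remark is conveniently done by computer algebra. The following sympy sketch is ours and purely illustrative; it solves the linear system for $(\xi,\mu)$ and thereby parametrizes $\partial\mathcal D_1$ by the real parameter $z$.
\begin{verbatim}
import sympy as sp

z, xi, mu, tau, eps = sp.symbols('z xi mu tau epsilon')
F = (tau * (z + 1 / z)
     + mu / 2 * sp.log(1 + eps**2 - eps * (z + 1 / z))
     - xi * sp.log(z))
# F'(z) = F''(z) = 0 is linear in (xi, mu); the derivatives are rational
sol = sp.solve([sp.diff(F, z), sp.diff(F, z, 2)], [xi, mu], dict=True)[0]

def boundary(zv, tv=1.0, ev=0.3):
    sub = {z: zv, tau: tv, eps: ev}
    return (float(sol[xi].subs(sub)), float(sol[mu].subs(sub)))

print(boundary(-1.0))  # z = -1: the cusp pointing down, with xi = 0
\end{verbatim}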
From Theorem \ref{th:macro} it follows that we can achieve the situation of a tacnode in the following way. For every fixed $\varepsilon$ the cusp points on the boundary differ by $4$. Their location tends to infinity as $\varepsilon\downarrow 0$. By rescaling $\mu$ with $\varepsilon$ the cusp points have a limit as $\varepsilon\downarrow 0$ and the gap between the cusp points vanishes, resulting in a tacnode.
From Theorem \ref{th:macro} and \eqref{eq:kernel} we expect to arrive at a process around the tacnode when we scale \begin{align}\label{eq:newparamCrK}
\left\{
\begin{array}{l}
t=\epsilon L\\
m_j=[L^2(1+\mu_j /L)]\\
\varepsilon=\epsilon/L
\end{array}
\right.
\end{align}
However, it is less clear how to describe the process that arises at the cusp. As shown in Figure \ref{fig:demo}, there will be long vertical strings of particles. In fact, the length of these strings will be of order $L$ and hence the density of the particles at this scale will diverge. Therefore we do not obtain a point process in the limit if we consider the process on the particles.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{demo3}
\end{center}
\caption{A close-up picture of the process, for $\varepsilon=0$, around the point where the cusps touch. Note that the particles come in pairs that are slanted to the right in the right half and to the left in the left half. Around the tacnode, long vertical strings of particles are forming. }\label{fig:demo}
\end{figure}
There are several ways of constructing a meaningful process in the limit. Perhaps the most straightforward approach is the following: instead of allowing $\mu_j$ to be a free variable, we choose $N\in \mathbb{N}$ and fix $\mu_j \in \mathbb{R}$ for $j=1,\ldots, N$ with $\mu_j\neq \mu_k$ if $j \neq k$ and cut the process at the section $m_j=[L^2(1+\mu_j/L)]$. For simplicity, we will also shift the particles on any even $m_j$-section to the left by a half. In this way, we obtain a determinantal point process on $\mathbb{Z}\times (1,\ldots,N)$ for each $L$ with correlation functions
\begin{align}
\mathop{\mathrm{Prob}}( \{\textrm{particle at }(x_k,j_k)\in \mathbb{Z} \times (1,\ldots, N) \mid k=1,\ldots,n\})=\det \left(K(x_k,m_{j_k},x_l,m_{j_l})\right)_{k,l=1}^n
\end{align}
for all $n\in \mathbb{N}$.
Now to obtain the limiting process on $\mathbb{Z}\times (1,\ldots,N)$ as $L \to \infty$, it suffices to compute the pointwise limit of the kernel $K$. The limit is given in the next theorem, which is the main result of the present paper.
\begin{figure}[t]\begin{center}
\input{CrK}
\end{center}
\caption{The contours of integration in the kernel $\mathcal K^\epsilon$ } \label{fig:contoursCrK}
\end{figure}
\begin{theorem} \label{th:Keps}
With $t,m_j$ as in \eqref{eq:newparamCrK} we have that
\begin{align}
\lim_{L\to \infty} K(x_1,m_1,x_2,m_2)=\mathcal K^{\epsilon}(x_1,\mu_1,x_2,\mu_2)
\end{align}
for $(x_j,\mu_j)\in \mathbb{Z}\times \mathbb{R}$, where
\begin{multline} \label{eq:kernelCr}
\mathcal K^{\epsilon}(x_1,\mu_1,x_2,\mu_2) =
-\frac{\chi_{\mu_1<\mu_2}}{2\pi{\rm i}} \oint_{\Gamma_0} {\rm e}^{\epsilon(\mu_2-\mu_1)(w+1/w)} w^{x_1-x_2-1}{\rm d} w
\\
+\frac{1}{(2\pi{\rm i})^2}\oint_{\Gamma_0} \oint_{\Sigma\cup \Sigma^{-1}}\frac{{\rm e}^{\epsilon \mu_2 (z+\frac{1}{z})+\frac{\epsilon^2}{2}(z+\frac{1}{z})^2}w^{x_1}}
{{\rm e}^{\epsilon \mu_1 (w+\frac{1}{w})+\frac{\epsilon^2}{2}(w+\frac{1}{w})^2}z^{x_2}}
\frac{{\rm d}z {\rm d}w}{z(w-z)}
\end{multline}
The contours of integration and their orientation are as indicated in Figure \ref{fig:contoursCrK}. More precisely, $\Gamma_0$ is a contour encircling the origin with counter clockwise orientation. The contour $\Sigma$ is a contour connecting $-{\rm i}\infty$ to ${\rm i}\infty$ that does not intersect $\Gamma_0$ and stays in the right-half plane.
\end{theorem}
To the best of our knowledge, the kernel $\mathcal K^\epsilon$ has not appeared in the existing literature yet.
It is not difficult to show that after inserting the new parameters given in \eqref{eq:newparamCrK} in the integrands in \eqref{eq:kernel} and taking the pointwise limit as $L\to \infty$, one obtains the integrands as given in the right-hand side of \eqref{eq:kernelCr}. However, there is an important technical issue that needs to be taken care of. Note that the integrand in the double integral contains poles at $\varepsilon$ and $\varepsilon^{-1}$, and also an essential singularity at $0$. By taking the limit, the pole approaches the essential singularity at the origin, which complicates the contour deformation in the analysis. \\
A different way of creating a point process is the following. Instead of considering the location of the particles, one could consider the statistics of the upper endpoints of the vertical strings of particles. We will restrict our process to the odd $m$-sections only. Then $(x,m)$ is defined to be an upper endpoint if there is a particle at $(x,m)$ but no particle at $(x,m+2)$. The upper endpoints form a point process with correlation functions
\begin{multline}
\tilde \rho_N((x_1,m_1),\ldots,(x_N,m_N))=\mathop{\mathrm{Prob}}\left(\textrm{particle at } (x_j,m_j) \textrm{ and no particle at } (x_j,m_j+2) \mid \, j=1,\ldots,N\right)
\end{multline}
for $(x_j,m_j)\in \mathbb{Z}\times (2\mathbb{N}-1)$ and $N\in \mathbb{N}$.
\begin{theorem}\label{th:endpoints}
With $t=\epsilon L$ and $m_j$ the closest odd integer to $L^2(1+\mu_j/L)$, we have that
\begin{align}\label{eq:corendpoints}
\lim_{L\to \infty} L^N \tilde \rho_N((x_1,m_1),\ldots,(x_N,m_N))=\epsilon^N \det\left(\mathcal K^\epsilon(x_i-1,\mu_i,x_j,\mu_j)+\mathcal K^\epsilon(x_i+1,\mu_i,x_j,\mu_j) \right)_{i,j=1}^N
\end{align}
for $(x_j,\mu_j)\in \mathbb{Z}\times\mathbb{R}$ for $j=1,\ldots,N$.
\end{theorem}
\begin{remark}
Another argument for the fact that the lengths of the vertical strings of particles must be of order $L$ is that one can prove that the process restricted to two horizontal sections that are close consists of two copies of the process restricted to one of these horizontal sections. To be precise, let $\mu_1\in \mathbb{R}$ and take $m_1=[L^2(1+\mu_1/L)]$. For each $L$ let $m_2$ be such that $m_1-m_2=o(L)$ as $L\to \infty$. For simplicity, we assume that $m_2 \geq m_1$. Then it is not difficult to prove that
\begin{align}
\lim_{L \to \infty} K(x_1,m_1,x_2,m_1)&=\mathcal K^\epsilon(x_1,\mu_1,x_2,\mu_1),\\
\lim_{L \to \infty} K(x_1,m_2,x_2,m_2)&=\mathcal K^\epsilon(x_1,\mu_1,x_2,\mu_1),\\
\lim_{L \to \infty} K(x_1,m_2,x_2,m_1)&=\mathcal K^\epsilon(x_1,\mu_1,x_2,\mu_1),\\
\lim_{L \to \infty} K(x_1,m_1,x_2,m_2)&=-\delta_{x_1,x_2}+ \mathcal K^\epsilon(x_1,\mu_1,x_2,\mu_1),
\end{align}
for all $x_1,x_2\in \mathbb{Z}$. Hence, for $x_1,\ldots,x_l, y_1,\dots,y_k$ we have
\begin{multline}
\lim_{L\to \infty} \mathrm{Prob}(\textrm{particles at }(x_1,m_1),\ldots,(x_l,m_1),(y_{1},m_2),\ldots, (y_k,m_2))\\
=\det
\begin{pmatrix}
\left(\mathcal K^\epsilon(x_i,\mu_1,x_j,\mu_1)\right)_{i,j=1}^l & \left(\mathcal K^\epsilon(x_i,\mu_1,y_j,\mu_1)-\delta_{x_i,y_j}\right)_{i=1,j=1}^{l,k} \\
\left(\mathcal K^\epsilon(y_i,\mu_1,x_j,\mu_1)\right)_{i=1,j=1}^{k,l} & \left(\mathcal K^\epsilon(y_i,\mu_1,y_j,\mu_1)\right)_{i,j=1}^k
\end{pmatrix}.
\end{multline}
This implies that (in the limit $L\to \infty$) the process on the line $m_2$ is just a copy of the process on the line $m_1$. \end{remark}
We will now derive some properties of the kernel $\mathcal K^{\epsilon}$. The following proposition shows the symmetry in the kernel.
\begin{proposition} \label{prop:sym}
We have that
\begin{enumerate}
\item $\mathcal K^\epsilon(-x_1,\mu_1,-x_2,\mu_2)=\mathcal K^\epsilon(x_1-1,\mu_1,x_2-1,\mu_2)$.
\item $(-1)^{x_1-x_2}\mathcal K^\epsilon(x_1,-\mu_1,x_2,-\mu_2)=\delta_{(x_1,\mu_1),(x_2,\mu_2)}- \mathcal K^\epsilon(x_1,\mu_1,x_2,\mu_2)$
\end{enumerate}
for all $(x_j,\mu_j)\in \mathbb{Z}\times \mathbb{R}$.
\end{proposition}
The first property shows that our point process is invariant with respect to the transform $x\mapsto -1-x$.
To interpret the second symmetry property in this proposition, we note that if $\mathcal P$ is a determinantal point process on a discrete set $\mathcal X$ with kernel $K$, then we have that $1-K$ is the kernel of the determinantal point process $\mathcal P'$ defined by $\mathcal P'(X)=\mathcal P(\mathcal X\setminus X)$, for $X\subset \mathcal X$. This process is sometimes referred to as the complementary process. It is obtained by replacing particles with holes and vice versa. For more details on this particle-hole involution we refer to the appendix of \cite{BOO}.\\
In the final results of this paper we investigate the limiting behavior of the kernel as $\epsilon\downarrow 0$ and $\epsilon \to\infty$. We start with the first case.
\begin{figure}[t]
\begin{center}
\subfigure{\input{GUE}}
\hspace*{2cm}
\subfigure{\input{GUE2}}
\end{center}
\caption{From the GUE minor process to the point process with kernel given in \eqref{eq:kerneleps=0}. The open circles represent the points $(x,y_x^l)$ for $l=0,\ldots,x$ and $x=0,1,\ldots$. In the left picture we draw the vertical lines. The dotted lines are only auxiliary. In the right picture, we draw the lines associated to the choice of the $\mu_j$. The solid circles are the intersection points of the dashed horizontal and solid vertical lines. The solid circles describe the process with kernel \eqref{eq:kerneleps=0}.}
\label{fig:GUE}
\end{figure}
\begin{theorem}\label{th:hermite}
For $x_1,x_2<0$ we have that
\begin{multline}\label{eq:kerneleps=0a}
\lim_{\epsilon \downarrow 0}\epsilon^{x_1-x_2} \mathcal K^{\epsilon}(x_1, \mu_1,x_2,\mu_2)= -\chi_{\mu_1<\mu_2} \chi_{x_1\leq x_2} (\mu_2-\mu_1)^{x_2-x_1}\\
+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \int_\Sigma \frac{{\rm e}^{\mu_2 z+\frac{1}{2}z^2}}{{\rm e}^{\mu_1 w+\frac{1}{2} w^2}}\frac{w^{x_1}}{z^{x_2+1}} \frac{{\rm d}z{\rm d}w}{w-z},
\end{multline}
and for $x_1,x_2\geq 0$,
\begin{multline}\label{eq:kerneleps=0}
\lim_{\epsilon \downarrow 0}\epsilon^{x_2-x_1} \mathcal K^{\epsilon}(x_1, \mu_1,x_2,\mu_2)= -\chi_{\mu_1<\mu_2} \chi_{x_2\leq x_1} (\mu_2-\mu_1)^{x_2-x_1}\\
+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \int_\Sigma \frac{{\rm e}^{\mu_2 z+\frac{1}{2}z^2}}{{\rm e}^{\mu_1 w+\frac{1}{2} w^2}}\frac{z^{x_2}}{w^{x_1+1}} \frac{{\rm d}z{\rm d}w}{w-z}.
\end{multline}
Moreover, the limit of the kernel vanishes if neither of the two conditions on $x_1$ and $x_2$ is satisfied.
\end{theorem}
The one dimensional version $\mu_1=\mu_2$ of this kernel has appeared before in the literature, see \cite{BO}. By expanding the term $(w-z)^{-1}$ we can express the double integral as a sum of products of Hermite polynomials.
By inserting \eqref{eq:kerneleps=0} (or \eqref{eq:kerneleps=0a}) in \eqref{eq:corendpoints} we find the limiting process for the endpoints in the right (or left) half plane. It turns out that in each of the half planes one of the kernels in the determinant tends to zero as $\epsilon \downarrow 0$. In fact, the limiting kernel is the kernel corresponding to the GUE minor process, see for example \cite{JN,ORgue}. Indeed, in the case $x_1,x_2\geq 0$ we rewrite \eqref{eq:corendpoints} as
\begin{align}
\lim_{L\to \infty} L^N \tilde \rho_N((x_1,m_1),\ldots,(x_N,m_N))= \det\left( \epsilon^{x_j+1-x_i} \mathcal K^\epsilon(x_i-1,\mu_i,x_j,\mu_j)+ \epsilon^2\epsilon^{x_j-1-x_i} \mathcal K^\epsilon(x_i+1,\mu_i,x_j,\mu_j) \right)_{i,j=1}^N,
\end{align}
and then by \eqref{eq:kerneleps=0} we find
\begin{align}
\lim_{\epsilon \downarrow 0} \lim_{L\to \infty} L^N \tilde \rho_N((x_1,m_1),\ldots,(x_N,m_N))= \det\left(K_{\mathrm{GUE}}(x_i,\mu_i,x_j,\mu_j) \right)_{i,j}^N,
\end{align}
where
\begin{multline}
K_{\mathrm{GUE}}(x_1,\mu_1,x_2,\mu_2) = -\chi_{\mu_1<\mu_2} \chi_{x_2< x_1} (\mu_2-\mu_1)^{x_2-x_1-1}
+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \int_\Sigma \frac{{\rm e}^{\mu_2 z+\frac{1}{2}z^2}}{{\rm e}^{\mu_1 w+\frac{1}{2} w^2}}\frac{z^{x_2}}{w^{x_1}} \frac{{\rm d}z{\rm d}w}{w-z}.
\end{multline}
The case $x_1,x_2<0$ is similar. By comparing with \cite[Def. 1.2]{JN} we see that the kernel $K_{\mathrm{GUE}}$ describes the GUE minor process.
\begin{remark}
Based upon the relation between the construction of the two processes just above and below Theorem \ref{th:Keps}, we see how the process defined by the kernel at the right-hand side of \eqref{eq:kerneleps=0} can be constructed out of the GUE minor process explicitly: pick a point configuration at random from the GUE minor process and denote the points by $(x,y^l_x)\in \mathbb{N}\times \mathbb{R}$. At each vertical $x$-section we draw $x$ vertical line segments. Each segment has $y_x^l$ as a lower endpoint and $y_{x-1}^{l-1}$ as the upper endpoint. Here we define $y_x^0=\infty$. See also Figure \ref{fig:GUE}.
We next fix $N$ different real numbers $\mu_j\in \mathbb{R}$. Then we define a point process in $\mathbb{N}\times \{1,\ldots,N\}$ by
\begin{align}
\mathrm{Prob} \{\textrm{particle at } (x_i,j_i), \ i=1,\ldots, k\}=\mathrm{Prob}\{\textrm{each }(x_i,\mu_{j_i}) \textrm{ is on a line segment}\}
\end{align}
The conclusion is that the new process is in fact a determinantal point process with kernel as in \eqref{eq:kerneleps=0}. \\
\end{remark}
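To visualize this construction, the GUE minor process itself is elementary to sample: one takes a single GUE matrix and diagonalizes its nested top-left minors. The Python lines below are ours; the normalization of the matrix entries is a common convention and is immaterial for the interlacing structure.
\begin{verbatim}
import numpy as np

def gue_minor_sample(n, rng=np.random.default_rng(0)):
    # eigenvalues of the x-by-x top-left minors, x = 1, ..., n
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2
    return [np.linalg.eigvalsh(H[:x, :x]) for x in range(1, n + 1)]

pts = gue_minor_sample(4)
# Cauchy interlacing between consecutive minors: exactly the
# interlacing of the points y_x^l used in the construction above
for low, up in zip(pts, pts[1:]):
    assert all(up[i] <= low[i] <= up[i + 1] for i in range(len(low)))
\end{verbatim}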
The second situation is the limit $\epsilon \to \infty$, in which the cusps should separate. Hence we expect to obtain the Pearcey process when we take the simultaneous limit $\mu_j\to -\infty$ (or $\mu_j\to +\infty$). The following theorem states that we indeed find the Pearcey process at the lower of the two cusps.
\begin{theorem}\label{th:pearcey}
Set
\begin{align}\label{eq:newparamP}
\left\{
\begin{array}{l}
\epsilon=M\\
\mu_j=- M(1-\nu_j/2M)\\
x_j=[\xi_j M^{1/2}]
\end{array}
\right.
\end{align}
Then
\begin{multline}\label{eq:kernelP}
\lim_{M\to\infty} \frac{{\rm e}^{M (\nu_1-\nu_2)}}{M^{1/2}}\mathcal K^{\epsilon}(x_1,\mu_1,x_2,\mu_2)=
-\frac{\chi_{\nu_1<\nu_2}}{2\pi{\rm i}}\int_{-{\rm i}\infty}^{{\rm i}\infty} {\rm d}w\ {\rm e}^{(\nu_2-\nu_1)w^2-(\xi_2-\xi_1)w}\\
+\frac{1}{(2 \pi{\rm i})^2}\int_{-{\rm i}\infty}^{{\rm i}\infty} \int_{\mathcal C} \frac{{\rm e}^{\frac{1}{2}z^4+\nu_2 z^2-\xi_2 z}}{ {\rm e}^{\frac{1}{2}w^4+\nu_1 w^2-\xi_1 w}}\frac{{\rm d}w {\rm d}z}{w-z}
\end{multline}
The contour $\mathcal C$ consists of four rays, from $\pm {\rm e}^{\pi {\rm i}/4}\infty $ to $0$ and from $0$ to $\pm {\rm e}^{3\pi {\rm i}/4}\infty $ (see also Figure \ref{fig:pearcey}).
\end{theorem}
The kernel given in \eqref{eq:kernelP} is known in the literature as the extended Pearcey kernel that describes the Pearcey process, see \cite{ABK,BH1,BH2,ORpearcey,TrW} for more details.
To conclude this section, note that if we combine Theorem \ref{th:pearcey} with the second symmetry property in Proposition \ref{prop:sym} then we see that the complementary process near the top cusp locally converges to the Pearcey process, as expected.
\begin{figure}[t] \begin{center}
\input{PearceyC}
\caption{The contours of integration for the Pearcey kernel.}\label{fig:pearcey}
\end{center}
\end{figure}
\section{Proofs}
In this section we prove our results.
\subsection{Proof of Theorem \ref{th:Keps}}
Let us first introduce some notation. Write the kernel in \eqref{eq:kernel} as
\begin{multline}
K(x_1, m_1,x_2,m_2) = -\frac{\chi_{m_1<m_2}}{2\pi{\rm i}} \oint_{\Gamma_0} \frac{G_{t,m_1,x_1}(z)}{ G_{t,m_2,x_2}(z)} \frac{{\rm d}z}{z}+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0}\oint_{\Gamma_{\varepsilon,\varepsilon^{-1}}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w,
\end{multline}
where
\begin{align} \label{eq:G}
G_{t,m,x}(w)={\rm e}^{t(w+1/w)}(1-\varepsilon w)^{[m/2]}(1-\varepsilon/w)^{[(m+1)/2]}w^{[x]}.
\end{align}
Let us for a moment ignore the contours of integration and take the limit $L\to \infty$ of the integrand. It is not difficult to see that we have the following pointwise limit
\begin{align}\label{eq:limitG}
\lim_{L\to \infty} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}=\frac{{\rm e}^{\epsilon \mu_2 (z+1/z)+\frac{\epsilon^2}{2} (z+1/z)^2}}{{\rm e}^{\epsilon \mu_1 (w+1/w)+\frac{\epsilon^2}{2} (w+1/w)^2}} \frac{w^{[x_1]}}{z^{[x_2]}}.
\end{align}
for $z,w\in \mathbb{C}\setminus \{0\}$. Hence the difficulty in the proof lies in the fact that we have to control the contours of integration (which clearly depend on $\varepsilon$ and hence $L$) while taking the limit.
As a first step we prove the following result.
\begin{lemma} \label{lem:deform}
With $G$ as in \eqref{eq:G} and $K$ as in \eqref{eq:kernel} we have that
\begin{multline}
K(x_1,m_1,x_2,m_2)= -\frac{\chi_{m_1<m_2}}{2\pi{\rm i}} \oint_{\Gamma_{0,\varepsilon}}{\rm d} z \ \frac{G_{t,m_1,x_1}(z)}{z G_{t,m_2,x_2}(z)} +\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_{0,\varepsilon}} \oint_{\Gamma_\varepsilon\cup\Gamma_{\varepsilon^{-1}}}\frac{G_{t,m_1,x_1}(w)}{ G_{t,m_2,x_2}(z)} \frac{1}{z(w-z)}\ {\rm d}z {\rm d}w.\end{multline}
Here $\Gamma_\varepsilon$ and $\Gamma_{\varepsilon^{-1}}$ are two contours encircling the poles $\varepsilon$ and $\varepsilon^{-1}$ respectively, but no other poles. The contour $\Gamma_{0,\varepsilon}$ is a contour encircling the origin and the contour $\Gamma_{\varepsilon}$ but not $\Gamma_{\varepsilon^{-1}}$. The contours are taken so that they do not intersect and have counter clockwise orientation. See also Figure \ref{fig:lemdeform}.
\end{lemma}
\begin{proof}
First split the contour $\Gamma_{\varepsilon,\varepsilon^{-1}}$ into two small contours $\Gamma_{\varepsilon}$ around $\varepsilon$ and $\Gamma_{\varepsilon^{-1}}$ around $\varepsilon^{-1}$. Then
\begin{multline}
K(x_1, m_1,x_2,m_2) = -\frac{\chi_{m_1<m_2}}{2\pi{\rm i}} \oint_{\Gamma_0} \frac{G_{t,m_1,x_1}(z)}{ G_{t,m_2,x_2}(z)} \frac{{\rm d}z}{z}+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \oint_{\Gamma_{\varepsilon}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w\\
+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \oint_{\Gamma_{\varepsilon^{-1}}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w
\end{multline}
Now in the first double integral on the right-hand side we deform $\Gamma_{0}$ so that it encircles $\Gamma_{\varepsilon}$. This deformation will be denoted by $\Gamma_{0,\varepsilon}$. Note that we now pick up a residue at $w=z$. Hence by deforming we obtain an extra single integral over the contour $\Gamma_{\varepsilon}$
\begin{multline}
K(x_1,m_1,x_2,m_2)= -\frac{\chi_{m_1<m_2}}{2\pi{\rm i}} \oint_{\Gamma_{0}} \frac{G_{t,m_1,x_1}(z)}{ G_{t,m_2,x_2}(z)} \frac{{\rm d}z}{z}-\frac{1}{2\pi{\rm i}} \oint_{\Gamma_\varepsilon} \frac{G_{t,m_1,x_1}(z)}{ G_{t,m_2,x_2}(z)} \frac{{\rm d}z}{z}\\+
\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_{0,\varepsilon}} \oint_{\Gamma_{\varepsilon}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \oint_{\Gamma_{\varepsilon^{-1}}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w
\end{multline}
Now the extra single integral has the same integrand as the first single integral. The integration is now over a contour encircling the pole at $\varepsilon$ (and no other pole). However, this pole is not present in the case $m_1\geq m_2$, and then the integral vanishes. Therefore we can glue the integrals over $\Gamma_{0}$ and $\Gamma_{\varepsilon}$ together and obtain
\begin{multline}
K(x_1,m_1,x_2,m_2)= -\frac{\chi_{m_1<m_2}}{2\pi{\rm i}} \oint_{\Gamma_{0,\varepsilon}} \frac{G_{t,m_1,x_1}(z)}{ G_{t,m_2,x_2}(z)} \frac{{\rm d}z}{z}+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_{0,\varepsilon}} \oint_{\Gamma_{\varepsilon}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w
\\+\frac{1}{(2\pi{\rm i})^2} \oint_{\Gamma_0} \oint_{\Gamma_{\varepsilon^{-1}}} \frac{G_{t,m_1,x_1}(w)}{G_{t,m_2,x_2}(z)}\frac{1}{z(w-z)} \ {\rm d}z {\rm d}w
\end{multline}
Finally, note that the integrand has no pole at $w=\varepsilon$ so in the second double integral we can safely deform $\Gamma_0$ to be $\Gamma_{0,\varepsilon}$. This proves the claim.
\begin{figure}
\begin{center}\input{deformC}
\caption{Deforming the contours as in Lemma \ref{lem:deform}.} \label{fig:lemdeform}
\end{center}
\end{figure}
\end{proof}
Now the proof of Theorem \ref{th:Keps} follows by simply taking the limit in the integrands and correctly choosing the contours $\Gamma_{\varepsilon} $ and $\Gamma_{\varepsilon^{-1}}$.
\begin{proof}[Proof of Theorem \ref{th:Keps}]
Since $\varepsilon$ will be small, we take the contour $\Gamma_{0,\varepsilon}$ to be some fixed contour encircling the origin independent of $\varepsilon$, say the unit circle for simplicity. Due to the behavior of $G_{t,m_2,x_2}(z)$ for $z\to \infty$ we can deform $\Gamma_{\varepsilon^{-1}}$ to any simple contour $\Sigma$ that connects $\infty$ to $\infty$ crossing the real axis once at a point between $1$ and $\varepsilon^{-1}$ and contained in the sector
\begin{align}
\{z\in \mathbb{C} \mid -\pi/2+\delta < \arg z <\pi/2-\delta\}
\end{align}
for some fixed $\delta>0$. In fact we choose $\Sigma$ to be a contour that eventually falls inside the region
\begin{align}
\{z\in \mathbb{C} \mid -\pi/2+\delta < \arg z<-\pi/4-\delta \ \text{ or } \ \pi/4+\delta <\arg z <\pi/2-\delta\},
\end{align}
for some $\delta>0$.
The behavior of $G_{t,m_2,x_2}(z)$ for $z\to 0$ is to a large extent similar to the behavior near $\infty$, which can be seen by performing the transform $z\mapsto 1/z$. We can (and do) deform the contour $\Gamma_{\varepsilon}$ to be $\Sigma^{-1}$. Note that in this way, we have deformed the contours, so that they go through the essential singularities of the integrand.
Now that we have chosen the contours, which are clearly independent of $\varepsilon$, we can compute the limit of the integrand, which is given in \eqref{eq:limitG}. Due to the choice of the sectors in which $\Sigma$ eventually falls, the convergence is uniform in $z\in \Sigma\cup\Sigma^{-1}$ and $w\in \Gamma_0$. This proves the statement.
\end{proof}
\subsection{Proof of Theorem \ref{th:endpoints}}
\begin{proof}
We start by noting that by expanding a factor $(1-\varepsilon w)(1-\varepsilon/w)$ in the integrals for the kernel $K$ in \eqref{eq:kernel} we get for odd $n,m$
\begin{multline}\label{eq:proof:rec1}
K(x,n+2,y,m)=(1+\varepsilon^2)K(x,n,y,m)-\varepsilon(K(x+1,n,y,m)+K(x-1,n,y,m))\\+\frac{\delta_{n,m-2}}{2\pi{\rm i}} \oint (1-\varepsilon w)(1-\varepsilon/w) w^{x-y-1}{\rm d}w.
\end{multline}
There is an additional single integral because of the fact that $\chi_{n+2<m}=\chi_{n<m}-\delta_{n,m-1}-\delta_{n,m-2}$. Now since $n,m$ are both odd, $\delta_{n,m-1}$ vanishes trivially and only $\delta_{n,m-2}$ remains.
Similarly,
\begin{multline}\label{eq:proof:rec2}
K(x,n,y,m-2)=(1+\varepsilon^2)K(x,n,y,m)-\varepsilon(K(x,n,y+1,m)+K(x,n,y-1,m))\\+\frac{\delta_{n,m-2}}{2\pi{\rm i}} \oint (1-\varepsilon w)(1-\varepsilon/w) w^{x-y-1}{\rm d}w.
\end{multline}
Therefore
\begin{multline}
-K(x_i,m_i+2,x_j,m_j)+K(x_i,m_i,x_j,m_j)=\varepsilon\Big(K(x_i+1,m_i,x_j,m_j)+K(x_i-1,m_i,x_j,m_j) \\+\delta_{m_i,m_j-2} (\delta_{x_i+1,x_j}+\delta_{x_i-1,x_j})\Big)-\delta_{m_i,m_j-2} \delta_{x_i,x_j}+\mathcal{O}(\varepsilon^2)\end{multline}
Now note that if $m_i\neq m_j$, then $m_i-m_j$ is of order $L$, so we can ignore the term $\delta_{m_i,m_j-2}$ and obtain
\begin{multline}\label{eq:proof:rec3}
-K(x_i,m_i+2,x_j,m_j)+K(x_i,m_i,x_j,m_j)=\varepsilon\big(K(x_i+1,m_i,x_j,m_j)+K(x_i-1,m_i,x_j,m_j)\big)
\end{multline}
and
\begin{multline}\label{eq:proof:rec4}
-K(x_i,m_i+2,x_j,m_j+2)+K(x_i,m_i,x_j,m_j+2)=\varepsilon\Big(K(x_i+1,m_i,x_j,m_j+2)+K(x_i-1,m_i,x_j,m_j+2)\\+\delta_{m_i,m_j} (\delta_{x_i+1,x_j}+\delta_{x_i-1,x_j})\Big)-\delta_{m_i,m_j} \delta_{x_i,x_j}+\mathcal{O}(\varepsilon^2)\end{multline}
By subtracting \eqref{eq:proof:rec4} from \eqref{eq:proof:rec3} and inserting \eqref{eq:proof:rec2} we obtain
\begin{multline}\label{eq:proof:rec5}
-K(x_i,m_i+2,x_j,m_j)+K(x_i,m_i,x_j,m_j)+K(x_i,m_i+2,x_j,m_j+2)-K(x_i,m_i,x_j,m_j+2)=\delta_{m_i,m_j}\delta_{x_i,x_j}+\mathcal{O}(\varepsilon^2)
\end{multline}
By the complementation principle (see \cite{BOO})
\begin{align}
\tilde \rho_N((x_1,m_1),\ldots,(x_N,m_N))=\det
\begin{pmatrix}
K(x_i,m_i,x_j,m_j)& -K(x_i,m_i+2,x_j,m_j)\\
K(x_i,m_i,x_j,m_j+2)&I-K(x_i,m_i+2,x_j,m_j+2)
\end{pmatrix}
\end{align}
By adding the first column to the second and afterward subtracting the second row from the first we obtain that the determinant equals
\begin{align}
\det\begin{pmatrix}
K(x_i,m_i,x_j,m_j)-K(x_i,m_i,x_j,m_j+2)& -K(x_i,m_i+2,x_j,m_j)+K(x_i,m_i,x_j,m_j)\\
&-\delta_{(x_i,m_i),(x_j,m_j)}+K(x_i,m_i+2,x_j,m_j+2)-K(x_i,m_i,x_j,m_j+2)\\\\
K(x_i,m_i,x_j,m_j+2)&\delta_{(x_i,m_i),(x_j,m_j)}-K(x_i,m_i+2,x_j,m_j+2)+K(x_i,m_i,x_j,m_j+2)
\end{pmatrix}
\end{align}
Now using \eqref{eq:proof:rec2} in the upper left block, \eqref{eq:proof:rec5} in the upper right block, and \eqref{eq:proof:rec4} and \eqref{eq:proof:rec2} in the lower right block, this determinant has the form\begin{align}\tilde \rho_N((x_1,m_1),\ldots,(x_N,m_N))=\det\begin{pmatrix}
1+\mathcal{O}(\varepsilon^2) & \mathcal{O}(\varepsilon^2)\\
\mathcal{O}(1) & \varepsilon(K(x_i+1,m_i,x_j,m_j)+K(x_i-1,m_i,x_j,m_j))
\end{pmatrix},
\end{align}
from which the theorem easily follows.
\end{proof}
\subsection{Proof of Proposition \ref{prop:sym}}
\begin{proof}
1. The statement easily follows by the transform $z\mapsto 1/z$ and $w\mapsto 1/w$ in the integral representation of $\mathcal K^{\epsilon}(-x_1,\mu_1,-x_2,\mu_2)$.
2. The second property follows by the transformation $z\mapsto -z$ and $w\mapsto -w$ in the integrals and a deformation of $\Sigma$. In the definition of the kernel $\mathcal K^{\epsilon}$ we take $\Sigma$ on the right of $\Gamma_0$. However, we can also take $\Sigma$ to be on the left at the cost of an extra integral, as shown in Figure \ref{fig:contourswitch}. The second double integral on the right is easy to compute, since the integral over $z$ encircles the pole $z=w$ only and hence can be computed by the Residue Theorem. The result is that we can rewrite the kernel in the following way
\begin{figure}
\begin{center}
\input{CrK} \input{CrKmin} \input{CrKRes}
\end{center}
\caption{In the definition of $\mathcal K^\epsilon$ we can switch the contour $\Sigma$ to $-\Sigma$ to obtain \eqref{eq:kernelepsleft}. }\label{fig:contourswitch}
\end{figure}
\begin{align}\label{eq:kernelepsleft}
\mathcal K^\epsilon(x_1,\mu_1,x_2,\mu_2)= \delta_{(x_1,\mu_1),(x_2,\mu_2)} +\chi_{\mu_2<\mu_1} \int_{\Gamma_0} -\oint_{\Gamma_0} \int_{-\left(\Sigma\cup \Sigma^{-1}\right)}
\end{align}
where we have suppressed the integrands. Now by replacing $\mu_j$ by $-\mu_j$ and performing the transformation $w\mapsto -w$ and $z\mapsto -z$ we arrive at the statement.\end{proof}
\subsection{Proof of Theorem \ref{th:hermite}}
\begin{proof}
By the first symmetry property in Proposition \ref{prop:sym} it suffices to consider the case $x_2<0$ only. Using the transform $z\mapsto z/\epsilon$ and $w\mapsto w/\epsilon$ we obtain
\begin{multline}
\epsilon^{x_1-x_2} \mathcal K^{\epsilon}(x_1,\mu_1,x_2,\mu_2) =
-\frac{\chi_{\mu_1<\mu_2}}{2\pi{\rm i}} \oint_{\Gamma_0} {\rm e}^{(\mu_2-\mu_1)(w+\epsilon^2 /w)} w^{x_1-x_2-1}{\rm d} w
\\
+\frac{1}{(2\pi{\rm i})^2}\oint_{\Gamma_0} \int_{\Sigma\cup\Sigma^{-1}}\frac{{\rm e}^{ \mu_2 (z+\frac{\epsilon^2}{z})+\frac{1}{2}(z+\frac{\epsilon^2}{z})^2}w^{x_1}}
{{\rm e}^{ \mu_1 (w+\frac{\epsilon^2}{w})+\frac{1}{2}(w+\frac{\epsilon^2}{w})^2}z^{x_2}}
\frac{{\rm d}z {\rm d}w}{z(w-z)}.
\end{multline}
Setting $\epsilon=0$ on the right-hand side gives
\begin{multline}
\lim_{\epsilon\downarrow 0}\epsilon^{x_1-x_2} \mathcal K^{\epsilon}(x_1,\mu_1,x_2,\mu_2) =
-\frac{\chi_{\mu_1<\mu_2}}{2\pi{\rm i}} \oint_{\Gamma_0} {\rm e}^{(\mu_2-\mu_1)w} w^{x_1-x_2-1}{\rm d} w\\
+\frac{1}{(2\pi{\rm i})^2}\oint_{\Gamma_0} \int_{ \Sigma\cup \Sigma^{-1}}\frac{{\rm e}^{ \mu_2 z+\frac{1}{2}z^2}w^{x_1}}
{{\rm e}^{ \mu_1 w+\frac{1}{2}w^2}z^{x_2}}
\frac{{\rm d}z {\rm d}w}{z(w-z)}.
\end{multline}
As for the double integral, we note that since $x_2< 0$ the integral over $\Sigma^{-1}$ vanishes and hence
\begin{multline}
\lim_{\epsilon\downarrow 0}\epsilon^{x_1-x_2} \mathcal K^{\epsilon}(x_1,\mu_1,x_2,\mu_2) =
-\frac{\chi_{\mu_1<\mu_2}}{2\pi{\rm i}} \oint_{\Gamma_0} {\rm e}^{(\mu_2-\mu_1)w} w^{x_1-x_2-1}{\rm d} w\\
+\frac{1}{(2\pi{\rm i})^2}\oint_{\Gamma_0} \int_{ \Sigma}\frac{{\rm e}^{ \mu_2 z+\frac{1}{2}z^2}w^{x_1}}
{{\rm e}^{ \mu_1 w+\frac{1}{2}w^2}z^{x_2}}
\frac{{\rm d}z {\rm d}w}{z(w-z)}.
\end{multline}
The single integral is easily computed by the Residue Theorem. This proves \eqref{eq:kerneleps=0a}.
Finally, note that if $x_1\geq 0$ then both integrals vanish since there is no pole at $w=0$.
\end{proof}
\subsection{Proof of Theorem \ref{th:pearcey}}
\begin{proof}
The double integral in \eqref{eq:kernelCr} in the new parameters given by \eqref{eq:newparamP} reads
\begin{align}
\frac{1}{(2\pi {\rm i})^2} \oint_{\Gamma_{0}} {\rm d}w \int_{\Sigma\cup \Sigma^{-1}}{\rm d}z\ \frac
{{\rm e}^{\frac{M^2}{2} (z-1)^4/z^2+M \nu_2 (z+1/z)-M^{1/2} \xi_2 \ln z}}
{{\rm e}^{\frac{M^2}{2} (w-1)^4/w^2+M \nu_1 (w+1/w)-M^{1/2} \xi_1 \ln w}}
\frac{1}{z(w-z)}
\end{align}
For large $M$ the main contribution comes from the terms
\begin{align}
\frac{(z-1)^4}{z^2}\quad \textrm{ and } \quad \frac{(w-1)^4}{w^2}.
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{steepdescC}
\end{center}
\caption{The line $\{w\mid \Im((w-1)^4/w^2)=0\}$ which are the contours of steepest descent/ascent leaving from $w=1$. }\label{fig:pathsstds}
\end{figure}
This term has a critical point at $z=1$ (resp. $w=1$) of order three. Therefore, there are four paths of steepest descent leaving from $z=1$ and four paths of steepest ascent, as shown in Figure \ref{fig:pathsstds}. In fact, the paths of steepest ascent leaving from $w=1$ are the unit circle and the real line. We now deform $\Gamma_0$ to be the unit circle. The contour $\Sigma$ is deformed so that it passes through $z=1$ and follows the path of steepest descent outside the unit circle. Then locally around $z=1$ and $w=1$ the contours $\Gamma_0$, $\Sigma$ and $\Sigma^{-1}$ can be deformed to the contours for the Pearcey kernel as shown in Figure \ref{fig:pearcey}.
By standard steepest descent arguments one can now show that the dominant contribution comes from a neighborhood around $z=1$ and $w=1$. More precisely, after introducing the local variables \begin{align}
\left\{
\begin{array}{l}
w=1+\tilde w/M^{1/2}\\
z=1+\tilde z/M^{1/2}
\end{array}
\right.
\end{align}
it is not difficult to prove that
\begin{multline}\label{eq:proofPdouble}
\frac{1}{(2\pi {\rm i})^2} \oint_{\Gamma_{0}} {\rm d}w \int_{\Sigma\cup \Sigma^{-1}}{\rm d}z\ \frac
{{\rm e}^{\frac{M^2}{2} (z-1)^4/z^2+M \nu_2 (z+1/z)-M^{1/2} \xi_2 \ln z}}
{{\rm e}^{\frac{M^2}{2} (w-1)^4/w^2+M \nu_1 (w+1/w)-M^{1/2} \xi_1 \ln w}}
\frac{1}{z(w-z)}\\
=\frac{M^{1/2}{\rm e}^{2M(\nu_2-\nu_1)}}{(2\pi{\rm i})^2} \int_{-{\rm i}\infty}^{{\rm i}\infty}{\rm d}\tilde z \int_{\mathcal C} {\rm d}\tilde w \
\frac
{{\rm e}^{\frac{1}{2} \tilde z^4+ \nu_2 \tilde z^2- \xi_2 \tilde z }}
{{\rm e}^{\frac{1}{2} \tilde w^4+\nu_1 \tilde w^2- \xi_1 \tilde w}}
\frac{1}{\tilde w-\tilde z}\left(1+\mathcal{O}(M^{-1/2})\right),
\end{multline}
as $M\to \infty$.
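Indeed, writing $z=1+\tilde z/M^{1/2}$ one checks directly that
\begin{align*}
\frac{M^2}{2}\frac{(z-1)^4}{z^2}&=\frac{\tilde z^4}{2}\left(1+\mathcal{O}(M^{-1/2})\right),\\
M\nu_2\left(z+\frac{1}{z}\right)&=2M\nu_2+\nu_2\tilde z^2+\mathcal{O}(M^{-1/2}),\\
-M^{1/2}\xi_2\ln z&=-\xi_2\tilde z+\mathcal{O}(M^{-1/2}),
\end{align*}
and similarly for $w=1+\tilde w/M^{1/2}$; the constant terms $2M\nu_2$ and $2M\nu_1$ combine to the prefactor ${\rm e}^{2M(\nu_2-\nu_1)}$ in \eqref{eq:proofPdouble}.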
Now consider the single integral in \eqref{eq:kernelCr}, which in the new parameters \eqref{eq:newparamP} reads
\begin{align}
\frac{1}{2\pi{\rm i}}\oint_{\Gamma_0} {\rm d}w \ {\rm e}^{M(\nu_2-\nu_1)(w+1/w)}w^{M^{1/2} (\xi_1-\xi_2)-1}
\end{align}
The dominant term in this integral is
\begin{align}
{\rm e}^{M(\nu_2-\nu_1) (w+1/w)}.
\end{align}
A simple computation shows that $w+1/w$ has two saddle points $w=\pm 1$, both of order one. Since $\nu_2>\nu_1$ (otherwise the single integral is not present) we have that $w=+1$ is the dominant saddle point. This means that the main contribution comes from the part of the contour close to $w=1$. Hence, if we introduce the new local variable
\begin{align}
w=1+\tilde w/M^{1/2},
\end{align}
then by standard arguments one can prove that
\begin{multline} \label{eq:proofPsingle}
\frac{1}{2\pi{\rm i}}\oint_{\Gamma_0} {\rm d}w \ {\rm e}^{M(\nu_2-\nu_1)(w+1/w)}w^{M^{1/2} (\xi_1-\xi_2)-1}
\\
=\frac{M^{1/2} {\rm e}^{2M(\nu_2-\nu_1)}}{2\pi{\rm i}}\int_{-{\rm i}\infty}^{{\rm i}\infty}{\rm d}\tilde w \ {\rm e}^{(\nu_2-\nu_1)\tilde w^2-(\xi_2-\xi_1)\tilde w} \left(1+\mathcal{O}(M^{-1/2})\right)
\end{multline}
By inserting \eqref{eq:proofPsingle} and \eqref{eq:proofPdouble} in \eqref{eq:kernelCr} and taking the limit $M\to \infty$ we obtain \eqref{eq:kernelP}.
This proves Theorem \ref{th:pearcey}.
\end{proof}
\section{Introduction}
\label{intro}
A quasitoric manifold is a \(2n\)-dimensional manifold with a well-behaved action of an \(n\)-dimensional torus such that the orbit space is an \(n\)-dimensional simple polytope.
Quasitoric manifolds were introduced by Davis and Januszkiewicz \cite{0733.52006} as topological generalizations of non-singular projective toric varieties.
In this paper we study the degree of symmetry of quasitoric manifolds and give upper bounds in various situations.
For example we show that \(\mathbb{C} P^n\) is the most symmetric \(2n\)-dimensional quasitoric manifold.
Moreover, we construct infinitely many quasitoric manifolds of dimension \(2n=4k\), \(k>0\), which do not admit an action of a semi-simple compact connected Lie-group.
For a smooth manifold \(M\), the degree of symmetry \(N(M)\) of \(M\) is defined to be the maximum of the dimensions of those compact Lie-groups which act smoothly and effectively on \(M\).
Similarly one defines the semi-simple symmetry degree \(N^{ss}(M)\) of \(M\) as
\begin{align*}
N^{ss}(M)&=\max\{\dim G;\; G \text{ compact semi-simple Lie-group, }\\
&\quad\quad G \text{ acts smoothly and effectively on } M\}
\end{align*}
and the torus symmetry degree \(T(M)\) of \(M\) to be
the maximum of the dimensions of those compact tori which act smoothly and effectively on \(M\).
It is well known that, for an \(n\)-dimensional manifold \(M\), \(N(M)\leq \frac{n(n+1)}{2}\) with equality holding if and only if \(M=S^n\) or \(M=\mathbb{R} P^n\).
Moreover, we have \(T(M)\leq n\) with equality holding if and only if \(M\) is a torus.
If \(\chi(M)\neq 0\), then we have \(T(M)\leq \frac{n}{2}\).
A quasitoric manifold has positive Euler-characteristic.
Therefore the torus symmetry degree of a quasitoric manifold is maximal in the class of manifolds with non-vanishing Euler-characteristic.
In this paper we show that \(\mathbb{C} P^n\) has maximal degree of symmetry among the quasitoric manifolds of dimension \(2n\), i.e. \(N(M)<N(\mathbb{C} P^n)=n^2+2n\) for all quasitoric manifolds \(M\neq \mathbb{C} P^n\) with \(\dim M = 2n\) (see Theorem \ref{sec:highly-symm-quas-7}).
Moreover, we generalize a vanishing result for indices of certain twisted Dirac operators on \(\text{Spin}^c\)-manifolds with \(\text{Pin}(2)\)-action found by Dessai \cite{0963.19002} to manifolds with actions of more general groups (see Theorem~\ref{sec:twist-dirac-oper-8}).
This generalization allows us to prove that if a \(2n\)-dimensional \(\text{Spin}^c\)-manifold \(M\) with \(\chi(M)\neq 0\) admits such a twisted Dirac operator with non-vanishing index then its degree of symmetry is bounded from above by \(3n\) with equality holding if and only if \(M=\prod_{i=1}^n S^2\) (see Corollary~\ref{sec:twist-dirac-oper-11}).
We show that a \(2n\)-dimensional quasitoric manifold whose orbit polytope admits a facet coloring with \(n\) colors is an example of such a manifold.
Hence, we get:
\begin{theorem}[Corollary \ref{sec:twist-dirac-oper-16}]
\label{sec:introduction-1}
If \(M\) is a \(2n\)-dimensional quasitoric manifold whose orbit polytope admits a facet coloring with \(n\) colors, then we have \(N(M)\leq 3n\) with equality holding if and only if \(M=\prod_{i=1}^n S^2\).
\end{theorem}
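The hypothesis on the orbit polytope is purely combinatorial: a facet coloring assigns one of \(n\) colors to each facet so that facets meeting in a codimension-two face of the polytope receive distinct colors. For small polytopes this condition can be tested by brute force, as in the following Python sketch (ours and purely illustrative; the encoding of the \(3\)-cube is our own).
\begin{verbatim}
import itertools

def facet_colorable(adj, n_colors):
    # brute-force search for a proper coloring of the facet adjacency graph
    facets = sorted(adj)
    for colors in itertools.product(range(n_colors), repeat=len(facets)):
        c = dict(zip(facets, colors))
        if all(c[f] != c[g] for f in facets for g in adj[f]):
            return c
    return None

# the 3-cube: facet f is adjacent to every facet except its opposite f^1,
# so the three pairs of opposite facets can share the three colors
cube = {f: {g for g in range(6) if g not in (f, f ^ 1)} for f in range(6)}
print(facet_colorable(cube, 3))
\end{verbatim}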
Moreover, we show that if a \(2n\)-dimensional \(\text{Spin}^c\)-manifold \(M\) admits a twisted Dirac-operator with non-vanishing index and an effective action of a non-abelian compact connected Lie-group \(G\), then the order of the Weyl-group of \(G\) divides the Euler-characteristic of \(M\) (see Corollary \ref{sec:twist-dirac-oper-7}).
This enables us to prove the following result.
\begin{theorem}[Corollary~\ref{sec:twist-dirac-oper-15}, Corollary~\ref{sec:twist-dirac-oper-13}]
Let \(n\geq 2\). Then we have:
\begin{enumerate}
\item If \(n\) is odd, then there are infinitely many quasitoric manifolds \(M\) of dimension \(2n\) with \(N^{ss}(M)\leq 3\), i.e. the only semi-simple simply connected compact Lie-group which can act almost effectively on \(M\) is \(SU(2)\).
\item If \(n\) is even, then there are infinitely many quasitoric manifolds of dimension \(2n\) on which no semi-simple compact connected Lie-group can act effectively.
\end{enumerate}
\end{theorem}
We also study those \(2n\)-dimensional quasitoric manifolds whose orbit polytopes admit facet colorings with \(n\) colors and have relatively many non-abelian symmetries.
For these manifolds we have the following theorem.
\begin{theorem}[Theorem~\ref{sec:quas-manif-with-1}, Theorem~\ref{sec:quas-manif-with-2}]
Let \(M\) be a \(2n\)-dimensional quasitoric manifold whose orbit polytope admits a facet coloring with \(n\) colors. Assume that one of the following two conditions holds:
\begin{enumerate}
\item There is an action of a compact Lie-group \(G\) on \(M\) such that \(\dim M/G \leq 1\).
\item We have \(N(M)\geq 3n-4\).
\end{enumerate}
Then \(M\) is the total space of a fiber bundle over \(\prod S^2\).
\end{theorem}
By considering twisted Dirac-operators we can also prove the following theorem:
\begin{theorem}[Corollary \ref{sec:two-vanish-results-2}]
\label{sec:introduction}
Let \(M\) be a \(\text{Spin}\)-manifold with \(p_1(M)=0\), \(G\) an exceptional Lie-group or \(G=\text{Spin}(2l+1)\) or \(G=Sp(l)\) with \(l=1,3,6\) or \(l\geq 15\) and \(T\) a maximal torus of \(G\). If the Witten-genus of \(M\) does not vanish, then we have \(N^{ss}(M\times \prod_{i=1}^k G/T)=k\dim G\).
\end{theorem}
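For instance, taking \(G=\text{Spin}(3)\) (the case \(l=1\)) we have \(\text{Spin}(3)/T=S^2\) and \(\dim \text{Spin}(3)=3\), so that Theorem~\ref{sec:introduction} specializes to
\begin{equation*}
N^{ss}\Big(M\times \prod_{i=1}^k S^2\Big)=3k
\end{equation*}
for every \(\text{Spin}\)-manifold \(M\) with \(p_1(M)=0\) and non-vanishing Witten-genus.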
If more generally \(G\) is a semi-simple compact connected Lie-group with maximal torus \(T\), then we still get upper bounds for the semi-simple symmetry degree of \(M\times G/T\).
But we do not get the exact value of \(N^{ss}(M\times G/T)\) in the more general setting.
It should be noted here, that it has been shown by Hauschild \cite{0623.57024} that the semi-simple symmetry degree of \(G/T\) is equal to \(\dim G\) if \(G\) is a semi-simple compact connected Lie-group with maximal torus \(T\).
So Theorem~\ref{sec:introduction} may be viewed as a partial generalization of his result.
This paper is organized as follows.
In Sections \ref{sec:twisted} and \ref{sec:prod} we discuss indices of twisted Dirac-operators on \(\text{Spin}^c\)-manifolds.
Then we prove Theorem~\ref{sec:introduction} in Section \ref{sec:van_witten}.
In Section \ref{sec:qt_mfd} we apply the results of the previous sections to show that there are quasitoric manifolds with low semi-simple symmetry degree.
In Sections \ref{sec:cohom1} and \ref{sec:cube} we study those quasitoric manifolds which have a non-vanishing index and relatively many non-abelian symmetries.
In Section \ref{sec:highly} we show that \(\mathbb{C} P^n\) is the most symmetric quasitoric manifold in dimension \(2n\); this section is independent of the others.
In Appendix \ref{sec:tori} we prove some technical details which are needed in Section \ref{sec:twisted}.
I would like to thank Anand Dessai for comments on an earlier version of this paper.
I would also like to thank Nigel Ray and the University of Manchester for hospitality while I was working on this paper.
\section{Twisted Dirac-operators and non-abelian Lie-group actions}
\label{sec:twisted}
The purpose of this section is to generalize some results of \cite{0963.19002} to a class of non-abelian compact non-connected Lie-groups.
We begin with a review of some well known facts about \(\text{Spin}^c\)-manifolds and the results of \cite{0963.19002} and \cite{0944.58019}.
For more background information about the group \(\text{Spin}^c(k)\) and \(\text{Spin}^c\)-structures on manifolds see for example \cite{0146.19001}, \cite{0247.57010} and \cite{0395.57020}.
An orientable manifold \(M\) has a \(\text{Spin}^c\)-structure if and only if the second Stiefel-Whitney-class \(w_2(M)\) of \(M\) is the \(\bmod 2\)-reduction of an integral class \(c\in H^2(M;\mathbb{Z})\).
Associated to a \(\text{Spin}^c\)-structure on \(M\) there is a complex line bundle.
We denote the first Chern-class of this line bundle by \(c_1^c(M)\).
Its \(\bmod 2\)-reduction is \(w_2(M)\); conversely, any integral cohomology class whose \(\bmod 2\)-reduction is \(w_2(M)\) may be realized as the first Chern-class of a line bundle associated to some \(\text{Spin}^c\)-structure on \(M\).
Let \(M\) be a \(2n\)-dimensional \(\text{Spin}^c\)-manifold on which \(S^1\) acts smoothly.
We say that the \(S^1\)-action on \(M\) lifts into the \(\text{Spin}^c\)-structure \(P\), if there is an \(S^1\)-action on \(P\) which commutes with the \(\text{Spin}^c(2n)\)-action on \(P\) such that the projection \(P\rightarrow M\) is \(S^1\)-equivariant.
\begin{lemma}
\label{sec:twist-dirac-oper-12}
The \(S^1\)-action on \(M\) lifts into the \(\text{Spin}^c\)-structure if and only if it lifts to an action on the line bundle associated to the \(\text{Spin}^c\)-structure.
\end{lemma}
\begin{proof}
If the \(S^1\)-action lifts to an action on the \(\text{Spin}^c\)-structure \(P\) of \(M\), then it also lifts into the associated line bundle \(P\times_{\text{Spin}^c} \mathbb{C}\).
Now assume that the \(S^1\)-action on \(M\) lifts into the associated line bundle of \(P\).
Let \(Q\) be the oriented orthogonal frame bundle of \(M\).
Then the \(S^1\)-action lifts into \(Q\).
Moreover, by \cite[p. 127-128]{0247.57010}, the action on \(M\) lifts into \(P\) if and only if the action on \(Q\) lifts into the \(S^1\)-bundle
\begin{equation*}
\xi:P\rightarrow P/S^1=Q.
\end{equation*}
Now we consider the Serre-spectral sequence for the fibration \(Q\rightarrow Q_{S^1}\rightarrow BS^1\).
By Corollary 1.3 of \cite[p. 14]{0346.57014}, the \(S^1\)-action lifts into \(\xi\) if and only if
\begin{align*}
d_2c_1(\xi)&=0&\text{and}&&d_3c_1(\xi)=0.
\end{align*}
Because \(H^*(BS^1;\mathbb{Z})\) is concentrated in even degrees, this holds if and only if \(d_2c_1(\xi)=0\).
Let \(\xi'\) be the \(S^1\)-bundle over \(Q\) associated to the pullback of the line bundle associated to \(P\).
Then the \(S^1\)-action on \(Q\) lifts into \(\xi'\).
Since \(\xi'=\xi^2\), we have \(2d_2c_1(\xi)=d_2c_1(\xi')=0\).
Because \(E_2^{2,1}=H^2(BS^1;H^1(Q;\mathbb{Z}))\) is torsion-free, it follows that the \(S^1\)-action lifts into \(P\).
\end{proof}
If the \(S^1\)-action on \(M\) lifts into the \(\text{Spin}^c\)-structure, then we have an \(S^1\)-equivariant \(\text{Spin}^c\)-Dirac operator \(\partial_c\).
Its \(S^1\)-equivariant index is an element of the representation ring of \(S^1\) and is defined as
\begin{equation*}
\ind_{S^1}(\partial_c) = \ker \partial_c - \coker \partial_c \in R(S^1).
\end{equation*}
Let \(V\) be an \(S^1\)-equivariant complex vector bundle over \(M\) and \(W\) an even-dimensional \(S^1\)-equivariant \(\text{Spin}\) vector bundle over \(M\).
With this data we build a power series \(R\in K_{S^1}(M)[[q]]\) defined by
\begin{align*}
R&= \bigotimes_{k=1}^\infty S_{q^k}(\tilde{TM}\otimes_\mathbb{R} \mathbb{C})\otimes \Lambda_{-1}(V^*)\otimes \bigotimes_{k=1}^\infty \Lambda_{-q^k}(\tilde{V}\otimes_\mathbb{R} \mathbb{C})\\& \otimes \Delta(W)\otimes\bigotimes_{k=1}^\infty \Lambda_{q^k}(\tilde{W}\otimes_\mathbb{R}\mathbb{C}).
\end{align*}
Here \(q\) is a formal variable, \(\tilde{E}\) denotes the reduced vector bundle \(E -\dim E\), \(\Delta(W)\) is the full complex spinor bundle associated to the \(\text{Spin}\)-vector bundle \(W\), and \(\Lambda_t\) (resp. \(S_t\)) denotes the exterior (resp. symmetric) power operation. The tensor products are, if not indicated otherwise, taken over the complex numbers.
After extending \(\ind_{S^1}\) to power series we may define:
\begin{definition}
Let \(\varphi^c(M;V,W)_{S^1}\) be the \(S^1\)-equivariant index of the \(\text{Spin}^c\)-Dirac operator twisted with \(R\):
\begin{equation*}
\varphi^c(M;V,W)_{S^1}= \ind_{S^1}(\partial_c \otimes R)\in R(S^1)[[q]].
\end{equation*}
We denote by \(\varphi^c(M;V,W)\) the non-equivariant version of this index:
\begin{equation*}
\varphi^c(M;V,W)= \ind(\partial_c \otimes R)\in \mathbb{Z}[[q]].
\end{equation*}
\end{definition}
The Atiyah-Singer index theorem \cite{0164.24301} allows us to calculate
\begin{equation*}
\varphi^c(M;V,W)=\langle e^{c_1^c(M)/2}\ch(R)\hat{A}(M),[M]\rangle.
\end{equation*}
Here we have
\begin{equation*}
\ch(R)=Q_1(TM)Q_2(V)Q_3(W)
\end{equation*}
with
\begin{align*}
Q_1(TM)&=\ch(\bigotimes_{k=1}^{\infty} S_{q^k}(\tilde{TM}\otimes_\mathbb{R} \mathbb{C}))=\prod_i\prod_{k=1}^\infty \frac{(1-q^k)^2}{(1-e^{x_i}q^k)(1-e^{-x_i}q^k)},\\
Q_2(V)&=\ch( \Lambda_{-1}(V^*)\otimes \bigotimes_{k=1}^\infty \Lambda_{-q^k}(\tilde{V}\otimes_\mathbb{R} \mathbb{C}))\\ &= \prod_i\left( (1-e^{-v_i})\prod_{k=1}^{\infty} \frac{(1-e^{v_i}q^k)(1-e^{-v_i}q^k)}{(1-q^k)^2}\right),\\
Q_3(W)&=\ch(\Delta(W)\otimes\bigotimes_{k=1}^\infty \Lambda_{q^k}(\tilde{W}\otimes_\mathbb{R}\mathbb{C}))\\ &=\prod_i\left((e^{w_i/2}+e^{-w_i/2})\prod_{k=1}^{\infty} \frac{(1+e^{w_i}q^k)(1+e^{-w_i}q^k)}{(1+q^k)^2}\right).
\end{align*}
Here \(\pm x_i\) (resp. \(v_i\) and \(\pm w_i\)) denote the formal roots of \(TM\) (resp. \(V\) and \(W\)).
If \(c_1^c(M)=c_1(V)\), then we have
\begin{equation*}
e^{c_1^c(M)/2}Q_2(V)= e(V)\frac{1}{\hat{A}(V)}\prod_i\prod_{k=1}^{\infty} \frac{(1-e^{v_i}q^k)(1-e^{-v_i}q^k)}{(1-q^k)^2}=e(V)Q_2'(V).
\end{equation*}
Note that if \(M\) is a \(\text{Spin}\)-manifold,
then there is a canonical \(\text{Spin}^c\)-structure on \(M\).
With this \(\text{Spin}^c\)-structure \(\varphi^c(M;0,TM)\) is equal to the elliptic genus of \(M\).
Moreover, \(\varphi^c(M;0,0)\) is the Witten-genus of \(M\).
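For instance, for the canonical \(\text{Spin}^c\)-structure of a \(\text{Spin}\)-manifold we have \(c_1^c(M)=0\), and for \(V=W=0\) the twisting bundle \(R\) reduces to \(\bigotimes_{k=1}^\infty S_{q^k}(\tilde{TM}\otimes_\mathbb{R}\mathbb{C})\), so that the index theorem yields the familiar expression
\begin{equation*}
\varphi^c(M;0,0)=\Big\langle \hat{A}(M)\prod_i\prod_{k=1}^\infty \frac{(1-q^k)^2}{(1-e^{x_i}q^k)(1-e^{-x_i}q^k)},[M]\Big\rangle
\end{equation*}
for the Witten-genus in the normalization used here.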
Dessai proved the following theorem.
\begin{theorem}[{\cite[Theorem 3.2, p. 243]{0944.58019}}]
\label{sec:twist-dirac-oper-9}
Assume that the equivariant Pontrjagin-class \(p_1^{S^1}(V+W-TM)\) restricted to \(M^{S^1}\) is equal to \(\pi_{S^1}^*(Ix^2)\) modulo torsion, where \(\pi_{S^1}:BS^1\times M^{S^1} \rightarrow BS^1\) is the projection on the first factor, \(x\in H^2(BS^1;\mathbb{Z})\) is a generator and \(I\) is an integer.
Assume, moreover, that \(c_1^c(M)\) and \(c_1(V)\) are equal modulo torsion.
If \(I<0\), then \(\varphi^c(M;V,W)_{S^1}\) vanishes identically.
\end{theorem}
Let \(G\) be a compact Lie-group such that:
\begin{enumerate}
\item \label{item:1} There is an exact sequence of Lie-groups
\begin{equation*}
1 \rightarrow T \rightarrow G\rightarrow W(G)\rightarrow 1,
\end{equation*}
where \(T\) is a torus and \(W(G)\) a finite group.
\item\label{item:2} By condition (\ref{item:1}), \(G\) acts on \(T\) by conjugation; since \(T\) is abelian, this action factors through \(W(G)\). We assume that the induced action of \(W(G)\) on \(T\) is non-trivial.
\end{enumerate}
An action of \(G\) on a manifold \(M\) is called nice if \(G\) acts almost effectively on \(M\) and if the induced action on \(H^*(M;\mathbb{Z})\) is trivial.
For nice \(G\)-actions on \(\text{Spin}^c\)-manifolds we have the following vanishing result.
\begin{theorem}
\label{sec:twist-dirac-oper-8}
Let \(M\) be a \(\text{Spin}^c\)-manifold on which \(G\) acts nicely such that \(M^G\neq \emptyset\).
Let \(V\) and \(W\) be sums of complex line bundles over \(M\) such that \(W\) is \(\text{Spin}\), \(c_1(V)=c_1^c(M)\) modulo torsion and \(p_1(V+W-TM)=0\) modulo torsion.
Assume that \(b_1(M)=0\) or that the \(G\)-action on \(M\) extends to an action of a simply connected compact Lie-group.
Then \(\varphi^c(M;V,W)\) vanishes.
\end{theorem}
\begin{remark}
Theorem \ref{sec:twist-dirac-oper-8} is a generalization of Theorem 4.4 of \cite[p. 521]{0963.19002}.
\end{remark}
Before we prove Theorem~\ref{sec:twist-dirac-oper-8} we state three lemmas about the equivariant cohomology of \(G\)-manifolds which are needed in the proof.
\begin{lemma}
\label{sec:twist-dirac-oper}
Let \(M\) be a nice \(G\)-manifold such that \(M^G\neq \emptyset\) and \(b_1(M)=0\). Then
\begin{equation*}
0 \rightarrow H^2(BG;\mathbb{Z})\rightarrow H^2_G(M;\mathbb{Z})\rightarrow H^2(M;\mathbb{Z})\rightarrow 0
\end{equation*}
is exact.
\end{lemma}
\begin{proof}
We consider the Serre-spectral sequence for the fibration \(M\rightarrow M_G\rightarrow BG\).
Because the \(G\)-action on \(M\) is nice we have
\begin{equation*}
E_2^{p,q}=H^p(BG;H^q(M;\mathbb{Z})).
\end{equation*}
Since \(b_1(M)=0\), we have \(E_\infty^{1,1}=0\).
Therefore we have an exact sequence
\begin{equation*}
0\rightarrow E_\infty^{2,0}\rightarrow H^2_G(M;\mathbb{Z})\rightarrow E_\infty^{0,2}\rightarrow 0.
\end{equation*}
Because \(M^{G}\neq \emptyset\), \(H^*(BG;\mathbb{Z})\rightarrow H_G^*(M;\mathbb{Z})\) is injective.
Hence, we have
\begin{equation*}
H^*(BG;\mathbb{Z})=E_2^{*,0}=E_\infty^{*,0}
\end{equation*}
and the differentials \(d_r:E_r^{*-r,r-1}\rightarrow E_r^{*,0}\) vanish.
It remains to show that \(E_\infty^{0,2}=E_2^{0,2}=H^2(M;\mathbb{Z})\).
This is equivalent to the vanishing of the differentials \(d_r:E_r^{0,2}\rightarrow E_r^{r,3-r}\) for all \(r\).
Now we have \(E_2^{2,1}=0\) because \(b_1(M)=0\).
Therefore \(d_2\) vanishes.
Since there are \(G\)-fixed points in \(M\), \(d_3\) vanishes.
The differentials \(d_r\), \(r>3\), vanish because \(E_r^{r,3-r}=0\) for \(r>3\).
Therefore the statement follows.
\end{proof}
\begin{lemma}
\label{sec:twist-dirac-oper-17}
Let \(M\) be a nice \(G\)-manifold. If the \(G\)-action on \(M\) extends to an action of a simply connected compact Lie-group \(\hat{G}\), then the natural map
\begin{equation*}
H_{\hat{G}}^2(M;\mathbb{Z})\rightarrow H^2(M;\mathbb{Z})
\end{equation*}
is an isomorphism. Moreover, \(H_{{G}}^2(M;\mathbb{Z})\rightarrow H^2(M;\mathbb{Z})\) is surjective.
\end{lemma}
\begin{proof}
Since \(B\hat{G}\) is three-connected the first statement follows from an inspection of the Serre spectral sequence for the fibration \(M\rightarrow M_{\hat{G}}\rightarrow B\hat{G}\) as in the proof of Lemma~\ref{sec:twist-dirac-oper}.
Then the second statement follows because \(H_{\hat{G}}^2(M;\mathbb{Z})\rightarrow H^2(M;\mathbb{Z})\) factors through \(H_G^2(M;\mathbb{Z})\).
\end{proof}
\begin{lemma}
\label{sec:twist-dirac-oper-1}
Assume that \(T^{W(G)}\) is finite or, equivalently, that \(\dim (LT)^{W(G)}=0\) or \(\dim (LT^*)^{W(G)}=0\).
Let \(M\) be a nice \(G\)-manifold, then
\begin{equation*}
H^4(BG;\mathbb{Q})\rightarrow H_G^4(M;\mathbb{Q})\rightarrow H^4(M;\mathbb{Q})
\end{equation*}
is exact.
\end{lemma}
\begin{proof}
Because \(\dim (LT^*)^{W(G)}=0\), we have \(H^i(BG;\mathbb{Q})=0\) for \(i=1,2,3\) by Proposition 20.4 of \cite[p. 68]{0158.20503}.
Therefore from the Serre spectral sequence of the fibration \(M\rightarrow M_G\rightarrow BG\) we get an exact sequence
\begin{equation*}
0\rightarrow E_\infty^{4,0}\rightarrow H_G^4(M;\mathbb{Q})\rightarrow E_\infty^{0,4}\rightarrow 0.
\end{equation*}
Since \(H^4(BG;\mathbb{Q})\) surjects to \(E_\infty^{4,0}\) and \(E_\infty^{0,4}\) injects into \(H^4(M;\mathbb{Q})\), the statement follows.
\end{proof}
Now we are ready to prove Theorem~\ref{sec:twist-dirac-oper-8} in two special cases.
The general case will follow from these special cases.
\begin{lemma}
\label{sec:twist-dirac-oper-2}
Assume that \(T^{W(G)}\) is finite. Then Theorem~\ref{sec:twist-dirac-oper-8} holds.
\end{lemma}
\begin{proof}
Let \(V=\bigoplus L_i\) and \(W=\bigoplus L_i'\) with \(L_i,L_i'\) line bundles.
By Lemmas \ref{sec:twist-dirac-oper}, \ref{sec:twist-dirac-oper-17} and Corollary 1.2 of \cite[p. 13]{0346.57014}, the \(G\)-action on \(M\) lifts into each line bundle \(L_i,L_i'\).
Therefore \(p_1^G(V+W-TM)\) is well defined.
Moreover, by Lemma~\ref{sec:twist-dirac-oper-12}, the action of every \(S^1\subset T\subset G\) lifts into the \(\text{Spin}^c\)-structure on \(M\).
By Theorem~\ref{sec:twist-dirac-oper-9}, it is sufficient to show that, for \(S^1 \hookrightarrow T \hookrightarrow G\),
\begin{equation*}
p_1^{S^1}(V+W-TM)=\rho(S^1,G)^*p_1^G (V+W-TM) = a \pi_{S^1}^*(x^2),
\end{equation*}
with \(a\in \mathbb{Q}\), \(a<0\), and \(x\in H^2(BS^1;\mathbb{Z})\) a generator.
Here \(\rho(S^1,G)^*: H_G^*(M;\mathbb{Q})\rightarrow H_{S^1}^*(M;\mathbb{Q})\) is the map induced by the inclusion \(S^1\hookrightarrow G\) and \(\pi_{S^1}^*:H^*(BS^1;\mathbb{Q})\rightarrow H_{S^1}^*(M;\mathbb{Q})\) is the natural map.
By Lemma \ref{sec:twist-dirac-oper-1}, there is an \(\alpha\in H^4(BG;\mathbb{Q})\) with \(\pi_G^*(\alpha)=p_1^G(V+W-TM)\).
Therefore we have
\begin{equation*}
p_1^{S^1}(V+W-TM)=\pi_{S^1}^*\rho(S^1,G)^*\alpha = a \pi_{S^1}^*(x^2)
\end{equation*}
with \(a \in\mathbb{Q}\).
It remains to show that \(a< 0\).
We restrict \(p_1^{S^1}(V+W-TM)\) to a \(G\)-fixed point \(y\).
Then we have
\begin{equation*}
p_1^{S^1}(V+W-TM)|_y=\sum \alpha_i^2 + \sum \beta_i^2 - \sum \gamma_i^2,
\end{equation*}
where \(\alpha_i\) is the weight of the \(S^1\)-representation on the fiber of \(L_i\) over \(y\),
\(\beta_i\) is the weight of the \(S^1\)-representation on the fiber of \(L_i'\) over \(y\) and the \(\gamma_i\) are the weights of the \(S^1\)-representation \(T_yM\).
The representations on the fibers of \(L_i,L_i'\) are restrictions of one-dimensional \(G\)-representations to \(S^1\).
Because \((LT^*)^{W(G)}=0\), all such representations are trivial.
Therefore \(a = -\sum \gamma_i^2 < 0\) follows, because \(S^1\) acts non-trivially on \(M\).
\end{proof}
\begin{lemma}
\label{sec:twist-dirac-oper-4}
Assume that \(W(G)\) is cyclic. Then Theorem~\ref{sec:twist-dirac-oper-8} holds.
\end{lemma}
\begin{proof}
We show that \(G\) has a subgroup satisfying the assumptions of Lemma \ref{sec:twist-dirac-oper-2}.
Then the statement follows from that lemma.
By Lemma~\ref{sec:groups-acting-tori-1}, there are two \(W(G)\)-invariant subtori \(T_1\) and \(T_2\) of \(T\) such that
\begin{itemize}
\item \(W(G)\) acts trivially on \(T_1\).
\item \(T_2^{W(G)}\) is finite.
\item \(T\) is generated by \(T_1\) and \(T_2\).
\end{itemize}
Let \(g\in G\) be a preimage of a generator of \(W(G)\).
Then we have \(g^{\# W(G)}\in T\). Let \(t_1\in T_1\) and \(t_2\in T_2\) be such that \(g^{\# W(G)}=t_1t_2\).
Moreover, let \(t\in T_1\) be such that \(t_1=t^{\# W(G)}\); such a \(t\) exists because the torus \(T_1\) is divisible.
Then \(gt^{-1}\) is another preimage of the generator of \(W(G)\) and \((gt^{-1})^{\# W(G)}\in T_2\).
Let \(G'\) be the subgroup of \(G\) generated by \(gt^{-1}\) and \(T_2\).
Then there is an exact sequence
\begin{equation*}
1\rightarrow T_2\rightarrow G'\rightarrow W(G)\rightarrow 1.
\end{equation*}
Therefore \(G'\) satisfies the assumptions of Lemma~\ref{sec:twist-dirac-oper-2}.
\end{proof}
In the situation of Theorem \ref{sec:twist-dirac-oper-8}, there is always a cyclic subgroup \(W(H)\) of \(W(G)\) which acts non-trivially on \(T\).
If \(H\) is the preimage of \(W(H)\) under the map \(G\rightarrow W(G)\), then Theorem \ref{sec:twist-dirac-oper-8} follows from Lemma~\ref{sec:twist-dirac-oper-4} applied to the restricted action of \(H\) on \(M\).
From Theorem~\ref{sec:twist-dirac-oper-8} we get the following corollaries about actions of compact connected non-abelian Lie-groups on \(\text{Spin}^c\)-manifolds.
\begin{cor}
\label{sec:twist-dirac-oper-5}
Let \(G\) be a compact connected non-abelian Lie-group and \(M\) a \(\text{Spin}^c\)-manifold with \(\varphi^c(M;V,W)\neq 0\) and \(V\), \(W\) as in Theorem~\ref{sec:twist-dirac-oper-8}.
Assume that \(G\) acts almost effectively on \(M\) and that \(T\) is a maximal torus of \(G\).
Then, for all \(x\in M^T\), \(G_x=T\) holds.
\end{cor}
\begin{proof}
Let \(\tilde{G}=G'\times T_0\) be a covering group of \(G\) with \(G'\) a semi-simple simply connected compact Lie-group and \(T_0\) a torus.
Then we have \(\tilde{G}_x=G'_x\times T_0\).
We will show that \(G'_x\) is a maximal torus of \(G'\).
From this the statement follows.
Because \(x\in M^T\), the identity component \(G_x'^0\) of \(G'_x\) contains a maximal torus \(T'\) of \(G'\).
If \(G_x'^0\) were non-abelian, there would be a group homomorphism \(\text{Pin}(2)\rightarrow G_x'^0\) with finite kernel, since such a homomorphism exists for each compact connected non-abelian Lie-group; the induced \(\text{Pin}(2)\)-action on \(M\) is nice and fixes \(x\), so Theorem \ref{sec:twist-dirac-oper-8} would imply \(\varphi^c(M;V,W)=0\), a contradiction.
Hence \(G_x'^0=T'\) is a maximal torus of \(G'\).
Assume that \(G'_x\neq T'\).
Then there is an exact sequence
\begin{equation*}
1 \rightarrow T'\rightarrow G'_x \rightarrow G'_x/T'\rightarrow 1
\end{equation*}
and we have \(G_x'/T'\subset N_{G'} T'/T'\).
Therefore \(G'_x/T'\) acts non-trivially on \(T'\).
But this is a contradiction to Theorem \ref{sec:twist-dirac-oper-8}.
\end{proof}
\begin{cor}
\label{sec:twist-dirac-oper-7}
Let \(M\) and \(G\) be as in Corollary \ref{sec:twist-dirac-oper-5}. Then \(\# W(G) \mid \chi(M)\).
\end{cor}
\begin{proof}
We have \(\chi(M)=\chi(M^T)\), where \(T\) is a maximal torus of \(G\).
By Corollary \ref{sec:twist-dirac-oper-5}, \(W(G)\) acts freely on \(M^T\). Therefore we get
\begin{equation*}
\chi(M)=\# W(G) \chi (M^T/W(G)).
\end{equation*}
\end{proof}
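For instance, since the Weyl-group of a non-abelian compact connected Lie-group contains a reflection and therefore has even order, Corollary~\ref{sec:twist-dirac-oper-7} implies
\begin{equation*}
2 \mid \# W(G) \mid \chi(M),
\end{equation*}
so the Euler-characteristic of such an \(M\) is always even.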
The following two corollaries give upper bounds for the degree of symmetry of a \(\text{Spin}^c\)-manifold which admits a twisted Dirac-operator with non-zero index.
\begin{cor}
\label{sec:twist-dirac-oper-3}
Let \(M\) be a \(2n\)-dimensional \(\text{Spin}^c\)-manifold with \(\varphi^c(M;V,W)\neq 0\) and \(V\), \(W\) as in Theorem~\ref{sec:twist-dirac-oper-8} and \(G\) be a compact connected Lie-group with
\begin{enumerate}
\item \(\dim G -\rank G>2n\) or
\item \(\dim G- \rank G=2n\) and \(\rank G < T(M)\).
\end{enumerate}
Then there is no effective action of \(G\) on \(M\).
\end{cor}
\begin{proof}
Let \(\tilde{G}=G'\times T_0\) be a covering group of \(G\) with \(G'\) a semi-simple simply connected compact Lie-group and \(T_0\) a torus.
Let \(x\in M\).
Then by Theorem~\ref{sec:twist-dirac-oper-8} the identity component of \(G'_x\) must be a torus.
Therefore \(\dim G_x \leq \rank G\).
Moreover, there is an embedding of \(G/G_x\) in \(M\).
In case (1) this is impossible.
In case (2) we have, for dimension reasons, that \(M=G/H\) and that \(H\) has maximal rank in \(G\).
By Corollary~\ref{sec:twist-dirac-oper-5}, \(H\) must be a maximal torus of \(G\).
Moreover, \(G\) is semi-simple because it acts effectively on \(M\).
The torus symmetry degree of \(G/H\) was calculated by Hauschild \cite[Theorem 3.3, p. 563]{0623.57024}.
It is equal to \(\rank G\), which contradicts our assumption that \(\rank G<T(M)\).
\end{proof}
Note that, if \(G\) is a compact Lie-group which acts effectively on a manifold \(M\) as in the above corollary, then the rank of \(G\) is bounded from above by the torus symmetry degree of \(M\).
Therefore we have \(N(M)\leq 2n + T(M)\).
If the Euler-characteristic of \(M\) is non-zero, we have \(T(M)\leq n\), so that we get the following corollary.
\begin{cor}
\label{sec:twist-dirac-oper-11}
Let \(M\) be a \(2n\)-dimensional \(\text{Spin}^c\)-manifold with \(\chi(M)\neq 0\) and \(\varphi^c(M;V,W)\neq 0\) with \(V\), \(W\) as in Theorem~\ref{sec:twist-dirac-oper-8} and \(G\) a compact connected Lie-group which acts effectively on \(M\).
Then \(\dim G \leq 3n\).
If \(\dim G= 3n\), then \(M=\prod S^2\).
\end{cor}
\begin{proof}
By the discussion above, we only have to prove the second statement.
If \(\dim G=3n\), then we must have \(\rank G =n\) and \(M=G/T\), where \(T\) is a maximal torus of \(G\).
Therefore \(G\) is semi-simple.
Because for a simple Lie-group \(G'\) we have \(\dim G' \geq 3\rank G'\) with equality holding if and only if \(G'\) is a quotient of \(SU(2)\), we see that \(G\) has a covering group of the form \(\prod SU(2)\).
Therefore the statement follows.
\end{proof}
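The equality case in Corollary~\ref{sec:twist-dirac-oper-11} is realized by the factorwise action of \(SU(2)^n\) on
\begin{equation*}
\prod_{i=1}^n S^2= SU(2)^n/T^n, \qquad \dim SU(2)^n=3n.
\end{equation*}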
\section{Products and connected sums}
\label{sec:prod}
In this section we discuss the calculation of the indices \(\varphi^c(M;V,W)\) for the case where \(M\) is a connected sum or a product of \(\text{Spin}^c\)-manifolds.
The formulas derived here will be used in our applications of the results of the previous section in Sections~\ref{sec:van_witten} and \ref{sec:qt_mfd}.
For cartesian products of \(\text{Spin}^c\)-manifolds we have the following lemma.
\begin{lemma}
\label{sec:prod-conn-sums-2}
Let \(M_1,M_2\) be even-dimensional \(\text{Spin}^c\)-manifolds, \(V_i\rightarrow M_i\) complex vector bundles and \(W_i\rightarrow M_i\), \(i=1,2\), \(\text{Spin}\) vector bundles.
Then \(M_1\times M_2\) is naturally a \(\text{Spin}^c\)-manifold and
\begin{equation*}
\varphi^c(M_1\times M_2;p_1^*V_1\oplus p_2^*V_2,p_1^*W_1\oplus p_2^*W_2)=
\varphi^c(M_1;V_1,W_1) \varphi^c(M_2;V_2,W_2),
\end{equation*}
where \(p_i:M_1\times M_2\rightarrow M_i\), \(i=1,2\), is the projection.
\end{lemma}
\begin{proof}
Let \(Q_i\in H^{\dim M_i}(M_i;\mathbb{Q})[[q]]\) be the degree \(\dim M_i\) part of
\begin{equation*}
e^{c_1^c(M_i)/2}\ch(R)\hat{A}(M_i)\in H^*(M_i;\mathbb{Q})[[q]],
\end{equation*}
\(i=1,2\).
Then we have
\begin{align*}
\varphi^c(M_1\times M_2;p_1^*V_1\oplus p_2^*V_2,p_1^*W_1\oplus p_2^*W_2)&=\langle p_1^*Q_1p_2^*Q_2,[M_1\times M_2]\rangle\\
&=\langle Q_1,[M_1]\rangle\langle Q_2,[M_2]\rangle\\
&= \varphi^c(M_1;V_1,W_1)\varphi^c(M_2;V_2,W_2).
\end{align*}
\end{proof}
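In particular, taking \(V_i=W_i=0\) and the canonical \(\text{Spin}^c\)-structures on \(\text{Spin}\)-manifolds \(M_1\), \(M_2\), the lemma recovers the multiplicativity of the Witten-genus:
\begin{equation*}
\varphi^c(M_1\times M_2;0,0)=\varphi^c(M_1;0,0)\,\varphi^c(M_2;0,0).
\end{equation*}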
The connected sum of two \(\text{Spin}^c\)-manifolds is again a \(\text{Spin}^c\)-manifold.
For these manifolds we have the following lemma.
\begin{lemma}
\label{sec:prod-conn-sums}
Let \(M_1,M_2\) be \(\text{Spin}^c\)-manifolds of the same even dimension greater or equal to four, \(V_i\rightarrow M_i\), \(i=1,2\), complex vector bundles which are sums of complex line bundles and \(W_i\rightarrow M_i\), \(i=1,2\), \(\text{Spin}\)-bundles which are sums of complex line bundles such that
\begin{align}
\label{eq:3}
c_1(V_i)&=c_1^c(M_i)&\text{and}&& p_1(V_i+W_i-TM_i)&=0.
\end{align}
Then \(M_1\# M_2\) has a \(\text{Spin}^c\)-structure, such that \(c_1^c(M_1 \# M_2)=c_1^c(M_1)+c_1^c(M_2)\).
If \(\dim V_1> \dim V_2\), then there are vector bundles \(V\rightarrow M_1\# M_2\), \(W\rightarrow M_1\# M_2\) which are sums of complex line bundles satisfying (\ref{eq:3}) such that
\begin{equation*}
\varphi^c(M_1\# M_2;V,W)=2^{\dim_\mathbb{C} W_2}\varphi^c(M_1;V_1,W_1).
\end{equation*}
If \(\dim V_1=\dim V_2\), then the same holds with
\begin{equation*}
\varphi^c(M_1\# M_2;V,W)=2^{\dim_\mathbb{C} W_2}\varphi^c(M_1;V_1,W_1)+2^{\dim_\mathbb{C} W_1}\varphi^c(M_2;V_2,W_2).
\end{equation*}
\end{lemma}
\begin{proof}
Let \(V_i=\bigoplus_{j=1}^{k_i}L_{ji}\) and \(W_i=\bigoplus_{j=1}^{k_i'}L_{ji}'\) for \(i=1,2\).
Then the \(L_{ji},L_{ji}'\) extend uniquely to vector bundles over \(M_1\# M_2\), such that the restriction to \(M_k \), \(k\neq i\) is trivial.
We denote these extensions also by \(L_{ji},L_{ji}'\).
Let
\begin{align*}
V&=\bigoplus_{j=1}^{\max\{k_1,k_2\}}L_{j1}\otimes L_{j2}& &\text{and}&
W&=\bigoplus_{i=1}^2\bigoplus_{j=1}^{k_i'} L_{ji}',
\end{align*}
where \(L_{ji}\) is the trivial complex line bundle for \(j>k_i\).
The cohomology ring of \(M_1\# M_2\) with coefficients in a ring \(R\) is isomorphic to
\begin{equation}
\label{eq:4}
H^*(M_1;R)\times H^*(M_2;R)/I,
\end{equation}
where \(I\) is the ideal generated by \((1,-1)\) and \((\xi_1,-\xi_2)\).
Here \(\xi_i\) denotes the orientation class of \(M_i\).
Moreover, for the characteristic classes of \(M_1 \# M_2\) we have
\begin{align*}
w_i(M_1\# M_2)&=w_i(M_1)+w_i(M_2),&p_i(M_1\# M_2)&= p_i(M_1)+p_i(M_2), & i&>0.
\end{align*}
Therefore there is a \(\text{Spin}^c\)-structure on \(M_1 \# M_2\) with \(c_1^c(M_1 \# M_2)=c_1^c(M_1)+c_1^c(M_2)\).
For the vector bundles \(V\) and \(W\) defined above, we have
\begin{align*}
c_1(V)&=c_1(V_1)+c_1(V_2)=c_1^c(M_1)+c_1^c(M_2)=c_1^c(M_1\# M_2),\\
p_1(V)&=\sum_{j=1}^{k_1}c_1(L_{j1})^2 + \sum_{j=1}^{k_2}c_1(L_{j2})^2=p_1(V_1)+p_1(V_2),\\
p_1(W)&=p_1(W_1)+p_1(W_2).
\end{align*}
Therefore we have \(p_1(V+W-TM_1\# M_2)=0\).
Assuming \(\dim V_1 \geq \dim V_2\), we now have
\begin{align*}
\varphi^c(M_1\# M_2;V,W)&=\langle e(V) Q_2'(V)Q_3(W)Q_1(TM_1\# M_2)\hat{A}(M_1\# M_2),[M_1\# M_2]\rangle\\
&= \langle e(\bigoplus_{j=k_2+1}^{k_1}L_{j1})( e(\bigoplus_{j=1}^{k_2}L_{j1}) + e(\bigoplus_{j=1}^{k_2} L_{j2}))\\
&\quad\quad Q_2'(V)Q_3(W)Q_1(TM_1\# M_2)\hat{A}(M_1\# M_2),[M_1\# M_2]\rangle.
\end{align*}
It follows from (\ref{eq:4}) that for \(i>0\) the \(i\)th Pontrjagin-class of \(W\) is given by \(p_i(W_1)+p_i(W_2)\). A similar statement holds for the Chern-classes of \(V\).
Since \(2^{-\dim_\mathbb{C} W}Q_2'(V)Q_3(W)Q_1(TM)\hat{A}(M)\) is a power series with constant term one in the Pontrjagin-classes of \(V\), \(W\) and \(TM\) whose coefficients do not depend on \(V\), \(W\) and \(TM\), it follows that, for \(i>0\),
\begin{align*}
2^{-\dim_\mathbb{C} W}(Q_2'(V)Q_3(W)&Q_1(TM_1\# M_2)\hat{A}(M_1 \# M_2))_i\\
&=2^{-\dim_\mathbb{C} W_1}(Q_2'(V_1)Q_3(W_1)Q_1(TM_1)\hat{A}(M_1))_i\\
&+2^{-\dim_\mathbb{C} W_2}(Q_2'(V_2)Q_3(W_2)Q_1(TM_2)\hat{A}(M_2))_i.
\end{align*}
Here \((Q_2'(V)Q_3(W)Q_1(TM))_i\) denotes the degree \(4i\) part of \(Q_2'(V)Q_3(W)Q_1(TM)\).
Now the statement follows from (\ref{eq:4}).
\end{proof}
\section{Two vanishing results for the Witten-genus}
\label{sec:van_witten}
In this section we prove vanishing results for the Witten-genus of a \(\text{Spin}\)-manifold \(M\) with \(p_1(M)=0\) such that a product \(M\times M'\) admits an action of a compact connected semi-simple Lie-group of high rank.
Our first result is as follows.
\begin{theorem}
\label{sec:two-vanish-results}
Let \(M\) be a \(\text{Spin}\)-manifold such that \(p_1(M)\) is torsion.
Moreover, let \(M'\) be a \(2n\)-dimensional \(\text{Spin}^c\)-manifold such that there are \(x_1,\dots,x_n\in H^2(M';\mathbb{Z})\) with
\begin{enumerate}
\item \(\sum_{i=1}^n x_i=c_1^c(M')\) modulo torsion,
\item \(\sum_{i=1}^n x_i^2=p_1(M')\) modulo torsion,
\item\label{item:6} \(\langle \prod_{i=1}^n x_i, [M']\rangle \neq 0\).
\end{enumerate}
If there is an almost effective action of a semi-simple simply connected compact Lie-group \(G\) on \(M\times M'\) such that \(\rank G > \rank \langle x_1,\dots, x_n\rangle\), then the Witten-genus \(\varphi^c(M;0,0)\) of \(M\) vanishes.
\end{theorem}
\begin{proof}
Let \(L_i\), \(i=1,\dots,n\), be the line bundles over \(M'\) with \(c_1(L_i)=x_i\).
By Lemma~\ref{sec:twist-dirac-oper-17}, the natural map \(\iota^*:H^2_G(M\times M';\mathbb{Z})\rightarrow H^2(M\times M';\mathbb{Z})\) is an isomorphism.
Therefore by Corollary 1.2 of \cite[p. 13]{0346.57014} the \(G\)-action on \(M\times M'\) lifts into \(p'^*(L_i)\), \(i=1,\dots,n\).
Here \(p': M\times M'\rightarrow M'\) is the projection.
Moreover, by the above cited corollary and Lemma \ref{sec:twist-dirac-oper-12}, the action of every \(S^1\subset G\) lifts
into the \(\text{Spin}^c\)-structure on \(M\times M'\) induced by the \(\text{Spin}\)-structure on \(M\) and the \(\text{Spin}^c\)-structure on \(M'\).
By Lemma \ref{sec:prod-conn-sums-2}, we have
\begin{equation*}
\varphi^c(M\times M';\bigoplus_{i=1}^n p'^*L_i,0)=\varphi^c(M;0,0)\varphi^c(M';\bigoplus_{i=1}^n L_i,0).
\end{equation*}
By condition (\ref{item:6}), we have
\begin{align*}
\varphi^c(M';\bigoplus_{i=1}^n L_i,0)&=\langle Q_1(TM')\prod_{i=1}^n x_i Q_2'(\bigoplus_{i=1}^nL_i) \hat{A}(M'),[M']\rangle\\
&= \langle \prod_{i=1}^n x_i,[M']\rangle\neq 0.
\end{align*}
Hence, \(\varphi^c(M;0,0)\) vanishes if and only if \(\varphi^c(M\times M';\bigoplus_{i=1}^n p'^*L_i,0)\) vanishes.
Let \(T\) be a maximal torus of \(G\).
If there are no \(T\)-fixed points in \(M\times M'\), then the Lefschetz fixed point formula implies that this index vanishes.
Therefore we may assume that there is a \(T\)-fixed point \(y\in (M\times M')^T\).
As in the proof of Lemma \ref{sec:twist-dirac-oper-1} one proves that
\begin{equation*}
H^4(BG;\mathbb{Q})\rightarrow H^4_G(M\times M';\mathbb{Q})\rightarrow H^4(M\times M';\mathbb{Q})
\end{equation*}
is exact.
Therefore there is a \(v\in H^4(BG;\mathbb{Q})\) such that \(p_1^T(\bigoplus_{i=1}^n p'^*L_i - T(M\times M')) = \pi_T^*\rho(T,G)^*v\).
By Theorem \ref{sec:twist-dirac-oper-9}, it is sufficient to show that there is a homomorphism \(S^1\hookrightarrow T\) such that \(\rho(S^1,T)^*\rho(T,G)^*v=a x^2\), where \(x\in H^2(BS^1;\mathbb{Z})\) is a generator and \(a\in \mathbb{Z}\), \(a<0\).
We have
\begin{equation*}
\rho(T,G)^*v=p_1^T(\bigoplus_{i=1}^n p'^*L_i - T(M\times M'))|_y = \sum_{i=1}^n a_i^2 -\sum v_i^2,
\end{equation*}
where the \(a_i\in H^2(BT;\mathbb{Z})\), \(i=1,\dots,n\), are the weights of the \(T\)-representations \(p'^*L_i|_y\) and the \(v_i\in H^2(BT;\mathbb{Z})\) are the weights of the \(T\)-representation \(T_y(M\times M')\).
Since \(\rank T>\rank \langle x_1,\dots,x_n\rangle\) and \(a_i=(\rho(T,G)^*(\iota^*)^{-1}p'^*(x_i))|_y\) for \(i=1,\dots,n\), there is a homomorphism \(S^1\hookrightarrow T\) such that \(\rho(S^1,T)^*a_i=0\) for \(i=1,\dots,n\).
For this \(S^1\) we have \(\rho(S^1,T)^*v=a x^2\) with \(a\in \mathbb{Z}\), \(a<0\), because the \(G\)-action is almost effective.
\end{proof}
We will see later in Lemma~\ref{sec:twist-dirac-oper-6} that those \(2n\)-dimensional quasitoric manifolds whose orbit polytopes admit facet colorings with \(n\) colors are examples of manifolds which satisfy the assumptions on \(M'\) in the above theorem.
Other examples of such manifolds are given by those manifolds whose tangent bundle is isomorphic to a sum of complex line bundles and which have non-zero Euler-characteristic.
In particular, homogeneous spaces of the form \(H/T\) with \(H\) a semi-simple compact connected Lie-group and \(T\) a maximal torus of \(H\) are examples of such manifolds.
Since in this case we have \(b_2(H/T)=\rank H\) we get the following corollary.
\begin{cor}
\label{sec:vanish-result-witt-1}
Let \(M\) be a \(\text{Spin}\)-manifold with \(p_1(M)=0\) and \(H\) a semi-simple compact connected Lie-group.
If there is an almost effective action of a semi-simple compact connected Lie-group \(G\) on \(M\times H/T\) such that \(\rank G>\rank H\),
then the Witten-genus of \(M\) vanishes.
\end{cor}
As an application of Corollary~\ref{sec:vanish-result-witt-1} we give a new proof for a theorem of Hauschild.
\begin{cor}[{\cite[Theorem 9, p. 552]{0623.57024}}]
Let \(H\) be a semi-simple compact connected Lie-group with maximal torus \(T\). Then we have \(N^{ss}(H/T)=\dim H\).
\end{cor}
\begin{proof}
Let \(G\) be a semi-simple compact connected Lie-group which acts effectively on \(H/T\).
Since the tangent bundle of \(H/T\) splits as a sum of complex line bundles and \(\chi(H/T)\neq 0\), there is a twisted Dirac operator with non-vanishing index on \(H/T\).
Therefore, by the first case in Corollary~\ref{sec:twist-dirac-oper-3}, we have
\begin{equation*}
\dim G - \rank G\leq \dim H/T=\dim H - \rank H.
\end{equation*}
By Corollary~\ref{sec:vanish-result-witt-1} applied in the case \(M=pt\), we see that \(\rank G\leq \rank H\).
Therefore it follows that \(\dim G\leq \dim H\).
Since there is an obvious action of \(H\) on \(H/T\), the statement follows.
\end{proof}
Similarly to Theorem~\ref{sec:two-vanish-results} we can prove the following vanishing result for actions of simple compact connected Lie-groups of high rank.
\begin{theorem}
\label{sec:two-vanish-results-3}
Let \(M\) be a \(\text{Spin}\)-manifold such that \(p_1(M)\) is torsion.
Moreover, let \(M'\) be a \(2n\)-dimensional \(\text{Spin}^c\)-manifold such that \(p_1(M')\) is torsion and there are \(x_1,\dots,x_n\in H^2(M';\mathbb{Z})\) and \(1=n_1<n_2<\dots<n_{k+1}=n+1\) with
\begin{enumerate}
\item \(\sum_{i=1}^n x_i =c_1^c(M')\) modulo torsion,
\item \(\sum_{i=n_j}^{n_{j+1}-1}x_i^2\) is torsion, for \(j=1,\dots,k\),
\item \(\langle \prod_{i=1}^n x_i, [M']\rangle \neq 0\).
\end{enumerate}
If there is an almost effective action of a simple simply connected compact Lie-group \(G\) on \(M\times M'\) such that \(\rank G>\rank \langle x_{n_j},\dots,x_{n_{j+1}-1}\rangle\) for all \(j=1,\dots,k\), then the Witten-genus \(\varphi^c(M;0,0)\) of \(M\) vanishes.
\end{theorem}
\begin{proof}
The relation between the \(G\)-equivariant and the non-equivariant cohomology of \(M\times M'\) is as described in the proof of Theorem~\ref{sec:two-vanish-results}.
We consider the same index \(\varphi^c(M\times M';\bigoplus_{i=1}^n p'^*L_i,0)\) as in the proof of that theorem.
It vanishes if and only if the Witten-genus of \(M\) vanishes.
Let \(T\) be a maximal torus of \(G\).
We may assume that there is a \(T\)-fixed point \(y\) in \(M\times M'\).
As in the proof of Theorem~\ref{sec:two-vanish-results} one sees that there is a \(v\in H^4(BG;\mathbb{Q})\) such that \(p_1^T(\bigoplus_{i=1}^n p'^*L_i - T(M\times M'))= \pi_T^*\rho(T,G)^*v\).
By Theorem \ref{sec:twist-dirac-oper-9}, it is sufficient to show that there is a homomorphism \(S^1\hookrightarrow T\) such that \(\rho(S^1,T)^*\rho(T,G)^*v=a x^2\), where \(x\in H^2(BS^1;\mathbb{Z})\) is a generator and \(a\in \mathbb{Z}\), \(a<0\).
We have
\begin{equation*}
\rho(T,G)^*v=p_1^T(\bigoplus_{i=1}^n p'^*L_i - T(M\times M'))|_y = \sum_{i=1}^n a_i^2 -\sum v_i^2,
\end{equation*}
where the \(a_i\in H^2(BT;\mathbb{Z})\), \(i=1,\dots,n\) are the weights of the \(T\)-representations \(p'^*L_i|_y\) and the \(v_i\in H^2(BT;\mathbb{Z})\) are the weights of the \(T\)-representation \(T_y(M\times M')\).
We will show that the \(a_i\), \(i=1,\dots,n\), vanish.
Let \(1\leq j\leq k\).
Since \(H^4(BG;\mathbb{Q})\rightarrow H^4_G(M\times M';\mathbb{Q})\rightarrow H^4(M\times M';\mathbb{Q})\) is exact, there is a \(v_j'\in H^4(BG;\mathbb{Q})\) such that \(p_1^T(\bigoplus_{i=n_j}^{n_{j+1}-1}p'^*L_i)=\pi_T^*\rho(T,G)^*v_j'\).
Therefore we have
\begin{equation*}
\sum_{i=n_j}^{n_{j+1}-1}a_i^2 = p_1^T(\bigoplus_{i=n_j}^{n_{j+1}-1}p'^*L_i)|_y=\rho(T,G)^*v_j'.
\end{equation*}
Therefore \(\sum_{i=n_j}^{n_{j+1}-1}a_i^2\) is invariant under the action of the Weyl-group \(W(G)\) of \(G\) on \(H^4(BT;\mathbb{Q})\), because the image of \(H^*(BG;\mathbb{Q})\rightarrow H^*(BT;\mathbb{Q})\) consists of \(W(G)\)-invariant elements.
Because \(\dim T>\rank \langle x_{n_j},\dots,x_{n_{j+1}-1}\rangle\) and \(a_i=(\rho(T,G)^*(\iota^*)^{-1}p'^*(x_i))|_y\),
there is an \(S^1\subset T\) such that \(\rho(S^1,T)^*a_i=0\) for \(i=n_j,\dots,n_{j+1}-1\).
Since \(\sum_{i=n_j}^{n_{j+1}-1}a_i^2\in H^4(BT;\mathbb{Q})\) is \(W(G)\)-invariant, it follows that, for all \(w\in W(G)\),
\begin{equation*}
0=\rho(wS^1w^{-1},T)^*\sum_{i=n_j}^{n_{j+1}-1}a_i^2=\sum_{i=n_j}^{n_{j+1}-1}(\rho(wS^1w^{-1},T)^*a_i)^2.
\end{equation*}
Since \(H^*(BS^1;\mathbb{Z})=\mathbb{Z}[x]\), this implies that \(\rho(wS^1w^{-1},T)^*a_i=0\) for all \(i=n_j,\dots,n_{j+1}-1\).
Because \(G\) is simple, there are no non-trivial \(W(G)\)-invariant subtori in \(T\).
Therefore we have \(T=\langle wS^1w^{-1};w\in W(G)\rangle\).
Hence, all \(a_i\in H^2(BT;\mathbb{Z})\), \(i=n_j,\dots,n_{j+1}-1\), \(j=1,\dots,k\), vanish.
Hence, \(\rho(S^1,T)^*\rho(T,G)^*v=a x^2\) with \(a<0\) for all non-trivial homomorphisms \(S^1\hookrightarrow T\).
\end{proof}
Examples of those manifolds which satisfy the assumptions on \(M'\) in the above theorem are manifolds with non-vanishing Euler-characteristic whose tangent bundle is isomorphic to a direct sum of complex line bundles \(L_1,\dots,L_n\) such that there are \(1=n_1<n_2<\dots<n_{k+1}=n+1\) with \(p_1(\bigoplus_{i=n_j}^{n_{j+1}-1}L_i)=0\) for all \(j=1,\dots,k\).
If \(H\) is a simple compact connected Lie-group with maximal torus \(T\), then all Pontrjagin classes of \(H/T\) are torsion. Therefore we get the following corollary.
\begin{cor}
\label{sec:two-vanish-results-1}
Let \(M\) be a \(\text{Spin}\)-manifold with \(p_1(M)=0\) and \(H_1,\dots,H_k\) be simple compact connected Lie-groups with maximal tori \(T_1,\dots,T_k\).
If there is an almost effective action of a simple compact connected Lie-group \(G\) on \(M\times \prod_{i=1}^k H_i/T_i\) such that \(\rank G>\rank H_i\) for all \(i=1,\dots, k\),
then the Witten-genus of \(M\) vanishes.
\end{cor}
The Corollaries~\ref{sec:vanish-result-witt-1} and \ref{sec:two-vanish-results-1} can be used to find an upper bound for the semi-simple symmetry degree of \(M\times H/T\), where \(M\) is a \(\text{Spin}\)-manifold with \(p_1(M)=0\) and non-vanishing Witten-genus and \(H\) is a semi-simple compact Lie-group with maximal torus \(T\).
To give this upper bound we need the following constants.
For \(l\geq 1\) let
\begin{equation*}
\alpha_l=\max\left\{\frac{\dim G}{\rank G}; \; G \text{ a simple compact Lie-group with } \rank G \leq l\right\}.
\end{equation*}
The values of the \(\alpha_l\)'s are listed in Table \ref{tab:erste}.
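As a sample computation, the simple compact Lie-groups of rank at most two are, up to coverings, \(SU(2)\), \(SU(3)\), \(\text{Spin}(5)\) and \(G_2\), so that
\begin{equation*}
\alpha_2=\max\left\{\frac{3}{1},\frac{8}{2},\frac{10}{2},\frac{14}{2}\right\}=7,
\end{equation*}
the maximum being attained by \(G_2\).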
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\(l\)&\(\alpha_l\)& \(G_l\)\\\hline\hline
\(1\)& \(3\)&\(\text{Spin}(3)\)\\\hline
\(2\)& \(7\)&\(G_2\)\\\hline
\(3\)& \(7\)&\(\text{Spin}(7), Sp(3)\)\\\hline
\(4\)& \(13\)&\(F_4\)\\\hline
\(5\)& \(13\)& none\\\hline
\(6\)& \(13\)& \(E_6, \text{Spin}(13), Sp(6)\)\\\hline
\(7\)& \(19\)& \(E_7\)\\\hline
\(8\)& \(31\)& \(E_8\)\\\hline
\(9\leq l\leq 14\)& \(31\)& none\\\hline
\(l\geq 15\)&\(2l+1\)&\(\text{Spin}(2l+1), Sp(l)\)\\
\end{tabular}
\caption{The values of \(\alpha_l\) and the simply connected compact simple Lie-groups \(G_l\) of rank \(l\) with \(\dim G_l= \alpha_l\cdot l\).}
\label{tab:erste}
\end{table}
\begin{cor}
\label{sec:two-vanish-results-2}
Let \(M\) be a \(\text{Spin}\)-manifold with \(p_1(M)=0\), such that the Witten-genus of \(M\) does not vanish and \(H_1,\dots, H_k\) simple compact connected Lie-groups with maximal tori \(T_1,\dots,T_k\).
Then we have
\begin{equation*}
\sum_{i=1}^k \dim H_i \leq N^{ss}(M\times \prod_{i=1}^k H_i/T_i) \leq \alpha_l\sum_{i=1}^k\rank H_i,
\end{equation*}
where \(l=\max\{\rank H_i;\; i=1,\dots,k\}\).
If all \(H_i\) have the same rank and each \(H_i\) has one of the groups listed in Table~\ref{tab:erste} as a covering group, then equality holds in both inequalities.
\end{cor}
\begin{proof}
Let \(G\) be a compact simply connected semi-simple Lie-group which acts almost effectively on \(M\times \prod_{i=1}^k H_i/T_i\).
Then, by Corollary \ref{sec:vanish-result-witt-1}, we have \(\rank G \leq \sum_{i=1}^k \rank H_i\).
By Corollary \ref{sec:two-vanish-results-1}, all simple factors of \(G\) must have rank smaller or equal to \(l\).
Therefore we have
\begin{equation*}
\dim G\leq \alpha_l\rank G \leq \alpha_l\sum_{i=1}^k \rank H_i.
\end{equation*}
Hence, \(N^{ss}(M\times \prod_{i=1}^k H_i/T_i)\leq \alpha_l\sum_{i=1}^k \rank H_i\) follows.
Since there is an obvious \(\prod_{i=1}^k H_i\)-action on \(M\times \prod_{i=1}^k H_i/T_i\), the other inequality follows.
If all \(H_i\) have the same rank \(l\) and each \(H_i\) has one of the groups listed in Table \ref{tab:erste} as a covering group, then \(\sum_{i=1}^k\dim H_i=\alpha_l\sum_{i=1}^k \rank H_i\).
Therefore we get equality in this case.
\end{proof}
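For example, for \(k=1\) and \(H_1=E_8\) the corollary gives
\begin{equation*}
N^{ss}(M\times E_8/T)=\dim E_8=248=\alpha_8\cdot\rank E_8.
\end{equation*}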
\begin{remark}
Our methods to prove Corollary \ref{sec:two-vanish-results-2} break down if we consider the stabilization of \(M\) with a homogeneous space \(H/K\) where \(K\) is a closed subgroup of \(H\) which is not a maximal torus.
If \(K\) does not have maximal rank in \(H\), then there is a fixed-point-free torus action on \(M\times H/K\).
Therefore all indices \(\varphi^c(M\times H/K;V,W)\) vanish by the Lefschetz fixed point formula.
If \(K\) is non-abelian and has maximal rank in \(H\), then all indices \(\varphi^c(M\times H/K;V,W)\) vanish by Corollary~\ref{sec:twist-dirac-oper-5}.
Therefore in both cases the starting point of the proofs of Theorems~\ref{sec:two-vanish-results} and \ref{sec:two-vanish-results-3}, namely the existence of an index \(\varphi^c(M\times M';V,W)\) which vanishes if and only if the Witten-genus of \(M\) vanishes, does not hold in the case \(M'=H/K\).
\end{remark}
\section{Twisted Dirac operators and quasitoric manifolds}
\label{sec:qt_mfd}
In this section we apply the results of the previous sections to the study of quasitoric manifolds.
We begin by recalling the definition of quasitoric manifolds and some of their properties established in \cite{0733.52006} (see also \cite{1012.52021}).
A smooth closed simply connected \(2n\)-dimensional manifold \(M\) with a smooth action of an \(n\)-dimensional torus \(T\) is called quasitoric if the following two conditions are satisfied:
\begin{enumerate}
\item The \(T\)-action on \(M\) is locally isomorphic to the standard action of \(T\) on \(\mathbb{C}^n\).
\item The orbit space \(M/T\) is a simple convex \(n\)-dimensional polytope \(P\).
\end{enumerate}
We denote by \(\mathfrak{F}=\{F_1,\dots,F_m\}\) the set of facets of \(P\). Then for each \(F_i\in\mathfrak{F}\), \(M_i=\pi^{-1}(F_i)\) is a closed connected submanifold of codimension two in \(M\) which is fixed pointwise by a one-dimensional subtorus \(\lambda(F_i)=\lambda(M_i)\) of \(T\).
Here \(\pi:M\rightarrow P\) denotes the orbit map.
These \(M_i\) are called the characteristic submanifolds of \(M\).
The cohomology ring of \(M\) is generated by elements of degree two \(u_1,\dots,u_m\in H^2(M;\mathbb{Z})\) such that
\begin{equation*}
H^*(M;\mathbb{Z})=\mathbb{Z}[u_1,\dots,u_m]/(I+J),
\end{equation*}
where \(I\) is the ideal generated by
\begin{equation*}
\left\{\prod_{j=1}^k u_{i_j};\; \bigcap_{j=1}^k F_{i_j}=\emptyset\right\}
\end{equation*}
and \(J\) is generated by linear relations between the \(u_i\), which depend on the function \(\lambda:\mathfrak{F}\rightarrow \{\text{one-dimensional subtori of } T\}\). It should be noted that each \(u_i\) is the Poincar\'e-dual of \(M_i\).
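For example, \(\mathbb{C} P^n\) is a quasitoric manifold over the \(n\)-simplex \(\Delta^n\) with \(m=n+1\) facets. With the standard characteristic function, \(I\) is generated by the single monomial \(u_1\cdots u_{n+1}\), because only the intersection of all \(n+1\) facets of \(\Delta^n\) is empty, and \(J\) identifies all the \(u_i\) with one generator \(u\), so that
\begin{equation*}
H^*(\mathbb{C} P^n;\mathbb{Z})=\mathbb{Z}[u_1,\dots,u_{n+1}]/(I+J)\cong\mathbb{Z}[u]/(u^{n+1}).
\end{equation*}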
The stable tangent bundle of \(M\) splits as a sum of complex line bundles \(L_1,\dots,L_m\):
\begin{equation*}
TM\oplus \mathbb{R}^{2m-2n}\cong \bigoplus_{i=1}^m{L_i},
\end{equation*}
such that \(c_1(L_i)=\pm u_i\).
In particular, a quasitoric manifold always has a stable almost complex structure and therefore a \(\text{Spin}^c\)-structure.
So the results of Section \ref{sec:twisted} can be used to find quasitoric manifolds with only a few non-abelian symmetries.
To do so, we have to find quasitoric manifolds \(M\) which admit vector bundles \(V\rightarrow M\) and \(W\rightarrow M\) which satisfy the assumptions of Theorem~\ref{sec:twist-dirac-oper-8} such that \(\varphi^c(M;V,W)\neq 0\).
In the following we say that \(M\) has a non-vanishing index \(\varphi^c(M;V,W)\) if these assumptions are satisfied.
Now we turn to the construction of such quasitoric manifolds.
Because the stable tangent bundle of a quasitoric manifold \(M\) splits as a sum of line bundles \(\bigoplus_{i=1}^m L_i\), it seems natural to consider indices \(\varphi^c(M;V,W)\) with \(V=\bigoplus_{i=1}^k L_i\), \(W=\bigoplus_{i=k+1}^m L_i\) and \(W\) a \(\text{Spin}\)-bundle.
But we have the following result:
\begin{theorem}
\label{sec:twist-dirac-oper-14}
Let \(M\) be quasitoric. Moreover, let \(M_1,\dots,M_m\) be the characteristic submanifolds of \(M\) and \(L_i\rightarrow M\) the complex line bundles with \(c_1(L_i)=PD(M_i)\). Let
\begin{align*}
V&=\bigoplus_{i=1}^kL_i &&\text{and}& W&=\bigoplus_{i=k+1}^m L_i
\end{align*}
with \(c_1(V)\equiv c_1(M)\mod 2\) and \(c_1(W)\equiv 0\mod 2\).
Let \(\partial_c\) be the Dirac-operator for a \(\text{Spin}^c\)-structure on \(M\) with \(c_1^c(M)=c_1(V)\).
Then \(\varphi^c(M;V,W)=0\).
\end{theorem}
\begin{proof}
We have
\begin{align*}
\varphi^c(M;V,W)&= \langle e^{c_1^c(M)/2} \ch(R) \hat{A}(M),[M]\rangle\\
&=\langle e(V) Q_1(TM)Q_2'(V)Q_3(W) \hat{A}(V\oplus W),[M]\rangle\\
&=\langle Q_1(V) Q_1(W) Q_2'(V)Q_3(W) \hat{A}(V) \hat{A}(W),[N]\rangle\\
&=\langle Q_1(W)Q_3(W)\hat{A}(W),[N]\rangle\\
&=2^{m-n}\varphi^c(N;0,TN)=0.
\end{align*}
Here \(N\) is the intersection \(\bigcap_{i=1}^k M_i\), which is a quasitoric \(\text{Spin}\)-manifold.
Note that \(N\) cannot be a point: otherwise the first Chern-classes of the summands of \(W\) would form a basis of \(H^2(M;\mathbb{Z})\), so that \(W\) could not be \(\text{Spin}\).
The elliptic genus \(\varphi^c(N;0,TN)\) of \(N\) vanishes, because there is an odd \(S^1\)-action on \(N\) \cite[p. 317]{0712.57010}.
\end{proof}
So we need another idea to construct quasitoric manifolds \(M\) which have non-trivial indices \(\varphi^c(M;V,W)\).
We will prove that those \(2n\)-dimensional quasitoric manifolds whose orbit polytopes admit facet colorings with \(n\) colors are such examples.
Before we do so we summarize some properties of \(n\)-dimensional polytopes and facet colorings.
A facet coloring with \(d\) colors of a simple \(n\)-dimensional polytope \(P\)
is a map \(f:\mathfrak{F}\rightarrow \{1,\dots,d\}\) such that \(f(F_i)\neq f(F_j)\) whenever \(F_i\cap F_j\neq \emptyset\) and \(F_i\neq F_j\).
Because \(n\) facets of \(P\) meet in each vertex, at least \(n\) colors are needed to color \(P\).
The following description of simple \(n\)-dimensional polytopes which admit a coloring with \(n\) colors is due to Joswig.
\begin{theorem}[{\cite[Theorem 16, p. 255]{1054.05039}}]
\label{sec:twist-dirac-oper-10}
Let \(P\) be a simple \(n\)-dimensional polytope. Then the following statements are equivalent:
\begin{enumerate}
\item \(P\) is even, i.e. each two-dimensional face of \(P\) has an even number of vertices.
\item The graph which consists of the vertices and edges of \(P\) is bipartite.
\item\label{item:3} The boundary complex \(\partial P^*\) of the dual polytope of \(P\) is balanced, i.e. there is a non-degenerate simplicial map \(\partial P^*\rightarrow \Delta^{n-1}\). Here \(\Delta^{n-1}\) denotes the \((n-1)\)-dimensional simplex.
\item \(P\) admits a facet coloring with \(n\) colors.
\end{enumerate}
\end{theorem}
Quasitoric manifolds whose orbit polytopes satisfy condition (\ref{item:3}) in the above theorem were described by Davis and Januszkiewicz \cite[p. 425-426]{0733.52006}.
They show that this is a very rich class of quasitoric manifolds.
We should note that the \(n\)-dimensional cube admits a facet coloring with \(n\) colors.
Moreover, a simple polytope belongs to this class if \(\partial P^*\) is the barycentric subdivision of a convex polytope.
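For the \(n\)-dimensional cube \(P=[0,1]^n\) such a coloring can be written down explicitly: if \(F_i^0=\{x_i=0\}\) and \(F_i^1=\{x_i=1\}\) denote the two facets orthogonal to the \(i\)-th coordinate axis, then
\begin{equation*}
f(F_i^0)=f(F_i^1)=i,\qquad i=1,\dots,n,
\end{equation*}
is a facet coloring with \(n\) colors, because the opposite facets \(F_i^0\) and \(F_i^1\) are disjoint.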
Now we construct a non-vanishing index \(\varphi^c(M;V,W)\) on every \(2n\)-dimensional quasitoric manifold \(M\) whose orbit polytope admits a facet coloring with \(n\) colors.
\begin{lemma}
\label{sec:twist-dirac-oper-6}
Let \(M\) be a quasitoric manifold of dimension \(2n\) over the polytope \(P\).
Assume that \(P\) admits a facet coloring with \(n\) colors.
Then there is a \(\text{Spin}^c\)-structure on \(M\) and a complex vector bundle \(V\) which is a sum of line bundles with \(c_1(V)= c_1^c(M)\) and \(p_1(M)=p_1(V)\), such that \(\varphi^c(M;V,0)\) does not vanish.
\end{lemma}
\begin{proof}
Let \(f:\mathfrak{F}\rightarrow \{1,\dots,n\}\) be a facet coloring of \(P\) with \(n\) colors.
Let \(V=\bigoplus_{i=1}^nL_{f^{-1}(i)}\), where \(L_{f^{-1}(i)}\) is the line bundle with \(c_1(L_{f^{-1}(i)})=\sum_{F_j\in f^{-1}(i)}\pm u_j\).
Then we have \(c_1(V) \equiv c_1(M)\mod 2\) and
\begin{equation*}
p_1(V)=\sum_{i=1}^n(\sum_{F_j\in f^{-1}(i)}\pm u_j)^2=\sum_{i=1}^n\sum_{F_j\in f^{-1}(i)} u_j^2=p_1(M).
\end{equation*}
Here the mixed terms \(u_ju_{j'}\), \(j\neq j'\), vanish, because facets of the same color do not intersect; moreover, \(\sum_j u_j^2=p_1(M)\) by the splitting of the stable tangent bundle.
Consider a \(\text{Spin}^c\)-structure on \(M\) with \(c_1^c(M)=c_1(V)\) and assume that the associated index \(\varphi^c(M;V,0)\) vanishes.
Then \(\varphi^c(M;V,0)\) may be calculated as:
\begin{align*}
0&= \langle Q_1(TM) \prod_{i=1}^n c_1(L_{f^{-1}(i)}) Q_2'(V)\hat{A}(M),[M]\rangle\\
&=\langle\prod_{i=1}^n c_1(L_{f^{-1}(i)}),[M]\rangle.
\end{align*}
Therefore we have
\begin{equation}
\label{eq:1}
\prod_{i=1}^n \sum_{F_j\in f^{-1}(i)}\pm u_j =0.
\end{equation}
Since the signs of the \(u_j\) may be changed freely, we get, by considering different \(\text{Spin}^c\)-structures and summing up in equation (\ref{eq:1}):
\begin{equation*}
\forall (F_{i_1},\dots,F_{i_n})\in f^{-1}(1)\times \dots\times f^{-1}(n) \quad\quad \prod_{j=1}^n u_{i_j}=0.
\end{equation*}
But there is at least one tuple \((F_{i_1},\dots,F_{i_n})\in \mathfrak{F}\times \dots\times \mathfrak{F}\) such that \(\bigcap_{j=1}^nF_{i_j}\) is a vertex of \(P\), and for this tuple we have \(\prod_{j=1}^n u_{i_j}\neq 0\).
Because \(f\) is a facet coloring, the facets \(F_{i_1},\dots,F_{i_n}\) meeting in this vertex carry pairwise distinct colors, so that, after reordering, the tuple lies in \(f^{-1}(1)\times \dots\times f^{-1}(n)\).
Therefore we get a contradiction.
\end{proof}
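To illustrate the construction in the simplest case, let \(M=\prod_{i=1}^n S^2\) over the \(n\)-cube with the coloring which pairs opposite facets. Then \(c_1(L_{f^{-1}(i)})=\pm u_i^0\pm u_i^1\), where \(u_i^0\) and \(u_i^1\) are both pullbacks of a generator \(x_i\) of \(H^2(S^2;\mathbb{Z})\) from the \(i\)-th factor, and for the choice of signs in which both are positive we get
\begin{equation*}
\varphi^c\Big(\prod_{i=1}^n S^2;V,0\Big)=\Big\langle \prod_{i=1}^n 2x_i,\Big[\prod_{i=1}^n S^2\Big]\Big\rangle=2^n\neq 0.
\end{equation*}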
As a consequence of Lemma~\ref{sec:twist-dirac-oper-6} and the corollaries at the end of Section~\ref{sec:twisted} we get the following corollaries.
\begin{cor}
\label{sec:twist-dirac-oper-16}
Let \(M\) be a \(2n\)-dimensional quasitoric manifold. If the orbit polytope of \(M\) admits a facet coloring with \(n\) colors, then we have \(N(M)\leq 3n\) with equality holding if and only if \(M=\prod S^2\).
\end{cor}
\begin{proof}
This follows directly from Lemma~\ref{sec:twist-dirac-oper-6} and Corollary~\ref{sec:twist-dirac-oper-11}.
\end{proof}
\begin{cor}
\label{sec:quas-manif-over-1}
Let \(M\) be a quasitoric manifold over the \(n\)-dimensional cube.
Then the only simple simply connected compact Lie-groups which can act almost effectively on \(M\) are \(SU(2)\) and \(\text{Spin}(5)\).
\end{cor}
\begin{proof}
By Lemma~\ref{sec:twist-dirac-oper-6}, there is a twisted Dirac operator on \(M\), whose index does not vanish.
By Corollary \ref{sec:twist-dirac-oper-7}, the order of the Weyl-group of a simple simply connected compact Lie-group, which acts on \(M\), divides the Euler-characteristic of \(M\).
Because \(\chi(M)=2^n\) and \(SU(2)\) and \(\text{Spin}(5)\) are the only simple simply connected compact Lie-groups \(G\) with \(\#W(G)|2^n\) \cite[p. 74-84]{0708.17005}, the statement follows.
\end{proof}
In the proof of the next corollary of Lemma~\ref{sec:twist-dirac-oper-6} we construct quasitoric manifolds with low semi-simple symmetry degree.
\begin{cor}
\label{sec:twist-dirac-oper-15}
In each dimension greater or equal to four, there are infinitely many quasitoric manifolds \(M\) with \(N^{ss}(M)\leq 3\).
\end{cor}
\begin{proof}
Let \(M_1\) be a four-dimensional quasitoric manifold over a polygon with \(6k\) vertices, \(k\in \mathbb{N}\).
Moreover, let \(M_2\) be a \(2n\)-dimensional quasitoric manifold over the \(n\)-cube.
Then the orbit polytopes of \(M_1\) and \(M_2\) admit facet colorings with \(2\) and \(n\) colors, respectively.
Therefore the orbit polytope of \(M_1\times M_2\) admits a facet coloring with \(n+2\) colors.
Hence, by Lemma~\ref{sec:twist-dirac-oper-6}, there is a non-vanishing index \(\varphi^c(M_1\times M_2;V,0)\) on \(M_1\times M_2\).
By Lemma~\ref{sec:prod-conn-sums} applied in the case \(V_1=V_2=V\) and \(W_1=W_2=0\), it follows that
\begin{equation*}
M=(M_1\times M_2)\#(M_1\times M_2)
\end{equation*}
has a non-vanishing index.
Because \(\chi(M)=2\cdot 6k \cdot 2^n-2=2(6k\cdot 2^n-1)\) is congruent to \(2\) modulo \(4\) and to \(1\) modulo \(3\), it is divisible by neither three nor four. Hence it follows from Corollary~\ref{sec:twist-dirac-oper-7} and \cite[p. 74-84]{0708.17005} that the only compact simply connected semi-simple Lie-group which can act almost effectively on \(M\) is \(SU(2)\).
Because connected sums of quasitoric manifolds are quasitoric, the statement follows.
\end{proof}
The connected sum of two quasitoric manifolds is again a quasitoric manifold.
Therefore Lemma~\ref{sec:twist-dirac-oper-6} and the following result may be used to construct more quasitoric manifolds with non-vanishing indices.
\begin{lemma}
\label{sec:prod-conn-sums-1}
Let \(M_1,M_2\) be quasitoric manifolds of dimension \(2n\geq 4\). Assume that there are vector bundles \(V_1\rightarrow M_1\) and \(W_1\rightarrow M_1\) as in Lemma \ref{sec:prod-conn-sums} and \(b_2(M_2)\leq \dim V_1\) or \(M_2\) is a \(\text{Spin}\)-manifold.
Then there are sums of line bundles \(V,W\) over \(M_1\# M_2\) and a \(\text{Spin}^c\)-structure on \(M_1 \# M_2\) with \(c_1(V)=c_1^c(M_1\# M_2)\), \(c_1(W)\equiv 0 \mod 2\), \(p_1(V+W-TM_1\# M_2)=0\), such that
\begin{equation*}
\varphi^c(M_1\# M_2;V,W)=2^k\varphi^c(M_1;V_1,W_1)
\end{equation*}
for some \(k\geq 0\).
\end{lemma}
\begin{proof}
Let \(L_i\rightarrow M_2\), \(i=1,\dots,m\), be line bundles such that the Chern-classes of the \(L_i\) are the Poincar\'e-duals of the characteristic submanifolds of \(M_2\).
Then we have \(TM_2 \oplus \mathbb{R}^{2m-2n}\cong\bigoplus_{i=1}^m L_i\).
We order the \(L_i\) in such a way that
\(c_1(L_1),\dots,c_1(L_{b_2(M_2)})\) form a basis of \(H^2(M_2;\mathbb{Z})\).
Therefore there are \(a_1,\dots,a_{b_2(M)}\in \{0,1\}\) such that
\(c_1(\bigoplus_{i=1}^{b_2(M)} a_i L_i) \equiv c_1(M_2)\mod 2\) and \(W_2=\bigoplus_{i=1}^{b_2(M)} (1-a_i) L_i\oplus \bigoplus_{i=b_2(M)+1}^m L_i\) is a \(\text{Spin}\) bundle.
Consider a \(\text{Spin}^c\)-structure on \(M_2\) such that \(c_1^c(M_2)=c_1(\bigoplus_{i=1}^{b_2(M)} a_i L_i)\).
By Theorem~\ref{sec:twist-dirac-oper-14} we have \(\varphi^c(M_2;V_2,W_2)=0\), where \(V_2=\bigoplus_{i=1}^{b_2(M)} a_i L_i\).
Therefore, by Lemma \ref{sec:prod-conn-sums} the statement follows.
\end{proof}
In dimensions divisible by four we can use Lemma~\ref{sec:prod-conn-sums-1} to improve the results of Corollary \ref{sec:twist-dirac-oper-15} and prove that there are quasitoric manifolds on which no semi-simple compact Lie-group can act effectively.
\begin{cor}
\label{sec:twist-dirac-oper-13}
In dimensions \(4k\), \(k>0\), there are infinitely many quasitoric manifolds \(M\) with \(N^{ss}(M)=0\).
\end{cor}
\begin{proof}
Let \(M'\) be as in Lemma \ref{sec:twist-dirac-oper-6} with \(\dim M'=2n=4k\).
Then, by an iterated application of Lemma~\ref{sec:prod-conn-sums-1}, there are non-vanishing indices \(\varphi^c(M;V,W)\) on \(M=M'\# l \mathbb{C} P^{2k}\) with \(l\in \mathbb{N}\).
Because connected sums of quasitoric manifolds are quasitoric, \(M\) is quasitoric.
Since a bipartite regular graph has an even number of vertices, it follows from Theorem \ref{sec:twist-dirac-oper-10} that the Euler-characteristic of \(M'\) is even.
Therefore \(\chi(M)=\chi(M')+l\chi(\mathbb{C} P^{2k}) -2l=\chi(M')+l(2k-1)\) is odd if \(l\) is odd.
Because the order of the Weyl-group of a semi-simple compact connected Lie-group is even \cite[p. 74-84]{0708.17005},
the statement follows from Corollary~\ref{sec:twist-dirac-oper-7}.
\end{proof}
\begin{remark}
Non-singular projective toric varieties are examples of quasitoric manifolds.
If, in the situation of the proof of Corollary~\ref{sec:twist-dirac-oper-13}, \(M'\) is such a variety, then we can construct infinitely many non-singular toric varieties \(M\) with \(N^{ss}(M)=0\) by blowing up isolated fixed points in \(M'\) repeatedly, i.e. by taking connected sums with several copies of \(\overline{\mathbb{C} P^{2k}}\).
\end{remark}
\section{Quasitoric manifolds admitting low cohomogeneity actions}
\label{sec:cohom1}
In this section we study quasitoric manifolds which admit a cohomogeneity one or zero action of a compact connected Lie-group and have a non-zero index \(\varphi^c(M;V,W)\).
To do so we need the notion of spaces of \(q\)-type which was introduced by Hauschild \cite{0623.57024}.
A space of \(q\)-type is defined to be a topological space \(X\) satisfying the following cohomological properties:
\begin{itemize}
\item The cohomology ring \(H^*(X;\mathbb{Q})\) is generated as a \(\mathbb{Q}\)-algebra by elements of degree two, i.e. \(H^*(X;\mathbb{Q})=\mathbb{Q}[x_1,\dots,x_n]/I_0\) and \(\deg x_i=2\).
\item The defining ideal \(I_0\) contains a definite quadratic form \(Q\).
\end{itemize}
Examples of spaces of \(q\)-type are homogeneous spaces of the form \(G/T\) where \(G\) is a semi-simple compact connected Lie-group and \(T\) a maximal torus of \(G\).
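Another simple example, which can be checked directly, is a product of two-spheres: its rational cohomology is
\begin{equation*}
H^*\Big(\prod_{i=1}^n S^2;\mathbb{Q}\Big)=\mathbb{Q}[x_1,\dots,x_n]/(x_1^2,\dots,x_n^2), \qquad \deg x_i=2,
\end{equation*}
and the defining ideal contains the definite quadratic form \(Q=x_1^2+\dots+x_n^2\).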
Quasitoric manifolds of \(q\)-type were studied in \cite{chapter6}.
For the proof of the main result of this section we need the following lemma.
\begin{lemma}
\label{sec:quas-manif-with}
Let \(F\rightarrow E\rightarrow B\) be a fibration such that \(\pi_1(B)\) acts trivially on \(H^*(F;\mathbb{Q})\). If \(F\) and \(B\) are spaces of \(q\)-type then \(E\) is a space of \(q\)-type.
\end{lemma}
\begin{proof}
Because \(H^*(F;\mathbb{Q})\) and \(H^*(B;\mathbb{Q})\) are generated by their degree two parts, it follows from the Serre spectral sequence that \(H^*(E;\mathbb{Q})\) is generated by its degree two part.
Let \(x_1,\dots,x_m\) be a basis of \(H^2(F;\mathbb{Q})\) and \(y_1,\dots,y_{m'}\) be a basis of \(H^2(B;\mathbb{Q})\).
Then there is a basis \(X_1,\dots,X_m,Y_1,\dots,Y_{m'}\) of \(H^2(E;\mathbb{Q})\) such that \(\iota^*X_i=x_i\), \(i=1,\dots,m\), and \(\pi^*y_i=Y_i\), \(i=1,\dots,m'\).
Here \(\iota:F\rightarrow E\) is the inclusion and \(\pi:E\rightarrow B\) is the projection.
Let \(Q_F\) and \(Q_B\) be positive definite quadratic forms such that \(Q_F(x_1,\dots,x_m)=0\in H^4(F;\mathbb{Q})\) and \(Q_B(y_1,\dots,y_{m'})=0\in H^4(B;\mathbb{Q})\).
Then there are \(\alpha_{11},\dots,\alpha_{mm'}\in \mathbb{Q}\) and \(\beta_1,\dots,\beta_{m'}\in \mathbb{Q}\) such that for all \(\lambda\in \mathbb{Q}\):
\begin{align*}
Q_\lambda(X_1,\dots,X_m,Y_1,\dots,Y_{m'})&= Q_F(X_1,\dots,X_m) + \lambda Q_B(Y_1,\dots,Y_{m'})\\ & + \sum_{i,j} \alpha_{ij} X_iY_j+ \sum_i \beta_i Y_i^2 \\ &=0\in H^4(E;\mathbb{Q}).
\end{align*}
We claim that \(Q_\lambda\) is positive definite for sufficiently large \(\lambda\).
To see this, it is sufficient to show that \(Q_\lambda(a)>0\) for all \(a\in S^{m+m'-1}\subset \mathbb{R}^{m+m'}\).
We may write \(a=\gamma_1 x + \gamma_2 y\) with \(x\in \mathbb{R}^m\), \(y\in \mathbb{R}^{m'}\), \(\|x\|=\|y\|=1\) and \(\gamma_1^2+\gamma_2^2=1\).
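With this decomposition the \(\lambda\)-independent part of \(Q_\lambda\) reads
\begin{equation*}
Q_\lambda(a)-\lambda Q_B(a)=\gamma_1^2\, Q_F(x)+\gamma_1\gamma_2\sum_{i,j}\alpha_{ij}x_iy_j+\gamma_2^2\sum_i \beta_i y_i^2,
\end{equation*}
which tends to \(Q_F(x)>0\) as \(\gamma_2\to 0\); this is the source of the \(\epsilon\) in the next step.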
Because \(Q_F\) is positive definite and \(S^{m+m'-1}\cap \mathbb{R}^m\) is compact, there is an \(\epsilon >0\) such that \(Q_\lambda(a) - \lambda Q_B(a)>0\) for all \(\gamma_2<\epsilon\).
Because \(Q_B\) is positive definite and \(S^{m+m'-1}\cap \{\gamma_2\geq \epsilon\}\) is compact, we may take \(\lambda\) sufficiently large such that
\begin{align*}
\lambda \min \{Q_B(a);\;a\in S^{m+m'-1}\cap \{\gamma_2\geq \epsilon\}\}& > -
\min \{Q_\lambda(a)-\lambda Q_B(a);\\
&\quad\quad a\in S^{m+m'-1}\cap \{\gamma_2\geq \epsilon\}\}.
\end{align*}
Therefore \(Q_\lambda\) is positive definite for sufficiently large \(\lambda\).
\end{proof}
Now we can prove the following theorem.
\begin{theorem}
\label{sec:quas-manif-with-1}
Let \(M\) be a quasitoric manifold on which a compact connected Lie-group \(G\) acts such that \(\dim M/G\leq 1\).
Assume that \(M\) has a non-vanishing index \(\varphi^c(M;V,W)\) with \(V\), \(W\) as in Theorem~\ref{sec:twist-dirac-oper-8}.
Then \(M=\prod S^2\) if \(\dim M/G=0\) or \(M\) is an \(S^2\)-bundle with structure group a maximal torus of \(G\) over \(\prod S^2\) if \(\dim M/G=1\).
\end{theorem}
\begin{proof}
If \(\dim M/G=0\) then \(M\) is a homogeneous space \(G/H\).
Because \(\chi(M)\neq 0\), \(H\) must have maximal rank in \(G\).
Therefore we may assume that \(G\) is semi-simple.
Hence, it follows from Corollary~\ref{sec:twist-dirac-oper-5} that \(H\) is a maximal torus of \(G\).
As in section 3 of \cite{chapter6}, one sees that \(M=\prod S^2\).
Now assume that \(\dim M/G=1\).
Because \(\chi(M)\neq 0\), it follows from Corollary~\ref{sec:twist-dirac-oper-5} that there is an orbit of type \(G/T\) with \(T\) a maximal torus of \(G\).
Because \(\dim G/T\) is even this must be a non-principal orbit.
Hence, the orbit space \(M/G\) is homeomorphic to the compact interval \([0,1]\) and there is exactly one other non-principal orbit.
Let \(S\subset G\) be a principal isotropy group.
Then we may assume \(S\subset T\).
Moreover, \(T/S\) is a sphere.
Therefore \(S\) has codimension one in \(T\).
Let \(K^+\) be the isotropy group of the other non-principal orbit.
Then \(K^+/S\) is a sphere and the identity component of \(K^+\) is a torus by Theorem \ref{sec:twist-dirac-oper-8}.
Therefore there are two cases
\begin{itemize}
\item \(\dim K^+=\dim S\) and \(K^+/S=\mathbb{Z}_2\).
\item \(K^+\) is a maximal torus of \(G\).
\end{itemize}
In the first case, we have by the Seifert-van Kampen theorem
\begin{equation*}
\pi_1(M)= \pi_1(G/T)*_{\pi_1(G/S)}\pi_1(G/K^+)= \pi_1(G/K^+)/\pi_1(G/S)=\mathbb{Z}_2
\end{equation*}
because \(G/S\rightarrow G/K^+\) is a twofold covering.
But \(M\) is simply connected because it is quasitoric.
So this case does not occur.
Now as in the remark before Lemma 5.2 of \cite{chapter6} one sees that \(M\) is an \(S^2\)-bundle with structure group \(T\) over \(G/T\).
By Lemma~\ref{sec:quas-manif-with}, \(M\) is a quasitoric manifold which is of \(q\)-type.
Therefore it follows from Theorem 5.3 of \cite{chapter6} that \(M\) is an \(S^2\)-bundle over \(\prod S^2\).
\end{proof}
\section{Quasitoric manifolds with non-vanishing indices and $N(M)\geq 3n-4$}
\label{sec:cube}
By Corollary~\ref{sec:twist-dirac-oper-11} the symmetry degree of a quasitoric manifold \(M\) with a non-trivial index \(\varphi^c(M;V,W)\) is bounded from above by \(3n\).
In this section we classify those \(2n\)-dimensional quasitoric manifolds which admit a twisted Dirac-operator with a non-vanishing index and have degree of symmetry greater than or equal to \(3n-4\).
For the statement of our first theorem we need the notion of a torus manifold.
A torus manifold is a \(2n\)-dimensional closed connected orientable smooth manifold \(M\) with an effective smooth action of an \(n\)-dimensional torus \(T\), such that \(M^T\) is non-empty.
\begin{theorem}
\label{sec:quas-manif-over}
Let \(M\) be a \(2n\)-dimensional quasitoric manifold with non-vanishing index \(\varphi^c(M;V,W)\) with \(V\), \(W\) as in Theorem~\ref{sec:twist-dirac-oper-8} and \(G\) be a compact connected Lie-group of rank \(n\), which acts almost effectively on \(M\).
Then \(G\) has a covering group of the form \(\prod SU(2)\times T^{l_0}\).
Moreover, \(M\) is a fiber bundle with fiber a \(2l_0\)-dimensional torus manifold over \(\prod S^2\).
\end{theorem}
\begin{proof}
\(M\) is a torus manifold with \(G\)-action in the sense of \cite{torus} and \(H^*(M;\mathbb{Z})\) is generated by \(H^2(M;\mathbb{Z})\). Therefore \(G\) has a covering group of the form \(\tilde{G}=\prod_i SU(l_i+1)\times T^{l_0}\) by Remark 2.9 of \cite{torus}.
Let \(T\) be a maximal torus of \(G\) and \(x\in M^T\). Then, by Lemmas 3.1 and 3.4 of \cite{torus}, we have
\begin{align*}
SU(l_i+1)_{x}&=S(U(l_i)\times U(1)) & \text{or}&& SU(l_i+1)_{x}&=SU(l_i+1).
\end{align*}
Therefore by Corollary~\ref{sec:twist-dirac-oper-5}, we have \(l_i=1\).
Moreover, no factor \(SU(l_i+1)\) has a fixed point in \(M\).
Therefore the second statement follows from an iterated application of Corollary 5.6 of \cite{torus}.
\end{proof}
The next theorem is the classification announced in the introduction of this section.
\begin{theorem}
\label{sec:quas-manif-with-2}
Let \(M\) be a \(2n\)-dimensional quasitoric manifold with non-vanishing index \(\varphi^c(M;V,W)\) with \(V\), \(W\) as in Theorem~\ref{sec:twist-dirac-oper-8}. If \(N(M)\geq 3n-4\), then \(M\) is diffeomorphic to one of the manifolds in the following list.
\begin{center}
\begin{tabular}{|c|c|}
\(N(M)\) & \(M\)\\\hline\hline
\(3n\) & \(\prod S^2\)\\\hline
\(3n-1\)& impossible\\\hline
\(3n-2\)& \(S^2\)-bundle over \(\prod S^2\)\\\hline
\(3n-3\)& impossible\\\hline
\(3n-4\)& \(N\)-bundle over \(\prod S^2\) with \(N\) a quasitoric manifold, \(\dim N=4\) \\
\end{tabular}
\end{center}
\end{theorem}
\begin{proof}
The statement about the quasitoric manifolds with \(N(M)=3n\) follows from Corollary~\ref{sec:twist-dirac-oper-11}.
Therefore assume that \(M\) is a \(2n\)-dimensional quasitoric manifold with non-vanishing index \(\varphi^c(M;V,W)\) and \(G\) is a compact connected Lie-group of dimension \(3n-1\), which acts effectively on \(M\).
Let \(T\) be a maximal torus of \(G\).
Because, by Corollary~\ref{sec:twist-dirac-oper-3},
\begin{equation}
\label{eq:2}
\dim G -\dim T \leq 2n,
\end{equation}
we have \(\dim T = n-1\) or \(\dim T=n\).
If \(\dim T=n\), then \(\dim G-\dim T=2n-1\) is odd, which is impossible because the dimension of a compact Lie-group minus its rank is always even, the roots coming in pairs \(\pm\alpha\).
But \(\dim T=n-1\) is impossible by Corollary~\ref{sec:twist-dirac-oper-3}.
Let \(M\), \(G\), \(T\) be as above, but with \(\dim G=3n-2\).
By (\ref{eq:2}), we have \(\dim T=n-2,n-1,n\).
As in the first case one sees that \(\dim T=n-2,n-1\) are impossible.
If \(\dim T=n\), we see with Theorem~\ref{sec:quas-manif-over} that \(M\) is an \(S^2\)-bundle over \(\prod S^2\).
Let \(M\), \(G\), \(T\) be as above, but with \(\dim G = 3n-3\).
By (\ref{eq:2}), we have \(\dim T= n-3,n-2,n-1,n\).
As above one sees that \(\dim T = n-3,n-2,n\) are impossible.
Therefore we have \(\dim T=n-1\).
Because \(\chi(M)\neq 0\), there is by Corollary~\ref{sec:twist-dirac-oper-5} an orbit of type \(G/T\) which has dimension \(2n-2\).
Therefore the principal orbit type has dimension \(2n-2\) or \(2n-1\).
In the first case the principal orbit type is \(G/T\) and by Corollary~\ref{sec:twist-dirac-oper-5} there is no exceptional or singular orbit.
Hence, \(M\) is a fiber bundle over a simply connected surface with fiber \(G/T\) and structure group \(N_GT/T\).
Since \(N_GT/T\) is finite, we have \(M=S^2\times G/T\).
Therefore we have \(N(M)\geq 3 + \dim G =3n\).
Now assume that the principal orbit \(G/S\) has codimension one in \(M\).
Then, by Theorem~\ref{sec:quas-manif-with-1}, \(M\) is an \(S^2\)-bundle with structure group a torus over \(\prod_{i=1}^{n-1} S^2\).
Therefore we have \(N(M)\geq 3n-2\).
Now let \(M\), \(G\), \(T\) be as above, but with \(\dim G=3n-4\).
By (\ref{eq:2}), we have \(\dim T= n-4,n-3,n-2,n-1,n\).
As above one sees that \(\dim T= n-4,n-3,n-1\) are impossible.
Therefore we have \(\dim T=n-2,n\).
At first assume that \(\dim T=n\).
Then \(M\) is a torus manifold with \(G\)-action.
By Theorem~\ref{sec:quas-manif-over} we have \(G=\prod_{i=1}^k SU(2)\times T^{l_0}\) with \(3n-4 = 3k+ l_0\) and \(n=k+l_0\).
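Eliminating \(k\) from these two equations gives
\begin{equation*}
3n-4=3(n-l_0)+l_0=3n-2l_0 \quad\Longrightarrow\quad l_0=2.
\end{equation*}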
Therefore \(M\) is a fiber bundle with fiber a four-dimensional torus manifold over \(\prod_{i=1}^kS^2\).
By Lemma 5.17 of \cite{torus}, the fiber of this bundle is simply connected.
Therefore it is quasitoric because every four-dimensional simply connected torus manifold is quasitoric \cite[Section 5]{0216.20202}.
Now assume that \(\dim T = n-2\).
Then we have \(\dim G/T=2n-2\).
Therefore the principal orbit type of the \(G\)-action on \(M\) has dimension \(2n-2\) or \(2n-1\).
In both cases one sees as in the case \(\dim G= 3n-3\) that \(N(M)\geq 3n-2\).
\end{proof}
\section{Highly symmetric quasitoric manifolds}
\label{sec:highly}
In this section we show that \(\mathbb{C} P^n\) is the most symmetric quasitoric manifold of dimension \(2n\).
The main result of this section is the following theorem.
\begin{theorem}
\label{sec:highly-symm-quas-7}
Let \(M\) be a \(2n\)-dimensional quasitoric manifold. Then we have
\begin{equation*}
N(M)\leq n^2+2n
\end{equation*}
with equality only holding for \(M=\mathbb{C} P^n\).
\end{theorem}
The proof of this theorem is subdivided into several lemmas.
We prove it separately in each dimension.
We begin with dimensions \(2n\geq 20\).
\begin{lemma}
\label{sec:highly-symm-quas-5}
Let \(M\) be a quasitoric manifold of dimension \(2n\geq 20\) with \(M\neq \mathbb{C} P^n\). Then we have \(N(M)\leq n^2+n+1<n^2+2n=N(\mathbb{C} P^n)\).
\end{lemma}
\begin{proof}
It was shown by Ku, Mann, Sicks and Su \cite[Theorem 1, p.141]{0191.54401} that if \(H^\alpha(M;\mathbb{Q})\neq 0\) and \(M\neq \mathbb{C} P^n\), then
\begin{equation*}
N(M)\leq \frac{\alpha(\alpha+1)}{2} + \frac{(2n-\alpha)(2n-\alpha +1)}{2}.
\end{equation*}
The statement follows from this result applied in the cases \(\alpha =n\) or \(\alpha=n-1\).
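Explicitly, since a quasitoric manifold has non-trivial rational cohomology in every even degree up to its dimension, we may take \(\alpha=n\) for even \(n\) and \(\alpha=n-1\) for odd \(n\), which yields
\begin{align*}
\alpha=n:&\quad N(M)\leq \frac{n(n+1)}{2}+\frac{n(n+1)}{2}=n^2+n,\\
\alpha=n-1:&\quad N(M)\leq \frac{(n-1)n}{2}+\frac{(n+1)(n+2)}{2}=n^2+n+1.
\end{align*}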
\end{proof}
Now we turn to the low dimensional case \(2n\leq 8\).
\begin{lemma}
\label{sec:highly-symm-quas-4}
Let \(M\) be a quasitoric manifold of dimension \(2n\), \(n\leq 4\), and \(G\) a compact connected Lie-group which acts almost effectively on \(M\).
Then \(\dim G \leq n^2+2n\) and equality only holds for \(M=\mathbb{C} P^n\) and \(\tilde{G}=SU(n+1)\).
\end{lemma}
\begin{proof}
Because \(M\) has non-zero Euler-characteristic, we have \(\rank G \leq n\).
If \(\rank G=n\), it follows from Remark 2.9 of \cite{torus} that \(G\) has a covering group of the form \(\tilde{G}=\prod_{i=1}^k SU(l_i+1)\times T^{l_0}\) with \(\sum_{i=0}^k l_i =n\).
Therefore we have \(\dim G\leq n^2+2n\) with equality holding if and only if \(\tilde{G}=SU(n+1)\).
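Indeed, setting \(s=\sum_{i=1}^k l_i\), so that \(l_0=n-s\), one computes
\begin{equation*}
\dim \tilde{G}=\sum_{i=1}^k \left(l_i^2+2l_i\right)+l_0\leq s^2+2s+(n-s)=s^2+s+n\leq n^2+2n,
\end{equation*}
with equality throughout if and only if \(k=1\), \(l_1=n\) and \(l_0=0\), i.e. \(\tilde{G}=SU(n+1)\).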
In the latter case it follows from Corollary 8.9 of \cite{torus} that \(M=\mathbb{C} P^n\).
Now assume that \(\rank G\leq n-1\).
The highest dimensional Lie-groups of rank \(k\) are as follows
\begin{center}
\begin{tabular}{|c|c|c|c|}
\(k\)&\(G\)&\(\dim G\)&\((k+1)^2+2(k+1)\)\\\hline\hline
\(1\)&\(\text{Spin}(3)\) & \(3\)& \(8\)\\\hline
\(2\)&\(G_2\)&\(14\)&\(15\)\\\hline
\(3\)&\(\text{Spin}(7)\)&\(21\)&\(24\)
\end{tabular}
\end{center}
Therefore the statement follows.
\end{proof}
Now we turn to the middle dimensions \(10\leq 2n\leq 18\).
Those \(2n\)-dimensional simply connected manifolds on which compact connected non-abelian Lie-groups of rank \(n\) act were classified in \cite{torus}.
Therefore we first focus on actions of those groups whose rank is smaller than \(n\).
As a first step we show that if a high-dimensional Lie-group acts on a quasitoric manifold of these dimensions, then its simply connected covering group has a big simple factor which is isomorphic to \(\text{Spin}(k)\).
\begin{lemma}
\label{sec:highly-symm-quas}
Let \(M\) be a manifold of dimension \(2n\), \(5\leq n\leq 9\), and \(G\) a compact connected Lie-group with \(\rank G \leq n-1\) and \(\dim G \geq n^2+2n\) which acts almost effectively on \(M\). Then \(G\) has a covering group of the form
\begin{equation*}
\tilde{G}=\text{Spin}(k)\times G'
\end{equation*}
with \(k=9\) if \(n=5\) and
\begin{equation*}
k\geq
\begin{cases}
11 &\text{if } n=6\\
12 &\text{if } n=7\\
13 &\text{if } n=8\\
15 &\text{if } n=9.
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
Let \(G/H\) be a principal orbit type of the \(G\)-action on \(M\).
Then
\begin{equation*}
\dim G \geq n^2+2n\geq \left(\frac{n}{2}+1\right)\dim M \geq\left(\frac{n}{2}+1\right)\dim G/H .
\end{equation*}
Because \(\frac{n}{2}+1\geq \frac{14}{4}\), we may apply Proposition B of \cite[p. 135]{0191.54401} with \(r=\frac{n}{2}+1-\epsilon\).
Therefore \(\tilde{G}\) is of the form
\begin{align*}
&\text{Spin}(k)\times G'& k&\geq n+2, \text{ or}\\
&SU(k)\times G'&k&\geq n+1,\text{ or}\\
&Sp(k)\times G'&k&\geq n.
\end{align*}
Because \(\rank G \leq n-1\), the last two cases do not occur.
It remains to prove that the lower bound for \(k\) given in the lemma holds.
This follows from an inspection of the dimensions of those groups which have \(\text{Spin}(k)\), \(k\geq n+2\), as a simple factor and rank bounded from above by \(n-1\).
These groups are listed in the following tables.
Here we have omitted those groups which are not isomorphic to \(\text{Spin}(k)\) and for which the \(\text{Spin}(k)\)-factor alone has a dimension greater than or equal to \(n^2+2n\).
If the \(\text{Spin}(k)\)-factor has a lower dimension, we have only listed those groups which have maximal dimension among those groups which have this \(\text{Spin}(k)\)-factor.
If \(n=5\) we have
\begin{center}
\begin{tabular}{|c|c|c|}
\(G\)&\(\dim G\)&\(n^2+2n=35\)\\\hline\hline
\(\text{Spin}(9)\)&\(36\)&\\\hline
\(\text{Spin}(8)\)&\(28\)&\\\hline
\(\text{Spin}(3)\times \text{Spin}(7)\)&\(24\)&
\end{tabular}
\end{center}
For \(n=6\) we have
\begin{center}
\begin{tabular}{|c|c|c|}
\(G\)&\(\dim G\)&\(n^2+2n=48\)\\\hline\hline
\(\text{Spin}(11)\)&\(55\)&\\\hline
\(\text{Spin}(10)\)&\(45\)&\\\hline
\(\text{Spin}(3)\times \text{Spin}(9)\)&\(39\)&\\
\end{tabular}
\end{center}
For \(n=7\) we have
\begin{center}
\begin{tabular}{|c|c|c|}
\(G\)&\(\dim G\)&\(n^2+2n=63\)\\\hline\hline
\(\text{Spin}(13)\)&\(78\)&\\\hline
\(\text{Spin}(12)\)&\(66\)&\\\hline
\(\text{Spin}(3)\times \text{Spin}(11)\)&\(58\)&\\\hline
\(G_2\times \text{Spin}(9)\)&\(50\)&
\end{tabular}
\end{center}
For \(n=8\) we have
\begin{center}
\begin{tabular}{|c|c|c|}
\(G\)&\(\dim G\)&\(n^2+2n=80\)\\\hline\hline
\(\text{Spin}(15)\)&\(105\)&\\\hline
\(\text{Spin}(14)\)&\(91\)&\\\hline
\(\text{Spin}(3)\times \text{Spin}(13)\)&\(81\)&\\\hline
\(\text{Spin}(3)\times \text{Spin}(12)\)&\(69\)&\\\hline
\(G_2\times \text{Spin}(11)\)&\(69\)&
\end{tabular}
\end{center}
For \(n=9\) we have
\begin{center}
\begin{tabular}{|c|c|c|}
\(G\)&\(\dim G\)&\(n^2+2n=99\)\\\hline\hline
\(\text{Spin}(17)\)&\(136\)&\\\hline
\(\text{Spin}(16)\)&\(120\)&\\\hline
\(\text{Spin}(15)\)&\(105\)&\\\hline
\(\text{Spin}(3)\times \text{Spin}(14)\)&\(94\)&\\\hline
\(G_2\times \text{Spin}(13)\)&\(92\)&\\\hline
\(\text{Spin}(7)\times\text{Spin}(11)\)&\(76\)&\\
\end{tabular}
\end{center}
Therefore the statement about \(k\) follows.
\end{proof}
The next step is to identify the identity component of the principal isotropy group of the \(\text{Spin}(k)\)-action on \(M\).
\begin{lemma}
\label{sec:highly-symm-quas-3}
Let \(M\), \(G\) be as in Lemma~\ref{sec:highly-symm-quas}.
If \(n=5\), then also assume that \(\chi(M)\neq 0\).
Then the identity component of the principal isotropy group of the \(\text{Spin}(k)\)-action on \(M\) is \(\text{Spin}(k-1)\).
\end{lemma}
\begin{proof}
If \(6\leq n\leq 9\), one can argue as in the proof of the main lemma of \cite[p.135]{0191.54401} in case III.
Therefore assume that \(n=5\).
Then we have \(k=9\).
Because \(\chi(M)\neq 0\), there is a point \(x\in M\) such that \(\text{Spin}(9)_x\) has maximal rank in \(\text{Spin}(9)\).
By the classification of maximal rank subgroups of \(\text{Spin}(9)\) in \cite{0034.30701} and the dimension assumption, it follows that \(\text{Spin}(9)_x^0=\text{Spin}(8)\) or \(\text{Spin}(9)_x^0=\text{Spin}(9)\).
If \(\text{Spin}(9)_x^0=\text{Spin}(8)\), then the orbit of \(x\) has codimension two in \(M\). Because \(\text{Spin}(8)\) has no non-trivial \(2\)-dimensional representation, it follows that \(\text{Spin}(8)\) is the identity component of a principal isotropy group.
If \(\text{Spin}(9)_x^0=\text{Spin}(9)\), then \(T_xM\) is a \(10\)-dimensional representation of \(\text{Spin}(9)\).
Therefore it is the sum of the standard \(9\)-dimensional representation of \(\text{Spin}(9)\) and the trivial one dimensional representation.
Hence, the statement follows in this case.
\end{proof}
As a consequence of Lemmas~ \ref{sec:highly-symm-quas} and \ref{sec:highly-symm-quas-3} we get the following lemma which implies Theorem~\ref{sec:highly-symm-quas-7} in the remaining dimensions.
\begin{lemma}
\label{sec:highly-symm-quas-6}
Let \(M\) be a quasitoric manifold of dimension \(2n\), \(5\leq n\leq 9\), and \(G\) be a compact connected Lie-group which acts almost effectively on \(M\).
Then \(\dim G\leq n^2+2n\) and equality only holds for \(M=\mathbb{C} P^n\) and \(\tilde{G}=SU(n+1)\).
\end{lemma}
\begin{proof}
Since \(M\) has non-zero Euler-characteristic, we have \(\rank G\leq n\).
In the case \(\rank G=n\), one can argue as in the proof of Lemma~\ref{sec:highly-symm-quas-4}.
Therefore we may assume that \(\rank G\leq n-1\).
Assume that \(\dim G\geq n^2+2n\).
By Lemmas \ref{sec:highly-symm-quas} and \ref{sec:highly-symm-quas-3}, there is an almost effective action of \(\text{Spin}(k)\) on \(M\) such that \(\dim M/\text{Spin}(k)\leq 4\) and all orbits are acyclic over \(\mathbb{Q}\) up to dimension \(7\).
By the Vietoris-Begle-mapping theorem, it follows that
\begin{equation*}
0\neq H^6(M;\mathbb{Q})\cong H^6(M/\text{Spin}(k);\mathbb{Q})=0.
\end{equation*}
This is a contradiction: the first group is non-trivial because a quasitoric manifold has non-vanishing rational cohomology in every even degree up to its dimension, while the last group vanishes because \(\dim M/\text{Spin}(k)\leq 4\).
\end{proof}
\section{Introduction}
Many decades after their discovery, neutrino oscillations are still among the most interesting and subtle observed phenomena, and one of the few clearly established pieces of evidence for Beyond the Standard Model (BSM) physics~\cite{Giunti:1053706,barger2012physics}. Two complementary approaches have been used to describe oscillations: a Quantum Mechanical one, in which the neutrino wave packet propagates between the production and detection locations, and a Quantum Field Theory one, in which production, propagation and detection are considered as a single process. For a review and a comparison between the two formalisms we refer the reader to~\cite{Akhmedov:2010ms} and references therein. In addition to pointing to the need for BSM physics, neutrino oscillations are themselves affected by New Physics (NP), either modifying the production and detection processes or generating new interactions that affect neutrino propagation. NP effects are traditionally encoded in the so-called Non-Standard Interaction (NSI) parameters~\cite{Dev:2019anc}. The ultraviolet (UV) origin of the NSI parameters can be traced to specific models or, more generally, to higher dimensional operators constructed out of SM fields~\cite{Altmannshofer:2018xyo,Bischer:2019ttk,Falkowski:2019kfn}. The Effective Field Theory (EFT) constructed out of SM fields is known as SMEFT. A complete classification of the SMEFT operators is available for operators of dimension five~\cite{Weinberg:1979sa}, six~\cite{Grzadkowski:2010es}, seven~\cite{Lehman:2014jma}, eight~\cite{Murphy:2020rsh,Li:2020gnx} and nine~\cite{Li:2020xlh}. A second EFT relevant for neutrino physics is the so-called $\nu$SMEFT, in which right handed neutrinos are added to the SM particle content. A complete classification is available up to operators of dimension $7$~\cite{Graesser:2007yj,Graesser:2007pc,delAguila:2008ir,Aparici:2009fh,Bhattacharya:2015vja,Liao:2016qyd}.
Although of paramount importance, the use of EFTs is hindered by the fact that not all operators that can be written at a fixed dimension are independent. Indeed, field redefinitions can be used to eliminate certain operators in favor of others, making the determination of a basis of independent operators a non-straightforward task. Even worse, although the physics cannot depend on the chosen basis, the physical effects of individual operators may be transparent in one basis and opaque in another.\footnote{For fixed dimension, the total number of independent operators can be computed without resorting to explicit expressions~\cite{Henning:2015alf}. For the $d=6$ operators of the SMEFT automatic tools are available for the conversion between different bases~\cite{Falkowski:2015wza}.}
Such problems can be circumvented using the on-shell amplitudes formalism~\cite{Dixon:2013uaa,Arkani-Hamed:2017jhn,Chung:2018kqs}. This approach bypasses the need for quantum fields and Lagrangians, constructing scattering amplitudes in terms of physical states and ``fundamental'' $3$-point amplitudes. Such objects can be simply determined from the covariance of the S-matrix under little group transformations. Once the $3$-point amplitudes are known, they can be ``glued'' together to construct higher-point amplitudes simply imposing locality, i.e. requiring that poles appear only when an intermediate particle is exchanged on-shell. This formalism not only allows one to derive very strong results for massless theories (for instance, that the only consistent theory of massless spin-2 particles is general relativity or that, conversely, the only consistent theory of interacting massless spin-1 particles is Yang-Mills~\cite{Elvang:2013cua,Benincasa:2007xk,McGady:2013sga}), but is an effective way to treat the Standard Model (SM)~\cite{Christensen:2018zcq} and even EFTs~\cite{Shadmi:2018xan,Ma:2019gtx,Aoude:2019tzn,Durieux:2019eor,Bachu:2019ehv,Durieux:2019siw,Gu:2020thj}: once the particle content of the $n$-point amplitude is chosen, on-shell techniques allow one to enumerate all possible kinematic structures permitted by the little group, independently of the order in the EFT at which they are generated for the first time. Scattering amplitudes are thus a powerful tool to include all-order effects without having to worry about operator bases and field redefinitions.
In this paper we apply the on-shell amplitudes formalism to describe neutrino oscillations. Recall that oscillations are a phenomenon with intrinsic finite-distance (and time) dependence, and as such appropriate wave functions must be introduced to localize the neutrino creation and detection processes. Following~\cite{Akhmedov:2010ua}, we write the overall amplitude for the neutrino creation and detection processes as
\begin{equation}
\mathcal{M} = \int d^4x_1 \int d^4 x_2 \Psi_{P}(x_1, X_P) \Psi_{D}(x_2, X_D) M(x_1, x_2),
\end{equation}
where the wave functions $\Psi_A(x, X_A)$ are localized at the production ($X_P$) and detection ($X_D$) spacetime points, and the amplitude $M(x_1, x_2)$ can be Fourier transformed according to
\begin{equation}
M(x_1, x_2) = \int\frac{d^4q}{(2\pi)^4} \mathcal{A}_q e^{-i q (x_1-x_2)} .
\end{equation}
Alternative expressions that give physically equivalent results can be found in~\cite{Akhmedov:2010ms,Falkowski:2019kfn}.
Once the production and detection channels are fixed, on-shell techniques will allow us to compute the amplitude in momentum space $\mathcal{A}_q$. For simplicity, we will focus on production and detection occurring with a $W$ boson and a charged lepton. As we are going to see, the scattering amplitude we will obtain contains all possible modifications of the $\nu e W$ amplitude, and not only the SM term. Needless to say, the flavor structure must be treated carefully to correctly describe oscillations. We do this by first identifying the only SM structure appearing in the 3-point amplitude $\nu e W$ and then extending the discussion to the additional structures that appear.
The remainder of the paper is organized as follows. We start in Sec.~\ref{sec:scatt_amp} with a brief overview of the scattering amplitudes techniques for massless and massive particles. In Sec.~\ref{sec:3_point_amp} we write the 3-point amplitude $\mathcal{A}(\nu \bar{e} W)$ for massive particles. We will also describe a ``dictionary'' between the terms in the amplitude and the operators that generate them in the usual EFT approach. In Sec.~\ref{sec:PMNS} we discuss explicitly flavor and how the PMNS matrix appears in our approach. Sec.~\ref{sec:4point} is instead devoted to the construction of the 4-point amplitude $WWe\bar{e}$, which will provide us with neutrino oscillations. Finally, we conclude in Sec.~\ref{sec:conc}. We also add two Appendices: App.~\ref{app:Dirac} is devoted to the explicit connection between the on-shell amplitudes and the usual field theory computation, while App.~\ref{app:explicit_4point} contains explicit formulas for the amplitudes appearing in Sec.~\ref{sec:4point}.
\section{A brief review of on-shell scattering amplitudes}\label{sec:scatt_amp}
As is well known, one-particle states in a quantum theory transform in irreducible representations of the Little Group (LG)~\cite{Weinberg:1995mt}. For massive particles ${\rm LG} = SU(2)$, while for massless particles ${\rm LG} = U(1)$. Invariance of the S-matrix under Lorentz transformations thus implies covariance of the scattering amplitudes under LG transformations~\cite{Weinberg:1995mt,Arkani-Hamed:2017jhn}:
\begin{equation}\label{eq:amp_covariance}
\mathcal{A}\left[\left\{p_1, \sigma_1 \right\} , \dots, \left\{p_N, \sigma_N \right\} \right] \to D^{(j_1)}_{\sigma_1' \sigma_1} \dots D^{(j_N)}_{\sigma_N' \sigma_N} \mathcal{A}\left[\left\{\Lambda p_1, \sigma_1' \right\} , \dots, \left\{\Lambda p_N, \sigma_N' \right\} \right] .
\end{equation}
In the previous equation we have denoted by $\left\{ p_i, \sigma_i \right\}$ the momentum and spin (helicity) of massive (massless) particles, while $D^{(j_i)}_{\sigma_i' \sigma_i} $ is the LG transformation. For massive particles this is the usual $2 j_i +1$ dimensional representation of $SU(2)$, while for massless particles it is simply given by a phase.
The on-shell amplitudes formalism consists in constructing all possible terms satisfying Eq.~\eqref{eq:amp_covariance} without introducing quantum fields nor Lagrangians. It is convenient to employ helicity variables to construct amplitudes. This is motivated by the well-known relation between the Lorentz group $SO(1,3)$ and $SL(2, \mathbb{C})$, which allows writing the 4-momentum $p$ of a particle as a $2\times 2$ matrix according to
\begin{equation}
p^{\dot{\alpha}\alpha} \equiv p_\mu \bar{\sigma}^{\mu \dot{\alpha} \alpha}, ~~~~ p_{\alpha\dot{\alpha}} \equiv p_\mu \sigma^\mu_{\alpha\dot{\alpha}} ,
\end{equation}
where $\alpha$ and $\dot{\alpha}$ denote the $(1/2,0)$ and $(0, 1/2)$ representation of the Lorentz group, respectively. For a comprehensive review see~\cite{Dreiner:2008tw}.
As usual, $\sigma^\mu = (1, \bm{\sigma})$ and $\bar{\sigma}^\mu = (1, -\bm{\sigma})$. The two 4-vectors are related by
\begin{equation}
\bar{\sigma}^{\mu \dot{\alpha}\alpha} = \epsilon^{\alpha\beta} \epsilon^{\dot{\alpha}\dot{\beta}} \sigma^\mu_{\beta\dot{\beta}},
\end{equation}
where $\epsilon$ is the Levi-Civita symbol in two dimensions. We fix $\epsilon^{12} = 1 = - \epsilon_{12}$. The $\epsilon$ matrix is used to raise and lower the spinor indices $\alpha$ and $\dot{\alpha}$.
The momentum matrices transform according to $p \to \mathcal{L}\, p\, \mathcal{L}^\dag$ under Lorentz transformations.
For massless particles, the matrix $p$ has rank 1 (because $\det p =0$) and can be written as the product of helicity spinors. Those transforming as the $(1/2,0)$ representation of the Lorentz group will be denoted by angle brackets $\ket{p}_\alpha$ and $\bra{p}^\alpha$, while those transforming in the $(0, 1/2)$ representation will be denoted by square brackets $\sbra{p}_{\dot{\alpha}}$ and $\sket{p}^{\dot{\alpha}}$:
\begin{equation}
p_{\alpha\dot{\alpha}} = \ket{p} \sbra{p}, ~~~ p^{\dot{\alpha}\alpha} = \sket{p} \bra{p}.
\end{equation}
We leave implicit the spinor indices, since their position is fixed by consistency between the left and right sides. For reasons that will be justified later on, we will also take complex momenta. Under a Lorentz transformation, the helicity spinors behave as the one-particle states of the theory~\cite{Weinberg:1995mt,Arkani-Hamed:2017jhn}: they will transform with an $SL(2,\mathbb{C})$ matrix acting on their spinor index and under the $U(1)$ LG as well. Concretely, we have
\begin{equation}\label{eq:massless_transf}
\ket{p}_\alpha \to \mathcal{L}_\alpha^\beta\, t \ket{p}_\beta, ~~~ \sket{p}^{\dot{\alpha}} \to (\mathcal{L}^*)^{\dot{\alpha}}_{\dot{\beta}}\, t^{-1} \sket{p}^{\dot{\beta}} ,
\end{equation}
where $\mathcal{L}$ is the spinor representation of the Lorentz transformation and $t$ is a complex number related to the LG transformation\footnote{For real momenta $t$ reduces to a phase $t = e^{i \theta/2}$, where $\theta$ parametrizes the LG transformation.}. Analogous transformations apply to $\bra{p}$ and $\sbra{p}$. As expected, the momentum matrix is invariant under LG transformations.
In the massive case things are slightly more complicated: since $p$ is now a rank 2 matrix, we need to introduce a pair of helicity variables to correctly take into account the degrees of freedom. We will call them $\ket{p^I}$ and $\sket{p^I}$, with $I = 1,2$. Again we leave implicit the spinor indices because their position will be determined uniquely once we write the momentum matrix in terms of these variables. We interpret the index $I$ appearing in the helicity variables as the $SU(2)$ LG index. Once more, a Lorentz transformation acts on all the indices separately:
\begin{equation}\label{eq:massive_transf}
\ket{p^I}_\alpha \to \mathcal{L}_\alpha^\beta\, W^I_{\phantom{I}J} \ket{p^J}_\beta , ~~~ \sket{p^I}^{\dot{\alpha}} \to (\mathcal{L}^*)^{\dot{\alpha}}_{\dot{\beta}}\, W^I_{\phantom{I}J} \sket{p^J}^{\dot{\beta}} .
\end{equation}
As in Eq.~\eqref{eq:massless_transf} the matrix $\mathcal{L}$ is the spinor representation of the Lorentz transformation, while $W$ is the $SU(2)$ LG transformation. Momentum invariance under LG transformations is then guaranteed provided we define
\begin{equation}\label{eq:massive_p}
p^{\dot{\alpha} \alpha} \equiv \epsilon_{IJ} \sket{p^I} \bra{p^J}, ~~~ p_{\alpha\dot{\alpha}} \equiv - \epsilon_{IJ} \ket{p^I} \sbra{p^J}.
\end{equation}
The minus sign in the second term is necessary to have consistency with the expression obtained lowering the spinor indices in $p^{\dot{\alpha} \alpha}$. Before going back to scattering amplitudes, let us write explicit expressions for the helicity spinors. Using the spherical decomposition $\bm{p} = p (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$ for the three dimensional momentum we obtain
\begin{equation}\label{eq:massive_helicity}
\bra{p^{I}} =
\left(
\begin{array}{c|c}
\sqrt{E+p} \,c & - \sqrt{E-p}\, s \\
\sqrt{E+p}\, s^* & \sqrt{E-p} \,c
\end{array}\right), ~~~~ \sket{p^{ I}} =
\left(\begin{array}{c|c}
(\sqrt{E-p}\, s)^* & (\sqrt{E+p} \,c)^* \\
- (\sqrt{E-p}\, c)^* & (\sqrt{E+p}\, s^*)^*
\end{array}\right) ,
\end{equation}
where $c = \cos(\theta/2)$, $s = \sin(\theta/2) e^{i\phi}$ and $\sqrt{E\pm p}$ is a complex number. The first column refers to $I=1$, while the second corresponds to $I=2$. The spinors $\ket{p^I}$ and $\sbra{p^I}$ can be obtained by simply lowering the spinor index. The helicity variables so defined satisfy the massive Weyl equations
\begin{equation}
\begin{array}{ccc}
p \ket{p^I} = M \sket{p^I} & ~~~~~ & p \sket{p^I} = M^\dag \ket{ p^I} , \\
\sbra{p^I} p = - M^\dag \bra{p^I}, & ~~~~~ & \bra{p^I} \bar{p} = - M \sbra{p^I} ,
\end{array}
\end{equation}
where the complex number $M$ is defined as $M M^\dag = m^2$, with $m$ the physical mass of the particle.
The Weyl equations written in this form are valid provided
\begin{equation}
\braket{p^I p^J} = M \epsilon^{IJ}, ~~~~~ \sbraket{p^I p^J} = - M^\dag \epsilon^{IJ},
\end{equation}
where we have defined the Lorentz invariant products
\begin{equation}\label{eq:inv_products}
\braket{\lambda \chi} \equiv \bra{\lambda}^\alpha \ket{\chi}_\alpha, ~~~ \sbraket{\lambda \chi} \equiv \sbra{\lambda}_{\dot{\alpha}} \sket{\chi}^{\dot{\alpha}} .
\end{equation}
For our purposes it is useful to notice that products like $\braket{p^I q^J}$ or $\sbraket{p^I q^J}$ are Lorentz invariant but transform as tensors under the massive LG. The same is true for the invariant products of massless spinor variables, which are themselves covariant under $U(1)$ transformations.
A virtue of the explicit expression in Eq.~\eqref{eq:massive_helicity} is that it makes obvious what happens in the high energy limit:
\begin{equation}\label{eq:high_en_limit}
\bra{p^{I}} \stackrel{p \to E}{\longrightarrow}
\left(
\begin{array}{c|c}
\sqrt{2E} \,c & 0 \\
\sqrt{2E}\, s^* & 0
\end{array}\right),
~~~~
\sket{p^{ I}} \stackrel{p \to E}{\longrightarrow}
\left(\begin{array}{c|c}
0 & (\sqrt{2E} \,c)^* \\
0 & (\sqrt{2E}\, s^*)^*
\end{array}\right) .
\end{equation}
We define the massless helicity variables
\begin{equation}
\bra{p} = \begin{pmatrix} \sqrt{2E} \,c \\
\sqrt{2E}\, s^* \end{pmatrix}, ~~~ \sket{p} = \begin{pmatrix}(\sqrt{2E} \,c)^* \\ (\sqrt{2E}\, s^*)^* \end{pmatrix}.
\end{equation}
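Comparing with Eq.~\eqref{eq:high_en_limit}, only one little-group component of each massive spinor survives in the massless limit,
\begin{equation*}
\bra{p}=\bra{p^{I=1}}, \qquad \sket{p}=\sket{p^{I=2}},
\end{equation*}
so that removing the masses amounts to selecting definite LG components of the massive variables; this is the content of the ``unbolding'' operation used in Sec.~\ref{sec:3_point_amp}.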
Let us now finally discuss how to determine the expression of on-shell amplitudes. We will begin with the massive case, which will be our starting point in Section~\ref{sec:3_point_amp}. The idea is straightforward: we need to list all possible structures transforming according to Eq.~\eqref{eq:amp_covariance} constructed out of the helicity variables transforming as in Eq.~\eqref{eq:massive_transf}. First of all we see that only Lorentz invariant combinations can appear, i.e. only the invariant products of Eq.~\eqref{eq:inv_products} are allowed in the amplitude. Second, to correctly reproduce the $SU(2)$ LG transformation of a spin-$j$ object we use the well known result that the spin-$j$ representation of $SU(2)$ can be constructed by taking the symmetric combination of $2j$ spinors~\cite{Zee:2003mt,Tung:1985na}. For instance, a 3-point amplitude involving three particles of momentum and spin $(p_i, j_i)$, $i=1,2,3$ must be an $SU(2)$ tensor of the form
\begin{equation}
\mathcal{A}\left[1_{j_1}^{\left\{I_1 \dots I_{2 j_1} \right\} } 2_{j_2}^{\left\{ J_1 \dots J_{2j_2} \right\}} 3_{j_3}^{\left\{ K_1 \dots K_{2j_3} \right\}} \right],
\end{equation}
where each index transforms according to $O^I \to W^I_{\phantom{I}J} O^J$ in terms of the two-dimensional LG representation. We have denoted by $1$, $2$ and $3$ the momenta of the particles, a notation that we will use throughout the paper. The curly brackets indicate complete symmetrization of the indices included in order to obtain the desired spin-$j$ representation. The explicit form of the amplitude will depend on the spin of the particles involved, and no general expression is available. We will discuss an explicit example in Sec.~\ref{sec:3_point_amp}, and refer the reader to~\cite{Arkani-Hamed:2017jhn,Aoude:2019tzn,Durieux:2019eor,Bachu:2019ehv,Durieux:2019siw} for more examples.
Massless amplitudes can be obtained by applying the same procedure. We will take all momenta incoming. As is well known, 3-point amplitudes involving on-shell massless particles vanish identically due to kinematics. This motivates our choice of complexifying the momenta in~\eqref{eq:massless_transf}, since for complex momenta it is no longer true that the 3-point amplitude vanishes. With this choice the massless amplitude must transform under the LG as
\begin{equation}\label{eq:amp_massless_tr}
\mathcal{A}\left[1^{h_1} 2^{h_2} 3^{h_3} \right] \to t_1^{-2 h_1} t_2^{-2h_2} t_3^{-2h_3} \mathcal{A}\left[1^{h_1} 2^{h_2} 3^{h_3} \right] .
\end{equation}
It can be shown~\cite{Arkani-Hamed:2017jhn,Elvang:2013cua} that the 3-point amplitude is completely fixed by LG covariance up to an overall constant:
\begin{equation}\label{eq:amp_massless}
\mathcal{A}\left[1^{h_1} 2^{h_2} 3^{h_3} \right] = \left\{
\begin{array}{lccl}
g \braket{12}^{h_3 - h_1 - h_2} \braket{13}^{h_2 - h_1 - h_3} \braket{23}^{h_1 - h_2 - h_3} & ~~~ & {\rm when} & \sum h_i <0 \\
\bar{g} \sbraket{12}^{h_1+h_2 - h_3} \sbraket{13}^{h_1 + h_3 - h_2} \sbraket{23}^{h_2 + h_3 - h_1} & ~~~ & {\rm when} & \sum h_i > 0
\end{array}
\right.
\end{equation}
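As a concrete illustration, for two helicity assignments that will be relevant for the \(\nu \bar{e} W\) amplitude of Sec.~\ref{sec:3_point_amp}, Eq.~\eqref{eq:amp_massless} gives
\begin{align*}
\mathcal{A}\left[1^{-1/2}\, 2^{-1/2}\, 3^{-1} \right] &= g \braket{13}\braket{23}, \\
\mathcal{A}\left[1^{-1/2}\, 2^{+1/2}\, 3^{-1} \right] &= g\, \frac{\braket{13}^2}{\braket{12}},
\end{align*}
since the exponents of \(\braket{12}\), \(\braket{13}\) and \(\braket{23}\) evaluate to \((0,1,1)\) and \((-1,2,0)\), respectively. Both structures will reappear below as massless limits of the massive amplitude in Eq.~\eqref{eq:3-point_noflavor}.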
As briefly mentioned in the Introduction, the simplicity of the structure has far reaching consequences. Higher-point amplitudes do not have the same structural simplicity as 3-point amplitudes, but can be determined using the same LG covariance. An additional ingredient of fundamental importance in determining $n>3$-point amplitudes is the factorization in the various kinematical channels. We will use this factorization in Sec.~\ref{sec:4point} to construct the 4-point amplitude. We defer a more thorough discussion to that section.
We conclude this section by listing a number of identities satisfied by the spinor variables that will be necessary for the following computations. For all incoming particles, momentum conservation can be recast as
\begin{equation}\label{eq:momentum_conservation}
\sum_i p^i_{\alpha\dot{\alpha}}=\sum_i \ket{i}\sbra{i}=0,
\end{equation}
where $p^i$ can be either a massive or massless momentum. The following Schouten identity for the Levi-Civita symbol is also useful,
\begin{equation}\label{Schouten}
\epsilon^{\delta\alpha}\epsilon^{\beta\gamma}+\epsilon^{\delta\beta}\epsilon^{\gamma\alpha}+\epsilon^{\delta\gamma}\epsilon^{\alpha\beta}=0,
\end{equation}
in particular when contracted with spinor variables. For instance, contracting the equation above with three massless angle spinors results in
\begin{equation}
\ket{i}\braket{jk}+\ket{j}\braket{ki}+\ket{k}\braket{ij}=0.
\end{equation}
An analogous identity holds for dotted indices and square bracket spinors. We also have identities involving spinor variables of the same type:
\begin{equation}
\ket{p^I}_\alpha\bra{p_I}^\beta=M\delta_\alpha^\beta, ~~~~\sket{p^I}^{\dot{\alpha}}\sbra{p_I}_{\dot{\beta}}=-M^\dagger\delta^{\dot{\alpha}}_{\dot{\beta}}.
\end{equation}
Finally, the scalar product between two 4-vectors $p$ and $q$ is equal to
\begin{equation}
2 p\cdot q = \braket{p^I q^J} \sbraket{q_J p_I},
\end{equation}
where, as usual, repeated indices are contracted.
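For massless momenta, where only one LG component survives, the same product reduces (with our conventions) to the familiar relation
\begin{equation*}
2 p\cdot q = \braket{pq} \sbraket{qp} ,
\end{equation*}
so that kinematic invariants are directly expressed in terms of spinor brackets.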
\section{$\nu \bar{e} W$ 3-point amplitude}\label{sec:3_point_amp}
We now apply the techniques outlined in Sec.~\ref{sec:scatt_amp} to write the massive amplitude involving one neutrino, one charged lepton and one $W$ boson. According to our discussion, since the amplitude involves two fermions and one spin-1 particle, it must be of the form
\begin{equation}
\mathcal{A}\left[ 1_\nu^I \, 2_{\bar{e}}^J \, 3_W^{\left\{ K_1 K_2 \right\} } \right] .
\end{equation}
Since the brackets $\braket{\lambda \chi}$ and $\sbraket{\lambda \chi}$ are antisymmetric, the combinations $\braket{3^{\left\{ K_1\right.} 3^{\left. K_2 \right\}} } $ and $\sbraket{3^{\left\{ K_1\right.} 3^{\left. K_2 \right\}} } $ vanish identically. This implies that only the angle or square bracket products involving the helicity variables $(1,3)$ and $(2,3)$ can appear, and we are left with only 4 terms satisfying the correct LG transformation: $\braket{1^I 3^{\left\{ K_1\right.}} \braket{2^J 3^{\left. K_2 \right\}} }$, $\braket{1^I 3^{\left\{ K_1\right.}} \sbraket{2^J 3^{\left. K_2 \right\}} }$, $\sbraket{1^I 3^{\left\{ K_1\right.}} \braket{2^J 3^{\left. K_2 \right\}} }$ and $\sbraket{1^I 3^{\left\{ K_1\right.}} \sbraket{2^J 3^{\left. K_2 \right\}} }$. All the terms are independent, and each of them will appear with an independent coupling in the amplitude. We will omit flavor indices at this stage, getting back to this issue in Sec.~\ref{sec:PMNS}. To avoid cluttering the notation with LG indices, we follow Ref.~\cite{Arkani-Hamed:2017jhn} and introduce the bold notation: massive spinor helicity variables will be denoted in bold, leaving implicit the symmetrization operation. With this convention the amplitude reads
\begin{equation}\label{eq:3-point_noflavor}
\mathcal{A}\left[ 1_\nu 2_{\bar{e}} 3_W \right] = \frac{y_L}{M} \braket{\bm{13}} \braket{\bm{23}} + \frac{g_L}{m_W} \braket{\bm{13}} \sbraket{\bm{23}} + \frac{g_R}{m_W} \sbraket{\bm{13}} \braket{\bm{23}} + \frac{y_R}{M} \sbraket{\bm{13}} \sbraket{\bm{23}} .
\end{equation}
Our result agrees with what was found in Ref.~\cite{Durieux:2019eor}. In four dimensions an $n$-point amplitude has dimension $4-n$. The angle and square brackets each have dimension $1$, implying that the coefficients of the four terms appearing in the previous equation must have dimension $-1$. We make this explicit by introducing a mass scale in the denominator. Our choice will become clear later on (see discussion below Eq.~\eqref{eq:massless_limit}). Let us now discuss the UV origin of the different terms. To understand the helicity and chirality of the particles involved in each of the terms it is useful to take the high energy limit in which all the particles become massless. This amounts to unbolding the amplitude and applying a $U(1)$ LG transformation as in Eq.~\eqref{eq:massless_transf}, with $e^{i \theta/2} = t$ as explained in the previous Section. Once this is done, one can infer the helicity of the particles involved by simply comparing with Eqs.~\eqref{eq:amp_massless_tr} and~\eqref{eq:amp_massless}.
The high energy limits of the terms involving only angle or square brackets are straightforward and read
\al{
\frac{y_L}{M} \braket{\bm{13}} \braket{\bm{23}} & \to \frac{y_L}{M}\braket{13} \braket{23} = \mathcal{A}\left[1_\nu^- 2_{\bar{e}}^- 3_W^{-1}\right] ,\\
\frac{y_R}{M}\sbraket{\bm{13}} \sbraket{\bm{23}} & \to \frac{y_R}{M}\sbraket{13} \sbraket{23} = \mathcal{A}\left[1_\nu^+ 2_{\bar{e}}^+ 3_W^{+1}\right] .
}
Using the explicit formulas of App.~\ref{app:Dirac} we obtain that a negative-helicity massless fermion maps to a field with left handed chirality, while a negative-helicity massless antifermion maps to a field with right handed chirality. Inverting the helicity corresponds to an inversion of the chirality. Since these terms involve a fermion-antifermion pair with the same helicity, they must correspond to dipole operators in the Lagrangian language. More specifically,
\begin{equation}\label{eq:dipole}
\begin{array}{lll}
\braket{\bm{13}}\braket{\bm{23}} &~{\rm is~generated~by~the~dipole~operator}~ &\bar{e}_R \sigma^{\mu\nu} \nu_L W_{\mu\nu}^-, \\
\sbraket{\bm{13}} \sbraket{\bm{23}}&~{\rm is~generated~by~the~dipole~operator}~ &\bar{e}_L \sigma^{\mu\nu} \nu_R W_{\mu\nu}^-.
\end{array}
\end{equation}
Explicit computation of the amplitude using the results of App.~\ref{app:Dirac} confirms this conclusion.
The high energy limit of the remaining two terms is richer, since in this case both the positive and negative helicity of the spin-1 particle can be reached. To be more precise, the correct form of the amplitude in the UV can be reached by allowing the masses to vanish one at a time. Since the result will be important in Sec.~\ref{sec:PMNS}, it is worth showing it in detail. We have
\begin{equation}\label{eq:massless_limit}
\frac{g_L}{m_W} \braket{\bm{13}} \sbraket{\bm{23}} \stackrel{m_1 \to 0}{\longrightarrow}\frac{g_L}{m_W} \braket{1\bm{3}} \sbraket{\bm{23}} \stackrel{m_2 \to 0}{\longrightarrow} \frac{g_L}{m_W} \braket{1\bm{3}} \sbraket{2\bm{3}} = - g_L \frac{\braket{1 \bm{3} }^2 }{\braket{12}} = g_L \frac{\sbraket{2 \bm{3}}^2 }{\sbraket{12}},
\end{equation}
where in the last step we have used momentum conservation and the Weyl equations to obtain a factor of $m_W$ that cancels the denominator. This justifies our choice of writing the coefficient as $g_L/m_W$ since the presence of the vector mass in the denominator is necessary for a consistent massless limit within the SM. The term $\braket{1\bm{3}}^2/\braket{12}$ unbolds to $\mathcal{A}[1_\nu^- 2_{\bar{e}}^+ 3_W^{-1} ]$, while $\sbraket{2\bm{3}}^2/\sbraket{12}$ unbolds to $\mathcal{A}[ 1_\nu^- 2_{\bar{e}}^+ 3_W^{+1} ]$. Analogous reasoning can be applied to the $\sbraket{\bm{13}} \braket{\bm{23}}$ term, which unbolds to $\mathcal{A}[1_\nu^+ 2_{\bar{e}}^- 3_W^{\pm1}]$. Since all these amplitudes involve a fermion-antifermion pair with opposite helicity we conclude that they must be generated, in field theory language, by the following terms:
\begin{equation}\label{eq:monopole}
\begin{array}{lll}
\braket{\bm{13}}\sbraket{\bm{23}} &~{\rm is~generated~by~the~operator}~ &\bar{e}_L \gamma^{\mu} \nu_L W_{\mu}^-, \\
\sbraket{\bm{13}} \braket{\bm{23}}&~{\rm is~generated~by~the~operator}~ &\bar{e}_R \gamma^{\mu} \nu_R W_{\mu}^-.
\end{array}
\end{equation}
It is now straightforward to determine the $SU(2)_L \times U(1)_Y$ operators that generate the terms in Eqs.~\eqref{eq:dipole} and~\eqref{eq:monopole}. They can be found in Tab.~\ref{tab:UV_origin}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{c|c|c}
amplitude & $U(1)_{EM}$ theory & $SU(2)_L \times U(1)_Y$ theory \\[0.4ex]
\hline
$\braket{\bm{13}}\braket{\bm{23}} $ & $\bar{e}_R \,\sigma^{\mu\nu} \nu_L W_{\mu\nu}^-$ ($d=5$) & $\bar{e}_R \sigma^{\mu\nu} H^\dag \tau^a L W^a_{\mu\nu}$ ($d=6$) \\[0.4ex]
\hline
\multirow{2}{*}{$\sbraket{\bm{13}} \sbraket{\bm{23}}$} & \multirow{2}{*}{$\bar{e}_L \sigma^{\mu\nu} \nu_R W_{\mu\nu}^-$ ($d=5$)} & $(\bar{L} H) \sigma^{\mu\nu} (H^T \epsilon \tau^a L^c) W_{\mu\nu}^a$ ($d=7$, Majorana) \\[0.4ex]
& & $\bar{L} \sigma^{\mu\nu} \tau^a N_R \epsilon H^* W_{\mu\nu}^a$ ($d=6$, Dirac) \\[0.4ex]
\hline
$\braket{\bm{13}}\sbraket{\bm{23}}$ & $\bar{e}_L \gamma^\mu \nu_L W_\mu^-$ ($d=4$) & $\bar{L} \gamma^\mu \tau^a L W_\mu^a$ ($d=4$) \\[0.4ex]
\hline
\multirow{2}{*}{$\sbraket{\bm{13}} \braket{\bm{23}} $} & \multirow{2}{*}{$ \bar{e}_R \gamma^{\mu} \nu_R W_{\mu}^-$ ($d=4$)} & $\bar{e}_R \gamma^\mu (H^\dag \epsilon L^c) H^\dag \epsilon D_\mu H^*$ ($d=7$, Majorana) \\[0.4ex]
& & $\bar{e}_R \gamma^\mu N_R H^\dag \epsilon D_\mu H^*$ ($d=6$, Dirac)
\end{tabular}
\caption{\label{tab:UV_origin} Amplitude/operators dictionary. In the $SU(2)_L \times U(1)_Y$ invariant theory we list the smallest dimensional operators contributing to the corresponding amplitude, distinguishing between the Majorana and Dirac cases, if needed. }
\end{center}
\end{table}
For Majorana neutrinos we have $\nu_R = \nu_L^c$, while for Dirac neutrinos this is an independent degree of freedom $\nu_R = N_R$.
The table emphasizes one of the crucial points raised in the Introduction: on-shell amplitudes automatically include structures that (i) appear at different orders in an expansion over the cutoff scale and (ii) traditionally belong to different EFTs: the SMEFT for the terms involving $L^c$ and the $\nu$SMEFT for the terms involving $N_R$ (see discussion in the Introduction).\footnote{Typically, in the $\nu$SMEFT the right handed neutrinos are supposed to be heavier than the left handed ones and are responsible for the generation of neutrino masses. Since we are considering Dirac neutrinos, we are implicitly assuming that the Majorana term for the $N_R$ fields is forbidden by some symmetry.} Moreover, we observe from Tab.~\ref{tab:UV_origin} that the structures involving the right handed neutrino helicity are more suppressed for Majorana than for Dirac neutrinos when embedded in an EFT framework.
Comparing the terms in Eq.~\eqref{eq:3-point_noflavor} with the operators in Tab.~\ref{tab:UV_origin} it is possible to infer the correct $\Lambda$ dependence of the scale $M$. For the terms generated by $d=6$ operators we have $M = \Lambda^2/v$, while for the terms generated at $d=7$ we have $M = \Lambda^3/v^2$. Can this be done from a purely on-shell perspective? An apparent obstacle to this program is the fact that only the $\braket{\bm{13}} \sbraket{\bm{23}}$ structure can be directly UV completed into the $SU(2)_L \times U(1)_Y$ invariant 3-point amplitudes (we will write such amplitudes later on in Eq.~\eqref{eq:SM_amplitudes}). According to Tab.~\ref{tab:UV_origin} all other terms should be generated by higher-point amplitudes. Two arguments can be used to perform the IR/UV matching between amplitudes involving a different number of particles. The first one was presented in Ref.~\cite{Durieux:2019eor}, and amounts to noticing that, in the soft limit in which the Higgs boson momentum vanishes, amplitudes involving the Higgs boson are indistinguishable from amplitudes without the Higgs boson. To compensate for the dimension mismatch between higher- and lower-point amplitudes, a new mass scale must be introduced, analogous to the Higgs vev. A second argument has been presented in Ref.~\cite{Bachu:2019ehv} and amounts to imposing correlations between coefficients to tame a possible growth with energy of the amplitude. In our case this procedure can be applied to the $\sbraket{\bm{13}} \braket{\bm{23}}$ structure. Focussing for simplicity on Dirac neutrinos, the procedure amounts to ``gluing'' the 3-point amplitudes $\nu\bar{e}W^\pm$ and $h W^\pm W^0$ (where $W^0$ denotes the longitudinal component of the $W$ boson and $W^\pm$ the transverse ones) to obtain the 4-point amplitude $\nu \bar{e} W^0 H$. The last amplitude can be UV completed into $\nu \bar{e} H^\dag H$, which is $SU(2)_L \times U(1)_Y$ invariant and has a coefficient given at leading order by $1/\Lambda^2$. Demanding this 4-point amplitude not to grow with the energy requires $g_R \sim m_W^2/\Lambda^2$, apart from an $\mathcal{O}(1)$ coefficient. A similar reasoning applies to the Majorana case, in which however we need to construct the 5-point amplitude $L^c \bar{e} H^* H^* H^*$ to obtain an $SU(2)_L \times U(1)_Y$ object. There is in any case an obstruction to the application of the same procedure to the dipole amplitudes, since these are generated at loop level. Although interesting, this point lies outside the scope of this paper, and will be explored elsewhere.
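Schematically, and up to the $\mathcal{O}(1)$ Wilson coefficients that we leave implicit, the identification of $M$ follows from simple dimensional analysis: the $d=6$ dipole operator carries a coefficient $1/\Lambda^2$ and one power of the Higgs field, so that setting the Higgs to its vev $v$ gives
\begin{equation*}
\frac{y_L}{M} \sim \frac{v}{\Lambda^2} \quad\Longrightarrow\quad M \sim \frac{\Lambda^2}{v},
\end{equation*}
while the $d=7$ dipole operator, carrying $1/\Lambda^3$ and two powers of the vev, gives $M \sim \Lambda^3/v^2$.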
Before concluding, we observe that the kinematic structure of Eq.~\eqref{eq:3-point_noflavor} is completely generic, and applies to any 3-point interaction involving two spin-1/2 and one spin-1 particle. In particular, the same expression will be valid -- with different coefficients -- for charged interactions involving quarks and for neutral interactions involving leptons or quarks.
\section{Flavor and the PMNS matrix}\label{sec:PMNS}
In this section we discuss how flavor is implemented in the 3-point amplitude of Eq.~\eqref{eq:3-point_noflavor}. Clearly, we could use the operators in Tab.~\ref{tab:UV_origin} directly to infer the flavor structure of the interactions. Nevertheless, it is interesting to study how we can obtain the same result using the on-shell formalism only. We will not assume any mismatch between the flavor and mass bases as is usually done in the traditional QFT computation, since in the on-shell formalism there is no mass term in the Lagrangian to be diagonalized.
Our derivation rests on one important assumption: in the absence of mass, no quantum number can be used to distinguish between 1-particle states of different generations. In the massless limit we thus gain the freedom to perform unitary transformations on the states of each species. In our case these flavor transformations amount to
\begin{equation}\label{eq:flavor_tr}
\ket{\nu_i(\bm{p}, h)} \to (U_\nu^*)_{ji} \ket{\nu_j(\bm{p}, h)}, ~~~ \ket{\bar{e}_i( \bm{p}, h)} \to (U_e)_{ji} \ket{\bar{e}_j( \bm{p}, h)}~\footnote{We take the usual transformation in which antiparticles transform in the conjugate representation with respect to particles.},
\end{equation}
where $U_\nu$ and $U_e$ are unitary matrices. At the same time, in the massless limit the spinor variables depend only on the particle momentum, and are thus flavor blind. This means that all the flavor dependence must be encoded in the coefficients in front of each term in the amplitude, which must thus have non-trivial flavor transformations. Stated in another way: when a certain type of particle becomes massless, the amplitude must be a covariant tensor under a flavor transformation of that type of particles. Notice that this is not the case for massive amplitudes, in which also the spinor variables depend on the particle mass.
We are now going to use these observations to deduce the flavor structures of the coefficients $g_{L,R}$ and $y_{L,R}$. In order to do so, we are going to take the limit in which each type of particle becomes massless one at a time: first all the neutrinos, then all the charged leptons and finally the $W$ boson. When all the particles are massless we will match into a SM gauge amplitude~\cite{Christensen:2018zcq,Baratella:2020lzz}:
\begin{equation}\label{eq:SM_amplitudes}
\mathcal{A}_{SM}\left[1_{L_{A,i}}^- 2_{\bar{L}_{B,j}}^+ 3_{W^a}^{-1} \right] = g^{ij} (T^a)_{AB} \frac{\braket{13}^2}{\braket{12}}, ~~~ \mathcal{A}_{SM}\left[1_{L_{A,i}}^- 2_{\bar{L}_{B,j}}^+ 3_{W^a}^{+1} \right] = g^{ij} (T^a)_{AB} \frac{\sbraket{13}^2}{\sbraket{12}},
\end{equation}
where $L$ is the usual lepton doublet and $T^a$ is a gauge generator. We have written explicitly the gauge indices ($A$ and $B$) as well as the flavor indices ($i$ and $j$). As observed above, in the massless limit only the coefficients can depend on flavor. The hypothesis of flavor universality we are using allows us to employ an $L$ flavor transformation to make the $g^{ij}$ coefficients proportional to the identity: $g^{ij} \to g \delta^{ij}$, where $g$ is the $SU(2)_L$ gauge coupling.
We now have all the ingredients to discuss the flavor dependence of the couplings. To fix our notation, we rewrite Eq.~\eqref{eq:3-point_noflavor} making explicit the flavor indices:
\begin{equation}\label{eq:3-point_flavor}
\mathcal{A}\left[ 1_{\nu_i} 2_{\bar{e}_j} 3_W \right] = \frac{y_L^{ij}}{M} \braket{\bm{1}_i\bm{3}} \braket{\bm{2}_j\bm{3}} + \frac{g_L^{ij}}{m_W} \braket{\bm{1}_i\bm{3}} \sbraket{\bm{2}_j\bm{3}} + \frac{g_R^{ij}}{m_W} \sbraket{\bm{1}_i\bm{3}} \braket{\bm{2}_j\bm{3}} + \frac{y_R^{ij}}{M} \sbraket{\bm{1}_i\bm{3}} \sbraket{\bm{2}_j\bm{3}} .
\end{equation}
At this stage, all the coefficients are complex matrices in flavor space. Before taking the massless limits, we observe that, following the discussion in Sec.~\ref{sec:3_point_amp}, only the $\braket{\bm{1_i3}} \sbraket{\bm{2_j3}}$ term has the correct particle content to be matched into the SM amplitudes~\eqref{eq:SM_amplitudes}. Let us first discuss this term. The various massless limits can be achieved following Eq.~\eqref{eq:massless_limit}. At each stage we gain the freedom to perform a flavor transformation of the species that became massless. To remember this freedom we will explicitly apply a generic flavor transformation at each stage. Focussing on the coefficients only and writing them as matrices in flavor space we obtain
\begin{equation}\label{eq:flavor_matching}
\frac{g_L}{m_W} \stackrel{m_\nu \to 0}{\longrightarrow} \frac{U_\nu^\dag g_L}{m_W} \stackrel{m_e \to 0}{\longrightarrow} \frac{U_\nu^\dag g_L U_e}{m_W} \stackrel{m_W \to 0}{\longrightarrow} U_\nu^\dag g_L U_e = g \bm{1} ,
\end{equation}
where in the last step we have matched onto the SM amplitude with coefficient proportional to the identity in flavor space. The last identity can be true only if $g_L$ is proportional to a unitary matrix. We will thus make contact with the usual notation and write
\begin{equation}\label{eq:PMNS}
g_L = g \,U ,
\end{equation}
where $U$ is the PMNS matrix. Notice that we have obtained the PMNS matrix without having to ever talk about mass diagonalization and mismatches between the flavor and mass bases. Having established that $g_L$ is proportional to a unitary matrix we also observe that, as soon as the neutrinos become massless, it is possible to use the freedom of rotating the neutrino states to make $g_L$ proportional to the identity. This matches, as it should, the usual QFT conclusion that no PMNS matrix appears in the limit of massless neutrinos. Furthermore, it is interesting to observe that the standard counting of PMNS parameters is guaranteed thanks to the possibility of applying arbitrary phase transformations to the neutrino and charged antilepton 1-particle states. For Dirac neutrinos the phase transformations can be deduced directly from Eq.~\eqref{eq:flavor_tr} by identifying $(U_\nu)_{ij} \equiv e^{i \alpha_i} \delta_{ij}$ and $(U_e)_{ij} \equiv e^{i\beta_i} \delta_{ij}$. With this identification we are back to the usual textbook deduction of the number of phases~\cite{Giganti_2018}. The situation is different for Majorana neutrinos, since in this case particle and antiparticle coincide. Given that the latter transforms in the conjugate representation, consistency is ensured by requiring $U_\nu = U_\nu^*$, i.e. the transformation is constrained to be orthogonal. This means that no phase transformation can be applied in a consistent way on the Majorana neutrino 1-particle states, leaving us with 3 phases in $U$. Once more, we recover the usual parameter counting for the PMNS matrix.
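For concreteness, let us spell out this counting for $n$ generations; this is a standard recap, included here only to make the statement above explicit. A unitary $n \times n$ matrix contains $n^2$ real parameters, which can be split into $n(n-1)/2$ angles and $n(n+1)/2$ phases. In the Dirac case the $2n-1$ independent rephasings of the neutrino and charged-antilepton states remove all but
\begin{equation*}
\frac{n(n+1)}{2} - (2n-1) = \frac{(n-1)(n-2)}{2}
\end{equation*}
phases, i.e. a single phase for $n=3$. In the Majorana case only the $n$ charged-lepton rephasings are available, leaving
\begin{equation*}
\frac{n(n+1)}{2} - n = \frac{n(n-1)}{2}
\end{equation*}
phases, i.e. $3$ for $n=3$, in agreement with the counting above.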
When the other terms in the amplitude are turned on, a similar reasoning applies. A crucial difference emerges, however, since they cannot be matched onto any SM 3-point amplitude. We thus cannot conclude that the $y_{L,R}$ and $g_R$ coefficients are unitary, unless a special ``conspiracy'' aligns the flavor structure to the flavor identity in the UV. Comparing with the operators in Tab.~\ref{tab:UV_origin} we confirm the validity of our argument.
We conclude this section observing that the same line of reasoning that led us to the PMNS matrix can be applied to charged interactions between quarks. Although the quark and $W$ mass hierarchy forbids separate massless limits, the UV matching can be done taking all the particles massless at the same time. We again conclude that the coefficient of the $\braket{\bm{1}_i\bm{3}} \sbraket{\bm{2}_j\bm{3}}$ term must be proportional to a unitary matrix, to be identified with the CKM matrix. The main difference with respect to the present case is the UV origin of the other three terms.~\footnote{For completeness we report here the operators that generate the various terms in the $\mathcal{A}\left[1_u 2_{\bar{d}} 3_W\right]$ amplitude: the term $\braket{\bm{13}} \braket{\bm{23}}$ is generated at leading order by the operator $\bar{d}_R \sigma^{\mu\nu} \tau^a H^\dag Q W_{\mu\nu}^a$; $\sbraket{\bm{13}} \sbraket{\bm{23}}$ by $\bar{Q} \tilde{H} \sigma^{\mu\nu} \tau^a u_R W_{\mu\nu}^a$; $\braket{\bm{13}} \sbraket{\bm{23}}$ by the SM operator $\bar{Q} \gamma^\mu \tau^a Q W_\mu^a$; finally, $\sbraket{\bm{13}} \braket{\bm{23}}$ is generated by $\bar{d}_R \gamma^\mu u_R H^\dag \epsilon D_\mu H^*$. } Analogous reasoning will also apply to neutral currents, which are relevant to the computation of the neutrino matter potential. We defer the study of this issue from the on-shell perspective to future work.
\section{Oscillations from the 4-point amplitude}\label{sec:4point}
We are finally in the position of discussing the main point of the paper: how neutrino oscillations follow from the 4-point amplitude involving two charged leptons and two $W$ bosons. As mentioned in the Introduction, we use this amplitude as a proxy to study neutrino oscillations in the on-shell formalism. We will comment later on more realistic production processes involving mesons or lepton decays.
The 4-point amplitude can be written in full generality as
\begin{equation}
\mathcal{A}\left[1_W 2_W 3_{\bar{e}_j} 4_{e_k}\right] \equiv \mathcal{A}_{WWjk} = \sum_i \frac{\mathcal{R}_s^i}{s - m_{\nu_i}^2} + {\cal A}_{\rm contact} ,
\end{equation}
where we have defined a lighter notation for the amplitude symbol that will be used from now on. The first term is the one obtained by exchanging a neutrino in the s-channel and will be obtained by ``gluing'' together two 3-point amplitudes. The second one consists of all the additional contact terms that cannot be obtained from lower-point amplitudes. More specifically, while the first term has a pole for $s \to m_{\nu_i}^2$, the second one is regular in this limit. The quantity $\mathcal{R}_s^i$ is the residue at the $s = m_{\nu_i}^2$ pole that, as we will see, can be completely fixed by unitarity arguments. Notice that the amplitude is dominated by the kinematic region in which the exchanged neutrinos are close to their mass shell. For this reason we will discard the massive contact terms in what follows (see however~\cite{Durieux:2020gip} for a classification of the terms appearing). Another way to see that contact terms do not contribute follows from the fact that such terms are related to local interactions, and thus will not be relevant to the macroscopic propagation of the neutrino.
The residue at the pole in the s-channel of the 4-point amplitude can be computed according to~\cite{Arkani-Hamed:2017jhn}
\al{\label{eq:factorization}
\mathcal{R}_s^i =
\adjustbox{valign=m}{
\begin{tikzpicture}[line width=0.75]
\draw[v] (-0.75, 0.75) node[left]{$1_W$} -- (0,0) node[midway, above]{$\searrow$};
\draw[f] (0,0) -- (-0.75, -0.75) node[left]{$3_{\bar{e}_j}$} node[midway,below]{$\nearrow$};
\draw[f] (0.75,0) -- (0,0) node[midway,above]{$\stackrel{P}{\rightarrow}$} node[midway,below]{$\nu_i$};
\draw[v] (0.75,0) -- (1.5, 0.75) node[right]{$2_W$} node[midway,above]{$\swarrow$};
\draw[f] (1.5, -0.75) node[right]{$4_{e_k}$} -- (0.75, 0) node[midway,below]{$\nwarrow$};
\end{tikzpicture}} = \lim_{P^2 = s \to m_{\nu_i}^2} {\cal A}\left[1_W 3_{\bar{e}_j} (-P)_{\nu_i}^I \right] \epsilon_{IJ} {\cal A}[P_{\bar{\nu}_i}^J 2_W 4_{e_k} ] .
}
The main idea is that, close to the s-channel mass shell, the 4-point amplitude must factorize into the product of two 3-point amplitudes~\cite{Weinberg:1995mt}. To take into account negative momenta we use the analytic continuation
\begin{equation}
\ket{-\bm{P}} = \ket{\bm{P}}, ~~~~ \sket{-\bm{P}} = - \sket{\bm{P}},
\end{equation}
which is compatible with the Weyl equations and the definition of momentum in terms of spinor variables. The amplitude with an antineutrino and a charged lepton can be obtained from the one with a neutrino and a charged antilepton (see Eq.~\eqref{eq:3-point_flavor}) simply by complex conjugation, i.e. $\braket{\bm{ij}}^* = \sbraket{\bm{ji}}$. The two 3-point amplitudes appearing in Eq.~\eqref{eq:factorization} are
\al{\label{eq:amp1}
{\cal A}\left[1_W 3_{\bar{e}_j} (-P)_{\nu_i}^I \right] & = \frac{y_{L}^{ij}}{M} \braket{\bm{P}_i^I \bm{1}} \braket{\bm{3}_j \bm{1}} + \frac{g\, U^{ij}}{m_W} \braket{\bm{P}_i^I \bm{1}} \sbraket{\bm{3}_j \bm{1}} \\
& \qquad{} - \frac{g_{R}^{ij}}{m_W} \sbraket{\bm{P}_i^I \bm{1}} \braket{\bm{3}_j \bm{1}} - \frac{y_{R}^{ij}}{M} \sbraket{\bm{P}_i^I \bm{1}} \sbraket{\bm{3}_j \bm{1}}
}
and
\al{\label{eq:amp2}
{\cal A}[P_{\nu_i}^J 2_W 4_{e_k} ] & = \frac{y_{L}^{*ik}}{M} \sbraket{\bm{P}_i^J \bm{2}} \sbraket{\bm{4}_k \bm{2}} + \frac{g\, U^{*ik}}{m_W} \sbraket{\bm{P}_i^J \bm{2}} \braket{\bm{4}_k \bm{2}} \\
& \qquad{} + \frac{g_{R}^{*ik}}{m_W} \braket{\bm{P}_i^J \bm{2}} \sbraket{\bm{4}_k \bm{2}} + \frac{y_{R}^{*ik}}{M} \braket{\bm{P}_i^J \bm{2}} \braket{\bm{4}_k \bm{2}} .
}
The residue and the factorizable part of the 4-point amplitude can now be computed using the identities listed in Sec.~\ref{sec:scatt_amp}. They can be classified into (i) pure SM interactions, (ii) SM/NP interference and (iii) pure NP contributions. The different terms of the 4-point amplitude are the following: the pure SM contribution reads
\begin{equation}\label{eq:4amp_SMSM}
\mathcal{A}_{WWjk}^{\rm SM \times SM} = \frac{g^2}{m_W^2} \left(U^T \frac{1}{s - m_\nu^2} U^*\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})^{jk}_{\rm SM} ,
\end{equation}
the interference between the SM and NP contributions is
\al{\label{eq:4amp_SMNP}
\mathcal{A}_{WWjk}^{\rm SM \times NP} & =\frac{g}{m_W} \left[ \left(U^T \frac{1}{s - m_\nu^2} \frac{y_L^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm int},1}^{jk} + \left(U^T \frac{m_\nu}{s - m_\nu^2} \frac{g_R^*}{m_W}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm int}, 2}^{jk} \right.\\
& {} + \left(U^T \frac{m_\nu}{s - m_\nu^2} \frac{y_R^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm int},3}^{jk}
+ \left(\frac{y_L}{M} \frac{1}{s - m_\nu^2} U^*\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm int} ,4}^{jk} \\
& \left. + \left(\frac{g_R}{m_W} \frac{m_\nu}{s - m_\nu^2} U^*\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm int}, 5}^{jk} + \left(\frac{y_R}{M} \frac{m_\nu}{s - m_\nu^2} U^*\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm int},6}^{jk}\right]
}
and finally the pure NP contribution is given by
\al{\label{eq:4amp_NPNP}
\mathcal{A}_{WWjk}^{\rm NP \times NP} & = \left[ \left(\frac{y_L}{M} \frac{1}{s - m_\nu^2} \frac{y_L^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},1}^{jk} + \left(\frac{y_L}{M} \frac{m_\nu}{s - m_\nu^2} \frac{g_R^*}{m_W}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},2}^{jk} \right.\\
& + \left(\frac{y_L}{M} \frac{m_\nu}{s - m_\nu^2} \frac{y_R^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},3}^{jk} + \left(\frac{g_R}{m_{W}} \frac{m_\nu}{s - m_\nu^2} \frac{y_L^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},4}^{jk} \\
& + \left(\frac{g_R}{m_{W}} \frac{1}{s - m_\nu^2} \frac{g_R^*}{m_{W}}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},5}^{jk} + \left(\frac{g_R}{m_{W}} \frac{1}{s - m_\nu^2} \frac{y_R^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},6}^{jk} \\
& + \left(\frac{y_R}{M} \frac{m_\nu}{s - m_\nu^2} \frac{y_L^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},7}^{jk}+ \left(\frac{y_R}{M} \frac{1}{s - m_\nu^2} \frac{g_R^*}{m_{W}}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},8}^{jk}\\
& \left. {} + \left(\frac{y_R}{M} \frac{1}{s - m_\nu^2} \frac{y_R^*}{M}\right)_{jk} (\mathcal{M} \overline{\mathcal{M}})_{{\rm NP},9}^{jk}\right].
}
Explicit expressions for the $\mathcal{M}\overline{\mathcal{M}}$ terms are presented in App.~\ref{app:explicit_4point}. These objects are independent of the neutrino masses and depend only on the flavor of the external charged leptons.
Notice that the coefficient of the pure SM term in Eq.~\eqref{eq:4amp_SMSM} is precisely what is expected from the usual QFT treatment of neutrino oscillations. In our approach, however, we automatically obtain also the contributions due to BSM physics. In this sense, our procedure is similar in spirit to the one of Ref.~\cite{Falkowski:2019kfn}, in which NP effects are considered together with the SM ones. Interestingly, some of the contributions appearing in the SM/NP interference and pure NP terms are not only suppressed by the high scale $M$, but by the smallness of neutrino masses as well. With the 4-point amplitude at our disposal it is now a simple matter of algebra to compute the oscillation probability in terms of the PMNS matrix and of the coefficients $g_R$ and $y_{L,R}$. Since the computation is standard and can be found in Refs.~\cite{Akhmedov:2010ms,Falkowski:2019kfn}, we will not report it explicitly here. Useful formulas for the computation of the squared matrix element with spinor variables can be found in~\cite{Christensen:2019mch}.
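For orientation, we recall the form of the result: after the standard manipulations of Refs.~\cite{Akhmedov:2010ms,Falkowski:2019kfn}, the SM coefficient in Eq.~\eqref{eq:4amp_SMSM} yields, schematically, the familiar vacuum oscillation probability
\begin{equation*}
P_{\nu_j \to \nu_k}(L) \simeq \left| \sum_i U^{ij} \, e^{-i m_{\nu_i}^2 L/2E} \, U^{*ik} \right|^2
\end{equation*}
for a neutrino of energy $E$ propagating over a baseline $L$ (up to conventions for the PMNS matrix), while the interference and pure NP terms induce corrections suppressed by $M$ and/or by the neutrino masses.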
\section{Conclusions}\label{sec:conc}
In this work we have taken the first steps towards the application of on-shell amplitude techniques to phenomenological issues. We have chosen to analyze in detail how the formalism of neutrino oscillations in vacuum emerges in this framework, focussing for simplicity on processes involving the $W$ boson. We have first determined the structure of the 3-point amplitude and then used the factorization of higher-point amplitudes into lower-point amplitudes close to the kinematical poles to determine the structure of the 4-point amplitude in which neutrinos are exchanged in the s-channel. We have also discussed how the PMNS matrix emerges in this framework, since no mass terms to be diagonalized and no Lagrangians are available. To the best of our knowledge, this is the first time that the flavor properties of amplitudes are discussed from a completely on-shell perspective. Our main results are summarized in Eqs.~\eqref{eq:4amp_SMSM}--\eqref{eq:4amp_NPNP}, in which we show the 4-point amplitudes. This result can be used to compute the neutrino oscillation probability in the standard way. As can be seen, the SM term corresponds to the usual result, as it should. Additional terms appear, however, highlighting an important feature of on-shell amplitude techniques: all possible kinematic structures appear, independently of their UV origin. In this sense, the on-shell approach allows us to perform an all-order computation. We have explicitly determined the UV origin (in operator language) of the different terms appearing in the 3-point amplitude in Sec.~\ref{sec:3_point_amp}, finding differences between the Dirac and Majorana neutrino cases. Notice that another virtue of the on-shell formalism is that it allows us to perform the equivalent of an EFT computation without having to worry about operator bases and operator redundancies.
Our study can be extended in several directions. First of all, neutrino oscillations in matter can be analyzed using the neutrino interactions with the $Z$ boson. Moreover, neutrino oscillations can be studied using as a starting point the 4-point amplitude involving two leptons and two quarks, or four leptons. This should allow us to make a more concrete connection with the physical processes involved in neutrino production and detection. Finally, it would be interesting to determine the experimental bounds on the various coefficients appearing in the 3-point amplitudes solely from the on-shell perspective. We plan to explore some of these issues in forthcoming papers.
\acknowledgments
We thank Renata Zukanovich Funchal for carefully reading the manuscript and for raising interesting points. GFSA acknowledges financial support from FAPESP under contracts 2019/04837-9 and 2020/08096-0. EB acknowledges financial support from FAPESP under contracts 2019/15149-6 and 2019/04837-9. GMS acknowledges financial support from CAPES, Grant No. 88887.601945/2021-00.
In quantum computers, the basic unit of information is the qubit. Qubits are highly susceptible to noise. Hence, to protect the information, we use quantum codes.
A very popular class of quantum codes for protecting information are topological quantum codes. In this paper we focus on a subclass of topological codes in two spatial dimensions called color codes \cite{bombin06}.
To correct the impact of noise on the encoded information, we need a decoder. Novel decoding algorithms for 2D color codes have been proposed earlier in \cite{wang2009graphical,sarvepalli12,delfosse14,bombin2012universal}.
However, these are not optimal and do not meet the theoretical bounds for performance.
Furthermore, designing decoders for non-Pauli noise is a challenging problem.
Recent developments in the fields of machine learning (ML) and deep learning (DL) have motivated many researchers to apply these methods
to decoding quantum codes.
Torlai and Melko were the first to propose a decoder for surface codes based on neural networks \cite{PhysRevLett.119.030501}.
Since then, many other researchers have applied neural networks to study a variety of problems in the context of decoding
~\cite{PhysRevLett.119.030501,varsamopoulos2017decoding,Krastanov2017,baireuther2018neural,chamberland2018deep,davaasuren2018general,jia2018efficient,Breuckmann2018scalableneural,Baireuther2018machinelearning,maskara2018advantages}.
In this paper we only focus on decoding of color codes using neural networks.
Early work attempted to solve the decoding problem using neural networks entirely.
These approaches did not beat the non-neural methods.
An important development in this context was due to \cite{varsamopoulos2017decoding} who proposed a combination of neural networks and non-neural decoders.
More precisely, they have a two-step decoder where, in the first step, they estimate a pure error and, in the second step, they use a neural network which estimates the logical class.
In their recent work \cite{varsamopoulos2018designing}, they mention that any simple decoder can be used in the first step.
The authors of \cite{davaasuren2018general} claim that the work of \cite{varsamopoulos2017decoding} is a special case of their generalized framework of building neural networks for decoding stabilizer codes.
The works of \cite{baireuther2018neural,chamberland2018deep} attempt to use neural networks for fault-tolerant setting.
The most relevant work to ours is \cite{maskara2018advantages} in which a similar combination of two decoders is employed to conclusively demonstrate the usefulness of neural decoders.
They proposed a neural decoder with progressive training procedure that outperformed previously known decoders for 2D color codes.
In this work, we propose a similar two-step neural decoder for color codes and study its performance for the hexagonal color code on the torus.
We propose two variations, one which achieves a threshold of $10 \%$ and another with an important modification that achieves a near optimal threshold for independent bit-flip/phase-flip noise model.
This modification can be incorporated in other neural network based decoders and could be of potentially larger importance.
The main challenge involved with neural networks is determining the correct architecture in order to improve the overall threshold.
We model our non-neural decoder in a simple way and show the advantages of doing so: improved performance of the neural decoder, reduced training cost, and better scaling with the distance of the code.
Our main contributions are,
\begin{compactenum}[1)]
\item We propose a two-step neural decoder with a simple decoding procedure in the first-step, applicable for all stabilizer codes.
\item We suggest an alternative approach to combining the non-neural and the neural decoder which can be incorporated in other neural network based decoders.
\item Our proposed approaches seem to have significant advantages with respect to training cost and complexity of the network for higher lengths when compared to the previous work of Maskara {\em et al.} \cite{maskara2018advantages}.
\end{compactenum}
The paper is organized as follows. We review the necessary background on Quantum Error Correction (QEC), ML and DL in Section~\ref{sec:Background}. We then describe our approach, the neural architecture used in detail and compare it with related work in Section~\ref{sec:OurWork}. In Section~\ref{sec:Insights}, we point out valuable insights from our work and conclude in Section~\ref{sec:Conclusion}.
\section{\label{sec:Background} Background}
In this section, we summarize the necessary background on Quantum Error Correcting Codes (QECC).
In Section~\ref{subsec:StabilizerFormalism}, we briefly review stabilizer codes.
In this paper we focus on color codes, which are introduced in Section~\ref{subsec:ColorCodes}. Lastly, in Section~\ref{subsec:Background:ML_DL} we describe the basics of ML and DL, with emphasis on deep learning, by discussing the various components of a neural network which can be changed depending on the problem to be solved.
\subsection{\label{subsec:StabilizerFormalism} Stabilizer codes}
In this section, we briefly review stabilizer codes.
Recall that the Pauli group on a single qubit is generated by the Pauli matrices $\{\pm i I,X,Y,Z\}$.
The group $\mathcal{P}_n$ consists of $n$-fold tensor products of single-qubit Pauli operators, $P_{1} \otimes P_{2}\otimes \cdots \otimes P_{n}$.
A stabilizer code is defined by an abelian subgroup $\mathcal{S} \subset \mathcal{P}_n$, such that $-I\not\in \mathcal{S}$.
The codespace $\mathcal{Q}$ is the joint $+1$-eigenspace of $\mathcal{S}$:
\begin{eqnarray*}
\mathcal{Q} = \{\: \ket \psi \in (\mathbb{C}^2)^{\otimes n} \mid S \ket \psi = \ket \psi \: \text{ for all }\: S\in \mathcal{S} \: \}
\end{eqnarray*}
An $[[n,k]]$ stabilizer code encodes $k$ logical qubits into $n$ physical qubits and its stabilizer $\mathcal{S}$ will have $n-k$ independent generators.
We assume that $\mathcal{S}$ is generated by $\mathcal{S}_g = \{S_1,\hdots, S_m \}$, where $m \geq n-k$ and $S_1,\ldots, S_{n-k}$ are linearly independent.
Let $\mathcal{C}(\mathcal{S})$ be the centralizer of $\mathcal{S}$, i.e., the set of all Pauli operators that commute with all the elements of $\mathcal{S}$.
Let $\mathcal{L}_{g} =\{\overline{X}_{i},\overline{Z}_{i}\}_{i=1}^k$, where $\overline{X}_i$ and $\overline{Z}_{j}$ denote the logical $X$ and $Z$ operators of the code.
Also, $\overline{X}_{i},\overline{Z}_{j}$ commute if $i \ne j$ and anti-commute if $i = j$.
Let $\mathcal{L} = \langle \overline{X}_{1}, \ldots, \overline{X}_{k}, \overline{Z}_{1}, \ldots, \overline{Z}_{k}\rangle$.
We define another set of operators $\mathcal{T}_{g}=\{ T_{1}, T_{2}, \hdots T_{n-k}\}$ called the pure errors,
such that $T_{i}$ and $S_{j}$ commute if $i \neq j$ and anti-commute if $i = j$.
The pure errors commute with each other and also with the logical operators.
Let $\mathcal{T} = \langle T_1,\ldots, T_{n-k}\rangle$.
Note that $\{\mathcal{S}_{g}, \mathcal{L}_{g}, \mathcal{T}_{g}\}$ together form a generating set for $\mathcal{P}_{n}$.
An error operator $E \notin \mathcal{C}(\mathcal{S})$ will anti-commute with at least one stabilizer operator in the group $\mathcal{S}$. If $E$ anti-commutes with the $i^{th}$ stabilizer $S_{i} \in \mathcal{S}$, the $i^{th}$ syndrome bit $s_i$ is one, and zero otherwise.
By calculating the syndrome values for all the stabilizer generators, the syndrome vector can be written as $\mathbf{s}=(s_{1}, s_{2}, \ldots , s_{m})$, where $m \geq n-k$.
We can write the error operator $E = T L S$ up to a phase as proposed in \cite{duclos2010renormalization}. Here $T \in \mathcal{T}$, $S \in \mathcal{S}$ and $L \in \mathcal{L}$. Note that the operators $T$, $L$, $S$ depend on the error $E$.
The effect of $S$ is trivial, implying that two error patterns $E$ and $E'=SE$, with $S \in \mathcal{S}$, have the same effect on the codespace. Multiplication by elements of $\mathcal{S}$ thus introduces an equivalence relation on error operators, and hence finding $S$ is of little interest. Also, given the syndrome $\mathbf{s}$, we can uniquely identify $T$, but identifying $L$ is a difficult task.
The problem of error correction for stabilizer codes is finding the most likely $L$ given the syndrome vector $\mathbf{s}$.
Mathematically, we can write this as,
\begin{gather}
\widehat{L} = \underset{\gamma \> \in \> \mathcal{L}}{\argmax} \> Pr\left(\gamma \> \vert \> \mathbf{s}\right) = \underset{\gamma \> \in \> \mathcal{L}}{\argmax} \> \underset{\delta \> \in \> \mathcal{S}}{\sum} Pr\left(\gamma\delta \> \vert \> \mathbf{s}\right)
\label{eq:mlestimate}
\end{gather}
Decoding can be thought of as a classification problem.
We have $4^k$ classes, which is exponential in $k$, so this reformulation of the decoding problem as a classification is not much help for large $k$; for instance, for $k=1$ there are just four classes, represented by $I$, $\overline{X}$, $\overline{Y}$ and $\overline{Z}$.
Fortunately, surface codes and color codes encode a fixed number of logical qubits for any length, and this reformulation can be taken advantage of.
However, this is not sufficient: note that the computation of the probabilities in Eq.~\eqref{eq:mlestimate} requires a summation over $2^{n-k}$ terms, which is of exponential complexity.
So the reformulation of decoding as classification is by itself not adequate, and further work is required to fully exploit this perspective.
\subsection{\label{subsec:ColorCodes} Color codes}
Topological codes are a class of stabilizer codes where the stabilizer generators are spatially local. Popular examples of topological codes are Toric codes \cite{KITAEV20032} and Color codes \cite{bombin06}.
Color codes are defined using a lattice embedded on a surface.
Every vertex is trivalent and faces are 3-colorable.
Qubits are placed on the vertices of the lattice and for each face $f$, we define $X$- and $Z$-type operators called the face operators.
We define the stabilizers as,
\begin{gather*}
Z^{\left(f\right)} = \underset{v \in f}{\prod} Z_{v}, \hspace{1cm}
X^{\left(f\right)} = \underset{v \in f}{\prod} X_{v}
\end{gather*}
The $X$- and $Z$-type operators of all the faces generate the stabilizers of the color code.
The color code on a hexagonal lattice with periodic boundary is shown in the Fig.~\ref{fig:colorCode}. It encodes four logical qubits~\cite{bombin06}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{Figs/colorCode}
\caption{\label{fig:colorCode}Periodic color code on a hexagonal lattice illustrated with a face and a stabilizer.}
\end{figure}
\subsection{\label{subsec:Background:ML_DL}Machine Learning and Deep Learning}
\subsubsection{\label{subsubsec:ML}An overview of Machine Learning}
In traditional computing, algorithms are sets of explicitly programmed instructions which perform a specific task, producing the correct output for a given input.
ML is an approach that learns patterns from data through statistical analysis and makes predictions without those rules being programmed explicitly.
ML algorithms are therefore data-driven methods, and the process of learning these rules or patterns is called training of the ML model. Training is essentially an optimization process minimizing an objective function called the loss function. This loss function plays an important role in the algorithm learning these patterns and making good predictions.
There are many such algorithms for solving problems of classification, regression, etc., and some of them are mentioned in~\cite{Kotsiantis:2007:SML:1566770.1566773,Domingos:2012:FUT:2347736.2347755}. In principle any function can be used as a loss function, but not every choice helps the algorithm learn. There exist specific loss functions which are mathematically proven to be apt for each of the above mentioned tasks.
Mathematically, the core of any ML algorithm is to estimate the parameters of a function or set of functions which solve the given task.
Training can be classified into two types: supervised learning and unsupervised learning.
The requirement for supervised learning is a labeled dataset of inputs $\left(\mathbf{x}\right)$ and the corresponding true outputs $\left(\mathbf{y}\right)$. These true outputs are sometimes referred to as the ground truth. The ML algorithm learns the patterns in the data from this information of input and correct output during training, and tries to make the correct prediction $\left(\mathbf{\widehat{y}}\right)$ during testing. Examples include classification and regression.
In unsupervised learning, we still have input data but the corresponding ground truth information is not present. The ML algorithm is required to learn the patterns from the input data alone, without the information of the ground truth. An example is clustering.
\subsubsection{\label{subsubsec:DL}An overview of Deep Learning}
\noindent
\textbf{\textit{Neuron and Activation functions:}}
A \textit{neuron} is an element which takes an input $\mathbf{x}$ and performs the operation $f\left(\mathbf{w}^{\top}\mathbf{x} + b\right)$ as shown in the Fig.~\ref{fig:neuron}.
The parameters $\mathbf{w}$ are called weights and the parameter $b$ is called the bias.
The entries of the vectors $\mathbf{x}$ and $\mathbf{w}$, as well as the bias $b$, are real numbers. The function $f$ is a non-linear function and is called the \textit{activation function}. Some common activation functions include \verb+Sigmoid+, \verb+TanH+, \verb+ReLU+ (Rectified Linear Unit) etc., as shown in Fig.~\ref{fig:activationFunctions} and exhaustively discussed in~\cite{Goodfellow-et-al-2016}.
Deep Learning (DL) is a method in ML to estimate the parameters of a function using combinations of this basic element, the neuron.
It is common to refer to the combined set of parameters $\mathbf{w}$ and $b$ as weights or parameters, and we follow this convention in our subsequent discussion.
The activation function plays a very crucial role in DL since without it, a neuron performs only a linear operation.
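As a concrete illustration, the operation of a single neuron can be coded in a few lines of Python; the following is a minimal sketch using NumPy, with arbitrary example values for the input, weights and bias.
\begin{verbatim}
import numpy as np

def relu(z):
    # ReLU activation: elementwise max(z, 0)
    return np.maximum(z, 0.0)

def neuron(x, w, b, f=relu):
    # A single neuron: f(w^T x + b)
    return f(np.dot(w, x) + b)

x = np.array([1.0, -2.0, 0.5])   # input vector
w = np.array([0.3, 0.1, -0.4])   # weights (arbitrary values)
b = 0.2                          # bias
print(neuron(x, w, b))           # relu(-0.1 + 0.2) = 0.1
\end{verbatim}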
\begin{figure}
\centering
\begin{tikzpicture}[
init/.style={
draw,
circle,
inner sep=2pt,
font=\Huge,
join = by -latex
},
squa/.style={
draw,
inner sep=2pt,
font=\Large,
join = by -latex
},
start chain=2,node distance=6mm
]
\node[on chain=2]
(x2) {$x_2$};
\node[on chain=2,join=by o-latex]
{$w_2$};
\node[on chain=2,init] (sigma)
{$\displaystyle\Sigma$};
\node[on chain=2,squa,label=above:{\parbox{2cm}{\centering Activation \\ function}}]
{$f$};
\node[on chain=2,label=above:Output,join=by -latex]
{$f\left(\mathbf{w}^{\top}\mathbf{x}+b\right)$};
\begin{scope}[start chain=1]
\node[on chain=1] at (0,1.5cm)
(x1) {$x_1$};
\node[on chain=1,label=above:{Weights $\left(\mathbf{w}\right)$},join=by o-latex]
(w1) {$w_1$};
\end{scope}
\begin{scope}[start chain=3]
\node[on chain=3] at (0,-1.5cm)
(x3) {$x_n$};
\node[on chain=3,join=by o-latex]
(w3) {$w_n$};
\end{scope}
\node[label=above:\parbox{2cm}{\centering Bias \\ $b$}] at (sigma|-w1) (b) {};
\begin{scope}[start chain=4]
\node[on chain=4] at (0,-0.6cm)
(x4) {$\vdots$};
\node[on chain=4] at (0.5cm, -0.6cm)
(w4) {$\vdots$};
\end{scope}
\draw[-latex] (w1) -- (sigma);
\draw[-latex] (w3) -- (sigma);
\draw[o-latex] (b) -- (sigma);
\draw[decorate,decoration={brace,mirror}] (x1.north west) -- node[left=10pt] {Inputs} (x3.south west);
\end{tikzpicture}
\caption{\label{fig:neuron} A single neuron which accepts input $\mathbf{x}$ and outputs $f\left(\mathbf{w}^{\top}\mathbf{x} + b\right)$ where $f$ is an activation function. The vectors $\mathbf{x}, \mathbf{w} \in \mathbb{R}^{n}$ and $b \in \mathbb{R}$.}
\end{figure}
\begin{figure*}
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=\textwidth, xlabel={$x$}, legend pos=north west,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
}
]
\addplot[smooth, thick, solid, color=blue] {1/(1+exp{-x})};
\legend{$\frac{1}{1+e^{-x}}$}
\end{axis}
\end{tikzpicture}
}
\label{subfig:sigmoid}
\subcaption{\texttt{Sigmoid} function}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=\textwidth, xlabel={$x$}, legend pos=north west,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
}
]
\addplot[smooth, thick, solid, color=blue] {tanh(x)};
\legend{$\tanh\left(x\right)$}
\end{axis}
\end{tikzpicture}
}
\label{subfig:tanh}
\subcaption{\texttt{TanH} function}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=\textwidth, xlabel={$x$}, legend pos=north west, domain=-3:3,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
}
]
\addplot+[mark=none, smooth, thick, blue, domain=-1:0] {0};
\addplot+[mark=none, smooth, thick, blue, domain=0:1] {x};
\legend{$\max\left(x,0\right)$\\}
\end{axis}
\end{tikzpicture}
}
\label{subfig:relu}
\subcaption{\texttt{ReLU} function}
\end{subfigure}
\caption{\label{fig:activationFunctions} Various activation functions used commonly in DL. Note that \texttt{ReLU} does not saturate for high inputs.}
\end{figure*}
\noindent
\textbf{\textit{Architectures:}}
Different combinations of these basic neurons result in different architectures.
Some of such famous architectures are Fully-Connected Networks (FC), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) etc.
All these architectures comprise layers, which are again combinations of neurons. Essentially, these architectures can be characterized by their layers.
\noindent
\textbf{\textit{Fully-connected Network:}}
We briefly describe the FC architecture which we use in our work, as shown in Fig.~\ref{fig:FCNetwork}. Any FC network has an input layer, an output layer and hidden layers. Each layer comprises neurons and each neuron is connected to every neuron in the adjacent layers. Connectedness implies that each neuron receives the outputs of the neurons it is connected to in the previous layer and passes its own output to all the connected neurons in the next layer. All the neurons in every layer follow this rule, except that the neurons in the input layer take the input from the data and the neurons in the output layer give us the final prediction. The input data and the output prediction vary from problem to problem. In a simple image classification task, the input data is the image and the output is the class label.
As mentioned before, the non-linear function plays a crucial role in the success of DL in estimating complicated functions efficiently, making DL a very powerful tool.
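For concreteness, such a network can be assembled in a few lines with the PyTorch library; the following is a minimal sketch, with placeholder layer sizes that are not the ones used in our experiments.
\begin{verbatim}
import torch.nn as nn

# A fully-connected network with one hidden layer: every neuron
# is connected to all neurons in the adjacent layers.
model = nn.Sequential(
    nn.Linear(8, 16),   # input layer (m=8) to hidden layer (n=16)
    nn.ReLU(),          # non-linear activation
    nn.Linear(16, 4),   # hidden layer to output layer (t=4)
)
\end{verbatim}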
\tikzset{%
every neuron/.style={
circle,
draw,
minimum size=1cm
},
neuron missing/.style={
draw=none,
scale=4,
text height=0.333cm,
execute at begin node=\color{black}$\vdots$
},
}
\begin{figure}
\centering
\begin{tikzpicture}[x=1.5cm, y=1.5cm, >=stealth]
\foreach \m/\l [count=\y] in {1,2,3,missing,4}
\node [every neuron/.try, neuron \m/.try] (input-\m) at (0,2.5-\y) {};
\foreach \m [count=\y] in {1,missing,2}
\node [every neuron/.try, neuron \m/.try ] (hidden-\m) at (1.5,2-\y*1.25) {};
\foreach \m [count=\y] in {1,missing,2}
\node [every neuron/.try, neuron \m/.try ] (output-\m) at (3,1.5-\y) {};
\foreach \l [count=\i] in {1,2,i,m}
\draw [<-] (input-\i) -- ++(-1,0)
node [above, midway] {$x_\l$};
\foreach \l [count=\i] in {1,n}
\node [] at (hidden-\i.center) {$h_\l$};
\foreach \l [count=\i] in {1,t}
\draw [->] (output-\i) -- ++(1,0)
node [above, midway] {$\widehat{y_\l}$};
\foreach \i in {1,...,2}
\foreach \j in {1,...,2}
\draw [->] (input-\i) -- (hidden-\j);
\foreach \i in {3}
\foreach \j in {1}
\draw [->] (input-\i) -- (hidden-\j)
node [above, midway] {$w^{(0)}_{i\j}$};
\foreach \i in {3}
\foreach \j in {2}
\draw [->] (input-\i) -- (hidden-\j)
node [above, midway] {$w^{(0)}_{in}$};
\foreach \i in {4}
\foreach \j in {1,...,2}
\draw [->] (input-\i) -- (hidden-\j);
\foreach \i in {1}
\foreach \j in {1}
\draw [->] (hidden-\i) -- (output-\j)
node [above, midway] {$w^{(1)}_{\i\j}$};
\foreach \i in {1}
\foreach \j in {2}
\draw [->] (hidden-\i) -- (output-\j)
node [above, midway] {$w^{(1)}_{\i t}$};
\foreach \i in {2}
\foreach \j in {1}
\draw [->] (hidden-\i) -- (output-\j)
node [below, midway] {$w^{(1)}_{n\j}$};
\foreach \i in {2}
\foreach \j in {2}
\draw [->] (hidden-\i) -- (output-\j)
node [below, midway] {$w^{(1)}_{nt}$};
\foreach \l [count=\x from 0] in {Input, Hidden, Output}
\node [align=center, above] at (\x*1.5,2) {\l \\ layer};
\end{tikzpicture}
\caption{\label{fig:FCNetwork} A sample fully-connected architecture with one hidden layer. Each neuron in every layer is connected to every other neuron in the adjacent layers. In this example, the size of the input vector is $m$ and the size of the output vector is $t$. There are $n$ hidden nodes in the hidden layer. The parameters $\mathbf{w}$ represent the weights of the network.}
\end{figure}
\noindent
\textbf{\textit{Loss functions:}}
The loss function plays a prominent role in the performance of any DL model. It is calculated between the true label $\left(\mathbf{y}\right)$, or the ground truth, and the prediction made by the network $\left(\mathbf{\widehat{y}}\right)$. The training procedure, as described next, ensures that the predictions made by the network get closer to the ground truth by minimizing the loss function as the training progresses. For regression problems, commonly used loss functions are the $\ell_{2}$ and $\ell_{1}$ norms as defined below.
\begin{gather*}
\ell_{2}\left(\mathbf{y}, \mathbf{\widehat{y}}\right) = \left\lVert \mathbf{y} - \mathbf{\widehat{y}}\right\rVert_{2}^{2} = \sum_{i} \left(y_{i} - \widehat{y_{i}}\right)^{2} \\
\ell_{1}\left(\mathbf{y}, \mathbf{\widehat{y}}\right) = \left\lVert \mathbf{y} - \mathbf{\widehat{y}}\right\rVert_{1} = \sum_{i} \left\vert y_{i} - \widehat{y_{i}}\right\vert
\end{gather*}
For classification problems, \textit{cross-entropy} ($\ell_{CE}$) is used as the loss function which is defined in the following equation.
\begin{gather*}
\ell_{CE}\left(\mathbf{y}, \mathbf{\widehat{y}}\right) = - \sum_{i}y_{i}\log \left(\widehat{y}_{i}\right)
\end{gather*}
We use this cross-entropy loss in our work since QEC can be viewed as a classification problem as described in Section~\ref{subsec:OurWork:ProblemModeling}. We discuss the reasons for using this loss in Section~\ref{subsec:OurWork:TrainingProcedure}.
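As a numerical illustration, the cross-entropy between a one-hot ground truth and a prediction can be computed as follows; this is a sketch in Python with made-up probabilities, and the small constant \texttt{eps} only guards against taking $\log 0$.
\begin{verbatim}
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    # l_CE(y, y_hat) = -sum_i y_i * log(y_hat_i)
    return -np.sum(y * np.log(y_hat + eps))

y     = np.array([0.0, 1.0, 0.0, 0.0])  # one-hot ground truth
y_hat = np.array([0.1, 0.7, 0.1, 0.1])  # predicted probabilities
print(cross_entropy(y, y_hat))          # -log(0.7) ~ 0.357
\end{verbatim}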
\noindent
\textbf{\textit{Training:}}
Training is nothing but estimating the values of the weights of the network which minimize the chosen loss function for the given training data, or input-output pairs. One of the traditional methods of updating the weights to minimize a function is the \textit{Gradient Descent} (GD) algorithm. It is an iterative algorithm which optimizes the objective function, in our case minimizing the loss function $\left(\ell\right)$, by updating the weights $\left(\mathbf{w}\right)$ of the network in each iteration according to the update rule defined below, as discussed in~\cite{Goodfellow-et-al-2016}.
\begin{gather*}
\mathbf{w}_{t+1} = \mathbf{w}_{t} - \alpha \nabla_{\mathbf{w}} \ell\left(\mathbf{y}, \mathbf{x}, \mathbf{w}_{t}\right)
\end{gather*}
Here, $\mathbf{w}_{i}$ are the weights of the network at the $i^{th}$ iteration. The weights $\mathbf{w}_{0}$ are initialized randomly. There are many methods to initialize these weights and we discuss them shortly. The parameter $\alpha$ is called the \textit{learning-rate} and is a \textit{hyper-parameter}. There are many such hyper-parameters and we also discuss them later in this section. Both the speed of convergence and the optimum to which the model converges depend on $\alpha$.
The gradient descent algorithm requires us to train on the entire training dataset at once, i.e., calculate the average loss for all the inputs in the dataset and then update the weights. Since that is usually not computationally feasible, a popular variant of it called \textit{Stochastic Gradient Descent} (SGD) is employed. Instead of training on the entire dataset at once, the model is trained on small batches of data until all the training data is exhausted, which completes one \textit{epoch}. The size of this batch is called the \textit{batch-size}, as mentioned in~\cite{Goodfellow-et-al-2016}. For example, if the entire dataset contains $1000$ data points, then GD requires us to calculate the average loss on all the $1000$ inputs and then update the weights in one iteration. In SGD, say we choose the batch-size to be $50$; then $50$ data points are chosen randomly from the entire dataset of $1000$. The average loss is calculated for that batch of $50$ and the weights are updated. This completes one iteration. In the second iteration, another set of $50$ data points is chosen randomly from the remaining $950$ data points and the rest of the procedure follows. In this example, a total of $20$ iterations is required to exhaust the entire dataset, which completes an epoch.
One of the major limitations of gradient descent and its variants is that it does not guarantee convergence to a global optimum. Since the loss is calculated between the true label $\left(\mathbf{y}\right)$ and the prediction of the network $\left(\mathbf{\widehat{y}}\right)$, it is indirectly a function of the weights of the network $\mathbf{w}$, since $\mathbf{\widehat{y}}$ is a function of $\mathbf{w}$ and $\mathbf{x}$.
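The batching logic of the example above can be summarized in a short Python sketch; here \texttt{grad\_loss} is a placeholder for the gradient computation, which in practice is carried out by back-propagation as described next.
\begin{verbatim}
import numpy as np

def sgd(w, data, grad_loss, lr=0.01, batch_size=50, epochs=10):
    # Stochastic gradient descent: shuffle the data each epoch
    # and update the weights once per mini-batch.
    n = len(data)
    for _ in range(epochs):
        np.random.shuffle(data)
        for start in range(0, n, batch_size):
            batch = data[start:start + batch_size]
            w = w - lr * grad_loss(w, batch)  # w <- w - a * grad
    return w
\end{verbatim}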
\noindent
\textbf{\textit{Weight initialization and back-propagation:}}
Before training, the weights of the NN, $\mathbf{w}$, are randomly initialized. Weight initialization plays a crucial role in the training and performance of the NN. There are many weight initialization methods, but the popular ones are proposed by~\cite{He:2015:DDR:2919332.2919814} and~\cite{pmlr-v9-glorot10a}. These methods have been shown to perform well in solving classification problems. Training neural networks can be incredibly costly with GD or SGD, but with the use of a dynamic programming based algorithm called the \textit{back-propagation} algorithm, the cost of training is reduced significantly, as discussed in~\cite{Goodfellow-et-al-2016}. The back-propagation algorithm still uses gradient descent, but it stores the gradients at the current layer in order to calculate the gradients with respect to the weights of the previous layer.
\noindent
\textbf{\textit{Optimizers:}}
There are many variants of the SGD algorithm described above like RMSProp, AdaGrad as mentioned in~\cite{Goodfellow-et-al-2016} which have a modified update rule. All these rules are commonly called \textit{optimizers} since they optimize the weights of our network in order to minimize the loss function. We use \textit{Adam} optimizer, proposed by \cite{kingma2014adam} because of the significant improvements it offers during training and also in the performance of deep neural networks.
\noindent
\textbf{\textit{Hyper-parameters:}}
As we can see, numerous design decisions are required to build a neural network, such as the architecture, the loss function, the activation function, the weight initialization, the optimizer, etc. Once those are selected, we have a few more parameters to experiment with, listed as follows,
\begin{compactenum}[i)]
\item The number of hidden layers
\item The learning rate
\item The number of neurons in each layer
\item The batch-size
\end{compactenum}
These parameters are called \textit{hyper-parameters} of the network. Choosing the right set of hyper-parameters for a given problem is one of the biggest challenges of DL. These parameters play a crucial role in both training and performance of the networks because, as mentioned previously, the training procedure does not guarantee convergence to a global minimum of the loss function.
\subsubsection{\label{subsubsec:DL:process_flow} Process flow of a common DL architecture}
The process flow of any DL architecture can be modeled as shown in Fig.~\ref{fig:DLProcessFlow}. The NN can be any neural network as described previously. The NN takes an input $\mathbf{x}$ from the training data and makes a prediction $\mathbf{\widehat{y}}$. The loss is calculated between the ground truth $\mathbf{y}$ and the prediction $\mathbf{\widehat{y}}$. The optimizer then updates the weights of the NN according to the update rule. This whole process completes one iteration during training. We repeat this process until the loss value between $\mathbf{y}$ and $\mathbf{\widehat{y}}$ saturates over multiple iterations.
\begin{figure}
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.57]{Figs/DLProcessFlow}
\caption{\label{fig:DLProcessFlow}The process flow of any deep learning network. The NN represents any neural network either FC, CNN, RNN etc. It takes input $\mathbf{x}$ and makes the prediction $\mathbf{\widehat{y}}$. The loss is calculated between the ground truth $\mathbf{y}$ and the prediction $\mathbf{\widehat{y}}$ using the weights during the iteration $t$. The optimizer calculates the updates $\Delta \mathbf{w}$ according to the update rule and modifies the weights of the network for the $\left(t+1\right)^{th}$ iteration.}
\end{figure}
\subsubsection{\label{subsubsec:ML:classification} Classification problem}
In machine learning and statistics, classification is the problem of identifying to which of a set of categories or classes a new observation belongs. This relation is statistically obtained from training data. A classification algorithm predicts the confidence score or the probability of the new observation belonging to a particular class. This can be illustrated with a dummy example of classification between domestic cats and dogs with the knowledge of their weight and height, as shown in Fig.~\ref{fig:classification}. The weight and height are called the \textit{features}, since the algorithm classifies using this information. Estimating the parameters of the line solves the classification problem. In general the boundary could be a complicated curve and there could be multiple classes with multiple features. Commonly, these features might not be directly available and we have to devise algorithms to extract them from the input.
Mathematically, if we assume the feature vector to be $\mathbf{f}$ for an observation $x$ and the total classes are the set $\mathcal{C}$, then the prediction $\widehat{y}$ is the most likely class that $x$ belongs to as defined in the following equation.
\begin{gather*}
\widehat{y} = \underset{c \> \in \> \mathcal{C}}{\argmax} \> Pr\left(x \in c \> \vert \> \mathbf{f}\right)
\end{gather*}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.9]
\begin{axis}[xlabel={Weight}, ylabel={Height}, legend pos=south east]
\addplot[color=blue, only marks, mark=*] plot coordinates {
(15, 30)
(16, 25)
(16.5, 26)
(17, 32)
(19, 30)
(18.5, 35)
(18, 33)
(19, 34)
};
\addplot[color=red, only marks, mark=star] plot coordinates {
(20, 50)
(22, 45)
(23, 47)
(21, 51.2)
(25, 45.6)
(24, 49)
(22.3, 48)
(23.2, 47.6)
};
\legend{cats, dogs}
\addplot+[mark=none, smooth, thick, black, domain=16:23.5] {90 - 2.5*x};
\end{axis}
\end{tikzpicture}
\caption{\label{fig:classification} Simple classification between domestic cats and dogs depending on weight and height using dummy data. Estimating the parameters of the boundary solves the classification problem.}
\end{figure}
Generally, traditional ML algorithms require us to extract these features $\left(\mathbf{f}\right)$ from the input $\left(\mathbf{x}\right)$ using hand-coded rules, whereas neural networks are known to extract them by themselves from the input directly, as shown for example in \cite{krizhevsky2012imagenet}. This helps immensely in the success of DL, since the network learns to extract the features important for solving the problem, instead of us using hand-coded rules to extract what we think are important features.
\section{\label{sec:OurWork} Decoding Color Codes using Neural Networks}
In this section, we describe our problem formulation for the correction of phase errors and how the decoding can be modeled as a classification problem. For any stabilizer code, every error $E$ can be uniquely decomposed into the pure error $T$, a logical operator $L$ and a stabilizer $S$, as mentioned in Section~\ref{subsec:StabilizerFormalism}.
\begin{gather*}
E = T L S
\end{gather*}
Given the syndrome $\mathbf{s}$, we can uniquely identify $T$. Since multiplication by stabilizers only defines an equivalence class of errors, the decoding problem comes down to correctly estimating $L$ given $\mathbf{s}$.
In this work, we study CSS codes which have two types of stabilizers, $X$ and $Z$. They can be written in the matrix form as,
\begin{eqnarray*}
\mathbf{S} = \left[
\begin{array}{cc}
\mathbf{H}_{X} & 0 \\
0 & \mathbf{H}_{Z}
\end{array}
\right]
\end{eqnarray*}
Phase errors produce non-zero syndromes of the $X$ stabilizers, and hence we consider only the $X$ stabilizers from now on. The matrix $\mathbf{H}_{X}$ represents the $X$ stabilizers and $\mathbf{H}_{Z}$ represents the $Z$ stabilizers. For 2D color codes, $\mathbf{H}_{X} = \mathbf{H}_{Z}$ and in the subsequent equations, we use $\mathbf{H}$ instead of $\mathbf{H}_{X}$ for simplicity. Denote the binary representation of $E$ as $\mathbf{e} \in \mathbb{F}_2^{n}$.
Then we can calculate the corresponding syndrome as,
\begin{gather}
\mathbf{s}^{\top} = \mathbf{H} \mathbf{e}^{\top}
\label{eq:synPC}
\end{gather}
The matrix $\mathbf{H}$ is not full rank. In the color code, the $X$ stabilizers corresponding to the faces satisfy two dependencies, as mentioned in~\cite{PhysRevLett.97.180501}. We remove two dependent stabilizers from the $\mathbf{H}$ matrix, one each corresponding to two different colors, and denote the resulting full-rank matrix as $\mathbf{H}_{f}$. We calculate the right pseudo-inverse of $\mathbf{H}_{f}$ and denote it as $\mathbf{H}_{f}^{\dagger}$.
\begin{gather}
\mathbf{H}_{f} \mathbf{H}_{f}^{\dagger} = \mathbf{I} \label{eq:pseudoInverse}
\end{gather}
The resultant syndrome which does not list the syndromes calculated by the removed dependent stabilizers is denoted by $\mathbf{s}_{f}$ as shown below.
\begin{gather}
\mathbf{s}_{f}^{\top} = \mathbf{H}_{f} \mathbf{e}^{\top} \label{eq:syndromeCalc}
\end{gather}
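As an illustration, the syndrome computation of Eq.~\eqref{eq:syndromeCalc} is simply a matrix-vector product modulo 2; the following Python sketch uses a toy check matrix, not an actual color code.
\begin{verbatim}
import numpy as np

# Toy full-rank check matrix and an error vector over F_2.
Hf = np.array([[1, 1, 0, 1],
               [0, 1, 1, 1]], dtype=np.uint8)
e  = np.array([1, 0, 0, 0], dtype=np.uint8)

s_f = (Hf @ e) % 2   # s_f^T = Hf e^T, arithmetic mod 2
print(s_f)           # -> [1 0]
\end{verbatim}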
\subsection{\label{subsec:OurWork:ProblemModeling} QEC as a classification problem}
Researchers have previously studied the perspective of quantum error correction as a classification problem using neural networks \cite{varsamopoulos2017decoding,maskara2018advantages,chamberland2018deep}.
As mentioned before, we model our decoder as a two-step process. The first-step is a simple inversion where we calculate an estimate $\widehat{E}$ of the actual error $E$ which has occurred. We first calculate the syndrome from Eq.~\eqref{eq:syndromeCalc} and then estimate $\mathbf{\widehat{e}} \in \mathbb{F}_{2}^{n}$, the binary representation of the operator $\widehat{E}$ as follows,
\begin{gather}
\mathbf{\widehat{e}}^{\top} = \mathbf{H}_{f}^{\dagger} \mathbf{s}_{f}^{\top}
\label{eq:errorEstimateHInv}
\end{gather}
Note that the syndrome of the estimate $\mathbf{\widehat{e}}$ will be the same as the syndrome of $\mathbf{e}$. Hence, they have the same pure error $T$.
\begin{gather}
\mathbf{H}_{f} \mathbf{\widehat{e}}^{\top} = \mathbf{H}_{f} \mathbf{H}_{f}^{\dagger} \mathbf{s}_{f}^{\top} = \mathbf{s}_{f}^{\top} \nonumber\\
\Longrightarrow \mathbf{H} \mathbf{\widehat{e}}^{\top} = \mathbf{H} \mathbf{e}^{\top} = \mathbf{s}^{\top} \label{eq:sameSyn}
\end{gather}
This estimate $\mathbf{\widehat{e}}$ computed using Eq.~\eqref{eq:errorEstimateHInv} need not be the same as $\mathbf{e}$. This is because there exist multiple errors with the same syndrome. We have chosen one solution by fixing $\mathbf{H}_{f}^{\dagger}$, which is calculated only once. This makes the first step of the decoder simple. From Eq.~\eqref{eq:sameSyn}, we can conclude that the pure error is the same in both $E$ and $\widehat{E}$ and we denote it by $T$. Applying this initial estimate $\widehat{E}$ to the system might result in a logical error. This can be concluded from the following equations.
\begin{gather}
E = T L S \hspace{0.5cm} \text{and} \hspace{0.5cm} \widehat{E} = T \widehat{L} \widehat{S} \nonumber \\
\Longrightarrow \widehat{E} E = T \widehat{L} \widehat{S} \> T L S = \left(\pm\right) L\widehat{L} S\widehat{S} \nonumber \\
\Longrightarrow \widehat{E} E = \left(\pm\right) \widetilde{L} \widetilde{S} \label{eq:correctionHomology}
\end{gather}
Here $\widetilde{L} = L\widehat{L}$ and $\widetilde{S} = S\widehat{S}$. The $\left(\pm\right)$ in Eq.~\eqref{eq:correctionHomology} occurs because the Pauli operators $T$ and $\widehat{S}$ might commute or anti-commute. This is of little interest to us because we estimate the error up to a global phase.
The homology of $\widehat{E} E$ is the same as the homology of $\widetilde{L}$, since $\widetilde{S}$ has a trivial homology.
If we can predict the resultant homology $\widetilde{L}$, we can get back to the trivial state and the decoding succeeds. Since the number of homologies is fixed, this is modeled in the second step of our decoder as a classification problem using a NN. The goal of the NN is to predict $\widetilde{L}$ given the syndrome $\mathbf{s}$. Our final error correction is then
\begin{gather}
\widetilde{E} = \widetilde{L} \widehat{E}
\label{eq:finalErrorCorrection}
\end{gather}
If the NN correctly predicts $\widetilde{L}$, this correction restores the state up to a global phase, as is evident from the following equations.
\begin{gather}
\widetilde{E} E = \widetilde{L} \widehat{E} E \nonumber \\
\Longrightarrow \widetilde{E} E = \left(\pm\right) \widetilde{L} \widetilde{L} \widetilde{S} \nonumber \\
\Longrightarrow \widetilde{E} E = \left(\pm\right) \widetilde{S} \nonumber
\end{gather}
\begin{figure*}
\includegraphics[scale=0.55]{Figs/decoderFlowDigOne.pdf}
\caption{\label{fig:decoderFlowDigOne} Flow diagram of our two-step decoder. The black dots represent errors on the qubits and the marked regions represent the syndromes caused. In the first step we get an estimate of the error $\mathbf{\widehat{e}}$, and in the second step we predict the correction homology $\widetilde{L}$ using our trained NN. Our final error correction is $\widetilde{L} \widehat{E}$. See Eqs.~\eqref{eq:errorEstimateHInv},~\eqref{eq:correctionHomology}, and~\eqref{eq:finalErrorCorrection}. Note that the $\mathbf{H}$-inverse decoder in step one need not always give us a pure error. In this example, the error estimate operator $\widehat{E}$ anti-commutes with a logical operator (red dashed line) and hence cannot be a pure error.}
\end{figure*}
The work of Ref.~\cite{maskara2018advantages} used a naive decoder which removes syndromes by pushing errors to the boundary in the first step.
Their neural network tries to improve upon this estimate by predicting the correction homology.
Mathematically, this means that their decoder could implement a different inverse for each different syndrome.
In our approach, we fix the inverse in the first step, making our initial decoder much simpler. We discuss this further in Section~\ref{sec:Insights}.
The first-step decoder in \cite{varsamopoulos2017decoding} estimates the pure error, which needs to satisfy many properties.
We want to emphasize that our inverse matrix $\mathbf{H}_{f}^{\dagger}$ in step-one gives us an error estimate which need not always be pure error. It entirely depends on the construction of $\mathbf{H}_{f}^{\dagger}$. We used \textit{SageMath} \footnote{\url{http://sagemath.org}}, an open-source mathematics software for calculating $\mathbf{H}_{f}^{\dagger}$ from Eq.~\eqref{eq:pseudoInverse}.
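For concreteness, the right pseudo-inverse of Eq.~\eqref{eq:pseudoInverse} can be obtained along the following lines; this is a sketch in SageMath with a toy full-rank matrix standing in for $\mathbf{H}_{f}$.
\begin{verbatim}
# Run inside SageMath. Toy full-rank check matrix over GF(2).
Hf = matrix(GF(2), [[1, 1, 0, 1],
                    [0, 1, 1, 1]])
# Solve Hf * X = I for X, giving a right inverse Hf_dag.
I = identity_matrix(GF(2), Hf.nrows())
Hf_dag = Hf.solve_right(I)
assert Hf * Hf_dag == I
# First-step decoding: e_hat = Hf_dag * s for any syndrome s.
\end{verbatim}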
\subsection{\label{subsec:OurWork:Architecture} Neural decoder}
In this section, we describe the neural decoder used in the second step. As mentioned before, we have modeled our NN in two ways, and in both of them we have used a fully-connected architecture where every neuron in one layer is connected to every neuron in the adjacent layers. The output of the network is the homology vector, where each element represents a homology class. Since this is a classification problem, we use cross-entropy as our loss function, which is minimized during training. We have used the Adam optimizer proposed by~\cite{kingma2014adam}, since it has been observed to perform better than other optimizers in terms of convergence of the loss. We have also used a 1D batch normalization layer after every layer in the network; it has been shown to significantly boost the training speed~\cite{ioffe2015batch}. The activation function used for every neuron is \verb+ReLU+, since it has been shown to perform well when compared to other functions like \verb+Sigmoid+ or \verb+TanH+ by reducing the problem of vanishing gradients as the network goes deeper~\cite{karlik2011performance,pmlr-v15-glorot11a}.
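A minimal sketch of such a network, together with one training iteration, can be written with the PyTorch library as follows; the sizes and the learning rate below are placeholders, while the actual hyper-parameters are listed in Table~\ref{tab:hyperParameters}.
\begin{verbatim}
import torch
import torch.nn as nn

n_syn, n_hid, n_hom = 32, 64, 16   # placeholder sizes

# Fully-connected decoder: syndrome in, homology-class
# scores out, with batch normalization and ReLU.
decoder = nn.Sequential(
    nn.Linear(n_syn, n_hid),
    nn.BatchNorm1d(n_hid),
    nn.ReLU(),
    nn.Linear(n_hid, n_hom),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def train_step(syndromes, homology_labels):
    # One Adam iteration on a mini-batch; homology_labels are
    # class indices, and decoder(...) returns raw scores.
    optimizer.zero_grad()
    loss = loss_fn(decoder(syndromes), homology_labels)
    loss.backward()   # back-propagation
    optimizer.step()  # Adam update of the weights
    return loss.item()
\end{verbatim}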
\begin{table}
\caption{\label{tab:hyperParameters}The values of the hyper-parameters used in the neural decoder in our first approach.}
\begin{ruledtabular}
\begin{tabular}{c|cccccc}
\diagbox{$d\footnote{Distance of the code}$}{parameters} & $h_{d}\footnote{Number of hidden layers}$ & $f_{d}\footnote{Hidden dimension factor}$ & $b_{d}\footnote{Batch size}$ & $\alpha\footnote{Learning rate}$ & $t_{d, p_{err}}\footnote{Number of training samples per each $p_{err}$}$ & $T_{d}\footnote{Total number of training samples for all $p_{err}$ combined}$\\
\hline
$6$ & $2$ & $2$ & $500$ & $0.001$ & $2 \times 10^{7}$ & $1.4 \times 10^{8}$\\
$8$ & $3$ & $5$ & $750$ & $0.001$ & $4 \times 10^{7}$ & $2.8 \times 10^{8}$\\
$9$ & $4$ & $5$ & $750$ & $0.001$ & $4 \times 10^{7}$ & $2.8 \times 10^{8}$\\
$12$ & $7$ & $10$ & $2500$ & $0.001$ & $10 \times 10^{7}$ & $7 \times 10^{8}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure*}
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$h_{d}$}, ymin=0, ymax=8,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.05)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 2)
(8, 3)
(9, 4)
(12, 7)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:hdvsd}
\subcaption{$h_{d}$ vs $d$}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$f_{d}$}, ymin=0, ymax=12,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.05)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 2)
(8, 5)
(9, 5)
(12, 10)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:fdvsd}
\subcaption{$f_{d}$ vs $d$}
\end{subfigure}
\begin{subfigure}[b]{0.2525\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$b_{d}$},
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.025)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 500)
(8, 750)
(9, 750)
(12, 2500)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:bdvsd}
\subcaption{$b_{d}$ vs $d$}
\end{subfigure}
\begin{subfigure}[b]{0.255\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$t_{d, p_{err}}$},
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.025)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 2*10^7)
(8, 4*10^7)
(9, 4*10^7)
(12, 10*10^7)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:tdperrvsd}
\subcaption{$t_{d, p_{err}}$ vs $d$}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{0.24\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$h_{d}$}, ymin=0, ymax=8,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.05)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 1)
(8, 2)
(9, 3)
(12, 6)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:hdvsd_two}
\subcaption{$h_{d}$ vs $d$}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$f_{d}$}, ymin=0, ymax=12,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.05)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 1)
(8, 3)
(9, 4)
(12, 10)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:fdvsd_two}
\subcaption{$f_{d}$ vs $d$}
\end{subfigure}
\begin{subfigure}[b]{0.2525\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$b_{d}$},
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.025)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 500)
(8, 750)
(9, 750)
(12, 2500)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:bdvsd_two}
\subcaption{$b_{d}$ vs $d$}
\end{subfigure}
\begin{subfigure}[b]{0.255\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel={$d$}, ylabel={$t_{d, p_{err}}$},
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
every axis y label/.style={
at={(ticklabel* cs:1.025)},
anchor=south east
}
]
\addplot[solid, color=blue, mark=star] plot coordinates {
(6, 2*10^7)
(8, 4*10^7)
(9, 4*10^7)
(12, 10*10^7)
};
\end{axis}
\end{tikzpicture}
}
\label{subfig:tdperrvsd_two}
\subcaption{$t_{d, p_{err}}$ vs $d$}
\end{subfigure}
\caption{\label{fig:HPvsD} Plots of the various hyper-parameters of our neural networks with the distance $d$ of the code. Figs.~(a)-(d) are for the first-approach and the Figs.~(e)-(h) are for the second-approach.}
\end{figure*}
\subsection{\label{subsec:OurWork:TrainingProcedure} Training procedure}
For the network to decode correctly, it needs to be trained. We employ a supervised training procedure where we have labeled data consisting of the input (we generate $\mathbf{e}$ according to the noise and calculate syndromes $\mathbf{s}$ from Eq.~\eqref{eq:synPC}) and the corresponding output (the homology $\widetilde{L}$). This output is the ground truth. Training is an optimization process in which the weights of the network are adjusted to minimize an objective function, called the loss function. The loss function plays a crucial role during training, since certain loss functions are suited to certain problems. Since our NN needs to solve a classification problem, we use cross-entropy as our loss function: given a syndrome $\left(\mathbf{s}\right)$, the NN predicts a probability distribution over all the possible classes. Denoting the input by $\mathbf{x}$, the output distribution of the NN by $\mathbf{q}\left(\mathbf{x}\right)$ and the true distribution by $\mathbf{p}\left(\mathbf{x}\right)$, the cross-entropy can be written as follows.
\begin{gather}
\ell_{CE}\left(\mathbf{p}, \mathbf{q}\right) = - \sum_{x}\mathbf{p}\left(\mathbf{x}\right)\log \mathbf{q}\left(\mathbf{x}\right)
\label{eq:crossEntropy}
\end{gather}
This is the same as minimizing the Kullback-Leibler divergence $\left(D_{KL}\right)$ between the distributions $\mathbf{p}\left(\mathbf{x}\right)$ and $\mathbf{q}\left(\mathbf{x}\right)$ up to a constant, since $D_{KL}\left(\mathbf{p} \Vert \mathbf{q}\right)$ can be written as,
\begin{gather}
D_{KL}\left(\mathbf{p} \Vert \mathbf{q}\right) = \ell_{CE}\left(\mathbf{p}, \mathbf{q}\right) - \sum_{x}\mathbf{p}\left(\mathbf{x}\right)\log \mathbf{p}\left(\mathbf{x}\right)
\nonumber
\end{gather}
and the term $\sum_{x}\mathbf{p}\left(\mathbf{x}\right)\log \mathbf{p}\left(\mathbf{x}\right)$ is a constant because it is completely determined by the true distribution $\mathbf{p}$. This implies that minimizing $\ell_{CE}$ in Eq.~\eqref{eq:crossEntropy} brings the distribution learned by our NN, i.e., $\mathbf{q}$, closer to the true distribution $\mathbf{p}$.
Given a syndrome vector $\mathbf{s}$, a trained NN should be able to predict the correct correction homology class $\widetilde{L}$ for all error rates under the threshold. In order to train an NN that is independent of the error rate, we employ a progressive training procedure as described in \cite{maskara2018advantages}. We generate training samples at a fixed error rate $p_{err}$ and train our NN on that noise until the loss function in Eq.~\eqref{eq:crossEntropy} saturates. We then move on to a higher $p_{err}$ and repeat the process for various error rates under the threshold. For our experiments (bit-flip noise), we have trained our NN at the error rates $\left\{ 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11 \right\}$. Before we start training, we use Xavier normal initialization for the parameters in the fully-connected layers and Gaussian normal initialization for the parameters in the batch-normalization layers. We do not reinitialize the weights during the progressive training when moving to a higher $p_{err}$. We discuss the importance of this progressive training, with evidence, in Section~\ref{sec:Insights}.
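A hedged sketch of this progressive loop, continuing the PyTorch sketch above (\verb+sample_batch+ is a stand-in for our syndrome/homology generator, and the fixed step count stands in for the loss-saturation criterion):
\begin{verbatim}
def sample_batch(p_err, batch=500, dim=24):
    # placeholder: the real sampler draws e at rate p_err, computes the
    # syndrome s and the true homology label of the error
    s = torch.randint(0, 2, (batch, dim)).float()
    labels = torch.randint(0, 4, (batch,))
    return s, labels

for p_err in [0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11]:
    for step in range(10000):  # in practice: until the loss saturates
        s, labels = sample_batch(p_err)
        optimizer.zero_grad()
        loss = loss_fn(model(s), labels)
        loss.backward()
        optimizer.step()
    # weights are deliberately NOT reinitialized between error rates
\end{verbatim}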
In our first approach, we use the syndrome $\mathbf{s}$ alone as the input to the network, whereas in our second approach we use the concatenation of the initial estimate $\widehat{\mathbf{e}}$ and the syndrome $\mathbf{s}$. In both cases, the network is trained to predict the correction homology $\widetilde{L}$.
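On the input side, the only change in the second approach is a concatenation along the feature dimension, e.g.\ (shapes illustrative, continuing the sketch above):
\begin{verbatim}
e_hat  = torch.randint(0, 2, (500, 48)).float()  # initial estimates
s      = torch.randint(0, 2, (500, 24)).float()  # syndromes
inputs = torch.cat((e_hat, s), dim=1)  # network built with input dim 72
\end{verbatim}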
Our $\mathbf{H}$-inverse decoder in step one is summarized in Alg.~\ref{alg:HInvDec}. The neural decoders are summarized in Algs.~\ref{alg:neuralDecFirst} and~\ref{alg:neuralDecSecond} for our first and second approaches, respectively. The corresponding architectures are illustrated in Figs.~\ref{fig:decoderFlowDigOne} and~\ref{fig:decoderFlowDigTwo}.
\begin{algorithm}[H]
\caption{$\mathbf{H}$-inverse decoder (step-one)}
\begin{algorithmic}[1]
\REQUIRE {Syndrome vector $\mathbf{s}$ and requires pre-computed $\mathbf{H}_{f}^{\dagger}$ matrix}
\ENSURE {Error estimate operator $\widehat{E}$}
\STATE Compute $\mathbf{s}_{f}$ from $\mathbf{s}$ by removing the syndromes of the removed dependent stabilizers while computing the matrix $\mathbf{H}_{f}$
\STATE Compute $\widehat{\mathbf{e}}^{\top} = \mathbf{H}_{f}^{\dagger}\mathbf{s}_{f}^{\top}$ \COMMENT{from Eq.~\eqref{eq:errorEstimateHInv}}
\STATE Return $\widehat{E}$, the error operator of $\widehat{\mathbf{e}}$ as the initial error estimate
\end{algorithmic}
\label{alg:HInvDec}
\end{algorithm}
\begin{algorithm}[H]
\caption{Neural decoder (step-two, first approach)}
\begin{algorithmic}[1]
\REQUIRE {Syndrome vector $\mathbf{s}$, requires the trained neural network to predict the correction homology $\widetilde{L}$ and the initial estimate $\widehat{E}$}
\ENSURE {Final error correction operator $\widetilde{E}$}
\STATE Using the trained neural network, predict the correction homology $\widetilde{L}$ by giving the syndrome vector $\mathbf{s}$ as the input
\STATE Compute $\widetilde{E} = \widetilde{L} \widehat{E}$ \COMMENT{from Eq.~\eqref{eq:finalErrorCorrection}}
\STATE Return $\widetilde{E}$ as the final error correction
\end{algorithmic}
\label{alg:neuralDecFirst}
\end{algorithm}
\begin{algorithm}[H]
\caption{Neural decoder (step-two, second approach)}
\begin{algorithmic}[1]
\REQUIRE {Syndrome vector $\mathbf{s}$ and the initial estimate $\widehat{E}$, requires the trained neural network to predict the correction homology $\widetilde{L}$}
\ENSURE {Final error correction operator $\widetilde{E}$}
\STATE Using the trained neural network, predict the correction homology $\widetilde{L}$ by giving the concatenated vector of initial estimate $\mathbf{\widehat{e}}$ and the syndrome $\mathbf{s}$ as the input
\STATE Compute $\widetilde{E} = \widetilde{L} \widehat{E}$ \COMMENT{from Eq.~\eqref{eq:finalErrorCorrection}}
\STATE Return $\widetilde{E}$ as the final error correction
\end{algorithmic}
\label{alg:neuralDecSecond}
\end{algorithm}
\subsection{\label{subsec:OurWork:Results} Results}
We describe our simulation results for the bit-flip noise model in this section. As described earlier in Section~\ref{sec:OurWork}, our decoder is a two-step decoder where we use a naive and deterministic $\mathbf{H}$-inverse $\left(\mathbf{H}_{f}^{\dagger}\right)$ decoder in step one and then improve its performance in step two using a NN. The performance of our $\mathbf{H}$-inverse decoder in step one, by itself, is shown in Fig.~\ref{fig:HInvDecResults}. It shows that the $\mathbf{H}$-inverse alone is a very bad decoder, since the logical error increases with the length of the code for a fixed $p_{err}$. It is quite evident that this decoder does not have a threshold, since the curves do not meet anywhere below the theoretical threshold of $10.97 \%$ \cite{PhysRevLett.103.090501}.
The performance of our neural decoder in the first approach (Fig.~\ref{fig:decoderFlowDigOne}), trained according to the training procedure of Section~\ref{subsec:OurWork:TrainingProcedure}, is shown in Fig.~\ref{fig:HInvNNDec}. The fully trained NN model is independent of $p_{err}$, and it outperforms the previous state-of-the-art methods not based on neural networks~\cite{sarvepalli12,delfosse14,bombin2012universal}. We report that our neural decoder achieves a threshold of $10\%$, comparable to the result in \cite{maskara2018advantages}.
In our second approach, we give the additional information of $\mathbf{\widehat{e}}$, along with the syndrome vector $\mathbf{s}$ (by concatenating them), to our NN (Fig.~\ref{fig:decoderFlowDigTwo}), and we observe a dramatic improvement in the threshold for small lengths, as well as a reduction in the logical error at every error rate, as shown in Fig.~\ref{fig:HInvNNDecTwo}. The training procedure is identical to that of the first approach.
This shows that, with the additional knowledge of the initial estimate $\mathbf{\widehat{e}}$, the NN is able to learn the behaviour of the $\mathbf{H}$-inverse decoder much better and hence performs a better correction. It suggests that the performance of data-driven methods, and of neural networks in particular, can be improved by providing all the available information relevant to the problem. This modification can be incorporated into other two-step decoders built with neural networks to improve their overall performance.
The hyper-parameters (as described in Section~\ref{subsec:Background:ML_DL}) of our networks are listed in Tables~\ref{tab:hyperParameters} and~\ref{tab:hyperParametersTwo} for the first and second approaches, respectively. The variation of some of them with the distance $d$ is shown in Fig.~\ref{fig:HPvsD} for both approaches. The distance of the code is denoted by $d$ and the number of hidden layers in our network by $h_{d}$. The batch size used for each length is denoted by $b_{d}$. The number of nodes in each hidden layer is characterized by the hidden dimension factor $f_{d}$: it equals $f_{d}$ multiplied by the dimension of the input syndrome vector $\mathbf{s}$. The parameter $t_{d, p_{err}}$ is the number of samples required for training at each $p_{err}$, and $T_{d}$ is the total number of samples the final trained NN has seen. The parameter $\alpha$ is the learning rate used for optimization. We used \textit{PyTorch} \footnote{\url{https://pytorch.org/}}, an open-source deep learning framework, for training our neural networks.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[
xlabel={$p_{err}$},
ylabel={Logical error},
legend pos=south east,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
ymin=1e-1, ymax=1e0
]
\addplot[solid, color=blue, mark=*] plot coordinates {
(0.01, 0.1956)
(0.05, 0.6424)
(0.10, 0.8418)
(0.15, 0.9073)
(0.20, 0.9275)
(0.25, 0.9360)
};
\addplot[solid, color=red, mark=square*] plot coordinates {
(0.01, 0.2680)
(0.05, 0.7513)
(0.10, 0.8944)
(0.15, 0.9269)
(0.20, 0.9343)
(0.25, 0.9365)
};
\addplot[solid, color=orange, mark=triangle*] plot coordinates {
(0.01, 0.2954)
(0.05, 0.7817)
(0.10, 0.9059)
(0.15, 0.9303)
(0.20, 0.9366)
(0.25, 0.9395)
};
\addplot[solid, color=olive, mark=star] plot coordinates {
(0.01, 0.3585)
(0.05, 0.8361)
(0.10, 0.9211)
(0.15, 0.9361)
(0.20, 0.9364)
(0.25, 0.9377)
};
\legend{$d=6$,$d=8$,$d=9$,$d=12$}
\end{semilogyaxis}
\end{tikzpicture}
\caption{\label{fig:HInvDecResults} Performance of our $\mathbf{H}$-inverse $\left(\mathbf{H}_{f}^{\dagger}\right)$ decoder in step one. Note that it is a very bad decoder by itself: for a fixed $p_{err}$, the logical error increases as the length of the code increases, and the decoder on its own does not have a threshold.}
\end{figure}
\begin{figure*}
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel={$p_{err}$},
ylabel={Logical error},
legend pos=south east,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
ymin=1e-2, ymax=1e0
]
\addplot[solid, color=blue, mark=*] plot coordinates {
(0.05, 0.1023)
(0.06, 0.1700)
(0.07, 0.2526)
(0.08, 0.3404)
(0.09, 0.4281)
(0.10, 0.5120)
(0.11, 0.5895)
(0.12, 0.6576)
};
\addplot[solid, color=red, mark=square*] plot coordinates {
(0.05, 0.0487)
(0.06, 0.1051)
(0.07, 0.1862)
(0.08, 0.2847)
(0.09, 0.3954)
(0.10, 0.5035)
(0.11, 0.6033)
(0.12, 0.6878)
};
\addplot[solid, color=orange, mark=triangle*] plot coordinates {
(0.05, 0.0321)
(0.06, 0.0805)
(0.07, 0.1580)
(0.08, 0.2627)
(0.09, 0.3823)
(0.10, 0.5059)
(0.11, 0.6143)
(0.12, 0.7042)
};
\addplot[solid, color=olive, mark=star] plot coordinates {
(0.05, 0.0158)
(0.06, 0.0492)
(0.07, 0.1186)
(0.08, 0.2298)
(0.09, 0.3629)
(0.10, 0.5036)
(0.11, 0.6318)
(0.12, 0.7332)
};
\legend{$d=6$,$d=8$,$d=9$,$d=12$}
\end{semilogyaxis}
\end{tikzpicture}
}
\subcaption{\label{fig:HInvNNDec}}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel={$p_{err}$},
ylabel={Logical error},
legend pos=south east,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
ymin=1e-2, ymax=1e0
]
\addplot[solid, color=blue, mark=*] plot coordinates {
(0.05, 0.0874)
(0.06, 0.1483)
(0.07, 0.2254)
(0.08, 0.3102)
(0.09, 0.3966)
(0.10, 0.4841)
(0.11, 0.5636)
(0.12, 0.6355)
};
\addplot[solid, color=red, mark=square*] plot coordinates {
(0.05, 0.0394)
(0.06, 0.0883)
(0.07, 0.1615)
(0.08, 0.2556)
(0.09, 0.3623)
(0.10, 0.4717)
(0.11, 0.5757)
(0.12, 0.6641)
};
\addplot[solid, color=orange, mark=triangle*] plot coordinates {
(0.05, 0.0237)
(0.06, 0.0615)
(0.07, 0.1289)
(0.08, 0.2235)
(0.09, 0.3383)
(0.10, 0.4602)
(0.11, 0.5750)
(0.12, 0.6730)
};
\addplot[solid, color=olive, mark=star] plot coordinates {
(0.05, 0.0122)
(0.06, 0.0388)
(0.07, 0.0971)
(0.08, 0.1967)
(0.09, 0.3232)
(0.10, 0.4656)
(0.11, 0.5969)
(0.12, 0.7085)
};
\legend{$d=6$,$d=8$,$d=9$,$d=12$}
\end{semilogyaxis}
\end{tikzpicture}
}
\subcaption{\label{fig:HInvNNDecTwo}}
\end{subfigure}
\caption{\label{fig:performanceFigs} The performance of neural decoder in first approach, achieving a threshold of $10 \%$ is shown in (a). The performance of neural decoder in second approach, achieving a near optimal threshold is shown in (b). Note the reduction in logical error for decoder in second approach (b) when compared to that of first approach (a).}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.55]{Figs/decoderFlowDigTwo.pdf}
\caption{\label{fig:decoderFlowDigTwo} Flow diagram of our two-step decoder. The black dots represent error on the qubits and the marked regions represent the syndrome caused. In the first-step we get an estimate of the error $\mathbf{\widehat{e}}$ and in the second-step, we predict the correction homology $\widetilde{L}$ using our trained NN with the information of both $\widehat{\mathbf{e}}$ and $\mathbf{s}$. Our final error correction is same as $\widetilde{L} \widehat{E}$.}
\end{figure*}
\begin{table}
\caption{\label{tab:hyperParametersTwo}The values of the hyper-parameters used in the neural decoder in our second approach.}
\begin{ruledtabular}
\begin{tabular}{c|cccccc}
\diagbox{$d\footnote{Distance of the code}$}{parameters} & $h_{d}\footnote{Number of hidden layers}$ & $f_{d}\footnote{Hidden dimension factor}$ & $b_{d}\footnote{Batch size}$ & $\alpha\footnote{Learning rate}$ & $t_{d, p_{err}}\footnote{Number of training samples per each $p_{err}$}$ & $T_{d}\footnote{Total number of training samples for all $p_{err}$ combined}$\\
\hline
$6$ & $1$ & $1$ & $500$ & $0.001$ & $2 \times 10^{7}$ & $1.4 \times 10^{8}$\\
$8$ & $2$ & $3$ & $750$ & $0.001$ & $4 \times 10^{7}$ & $2.8 \times 10^{8}$\\
$9$ & $3$ & $4$ & $750$ & $0.001$ & $4 \times 10^{7}$ & $2.8 \times 10^{8}$\\
$12$ & $6$ & $10$ & $2500$ & $0.001$ & $10 \times 10^{7}$ & $7 \times 10^{8}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{\label{sec:Insights} Remarks and Insights}
We have clearly demonstrated the power of data-driven methods, and of neural networks in particular, through which we were able to improve the performance of a very bad decoder which does not even have a threshold.
When compared to the previous state-of-the-art on neural decoders for color codes, our decoder requires significantly less training data for higher lengths like $d=9, 12$.
In addition to the gains in training cost, our decoder has lower complexity with respect to the number of layers and the number of nodes in each layer, and it still achieves a comparable threshold.
In Section~\ref{subsec:OurWork:TrainingProcedure}, we mentioned the importance of progressive training. We ran our simulations training a new NN, with Xavier normal and Gaussian normal initializations, for every $p_{err}$, i.e., without employing progressive training. The performance of that decoder, with hyper-parameters similar to those in Table~\ref{tab:hyperParameters}, is shown in Fig.~\ref{fig:nonProgressiveNNDec}. It shows that without progressive training the threshold of the decoder drops to about $7.2 \%$. This is because, as $p_{err}$ increases, it becomes very likely that our optimizer converges to a bad local minimum. Progressive training is similar to the common practice of \textit{curriculum learning} in neural networks, proposed in~\cite{Bengio:2009:CL:1553374.1553380}, which helps the optimizer converge to a better local minimum in the hyperspace of the network weights.
We also report that this progressive training should be carried out until $p_{err}$ reaches the theoretical threshold; doing so, we observed a consistent decrease in the logical error at all error rates. Training the model at a $p_{err}$ above the threshold is not desirable, as we observed increases in the logical error.
This concept of an $\mathbf{H}$-inverse base decoder improved by a neural decoder can be effectively extended to other noise models, to codes in higher dimensions, and to other stabilizer codes.
Any decoder which performs error correction essentially solves the equation $\mathbf{H} \mathbf{x}^{\top} = \mathbf{s}$.
Since there are many solutions, there exist many pseudo-inverses of $\mathbf{H}$.
To implement a good decoder, choosing the correct inverse for a given syndrome is an important task.
Different inverses must be chosen for different syndrome patterns in order to have a threshold.
The decoder in step one can be anything as long as it clears the syndrome; good decoders which have a threshold can also be chosen.
In such cases, these good decoders take care of selecting the inverse depending on the syndrome.
This makes such step-one decoders not entirely simple, and there is much more for the NN to learn in order to improve the initial estimate,
because the inverse selected will differ from syndrome to syndrome.
In our approach, we fix the inverse $\mathbf{H}_{f}^{\dagger}$, even though it does not have a threshold, and thereby make the step-one decoder very simple.
Our NN only has to understand one inverse, namely $\mathbf{H}_{f}^{\dagger}$, in order to improve the initial estimate.
Intuitively, this means that the learning should be easier for our NN, which is verified empirically through the superior performance at comparatively lower training cost and complexity when compared to \cite{maskara2018advantages}.
Our approach is applicable to any decoding problem where the equation $\mathbf{H} \mathbf{x}^{\top} = \mathbf{s}$ needs to be solved.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[
xlabel={$p_{err}$},
ylabel={Logical error},
legend pos=south east,
every axis x label/.style={
at={(ticklabel* cs:1.025)},
anchor=north west
},
ymin=1e-2, ymax=1e0
]
\addplot[solid, color=blue, mark=*] plot coordinates {
(0.05, 0.1157)
(0.06, 0.1869)
(0.07, 0.2709)
(0.08, 0.3619)
(0.09, 0.4559)
(0.10, 0.5432)
(0.11, 0.6270)
(0.12, 0.6993)
};
\addplot[solid, color=red, mark=triangle*] plot coordinates {
(0.05, 0.0769)
(0.06, 0.1502)
(0.07, 0.2489)
(0.08, 0.3768)
(0.09, 0.5067)
(0.10, 0.6315)
(0.11, 0.7290)
(0.12, 0.7993)
};
\addplot[solid, color=olive, mark=star] plot coordinates {
(0.05, 0.0424)
(0.06, 0.1102)
(0.07, 0.2397)
(0.08, 0.4561)
(0.09, 0.6535)
(0.10, 0.8029)
(0.11, 0.8694)
(0.12, 0.8952)
};
\legend{$d=6$,$d=8$,$d=12$}
\end{semilogyaxis}
\end{tikzpicture}
\caption{\label{fig:nonProgressiveNNDec} Performance of our neural decoder without the progressive training procedure. The threshold achieved is just about $7.2 \%$.}
\end{figure}
\section{\label{sec:Conclusion} Conclusion}
We have demonstrated that data-driven methods like NNs can perform superior decoding when compared to traditional approaches.
We propose a neural decoder with a simplified non-neural part achieving a threshold of $10 \%$ for 2D color codes.
We also suggest an alternative way to combine non-neural and neural decoders that reduces the logical error and can be incorporated into other NN-based decoders.
The drawbacks of NN-based decoders are figuring out the right set of hyper-parameters for each length and the practical issues with convergence of the loss as the number of trainable parameters increases.
Our approach can be extended to other realistic noise models and to codes in higher dimensions or other stabilizer codes.
\section{\label{sec:Acknowledgements} Acknowledgements}
The authors would like to thank Arun B. Aloshious for valuable discussions. During the preparation of this manuscript, five related preprints were made available \cite{ni2018neural,sweke2018reinforcement,liu2018neural,andreasson2018quantum,nautrup2018optimizing}, however their scope and emphasis are different from our work. This work was completed when CC was associated with Indian Institute of Technology Madras as a part of his Dual Degree thesis.
\section{Introduction}
A general formulation for the discrete logarithm problem is the following: given a group $G$, a generator $g \in G$ and another element $h \in G$, find an integer $z$ such that
$g^z = h$. The hardness of this problem, which depends on the choice of $G$, has been relevant for public-key cryptography since its very beginning \cite{DifHel}.
We are concerned with the cases where $G$ is the multiplicative group of a finite field of \emph{small characteristic}, which, for us, means a field of characteristic $p$ and cardinality $p^n$ for some integers $n > p$. Our main result is the following.
\begin{Theorem}\label{maintheorem1}
There exists a probabilistic algorithm, described in Section \ref{sec_algo}, that solves the discrete logarithm problem in $K^\times$ for all finite fields $K$ of small characteristic in expected time
\[
( \log \# K)^{O(\log \log \# K )} \,.
\]
\end{Theorem}
An algorithm with the above complexity is called \emph{quasi-polynomial}.
The first heuristic quasi-polynomial algorithm solving the discrete logarithm in small characteristic was presented by Barbulescu, Gaudry, Joux and Thom\'e in \cite{BGJT}. One of their main ideas, originally in \cite{Joux}, is to look for a ``simple'' description of the Frobenius automorphism $\phi \colon K \to K$ and, if one can find such a simple description, to use it in an index calculus algorithm to find relations among the elements of the factor base more easily. Another algorithm, based on similar ideas, was then presented in \cite{ZKG}; its expected complexity is proven to be quasi-polynomial when there exists a ``simple'' description of the Frobenius automorphism (the notion of ``simple'' in \cite{ZKG} is slightly stricter than in \cite{BGJT}). In particular, we could deduce Theorem \ref{maintheorem1} if we knew that all finite fields $K$ of small characteristic can be embedded in a slightly larger field $K'$ admitting a presentation as in \cite{ZKG}. The author is not aware of any proof of this fact, even though computations like \cite[Table 1]{Joux} support it and \cite{Micheli} gives a partial answer.
The author's first incomplete attempt to prove Theorem \ref{maintheorem1} is his master's thesis \cite{tesi}, which already contained the main idea of this article: using a different presentation for the finite field, stated in terms of the action of the Frobenius on an elliptic curve. Such ``elliptic presentations'' were first introduced in \cite{EllPres} and, since over a finite field $\mathbb F_q$ there are many non-isogenous elliptic curves, it is easy to prove that all finite fields of small characteristic can be embedded in a slightly larger field admitting such a presentation. The algorithm in \cite{tesi} adapts the approach in \cite{ZKG} to finite fields with an elliptic presentations, but the proof of its correctness and quasi-polynomiality was not completed.
The idea of using elliptic presentations in this context has been independently developed by Kleinjung and Wesolowski, who have proved Theorem \ref{maintheorem1} in \cite{KW2}.
One of the differences between the present approach and the one in \cite{KW2} is the proof of correctness and quasi-polynomiality of the algorithms. In both cases it is a matter of showing the irreducibility of certain curves: the approach in \cite{KW2} is to describe those curves as components of certain fibered products, as in \cite{KW}, while we mostly rely on some Galois theory over function fields. Both approaches require some lengthy computations, which are here mostly contained in Proposition \ref{prop:we_can_use_other_prop} and in the Claims \ref{hope_distinct_points}, \ref{hope_no_conics}, \ref{claim_43_use_lemma}. A second difference between \cite{KW2} and the present work is that in our algorithm the number of ``traps'' is finite and small, so that they can be included in the factor base, while in \cite{KW2} there are infinitely many ``traps'' that need to be avoided by the algorithm.
In \cite{JEll} Joux and Pierrot study a practical algorithm for the discrete logarithm based on elliptic presentations and apply it successfully in $\mathbb F_{3^{1345}}^\times$. Their experiments indicate that the efficiency of this algorithm is inferior, yet comparable, to the one in \cite{JP}, the (heuristically) fastest currently known.
\paragraph{Structure of the paper}
The structure of the article is as follows. In Section \ref{sec:ell_pres} we define elliptic presentations and we prove that all finite fields of small characteristic can be embedded in a slightly larger field admitting an elliptic presentation.
Section \ref{sec:traps} is about ``traps'' (cases that need different treatment in the algorithm).
In Section \ref{sec:divs} we describe the general setup of our algorithm and we explain how to pass from a factor base made of irreducible polynomials in $\mathbb F_q[x]$ to a factor base made of irreducible divisors on an elliptic curve $E/\mathbb F_q$.
In Section \ref{sec_algo} we give our algorithm, stated in terms of a descent procedure that is described in Section \ref{sec:idea_descent}. A more precise statement about the complexity of the main algorithm is given in Theorem \ref{maintheorem3}. Our descent procedure consists of two steps, presented and analysed in Section \ref{sec:idea_descent} under certain assumptions on auxiliary varieties. These assumptions are proven in the subsequent sections: Section \ref{sec:lemma} gives a technical lemma, Section \ref{sec:3-2} proves the assumptions needed for the second and easier step and Section \ref{sec:43} proves the assumptions needed for the first step.
\paragraph{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
I thank Ren\'e Schoof for introducing me to this research problem in $2016$ and for the useful ideas that led to substantial simplifications of the proof.
The author is supported by the MIUR ``Excellence Department Project MATH@TOV'', awarded to the Department of Mathematics, University of Rome, Tor Vergata, and by the ``Programma Operativo (PON) Ricerca e Innovazione 2014--2020''.
\section{Elliptic presentations} \label{sec:ell_pres}
One of the main ideas in \cite{Joux} and in \cite{BGJT}, is to present a field $K$ using two subfields $\mathbb F_q\subsetneq \mathbb F_Q \subseteq K$ of order $q,Q$ (both ``small'' compared to $\# K$) and an element $x_1 \in K $ generating the extension $\mathbb F_Q\subset K$ such that the $q$-th Frobenius acts on $x_1$ in a simple way, namely $x_1^q=f(x_1)$ for some $f \in \mathbb F_q(x)$ of degree at most $2$. We now define a presentation based on a similar idea: describing $K$ as $\mathbb F_q(x_1,y_1)$ where $\mathbb F_q$ is a finite field of order $q$ ``small'' compared to $\# K$ and $x_1,y_1$ are two elements of $K$ on which the $q$-th Frobenius acts in a ``simple'' way.
Let $\mathbb F_q$ be a finite field of cardinality $q$, and let $K/\mathbb F_q$ be a field extension of degree $n$. Suppose there exists an elliptic curve $E/\mathbb F_q$ defined by a Weierstrass equation and a point $P_0 \in E(\mathbb F_q)$ of order $n$. Denoting by $\phi$ the $q$-th Frobenius on the elliptic curve $E$, the map $E\to E$ given by $P \mapsto \phi(P) {-} P$ is surjective. Therefore there is a point $P_1 = (x_1,y_1)$ on $E$ such that $\phi(P_1)= P_1 + P_0$. Hence
\begin{eqn}\label{eq:proof_deg_n}
(x_1^{q^i}, y_1^{q^i}) = \phi^i(P_1)=P_1 + i\cdot P_0 \quad \text{for every }i\in \mathbb Z\,,
\end{eqn}
implying that the field extension $\mathbb F_q \subset \mathbb F_q(x_1,y_1)$ has degree $n$. Hence $\mathbb F_q(x_1,y_1)$ is isomorphic to $K$.
Moreover the $q$-th Frobenius acts on the pair $(x_1, y_1)$ in a ``simple'' way in the following sense: the addition formulas on $E$ give polynomials $f_1,f_2,f_3 \in \mathbb F_q[x,y]$ of small degree such that
$x_1^q = f_1(x_1,y_1)/f_3(x_1,y_1)$ and $y_1^q = f_2(x_1, y_1)/f_3(x_1,y_1)$.
With this heuristic in mind, we give the following definition.
\begin{Definition}\label{def:ell_pres}
Let $E/\mathbb F_q$ be an elliptic curve defined by a Weierstrass polynomial in $\mathbb F_q[x,y]$, let $P_0$ be a $\mathbb F_q$-point on $E$ and let $\phi\colon E \to E$ be the $q$-th Frobenius. An \emph{$(E/\mathbb F_q,P_0)$-presentation} of a finite field $K$ is an ideal $\mathfrak{M} \subset \mathbb F_q[x,y]$ such that
\begin{enumerate}[label=(\roman*)]
\item\label{def:ell_pres_1} $K$ is isomorphic to $\mathbb F_q[x,y]/\mathfrak{M}$ with a chosen isomorphism;
\item there exists a point $P_1 =(x_1,y_1)$ on $E$ such that $\mathfrak{M} = \{ f \in \mathbb F_q[x,y]: f(x_1, y_1)=0 \}$ and $\phi(P_1)=P_1 + P_0$;
\item\label{def:ell_pres_3} $q>2$ and $[K:\mathbb F_q]>2$.
\end{enumerate}
\end{Definition}
We sometimes omit the dependence on $(E/\mathbb F_q,P_0)$ and we simply write ``elliptic presentation''.
The hypothesis $q>2$ is used in the proof of Claim \ref{hope_distinct_points}, while the hypothesis $[K:\mathbb F_q]>2$ is used in the following Remark, which is needed in step $3$ of the main algorithm to lift elements of $K$ to polynomials in $\mathbb F_q[x]$, in particular to polynomials whose degree is a power of two.
\begin{Remark}\label{rem:mu}
If $\mathfrak{M}$ is an elliptic presentation, then the inclusion $\mathbb F_q[x] \to \mathbb F_q[x,y]$ induces an isomorphism $\mathbb F_q[x]/\mu \cong \mathbb F_q[x,y]/\mathfrak{M}$ for a certain $\mu \in \mathbb F_q[x]$.
Proving this is equivalent to proving that $x$ generates the extension $\mathbb F_q \subset \mathbb F_q[x,y]/\mathfrak{M}$. Using the notation in Definition \ref{def:ell_pres}, this is equivalent to proving that $\mathbb F_q(x_1)$ is equal to $\mathbb F_q(x_1,y_1)$. If, for the sake of contradiction, this is not the case, then the Weierstrass equation satisfied by $x_1$ and $y_1$ implies that the extension $\mathbb F_q(x_1)\subset \mathbb F_q(x_1,y_1)$ has degree $2$, hence $[\mathbb F_q(x_1):\mathbb F_q]=\tfrac n2$, where $n:=[\mathbb F_q(x_1,y_1):\mathbb F_q] = [K:\mathbb F_q]$.
Using Equation \ref{eq:proof_deg_n}, we deduce that
\[
x(P_1) = x_1 = x_1^{q^{n/2}} = x(\phi^{n/2}P_1) = x(P_1 + \tfrac n2 P_0) \quad \implies \quad P_1 + \tfrac n2 P_0 = \pm P_1\,.
\]
Since, by Equation \ref{eq:proof_deg_n}, the order of $P_0$ is equal to $n$, we have $P_1 + \tfrac n2 P_0 = -P_1$, implying that $2P_1$ lies in $E(\mathbb F_q)$; applying $\phi$ to $2P_1$ then gives $2P_0 = 0$. Therefore $P_0$ has order at most $2$, contradicting $n = [K:\mathbb F_q]>2$ in \ref{def:ell_pres_3}.
\end{Remark}
We now show that any finite field $K$ of small characteristic can be embedded in a ``slightly larger'' field admitting an elliptic presentation with $q$ ``small'' compared to $\# K$.
\begin{Proposition}\label{propfieldswithpresentation}
For any finite field $K$ of small characteristic there exists an extension $K \subset K'$ having a elliptic presentation $\mathfrak{M} \subset \mathbb F_q[x,y]$ of $K'$ such that
\[
\log (\# K') \leq 13 \log (\# K) \log \log(\# K) \quad \text{and}\quad q \leq \log (\# K')^{4} \,.
\]
Moreover such $K'$ and its presentation can be computed in polynomial time in $\log(\# K)$.
\end{Proposition}
\begin{proof}
Let $\# K = p^n$ for a prime $p$ and an integer $n> p$. Put $k_0:=\lceil\log_p n \rceil$ and $q:=p^{2k_0}$, so that $n$ has a multiple $n_1$ in the interval $[q-\sqrt{q}+1, q+1 ]$. If $n_1 \equiv 1\bmod p$ we define $n_2:=n_1+n$, otherwise we define $n_2:=n_1$. Since $n_2$ in an integer contained in the Hasse interval $ [q - 2\sqrt q + 1 ; q+2\sqrt q+1]$ that is not congruent to $1$ modulo $p$, by \cite[Theorems 1a, 3]{Ruc} there exists an elliptic curve $E/\mathbb F_q$ whose group of rational points $E(\mathbb F_q)$ is cyclic of order $n_2$. Since $n$ divides $n_2$, there exists a point $P_0 \in E(\mathbb F_q)$ of order $n$.
We can assume $E$ is defined by a Weierstrass polynomial. Since the map $P \mapsto \phi(P ){-} P $ is surjective, there exists a point $(x_1, y_1)=P_1 \in E(\overline{\mathbb F_q})$ such that $\phi(P_1)=P_1 + P_0$. Under the definition
\[
\mathfrak{M}:= \{ f \in \mathbb F_q[x,y]: f(x_1, y_1)=0 \}\,, \quad K':=\mathbb F_q(x_1, y_1) \subset \overline{\mathbb F_q}\,,
\]
it is clear that the map $\mathbb F_q[x,y]\to K$ sending $x\mapsto x_1, y \mapsto y_1$ induces an isomorphism $\mathbb F_q[x,y]/\mathfrak{M} \cong K'$.
To prove that $\mathfrak{M}$ is an elliptic presentation of $K'$, we are left to show that $q>2$ and $[K':\mathbb F_q]>2$: the first is a consequence of the inequality $k_0=\lceil\log_p n \rceil >1$, and the latter holds because, by (\ref{eq:proof_deg_n}), the degree of $\mathbb F_q \subset K'$ is equal to the order $n$ of $P_0$, and $n>p\ge 2$.
Since $[K':\mathbb F_q]=n$ divides $[K':\mathbb F_p]$, the field $K'$ has a subfield with $p^n$ elements. In other words $K$ can be embedded in $K'$. Moreover we have
\[
\begin{aligned}
\log(\# K')= n \log q < 2n \log(p) (\log_p(n) {+} 1) \leq 4 n \log(n) \leq 13 \log(\# K) \log \log (\# K) , \\
q =p^{2 \lceil \log_p n \rceil} < p^{2+2\log_p n} = (pn)^2 \leq n^4 < \log(q^n)^4 = \log(\# K')^4 \,. \qquad \qquad
\end{aligned}
\]
We now prove that it is possible to compute such $K'$ and $\mathfrak{M}$ in polynomial time in $\log(\# K)$. We describe a procedure following the abstract part of the proof. Computing $k_0,q,n_1$ is easy. We can construct a field $\mathbb F_q$ by testing the irreducibility of all polynomials of degree $2k_0$ over~$\mathbb F_p$ until an irreducible $\nu$ is found and define $\mathbb F_q= \mathbb F_p[T]/\nu$; since there are fewer than $n^2$ polynomials of this type, this takes polynomial time. Similarly we can find an elliptic curve $E$ with an $\mathbb F_q$-point $P_0$ of order $n$ in polynomial time, by listing all possible Weierstrass equations (there are fewer than $q^6$), testing whether they define an elliptic curve and, when they do, enumerating all their $\mathbb F_q$-points. Then, using the addition formula on $E$, we write down the ideal $I \subset \mathbb F_q[x,y]$ whose vanishing locus inside $\mathbb A^2$ is the set of points $P=(x, y) \in E(\overline{\mathbb F_q})$ such that $\phi(P) = P + P_0$. As we showed before, the set of such points is non-empty, hence $I$ is a proper ideal and we can find a maximal ideal $\mathfrak{M}$ containing $I$. We don't need general algorithms for primary decomposition since we can take $\mathfrak{M}=(\mu(x), \lambda(x,y))$, with $\mu$ being an
irreducible factor of the generator of the ideal $I{\cap}\mathbb F_q[x]$ and $\lambda(x,y)$ being an irreducible factor of the image of the Weierstrass equation of $E$ inside $(\mathbb F_q[x]/\mu)[y]$. Since the Weierstrass polynomial is monic in $y$, we can assume that $\lambda$ is monic in $y$ too. Hence there is a point $P_1 =(x_1, y_1)$ in the vanishing locus of $(\mu(x), \lambda(x,y))=\mathfrak{M}$. Since $\mathfrak{M}$ contains $I$, the point $P_1$ lies on $E$ and satisfies $\phi(P_1)=P_1+P_0$. The maximality of $\mathfrak{M}$ implies that $\mathbb F_q[x,y]/\mathfrak{M} = \mathbb F_q(x_1,y_1)=K'$. Hence $\mathfrak{M}$ is the elliptic presentation we want.
\end{proof}
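As an illustration of the search in this proof, the following \textit{SageMath} sketch finds, for toy parameters, a curve $E/\mathbb F_q$ with a rational point $P_0$ of order $n$; all concrete values are illustrative, and the remaining step (extracting $\mathfrak{M}$ from the ideal cut out by $\phi(P)=P+P_0$) proceeds by the factorizations described above.
\begin{verbatim}
# Toy search for (E, P0) as in the proof (illustrative parameters).
p, n = 3, 5                          # target field GF(p^n), n > p
k0 = ceil(log(n, p)); q = p**(2*k0)  # here q = 81
Fq = GF(q, 'a')
while True:
    a4, a6 = Fq.random_element(), Fq.random_element()
    try:
        E = EllipticCurve(Fq, [a4, a6])
    except ArithmeticError:          # singular Weierstrass equation
        continue
    if E.order() % n != 0:
        continue
    P0 = (E.order() // n) * E.random_point()
    if P0.order() == n:              # rational point of order n found
        break
\end{verbatim}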
\begin{notation}
For the rest of the article $\mathbb F_q$ is a finite field with $q$ elements, $\overline{\mathbb F_q}$ is its algebraic closure, $K$ is a finite extension of $\mathbb F_q$, the ideal $\mathfrak{M}\subset \mathbb F_q[x,y]$ is a $(E/\mathbb F_q,P_0)$-presentation of $K$, the map $\phi\colon E \to E$ is the $q$-th Frobenius and $P_1=(x_1, y_1) \in E(\overline{\mathbb F_q})$ is a point such that $\mathfrak{M}= \{ f \in \mathbb F_q[x,y]: f(x_1, y_1)=0 \}$. By $O_E$ we denote the neutral element of $E(\mathbb F_q)$.
\end{notation}
\section{Traps}\label{sec:traps}
As first pointed out in \cite{traps}, there are certain polynomials, called ``traps'' for which the descent procedure in \cite{BGJT} does not work. In \cite{BGJT} such traps are dealt with differently than the other polynomials. In \cite{ZKG} the notion of ``trap'' is extended: it includes not only polynomials for which the descent procedure is proven not to work, but also polynomials for which the strategy to prove the descent's correctness does not work. In \cite{ZKG} traps are avoided by the algorithm.
In the following sections we describe a descent procedure stated in terms of points and divisors on $E$, and there are certain points in $E(\overline{\mathbb F_q})$ that play the role of ``traps'', as in \cite{ZKG}. The definition of this subset of $E(\overline{\mathbb F_q})$ is rather cumbersome, but it is easy to deduce that there are fewer than $15q^4$ traps. In particular, in contrast to \cite{ZKG}, we can include them in the factor base without enlarging it too much.
\begin{Definition}
A point $P\in E(\overline{\mathbb F_q})$ is a \emph{trap} if it satisfies one of the following conditions:
\[
\begin{aligned}
2P =0\,, \quad \text{ or }\quad (2\phi-\mathrm{Id})(\phi^2-\phi+\mathrm{Id})(P) = P_0\,, \quad \text{ or } \quad (2\phi-\mathrm{Id})(\phi+\mathrm{Id})(P) = 2P_0\\
\text{or } (\phi^4-\mathrm{Id})(P) = 4P_0\,, \quad \text{ or } 2(\phi^3-\mathrm{Id})(P) = 6P_0\,, \quad \text{ or }\quad (2\phi+\mathrm{Id})(\phi-\mathrm{Id})(P) = 2P_0\,.
\end{aligned}
\]
\end{Definition}
In (\ref{eq:use_traps}) and at the beginning of the proof of Claim \ref{hope_distinct_points} we explain why these points interfere with our strategy of proof.
\section{Divisors and discrete logarithm}\label{sec:divs}
We recall that the Galois group of $\mathbb F_q$ acts on the group of divisors on $E$ by the formula
\[
\sigma \left(\sum_{P \in E(\overline{\mathbb F_q})} n_P P \right) =\sum_{P \in E(\overline{\mathbb F_q})} n_P \, \sigma(P)\,.
\]
For any algebraic extension $\mathbb F_q \subset k$ we denote by $\mathrm{Div}_k(E)$ the set of divisors \emph{defined over~$k$}, namely the divisors $D$ such that $\sigma D = D$ for all $\sigma \in \mathrm{Gal}(\overline{\mathbb F_q}/k)$. We say that a divisor is \emph{irreducible over~$k$} if it is the sum, with multiplicity $1$, of all the $\mathrm{Gal}(\overline{\mathbb F_q}/k)$-conjugates of some point $P \in E(\overline{\mathbb F_q})$. Every divisor defined over~$k$ is a $\mathbb Z$-combination of irreducible divisors over~$k$.
We refer to \cite[Chapter $2$]{Sil} for the definitions of principal divisor and support of a divisor.
We need two quantities to describe the ``complexity'' of a divisor.
The first one is the \emph{absolute degree} of a divisor, defined as
\[
\mathrm{absdeg}\left( \sum_{P \in E(\overline{\mathbb F_q})} n_P (P)\right) := \sum_{P \in E(\overline{\mathbb F_q})} |n_P| \,.
\]
The second quantity is analogous to the degree of the splitting field of a polynomial, but we decide to ``ignore'' trap points.
Given a divisor $D\in \mathrm{Div}_k(E)$, we denote by $D^{\mathrm{noTrap}}$ the part of $D$ that is supported outside the set of trap points, which is also defined over $k$ because the set of trap points is $\mathrm{Gal}(\overline{\mathbb F_q}/\mathbb F_q)$-invariant. We define the \emph{essential degree of $D$ over~$k$} to be the least common multiple of the degrees of the irreducible divisors appearing in $D^\mathrm{noTrap}$. In other words, if we denote by $k(D^{\mathrm{noTrap}})$ the minimal algebraic extension $\widetilde k \supset k$ such that the support of $D^{\mathrm{noTrap}}$ is contained in $E(\widetilde k)$, then
\[
\mathrm{essdeg}_{k} (D):= [k(D^{\mathrm{noTrap}}): k] \,.
\]
If $D^\mathrm{noTrap} =0$ we take $\mathrm{essdeg}_{k}(D) = 1$.
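For instance, if $D = D_1 + 2D_2 + D_3$, where $D_1, D_2$ are irreducible divisors of degrees $4$ and $6$ supported outside the trap points and $D_3$ is irreducible and supported on trap points, then
\[
\mathrm{absdeg}(D) = 4 + 2\cdot 6 + \deg D_3 \,, \qquad \mathrm{essdeg}_{\mathbb F_q}(D) = \mathrm{lcm}(4,6)=12\,.
\]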
Now consider the discrete logarithm problem in a field having an elliptic presentation $\mathfrak{M}$. First of all, if $q$ is small compared to $\# K$, for example $q \leq (\log \# K)^{4}$ as in Proposition \ref{propfieldswithpresentation}, and if we are able to compute discrete logarithms in $K^\times / \mathbb F_q^\times$ in quasi-polynomial time, then we can also compute discrete logarithms in $K^\times$ in quasi-polynomial time (the remaining discrete logarithm, in a group of order $q-1$, can be computed by exhaustive search). Hence in the rest of the article we are concerned with computing discrete logarithms in $K^\times / \mathbb F_q^\times$.
Denoting $\mathbb F_q[x,y]_\mathfrak{M}$ the localization of $\mathbb F_q[x,y]$ at the maximal ideal $\mathfrak{M}$, we have
\[
K \, \cong \, \mathbb F_q[x,y]/\mathfrak{M} \,\cong\, \mathbb F_q[x,y]_\mathfrak{M}/\mathfrak{M}_\mathfrak{M} \,.
\]
An element $f$ of $(\mathbb F_q[x,y]_\mathfrak{M})^\times$ defines a rational function on $E$ which is defined over~$\mathbb F_q$ and regular and non-vanishing in $P_1$.
We represent elements in $K^\times /\mathbb F_q^\times$ with elements of $\mathbb F_q(E)$ that are regular and non-vanishing on $P_1$.
Let $g,h$ be elements of $\mathbb F_q(E)$ both regular and non-vanishing on $P_1$ and let us suppose that $g$ generates the group $K^\times/\mathbb F_q^\times$. Then the logarithm of $h$ in base $g$ is a well defined integer modulo $\tfrac{\# K{-}1}{q{-}1}$ that we denote $\log_{\mathfrak{M},g}(h)$ or simply $\log h$. Since we are working modulo $\mathbb F_q^\times$, the logarithm of $h$ only depends on the divisor of zeroes and poles of $h$: if $h' \in \mathbb F_q(E)$ satisfies $\mathrm{div}(h) = \mathrm{div}(h')$, then $h/h' \in \mathbb F_q^\times$ and consequently $\log(h) = \log(h')$. Hence, putting
\[
\log (\mathrm{div}(h)) := \log (h) \,,
\]
we define the discrete logarithm as homomorphism whose domain is the subgroup of $\mathrm{Div}_{\mathbb F_q}(E)$ made of principal divisors, supported outside $P_1$ and whose image is $\mathbb Z/(\tfrac{\# K{-}1}{q{-}1})\mathbb Z$.
The kernel of this morphism is a subgroup of $\mathrm{Div}_{\mathbb F_q}(E)$, hence it defines the following equivalence relation on $\mathrm{Div}_{\mathbb F_q}(E)$
\begin{eqn}\label{eq_equiv_divisors}
\begin{aligned}
D_1\sim D_2 & \iff D_1-D_2 \in \mathrm{Ker}(\log) \\
& \iff \exists f \in \mathbb F_q(E) \mbox{ such that }f(P_1) =1 \mbox{ and } \mathrm{div}(f) = D_1 - D_2 \,.
\end{aligned}
\end{eqn}
We notice that this equivalence relation does not depend on $g$ and that,
given rational functions $h_1, h_2 \in \mathbb F_q(E)$ regular and non-vanishing on $P_1$, we have $\log h_1 = \log h_2$ if and only if $\mathrm{div}(h_1) \sim \mathrm{div}(h_2)$. Motivated by this, for all divisors $D_1,D_2 \in \mathrm{Div}_{\mathbb F_q}(E)$ we use the notation
\[
\log_\mathfrak{M} D_1 = \log_\mathfrak{M} D_2 \iff D_1\sim D_2 \,.
\]
Notice that we do not define the expression $\log_{\mathfrak{M}} (D)$ or $\log_{\mathfrak{M},g}(D)$ for any $D$ in $\mathrm{Div}_{\mathbb F_q}(E)$, since the function $\log$ might not extend to a morphism $\mathrm{Div}_{\mathbb F_q}(E) \to \mathbb Z/(\tfrac{\# K{-}1}{q{-}1})\mathbb Z$. In our algorithm we use the equivalence relation (\ref{eq_equiv_divisors}) to recover equalities of the form $\log h_1 = \log h_2$.
\section{The main algorithm} \label{sec_algo}
As in \cite{ZKG} our algorithm is based on a descent procedure, stated in terms of divisors on $E$.
\begin{Theorem}\label{theo_descent}
There exists an algorithm, described in the proof, that takes as input an $(E/\mathbb F_q,P_0)$-presentation $\mathfrak{M}$ and a divisor $D \in \mathrm{Div}_{\mathbb F_q}(E)$ such that $\mathrm{essdeg}_{\mathbb F_q}(D) = 2^{m}$ for some integer $m\ge 7$ and computes a divisor $D' \in \mathrm{Div}_{\mathbb F_q}(E)$ such that
\[
\log_\mathfrak{M} D= \log_\mathfrak{M} D' \,, \quad (\mathrm{essdeg}_{\mathbb F_q} D') \mid 2^{m-1}\,, \quad \mathrm{absdeg} (D') \leq 4q^2 \mathrm{absdeg} D \,.
\]
This algorithm is probabilistic and runs in expected polynomial time in $q\mathrm{absdeg}(D)$.
\end{Theorem}
Applying the algorithm of the above theorem repeatedly, we deduce the following result.
\begin{Corollary}\label{cor:descent}
There exists an algorithm, described in the proof, that takes as input an $(E/\mathbb F_q,P_0)$-presentation and a divisor $D \in \mathrm{Div}_{\mathbb F_q}(E)$ such that $\mathrm{essdeg}_{\mathbb F_q}D = 2^m$ for some integer $m$ and computes a divisor $D' \in \mathrm{Div}_{\mathbb F_q}(E)$ such that
\[
\log_\mathfrak{M} D= \log_\mathfrak{M} D' \,, \quad \mathrm{essdeg}_{\mathbb F_q} D' \mid 64 \,,\quad \mathrm{absdeg}(D')\leq (2q)^{2m} \mathrm{absdeg}(D) \,.
\]
This algorithm is probabilistic and runs in expected polynomial time in $q^m \mathrm{absdeg}(D)$.
\end{Corollary}
The algorithm in \cite{ZKG} is based on the descent procedure \cite[Theorem 3]{ZKG}. Using the same ideas we use the descent procedure of the last corollary to describe our main algorithm, which computes discrete logarithms in finite fields with an elliptic presentation.
The idea is to set up an index calculus with factor base the irreducible divisors whose essential degree divides $64$. To collect relations we use a ``zig-zag descent'': for every $f = g^ah^b$, we first use the polynomial $\mu$ determined in Remark \ref{rem:mu} to find $f' \equiv f \bmod \mathfrak{M}$ such that the essential degree of $\mathrm{div}(f')$ is a power of $2$, and we then apply the descent procedure to express $\log(f)=\log(f')$ in terms of the elements of the factor base.
\subparagraph*{Main Algorithm}
Input: an $(E/\mathbb F_q,P_0)$-presentation $\mathfrak{M} \subset \mathbb F_q[x,y]$ of a field $K$ and two polynomials $g,h \in \mathbb F_q[x,y]\setminus \mathfrak{M}$ such that $g$ generates the group $\left( \mathbb F_q[x,y]/\mathfrak{M}\right)^\times/\mathbb F_q^\times$.
Output: an integer $z$ such that
\[
g^z \equiv \gamma \cdot h \pmod{\mathfrak{M}} \quad \mbox{ for some }\gamma \in \mathbb F_q^\times \,,
\]
which is equivalent to $g^z=h$ in the group $K^\times/\mathbb F_q^\times$.
\begin{enumerate}
\item \emph{Preparation:} Compute the monic polynomial $\mu \in \mathbb F_q[x]$ generating the ideal $\mathfrak{M}\cap \mathbb F_q[x]$. Compute polynomials $\tilde g, \tilde h \in \mathbb F_q[x]$ such that $\tilde g \equiv g$ and $\tilde h \equiv h$ modulo~$\mathfrak{M}$. Put $c := \# E(\mathbb F_q)$, $n:= \deg \mu$ and $m := \lceil \log_2 n \rceil +3 $.
\item \emph{Factor base:} List the irreducible divisors $D_1,\ldots, D_t\in \mathrm{Div}_{\mathbb F_q}(E)$ whose support does not contain $P_1$ and which either have degree dividing $64$ or are supported on the trap points.
\item \emph{Collecting relations}: For $j=1,\ldots, t{+}1$ do the following:
Pick random integers $\alpha_j,\beta_j \in \{1,\ldots, \tfrac{q^n-1}{q-1}\}$ and compute $\tilde{g}^{\alpha_j} \tilde h^{\beta_j}$. Pick random polynomials $f(x)$ of degree $2^m$ such that $f\equiv \tilde{g}^{\alpha_j} \tilde h^{\beta_j} \pmod \mu$ until $f$ is irreducible (a sketch of this sampling step is given after this list). Apply the descent procedure in Corollary \ref{cor:descent} to find $v_j = (v_{j,1},\ldots,v_{j,t}) \in \mathbb Z^t$ such that
\[
\log_{\mathfrak{M}} \left( \mathrm{div}(f) \right) = \log_\mathfrak{M} \left(v_{j,1} D_1 + \ldots + v_{j,t} D_t \right) \,.
\]
\item \emph{Linear algebra}: Compute $d_1, \ldots, d_{t+1} \in \mathbb Z$ such that $\gcd(d_1, \ldots, d_{t+1})=1$ and
\[ d_1v_1+\ldots +d_{t+1}v_{t+1} \equiv (0,\ldots,0) \pmod{ \tfrac{q^n-1}{q-1}c }\,.
\]
Put $a:= d_1 \alpha_1 + \ldots + d_{t+1}\alpha_{t+1} $ and $b := d_1 \beta_1 + \ldots + d_{t+1}\beta_{t+1}$.
\item \emph{Finished?}: If $b$ is not invertible modulo $\tfrac{q^n-1}{q-1}$ go back to step $3$, otherwise output
\[
z := -ab^{-1} \,\left(\bmod{\,\tfrac{q^n-1}{q-1}}\right)
\]
\end{enumerate}
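The sampling in step $3$ can be sketched as follows in \textit{SageMath}; the polynomial \verb+mu+ and the residue \verb+target+ are illustrative placeholders for the actual $\mu$ and $\tilde g^{\alpha_j}\tilde h^{\beta_j} \bmod \mu$.
\begin{verbatim}
q = 81
R.<x> = GF(q, 'a')[]
mu = R.irreducible_element(5)        # placeholder for the mu of step 1
target = R.random_element(degree=4)  # placeholder for g^a h^b mod mu
m = ceil(log(mu.degree(), 2)) + 3
while True:
    f = target + mu * R.random_element(degree=2**m - mu.degree())
    if f.degree() == 2**m and f.is_irreducible():
        break  # f = target (mod mu), deg f = 2^m, f irreducible
\end{verbatim}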
\subparagraph*{Analysis of the main algorithm}
We first prove, assuming Theorem \ref{theo_descent}, that the algorithm, when it terminates, gives correct output. First of all we notice that, as explained in Remark \ref{rem:mu}, the polynomials $\mu, \tilde g$ and $\tilde h$ exist and that $\tilde g$ and $\tilde h$ define the same element as $g$, respectively $h$, in $K \cong \mathbb F_q[x,y]/\mathfrak{M}$.
Let $d_j, \alpha_j, \beta_j$ and $v_j$ be the integers and vectors of integers stored at the beginning of the fourth step the last time it is executed. By definition of $d_j$, we have
\[
\sum_{j=1}^{t+1} \sum_{i=1}^t d_j v_{j,i} D_i = \tfrac{q^n-1}{q-1} c \cdot
D \,,
\]
for a certain $D \in \mathrm{Div}_{\mathbb F_q}(E)$.
Since for all $j$ the divisor $\sum_i v_{j,i}D_i$ is principal, the divisor $D$ has degree $0$; moreover $c D$ is principal because $c= \# \mathrm{Pic}^0(E/\mathbb F_q)$ annihilates $\mathrm{Pic}^0(E/\mathbb F_q)$. Choosing $\lambda$ in $\mathbb F_q(E)$ such that $\mathrm{div}(\lambda) = cD$, we have
\begin{eqn}\label{eq:lambda}
\sum_{j=1}^{t+1} \sum_{i=1}^t d_j v_{j,i} D_i = \mathrm{div}(\lambda^{\frac{q^n-1}{q-1}})\,.
\end{eqn}
Writing $\log$ for $\log_{\mathfrak{M},g}$, by definition of $v_j$ we have
\begin{equation*}\label{eq:v_j}
\log( g^{\alpha_j} h^{\beta_j}) = \log \left(\sum_{i=1}^t v_{j,i} D_i \right) \,.
\end{equation*}
This, together with Equation (\ref{eq:lambda}), imply the following equalities in $\mathbb Z/\tfrac{q^n-1}{q-1}\mathbb Z$
\[
\begin{aligned}
a+ b\log(h) &= \sum_{j=1}^{t+1} d_j(\alpha_j + \beta_j \log(h)) = \sum_{j=1}^{t+1} d_j \log( g^{\alpha_j} h^{\beta_j})
= \sum_{j=1}^{t+1} d_j \log \left(\sum_{i=1}^t v_{j,i} D_i \right) \\
& = \log \left( \sum_{j=1}^{t+1} \sum_{i=1}^t d_j v_{j,i} D_i \right) = \log\left(\mathrm{div}(\lambda^{\frac{q^n-1}{q-1}})\right) =\tfrac{q^n-1}{q-1} \log(\lambda) = 0 \,,
\end{aligned}
\]
implying that the output $z$ of the algorithm is correct.
We now estimate the running time step by step.
The first step can be performed with easy Groebner basis computations. Now the second step. We represent irreducible divisors $D$ not supported on $O_E$ in the following way: either $D$ is the vanishing locus of a prime ideal $(a(x),W(x,y))$ with $a$ monic and irreducible and $W$ the Weierstrass polynomial defining $E$, or $D$ is the vanishing locus of a prime ideal $(a(x),y-b(x))$ for some polynomials $a,b \in \mathbb F_q[x]$ and $a$ monic irreducible; in the first case $\deg D = 2\deg a$, in the second case $\deg D = \deg a$.
We can list all the irreducible divisors with degree dividing $64$ by listing all monic irreducible polynomials $\mu_1, \ldots, \mu_r \in \mathbb F_q[x]$ of degree dividing $64$ and, for each $i$ compute the prime ideals containing $(\mu_i, W)$, which amounts to factoring $W$ as a polynomial in $y$, considered over the field $\mathbb F_q[x]/\mu_i$.
Listing all the divisors supported on the trap points can be done case by case. For example we can list the irreducible divisors supported on the set $S:= \{P \in E(\overline{\mathbb F_q}) : \phi^4(P)-P = 4P_0\}$ by writing down, with the addition formula on $E$, an ideal $J \subset \mathbb F_q[x,y]$ whose vanishing locus is $S \subset \mathbb A^2(\overline{\mathbb F_q})$ and computing all the prime ideals containing $J$. The divisor $O_E$ appears among $D_1,\ldots, D_t$ because $O_E$ is a trap point.
Since there are $q^{64}$ monic polynomials of degree $64$ and at most $15q^4$ trap points and since, using \cite{Ber}, factoring a polynomial of degree $d$ in $\mathbb F_q[x]$ takes on average $O(\log(q)d^3)$ operations, the second step takes polynomial time in $q$.
Moreover, we have $t \le 2q^{64}$.
Now the third step. By \cite[Theorem 5.1]{Dirichlet}, if $f(x)$ is a random polynomial of degree $2^m$ congruent to $\tilde g^{\alpha_j} \tilde h^{\beta_j}$ modulo~$\mu$, then the probability of $f$ being irreducible is at least $2^{-m-1}$. Therefore finding a good $f$ requires on average $O(2^m) = O(n)$ primality tests, hence $O(n^4\log q)$ operations. By assumption finding the vector $v_j$ requires polynomial time in $q^m 2^{m+1}$. We deduce that the third step has probabilistic complexity $t q^{O(\log n)} = q^{O(\log n)}$.
The fourth step can be performed by computing a Hermite normal form of the matrix having the $v_j$'s as columns. Since $c\leq q{+}2\sqrt{q}+1$, the entries of the $v_j$ are at most as big as $4q^{n+1}$. Therefore the fourth step is polynomial in $t\log(q^n)$, hence polynomial in $n$.
The last step only requires arithmetic modulo $(q^n{-}1)/(q{-}1)$.
To understand how many times each step is repeated on average, we need to estimate the probability that, in the last step, $b$ is invertible modulo $(q^n{-}1)/(q{-}1)$; to do so we treat the quantities in the algorithm as random variables. The vector $(d_1,\ldots, d_{t+1})$ only depends on the elements $g^{\alpha_j}h^{\beta_j}$ and on the randomness contained in the descent procedure and in step $2$. Since the $\alpha_j$'s and $\beta_j$'s are independent variables and since $g$ is a generator, we deduce that the vector $(\beta_1 , \ldots, \beta_{t+1})$ is independent of $(g^{\alpha_1}h^{\beta_1}, \ldots, g^{\alpha_{t+1}}h^{\beta_{t+1}})$, hence also independent of the vector $(d_1, \ldots, d_{t+1})$. Since $(\beta_1 , \ldots, \beta_{t+1})$ takes all values in $(\mathbb Z/(q^n{-}1)\mathbb Z)^{t+1}$ with the same probability and $\mathrm{gcd}(d_1,\ldots, d_{t+1})=1$,
\[
b= d_1 \beta_1 + \ldots + d_{t+1}\beta_{t+1}
\]
takes all values in $\mathbb Z/ (q^n-1)\mathbb Z$ with the same probability. Hence
\[
\Big(\text{probability that $b$ is coprime to }\tfrac{q^n-1}{q-1} \Big) = \phi\left(\tfrac{q^n-1}{q-1}\right)/\tfrac{q^n-1}{q-1} \gg \frac{1}{\log \log q^n}\,.
\]
When running the algorithm, the first and the second step get executed once and the other steps get executed the same number of times, say $r$, whose expected value is the inverse of the above probability. Since $r$ is $O(\log \log (q^n))$ on average and each step has average complexity at most $q^{O(\log n)}$, the average complexity of the algorithm is $q^{O(\log n)}$.
Hence, assuming Theorem \ref{theo_descent}, we have proved the following theorem.
\begin{Theorem}\label{maintheorem3}
The above Main Algorithm solves the discrete logarithm problem in the group $K^\times/\mathbb F_q^\times$ for all finite fields $K$ having an elliptic presentation $\mathfrak{M} \subset \mathbb F_q[x,y]$. It runs in expected time $ q^{O(\log[K:\mathbb F_q])}$.
\end{Theorem}
Theorem \ref{maintheorem1} follows from Theorem \ref{maintheorem3} and Proposition \ref{propfieldswithpresentation}: the latter states that any finite field $K$ of small characteristic can be embedded in a slightly larger field $K'$ having an elliptic presentation $\mathfrak{M} \subset \mathbb F_q[x,y]$ with $ q \leq \log (\# K')^4$, and Theorem \ref{maintheorem3} implies that the discrete logarithm problem is at most quasi-polynomial for such a $K'$. Moreover, by Proposition \ref{propfieldswithpresentation}, such a $K'$, together with its elliptic presentation, can be found in polynomial time in $\log(\# K)$; by \cite{LenIso} we can compute an embedding $K \hookrightarrow K'$ in polynomial time in $\log(\# K)$; and by \cite[Theorem $15$]{RosSch} a random element $g' \in K'$ has probability $\phi(\# K'{-}1)/\# K' \gg 1/ \log \log \# K'$ of being a generator of $K'^\times$. Hence, given elements $g,h \in K$, we can compute $\log_g(h)$ by embedding $K$ inside $K'$ and trying to compute the pair $(\log_{g'}g, \log_{g'}h)$ for different random values of $g' \in K'$.
Proposition \ref{propfieldswithpresentation} is already proven, while Theorem \ref{maintheorem3} relies on the existence of a descent procedure as described in Theorem \ref{theo_descent}. In the rest of the article, we describe this descent procedure.
\section{The descent procedure}\label{sec:idea_descent}
In this section we describe the descent procedure of Theorem \ref{theo_descent}. To do so, we first make a couple of reductions, then split the descent into two steps and, in both steps, reduce our problem to the computation of $k$-rational points on certain varieties, for a suitable extension $k$ of $\mathbb F_q$. In Sections \ref{sec:lemma}, \ref{sec:3-2} and \ref{sec:43} we give the exact definition of these varieties and we prove that they have many $k$-rational points, which implies that our algorithm has a complexity as in Theorem \ref{theo_descent}.
Let $D$ be as in Theorem \ref{theo_descent}. If $D$ is supported on the set of trap points, we can just pick $D'=D$, hence we can suppose that $D$ is supported outside the traps. Moreover,
we can write $D$ as a combination of divisors $D_i$ that are irreducible over~$\mathbb F_q$, apply the descent to the $D_i$'s and take a linear combination of the results to reconstruct a possible $D'$.
Hence, we can suppose that $D$ is irreducible over~$\mathbb F_q$, that is, if we write $2^m = 4l$, we can suppose that
\[
D = Q + \sigma Q + \ldots + \sigma^{4l-1}Q\,,
\]
for $Q$ a non-trap point on $E$ such that $[\mathbb F_q(Q ):\mathbb F_q] = 4l =2^{m}$ and $\sigma$ a generator of $\mathrm{Gal}(\mathbb F_q(Q)/\mathbb F_q)$.
We can do a sort of ``base change to $k$''.
Let $k$ be the unique subfield of $\mathbb F_q(Q)$ such that $[k:\mathbb F_q]=l$ and let us define
\[
\tilde D := Q + \sigma^lQ + \sigma^{2l}Q + \sigma^{3l}Q \quad \in \mathrm{Div}_k(E)\,.
\]
If we are able to find a divisor $\tilde D' \in \mathrm{Div}_k(E)$ and a rational function $g \in k(E)$ such that
\begin{eqn}\label{eq:g=1_onGaloisconjugates}
\begin{aligned}
& \qquad \mathrm{absdeg}\tilde D' \leq 16 q^2\,, \qquad \qquad \mathrm{essdeg}_k \tilde D' \mid 2 \\
& \mathrm{div}(g) = \tilde D - \tilde D'\,, \qquad \qquad g(\tau(P_1)) = 1 \quad \mbox{for all }\tau \in \mathrm{Gal}(\overline{\mathbb F_q}/\mathbb F_q)\,,
\end{aligned}
\end{eqn}
then the divisor
\[
D' := \tilde D' + \sigma (\tilde D') + \ldots + \sigma^{l-1}(\tilde D') \,,
\]
satisfies the conditions in Theorem \ref{theo_descent}:
the absolute and essential degree of $D'$ are easy to estimate and we have $\log_\mathfrak{M} D = \log_\mathfrak{M} D'$ because the rational function $f :=g g^{\sigma}\cdots g^{\sigma^{l-1}}$ satisfies $f(P_1)=1$ and $\mathrm{div}(f) = D - D'$.
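In more detail: $\mathrm{div}(f) = \sum_{i=0}^{l-1}\sigma^i(\mathrm{div}(g)) = \sum_{i=0}^{l-1}\sigma^i(\tilde D - \tilde D') = D - D'$ and, choosing any lift of $\sigma$ to $\mathrm{Gal}(\overline{\mathbb F_q}/\mathbb F_q)$, each factor satisfies $g^{\sigma^i}(P_1) = \sigma^i\big(g(\sigma^{-i}(P_1))\big) = 1$, because $\sigma^{-i}(P_1)$ is a Galois conjugate of $P_1$.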
Hence, in order to prove Theorem \ref{theo_descent}, it is enough to describe a probabilistic algorithm that takes $k$ and $\tilde D$ as input and, in expected polynomial time in $ql$, computes $g$ and $\tilde D'$ as in (\ref{eq:g=1_onGaloisconjugates}).
Such an algorithm can be obtained by applying in sequence the algorithms given by the following two propositions. In other words, the descent procedure is split in two steps.
\begin{Proposition}\label{firsthalfdescentprop}
There is an algorithm, described in the proof, with the following properties:
\begin{itemize}
\item it takes as input an $(E/\mathbb F_q,P_0)$-presentation, a finite extension $\mathbb F_q \subset k$ of degree at least $80$ and a divisor $D \in \mathrm{Div}_k(E)$ such that $\mathrm{essdeg}_{k}D=4$;
\item it computes a rational function $g\in k(E)$ and a divisor $D' = D_1 + D_2 \in \mathrm{Div}_k(E)$ such that
\[
\begin{aligned}
D - D' = \mathrm{div}(g)\,, \quad g(P) = 1 \,\, \mbox{for all }P \in E(\overline{\mathbb F_q}) \mbox{ such that }\phi(P)=P+P_0\,,\\
\mathrm{essdeg}_{k} (D_1) \mid 3\,, \quad \mathrm{essdeg}_{k} (D_2) \mid 2 \,,\quad \mathrm{absdeg} D_1 + \mathrm{absdeg} D_2 \leq 2q \mathrm{absdeg} D\,; \qquad
\end{aligned}
\]
\item it is probabilistic and runs in expected polynomial time in $q{\cdot}\log(\# k){\cdot}\mathrm{absdeg}(D)$.
\end{itemize}
\end{Proposition}
\begin{Proposition}\label{secondhalfdescentprop}
There is an algorithm, described in the proof, with the following properties:
\begin{itemize}
\item it takes as input an $(E/\mathbb F_q,P_0)$-presentation, an extension of finite fields $\mathbb F_q \subset k$ of degree at least $80$ and a divisor $D \in \mathrm{Div}_k(E)$ such that $\mathrm{essdeg}_{k}D=3$;
\item it computes a rational function $g\in k(E)$ and a divisor $D' \in \mathrm{Div}_k(E)$ such that
\[
\begin{aligned}
D-D' = \mathrm{div}(g) \,, \quad g(P) = 1 \,\, \mbox{for all }P \in E(\overline{\mathbb F_q}) \mbox{ such that }\phi(P)=P+P_0\,,\\
\quad \mathrm{essdeg}_{k} (D') \mid 2 \,, \quad \mathrm{absdeg} (D') \leq 2q \mathrm{absdeg} (D) \,; \qquad \qquad \qquad \qquad
\end{aligned}\]
\item it is probabilistic and runs in expected polynomial time in $q{\cdot}\log(\# k){\cdot}\mathrm{absdeg}(D)$.
\end{itemize}
\end{Proposition}
We now describe our strategy of proof for the above propositions. Let $\varepsilon := \mathrm{essdeg}_k(D)$ (hence $\varepsilon=3$ for Proposition \ref{secondhalfdescentprop} and $\varepsilon=4$ for Proposition \ref{firsthalfdescentprop}).
Again, $D$ can be supposed to be irreducible over~$k$ and supported outside the traps, i.e.
\[
D= Q + \ldots + \sigma^{\varepsilon -1}Q\,,
\]
for $Q$ a non-trap point on $E$ such that $[k(Q):k]= \varepsilon$, and $\sigma$ a generator of $\mathrm{Gal}(k(Q)/k)$.
Let $\tau_{P_0}$ be the translation by $P_0$ on $E$ and let $h \mapsto h^\phi $ be the automorphism of $k(E)$ that ``applies $\phi$ to the coefficients of $h$'' (denoting by $x,y$ the usual coordinates on $E$, we have $x^\phi =x$, $y^\phi=y $ and $\alpha^\phi=\alpha^q $ for all $\alpha \in k$). Using this notation, we rephrase one of Joux's ideas (\cite{Joux}) in the following way: for every point $P \in E(\overline{\mathbb F_q})$ such that $\phi(P)=P + P_0$ and for every function $f \in k(E)$ regular on $P$ we have
\begin{eqn}\label{eq:use_P1}
f(P)^q = f^\phi(\phi(P)) = f^\phi(P+P_0) = (f^\phi \circ \tau_{P_0})(P) \,,
\end{eqn}
hence, for all $a,b,c,d \in k$ such that $cf^{q+1}{+} df^q {+} af {+} b$ does not vanish on $P$, the rational function
\begin{eqn}\label{eq:def_g_from_f}
g:=\frac{( cf + d)(f^{\phi}\circ \tau_{P_0}) + af + b}{ cf^{q+1}+ df^q + af + b} \,,
\end{eqn}
satisfies
\[
\text{$g(P) = 1 \quad $ for all $P\in E(\overline{\mathbb F_q})$ such that $\phi(P)=P + P_0$.}
\]
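Indeed, for such a point $P$, Equation (\ref{eq:use_P1}) gives $(f^{\phi} \circ \tau_{P_0})(P) = f(P)^q$, so the numerator of (\ref{eq:def_g_from_f}) evaluates at $P$ to
\[
(cf(P) + d)\,f(P)^q + af(P) + b = cf(P)^{q+1} + df(P)^q + af(P) + b\,,
\]
which is exactly the value of the denominator at $P$.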
Hence, for a function $g$ as in (\ref{eq:def_g_from_f}), one of the requirements of Propositions \ref{firsthalfdescentprop} (respectively \ref{secondhalfdescentprop}) is automatically satisfied. In the algorithm we look for a function $g$ of that form.
We now look for conditions on $f$ and $a,b,c,d$ implying that the function $g$ and the divisor
\begin{eqn} \label{def:D'}
D':= D - \mathrm{div}(g) \,,
\end{eqn}
have the desired properties.
One of these properties is ``$\mathrm{essdeg}_k(D')\leq \varepsilon-1$'', for which it is enough that $[k(P):k]\le \varepsilon -1$ for all the points $P$ in the support of $D'$.
In particular the support of $D'$ contains all the poles of $g$, which are either poles of $f$, poles of $f^\phi \circ \tau_{P_0}$ or zeroes of $cf^{q+1}{+} df^q {+} af {+} b$. Since the zeroes and poles of a function $h\in k(E)$ are defined over an extension of $k$ of degree at most $\deg(h)$, the following conditions are sufficient to take care of the poles of $g$:
\begin{enumerate}[label=(\Roman*), itemsep=0pt, start=1]
\item \label{item1:degree} the function $f$ has at most $\varepsilon{-}1$ poles counted with multiplicity;
\item \label{item2:split} the polynomial $cT^{q+1} + dT^q + aT + b$ splits into linear factors in $k[T]$.
\end{enumerate}
Here we are using another idea from \cite{Joux}, namely that, if $f$ has low degree (i.e. few poles), then the numerator of (\ref{eq:def_g_from_f}) has low degree too, and the denominator has a probability about $1/q^3$ of splitting into linear polynomials in $f$.
Another requirement is for $Q$ and all its conjugates to be zeroes of $g$. Assuming \ref{item1:degree} and \ref{item2:split}, this is equivalent to
\begin{enumerate}[label=(\Roman*), start=3]
\item\label{item3:gamma} $ \smt abcd \cdot f(\sigma^iQ) = -f^\phi(\sigma^iQ + P_0)\quad $ for $i=0,1,\ldots, \varepsilon{-}1$,
\end{enumerate}
where $(\smt abcd, x) \mapsto \smt abcd \cdot x = \tfrac{ax+b}{cx+d}$ is the usual action of $\mathrm{PGL}_2$ on $\P^1$. Notice that the definition of $g$ only depends on the class of $\smt abcd$ in $\mathrm{PGL}_2(k)$.
Notice that Conditions \ref{item1:degree}, \ref{item2:split} and \ref{item3:gamma} together imply that $\mathrm{essdeg}(D')\leq \varepsilon{-}1$: a point $P$ in the support of $D'$ is either a pole of $f$, a pole of $f^\phi \circ \tau_{P_0}$, a zero of $cf^{q+1}{+} df^q {+} af {+} b$ or a zero of the numerator of (\ref{eq:def_g_from_f}); the only case left to treat is the last, where we have $[k(P):k]\leq \varepsilon-2$ because the divisor of zeroes of the numerator of (\ref{eq:def_g_from_f}), which has degree at most $2(\varepsilon{-}1)$, contains both the conjugates of $P$ and the conjugates of $Q$.
Condition \ref{item1:degree} easily implies that $\mathrm{absdeg}(D')$ is at most $2q\varepsilon$.
Finally, as noticed when defining $g$, we want
\begin{enumerate}[label=(\Roman*), start=4]
\item\label{item4:traps} for every point $P$ on $E$ such that $\phi(P)=P+P_0$, the function $f$ is regular on $P$ and $cf^{q+1}{+}df^q{+}af{+}b$ does not vanish on $P$.
\end{enumerate}
We showed that if $(f, \smt abcd )$ satisfies the conditions \ref{item1:degree}, \ref{item2:split}, \ref{item3:gamma}, \ref{item4:traps}, then the formulas (\ref{eq:def_g_from_f}) and (\ref{def:D'}) give $g$ and $D'$ that satisfy the requirements of Proposition \ref{secondhalfdescentprop}, respectively Proposition \ref{firsthalfdescentprop}. In Section \ref{sec:3-2}, respectively Section \ref{sec:43}, we prove that there are many such pairs $(f, \smt abcd )$ and we give a procedure to find them when $\varepsilon=3$, respectively $\varepsilon=4$. In both cases we proceed as follows:
\begin{itemize}
\item\label{fase1f} We choose a family of functions $f$ satisfying \ref{item1:degree} and we parametrize them with $k$-points on a variety $\mathcal F$.
\item\label{fase2C} We impose some conditions slightly stronger than \ref{item2:split}, \ref{item3:gamma}, \ref{item4:traps}, describing a variety $\mathcal{C} \subset \mathcal F{\times}\mathrm{PGL}_2 {\times }\mathbb A^1$: for every point $(f,\smt abcd,z) \in \mathcal{C}(k)$, the pair $(f,\smt abcd)$ satisfies \ref{item1:degree}, \ref{item2:split}, \ref{item3:gamma}, \ref{item4:traps}.
In particular, $\mathcal{C}$ is a curve in the case $\varepsilon=3$ and a surface in the case $\varepsilon=4$.
\item We prove that the geometrically irreducible components of $\mathcal{C}$ are defined over~$k$ and we deduce that $\mathcal{C}(k)$ has cardinality at least $\tfrac{1}{2} (\# k)^{\dim \mathcal{C}}$; this is the point in the proof where we use the technical hypothesis $[k:\mathbb F_q]\ge 80$ (details after Equations (\ref{eq:BettiD32}), (\ref{eq:BettiD43})).
\end{itemize}
Using $\mathcal{C}$ we can easily describe the algorithm of Proposition \ref{firsthalfdescentprop} (respectively Proposition \ref{secondhalfdescentprop}) when $D$ is an irreducible divisor defined over~$k$: one first computes equations for $\mathcal{C}$ (which we describe explicitly), then looks for a point $(f,\smt abcd, z)$ in $\mathcal{C}(k)$ and finally computes $g$ and $D'$ using the formulas (\ref{eq:def_g_from_f}) and (\ref{def:D'}). This procedure takes average polynomial time in $q\log(\# k)$ because, as explained in Sections \ref{subsec:points_32} and \ref{subsec:points_43}, the variety $\mathcal{C}$ is a closed subvariety of $\mathbb A^9$ with degree $O(q^9)$.
\begin{Remark}\label{rem:traps}
If $Q \notin \mathrm{Gal}(\overline{\mathbb F_q}/\mathbb F_q) \cdot P_1$ is a point such that $\phi(Q)=Q+P_0$, then Equation (\ref{eq:use_P1}) implies that conditions \ref{item3:gamma} and \ref{item4:traps} exclude each other. This explains why such points $Q$ create problems for our strategy and need to be marked as \emph{traps}.
\end{Remark}
\section{A technical lemma}\label{sec:lemma}
In this section we take a break from our main topic and we prove Lemma \ref{lem:good_curves}, which we use to prove that the varieties $\mathcal{C}$ used in the algorithms of Propositions \ref{firsthalfdescentprop} and \ref{secondhalfdescentprop} have geometrically irreducible components defined over~$k$.
Our method for that is to look at the field of constants:
for any extension of fields $k \subset \mathbb{K}$, its \emph{field of constants} is the subfield of $\mathbb{K}$ consisting of the elements of $\mathbb{K}$ that are algebraic over~$k$. In particular, when $k$ is perfect, an irreducible variety $\mathcal{C}/k$ is geometrically irreducible if and only if $k$ is equal to the field of constants of the extension $k\subset k(\mathcal{C})$.
In the next proposition we study the splitting field of polynomials $cT^{q+1} {+} dT^q {+} aT {+} b$ (as in condition \ref{item2:split}) over fields with a valuation, so that we can apply it to function fields.
\begin{Proposition} \label{prop:pure_fields}
Let $\mathbb F_q \subset k$ be an extension of finite fields and let $k \subset \mathbb{K}$ be a field extension with field of constants $k$. Let $v:\mathbb{K}^\times \to \mathbb Z$ be a valuation
with ring of integral elements $\mathcal{O}_v \subset \mathbb{K}$ and generator $\pi_v$ of the maximal ideal of $\mathcal{O}_v$.
Let $a,b,c,d$ be elements of $\mathcal{O}_v$ such that
\begin{subeqn}\label{eq:congruence_derivative}
\begin{aligned}
& v(ad-bc) =1, \qquad v(d^qc - ac^q)=0 \qquad \text{and} \quad \\
& c\lambda^q - c^q(ad-bc)\lambda^{-1} \not\equiv d^qc - ac^q \pmod{\pi_v^2} \quad \forall \lambda \in \mathcal{O}_v^\times\,.
\end{aligned}\end{subeqn}
Then the splitting field of the polynomial
\[
F(T) := cT^{q+1} + dT^q + aT + b \quad \in \mathbb{K}[T]\,,
\]
is an extension of $k$ having field of constants equal to $k$.
\end{Proposition}
\begin{proof}
For any field extension $\mathbb{K} \subset \widetilde{\mathbb{K}}$, we denote by $\widetilde{\mathbb{K}}(F)$ the splitting field of $F$ over~$\widetilde{\mathbb{K}}$, which is a separable extension of $\widetilde{\mathbb{K}}$ because the discriminant of $F$ is a power of $ad{-}bc$, which is non-zero.
Since the field of constants of $k \subset \mathbb{K}$ is equal to $k$, the ring $\mathbb{K}' :=\mathbb{K}\otimes_k \overline k$ is a field and the
statement of the proposition is equivalent to the equality
\[
\mathrm{Gal}(\mathbb{K}(F)/\mathbb{K}) = \mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}') \,.
\]
By \cite[Theorems $2.5$ and $3.2$]{HidePGL2} there exists a bijection $\{\text{roots of }F\} \leftrightarrow \P^1(\mathbb F_q)$ that identifies the action of $\mathrm{Gal}(\mathbb{K}(F)/\mathbb{K})$ on the roots with the action of a subgroup of $G := \mathrm{PGL}_2(\F_q)$ on $\P^1(\mathbb F_q)$. We choose such a bijection and we identify $\mathrm{Gal}(\mathbb{K}(F)/\mathbb{K})$ and $\mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}')$ with two subgroups of $G$.
If we prove that $\mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}')$ contains a Borel subgroup $B$ of $G$ the proposition follows: the only subgroups of $G$ containing $B$ are the whole $G$ and $B$ itself and, since $B$ is not normal inside $G$, we deduce that either $\mathrm{Gal}(\mathbb{K}(F)/\mathbb{K})=\mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}')=B$ or $\mathrm{Gal}(\mathbb{K}(F)/\mathbb{K}) =\mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}')=G$.
In the rest of the proof we show that $\mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}')$ contains a Borel subgroup working locally at $v$. We choose an extension of $v$ to $\mathbb{K}'$ and consider the completion $\mathbb{K}'_v$ of $\mathbb{K}'$. Since $\mathrm{Gal}(\mathbb{K}'_v(F)/\mathbb{K}'_v)$ is a subgroup of $\mathrm{Gal}(\mathbb{K}'(F)/\mathbb{K}')$, it is enough to show that $\mathrm{Gal}(\mathbb{K}'_v(F)/\mathbb{K}'_v)$ is a Borel subgroup to prove the proposition.
Since $ad{-}bc \equiv 0$ and $c\not\equiv 0$ modulo~${\pi_v}$, we have
\[
F(T) \equiv c\Big(T^q + \frac ac\,\Big) \Big(T + \frac dc\,\Big) \pmod{\pi_v} \,,
\]
and, since $d^qc \not\equiv ac^q \pmod{\pi_v}$, we deduce that ${-}\frac dc$ is a simple root of $F \bmod{\pi_v}$. By Hensel's Lemma, there exists a root $r_0 \in \mathbb{K}'_v$ of $F$ that is $v$-integral and congruent to ${-}\tfrac dc$ modulo~$\pi_v$. The group $\mathrm{Gal}(\mathbb{K}'_v(F)/\mathbb{K}'_v) \subset G$ stabilizes the element of $\P^1(\mathbb F_q)$ corresponding to $r_0$, hence it is contained in a Borel subgroup of $G$. Since Borel subgroups have cardinality $q(q{-}1)$, in order to prove the proposition it is enough to show that $[\mathbb{K}'_v(F): \mathbb{K}'_v]$ is at least $q(q{-}1)$. We show that the ramification index of $\mathbb{K}'_v \subset \mathbb{K}'_v(F)$ is at least $q(q{-}1)$.
Since $\tfrac ac$ is a $q$-th power modulo $\pi_v$, then there exists a $v$-integral element $\gamma \in \mathbb{K}'_v$ such that $F(T) \equiv c(T+\gamma)^q(T+ d/c) \bmod{\pi_v}$. Up to the substitution $F(T) \mapsto F(T-\gamma)$, which does not change $\mathbb{K}'_v(F)$ nor the quantities $c$, $ad{-}bc$ and $d^qc{-}ac^q$, we can suppose that
\[
F(T) \equiv c\, T^q \Big(T + \frac dc\,\Big) \pmod{\pi_v} \,.
\]
This implies that
$v(d/c)=0$, $v(a)\ge 1$ and $v(b)\ge 1$. If we had $v(b)\geq 2$, then the choice $\lambda := d$ would contradict the last congruence in (\ref{eq:congruence_derivative}). Hence we have $v(b)=1$. The Newton polygon of $F$ tells us that the roots
$r_0, \ldots, r_q $ of $F$ in the algebraic closure $\overline{\mathbb{K}'_v}$ of $\mathbb{K}'_v$ satisfy
\begin{eqn}\label{eq:lemma_poly_valuation_roots}
v(r_0)=0\,, \quad v(r_1) = \ldots = v(r_q)= \frac 1 q\,.
\end{eqn}
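Explicitly: since $v(c)=v(d)=0$, $v(a)\ge 1$ and $v(b)=1$, the Newton polygon of $F$ is the lower convex hull of the points $(0,1)$, $(1,v(a))$, $(q,0)$ and $(q{+}1,0)$; it consists of a side of slope $-\tfrac 1q$ and horizontal length $q$ followed by a side of slope $0$ and horizontal length $1$, which is exactly the content of (\ref{eq:lemma_poly_valuation_roots}).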
We now consider the polynomial
\[
F_1 (T):= F(T + r_1) = c_1T^{q+1} + d_1 T^q + a_1 T + b_1 = cT^{q+1} + d_1 T^q + a_1 T \quad \in \overline{\mathbb{K}'_v}[T]\,.
\]
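For concreteness, expanding $F(T+r_1)$ using $(T+r_1)^{q+1} = (T^q+r_1^q)(T+r_1)$ gives
\[
c_1 = c\,, \qquad d_1 = cr_1 + d\,, \qquad a_1 = cr_1^q + a\,, \qquad b_1 = F(r_1) = 0\,,
\]
and, since $cr_1^{q+1} + dr_1^q + ar_1 = -b$, a direct computation gives $a_1d_1 - b_1c_1 = c(cr_1^{q+1} + dr_1^q + ar_1) + ad = ad - bc$, which we use below.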
The roots of $F_1$ are $r_i{-}r_1$. Using Equation (\ref{eq:lemma_poly_valuation_roots}), we deduce $v(c_1)=v(d_1)=0$ and $v(a_1) > 0$. Using $a_1d_1{-}b_1c_1=ad{-}bc$, we see that $v(a_1) = v(a_1d_1 {-} c_1b_1) = v(ad{-}bc) = 1$. The Newton polygon of $F_1$ tells us that
\[
v(r_2-r_1)=\ldots = v(r_q-r_1)= \frac{1}{q-1} \,.
\]
This, together with Equation (\ref{eq:lemma_poly_valuation_roots}) and the fact that $\mathbb{K} \subset \mathbb{K}'$ is unramified, implies that the ramification index of $\mathbb{K}'_v \subset \mathbb{K}'_v(F)$ is a multiple of $q(q{-}1)$ and consequently that $\mathrm{Gal}(\mathbb{K}'_v(F)/\mathbb{K}'_v)$ is a Borel subgroup of $G$.
\end{proof}
We now prove that, for certain choices of $\mathbb{K}, a,b,c,d$, Equation (\ref{eq:congruence_derivative}) is satisfied.
\begin{Proposition}\label{prop:we_can_use_other_prop}
Let $\mathbb{K}$ be a field extension of $\mathbb F_q$, let $u_1, u_2, u_3,w_1, w_2, w_3$ be distinct elements of $\mathbb{K}$ and
let $a,b,c,d \in \mathbb{K}$ be the elements defined by the following equality in $\text{GL}_2(\mathbb{K})$
\[
\mt abcd = \mt{w_3^q}{w_1^q}{1}{1} \mt{w_1^q-w_2^q}{0}{0}{w_2^q-w_3^q} \mt{u_2-u_3}{0}{0}{u_1-u_2} \mt{1}{-u_1}{-1}{u_3} \,.
\]
Then $\smt abcd$ sends the three elements $u_1,u_2,u_3 \in \P^1(\mathbb{K})$ to $w_1^q,w_2^q,w_3^q \in \P^1(\mathbb{K})$ respectively.
Suppose, moreover, that $\mathbb{K}$ is equipped with a discrete valuation $v:\mathbb{K}^\times \to \mathbb Z$, that $u_i, w_i$ are $v$-integral, that $v(w_i{-}w_j) = v(w_3{+}u_i) = v(u_2{-}u_3) = 0 $ for $i \neq j$ and that $v(u_1{-}u_2)=1$.
Then $a,b,c,d$
satisfy (\ref{eq:congruence_derivative}).
\end{Proposition}
\begin{proof}
To prove the first part we notice that, given distinct elements $x,y,z \in \mathbb{K}$, the matrix
\[
N_{x,y,z} := \mt{z}{x}{1}{1} \mt{x-y}{0}{0}{y-z}
\]
is invertible and acts on $\P^1(\mathbb{K})$ sending $0,1,\infty = \spv 10$ to $x,y,z$ respectively. Using this definition we have $
\smt{a}{b}{c}d = \det(N_{u_1,u_2,u_3}) N_{w_1^q,w_2^q,w_3^q} N_{u_1,u_2,u_3}^{-1}$, hence $\smt abcd$ acts on $\P^1(\mathbb{K})$
sending
\[
u_1\mapsto 0\mapsto w_1^q\,, \quad u_2\mapsto 1\mapsto w_2^q\,, \quad u_3\mapsto \infty \mapsto w_3^q\,.
\]
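As a quick check of the claim about $N_{x,y,z}$: writing $N_{x,y,z} = \smt{z(x-y)}{x(y-z)}{x-y}{y-z}$, we have
\[
N_{x,y,z}\cdot 0 = \frac{x(y-z)}{y-z} = x\,, \qquad N_{x,y,z}\cdot 1 = \frac{z(x-y)+x(y-z)}{(x-y)+(y-z)} = \frac{y(x-z)}{x-z} = y\,, \qquad N_{x,y,z}\cdot \infty = \frac{z(x-y)}{x-y} = z\,.
\]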
Now the second part of the proposition. Computing $\det(N_{u_1,u_2,u_3})$ and $\det(N_{w_1^q,w_2^q,w_3^q})$ we see that
\[
ad - bc = (u_1-u_2)(u_2-u_3)(u_1-u_3)(w_1-w_2)^q(w_2-w_3)^q(w_1-w_3)^q
\]
hence $v(ad{-}bc)=v(u_1{-}u_2)=1$ (the other factors have valuation zero by hypothesis or, in the case of $u_3{-}u_1$, because it is the sum of $u_3{-}u_2$ and $u_2{-}u_1$, whose valuations are $0$ and $1$). Writing $a, b, c, d$ as polynomials in the $u_i$'s and the $w_i$'s, we check that there is a multivariate polynomial $f$ such that
\begin{eqn}\label{eq:sviluppo_inv2}
\begin{aligned}\label{eq:inv2}
d^qc - ac^q =& f(u_1,u_2,u_3,w_1,w_2,w_3)\cdot\big(u_1 - u_2\big)^q \\
& + (u_1 - u_3)^q (w_1 - w_2)^{q^2} (w_1 - w_3)^q (u_2 + w_2)^q \cdot\big(u_1 - u_2\big) \\
& - (w_1 - w_2)^{q^2+q}(u_1 - u_3)^{q+1} (u_1 + w_3)^q \,.
\end{aligned}
\end{eqn}
Since $v(w_2{-}w_1) = v(u_3{-}u_1) = v(w_3{+}u_1)=0$, we have $v(d^qc{-}ac^q)=0$. Let $\mathcal{O}_v$ be the ring of $v$-integral elements of $\mathbb{K}$ and let $\pi_v:=u_1{-}u_2$, which is a generator of the maximal ideal of $\mathcal{O}_v$. Suppose now, by contradiction, that there exists $\lambda \in \mathcal{O}_v^\times$ such that
\begin{eqn}\label{eq:absurd_cong}
c\lambda^q - c^q(ad-bc)\lambda^{-1} \equiv d^qc - ac^q \pmod{\pi_v^2}\,.
\end{eqn}
Using $ad{-}bc\equiv 0 \bmod{\pi_v}$ and the equality $c = (w_1 {-} w_2)^q(u_1 {-} u_3) - \pi_v(w_1{-}w_3)^q$, we deduce
\[
\lambda^q \equiv \frac{d^qc - ac^q}{c} \equiv
\Big(-(u_1- u_3)(u_1 +w_3)(w_1-w_2)^{q}\Big)^q \pmod{\pi_v} \,.
\]
If we replace $\lambda$ by some $\lambda' \equiv \lambda$ modulo $\pi_v$, then the congruence (\ref{eq:absurd_cong}) is still satisfied, hence we may suppose $\lambda = {-}(u_1{-} u_3)(u_1{+}w_3)(w_1{-}w_2)^{q}$. Substituting $\lambda$ and (\ref{eq:inv2}) in (\ref{eq:absurd_cong}) we get
\[
\begin{aligned}
0 &\equiv c^q(ad-bc) + (d^qc - ac^q)\lambda - c\lambda^{q+1} \\
& \equiv - \pi_v (w_1{-}w_2)^{q^2+q}(w_1{-}w_3)^q(u_1{-} u_3)^{q+1} (w_2{-}w_3)^q(w_3{+}u_3)\qquad \pmod {\pi_v^2}
\end{aligned}
\]
which is absurd because $v(w_i{-}w_j)=v(u_1{-}u_3)=v(w_3{+}u_3)=0$.
\end{proof}
We now prove the main result of this section. Varieties like $\mathcal{C}$ in the following lemma arise in Sections \ref{sec:3-2} and \ref{sec:43} when imposing conditions \ref{item2:split} and \ref{item3:gamma}.
\begin{Lemma}\label{lem:good_curves}
Let $\mathbb F_q \subset k$ be an extension of finite fields and let $\mathcal{B}/k$ be a geometrically irreducible variety. Let
$u_1, u_2, u_3$, $w_1, w_2, w_3$ be distinct elements of $\overline{k}(\mathcal{B})$ and suppose there exists an irreducible divisor $Z \subset \mathcal{B}_{\overline{k}}$, generically contained in the smooth locus of $\mathcal{B}$, such that $u_i,w_i$ are defined on the generic point of $Z$ and such that $Z$ is a zero of order $1$ of $u_1{-}u_2$ and it is not a zero of $w_3{+}u_i, u_2{-}u_3, w_i{-}w_j$ for $i{\neq}j$.
Let $\mathcal{C} \subset \mathcal{B} \times \mathrm{PGL}_2 \times \mathbb A^1$ be the variety whose points are the tuples $(R,\smt abcd,z)$ such that
\[
\begin{aligned}
&u_i(R) \text{ are defined and distinct,}\quad w_i(R)\text{ are defined and distinct,} \quad d^qc - ac^q \neq 0, \\
\qquad \qquad \qquad \smt abcd \cdot u_i(R) = w_i^q(R) \text{ for }i =1,2,3 \qquad \text{and} \\
&\qquad (d^qc - ac^q)^{q+1}(z^q-z)^{q^2-q} = c^{q^2+1}(ad - bc)^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1}\,.
\end{aligned}
\]
If $\mathcal{C}$ is defined over~$k$, then its geometrically irreducible components are defined over~$k$ and pairwise disjoint.
\end{Lemma}
\begin{proof}
We first look at the variety $\mathcal{B}_0 \subset \mathcal{B} \times \mathrm{PGL}_2$ whose points are the pairs $(R,A)$ such that
\[
\begin{aligned}
&u_i(R) \text{ are defined and distinct,}\quad w_i(R)\text{ are defined and distinct,} \\
&\qquad \qquad A \cdot u_i(R) = w_i^q(R) \text{ for }i =1,2,3\,.
\end{aligned}
\]
Since an element of $\mathrm{PGL}_2$ is uniquely determined by its action on three distinct points of $\P^1$, the projection $\mathcal{B}_0 \to \mathcal{B}$ is a birational equivalence, whose inverse, by the first part of Proposition \ref{prop:we_can_use_other_prop}, is given by $R \mapsto \smt{a_1}{b_1}{c_1}{d_1} (R)$, where $a_1,b_1,c_1,d_1 \in \overline k(\mathcal{B})$ are defined by the following equality in $\mathrm{GL}_2(\overline k(\mathcal{B}))$
\[
\mt{a_1}{b_1}{c_1}{d_1} = \mt{w_3^q}{w_1^q}{1}{1} \mt{w_1^q-w_2^q}{0}{0}{w_2^q-w_3^q} \mt{u_2-u_3}{0}{0}{u_1-u_2} \mt{1}{-u_1}{-1}{u_3} \,.
\]
Let $v \colon \overline{k}(\mathcal{B})^\times \to \mathbb Z$ be the valuation that gives the order of vanishing along $Z$ of a rational function. The second part of Proposition \ref{prop:we_can_use_other_prop} implies that $a_1,b_1,c_1,d_1$ satisfy (\ref{eq:congruence_derivative}) over the field $\overline k(\mathcal{B})$.
In particular we have $c_1 \neq 0$ and $v(c_1)=0$. Hence we can define the following rational functions on $\mathcal{B}$
\[
a_2:= a_1/c_1\,, \quad b_2 := b_1/c_1\,, \quad c_2 := 1\,, \quad d_2 := d_1/c_1
\]
which again satisfy (\ref{eq:congruence_derivative}) over the field $\overline k(\mathcal{B})$. The advantage of $a_2,b_2,c_2,d_2$ is that, as we now show, they are defined over~$k$. Let $\mathcal{B}_1$ be the projection of $\mathcal{C}$ inside $\mathcal{B} \times \mathrm{PGL}_2$: since $\mathcal{C}$ is defined over~$k$, the variety $\mathcal{B}_1$ is defined over~$k$ and, since $\mathcal{B}_1$ is a dense open subvariety of $\mathcal{B}_0$, the variety $\mathcal{B}_1$ is birational to $\mathcal{B}$ through the natural projection. Since $a/c$ is a rational function on $\mathcal{B}_1$ defined over~$k$, we deduce that $a_2 = a/c$ lies in $k(\mathcal{B}_1)=k(\mathcal{B})$ and analogously $b_2, c_2, d_2 \in k(\mathcal{B})$.
A fortiori $a_2,b_2,c_2,d_2$ satisfy (\ref{eq:congruence_derivative}) inside the field $\mathbb{K}= k(\mathcal{B})$. We can now apply Proposition \ref{prop:pure_fields} and deduce that $k$ is the field of constants of the extension $k\subset \Sigma$, where $\Sigma$ is the splitting field of
\[
F(T) := c_2T^{q+1} + d_2T^q + a_2 T + b_2 \,,
\]
over~$k(\mathcal{B})$.
We deduce that there exists a geometrically irreducible variety $\mathcal E/k$ having field of rational functions $\Sigma$. Let $\pi \colon \mathcal E \dashrightarrow \mathcal B$ be the rational map induced by $k(\mathcal{B}) \subset \Sigma$ and let $r_0,\ldots, r_q \in \Sigma$ be the roots of $F$, interpreted as rational functions on $\mathcal E$. Using \cite[Lemma $2.3$]{HidePGL2} we see that, for any choice of pairwise distinct integers $0 \le i,j,m\le q$,
\[
\begin{aligned}
& z = z_{i,j,m}:= \frac{r_i-r_j}{r_i-r_m} \in \Sigma = k(\mathcal E) \quad \text{ satisfies } \\
& (d_2^qc_2 -a_2c_2^q)^{q+1}(z^q-z)^{q^2-q} = c_2^{q^2+1}(a_2d_2 - b_2c_2)^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1} \,.
\end{aligned}
\]
Hence, for each choice of pairwise distinct integers $0 \le i,j,m\le q$ we get a map
\[
\phi_{i,j,m} \colon \mathcal E \dashrightarrow \mathcal C, \qquad
S\longmapsto \left(\pi(S), \smt {a_2}{b_2}{c_2}{d_2}(S),z_{i,j,m}(S)\right) \,.
\]
Since $\mathcal E$ is geometrically irreducible, all the images $\phi_{i,j,m}(\mathcal E)$ are also geometrically irreducible. Moreover, since all the $z_{i,j,m}$ are different, the maps $\phi_{i,j,m} $ are distinct and their number equals the degree of the projection $\mathcal{C}\to\mathcal{B}$, implying that the union of all the images $\phi_{i,j,m}(\mathcal E)$ is dense inside $\mathcal C$. This implies that every geometrically irreducible component of $\mathcal{C}$ is the Zariski closure of an image $\phi_{i,j,m}(\mathcal E)$. Since $\mathcal E$ and $\phi_{i,j,m}$ are defined over~$k$, the images $\phi_{i,j,m}(\mathcal E)$, and hence the geometrically irreducible components of $\mathcal{C}$, are defined over~$k$.
Finally, we prove that the components of $\mathcal{C}$ are pairwise disjoint. The projection $p \colon \mathcal{C} \to \mathcal{B}_1$ has finite fibers whose number of $\overline k$-points counted with multiplicity is $q^3{-}q$, which is the degree, in $z$, of the polynomial
\[
(d^qc - ac^q)^{q+1}(z^q-z)^{q^2-q} - c^{q^2+1}(ad - bc)^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1}.
\]
If, by contradiction, there is a point $(R',\smt {a'}{b'}{c'}{d'}, z')$ lying in the intersection of two components of $\mathcal{C}$, then the fiber $p^{-1}(R', \smt{a'}{b'}{c'}{d'})$ has cardinality smaller than $q^3{-}q$, which is equivalent to the following polynomial having fewer than $q^3{-}q$ roots
\[
G(z):= (d'^qc' - a'c'^q)^{q+1}(z^q-z)^{q^2-q} - c'^{q^2+1}(a'd' - b'c')^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1} \in \overline{\mathbb F_q}[z]\,.
\]
Since $a'd' {-} b'c'\neq 0$ and $d'^qc' {-} a'c'^q \neq 0$, there is no root of $G$ that is also a root of $z^q{-}z$ or of $\frac{z^{q^2}{-}z}{z^q{-}z}$. In other words, the roots of $G$ lie outside the finite field $\mathbb F_{q^2}$ with $q^2$ elements. Moreover, since $G$ is an $\overline{\mathbb F_q}$-linear combination of powers of $z^q{-}z$ and $\frac{z^{q^2}{-}z}{z^q{-}z}$, the set of roots of $G$ is stable under the action of $\mathrm{PGL}_2(\F_q)$. Since this action is free on $\overline{\mathbb F_q}\setminus\mathbb F_{q^2}$, the set of roots has cardinality at least $\# \mathrm{PGL}_2(\F_q) = q^3{-}q = \deg G$, which is a contradiction.
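Here the freeness of the action of $\mathrm{PGL}_2(\F_q)$ on $\overline{\mathbb F_q}\setminus\mathbb F_{q^2}$ is elementary: a non-identity element $\smt abcd$ of $\mathrm{PGL}_2(\F_q)$ fixes $z$ only if $cz^2 + (d-a)z - b = 0$, a non-trivial equation with coefficients in $\mathbb F_q$, so all its fixed points lie in $\P^1(\mathbb F_{q^2})$.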
\end{proof}
\begin{Remark}\label{rem:split_field}
Let $\mathbb F_q \subset k$ be a field extension and let $F(T)= cT^{q+1} + dT^q + a T + b$ be a polynomial with coefficients in $k$ such that $ad{-}bc \neq 0$ and $d^qc{-}ac^q \neq 0$.
By \cite[Theorem $4.3$ and Lemma $2.3$]{HidePGL2}, the polynomial $F$ splits into linear factors over~$k$ if and only if there exists an element $z \in k$ such that
\[
(d^qc - ac^q)^{q+1}(z^q-z)^{q^2-q} = c^{q^2+1}(ad - bc)^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1}\,.
\]
In particular, in the notation of Lemma \ref{lem:good_curves}, the field of rational functions of any component of $\mathcal{C}$ is the splitting field of $F$.
\end{Remark}
\section{Descent 3-to-2}\label{sec:3-2}
In this section we finish the proof of Proposition \ref{secondhalfdescentprop}, started in Section \ref{sec:idea_descent}. Following the notation of Section \ref{sec:idea_descent} when $\varepsilon=3$, let $k$ be a finite extension of $\mathbb F_q$ of degree at least $80$, let $Q$ be a non-trap point on $E$ such that $[k(Q):k]=3$, and let $\sigma$ be a generator of $\mathrm{Gal}(k(Q)/k)$. Then, we look for a function $f \in k(E)$ and a matrix $\smt abcd \in \mathrm{PGL}_2(k)$ satisfying properties \ref{item1:degree}, \ref{item2:split}, \ref{item3:gamma}, \ref{item4:traps}: we describe a curve $\mathcal{C}$ whose $k$-points give such pairs $(f,\smt abcd)$, and we prove that $\#\mathcal{C}(k)> \tfrac 12 (\# k)$.
\subsection{The definition of $\mathcal{C}$} \label{sec:3-2_defC}
Property \ref{item1:degree} requires that $f \in k(E)$ has at most two poles: we look for $f$ of the form
\begin{subeqn}\label{eq:f_P}
f_P := \frac{y - y(P)}{x-x(P)}
\end{subeqn}
with $P$ in $E(k)\setminus\{O_E\}$. Indeed $f_P$ has exactly two simple poles: $O_E$ and ${-}P$.
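In more detail, $\mathrm{div}(y - y(P)) = P + P' + P'' - 3O_E$, where $P', P''$ are the other two intersections of the line $y = y(P)$ with $E$, while $\mathrm{div}(x - x(P)) = P + ({-}P) - 2O_E$; hence $\mathrm{div}(f_P) = P' + P'' - ({-}P) - O_E$.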
Property \ref{item2:split} requires the polynomial $cT^{q+1} {+ }dT^q {+ }a T {+ }b$ to split completely in $k$, for which, as recalled in Remark \ref{rem:split_field},
it is sufficient that $d^qc \neq ac^q$ and that
\begin{subeqn}\label{eq:32_item2}
(d^qc - ac^q)^{q+1}(z^q-z)^{q^2-q} = c^{q^2+1}(ad - bc)^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1}\,,
\end{subeqn}
for some $z$ in $k$.
Notice that definition (\ref{eq:f_P}) makes sense for $P\in E(\overline{\mathbb F_q})\setminus\{O_E\}$ and that we have the following symmetry: for any $P,P' \in E(\overline{\mathbb F_q})\setminus\{O_E\}$, we have $f_P(P')=f_{P'}(P)$.
Using this and the fact that $h^\phi(\phi(P))=h(P)^q$ for all $h \in \overline{\mathbb F_q}(E)$ and $P \in E(\overline{\mathbb F_q})$, we have
\[
\begin{aligned}
f_P(\sigma^iQ) = f_{\sigma^iQ}(P) \,, \quad
f_P^\phi(\sigma^iQ {+} P_0) = f_P^\phi(\phi(\sigma^iR)) = f_P(\sigma^iR)^q = f_{\sigma^iR}(P)^q \,,
\end{aligned}
\]
where $R$ is the unique point on $E$ such that $\phi(R) = Q + P_0$. Hence \ref{item3:gamma} is equivalent to
\begin{subeqn}\label{eq:32_item3}
\smt abcd\cdot f_{\sigma^i Q}(P) = - f_{\sigma^iR}(P)^q \quad \text{for each } i =0,1,2\,.
\end{subeqn}
We now impose \ref{item4:traps}. Let $B$ be a point on $E$
such that $\phi(B)=B+P_0$.
If the rational function $cf_P^{q+1}{+}df_P^q{+} af_P {+} b$ vanishes on $B$, then $\smt abcd \cdot f_B(P) = -f_B(P)^q$. This, if we also assume Equation (\ref{eq:32_item3}) and that $f_{\sigma^iQ}(P)$ are distinct, implies that the cross ratio of $f_{Q}(P)$, $f_{\sigma Q}(P)$, $f_{\sigma^2 Q}(P)$, $f_B(P)$ equals the cross ratio of $f_{R}(P)^q, f_{\sigma R}(P)^q, f_{\sigma^2 R}(P)^q$, $f_B(P)^q$.
Hence, assuming (\ref{eq:32_item3}), condition \ref{item4:traps} is implied by
\begin{subeqn}\label{eq:32_item4bis}
f_{\sigma^iQ}(P)\neq f_{\sigma^jQ}(P)\text{ for all } i\neq j \in \{0,1,2\}
\end{subeqn}
together with
\begin{subeqn}\label{eq:32_item4}
\begin{aligned}
&\text{for all } B \text{ such that } \phi(B) = B+P_0 : \qquad P\neq {-}B \quad \text{and} \\
&\!\mathrm{CrRat}\big (f_{Q}(P), f_{\sigma Q}(P), f_{\sigma^2 Q}(P), f_B(P)) \neq \mathrm{CrRat}(f_{R}(P)^q, f_{\sigma R}(P)^q, f_{\sigma^2 R}(P)^q, f_B(P)^q) \,,
\end{aligned}
\end{subeqn}
where, given four elements $\lambda_1, \lambda_2, \lambda_3, \lambda_4 \in \P^1(\overline{\mathbb F_q})$, we denote their cross-ratio by
\[
\mathrm{CrRat}(\lambda_1,\lambda_2,\lambda_3,\lambda_4) = \frac{(\lambda_3-\lambda_1)(\lambda_4-\lambda_2)}{(\lambda_2-\lambda_1)(\lambda_4-\lambda_3)} \in \P^1(\overline{\mathbb F_q}) \,.
\]
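We will repeatedly use two standard properties of the cross-ratio: it is invariant under the diagonal action of $\mathrm{PGL}_2$ on $\P^1$ and it commutes with Frobenius, namely
\[
\mathrm{CrRat}(\lambda_1^q,\lambda_2^q,\lambda_3^q,\lambda_4^q) = \mathrm{CrRat}(\lambda_1,\lambda_2,\lambda_3,\lambda_4)^q\,.
\]
In particular, applying $x \mapsto {-}x^q$ to all four entries raises the cross-ratio to the $q$-th power, which justifies the comparison of cross-ratios made above.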
Finally we define $E':=E\setminus\{O_E, {-}Q, {-}R,\ldots, {-}\sigma^2Q, {-}\sigma^2R\}$, so that $f_{\sigma^iR}$ and $f_{\sigma^iQ}$ are regular on $E'$, and we define $\mathcal{C} \subset E' \times \mathrm{PGL}_2 \times \mathbb A^1$ as the curve made of points $(P,\smt abcd,z)$ that satisfy Equations (\ref{eq:32_item3}), (\ref{eq:32_item2}), (\ref{eq:32_item4bis}) and (\ref{eq:32_item4}), and $d^qc {-}ac^q\neq 0$.
Notice that $\mathcal{C}$ is defined over~$k$: even though the equations $\smt abcd \cdot f_{\sigma^i Q}(P) = -f_{\sigma^i R}(P)^q$ have coefficients in the field $k(Q)$, the Galois group of $ k \subset k(Q)$ permutes these equations.
We constructed $\mathcal{C}$ so that, for any point $(P,\smt abcd,z) \in \mathcal{C}(k)$, the pair $(f_P,\smt abcd)$ satisfies properties \ref{item1:degree}, \ref{item2:split}, \ref{item3:gamma} and \ref{item4:traps}.
\subsection{The irreducible components of $\mathcal{C}$}
In this subsection we prove that all the geometrically irreducible components of $\mathcal{C}$ are defined over~$k$. For this purpose we can leave out (\ref{eq:32_item4}) from the definition of $\mathcal{C}$. Our strategy is to apply Lemma \ref{lem:good_curves} to the variety $\mathcal{B} = E'$, using the rational functions $u_i =f_{\sigma^{i-1}Q}$,
$w_i ={-}f_{\sigma^{i-1}R}$ and the irreducible divisor $Z$ equal to the point ${-}Q{-}\sigma{Q} \in \mathcal{B}(\overline{\mathbb F_q}) \subset E(\overline{\mathbb F_q})$.
Notice that, given distinct points $P', P'' \in E(\overline{\mathbb F_q})\setminus\{O_E\}$, the function $f_{P'}{-}f_{P''}$ is regular at $O_E$ and moreover $(f_{P'}{-}f_{P''})(O_E)=0$. Since the zeroes minus the poles of a rational function sum to $O_E$ in the group $E(\overline{\mathbb F_q})$, we deduce that, given distinct points $P', P'' \in E(\overline{\mathbb F_q})\setminus\{O_E\}$,
\begin{subeqn}\label{eq:zeroes_superuseful}
\begin{aligned}
f_{P'} {-} f_{P''} \qquad &\text{ has two simple poles, namely ${-}P'$ and ${-}P''$} \\ &\text{ and two zeroes counted with multiplicity, namely $O_E$ and ${-}P'{-}P''$.}
\end{aligned}
\end{subeqn}
Let $Z := {-}Q{-}\sigma{Q}$. By (\ref{eq:zeroes_superuseful}) and the fact that $Q$ is not a trap, the point $Z$ is not a pole of any of the $u_i$ and the $w_i$ and it is not a zero of any of the functions
$u_2{-}u_3$, $w_3{+}u_i$ and $w_i{-}w_j$ for $i \neq j$: if, for example, ${-}f_{R}$ were not regular on $Z$, then $Z = {-}R$. Hence, using that $\sigma$ acts as $\phi^l$ on $E(\overline{\mathbb F_q})$ for $l:=[k:\mathbb F_q]$, we have
\[
Q + P_0 = \phi(R) = \phi(-Z) = \phi^{l+1}(Q) + \phi(Q) \quad \implies \quad \phi^{l+1}(Q) = (1-\phi)(Q) + P_0 \,,
\]
hence
\begin{subeqn}\label{eq:use_traps}
\begin{aligned}
\phi^{3}(Q) &= \phi^{3l+3}(Q) = \phi^{2l+2}\left( (1-\phi)(Q) + P_0 \right) = ((1-\phi){\circ} \phi^{2l+2})(Q) + P_0 \\
& = ((1-\phi){\circ}\phi^{l+1})\left( (1-\phi)(Q) + P_0 \right) + P_0 = ((1-\phi){\circ}(1-\phi))(\phi^{l+1}(Q)) + P_0 \\
& = (1-\phi)^2\left( (1-\phi)(Q) + P_0 \right) + P_0 = (1-\phi)^3(Q) + P_0 \,,
\end{aligned}
\end{subeqn}
implying that
\[
((2\phi-1){\circ}(\phi^2-\phi+1))(Q) = (\phi^3 + (\phi-1)^3)(Q) = P_0\,,
\]
which contradicts the hypothesis that $Q$ is not a trap point (the last step uses the polynomial identity $X^3 + (X{-}1)^3 = (2X{-}1)(X^2{-}X{+}1)$, both sides being equal to $2X^3 - 3X^2 + 3X - 1$). Moreover, by (\ref{eq:zeroes_superuseful}), the function $f_{Q}{-}f_{\sigma Q}$ has a simple zero at $Z$. Hence, by Lemma \ref{lem:good_curves}, all the geometrically irreducible components of $\mathcal{C}$ are defined over~$k$ and disjoint.
\subsection{$k$-rational points on $\mathcal{C}$}\label{subsec:points_32}
We now prove that $\#\mathcal{C}(k)$ is larger than $\tfrac 1{2} \# k$. The curve $\mathcal{C}$ is contained in the open subset of $(E \setminus \{O_E\}) \times \mathrm{PGL}_2 \times \mathbb A^1$ made of points $((x,y), \smt abcd, z)$ such that $c \neq 0$. Hence $\mathcal{C}$ is contained in $\mathbb A^6$, with variables $x,y$, $a,b,d$, $z$ and it is defined by the following equations:
\begin{itemize}
\item $0=p_1:=W(x,y)$, the Weierstrass equation defining $E$;
\item $0=p_2 :=(d^q {-} a)^{q+1}(z^q{-}z)^{q^2{-}q} - (ad {-} b)^q (\tfrac{z^{q^2}{-}z}{z^q{-}z})^{q+1}$,
the dehomogenization of (\ref{eq:32_item2}) in $c$;
\item $0=p_i(x,y,a,b,d)$ for $i = 3,4,5$, obtained from (\ref{eq:32_item3}) after dehomogenizing in $c$, substituting $f_{\sigma^iQ}(P)$ and $f_{\sigma^iR}(P)$ by their expressions in $x,y$ and clearing denominators;
\item a number of conditions $0\neq q_j$ ensuring that $P\neq {-}\sigma^iQ$, $P \neq {-} \sigma^iR$, $d^q {-} a\neq 0$, $ad-b\neq0$, that the $f_{\sigma^iQ}(P)$ are pairwise distinct
and that (\ref{eq:32_item4}) is satisfied.
\end{itemize}
In particular, $\mathcal{C}$ can be seen as a closed subvariety of $\mathbb A^7$, with variables $x,y$, $a,b,d,z$ and $t$ defined by the equations $p_1=0, \ldots, p_5=0$ and $0=p_6:=tq_1\cdots q_r -1$.
Let $\mathcal{C}_1, \ldots, \mathcal{C}_s$ be the irreducible components of $\mathcal{C}$.
By \cite[Remark $11.3$]{Npoints},
we have
\begin{subeqn}\label{eq:Weil32}
\# \mathcal{C}(k) \ge \# \mathcal{C}_1(k) \ge \# k - (\delta-1)(\delta-2)(\# k)^{\frac 12} - K(\mathcal{C}_1) \,,
\end{subeqn}
where $\delta$ is the degree of $\mathcal{C}_1$ and $K(\mathcal{C}_1)$ is the sum of the Betti numbers of $\mathcal{C}_1$ relative to compactly supported $\ell$-adic cohomology. Since $\mathcal{C}_1$ is a component of $\mathcal{C}$, we have
\begin{subeqn}\label{eq:degD32}
\delta
\leq \deg(p_1)\cdots \deg(p_6) \,.
\end{subeqn}
Since $\mathcal{C}$ is the disjoint union of the $\mathcal{C}_i$, the Betti numbers of $\mathcal{C}$ are the sums of the Betti numbers of the $\mathcal{C}_i$ and using \cite[Corollary of Theorem 1]{Katz} we deduce that
\begin{subeqn}\label{eq:BettiD32}
K(\mathcal{C}_1) \leq K(\mathcal{C}) \leq 6 \cdot 2^{6}\cdot \left(3+7\max_{i=1,\dots ,6}\{\deg(p_i)\}\right)^{8}\,.
\end{subeqn}
Since $\deg p_1 \le 3$, $\deg p_2 \le q^3{+}q$, $\deg p_3, \ldots, \deg p_5 \le q{+}2$ and $\deg p_6 \le 8q^2+29q+29$, Equations (\ref{eq:Weil32}), (\ref{eq:degD32}) and (\ref{eq:BettiD32}) imply that $\#\mathcal{C}(k)> \tfrac 12 (\# k)$ when $\# k \ge q^{80}$ and $q \ge 3$.
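To unwind the bookkeeping: Equation (\ref{eq:degD32}) gives $\delta = O(q^8)$, so the middle term of (\ref{eq:Weil32}) is $O(q^{16})(\# k)^{\frac 12}$, while (\ref{eq:BettiD32}) gives $K(\mathcal{C}_1) = O(q^{24})$; when $\# k \ge q^{80}$ both error terms are smaller than $\tfrac 14 \# k$, whence $\# \mathcal{C}(k) > \tfrac 12 \# k$.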
\section{Descent 4-to-3}\label{sec:43}
In this section we finish the proof of Proposition \ref{firsthalfdescentprop}, started in Section \ref{sec:idea_descent}. Following the notation of Section \ref{sec:idea_descent} when $\varepsilon=4$, let $k$ be a finite extension of $\mathbb F_q$ of degree at least $80$, let $Q$ be a non-trap point on $E$ such that $[k(Q):k]=4$, and let $\sigma$ be a generator of $\mathrm{Gal}(k(Q)/k)$. Then, we look for a function $f \in k(E)$ and a matrix $\smt abcd \in \mathrm{PGL}_2(k)$ satisfying properties \ref{item1:degree}, \ref{item2:split}, \ref{item3:gamma}, \ref{item4:traps}: we describe a surface $\mathcal{C}$ whose $k$-points give such pairs $(f,\smt abcd)$, and we prove that there are many $k$-points on $\mathcal{C}$.
\subsection{The definition of $\mathcal{C}$}
Property \ref{item1:degree} requires that $f \in k(E)$ has at most $3$ poles: we look for $f$ of the form
\begin{subeqn}\label{eq:43_def_f}
f=f_{\alpha,\beta,\S} := \frac{f_\S + \alpha}{f_{\widetilde{\S}}+\beta} \,,
\end{subeqn}
where $\alpha, \beta$ are elements of $k$, the points $\S, \widetilde{\S}$ lie in $E(k)\setminus \{O_E\}$ and $f_\S$ is the rational function defined in (\ref{eq:f_P}). We see directly from the definition that the function $f_{\alpha, \beta,\S}$ has at most three poles counted with multiplicity, namely ${-}\S$ and the zeroes of $f_{\widetilde{\S}}{+}\beta$.
Notice that (\ref{eq:43_def_f}) makes sense for any $\S \in E(\overline{\mathbb F_q})$ and $\alpha, \beta \in \overline{\mathbb F_q}$. For the rest of the section we let $\alpha, \beta$ and $\S$ vary and we fix $\widetilde{\S}$ so that
$f_{Q}(\widetilde{\S})$, $f_{\sigma Q}(\widetilde{\S})$, $ f_{\sigma^2 Q}(\widetilde{\S})$, $f_{\sigma^ 3 Q}(\widetilde{\S})$, $f_{R}(\widetilde{\S})$, $f_{\sigma R}(\widetilde{\S})$, $ f_{\sigma^2 R}(\widetilde{\S})$, $f_{\sigma^ 3 R}(\widetilde{\S})$ are pairwise distinct.
There is at least one such point $\widetilde{\S}$ because $\# (E(k)\setminus\{O_E\}) > {8\choose2} $ and, by (\ref{eq:zeroes_superuseful}), for each pair $P' \neq P''$ in $E(\overline{\mathbb F_q})\setminus\{O_E\}$ there is at most one point $\widetilde{\S} \in E(k)\setminus\{O_E\}$ such that $f_{P'}(\widetilde{\S})=f_{P''}(\widetilde{\S})$. We sometimes write $f$ for $f_{\alpha,\beta,\S}$.
By Remark \ref{rem:split_field}, for condition \ref{item2:split} to be true it is enough that $d^qc {-} ac^q \neq 0$, and that
\begin{subeqn}\label{eq:43_item2}
(d^qc - ac^q)^{q+1}(z^q-z)^{q^2-q} = c^{q^2+1}(ad - bc)^q \left((z^{q^2}-z)/(z^q-z)\right)^{q+1}\,,
\end{subeqn}
for some $z\in k$.
Since $h^\phi(\phi(P))=h(P)^q$ for all $h \in \overline{\mathbb F_q}(E)$ and $P \in E(\overline{\mathbb F_q})$, we have
\[
\begin{aligned}
-f^\phi(\sigma^iQ {+} P_0) = - f^\phi(\phi(\sigma^iR)) = -f(\sigma^iR)^q \,,
\end{aligned}
\]
where $R \in E(\overline{\mathbb F_q})$ is the unique point such that $\phi(R) = Q + P_0$. In particular, property \ref{item3:gamma} is equivalent to
\begin{subeqn}\label{eq:43_item3}
\smt abcd\cdot f_{\alpha,\beta, \S}(\sigma^iQ) = - f_{\alpha,\beta, \S}(\sigma^iR)^q \quad \text{for } i =0,1,2,3\,.
\end{subeqn}
The above equation can be further manipulated.
Since the cross-ratio is invariant under the action of $\mathrm{PGL}_2$ on $\P^1$, the above equation implies that either the cross-ratio of $f(\sigma^0Q), \ldots,f(\sigma^3Q)$
is equal to the cross-ratio of
$f(\sigma^0R)^q, \ldots$, $f(\sigma^3R)^q$, or both cross-ratios are not defined. Conversely, supposing that these cross-ratios are defined and equal, condition (\ref{eq:43_item3}) is equivalent to the same condition, but only for $i=0,1,2$.
In other words, if $f(\sigma^iQ)$ are pairwise distinct and $f(\sigma^iR)$ are pairwise distinct,
Equation (\ref{eq:43_item3}) is equivalent to
\begin{subeqn}\label{eq:43_cross_ratio}
\mathrm{CrRat}\big( f_{\alpha,\beta, \S}(\sigma^0Q), \ldots , f_{\alpha,\beta, \S}(\sigma^3Q )\big) = \mathrm{CrRat}\big( f_{\alpha,\beta, \S}(\sigma^0R)^q, \ldots , f_{\alpha,\beta, \S}(\sigma^3R )^q\big)\,,
\end{subeqn}
together with
\begin{subeqn}\label{eq:43_item3_revisited}
\smt abcd\cdot f_{\alpha,\beta, \S}(\sigma^iQ) = - f_{\alpha,\beta, \S}(\sigma^iR)^q \quad \text{for } i =0,1,2\,.
\end{subeqn}
It is easy to see that, assuming Equation (\ref{eq:43_item3_revisited}) and the distinctness of $f(\sigma^i Q)$ and $f(\sigma^iR)$, condition \ref{item4:traps} is implied by
\begin{subeqn}\label{eq:43_item4_main}
\begin{aligned}
\text{for}& \text{ all } B \text{ such that } \phi(B) = B+P_0: \qquad \S\neq {-}B\,, \quad \beta + f_{\widetilde{\S}}(B) \neq 0 \quad \text{and} \\
&\mathrm{CrRat}\big (f(Q), f(\sigmaQ), f(\sigma^2Q), f(B)\big) \neq \mathrm{CrRat}\big (f(R)^q, f(\sigmaR)^q, f(\sigma^2R)^q, f(B)^q\big)\,.
\end{aligned}
\end{subeqn}
Finally we define $E':=E \setminus\{O_E,{-}\sigma^0 Q, {-} \sigma^0 R,\ldots, {-}\sigma^3Q, {-} \sigma^3R \}$ and $\mathcal{C} \subset \mathbb A^2\times E' \times\mathrm{PGL}_2 \times \mathbb A^1$ to be the surface made of points $(\alpha, \beta, \S, \smt abcd, z)$ that satisfy Equations (\ref{eq:43_cross_ratio}), (\ref{eq:43_item3_revisited}), (\ref{eq:43_item2}) and (\ref{eq:43_item4_main}), and such that
$\beta + f_{\widetilde{\S}}(\sigma^iQ) \neq 0$, $\beta + f_{\widetilde{\S}}(\sigma^iR) \neq 0$, $d^qc-ac^q\neq0$, the $f(\sigma^iQ)$ are distinct and
the $f(\sigma^iR)$ are distinct.
The definition of $E'$ and the conditions $\beta + f_{\widetilde{\S}}(\sigma^iQ) \neq 0$, $\beta + f_{\widetilde{\S}}(\sigma^iR) \neq 0$, ensure that $f(\sigma^iQ)$ and $f(\sigma^iR)$ are well defined.
Arguing as in Subsection \ref{sec:3-2_defC}, we see that $\mathcal{C}$ is defined over~$k$. By construction, for all $(\alpha, \beta, \S, \smt abcd, z) \in \mathcal{C}(k)$, the pair $(f_{\alpha, \beta,\S},\smt abcd)$ satisfies \ref{item1:degree}, \ref{item2:split}, \ref{item3:gamma} and \ref{item4:traps}.
\subsection{Irreducibility of a projection of $\mathcal{C}$}
Before studying the irreducible components of $\mathcal{C}$, we study the closure in $\P^2 \times E$ of the projection of $\mathcal{C}$ in $\mathbb A^2 \times E$. Let $\mathcal{B}'\subset \mathbb A^2 {\times} E'$ be the surface whose points are the tuples $(\alpha, \beta, \S)$ such that
\[
\begin{aligned}
& f_{\alpha, \beta, \S}(\sigma^iQ) \text{ are pairwise distinct}\,,
\quad
f_{\alpha, \beta, \S}(\sigma^iR) \text{ are pairwise distinct}, \\
& \qquad \qquad \qquad f_{\widetilde{\S}}(\sigma^iQ)+\beta \neq 0\,, \quad f_{\widetilde{\S}}(\sigma^iR) + \beta \neq 0\,, \\
& \mathrm{CrRat}\big( f_{\alpha,\beta, \S}(\sigma^0Q), \ldots , f_{\alpha,\beta, \S}(\sigma^3Q )\big) = \mathrm{CrRat}\big( f_{\alpha,\beta, \S}(\sigma^0R)^q, \ldots , f_{\alpha,\beta, \S}(\sigma^3R )^q\big)\,,
\end{aligned}
\]
and let $\mathcal{B}$ be the closure of $\mathcal{B}'$ inside $\P^2 \times E$. Since the action of $\mathrm{PGL}_2$ on $\P^1$ is triply transitive, the projection $\mathbb A^2 \times E \times \mathrm{PGL}_2 \times \mathbb A^1 \to \mathbb A^2\times E$ gives a dominant morphism $\mathcal{C} \to \mathcal{B}$ (this is the same argument used in the proof of Lemma \ref{lem:good_curves} to show that $\mathcal{B}_0 \to \mathcal{B}$ is birational). Since $\mathcal{C}$ is defined over~$k$, the variety $\mathcal{B}$ is also defined over~$k$.
In the rest of the subsection we prove that for all but a few choices of $\S \in E(k)$ the curve $\mathcal{B}_\S := \mathcal{B} \cap (\P^2 {\times} \{\S\})$ is reduced and geometrically irreducible, which implies the same for $\mathcal{B}$.
We first write an equation for $\mathcal{B}_\S$ in $\P^2$.
Using the definition of $f_{\alpha,\beta,\S}$ we get
\[
\begin{aligned}
f_{\alpha, \beta, \S}(\sigma^iQ){-}f_{\alpha, \beta,\S}(\sigma^jQ) = \frac{L_{i,j}(\alpha, \beta,1)}{\big( l_i{+}\beta\big) \big( l_j{+}\beta\big)}, \,\,\,
f_{\alpha, \beta, \S}(\sigma^iR){-}f_{\alpha, \beta,\S}(\sigma^jR) = \frac{R_{i,j}(\alpha, \beta,1)}{\big( r_i{+}\beta\big) \big( r_j{+}\beta\big)},
\end{aligned}
\]
where $l_i := f_{\widetilde{\S}}(\sigma^iQ)$, $r_i := f_{\widetilde{\S}}(\sigma^iR)$ and $L_{i,j},R_{i,j} \in \overline{\mathbb F_q}[\alpha, \beta,\gamma]$ are the linear polynomials
\begin{subeqn}\label{eq:def_Lij_Rij}
\begin{aligned}
L_{i,j}
&:= \big( l_j {-} l_i \big) \alpha + \big( f_{\sigma^iQ}(\S) {-} f_{\sigma^jQ}(\S)\big)\beta + \big( f_{\sigma^iQ}(\S) l_j {-} f_{\sigma^jQ}(\S) l_i \big) \gamma , \\
R_{i,j}
&:= \big( r_j {-} r_i \big) \alpha + \big( f_{\sigma^iR}(\S) {-} f_{\sigma^jR}(\S)\big)\beta + \big( f_{\sigma^iR}(\S) r_j {-} f_{\sigma^jR}(\S) r_i \big) \gamma \,.
\end{aligned}
\end{subeqn}
Rewriting Equation (\ref{eq:43_cross_ratio}) with this notation, we see that $\mathcal{B}_\S$ is the vanishing locus of the homogeneous polynomial
\begin{subeqn}\label{eq:eq_B}
M(\alpha, \beta, \gamma):=
L_{0,2}L_{1,3}R_{0,1}^q R_{2,3}^q - L_{0,1}L_{2,3}R_{0,2}^q R_{1,3}^q \quad \in \overline{\mathbb F_q}[\alpha, \beta, \gamma]\,.
\end{subeqn}
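Indeed, in the cross-ratios the factors $(l_i{+}\beta)$, respectively $(r_i{+}\beta)$, cancel, so that
\[
\mathrm{CrRat}\big( f_{\alpha,\beta,\S}(\sigma^0Q), \ldots , f_{\alpha,\beta,\S}(\sigma^3Q)\big) = \frac{L_{0,2}L_{1,3}}{L_{0,1}L_{2,3}}(\alpha,\beta,1)\,, \qquad \mathrm{CrRat}\big( f_{\alpha,\beta,\S}(\sigma^0R)^q, \ldots , f_{\alpha,\beta,\S}(\sigma^3R)^q\big) = \frac{R_{0,2}^q R_{1,3}^q}{R_{0,1}^q R_{2,3}^q}(\alpha,\beta,1)\,,
\]
and clearing denominators in (\ref{eq:43_cross_ratio}) gives exactly the equation $M = 0$.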
Notice that for each pair $(i,j) \in \{(0,1), (0,2), (1,3), (2,3)\}$ the varieties $\{L_{i,j}=0\}$ and $\{R_{i,j}=0\}$ are lines inside $\P^2$ and that it is easy to determine the intersections $\mathcal{B}_\S\cap \{L_{i,j}{=}0\}$ and $\mathcal{B}_\S\cap \{R_{i,j}{=}0\}$: such divisors are linear combinations of the points $X_k$ defined in Figure \ref{figure} as intersections between lines in $\P^2$. The following claim says that the points $X_k$ are well-defined and distinct.
\begin{figure}\caption{The intersections $X_i$ of the curve $\mathcal{B}_\S$ with certain lines $L_{i,j}, R_{i,j}$.} \label{figure}
\includegraphics[width=\linewidth]{IntersLines.png}
\end{figure}
\begin{subClaim}\label{hope_distinct_points}
Consider the lines $\{L_{i,j}=0\}$ and $\{R_{i,j}=0\}$ and the points $X_i$ in Figure \ref{figure}.
For all but at most $450$ choices of $\S \in E(k)$, those lines
are distinct and the points $X_i$ are distinct.
\end{subClaim}
\begin{proof}
First we see that the points $\sigma^0Q, \sigma^0R,\ldots$, $\sigma^3Q, \sigma^3R$ are pairwise distinct: clearly $\sigma^0Q, \ldots$, $ \sigma^3Q$ are distinct and $\sigma^0R, \ldots, \sigma^3R$ are distinct and if we had $\sigma^i Q = \sigma^jR$, then, for $l:=[k:\mathbb F_q]$ and $m:=i{-}j$, we would have
\[\begin{aligned}
Q + P_0 = \phi(R) = &\phi(\sigma^{{i-j}} Q) = \phi(\phi^{l(i-j)}Q) = \phi^{lm +1}(Q) \implies \quad \phi^4(Q) =\phi^{4(lm+1)}(Q) = Q + 4P_0\,,
\end{aligned}
\]
which is absurd because $Q$ is not a trap and consequently $\phi^4(Q) \neq Q +4P_0$.
If the lines $\{L_{0,2}=0\}$ and $\{L_{1,3}=0\}$ were equal, then the matrix of their coefficients
\[
n(\S) = \left(\begin{matrix}
l_2 {-} l_0 & (f_{Q}{-}f_{\sigma^2 Q})(\S) & (l_2f_{Q}{-}l_0f_{\sigma^2 Q})(\S) \\
l_3 {-} l_1 & (f_{\sigma Q}{-}f_{\sigma^3 Q})(\S) & (l_3f_{\sigma Q}{-}l_1f_{\sigma^3 Q})(\S)
\end{matrix} \right)
\]
would have rank $1$, which, computing the determinant of a submatrix of $n$, would imply $\S$ to be a zero of the rational function $(l_0-l_2) (f_{\sigma^3 Q}{-}f_{\sigma Q}) - (l_1{-}l_3)(f_{\sigma^2 Q}{-}f_{Q})$, which has five poles counted with multiplicity, since the points $Q, \ldots, \sigma^3Q$ are distinct and since $l_i\neq l_j$ by the choice of $\widetilde{\S}$.
Hence for all but at most five choices of $\S \in E(k)$, the matrix $n(\S)$ has rank $2$ and consequently the lines $\{L_{0,2}=0\}$ and $\{L_{1,3}=0\}$ are distinct.
Similar arguments prove that, for any other pair of lines $\Lambda,\Lambda'$ in Figure \ref{figure}, $\Lambda \neq \Lambda'$ for all but at most five choices of $\S$.
We now prove that, for all $i \neq j$, we have $X_i \neq X_j$ for all but at most six choices of $\S \in E(k)$. We treat only a couple of cases.
If $X_{9} = X_{12}$, then the lines $\{R_{0,2}=0\}$, $\{R_{2,3}=0\}$ and $\{L_{0,2}=0\}$ are concurrent, hence the following matrix, which contains their coefficients, is not invertible
\[
m = m(\S) = \left(\begin{matrix}
r_2 {-} r_0 & (f_{R}{-}f_{\sigma^2 R})(\S) & (r_2f_{R}{-}r_0f_{\sigma^2 R})(\S) \\
r_3 {-} r_2 & (f_{\sigma^2R}{-}f_{\sigma^3 R})(\S) & (r_3f_{\sigma^2R}{-}r_2f_{\sigma^3 R})(\S) \\
l_2 {-} l_0 & (f_{Q}{-}f_{\sigma^2 Q})(\S) & (l_2f_{Q}{-}l_0f_{\sigma^2 Q})(\S) \\
\end{matrix}\right)\,,
\]
implying that $\S$ is a zero of the rational function $\det(m)$. Writing out $\det(m)$, and using that $\sigma^0Q, \sigma^0R,\ldots$, $\sigma^3Q, \sigma^3R$ are pairwise distinct, we see that there is a rational function $g$, regular at ${-}\sigma^2 R$, such that
\[
\det(m) = (l_2 {-} l_0)(r_0 {-} r_3) f_{\sigma^2R}^2 + f_{\sigma^2 R}\,g\,.
\]
Since $l_0 \neq l_2$ and $r_0 \neq r_3$, the rational function $\det(m)$ has a pole of order $2$ at ${-}\sigma^2R$ and in particular $\det(m)$ is a non-zero rational function with at most $6$ poles counted with multiplicity. Hence $\det(m)$ has at most $6$ zeroes, implying that $X_9 \neq X_{12}$ for all but at most $6$ choices of $\S \in E(k)$.
If $X_3 = X_7$, then the lines $\{L_{0,1}=0\}$, $\{L_{2,3}=0\}$ and $\{R_{0,1}=0\}$ are concurrent, hence the following matrix, which contains the coefficients of $L_{0,1}$, $L_{2,3}$ and $R_{0,1}$, is not invertible
\[
N = N(\S) = \left(\begin{matrix}
l_1 {-} l_0 & (f_{Q}{-}f_{\sigma Q})(\S) & (l_1f_{Q}{-}l_0f_{\sigma Q})(\S) \\
l_3 {-} l_2 & (f_{\sigma^2Q}{-}f_{\sigma^3 Q})(\S) & (l_3f_{\sigma^2Q}{-}l_2f_{\sigma^3 Q})(\S) \\
r_1 {-} r_0 & (f_{R}{-}f_{\sigma R})(\S) & (r_1f_{R}{-}r_0f_{\sigma R})(\S) \\
\end{matrix}\right)\,.
\]
As before, in order to prove that $X_3 \neq X_7$ for all but at most $6$ choices of $\S \in E(k)\setminus\{O_E\}$ it is enough to prove that $\det(N(\S))$, considered as a rational function of $\S$, is not identically zero. We suppose by contradiction that $\det(N)$ is identically zero.
Denoting by $N_{i,j}$ the $(i,j)$-minor of $N(\S)$, it is easy to see that the rational functions $N_{i,3}$ are not identically zero. Consequently, the only way for $\det(N)$ to be zero is that the third column of $N$ is linearly dependent on the first two.
Hence, there are rational functions $A,B \in \overline{\mathbb F_q}(E)$ such that
\begin{subeqn}\label{eq:linear_system_Hope}
\begin{cases}
\big(l_1 {-} l_0\big) \cdot A + \big(f_{Q}{-}f_{\sigma Q}\big) \cdot B = l_1f_{Q}{-}l_0f_{\sigma Q} \\
\big(l_3 {-} l_2\big) \cdot A + \big(f_{\sigma^2Q}{-}f_{\sigma^3 Q} \big) \cdot B = l_3f_{\sigma^2Q}{-}l_2f_{\sigma^3 Q} \\
\big(r_1 {-} r_0\big) \cdot A + \big(f_{R}{-}f_{\sigma R}\big) \cdot B = r_1f_{R}{-}r_0f_{\sigma R} \\
\end{cases}
\end{subeqn}
and, using Cramer's rule, we have
\[
B = \frac{N_{1,2}}{N_{1,3}} = \frac{N_{2,2}}{N_{2,3}}= \frac{N_{3,2}}{N_{3,3}}\,.
\]
We easily compute the poles of the rational functions $N_{i,j}$ and check that they all vanish at $\widetilde{\S}$ and $O_E$ (for $O_E$ it is enough to notice that $(f_P {-} \tfrac yx)(O_E)=0$ for all $P \in E(\overline{\mathbb F_q}) \setminus\{O_E\}$). These computations give
\[
\begin{aligned}
\mathrm{div}(N_{1,j}) &= D_{1,j} + \widetilde{\S} + O_E - ({-}R) - ({-}\sigma R) - ({-}\sigma^2Q) - ({-}\sigma^3Q) \,,\\
\mathrm{div}(N_{2,j})&= D_{2,j} + \widetilde{\S} + O_E - ({-}Q) - ({-}\sigma Q) - ({-}R) - ({-}\sigma R) \,,\\
\mathrm{div}(N_{3,j})&= D_{3,j} + \widetilde{\S} + O_E - ({-}Q) - ({-}\sigma Q) - ({-}\sigma^2Q) - ({-}\sigma^3Q) \,,\\
\end{aligned}
\]
for certain positive divisors $D_{i,j}$ of degree $2$. Consequently
\[
\mathrm{div}(B) = D_{1,2} - D_{1,3} = D_{2,2} - D_{2,3} =D_{3,2} - D_{3,3}\,.
\]
Since the functions $f_{Q}$, $f_{\sigma Q}$, $f_{\sigma^2 Q}$ and $f_{\sigma^3 Q}$ are $\overline{\mathbb F_q}$-linearly independent, $N_{1,2}$ and $N_{1,3}$ are not $\overline{\mathbb F_q}$-multiples of one another and $B$ is not constant. Since every non-constant rational function on $E$ has at least two poles, we deduce that $D_{1,3} = D_{2,3} = D_{3,3}$ is the divisor of poles of $B$. In particular, in the group $E(\overline{\mathbb F_q})$, the sum of the poles of $N_{1,3}$ is equal to the sum of the poles of $N_{2,3}$ and to the sum of the poles of $N_{3,3}$. Writing it out, we get the following equalities in the group $E(\overline{\mathbb F_q})$
\[
Q + \sigma Q = \sigma^2Q + \sigma^3 Q = R + \sigma R\,.
\]
Hence, using (\ref{eq:zeroes_superuseful}), ${-}Q{-}\sigma Q$ is a zero of $N_{3,3}$ and consequently the two poles of $B$ are ${-}Q{-}\sigma Q$ and ${-}Q{-}\sigma Q{-}\widetilde{\S}$. By looking at (\ref{eq:linear_system_Hope}), we deduce that $A$ has exactly one simple pole, namely ${-}Q{-}\sigma Q{-}\widetilde{\S}$, which is absurd. Hence $\det(N(\S))$ is not identically zero.
\end{proof}
We now study the geometrically irreducible components of $\mathcal{B}_\S$ for the points $\S$ such that the conclusions of Claim \ref{hope_distinct_points} hold (such points exist because $450$ is much smaller than $\#E(k)$).
Equation (\ref{eq:eq_B}), which defines $\mathcal{B}_\S$, gives the following divisor-theoretic intersection
\begin{subeqn}\label{eq:inters_B_R_L_31}
\mathcal{B}_\S \cap \{L_{0,2} = 0 \} = X_1 + X_5 + qX_9 + qX_{13}\,.
\end{subeqn}
Since the $X_i$'s are distinct, the point $X_1$ has multiplicity $1$ in the above intersection and consequently it is a smooth point of $\mathcal{B}_\S$. With analogous arguments we see that all the points $X_i$, except the ones of shape $\{ R_{i,j} =0\} \cap \{ R_{l,m}=0 \}$, are smooth. This is used in the following Claim.
\begin{subClaim}\label{hope_no_conics}
Assuming the conclusions of Claim \ref{hope_distinct_points}, the curve $\mathcal{B}_\S$ does not contain any conic defined over~$k$.
\end{subClaim}
\begin{proof}
Suppose there exists such a conic $\mathcal Q$. Since $X_9$ is a smooth point of $\mathcal{B}_\S$, if $X_9\in \mathcal Q$, then $\mathcal Q$ is the only component of $\mathcal{B}_\S$ passing through $X_9$, hence $X_9$ appears in $\mathcal{B}_\S \cap \{ L_{0,2}=0 \}$ with multiplicity at most $2<q$, contradicting Equation (\ref{eq:inters_B_R_L_31}). Hence $\mathcal Q$ does not contain $X_9$ nor, by a similar argument, $X_{13}$.
This, together with Equation (\ref{eq:inters_B_R_L_31}), implies that $X_1$ and $X_5$ belong to $\mathcal Q$. Analogously $X_2$ and $X_6$ belong to $\mathcal Q$.
We notice that $X_1,X_2,X_5,X_6$ are in general position (by Claim \ref{hope_distinct_points}) and that we know two conics passing through them, namely $\{L_{0,1}L_{2,3} =0\}$ and $\{L_{0,2}L_{1,3} = 0\}$. Hence, if we choose a homogeneous quadratic polynomial $F \in k[\alpha, \beta, \gamma]$ defining $\mathcal Q$, there are $\lambda_0, \lambda_1 \in \overline{\mathbb F_q}$ such that
\[
F = \lambda_0 L_{0,1}L_{2,3} + \lambda_1 L_{0,2}L_{1,3}\,.
\]
Our claim now follows by extending $\sigma$ to an element of $\mathrm{Gal}(\overline{\mathbb F_q}/k)$ and looking at its action on the above equation.
For each $i,j \in \{0,1,2,3\}$ we have $\sigma L_{i,j} = L_{i+1,j+1} = {-}L_{j+1,i+1}$, considering the indices modulo $4$, hence
\[
\lambda_0 L_{0,1}L_{2,3} + \lambda_1 L_{0,2}L_{1,3} = F = \sigma F =
\sigma(\lambda_0) L_{2,3}L_{3,0} + \sigma(\lambda_1)L_{0,2}L_{1,3}\,.
\]
Some cumbersome computations imply that $\{L_{1,2}=0\}$ is the line through $X_2,X_5$, while $\{L_{3,0}=0\}$ is the line through $X_1,X_6$. In particular, the polynomials $L_{i,j}$ in the above equation are pairwise coprime, which implies $\lambda_0 = \sigma(\lambda_0) = 0$. Hence, $\mathcal Q = \{F=0\} = \{L_{0,2}L_{1,3} = 0 \}$, which is absurd because $\{L_{0,2}L_{1,3} = 0 \}$ is not contained in $\mathcal{B}_\S$.
\end{proof}
Claim \ref{hope_no_conics} also implies that $\mathcal{B}_\S$ does not contain a line of $\P^2$. Indeed, suppose that $\Lambda$ is a line contained in $\mathcal{B}_\S$. Neither $X_9$ nor $X_{13}$ is contained in $\Lambda$, since they are smooth points of $\mathcal{B}_\S$ and, by Equation (\ref{eq:inters_B_R_L_31}), the unique components of $\mathcal{B}_\S$ passing through them must have degree at least $q$ inside $\P^2$. Hence $\Lambda \cap \{L_{0,2}=0 \} \in \{X_1, X_5\}$ and consequently
\begin{subeqn}\label{eq:lines_conj_inters}
(\Lambda \cup \sigma^2 \Lambda) \cap \{L_{0,2}=0 \} = X_1 + X_5\,.
\end{subeqn}
This implies that $\sigma^2\Lambda \neq \Lambda$ and that $\sigma^2\Lambda$ and $\Lambda$ are all the $\mathrm{Gal}(\overline{\mathbb F_q}/k)$-conjugates of $\Lambda$: if $\Lambda$ had another conjugate $\Lambda'$, then, since $\mathcal{B}_\S$ is defined over~$k$, also $\Lambda'$ would be a component of $\mathcal{B}_\S$
and, by the same argument as before, $\Lambda' \cap \{L_{0,2}=0 \} = X' \in \{X_1, X_5\}$, which, together with Equation (\ref{eq:lines_conj_inters}), implies that two or more components of $\mathcal{B}_\S$ pass through $X'$, contradicting the smoothness of $X_1$ and $X_5$. We deduce that $\Lambda {\cup} \sigma^2\Lambda$ is a conic defined over~$k$ and contained in $\mathcal{B}_\S$, contradicting Claim \ref{hope_no_conics}.
By a similar argument, no conic $\mathcal Q$ is a component of $\mathcal{B}_\S$: if this happens, since conics have degree $2<q$ in $\P^2$, then $X_9,X_{13}$ do not belong to any of the $\mathrm{Gal}(\overline{\mathbb F_q}/k)$-conjugates of $\mathcal Q$, thus, by Equation (\ref{eq:inters_B_R_L_31}), for all $\tau \in \mathrm{Gal}(\overline{\mathbb F_q}/k)$ we have
\[
\tau(\mathcal Q) \cap \{L_{0,2}=0 \} = X_1 + X_5 = \mathcal Q \cap \{L_{0,2}=0 \}
\]
hence, by the smoothness of $X_1$ and $X_5$, $\mathcal Q$ is defined over~$k$, contradicting Claim \ref{hope_no_conics}.
We now suppose that $\mathcal{B}_\S$ is not geometrically irreducible. Let $\mathcal{B}_1, \ldots, \mathcal{B}_r$ be the geometrically irreducible components of $\mathcal{B}_\S$. As we already proved, each $\mathcal{B}_i$ has degree at least $3$, hence the intersection $\mathcal{B}_i \cap \{L_{0,2}=0\}$ is a sum of at least $3$ points counted with multiplicity. By Equation (\ref{eq:inters_B_R_L_31}), this implies that $\mathcal{B}_i$ passes through $X_9$ or $X_{13}$, hence each $\mathcal{B}_i$ has degree at least $q$. Since the sum of the degrees of the $\mathcal{B}_i$'s is equal to $2q{+}2 < 3q$, we deduce that $r=2$ and that either $\deg(\mathcal{B}_1)= \deg(\mathcal{B}_2) = q+1$ or, up to reordering, $\deg(\mathcal{B}_1)= q$ and $\deg(\mathcal{B}_2) = q+2$.
If $\deg(\mathcal{B}_1)= \deg(\mathcal{B}_2) = q+1$, Equation (\ref{eq:inters_B_R_L_31}) implies that, up to reordering, $X_1 \in \mathcal{B}_1(\overline{\mathbb F_q})$ and $X_5 \in \mathcal{B}_2(\overline{\mathbb F_q})$. Since $\mathcal{B}_\S$ is defined over~$k$, then $\mathrm{Gal}(\overline{\mathbb F_q}/k)$ acts on $\{\mathcal{B}_1,\mathcal{B}_2\}$ and because of the cardinality of such a set, then $\sigma^2$ acts trivially. In particular $X_5 = \sigma^2X_1$ belongs to $\sigma^2 \mathcal{B}_1(\overline{\mathbb F_q}) = \mathcal{B}_1(\overline{\mathbb F_q})$, hence $X_5 \in \mathcal{B}_1(\overline{\mathbb F_q}) \cap \mathcal{B}_2(\overline{\mathbb F_q})$, contradicting the smoothness of $X_5$. This contradiction implies that
\[
\deg(\mathcal{B}_1)= q, \quad \deg(\mathcal{B}_2) = q+2 \,.
\]
For each $i\in\{1,2\}$ let $M_i \in \overline{\mathbb F_q}[\alpha, \beta, \gamma]$ be a homogeneous polynomial defining $\mathcal{B}_i$.
\begin{subClaim}\label{claim:shape_Mi}
There exist homogeneous polynomials $F_1, F_2, G_2, N_1, N_2 \in \overline{\mathbb F_q}[\alpha, \beta, \gamma]$ of respective degrees $1,1,1,q-4, q-2$ such that
\begin{align}
\label{eq:M1F1} M_1 &= F_1^q + L_{0,1}L_{2,3}L_{0,2}L_{1,3}N_1, \\
\label{eq:M2F2}
M_2 &= F_2^qL_{0,1}L_{2,3} + G_2^qL_{0,2}L_{1,3} + L_{0,1}L_{2,3}L_{0,2}L_{1,3}N_2
\end{align}
\end{subClaim}
\begin{proof}
We start with the first part. Since $\deg \mathcal{B}_1=q$ and since $X_1, X_5, X_9$ and $X_{13}$ are smooth, Equation (\ref{eq:inters_B_R_L_31}) implies that $\mathcal{B}_1 \cap \{L_{0,2}=0\}$ is either $qX_{13}$ or $qX_9$. Hence $M_1 \bmod{L_{0,2}}$ is a $q$-th power, i.e., there are homogeneous polynomials $A_1, B_1$, with $A_1$ linear, such that
\[
M_1 = A_1^q + B_1 L_{0,2}\,.
\]
Similarly to $\mathcal{B}_1 \cap \{L_{0,2}=0\}$, we have that $\mathcal{B}_1 \cap \{L_{1,3}=0\}$ is either $qX_{14}$ or $qX_{10}$, hence there exists a linear polynomial $A_2$ such that
\begin{align} \label{eq:a_congruence1}
A_2^q \equiv M_1 \equiv A_1^q + B_1 L_{0,2} \pmod{L_{1,3}} \quad \implies \quad
B_1 L_{0,2} \equiv (A_2 - A_1)^q \pmod{L_{1,3}}
\end{align}
We notice that, since $L_{1,3} = l_\alpha \alpha + l_\beta\beta+ l_\gamma \gamma$ is linear, with $l_\alpha = l_3-l_1\neq 0$, the map $F \mapsto F\left( -\frac{ l_\beta\beta+ l_\gamma \gamma}{l_\alpha} ,\beta, \gamma \right)$ gives an isomorphism of $\overline{\mathbb F_q}[\alpha,\beta, \gamma]/L_{1,3}$ with $\overline{\mathbb F_q}[\beta, \gamma]$, which is a UFD. Moreover this isomorphism sends $(A_2 {-} A_1) \bmod{L_{1,3}}$ to a homogeneous polynomial of degree at most $1$, which means that $(A_2 {-} A_1) \bmod{L_{1,3}}$ is either zero or irreducible.
We deduce that, in the last congruence of (\ref{eq:a_congruence1}), either both sides are zero or the right hand side gives the prime factorization of the left hand side.
In both cases we have $B_1\equiv \lambda_1 L_{0,2}^{q-1} \pmod{L_{1,3}}$ for some $\lambda_1 \in \overline{\mathbb F_q}$, hence
\[
B_1 = \lambda_1 L_{0,2}^{q-1} + B_2 L_{1,3} \quad \implies \quad M_1 = (A_1 +\lambda_1 L_{0,2}) ^q + B_2 L_{0,2}L_{1,3} = A_3^q + B_2 L_{0,2}L_{1,3}
\]
for certain homogeneous polynomials $A_3, B_2$, with $A_3$ linear.
Again, $\mathcal{B}_1 \cap \{L_{0,1}=0\}$ is either $qX_{3}$ or $qX_{4}$, hence, for some linear homogeneous polynomial $A_4$, we have
\[
\begin{aligned}
&A_4^q \equiv M_1 \equiv A_3^q + B_2 L_{0,2} L_{1,3} \pmod{L_{0,1}}
\quad \implies \quad
B_2 \, L_{0,2}\, L_{1,3} \equiv (A_4 - A_3)^q \pmod{L_{0,1}}\,.
\end{aligned}
\]
As before, the last congruence can be interpreted as an equation in the UFD $\overline{\mathbb F_q}[\alpha,\beta, \gamma]/L_{0,1}$ and, since $(A_4 {-} A_3) \bmod{L_{0,1}}$ is either irreducible or zero, either both sides are zero or the right hand side gives the prime factorization of the left hand side. The latter is not possible, since the points $X_1 = \{L_{0,1}=0\}\cap\{L_{0,2}=0\}$ and $X_2 = \{L_{0,1}=0\}\cap\{L_{1,3}=0\}$ are distinct and consequently $L_{0,2}\bmod{L_{0,1}}$ and $L_{1,3}\bmod{L_{0,1}}$ are relatively prime. We deduce that $B_2$ is divisible by $L_{0,1}$. By a similar argument $B_2$ is also divisible by $L_{2,3}$, hence Equation (\ref{eq:M1F1}) follows.
We now turn to (\ref{eq:M2F2}). Since $\deg \mathcal{B}_2=q+2$ and since $X_1, X_5, X_9$ and $X_{13}$ are smooth, Equation (\ref{eq:inters_B_R_L_31}) implies that $\mathcal{B}_2 \cap \{L_{0,2}=0\}$ is either $X_1 {+} X_5 {+} qX_{13}$ or $X_1 {+} X_5 {+}qX_9$, hence we can write $ M_2 = A_5^qL_{0,1}L_{2,3} + B_3 L_{0,2}$
for some homogeneous polynomials $A_5, B_3$, with $A_5$ linear. In a similar fashion,
$\mathcal{B}_2 \cap \{L_{1,3}=0\}$ is either $X_2 {+} X_6 {+} qX_{14}$ or $X_2 {+} X_6 {+}qX_{10}$, hence,
\[\begin{aligned}
&L_{0,1} L_{2,3} A_6^q \equiv M_2\equiv L_{0,1} L_{2,3} A_5^q + B_3 L_{0,2} \pmod{L_{1,3}} \quad \implies \quad
B_3 L_{0,2} \equiv L_{0,1}L_{2,3} (A_6 - A_5)^q \pmod{L_{1,3}}\,.\end{aligned}
\]
As before, in the last equation either both sides are zero or the right hand side gives the prime factorization of the left hand side. In both cases $B_3$ is congruent, modulo $L_{1,3}$, to a scalar multiple of $L_{0,1}L_{2,3} L_{0,2}^{q-1}$: if $B_3\equiv 0$ this is obvious, otherwise we need to use that the polynomials $L_{0,1}$, $L_{2,3}$ and $L_{0,2}$ are relatively prime modulo $L_{1,3}$ because the lines $\{L_{0,1}=0\}$, $\{L_{2,3}=0\}$ and $\{L_{0,2}=0\}$ all have different intersections with $\{L_{1,3}=0\}$. Hence
\[
M_2 = A_7^qL_{0,1}L_{2,3} + B_4L_{0,2}L_{1,3}\,.
\]
for certain $A_7, B_4 \in \overline{\mathbb F_q}[\alpha, \beta, \gamma]$. Iterating similar arguments we prove Equation (\ref{eq:M2F2}).
\end{proof}
Let $F_1, F_2, G_2, N_1$ and $N_2$ be as in Claim \ref{claim:shape_Mi}.
Up to multiplying $M_1$ by an element of $\overline{\mathbb F_q}^\times$, we can suppose that $M = M_1M_2$. Reducing this equality modulo $L_{0,2}L_{1,3}$ we see that
\[
L_{0,2}L_{1,3} \text{ divides } L_{0,1}L_{2,3}(F_1F_2 + R_{0,2}R_{1,3})^q.
\]
Since the $L_{i,j}$'s in the above equation are coprime, $L_{0,2}L_{1,3}$ divides $F_1F_2 {+} R_{0,2}R_{1,3}$. Since $F_1F_2 {+} R_{0,2}R_{1,3}$ is homogeneous of degree at most $2$, it is a scalar multiple of $L_{0,2}L_{1,3}$. Using a similar argument with $L_{0,1}L_{2,3}$ we prove that there exist $\lambda, \mu \in \overline{\mathbb F_q}$ such that
\begin{equation}\label{eq:patch1}
F_1F_2 + R_{0,2}R_{1,3} = \lambda L_{0,2}L_{1,3}, \quad F_1G_2 - R_{0,1}R_{2,3} = \mu L_{0,1}L_{2,3} \,.
\end{equation}
We have $\lambda \neq 0$: otherwise $F_1$ would be a scalar multiple of either $R_{0,2}$ or $R_{1,3}$; in the first case Equation (\ref{eq:M1F1}) would imply that $\mathcal{B}_1$ contains $X_9$ but not $X_{14}=\tau(X_9)$, implying that $\tau(\mathcal{B}_1)$ is a component of $\mathcal{B}$ different from $\mathcal{B}_1$, that is, $\tau(\mathcal{B}_1)=\mathcal{B}_2$, which contradicts the inequality $\deg(\mathcal{B}_2)>\deg(\mathcal{B}_1)$; in the second case Equation (\ref{eq:M1F1}) would imply that $\mathcal{B}_1$ contains $X_{13}$ but not $X_{10}=\tau(X_{13})$, leading to the same contradiction.
Using Equations (\ref{eq:M1F1}), (\ref{eq:M2F2}) and (\ref{eq:patch1}) and the equality $M_1M_2{=}M$, we see that
\begin{equation}
\begin{aligned} \label{boh43}
0 & = \frac{M_1M_2 - M}{L_{0,1}L_{2,3}L_{0,2}L_{1,3}} = \\
&= \mu^q\! L_{0,1}^{q-1}\!L_{2,3}^{q-1} {+} \lambda^q \!L_{0,2}^{q-1}\!L_{1,3}^{q-1} {+} F_1^q\! N_2 {+} F_2^q\!N_1\!L_{0,1}\!L_{2,3} {+} G_2^q\!N_1\!L_{0,2}\!L_{1,3} {+} N_1\!N_2\! L_{0,1}\!L_{2,3}\!L_{0,2}\!L_{1,3}\\
& \equiv \lambda^q (L_{0,2}L_{1,3})^{q-1} + F_1^qN_2 + G_2^qN_1L_{0,2}L_{1,3} \pmod{L_{0,1}}.
\end{aligned}
\end{equation}
As already observed in the proof of Claim \ref{claim:shape_Mi}, $\overline{\mathbb F_q}[\alpha,\beta, \gamma]/L_{0,1}$ is isomorphic to $\overline{\mathbb F_q}[\beta, \gamma]$ through the map $F \mapsto F\left( -\frac{ l_\beta\beta+ l_\gamma \gamma}{l_\alpha} ,\beta, \gamma \right)$, where $l_\alpha, l_\beta$ and $l_\gamma$ are the coefficients of $L_{0,1}$. In particular, $\overline{\mathbb F_q}[\alpha,\beta, \gamma]/L_{0,1}$ is a UFD and for any $F \in \overline{\mathbb F_q}[\alpha, \beta, \gamma]/L_{0,1}$ we denote by $\tilde F$ its image in $\overline{\mathbb F_q}[\beta, \gamma]$ through the above map. With this notation, (\ref{boh43}) implies that both $\tilde L_{0,2}$ and $\tilde L_{1,3}$ divide $\tilde N_2 \tilde F_1^q$.
More precisely, $\tilde L_{0,2}$ and $\tilde L_{1,3}$ divide $\tilde N_2$, because $\tilde F_1$ is relatively prime to both $\tilde L_{0,2}$ and $\tilde L_{1,3}$, since $\mathcal{B}_1 \cap \{L_{0,1}=0\}$ does not contain $X_1 = \{L_{0,2} {=}0\}\cap \{L_{0,1} {=}0\}$ nor $X_2 = \{L_{1,3} {=}0\} \cap \{L_{0,1} {=}0\}$.
Moreover, since $X_1 = \{L_{0,2} {=}0\}\cap \{L_{0,1} {=}0\}$ and $X_2 = \{L_{1,3} {=}0\} \cap \{L_{0,1} {=}0\}$ are distinct, $\tilde L_{0,2}$ is relatively prime to $\tilde L_{1,3}$ and we can write $\tilde N_2 = \tilde L_{0,2}\tilde L_{1,3}N_3$ for some $N_3 \in \overline{\mathbb F_q}[\beta, \gamma]$.
Substituting in (\ref{boh43}) we get
\[
\lambda^q \tilde L_{0,2}^{q-2} \tilde L_{1,3}^{q-2} + \tilde F_1^q N_3 + \tilde G_2^q \tilde N_1 =0.
\]
Since $\tilde L_{0,2}$ and $\tilde L_{1,3}$ are coprime and since $\lambda\neq 0$, this contradicts Lemma \ref{lem:hopehopehope} below.
In particular, the assumption that $\mathcal{B}_\S$ is reducible, together with the conclusions of Claim \ref{hope_distinct_points}, leads to a contradiction. We deduce that for all but at most $450$ choices of $\S \in E(k)$ the curve $\mathcal{B}_\S$ is geometrically irreducible. Since $\# E(k) > 450$ and since all the components of $\mathcal{B}$ project surjectively to $E$, we deduce that $\mathcal{B}$ is reduced and geometrically irreducible.
\begin{subLemma}\label{lem:hopehopehope}
Let $L_1, L_2 \in \overline{\mathbb F_q}[\beta, \gamma]$ be relatively prime homogeneous linear polynomials. Then there exist no homogeneous polynomials $A,B,C,D\in \overline{\mathbb F_q}[\beta, \gamma]$, with $A$ and $C$ non-constant, such that
\[
L_1^{q-2}L_2^{q-2} = A^qB + C^qD.
\]
\end{subLemma}
\begin{proof}
The zeroes of $L_1$ and $L_2$ in $\P^1$ are distinct, hence, up to a linear transformation, we can suppose that their zeroes are $0$ and $\infty$. In particular, up to scalar multiples, we can suppose $L_1 =\beta$ and $L_2 = \gamma$, implying that $A^qB + C^qD= \beta^{q-2}\gamma^{q-2}$. This is absurd: since $A$ and $C$ are non-constant, any monomial appearing in $A^q$ or in $C^q$ is either a multiple of $\beta^q$ or a multiple of $\gamma^q$, hence the same is true for all the monomials appearing in $ A^qB + C^qD$, while $\beta^{q-2}\gamma^{q-2}$ is divisible by neither $\beta^q$ nor $\gamma^q$.
\end{proof}
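The monomial mechanism behind this proof is the Frobenius identity $(a\beta + b\gamma)^q = a^q\beta^q + b^q\gamma^q$ over $\overline{\mathbb F_q}$. The following SymPy sketch, with the arbitrary illustrative choices $q=5$ and $A = 2\beta+3\gamma$ (our choices, not fixed by the proof), makes it explicit that every monomial of $A^q$ is a multiple of $\beta^q$ or of $\gamma^q$.
\begin{verbatim}
# Illustration only: the Frobenius collapses (2*beta + 3*gamma)^5
# over F_5 to 2*beta**5 + 3*gamma**5.
from sympy import symbols, expand, Poly, GF

beta, gamma = symbols('beta gamma')
q = 5                                   # an arbitrary small example
A = 2*beta + 3*gamma                    # a non-constant linear form
Aq = Poly(expand(A**q), beta, gamma, domain=GF(q))
print(Aq.as_expr())                     # prints 2*beta**5 + 3*gamma**5
\end{verbatim}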
\subsection{The irreducible components of $\mathcal{C}$}
In this subsection we prove that all the geometrically irreducible components of $\mathcal{C}$ are defined over~$k$. To do so, we can ignore (\ref{eq:43_item4_main}) in the definition of $\mathcal{C}$. The strategy is to apply Lemma \ref{lem:good_curves} to the variety $\mathcal{B}$, using the rational functions
\[
\begin{aligned}
&u_1,u_2,u_3 \colon \mathcal{B} \dashrightarrow \P^1\,, \qquad u_i(\alpha, \beta, 1,\S) = f_{\alpha, \beta,\S} (\sigma^{i-1}Q)\,, \\
&w_1,w_2,w_3 \colon \mathcal{B} \dashrightarrow \P^1\,, \qquad w_i(\alpha, \beta, 1, \S) = -f_{\alpha, \beta, \S} (\sigma^{i-1} R)\,,
\end{aligned}
\]
and the irreducible divisor $Z \subset \mathcal{B}$ being the Zariski closure of
\begin{subeqn}\label{eq:def_Z_43}
\left\{(\alpha, \beta, \S) \in (\mathbb A^2{\times}E')(\overline{\mathbb F_q}): \begin{matrix}
\S = - Q - \sigma Q - \sigma^3Q - \widetilde{\S}\,, \\
\alpha = \big((f_{Q}(\S){-}f_{\sigma Q}(\S))\beta + l_1f_{Q}(\S) {-} l_0f_{\sigma Q}(\S)\big) / (l_0 {-} l_1) \end{matrix} \right\}.
\end{subeqn}
\begin{subClaim}\label{claim_43_use_lemma}
The variety $Z$ is generically contained in the smooth locus of $\mathcal{B}$ and the rational function $u_1{-}u_2$ vanishes on $Z$ with multiplicity $1$.
\end{subClaim}
\begin{proof}
We restrict to an open subset $U \subset \P^2\times E$ containing the generic point of $Z$.
Up to shrinking $U$, the rational functions $u_i, w_i$ can be extended to regular functions on $U$ using the definition (\ref{eq:43_def_f}) of $f_{\alpha, \beta, \S}$, and we have
\[
\begin{aligned}
u_1 - u_2 &= \frac{L_{0,1}(\alpha, \beta,1,\S )}{\big( l_0 + \beta\big) \big( l_1 + \beta\big)}\,,
\end{aligned}
\]
where $L_{i,j}(\alpha, \beta, \gamma, \S) \in \overline{\mathbb F_q}[U]$ is defined as in (\ref{eq:def_Lij_Rij}), as well as $R_{i,j}(\alpha, \beta, \gamma, \S)$.
Since we can assume that $l_0{+}\beta$ and $l_1{+}\beta$ are invertible on $U$ and since $Z$ is generically smooth, it is enough to show that $Z\cap U$ is a component of $(\mathcal{B} \cap U) \cap {\{L_{0,1} = 0\}}$ having multiplicity one. Up to shrinking $U$, $\mathcal{B} \cap U$ is the vanishing locus, inside $U$, of
\[
M(\alpha, \beta, \S) := (L_{0,2}L_{1,3}R_{0,1}^q R_{2,3}^q - L_{0,1}L_{2,3}R_{0,2}^q R_{1,3}^q)(\alpha, \beta, 1, \S) \quad \in \overline{\mathbb F_q}[U]\,.
\]
Since the restriction of $M$ to $\{L_{0,1}=0\}$ is equal to the restriction of $L_{0,2}L_{1,3}R_{0,1}^q R_{2,3}^q$,
it is enough to show that $L_{0,2}$, $R_{0,1}$, $R_{2,3}$ do not vanish on $Z$ and that $\{L_{1,3} = 0\} \cap \{L_{0,1}=0\}$ contains $Z\cap U$ with multiplicity $1$. We start with the latter. Eliminating the variable $\alpha$, we see that, up to shrinking $U$, $\{L_{1,3} = 0\} \cap \{L_{0,1}=0\}$ is defined by the equations
\begin{subeqn}\label{eq:eqs_inters_for_lemma_43}
\lambda (\S) =0 \quad \text{and} \quad (l_1 - l_0)\alpha + (f_{Q}(\S) - f_{\sigma Q}(\S))\beta + l_1 f_{Q}(\S) - l_0 f_{\sigma Q}(\S) =0 \,,
\end{subeqn}
where
\[
\lambda (\S) := (l_1 {-} l_0)f_{\sigma^3Q}(\S) + (l_3 {-} l_1) f_{Q}(\S) + (l_0 {-} l_3)f_{\sigma Q}(\S) \quad \in \overline{\mathbb F_q}(E)\,.
\]
The function $\lambda$ has three simple poles, namely ${-}Q, {-}\sigma Q, {-}\sigma^3Q$, and we easily verify that $\lambda(\widetilde{\S})=\lambda(O_E)=0$.
We deduce that $\S={-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S}$ is a simple zero of $\lambda$. This, together with the fact that the second equation in (\ref{eq:eqs_inters_for_lemma_43}) is equal to the second equation in the definition (\ref{eq:def_Z_43}) of $Z$, implies that $\{L_{1,3} = 0\} \cap \{L_{0,1}=0\}$ contains $Z \cap U$ with multiplicity~$1$.
We now suppose by contradiction that $R_{0,1}$ vanishes on $Z \cap U$. Substituting $\alpha$ and $\S$ in $R_{0,1}$ as in the definition (\ref{eq:def_Z_43}) of $Z$, we see that
\[
R_{0,1}(\alpha, \beta, 1, \S)|_{Z\cap U} = \frac{\lambda_0({-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S})}{l_0-l_1} \beta + \frac{\lambda_1({-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S})}{l_0-l_1}\,,
\]
where
\[
\begin{aligned}
\lambda_0(\S) := & (r_1-r_0)(f_{Q} - f_{\sigma Q})(\S) - (l_1-l_0)(f_{R} - f_{\sigma R})(\S) \,, \\
\lambda_1(\S) := & (r_1-r_0) (l_1f_{Q}(\S) - l_0f_{\sigma Q}(\S)) - (l_1-l_0)(r_1 f_{R}(\S) - r_0f_{\sigma R}(\S)) \,,
\end{aligned}
\]
and we deduce that both $\lambda_0$ and $\lambda_1$ vanish at $\S={-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S}$. Both $\lambda_0$ and $\lambda_1$ have $4$ poles and $4$ zeroes counted with multiplicity: they have the same poles and they share three zeroes, namely $O_E, \widetilde{\S}$ and ${-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S}$. Since, in the group $E(\overline{\mathbb F_q})$, the sum of the zeroes of an element of $\overline{\mathbb F_q}(E)^\times$ is equal to the sum of the poles, $\lambda_0$ and $\lambda_1$ also share the fourth zero, hence $\lambda_0$ and $\lambda_1$ differ by a multiplicative constant in $\overline{\mathbb F_q}$. This is absurd because $l_0 \neq l_1$ and because the functions $f_{Q}$, $f_{\sigma Q}$, $f_{R}$, $f_{\sigma R}$ are $\overline{\mathbb F_q}$-linearly independent.
A similar argument implies that $R_{2,3}$ does not vanish on $Z \cap U$, while the case of $L_{0,2}$ is easier. Substituting $\alpha$ and $\S$ in $L_{0,2}(\alpha, \beta,1,\S)$ as in the definition (\ref{eq:def_Z_43}) of $Z$ we get
\[
L_{0,2}(\alpha, \beta, 1, \S)|_{Z\cap U} = \frac{ (\beta + l_0) \lambda_2 ({-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S})}{l_0-l_1}\,,
\]
where
\[
\begin{aligned}
&\lambda_2(\S) := (l_2 {-} l_1)f_{Q}(\S) + (l_0 {-} l_2)f_{\sigma Q}(\S) + ( l_1{-}l_0)f_{\sigma^2Q}(\S) \quad \in \overline{\mathbb F_q}(E) \,.
\end{aligned}
\]
Analogously to $\lambda$, we see that the zeroes of $\lambda_2$ are $\widetilde{\S},O_E$ and ${-}Q{-}\sigma Q{-}\sigma^2Q{-}\widetilde{\S}$, hence $\lambda_2$ does not vanish at ${-}Q{-}\sigma Q{-}\sigma^3Q{-}\widetilde{\S}$, implying that $L_{0,2}$ does not vanish on $Z \cap U$.
\end{proof}
We can show that $u_2{-}u_3$, $w_3{+}u_3$, $w_3{+}u_1$ and $w_i{-}w_j$ do not vanish on $Z$ using arguments similar to those in the last part of the above proof.
Hence, by Lemma \ref{lem:good_curves}, all the components of $\mathcal{C}$ are defined over~$k$ and distinct.
\subsection{$k$-rational points on $\mathcal{C}$}\label{subsec:points_43}
Finally we prove that $\# \mathcal{C}(k)$ is larger than $\tfrac{1}{2}(\# k)^2$.
The surface $\mathcal{C}$ is contained in the open subset of $\mathbb A^2 \times (E {\setminus}\{O_E\}) \times \mathrm{PGL}_2 \times \mathbb A^1$ made of points $(\alpha, \beta, (x,y), \smt abcd, z)$ such that $c \neq 0$. Hence $\mathcal{C}$ is contained in $\mathbb A^8$, with variables $\alpha, \beta$, $x,y$, $a,b,d$, $z$, and it is defined by the following equations:
\begin{itemize}
\item $0=p_1:=W(x,y)$, the Weierstrass equation defining $E$;
\item $0=p_2 :=(d^q {-} a)^{q+1}(z^q{-}z)^{q^2{-}q} - (ad {-} b)^q (\tfrac{z^{q^2}{-}z}{z^q{-}z})^{q+1}$,
the dehomogenization of (\ref{eq:43_item2}) in $c$; \item $0=p_i(\alpha, \beta, x,y,a,b,d)$ for $i = 3,4,5,6$, obtained from (\ref{eq:43_item3}) after dehomogenizing in $c$, substituting $f_{\sigma^iQ}, f_{\sigma^iR}$ by their expressions in $\alpha, \beta,x,y$ and clearing denominators;
\item a number of conditions $0\neq q_j$ ensuring that $\S\neq - \sigma^iQ, \S \neq - \sigma^iR$, $\beta + f_{\widetilde{\S}}(\sigma^iQ) \neq 0$, $\beta + f_{\widetilde{\S}}(\sigma^iR) \neq 0$, $d^q-a\neq0$, $ad-b\neq0$, that (\ref{eq:43_item4_main}) is satisfied, that $f_{\alpha, \beta,\S}(\sigma^iQ)$ are distinct and that $f_{\alpha, \beta,\S}(\sigma^iR)$ are distinct.
\end{itemize}
In particular, $\mathcal{C}$ can be seen as a closed subvariety of $\mathbb A^9$, with variables $\alpha,\beta$, $x,y$, $a,b,d$, $z$ and $t$, defined by the seven equations $p_1=0, \ldots, p_6=0$ and $0=p_7:=tq_1\cdots q_r -1$. Let $\mathcal{C}_1, \ldots, \mathcal{C}_s$ be the geometrically irreducible components of $\mathcal{C}$. By \cite[Remark $11.3$]{Npoints},
we have
\begin{subeqn}\label{eq:Weil43}
\# \mathcal{C}(k) \ge \# \mathcal{C}_1(k) \ge (\# k)^2 - (\delta-1)(\delta-2)(\# k)^{\frac 32} - K(\mathcal{C}_1) (\# k)\,,
\end{subeqn}
where $\delta$ is the degree of $\mathcal{C}_1$ and $K(\mathcal{C}_1)$ is the sum of the Betti numbers of $\mathcal{C}_1$ with respect to compactly supported $\ell$-adic cohomology. Since $\mathcal{C}_1$ is a component of $\mathcal{C}$, we have
\begin{subeqn}\label{eq:degD43}
\delta
\leq \deg(p_1)\cdots \deg(p_7) \,.
\end{subeqn}
Since $\mathcal{C}$ is the disjoint union of the $\mathcal{C}_i$, the Betti numbers of $\mathcal{C}$ are the sums of the Betti numbers of the $\mathcal{C}_i$. Hence, using \cite[Corollary of Theorem 1]{Katz}
\begin{subeqn}\label{eq:BettiD43}
K(\mathcal{C}_1) \leq K(\mathcal{C}) \leq 6 \cdot 2^{7}\cdot \left(3+7\max_{i=1,\dots ,7}\{\deg(p_i)\}\right)^{10}\,.
\end{subeqn}
Combining Equations (\ref{eq:Weil43}), (\ref{eq:degD43}) and (\ref{eq:BettiD43}) and the inequalities $\deg p_1 \le 3$, $\deg p_2 \le q^3{+}q$, $\deg p_3, \ldots, \deg p_6 \le 2q{+}3$, $\deg p_7 \le 16q^2 {+} 37 q{+} 75$, we deduce that $\#\mathcal{C}(k)> \tfrac 12 (\# k)^2$ when $\# k \ge q^{80}$ and $q \ge 3$.
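The last step is pure arithmetic and can be verified directly; the following Python sketch (an illustrative check, taking the worst case $\# k = q^{80}$ and the stated degree bounds at face value) evaluates the bounds with exact integer arithmetic.
\begin{verbatim}
# Sanity check of the final estimate, assuming the worst case #k = q^80
# (so that #k^(3/2) = q^120 exactly).
def check(q):
    degs = [3, q**3 + q, 2*q + 3, 2*q + 3, 2*q + 3, 2*q + 3,
            16*q**2 + 37*q + 75]            # deg p_1, ..., deg p_7
    delta = 1
    for d in degs:
        delta *= d                          # bound on deg C_1
    K = 6 * 2**7 * (3 + 7 * max(degs))**10  # Betti number bound
    nk = q**80
    lower = nk**2 - (delta - 1) * (delta - 2) * q**120 - K * nk
    return 2 * lower > nk**2                # #C(k) > (#k)^2 / 2 ?

print([(q, check(q)) for q in (3, 5, 7, 11)])   # all True
\end{verbatim}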
\bibliographystyle{alpha}
|
1,108,101,563,869 | arxiv | \section{Introduction}
Lifetime distributions are commonly utilized in studies aiming to statistically characterize a wide range of stochastic processes in physical and social systems.
There are various examples in which lifetime (or waiting time) distributions exhibit exponential decay: arrival of telephone calls or e-mails, decay of radioactive elements, occurrence of car accidents, and scoring of competitive sports \cite{Merritt2014, Clauset2015}.
Such systems can be theoretically described by a homogeneous Poisson process in which each event occurs independently at a constant rate within a certain time interval \cite{Daley2007}.
Meanwhile, power-law distributions for lifetime are also ubiquitous in nature, because they are associated with the dynamics of earthquakes, solar flares, animal movements, and human activities \cite{Karsai2018}.
Power-law behaviors are often referred to as ``burstiness'' especially for human activities.
A number of stochastic models exhibiting such power-law behaviors have been developed based on an extended Poisson process; examples include the priority queue model \cite{Barabasi2005}, Hawkes process \cite{Hawkes1971}, and cascading Poisson process \cite{Malmgren2008}.
Human activities can be divided into individual-driven and contact-driven (or communication-driven) \cite{Karsai2018}.
Recently, for contact-driven activities, experiments using wearable sensors have been conducted in scientific conferences \cite{Hui2005, Zhao2011}, schools \cite{Stehle2011, Fournet2014, Mastrandrea2015}, companies \cite{Takaguchi2011}, and other settings \cite{Cattuto2010, Isella2011}.
These studies measured how long two people were in close proximity within a certain distance, i.e., the lifetime of the adjacency relationships.
In these cases, it was found that the lifetime distribution obeys a power law.
Power-law properties can be extracted not only from human activities but also from more general situations.
In this study, we investigate the statistical properties of adjacency relationships in a two-dimensional Vicsek model, which describes the collective motions of self-propelled particles \cite{Vicsek1995, Ginelli2016}.
We focus on the lifetime $ \tau $ during which the adjacency relationships between two particles exist.
It is found that the cumulative distribution of $\tau$, $ P(\tau) $, becomes an exponential or a power law depending on the interaction radius in the Vicsek model.
\section{Model}
Let us consider an $ N $-particle system in a two-dimensional circular space with a diameter $ L $, which corresponds to the system size.
We denote the position and velocity of the $ j $-th particle at time $ t $ as $ \vec{r}_{j}(t) $ and $ \vec{v}_{j}(t) = v_{0} \vec{s}_{j}(t) $, respectively.
Here, $ v_{0} $ is the speed and $ \vec{s}_{j}(t) $ is the unit vector.
Note that $ \vec{s}_{j}(t) $ is determined by the angle $ \theta_{j}(t) $ in polar coordinates.
The equation of the motion of each particle is given as follows \cite{Chate2008}:
\begin{align}
\theta_{j}(t+\Delta t) &= \mathrm{Arg} \sum_{k\sim j}^{N}\left[(1-c) \vec{v}_{k}(t) + c \vec{f}_{jk}(t) \right] + \xi_{j}(t),\label{vm_1}\\
\vec{r}_{j}(t+\Delta t) &= \vec{r}_{j}(t) + v_{0} \Delta t \vec{s}_{j}(t+\Delta t).\label{vm_2}
\end{align}
In the first term on the right-hand side of Eq. \eqref{vm_1}, the notation $ k\sim j $ in the summation indicates that the $ j $-th particle interacts with others within the circle of radius $ R_{0} $, whose center is $ \vec{r}_{j} $.
In this summation, the first and second terms represent the alignment and repulsive interactions between the $j$-th and $k$-th particles, respectively.
Here, $ \vec{f}_{jk}(t) $ is given by
\begin{align*}
\vec{f}_{jk} &= - \vec{e}_{jk} \times \left[1 + \exp \left(\frac{|\vec{r}_{k} - \vec{r}_{j}|}{R_{f}} - 2\right)\right]^{-1},
\end{align*}
where $ \vec{e}_{jk} $ is the unit vector of $\vec{r}_{k} - \vec{r}_{j}$, and $ R_{f} $ is the typical repulsion distance \cite{Chate2008}.
The proportion of the alignment and repulsive interactions is controlled by the parameter $ c $.
The operator $ \mathrm{Arg} $ converts the vector to the angle.
The second term in Eq. \eqref{vm_1} represents noise, or fluctuation, where $ \xi $ is given as a uniform random number in $ [-\eta\pi, +\eta\pi]$ $ (\eta > 0) $.
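For concreteness, one update sweep of Eqs. (\ref{vm_1}) and (\ref{vm_2}) can be written in a few lines of Python. The sketch below is purely illustrative: it assumes that the neighbourhood $ k\sim j $ includes the particle itself, and it omits the reflecting circular wall for brevity.
\begin{verbatim}
import numpy as np

# Minimal sketch of the update rule; the sum over k ~ j is assumed
# to include j itself, and the reflecting boundary is omitted.
def step(r, theta, R0, Rf=0.003, c=0.5, v0=0.005, eta=0.2, dt=1.0):
    N = len(theta)
    v = v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    new_theta = np.empty(N)
    for j in range(N):
        d = r - r[j]                      # r_k - r_j for all k
        dist = np.linalg.norm(d, axis=1)
        nbr = dist < R0                   # neighbourhood k ~ j
        f = np.zeros_like(r)
        k = nbr & (dist > 0)              # repulsion f_jk only for k != j
        e = d[k] / dist[k][:, None]       # unit vectors e_jk
        f[k] = -e / (1.0 + np.exp(dist[k] / Rf - 2.0))[:, None]
        s = ((1 - c) * v[nbr] + c * f[nbr]).sum(axis=0)
        xi = eta * np.pi * (2.0 * np.random.rand() - 1.0)
        new_theta[j] = np.arctan2(s[1], s[0]) + xi   # Arg[...] + noise
    new_r = r + v0 * dt * np.stack(
        [np.cos(new_theta), np.sin(new_theta)], axis=1)
    return new_r, new_theta
\end{verbatim}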
At every time step, we define the adjacent edges for all particles using: (a) Delaunay triangulation and (b) Euclidean distance $ d $.
Hereafter, these edges are referred to as Delaunay and Euclidean edges, respectively.
The Delaunay triangulation is obtained from the adjacency relationships in the Voronoi regions of each particle (a Voronoi region of a particle is the set of locations for which the distance to that particle is less than to any other \cite{Okabe1992}).
For the Euclidean edges, we define two particles as adjacent if the Euclidean distance between them is less than $ d $.
Thus, at each time step, particles form an adjacency network as shown in Fig. \ref{fig:snap_shot}.
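Both edge sets can be extracted from a snapshot of the particle positions with standard tools; one possible sketch, using SciPy's Delaunay routine, reads as follows.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

# One snapshot: (a) Delaunay edges, (b) Euclidean edges (threshold d).
def adjacency_edges(r, d=0.05):
    tri = Delaunay(r)
    delaunay = set()
    for s in tri.simplices:               # each triangle gives 3 edges
        for a in range(3):
            for b in range(a + 1, 3):
                delaunay.add(tuple(sorted((int(s[a]), int(s[b])))))
    dist = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    i, j = np.where(np.triu(dist < d, k=1))
    euclid = set(zip(i.tolist(), j.tolist()))
    return delaunay, euclid
\end{verbatim}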
We focus on the lifetime $ \tau $ during which the adjacency relationship between two particles exists (i.e., the lifetime of adjacent edges).
The adjacency relationships between particles do not affect the particles' motion.
The numerical calculation of Eqs.~(\ref{vm_1}) and (\ref{vm_2}) was performed with a circular reflecting boundary, setting $ L=1 $, $ \Delta t = 1 $, $\ v_{0}=0.005$, $R_{f}=0.003 $, $ c=0.5 $, and $ d=0.05 $.
$ R_{0} $, $ \eta $, and $ N $ are the controlling parameters.
The total number of time steps was set to $ T=11000 $, and the cumulative distribution of the lifetime $ \tau $, denoted by $ P(\tau) $, is computed using the data of $ \tau $ obtained from all adjacent edges for $ t > 1000 $.
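Given the time-ordered sequence of edge sets produced by the run, the lifetimes, $ P(\tau) $, and the coefficient of variation considered below can be accumulated as in the following sketch; here \texttt{history} stands for the list of per-step edge sets for $ t>1000 $ (an assumed name), and edges still alive at the end of the run are discarded.
\begin{verbatim}
import numpy as np

def edge_lifetimes(history):
    """Lifetimes tau from a time-ordered list of edge sets."""
    birth, taus = {}, []
    prev = set()
    for t, edges in enumerate(history):
        for e in edges - prev:
            birth[e] = t                   # edge appears
        for e in prev - edges:
            taus.append(t - birth.pop(e))  # edge disappears
        prev = edges
    return np.array(taus)                  # surviving edges are censored

taus = edge_lifetimes(history)
taus_sorted = np.sort(taus)
P = 1.0 - np.arange(len(taus)) / len(taus)   # P(tau) vs taus_sorted
cv = taus.std() / taus.mean()                # CV = sigma(tau) / <tau>
\end{verbatim}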
\begin{figure}
\centering
\includegraphics[width=8.8cm]{figure/snap_shot.pdf}
\caption{Images of particles' motion and adjacency relationships for $ N=200 $ and $ \eta=0.2 $ at $ t=1100 $ in the circular reflection boundary with diameter $ L=1 $. The directions of the particles are shown by arrows. Lines between particles show the adjacency networks, defined by (a) the Delaunay triangulation and (b) the Euclidean distance $ d < 0.05 $. Left column: $R_{0}=0 $. Particles move randomly because they do not interact. Middle column: $ R_{0}=0.05 $. Particles' directions are aligned locally. Right column: $ R_{0}=0.1 $. All particles form a single cluster and move along the boundary.}
\label{fig:snap_shot}
\end{figure}
\section{Result}
We first present results for $ R_{0}=0 $.
In this condition, each particle moves randomly because no interactions occur among them (see the directions of the particles in the left column of Fig. \ref{fig:snap_shot}).
Figure \ref{fig:felt0} shows the cumulative distribution $ P(\tau) $ for (a) Delaunay and (b) Euclidean edges in a semilogarithmic scale.
It is found that $ P(\tau) $ for small $ \eta $ exhibits an exponential decay, although it deviates from the exponential for large $ \eta $.
Next, Fig. \ref{fig:felt} shows $ P(\tau) $ for $ R_{0} \neq 0 $ obtained from (a) Delaunay and (b) Euclidean edges in a double logarithmic scale.
The left panels in Fig. \ref{fig:felt} show that $ P(\tau) $ approaches a power-law distribution as $ R_{0} $ increases.
The power-law exponents for $R_{0} = 0.1$ and $\eta=0.2$ are estimated as $\alpha \simeq 1.56 $ and 1.57 for the Delaunay and the Euclidean edges, respectively.
The middle panels in Fig. \ref{fig:felt} show the power-law behaviors of $ P(\tau) $ when $ R_{0}=0.1 $ and $ \eta \leq 0.5 $.
We also confirmed that these power-law behaviors are almost independent of $ N ( \gtrsim 10 )$ for both the Delaunay and Euclidean edges.
To characterize the behaviors of $P(\tau)$, we introduce a coefficient of variation defined as $ \mathrm{CV}= \sigma(\tau)/ \langle \tau \rangle $, where $ \langle \tau \rangle $ and $ \sigma(\tau) $ are the mean and standard deviation of $ \tau $.
$ \mathrm{CV} $ becomes unity when $ \tau $ follows an exponential distribution.
When CV deviates from unity, the distribution is not exponential.
We present the $ R_{0} $ dependence of $ \mathrm{CV} $ in the right-hand panels of Fig. \ref{fig:felt}.
It is found that $ \mathrm{CV} $ changes from unity to larger values as $ R_{0} $ increases;
in particular, $ \mathrm{CV} $ increases rapidly at $ R_{0} \simeq 0.05 $.
Therefore, $ P(\tau) $ has a crossover between an exponential and a power-law distribution at around $ R_{0} \simeq 0.05 $.
\begin{figure}
\includegraphics[width=8.8cm]{figure/cdf0.pdf}
\caption{Cumulative distributions $ P(\tau) $ for $ R_{0}=0 $ obtained
from (a) Delaunay and (b) Euclidean edges
in a semilogarithmic scale. For small $ \eta $, exponential decay is observed.}
\label{fig:felt0}
\end{figure}
\begin{figure*}
\includegraphics[width=15cm]{figure/cdf.pdf}
\caption{Cumulative distributions $ P(\tau) $ for (a) Delaunay and (b) Euclidean edges. Each panel is shown in a double logarithmic scale. The slope of guideline in each panel is obtained by fittings for $ R_{0}=0.1 $ and $ \eta=0.2 $. Left column: $ R_{0} $ dependence of $ P(\tau) $ where $ \eta=0.2 $; $ P(\tau) $ changes from an exponential to a power-law distributions with an increase in $ R_{0} $. Middle column: $ \eta $ dependence of $ P(\tau) $ where $ R_{0}=0.1 $. Right column: $ R_{0} $ dependence of CV values.}
\label{fig:felt}
\end{figure*}
When $ R_{0} $ is sufficiently large, particles form a single cluster and move together as shown in the right-hand panels of Fig. \ref{fig:snap_shot}.
To check the influence of reflections of particles at the boundary on $ P(\tau) $, we performed a simulation without any boundaries for $ R_{0}=0.2 $.
Note that a single cluster is kept during this simulation.
Figure \ref{fig:dep}(a) shows that a power-law distribution emerges as $ \eta $ increases.
In the small $ \tau $ region, the same power-law exponent $ \alpha \simeq 1.56 $ is obtained independently of $ \eta $.
The deviation from the power law for small $ \eta $ suggests that the rewiring of edges seldom occurs because the relative positions of the particles are almost fixed.
This deviation of $ P(\tau) $ from the power law is also observed under the circular boundary.
Thus, reflective interactions between particles and the boundary do not affect the power-law behavior of $ P(\tau) $.
Here, we note that if there is no boundary, $P(\tau)$ is not stationary because a single cluster eventually breaks down due to the noise.
In this sense, the existence of a boundary is needed to ensure the stationarity of $ P(\tau) $.
\begin{figure}
\centering
\includegraphics[width=8.8cm]{figure/free_boundary_and_rf_dep.pdf}
\caption{(a) $ \eta $ dependence of $ P(\tau) $ obtained from Delaunay edges
without boundaries where $ R_{0}=0.2 $. (b) $ R_{f} $ dependence of $ P(\tau) $ for Delaunay edges where $ R_{0}=0.1 $ and $ \eta=0.2 $.}
\label{fig:dep}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{figure/diagram.pdf}
\caption{Behaviors of $P(\tau)$ as functions of $ R_{0} $ and $ \eta $. A: exponential; B: power law; C: crossover between A and B; D: $P(\tau)$ becomes a distribution with a longer tail than the power law, as in Fig. \ref{fig:dep}(a). The vertical dashed line at $ R_{0} \simeq 0.05 $ represents the point around which $ \mathrm{CV} $ changes considerably.}
\label{fig:diagram}
\end{figure}
We also check the effect of the repulsive force for large $ R_{0} $ by controlling the repulsion distance $ R_{f} $.
As shown in Fig. \ref{fig:dep}(b), the power-law region with $ \alpha \simeq 1.56 $ reduces with a decrease in $ R_{f} $.
The same result is obtained for the Euclidean edges.
This indicates that the cohesion of particles for a small $ R_{f} $ inhibits the emergence of long-lived edges.
Thus, the repulsive forces between particles are necessary for the emergence of the power-law behavior of $ P(\tau) $.
As shown in Fig. \ref{fig:diagram}, the typical behaviors of $ P(\tau) $ are classified into four cases A--D.
The exponential decay and power-law of $P(\tau)$ are observed in case A (small $ R_{0} $)
and case B (large $R_{0}$), respectively.
The crossover between exponential and power law occurs in case C.
In case D, $P(\tau)$ becomes a distribution with a longer tail than the power law, as in Fig. \ref{fig:dep}(a).
The vertical dashed line at $ R_{0} \simeq 0.05 $ represents the point around which $ \mathrm{CV} $ increases rapidly with an increase in $R_{0}$.
\section{Discussion}
The lifetime $\tau$ can be regarded as a one-dimensional first return time.
For the Delaunay edges, $\tau$ corresponds to the lifetime of a Voronoi line, whose length we denote by $ l(t) $.
For the Euclidean edges, $\tau$ is the first return time of the distance $ r(t) $ between two particles to the threshold $ d $, where $ r(t) < d $.
Figure \ref{fig:seq} shows the typical time series of $l(t)$ and $r(t)$, which are obtained at the circle markers in Fig. \ref{fig:diagram}.
It is found that the time series are very different for the two cases.
The power-law behavior of $P(\tau)$ in case B can be explained by considering random fluctuations of $l(t)$ and $r(t)$ as follows.
It is known that the first return time of a fractional Brownian motion follows a power law with exponent $ \alpha = 2-H $, where $H$ is its Hurst exponent \cite{Ding1995}.
This holds for arbitrary one-dimensional time series characterized by $ H $.
Then, we calculated $ H $ for each time series of $ l(t) $ and $ r(t) $ at $ R_{0}=0.1 $ and $ \eta=0.2 $.
As shown in the right column of Fig. \ref{fig:seq}, $ H $ is distributed around a peak value, namely $ H\simeq 0.42 $ for the Delaunay edges and $ H\simeq 0.43 $ for the Euclidean edges.
Because the power-law exponents obtained from $ P(\tau) $ are $ \alpha \simeq 1.56 $ and $ 1.57 $ (see Fig. \ref{fig:felt}), these values satisfy the relation $ \alpha = 2 - H $.
On the other hand, in case A, the time series $ l(t) $ and $ r(t) $ have fewer fluctuations and become shorter than those in case B.
The exponential decay of $P(\tau)$ in case A suggests that they are subject to a homogeneous Poisson process.
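The Hurst exponents quoted above can be estimated with any standard method; the following sketch uses the classical rescaled-range (R/S) statistic, which is one common choice and not necessarily the estimator behind Fig. \ref{fig:seq}.
\begin{verbatim}
import numpy as np

# Rescaled-range (R/S) estimate of the Hurst exponent of a series x.
def hurst_rs(x):
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = 8
    while n <= len(x) // 2:
        vals = []
        m = len(x) // n
        for seg in x[:m * n].reshape(m, n):
            z = np.cumsum(seg - seg.mean())
            s = seg.std()
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        if vals:
            sizes.append(n)
            rs.append(np.mean(vals))
        n *= 2
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]  # slope ~ H
\end{verbatim}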
\begin{figure*}
\centering
\includegraphics[width=16cm]{figure/seq.pdf}
\caption{Typical time series of (a) length $ l(t) $ of Voronoi line and (b) distance $ r(t) $ between two particles. Dashed lines indicate $ l(t) = 0 $ and $ r(t)=0.05 $, which show the thresholds for the first return time. Left column: $ R_{0}=0.01 $ and $ \eta=0.2 $ in case A of Fig.\ref{fig:diagram}. Middle column: $ R_{0}=0.1 $ and $ \eta=0.2 $ in case B. Right column: probability distributions of Hurst exponent $ H $ for the time series at $ R_{0}=0.1 $ and $ \eta=0.2 $. Dashed lines indicate the peak values.}
\label{fig:seq}
\end{figure*}
Previous studies of the Vicsek model have mainly focused on macroscopic properties such as the order--disorder transition of particles' directions or giant density fluctuations \cite{Ginelli2016}.
In contrast, adjacency relationships of the particles have not been sufficiently examined thus far.
Thus, our present results reveal novel statistical properties of the Vicsek model.
The adjacency relationships of particles provide a more detailed characterization of collective motion than macroscopic quantities such as order parameters.
For example, we applied the Delaunay triangulation method to the formation analysis of team sports, i.e., football games \cite{Narizuka2017}.
Nagy {\it et al.} elucidated a hierarchical structure of flocks of birds using the network defined by the time delay between two birds' directions \cite{Nagy2010}.
We expect that our results can be observed in some experiments, such as those for bacterial motions in circular pools \cite{Wakita2015} and self-propelled robots \cite{Deblais2018}.
\section{Conclusion}
For the lifetime distributions $ P(\tau) $ of the Delaunay and Euclidean edges obtained by the Vicsek model, there exists a crossover for the shape of $ P(\tau) $ between the exponential for small $ R_{0} $ and the power law for large $ R_{0} $.
The power-law exponent $ \alpha $ of $P(\tau)$ satisfies the relation $ \alpha = 2 - H $, where $ H $ is the Hurst exponent obtained from the one-dimensional time series of the Delaunay and Euclidean edges.
\section*{Acknowledgments}
We thank Jun-ichi Wakita, Yuhei Yamada, and Ken Yamamoto for fruitful discussions.
The present work was partially supported by a Grant-in-Aid for Young Scientists No.18K18013 from the Japan Society for the Promotion of Science (JSPS).
|
1,108,101,563,870 | arxiv | \section{Introduction}
Supersymmetric Yang-Mills (SYM) theories are interesting from a variety of perspectives; as toy models for understanding theories such as QCD, as potential theories of BSM physics and via the AdS/CFT correspondence because of a possible connection to quantum gravity. Many features of these theories, for example, dynamical supersymmetry breaking, are inherently non-perturbative in nature and this serves as motivation to study such theories on the lattice.
Unfortunately, historically it has proven difficult to discretize supersymmetric theories using traditional methods. This stems from the fact that the supersymmetry algebra is an extension of the usual Poincar\'e algebra and hence is broken completely by na\"ive discretization on a space-time lattice. However, recently the development of a series of new theoretical tools have enabled us to construct certain supersymmetric theories on the lattice while preserving a subset of the continuum supersymmetries - see the reviews \cite{Kaplan:2003uh, Giedt:2006pd, Catterall:2009it, arXiv:1110.5983} and references therein. Other recent complementary approaches to the problem of exact lattice supersymmetry can be found in \cite{Sugino:2003yb, Sugino:2004qd, hep-lat/0507029, arXiv:0707.3533, Kanamori:2008bk, Hanada:2009hq, Hanada:2010kt, Hanada:2010gs, Hanada:2011qx}.
One way to understand the new constructions is to realize that they correspond to discretizations of topologically twisted forms of the target continuum theories. Currently, lattice constructions exist for a set of SYM theories, including the four-dimensional ${\cal N}=4$ SYM theory.
Lattice theories constructed this way are free of doublers, respect gauge-invariance, preserve a subset of the original supersymmetries and target the usual continuum theories in the na\"ive continuum limit. These constructions are possible only if the continuum SYM theories possess sufficient extended supersymmetry; the precise requirement is that the number of supercharges must be an integer multiple of $2^D$ where $D$ is the space-time dimension. This includes the ${\cal N}=(2, 2)$ SYM theory in two dimensions and ${\cal N} = 4$ SYM in four dimensions. In this paper we study both theories in two dimensions, with the ${\cal N}=4$ model yielding the ${\cal N}=(8,8)$ theory after dimensional reduction from four to two dimensions.
However, even when a supersymmetric lattice construction exists, it is still possible to encounter an additional difficulty that renders the use of numerical simulation problematic -- the fermionic sign problem. To understand the nature of this problem consider a generic lattice theory with a set of bosonic $\phi$ and fermionic $\psi$ degrees of freedom. The partition function of the theory is
\begin{eqnarray}
Z &=& \int [d\phi][d\psi]\, \exp\Big(-S_B[\phi] - \psi^T M[\phi] \psi\Big)~,\nonumber \\
&=& \int [d\phi]\, {\rm Pf}(M)~\exp \Big(-S_B[\phi]\Big)~,
\end{eqnarray}
where $M$ is the antisymmetric fermion matrix and ${\rm Pf}(M)$ the corresponding Pfaffian. For a $2n \times 2n$ antisymmetric matrix $M$, the Pfaffian satisfies ${\rm Pf}(M)^{2} = \mbox{Det}\, M$. In the supersymmetric lattice constructions we will consider in this paper, $M$ at non-zero lattice spacing is a complex operator and one might worry that the resulting Pfaffian could exhibit a fluctuating phase depending on the background boson fields $\phi$. Since Monte Carlo simulations must be performed with a positive definite measure, the only way to incorporate this phase is through a reweighting procedure, which folds this phase in with the observables of the theory. Expectation values of observables derived from such simulations can then suffer drastic statistical errors which overwhelm the signal -- the famous fermionic {\it sign problem}. Thus, if such a complex phase is present, the Monte Carlo technique is rendered effectively useless. Lattice theories such as QCD with finite chemical potential are known to suffer from a severe sign problem, which makes it very difficult to extract physical observables from simulations using conventional methods. The lattice sign problem exists not only in relativistic field theories but also in a variety of condensed matter systems \cite{Hirsch:1983rq}.
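To make the quantity at stake concrete, the Pfaffian and its phase can be computed directly for a small matrix. The sketch below is illustrative only: the matrix is a random complex antisymmetric stand-in, not the fermion operator of any particular lattice action, and the routine implements standard skew-symmetric Gaussian elimination.
\begin{verbatim}
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional antisymmetric matrix."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    if n % 2:
        return 0j
    pf = 1 + 0j
    for k in range(0, n - 1, 2):
        p = k + 1 + np.argmax(np.abs(A[k, k + 1:]))
        if p != k + 1:                    # pivot: swap rows/cols k+1, p
            A[:, [k + 1, p]] = A[:, [p, k + 1]]
            A[[k + 1, p], :] = A[[p, k + 1], :]
            pf = -pf                      # a transposition flips the sign
        if A[k, k + 1] == 0:
            return 0j
        pf *= A[k, k + 1]
        for i in range(k + 2, n):         # skew Schur complement update
            for j in range(i + 1, n):
                A[i, j] -= (A[k, i] * A[k + 1, j]
                            - A[k, j] * A[k + 1, i]) / A[k, k + 1]
                A[j, i] = -A[i, j]
    return pf

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
M = X - X.T                               # complex antisymmetric matrix
pf = pfaffian(M)
print(np.isclose(pf**2, np.linalg.det(M)), np.angle(pf))
\end{verbatim}
It is the phase $\arg {\rm Pf}(M)$, accumulated configuration by configuration, that a phase-quenched simulation must fold back into its observables.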
In the construction of supersymmetric lattice gauge theories, there has been an ongoing debate on the existence of a sign problem in the two-dimensional ${\cal N}=(2,2)$ supercharge lattice theory \cite{Giedt:2003ve, Catterall:2008dv, Hanada:2010qg}. The resolution of this sign problem is crucial as the extraction of continuum physics from the lattice model depends very much on whether the results from phase quenched simulations can be trusted. Moreover, if a sign problem were to be found in this model it makes it more likely that the four-dimensional ${\cal N} = 4$ theory also suffers from a sign problem which would render practical simulation of this theory impossible. In \cite{Giedt:2003ve}, it was shown that there is a potential sign problem in the two-dimensional ${\cal N} = (2, 2)$ SYM lattice theory. Furthermore, in \cite{Catterall:2008dv} numerical evidence was presented of a sign problem in a phase quenched dynamical simulation of the theory at non-zero lattice spacing. More recently Hanada et al. \cite{Hanada:2010qg} have argued that there is no sign problem for this theory in the continuum limit. However, the models studied by these various groups differed in detail; Catterall et al. studied an $SU(2)$ model obtained by truncating the supersymmetric $U(2)$ theory and utilized bosonic link fields valued in the group $SL(2,C)$, while Hanada et al. used a $U(2)$ model where the complexified bosonic variables take their values in the algebra of $U(2)$ together with the inclusion of supplementary mass terms to control scalar field fluctuations.
In this paper, we present results from simulations of the two-dimensional ${\cal N}=(2, 2)$ $U(N)$ SYM theory (which we will refer to from now on as the ${\cal Q} = 4$ theory, with ${\cal Q}$ the number of supercharges) and the maximally supersymmetric ${\cal N} = (8, 8)$ $U(N)$ SYM theory (we refer to this theory as the ${\cal Q} = 16$ theory). Our results provide strong
evidence that there is no sign problem in the supersymmetric continuum limit for these theories.
In the next four sections we summarize the details of the lattice constructions of both theories including a discussion of the possible parameterizations of the bosonic link fields. We then present our numerical results for ${\cal Q} = 4$ and ${\cal Q} = 16$ lattice SYM theories in two dimensions.
\section{Supersymmetric Yang--Mills theories on the lattice}
As discussed in the introduction it is possible to discretize a class of continuum SYM theories using ideas based on topological twisting\footnote{Note that the lattice actions constructed using orbifold and twisted methods are equivalent \cite{Unsal:2006qp, Catterall:2007kn, Damgaard:2007xi}.}. Though the basic idea of twisting goes back to Witten in his seminal paper on topological field theory \cite{Witten:1988ze}, it actually had been anticipated in earlier work on staggered fermions \cite{Elitzur:1982vh}. In our context, the idea of twisting is to decompose the fields of the Euclidean SYM theory in $D$ space-time dimensions in representations not in terms of the original (Euclidean) rotational symmetry $SO_{\rm rot}(D)$, but a twisted rotational symmetry, which is the diagonal subgroup of this symmetry and an $SO_{\rm R}(D)$ subgroup of the R-symmetry of the theory, that is,
\begin{equation}
SO(D)^\prime={\rm diag}(SO_{\rm rot}(D)\times SO_{\rm R}(D))~.
\end{equation}
As an example, let us consider the case where the total number of supersymmetries is $Q=2^D$. In this case we can treat the supercharges of the twisted theory as a $2^{D/2}\times 2^{D/2}$ matrix $q$. This matrix can be expanded on the Dirac--K\"ahler basis as
\begin{equation}
q = {\cal Q} I + {\cal Q}_a \gamma_a + {\cal Q}_{ab}\gamma_a\gamma_b + \ldots
\end{equation}
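As a concrete illustration, in $D=2$ one may choose the Euclidean gamma matrices to be Pauli matrices and project the coefficients out of $q$ by trace orthogonality; the particular basis below is one conventional choice, not one fixed by the construction.
\begin{verbatim}
import numpy as np

# D = 2: basis {I, g1, g2, g1 g2} built from Pauli matrices,
# satisfying Tr(B_i^dag B_j) = 2 delta_ij.
g1 = np.array([[0, 1], [1, 0]], dtype=complex)
g2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
basis = [np.eye(2, dtype=complex), g1, g2, g1 @ g2]

def components(q):
    """Coefficients (Q, Q_1, Q_2, Q_12) of q on this basis."""
    return [np.trace(B.conj().T @ q) / 2 for B in basis]
\end{verbatim}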
The $2^D$ antisymmetric tensor components that arise in this basis are the twisted supercharges that satisfy the corresponding supersymmetry algebra inherited from the original algebra
\begin{eqnarray}
{\cal Q}^2 &=& 0\\
\{{\cal Q},{\cal Q}_a\} &=& p_a\\
&\vdots&
\end{eqnarray}
The presence of the nilpotent scalar supercharge ${\cal Q}$ is most important; it is the algebra of this charge that is compatible with discretization. The second piece of the algebra expresses the fact that the momentum is the ${\cal Q}$-variation of something, which makes it plausible that the energy-momentum tensor and hence the entire action can be written in a ${\cal Q}$-exact form\footnote{In the case of four-dimensional ${\cal N} = 4$ SYM there is an additional ${\cal Q}$-closed term in the action.}. Notice that an action written in such a ${\cal Q}$-exact form is trivially invariant under the scalar supersymmetry ${\cal Q}$ provided the latter remains nilpotent under discretization.
The recasting of the supercharges in terms of twisted variables can be repeated for the fermions of the theory and yields a set of antisymmetric tensors $(\eta, \psi_a, \chi_{ab}, \ldots)$, which for the case of $Q=2^D$ matches the number of components of a real {K\"{a}hler--Dirac } field. This repackaging of the fermions of the theory into a {K\"{a}hler--Dirac } field is at the heart of how the discrete theory avoids fermion doubling as was shown by Becher, Joos and Rabin in the early days of lattice gauge theory \cite{Rabin:1981qj, Becher:1982ud}. It is important to recognize that the transformation to twisted variables corresponds to a simple change of variables in flat space -- one more suitable for discretization.
\subsection{Two-dimensional ${\cal Q}=4$ SYM on the lattice}
\label{sec:2d-formulation}
The two-dimensional ${\cal Q} = 4$ SYM theory is the simplest example of a gauge theory that permits topological twisting and thus satisfies our requirements for supersymmetric lattice constructions. Its R-symmetry possesses an $SO(2)$ subgroup corresponding to rotations of its two degenerate Majorana fermions into each other. After twisting the fields and supersymmetries of the target theory, the action takes the following form in the continuum
\begin{equation}
S = \frac{1}{g^2} {\cal Q} \int {\rm Tr\;} \left(\chi_{ab}{\cal F}_{ab} + \eta [ {\overline{\cal D}}_a,{\cal D}_b ] - \frac{1}{2}\eta d\right)~,
\label{2daction-twisted}
\end{equation}
where $g$ is the coupling parameter. We use an anti-hermitian basis for the generators of the gauge group with ${\rm Tr}(T^a T^b)=-\delta^{ab}$.
The degrees of freedom appearing in the above action are just the twisted fermions $(\eta, \psi_a, \chi_{ab})$ and a complexified gauge field ${\cal A}_a$. The latter is built from the usual gauge field $A_a$ and the two scalars $B_a$ present in the untwisted theory: ${\cal A}_a = A_a + iB_a$. The twisted theory is naturally written in terms of the complexified covariant derivatives
\begin{equation}
{\cal D}_a = \partial_a + {\cal A}_a,~~~{\overline{\cal D}}_a = \partial_a + {\overline{\cal A}}_a~,
\end{equation}
and complexified field strengths
\begin{equation}
{\cal F}_{ab} = [{\cal D}_a, {\cal D}_b],~~~{\overline{\cal F}}_{ab} = [{\overline{\cal D}}_a, {\overline{\cal D}}_b]~.
\end{equation}
Notice that the original scalar fields transform as vectors under the original R-symmetry and hence become vectors under the twisted rotation group while the gauge fields are singlets under the R-symmetry and so remain vectors under twisted rotations. This structure makes the appearance of a complex gauge field in the twisted theory possible. This action is invariant under the original $U(N)$ gauge symmetry from the untwisted theory.
The nilpotent transformations associated with the scalar supersymmetry ${\cal Q}$ are given explicitly by
\begin{eqnarray}
{\cal Q}\; {\cal A}_a &=& \psi_a \nonumber \\
{\cal Q}\; \psi_a &=& 0 \nonumber \\
{\cal Q}\; {\overline{\cal A}}_a &=& 0 \nonumber \\
{\cal Q}\; \chi_{ab} &=& -{\overline{\cal F}}_{ab} \nonumber \\
{\cal Q}\; \eta &=& d \nonumber \\
{\cal Q}\; d &=& 0
\end{eqnarray}
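As a quick consistency check (our addition; the nilpotency is asserted above but not spelled out), acting twice with ${\cal Q}$ on each field gives zero. For example,
\begin{equation}
{\cal Q}^2\, \chi_{ab} = -{\cal Q}\, {\overline{\cal F}}_{ab} = -\big[{\cal Q}\,{\overline{\cal D}}_a, {\overline{\cal D}}_b\big] - \big[{\overline{\cal D}}_a, {\cal Q}\,{\overline{\cal D}}_b\big] = 0~,
\end{equation}
since ${\cal Q}\,{\overline{\cal A}}_a = 0$, while ${\cal Q}^2 {\cal A}_a = {\cal Q}\psi_a = 0$ and ${\cal Q}^2\eta = {\cal Q} d = 0$ follow immediately from the transformation rules. Thus ${\cal Q}^2 = 0$ off shell on all fields, which is precisely the property that must survive discretization.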
Performing the ${\cal Q}$-variation on the action and integrating out the auxiliary field $d$ yields
\begin{equation}
S = \frac{1}{g^2} \int {\rm Tr\;} \left(-{\overline{\cal F}}_{ab}{\cal F}_{ab} + \frac{1}{2}[ {\overline{\cal D}}_a, {\cal D}_a]^2 - \chi_{ab}{\cal D}_{\left[a\right.}\psi_{\left.b\right]}-\eta {\overline{\cal D}}_a\psi_a\right)~.
\label{2d-twist_action}
\end{equation}
The prescription for discretization is somewhat natural. The complexified gauge fields are represented as complexified Wilson gauge fields
\begin{equation}
{\cal A}_a(x) \rightarrow {\cal U}_a( {\bf n} )~,
\end{equation}
living on links of a lattice, which for the moment can be thought of as hypercubic, with integer-valued basis vectors
\begin{equation}
\widehat{\boldsymbol {\mu}}_1 = (1, 0),~~~\widehat{\boldsymbol {\mu}}_2 = (0, 1)~.
\end{equation}
They transform in the usual way under $U(N)$ lattice gauge transformations
\begin{equation}
{\cal U}_a( {\bf n} )\to G( {\bf n} ){\cal U}_a( {\bf n} )G^\dagger( {\bf n} +\widehat{\boldsymbol {\mu}}_a)~.
\end{equation}
Supersymmetric invariance then implies that $\psi_a( {\bf n} )$ live on the same links and transform identically. The scalar fermion $\eta( {\bf n} )$ is clearly most naturally associated with a site and transforms accordingly
\begin{equation}
\eta( {\bf n} )\to G( {\bf n} )\eta( {\bf n} )G^\dagger( {\bf n} )~.
\end{equation}
The field $\chi_{ab}( {\bf n} )$ is slightly more difficult. Naturally as a 2-form it should be associated with a plaquette. In practice we introduce diagonal links running through the center of the plaquette and choose $\chi_{ab}( {\bf n} )$ to lie {\it with opposite orientation} along those diagonal links. This choice of orientation will be necessary to ensure gauge invariance. Figure \ref{fig:2dlattice} shows the resultant lattice theory.
\begin{figure}
\begin{center}\includegraphics[width=0.5\textwidth]{square.eps}\end{center}
\caption{\label{fig:2dlattice}The 2d lattice for the four supercharge theory with field orientation assignments.}
\end{figure}
To complete the discretization we need to describe how continuum derivatives are to be replaced by difference operators. A natural technology for accomplishing this in the case of adjoint fields was developed many years ago and yields expressions for the derivative operator applied to arbitrary lattice p-forms \cite{Aratyn:1984bd}. In the case discussed here we need just two derivatives given by the expressions
\begin{eqnarray}
{\cal D}^{(+)}_a f_b( {\bf n} ) &=& {\cal U}_a( {\bf n} )f_b( {\bf n} + \widehat{\boldsymbol {\mu}}_a) - f_b( {\bf n} ){\cal U}_a( {\bf n} + \widehat{\boldsymbol {\mu}}_b)~,\\
{\overline{\cal D}}^{(-)}_a f_a( {\bf n} ) &=& f_a( {\bf n} ){\overline{\cal U}}_a( {\bf n} )-{\overline{\cal U}}_a( {\bf n} - \widehat{\boldsymbol {\mu}}_a)f_a( {\bf n} - \widehat{\boldsymbol {\mu}}_a)~.
\end{eqnarray}
The lattice field strength is then given by the gauged forward difference acting on the link field: ${\cal F}_{ab}( {\bf n} ) = {\cal D}^{(+)}_a {\cal U}_b( {\bf n} )$, and is automatically antisymmetric in its indices. Furthermore, it transforms like a lattice 2-form and yields a gauge invariant loop on the lattice when contracted with $\chi_{ab}( {\bf n} )$. Similarly the covariant backward difference appearing in ${\overline{\cal D}}^{(-)}_a {\cal U}_a( {\bf n} )$ transforms as a 0-form or site field and hence can be contracted with the site field $\eta( {\bf n} )$ to yield a gauge invariant expression.
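The following is a minimal numerical sketch of these difference operators (our own illustration in Python/\texttt{numpy}, not code from an actual simulation package; the array layout and variable names are ours). It represents fields as arrays of $N\times N$ complex matrices over a small periodic $L\times L$ lattice and verifies the antisymmetry of the resulting field strength:
\begin{verbatim}
import numpy as np

L, N = 4, 2
rng = np.random.default_rng(0)
mu = [np.array([1, 0]), np.array([0, 1])]   # integer basis vectors

def shift(f, v):
    # translate a lattice field: shift(f, v)(n) = f(n + v), periodic b.c.
    return np.roll(f, shift=(-v[0], -v[1]), axis=(0, 1))

# complexified links U_a(n) fluctuating around the unit matrix
U = [np.eye(N) + 0.1*(rng.standard_normal((L, L, N, N))
     + 1j*rng.standard_normal((L, L, N, N))) for _ in range(2)]
Ubar = [u.conj().swapaxes(-1, -2) for u in U]

def D_plus(a, f, b):
    # gauged forward difference on a 1-form f_b:
    #   U_a(n) f_b(n+mu_a) - f_b(n) U_a(n+mu_b)
    return U[a] @ shift(f, mu[a]) - f @ shift(U[a], mu[b])

def Dbar_minus(f):
    # gauged backward divergence of a 1-form, summed over a:
    #   f_a(n) Ubar_a(n) - Ubar_a(n-mu_a) f_a(n-mu_a)
    return sum(f[a] @ Ubar[a] - shift(Ubar[a] @ f[a], -mu[a])
               for a in range(2))

# lattice field strength F_ab = D^(+)_a U_b is automatically antisymmetric
F01, F10 = D_plus(0, U[1], 1), D_plus(1, U[0], 0)
assert np.allclose(F01, -F10)
print("site field |Dbar.U|:", np.abs(Dbar_minus(U)).max())
\end{verbatim}
Contracting the plaquette field \texttt{F01} with $\chi_{ab}$ and the divergence with the site field $\eta$ then reproduces, site by site, the gauge invariant traces appearing in the lattice action.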
This use of forward and backward difference operators guarantees that the solutions of the lattice theory map one-to-one with the solutions of the continuum theory and hence fermion doubling problems are evaded \cite{Rabin:1981qj}. Indeed, by introducing a lattice with half the lattice spacing one can map this {K\"{a}hler--Dirac } fermion action into the action for staggered fermions \cite{Banks:1982iq}. Notice that, unlike the case of QCD, there is no rooting problem in this supersymmetric construction since the additional fermion degeneracy is already required in the continuum theory.
As for the continuum theory the lattice action is again ${\cal Q}$-exact:
\begin{equation}
S = \sum_{ {\bf n} } {\rm Tr\;} {\cal Q} \Big(\chi_{ab}( {\bf n} ){\cal D}_a^{(+)}{\cal U}_b( {\bf n} ) + \eta( {\bf n} ) {\overline{\cal D}}_a^{(-)}{\cal U}_a( {\bf n} ) - \frac{1}{2}\eta( {\bf n} ) d( {\bf n} ) \Big)~.
\end{equation}
Acting with the ${\cal Q}$ transformation on the lattice fields and integrating out the auxiliary field $d$, we obtain the gauge and ${\cal Q}$-invariant lattice action:
\begin{equation}
\label{eq:2d-latt-action}
S = \sum_{ {\bf n} } {\rm Tr\;} \Big({\cal F}_{ab}^{\dagger}( {\bf n} ) {\cal F}_{ab}( {\bf n} ) + \frac{1}{2}\Big({\overline{\cal D}}_a^{(-)}{\cal U}_a( {\bf n} )\Big)^2 - \chi_{ab}( {\bf n} ) {\cal D}^{(+)}_{[a}\psi_{b]}( {\bf n} ) - \eta( {\bf n} ) {\overline{\cal D}}^{(-)}_a\psi_a( {\bf n} ) \Big)~.
\end{equation}
\subsection{Four-dimensional ${\cal Q} = 16$ SYM on the lattice}
\label{sec:4d-lattice-theory}
In four dimensions the constraint that the target theory possess sixteen supercharges singles out a unique theory for which this construction can be undertaken -- the ${\cal N} = 4$ SYM theory.
The continuum twist of ${\cal N} = 4$ that is the starting point of the twisted lattice construction was first written down by Marcus in 1995 \cite{Marcus:1995mq} although it now plays an important role in the Geometric-Langlands program and is hence sometimes called the GL-twist \cite{Kapustin:2006pk}. This four-dimensional twisted theory is most compactly expressed as the dimensional reduction of a five-dimensional theory in which the ten (one gauge field and six scalars) bosonic fields are realized as the components of a complexified five-dimensional gauge field while the 16 twisted fermions naturally span one of the two {K\"{a}hler--Dirac } fields needed in five dimensions. Remarkably, the action of this theory contains a ${\cal Q}$-exact term of precisely the same form as the two-dimensional theory given in Eq. (\ref{2daction-twisted}) provided one extends the indices labeling the fields to run now from one to five. In addition, the Marcus twist
of ${\cal N}=4$ YM requires a new ${\cal Q}$-closed term which was not possible in the two-dimensional theory
\begin{equation}
S_{\rm closed} = -\frac{1}{8} \int {\rm Tr\;} \epsilon_{mnpqr} \chi_{qr} {\overline{\cal D}}_p \chi_{mn}~.
\label{closed}
\end{equation}
The supersymmetric invariance of this term then relies on the Bianchi identity
\begin{equation}
\epsilon_{mnpqr}{\overline{\cal D}}_p{\overline{\cal F}}_{qr} = 0~.
\end{equation}
The four-dimensional lattice that emerges from examining the moduli space of the resulting discrete theory is called the $A_4^*$-lattice and is constructed from the set of five basis vectors $\widehat{\boldsymbol {e}}_a$ pointing from the center of a four-dimensional equilateral simplex out to its vertices, together with their inverses $-\widehat{\boldsymbol {e}}_a$. It is the four-dimensional analog of the two-dimensional triangular lattice. Complexified Wilson gauge link variables ${\cal U}_a$ are placed on these links together with their ${\cal Q}$-superpartners $\psi_a$. Another 10 fermions are associated with the diagonal links $\widehat{\boldsymbol {e}}_a + \widehat{\boldsymbol {e}}_b$ with $a>b$. Finally, the exact scalar supersymmetry implies the existence of a single fermion for every lattice site. The lattice action corresponds to a discretization of the Marcus twist on this $A_4^*$-lattice and can be represented as a set of traced closed bosonic and fermionic loops. It is invariant under the exact scalar supersymmetry ${\cal Q}$, lattice gauge transformations and a global permutation symmetry $S_5$, and can be proven free of fermion doubling problems as discussed above. The ${\cal Q}$-exact part of the lattice action is again given by Eq. (\ref{eq:2d-latt-action}) where the indices $a, b$ now correspond to the indices labeling the five basis vectors of $A_4^*$.
While the supersymmetric invariance of this ${\cal Q}$-exact term is manifest in the lattice theory, it is not immediately clear how to discretize the continuum ${\cal Q}$-closed term. Remarkably, it is possible to discretize Eq. (\ref{closed}) in such a way that it is indeed exactly invariant under the twisted supersymmetry
\begin{equation}
S_{\rm closed} = -\frac{1}{8}\sum_{ {\bf n} } {\rm Tr\;} \epsilon_{mnpqr} \chi_{qr}( {\bf n} + \widehat{\boldsymbol {\mu}}_m + \widehat{\boldsymbol {\mu}}_n + \widehat{\boldsymbol {\mu}}_p)
{\overline{\cal D}}^{(-)}_p\chi_{mn}( {\bf n} + \widehat{\boldsymbol {\mu}}_p)
\end{equation}
and can be seen to be supersymmetric since the lattice field strength satisfies an exact Bianchi identity \cite{Aratyn:1984bd}:
\begin{equation}
\epsilon_{mnpqr}{\overline{\cal D}}^{(+)}_p{\overline{\cal F}}_{qr} = 0~.
\end{equation}
The renormalization of this theory has recently been studied in perturbation theory with some remarkable conclusions \cite{Catterall:2011pd}: the classical moduli space is not lifted to all orders in the coupling, the one-loop lattice beta function vanishes, and no fine tuning of the bare lattice parameters with the cut-off is required at one loop for the theory to recover full supersymmetry as the lattice spacing is sent to zero.
\section{Towards the continuum limit}
\subsection{Parametrizations of the gauge links}
\label{sec:gauge-link-params}
Two distinct parametrizations of the gauge fields on the lattice have been proposed for these theories. The first follows the standard Wilson prescription, where the complexified gauge fields in the continuum are mapped to link fields ${\cal U}_a( {\bf n} )$ living on the link between $ {\bf n} $ and $ {\bf n} + \widehat{\boldsymbol {\mu}}_a$ through the mapping
\begin{equation}
{\cal U}_a( {\bf n} ) = e^{{\cal A}_a( {\bf n} )}~,
\end{equation}
where ${\cal A}_a ( {\bf n} ) = \sum_{i=1}^{N_G} {\cal A}_a^i T^i$ and $T^i$, $i=1, \ldots, N_G$, are the anti-hermitian generators of $U(N)$.
The resultant gauge links belong to $GL(N,C)$.
We call this realization of the bosonic links the {\it exponential or group based parametrization}\footnote{Notice that our lattice gauge fields are dimensionless and hence contain an implicit factor of the lattice spacing $a$.}.
The other parametrization of the bosonic link fields, used particularly in the orbifold literature, simply takes the complexified gauge links to be valued in the algebra of the $U(N)$ group
\begin{equation}
{\cal U}_a( {\bf n} )={\cal A}_a( {\bf n} )~.
\end{equation}
In this case, to obtain the correct continuum limit one must subsequently expand the fields around a particular point in the moduli space of the theory, corresponding to giving an expectation value to the component of the link field proportional to the unit matrix. This field can be identified as the trace mode of the scalar field in the untwisted theory, and the expansion takes the form
\begin{equation}
{\cal U}_a( {\bf n} ) = {\mathbf I}_N + {\cal A}_a( {\bf n} )~.
\label{alg}
\end{equation}
Usually the use of such an algebra based or {\it non compact} parametrization would signal a breaking of lattice gauge invariance. It is only possible here because the bosonic fields take values in a complexified $U(N)$ theory -- so that the unit matrix appearing in Eq. (\ref{alg}) can be interpreted as the expectation value of a {\it dynamical field} -- the trace mode of the scalars. We will refer to this parametrization as the {\it linear or algebra based parametrization}\footnote{In fact, a non-compact parametrization of the gauge fields has also been used recently to restore BRST symmetry on the lattice, i.e., to evade the so-called Neuberger $0/0$ problem \cite{Print-86-0394} (see Refs.~\cite{arXiv:0710.2410} and \cite{arXiv:0912.0450} for the recent progress, and \cite{Mehta:latt11} for the relation between the Neuberger $0/0$ problem and the sign problem for the lattice SYM theories).}.
Both parametrizations of the gauge links are equivalent at leading order in the lattice spacing, yield the same lattice action, and can be considered as providing equally valid representations of the lattice theory at the classical level. The exponential parametrization was used in studies of both ${\cal Q} = 4$ and ${\cal Q} = 16$ theories in \cite{Catterall:2008dv}, while in \cite{Hanada:2010qg} the linear parametrization was employed to perform simulations of the ${\cal Q} = 4$ theory. In this work we have concentrated on the linear parametrization, principally because it is naturally associated with a manifestly supersymmetric measure in the path integral -- the flat measure. Explicit comparison with results from the exponential parametrization can be found in \cite{Mehta:latt11}.
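As a simple illustration of the equivalence at leading order (our own sketch; the variable names and the value of the lattice spacing are purely illustrative), one can generate a random complexified connection of size $O(a)$ and compare the two maps directly:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, a = 2, 0.05                        # colors, lattice spacing
X = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
Y = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
A = a*(X - X.conj().T)/2              # anti-hermitian gauge part, O(a)
B = a*(Y - Y.conj().T)/2              # scalars in the anti-hermitian basis
calA = A + 1j*B                       # generic element of gl(N, C)

U_group  = expm(calA)                 # exponential (group based) link
U_linear = np.eye(N) + calA           # linear (algebra based) link

# the two parametrizations differ only at O(a^2)
print(np.abs(U_group - U_linear).max(), "vs", a**2/2)
\end{verbatim}
Both links lie in $GL(N,\mathbb{C})$, and the $O(a^2)$ difference between the two parametrizations is absorbed into the cut-off effects of the lattice action.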
\subsection{Potential terms}
\label{sec:pot-terms}
As we have described in the previous section, the linear parametrization only yields the correct na\"ive continuum limit if the trace mode of the scalars develops a vacuum expectation value so that appropriate kinetic terms are generated in the tree level action. In addition, we require that the fluctuations of all dimensionless lattice fields vanish as the lattice spacing is sent to zero; a non-trivial issue in theories possessing flat directions associated with extended supersymmetry. Since no classical scalar potential is present in the lattice theory\footnote{Lattice theories based on supersymmetric mass deformations have also been proposed in two dimensions \cite{Hanada:2010qg, Hanada:2011qx}.} it is crucial to add {\it by hand} a suitable gauge invariant potential to ensure these features\footnote{It was precisely this requirement that led to a truncation of the $U(N)$ symmetry to $SU(N)$ in the original simulations of these theories. One can think of this truncation as corresponding to the use of a delta function potential for the $U(1)$ part of the field \cite{Catterall:2008dv}.}.
Specifically we add a potential term of the following form \cite{Hanada:2010qg}
\begin{equation}
S_M = \mu^2 \sum_ {\bf n} \left(\frac{1}{N}{\rm Tr}({\cal U}_a^\dagger( {\bf n} ) {\cal U}_a( {\bf n} ))-1\right)^2~,
\end{equation}
to the lattice action. Here $\mu$ is a tunable mass parameter, which can be used to control the expectation values and fluctuations of the lattice fields. Notice that such a potential obviously breaks supersymmetry -- however because of the exact supersymmetry at $\mu = 0$ all supersymmetry breaking counterterms induced via quantum effects will possess couplings that vanish as $\mu \to 0$ and so can be removed by sending $\mu\to 0$ at the end of the calculation.
To understand the effect of this term let us consider the full set of vacuum equations for the lattice theory. These are given by setting the bosonic action to zero
\begin{eqnarray}
{\cal F}_{ab}( {\bf n} ) &=&0~, \\
{\overline{\cal D}}_a^{(-)}{\cal U}_a( {\bf n} ) &=&0~, \\
\frac{1}{N}{\rm Tr}\Big({\cal U}^\dagger_a( {\bf n} ){\cal U}_a( {\bf n} )\Big) - 1 &=& 0~.
\end{eqnarray}
The first two equations imply that the moduli space consists of constant complex matrices taking values in the $N$-dimensional Cartan subalgebra of $U(N)$.
Assuming that the matrix valued complexified link fields ${\cal U}_a( {\bf n} )$ are nonsingular\footnote{Having zero eigenvalues for the matrices ${\cal U}_a( {\bf n} )$ would not cause a problem for us, as we are interested in expanding these fields around the point ${\mathbf I}_N$ instead of the origin of the moduli space.}, we can decompose them in the following way
\begin{equation}
{\cal U}_a( {\bf n} ) = P_a( {\bf n} )U_a( {\bf n} )~,
\end{equation}
where $P_a( {\bf n} )$ is a positive semidefinite hermitian matrix and $U_a( {\bf n} )$ a unitary matrix. The form of the mass term does not depend on the unitary piece and is clearly minimized by setting $P_a( {\bf n} ) = {\mathbf I}_N$. Expanding about this configuration gives the following expression for the complex link matrices
\begin{equation}
{\cal U}_a( {\bf n} ) = P_a( {\bf n} )U_a( {\bf n} ) = \Big({\mathbf I}_N + p_a( {\bf n} )\Big)U_a( {\bf n} )~,
\end{equation}
where $p_a( {\bf n} )$ is a hermitian matrix.
Minimizing the mass term leads to
\begin{eqnarray}
0 &=& \frac{1}{N}{\rm Tr}\Big({\cal U}^{\dagger}_a( {\bf n} ){\cal U}_a( {\bf n} )\Big) - 1~,\nonumber \\
&=& \frac{1}{N}{\rm Tr}\Big[U^\dagger_a( {\bf n} )\Big({\mathbf I}_N + p_a( {\bf n} )\Big)\Big]\Big[\Big({\mathbf I}_N + p_a( {\bf n} )\Big)U_a( {\bf n} )\Big] - 1~,\nonumber \\
&=& \frac{1}{N}{\rm Tr}\Big[{\mathbf I}_N + 2p_a( {\bf n} )+ p^2_a( {\bf n} )\Big] - 1~,\nonumber \\
&=& \frac{1}{N}\Big[\frac{2}{\sqrt{N}} p^0_a( {\bf n} )+ \sum_{A=1}^N (p^A_a( {\bf n} ))^2\Big]~,
\end{eqnarray}
where we have adopted a basis in which $T^0$ is proportional to the unit matrix and all
other (Cartan) generators are traceless.
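This decomposition is easy to check numerically; the following sketch (ours, purely illustrative) uses the left polar decomposition from \texttt{scipy} and verifies the expansion of the mass term in terms of $p = P - {\mathbf I}_N$:
\begin{verbatim}
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(2)
N = 3
M = np.eye(N) + 0.1*(rng.standard_normal((N, N))
                     + 1j*rng.standard_normal((N, N)))

u, P = polar(M, side='left')   # M = P u, P hermitian >= 0, u unitary
p = P - np.eye(N)              # scalar fluctuation matrix

lhs = np.trace(M.conj().T @ M).real/N - 1.0
rhs = np.trace(2*p + p @ p).real/N
assert np.isclose(lhs, rhs)    # Tr(U^dag U)/N - 1 = Tr(2p + p^2)/N
\end{verbatim}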
Analyzing the gauge transformation properties of the complexified link fields,
\begin{equation}
{\cal U}_a( {\bf n} ) \rightarrow G( {\bf n} ){\cal U}_a( {\bf n} )G^\dagger( {\bf n} + \widehat{\boldsymbol {\mu}}_a)~,
\end{equation}
we see that the unitary piece $U_a( {\bf n} )$ transforms like a link field
\begin{equation}
U_a( {\bf n} ) \rightarrow G( {\bf n} )U_a( {\bf n} )G^\dagger( {\bf n} + \widehat{\boldsymbol {\mu}}_a)~,
\end{equation}
while the hermitian matrix $p_a( {\bf n} )$ transforms like a scalar field
\begin{equation}
p_a( {\bf n} ) \rightarrow G( {\bf n} ) p_a( {\bf n} )G^\dagger( {\bf n} )~.
\end{equation}
Thus in this language we can identify the $p_a( {\bf n} )$ with the scalar field fluctuations
$B_a( {\bf n} )$. The mass term then becomes
\begin{equation}
S_M = \mu^2 \sum_ {\bf n} \frac{1}{N^2} \Big[\frac{2}{\sqrt{N}} B^0_a( {\bf n} )+ \sum_{A=1}^N (B^A_a( {\bf n} ))^2\Big]^2~.
\end{equation}
From this expression it is straightforward to see that the fluctuations of the scalar trace mode are governed by a quadratic potential while the traceless scalar field fluctuations feel only a quartic potential. Thus, if we keep the dimensionless lattice parameter $\mu$ fixed as $a\to 0$, the corresponding physical mass $\mu/a$ of the trace mode diverges in the continuum limit, and hence fluctuations of the trace mode around its vacuum expectation value will be completely suppressed in that limit. In the same limit the presence of the quartic potential for the traceless Cartan generators is sufficient to regulate possible infrared problems associated with the flat directions of the $SU(N)$ sector. Finally, once the continuum limit is attained, we can restore supersymmetry by taking the final limit $\mu \to 0$.
Notice that the preferential selection of the scalar trace mode by this potential term is immediately apparent if we adopt the exponential parametrization of the complexified gauge links, since in that case we can identify ${\mathbf I}_N + p_a$ with $e^{iB_a}$.
\section{Simulation Results}
\label{sec:sim-results}
As noted previously, we have rescaled all lattice fields by powers of the lattice spacing to make them dimensionless. This leads to an overall dimensionless coupling parameter of the form $N/(2\lambda a^2)$, where $a = \beta/T$ is the lattice spacing, $\beta$ is the physical extent of the lattice in the Euclidean time direction and $T$ is the number of lattice sites in the time-direction. Thus, the lattice coupling is
\begin{equation}
\kappa = \frac{NL^2}{2\lambda\beta^2}~,
\end{equation}
for the symmetric two-dimensional lattice where the spatial length $L = T$\footnote{Notice that this coupling multiplies {\it all} terms in the bosonic action including those associated with the scalar potential.}. Note that $\lambda\beta^2$ is the dimensionless physical 't Hooft coupling in units of the area. In our simulations\footnote{See \cite{Catterall:2011ce} for the details of the code we used to simulate these theories.}, the continuum limit can be approached by fixing $\lambda\beta^{2}$ and $N$ and increasing the number of lattice points $L\rightarrow\infty$.
In practice we fix the value of $\beta = 1$ and vary $\lambda$.
We have taken three different values for this coupling, $\lambda= 0.5, 1.0, 2.0$, and lattice sizes ranging from $L = 2$ to $16$. Systems with $U(N)$ gauge groups with $N = 2, 3$ and $4$ have been examined.
The simulations are performed using anti-periodic (thermal) boundary conditions for the fermions\footnote{This forbids exact zero modes that are otherwise present in the fermionic sector.}. An RHMC algorithm was used for the simulations as described in \cite{Catterall:2011ce}. The use of a GPU accelerated solver \cite{Galvez:2011cd} allowed us to reach larger lattices than have thus far been studied.
\subsection{${\cal Q} = 4$ Supersymmetries}
In figure~\ref{fig:U2_Q4_pfaffian} we show results for the absolute value of the (sine of the) Pfaffian phase $|\sin{\alpha}|$ as a function of lattice size $L = 1/a$ for the ${\cal Q} = 4$ model with gauge group $U(2)$. The data correspond to $\lambda=1$, but similar results are obtained for $\lambda=0.5, 2.0$ and larger numbers of colors. Three values of $\mu$ are shown, corresponding to $\mu=0.1$, $\mu=1.0$ and $\mu=10.0$.
While modest phase fluctuations are seen for small
lattices for the smallest value of $\mu$,
we see that they disappear as the continuum limit is taken. As a practical
matter, these results make it clear that no re-weighting of observables is needed over much of the parameter space. This point is reinforced when we plot a histogram of the phase angle in figure \ref{fig:Histogram_Q4}. Clearly the angle fluctuations contract towards the origin as the
continuum limit is approached.
\begin{figure}
\hspace{-1.2cm}
\includegraphics[scale=0.82]{Q4_pfaffian_U2.eps}
\caption{$<|\sin{\alpha}|>$ for ${\cal Q}=4$, U$(2)$ with $\mu=0.1, 1, 10$ }
\label{fig:U2_Q4_pfaffian}
\end{figure}
\begin{figure}
\hspace{-0.5cm}\includegraphics[scale=0.9]{histogram_Q4.eps}
\caption{Histogram for $\alpha$, with ${\cal Q}=4$, U$(2)$, $\mu=0.1$ and volumes of 6x6, 8x8 and 10x10. \label{fig:Histogram_Q4}}
\end{figure}
\begin{figure}
\hspace{-1.4cm}
\includegraphics[scale=0.82]{Q4_Action.eps}
\caption{$<\kappa S_B>$ for ${\cal Q}=4$, U$(2)$ and $\mu=0.1, 1, 10$ }
\label{fig:Q4_ActionvsL}
\end{figure}
To check for the restoration of supersymmetry in the continuum limit and
as the scalar potential is sent to zero, we
show in figure~\ref{fig:Q4_ActionvsL} a plot of the bosonic action density versus lattice size $L$. While the curves plateau for large $L$, indicating a well defined continuum limit, it is clear that in general supersymmetry is broken there. Indeed, the exact value of
the bosonic action which is shown by the dotted line in the plot
can be computed using a simple ${\cal Q}$ Ward identity and yields \cite{Catterall:2008dv}
\begin{equation}
<\frac{1}{L^2}\kappa S_B>=\frac{3}{2}N_G~.
\end{equation}
It should be clear from the plot that the measured action indeed approaches this supersymmetric value once the subsequent limit $\mu\to 0$ is taken\footnote{Strictly, we only expect this as $\lambda\beta^2\to\infty$, where thermal effects are suppressed; these appear to be already small for $\lambda=1$ in this theory.}. Thus the regulating procedure we have described does indeed provide a well defined way of studying the supersymmetric lattice theory.
Finally, to reassure ourselves that $L\to\infty$ indeed corresponds to a continuum limit, figure \ref{fig:scalareigs} shows a plot of the expectation value of the maximal eigenvalue of the operator $({\cal U}_a^\dagger {\cal U}_a-1)$, averaged over the lattice, as a function of $L$ for $\lambda = 1$. To leading order, this expression yields the largest scalar field eigenvalue in units of the lattice spacing. Reassuringly, we see that the eigenvalue indeed approaches zero as $L\to\infty$, corresponding to a vanishing lattice spacing.
\begin{figure}
\hspace{-1.6cm}\includegraphics[scale=0.83]{Q4_Eigenval.eps}
\caption{Ensemble average for the eigenvalues of the $({\cal U}_a^\dagger {\cal U}_a-1)$ operator for ${\cal Q}=4$, U$(2)$ and $\mu=0.1, 1, 10$ }
\label{fig:scalareigs}
\end{figure}
\subsection{${\cal Q} = 16$ Supersymmetries}
The results for the absolute value of the (sine of the) Pfaffian phase for the ${\cal Q} = 16$ supercharge model with U(2) gauge group in two dimensions are shown in figure~\ref{fig:Q16_pfaffian.eps}.
\begin{figure}
\hspace{-1.3cm}\includegraphics[scale=0.9]{Q16_pfaffian_U2.eps}
\caption{$<|\sin{\alpha}|>$ for ${\cal Q}=16$, U$(2)$ and $\mu=0.1, 1, 10$ \label{fig:Q16_pfaffian.eps}}
\end{figure}
As for the ${\cal Q}=4$ case, we see that the average Pfaffian phase is small and decreases with $L$. Indeed, the magnitude of these angular fluctuations is $O(10^{-4})$ for all $L$ and $\mu$ -- {\it much} smaller than that observed for ${\cal Q}=4$. Thus, even on the coarsest lattice and at the smallest $\mu$, there is clearly no practical sign problem, and certainly no sign problem in the continuum limit. Again, this picture is reinforced by looking at a histogram of the phase angle $\alpha$, as seen in figure \ref{fig:Histogram_Q16}.
\begin{figure}
\hspace{-0.35cm}\includegraphics[scale=0.9]{histogram_Q16.eps}
\caption{Histogram for $\alpha$, with ${\cal Q}=16$, U$(2)$, $\mu=0.1$ for volumes of 6x6, 8x8, and 10x10. \label{fig:Histogram_Q16}}
\end{figure}
The corresponding plot of the expectation value of the bosonic action vs lattice size $L$ is shown in figure~\ref{fig:Q16_ActionvsL}.
In the case of the ${\cal Q}=16$ model the exact expression for the bosonic action is given by
\begin{equation}
<\frac{1}{L^2}\kappa S_B>=\frac{9}{2}N_G~.
\end{equation}
The data shown in this plot allow us to conclude that a well defined continuum limit exists for non-zero $\mu$ and, furthermore, that ${\cal Q}$-supersymmetry can be restored by subsequently sending the parameter $\mu\to 0$.
\begin{figure}
\hspace{-1.8cm}\includegraphics[scale=0.83]{Q16_Action.eps}
\caption{$<S_B>$ for ${\cal Q}=16$, U$(2)$ and $\mu=0.1, 1, 10$ \label{fig:Q16_ActionvsL}}
\end{figure}
As a final cross check that the limit $L\to\infty$ indeed corresponds to a true continuum limit, we have again examined the behavior of the maximal eigenvalue of $({\cal U}^\dagger_a{\cal U}_a-1)$ as $L\to\infty$. The result is shown in figure~\ref{fig:Q16_EigenvsL} and is consistent with a vanishing lattice spacing in this limit.
\begin{figure}
\hspace{-1.6cm}\includegraphics[scale=0.83]{Q16_Eigenval.eps}
\caption{Ensemble averaged eigenvalues for the $({\cal U}_a^\dagger {\cal U}_a-1)$ operator for ${\cal Q}=16$ with U$(2)$ and $\mu=0.1, 1, 10$ \label{fig:Q16_EigenvsL}}
\end{figure}
These results generalize to larger numbers of colors, as can be seen in figure~\ref{fig:U4_Q16_APBC_Pfaf}, where we plot the expectation value of
the absolute value of the sine of the Pfaffian phase for the case
of the U(4) group. Notice that the Pfaffian can be proven real in the limit that the ${\cal Q}=16$ theory
is reduced to zero dimensions
for two and three colors
so that it is necessary to examine the $U(4)$ case to be sure of seeing truly
generic behavior.
Nevertheless, we see that the U(4) results look qualitatively the same as those for U(2). In fact the fluctuations in the phase angle that we observe are even
{\it smaller} than those seen for the U(2) theory. This again indicates that this theory exhibits no sign problem even on small lattices, and certainly none in the continuum limit.
\begin{figure}
\hspace{-1cm}\includegraphics[scale=0.93]{Q16_pfaffian_U4.eps}
\caption{$<|\sin{\alpha}|>$ for ${\cal Q}=16$, U$(4)$ and $\mu=0.1, 1, 100$ \label{fig:U4_Q16_APBC_Pfaf}}
\end{figure}
The plot of the bosonic action for U(4) is shown in figure~\ref{fig:U4_Q16_APBC_Action}. While the largest lattice we have been able to simulate thus far is rather too small to obtain a good continuum limit, the measured bosonic action is nevertheless within a percent or so of the exact value expected on the basis of ${\cal Q}$-supersymmetry. The scalar field fluctuations also decrease toward zero as the number of lattice points increases, as shown in figure \ref{fig:U4_Q16_APBC_Eigen}.
\begin{figure}
\hspace{-1.6cm}\includegraphics[scale=0.83]{Q16_Action_U4.eps}
\caption{$<S_B>/L^2$ for ${\cal Q}=16$, U$(4)$ and $\mu=0.1, 1, 100$ \label{fig:U4_Q16_APBC_Action}}
\end{figure}
\begin{figure}
\hspace{-1.6cm}\includegraphics[scale=0.83]{Q16_Eigenval_U4.eps}
\caption{Ensemble averaged eigenvalues for the $({\cal U}_a^\dagger {\cal U}_a-1)$ operator for ${\cal Q}=16$, U$(4)$ and $\mu=0.1, 1, 100$ \label{fig:U4_Q16_APBC_Eigen}}
\end{figure}
It is at first sight rather remarkable that the observed Pfaffian phase fluctuations
are small in the ${\cal Q}=16$ theory given that the Pfaffian is certainly complex when
evaluated on a generic
set of background scalar and gauge fields. It appears to be a consequence of very specific
dynamics in the theory which ensure that only certain special regions of
field space are important in the path integral. Of course the continuum theory does possess very special dynamics;
for example the twisted supersymmetry ensures that
the torus partition function $Z$ is a topological invariant.
One immediate consequence of this is that
$Z$ may be computed exactly at one loop where Marcus has argued that
it simply reduces to an unsigned sum over isolated points in the moduli space of flat
complexified connections up to complex gauge transformations \cite{Marcus:1995mq}.
Furthermore, much of this structure survives in the {\it lattice} theory; the full
partition function {\it including} any Pfaffian phase
may be calculated exactly at one loop. As in the continuum theory there is
a perfect cancellation of contributions from fermions and bosons and the final result is real \cite{Catterall:2011pd}. Of course this does not mean that simulations at finite gauge coupling cannot suffer from sign problems, but it certainly makes this less likely. More prosaically, it is easy to see
that the Pfaffian is real and positive if the lattice scalar fields are set to zero -- and this is what effectively happens in the continuum limit as a result of the scalar potential that we use to control the vacuum expectation value and fluctuations of the trace mode.
\section{Conclusions}
We have performed numerical simulations of the four and sixteen supercharge lattice SYM theories in two dimensions to investigate the occurrence of a sign problem in these theories. In contrast to the usual situation in lattice gauge theory, we utilize a non compact parametrization of the gauge fields in which the lattice fields are expanded on the algebra of the group. While such a scheme would ordinarily break lattice gauge invariance, we show that in the case of these twisted supersymmetric models gauge symmetry is preserved, since the models in question are formulated in terms of a complexified $U(N)$ gauge field. The correct continuum limit is then ensured by adding an appropriate gauge invariant potential term which picks out a non-zero vacuum expectation value for the trace mode of the scalar fields in the continuum limit. We argue that the effects of this potential on the remaining traceless modes can be subsequently removed by sending the potential to zero {\it after} the continuum limit is taken.
We have examined both supersymmetric theories for several values of the dimensionless 't Hooft coupling $\lambda \beta^2$ and for gauge groups $U(2)$, $U(3)$ and $U(4)$. We take a careful continuum limit by simulating the theories over a range of lattice sizes $L = 2$--$14$. In both cases we see that the average Pfaffian phase goes to zero for a fixed gauge invariant potential as the continuum limit is taken. We also examine the subsequent limit in which the potential is removed and show evidence that supersymmetry is restored. While the absence of a sign problem is not surprising in the ${\cal Q}=4$ case (where one can prove the Pfaffian reduces to a real positive definite determinant in the continuum limit), it is a much more non-trivial matter in the ${\cal Q}=16$ supercharge case. In that case the Pfaffian evaluated on a generic background is complex even in the continuum limit. Nevertheless, we observe that the Pfaffian phase is small and decreases to zero as the continuum limit is taken. Indeed, in practice it is sufficiently small even on coarse lattices that there is no need to use a reweighting procedure to compute expectation values of observables. The analysis of the ${\cal Q}=16$ model is complicated by the fact that the $U(2)$ and $U(3)$ theories exhibit some special properties, since in the matrix model limit their Pfaffians are real positive definite and real, respectively. Nevertheless, the pattern we observe for the $U(4)$ group is similar to that seen for the smaller groups, and the trend supports the conjecture that the sign problem is absent in the continuum limit.
These results thus help to strengthen the case that there may be no sign problem for the ${\cal Q} = 16$ theory in four dimensions and hence no a priori barrier to numerical studies of this
theory.
\begin{acknowledgments}
This work was supported by the U.S. Department of Energy grant under contract no. DE-FG02-85ER40237 and Science Foundation Ireland grant 08/RFP/PHY1462. Simulations were performed using USQCD resources at Fermilab. The authors would like to acknowledge valuable conversations with Joaquin Drut and Robert Wells. AJ's work is also supported in part by the LDRD program at the Los Alamos National Laboratory.
\end{acknowledgments}
\bibliographystyle{JHEP}
\section{Introduction}\label{intro}
Transiting exoplanets currently present one of the best options towards studying the atmospheres of planets outside of the Solar System through observations of wavelength-dependent variations in their apparent radii as they occult their host star. These variations are intrinsically linked to the composition and structure of an exoplanetary atmosphere, as the starlight transmitted through the planetary limb is strongly modulated by the wavelength dependent opacities of its constituent molecular species \citep{Seag00}. Tracing these variations as a function of wavelength, known as transmission spectroscopy, has already been successfully applied across a range of both ground- and space-based observatories, unveiling a host of atomic and molecular species in the atmospheres of exoplanets (e.g. \citealt{Char02, Redf08, Snel08, Sing11, Demi13, Spak18, Evan18}) as well as providing strong insights into their bulk atmospheric properties (e.g. \citealt{Madh11, Evan17, Wake18}). In particular, \citet{Sing16} show a large diversity in the atmospheres of a sample of ten hot Jupiter exoplanets, revealing a continuum in the obscuring effects of haze and clouds on molecular absorption features present in their transmission spectra. Of the ten exoplanets displayed by \citet{Sing16}, WASP-6b and WASP-39b were lacking in near-infrared observations between 1-2 $\mu$m, a region abundant in potential water absorption features. \citet{Wake18} reported such observations for WASP-39b, providing a strong constraint on the water abundance in its atmosphere. In this study we present these observations for WASP-6b, completing the search for water absorption features across this sample of exoplanets.
Space-based observations, such as those performed with the \textit{Hubble Space Telescope} (\textit{HST}) and \textit{Spitzer}, have thus far proven to be the most prolific method towards the broad spectrophotometric characterisation of exoplanet atmospheres (e.g. \citealt{Char02, Demi13, Sing16}). However, ground-based characterisation through multi-object differential spectrophotometry with the \textit{Very Large Telescope} (\textit{VLT}) FOcal Reducer and Spectrograph (FORS2) \citep{Appe98}, has recently been able to produce \textit{HST}-quality transmission spectra for a variety of exoplanets \citep{Bean11, Niko16, Gibs17, Seda17, Niko18}. As part of a small survey to test the performance of FORS2 and assess the validity of previously observed spectroscopic features with \textit{HST}, the optical spectra of WASP-31b, WASP-39b and WASP-6b have been observed. In the case of WASP-39b and WASP-31b, these results have already been reported in \citet{Niko16} and \citet{Gibs17} respectively. In this study we report the results for WASP-6b, the final target from our ground-based comparative program.
WASP-6b is an inflated hot Jupiter with a mass of $0.485\, M_{\textrm{Jup}}$, a radius of $1.230\, R_{\textrm{Jup}}$ and an equilibrium temperature of 1184$\,$K \citep{Treg15} discovered by the Wide Angle Search for Planets (\textit{WASP}) ground-based transit survey \citep{Poll06, Gill09}. WASP-6b orbits with a period of $P \simeq 3.36$ d at a separation $a \simeq 0.041$ AU around a mildly metal-poor G8V star \citep{Gill09, Treg15}. \citet{Ibgu10} demonstrate that the planet's inflated radius could be due to tidal-heating brought on by a non-zero eccentricity reported in \citet{Gill09}. Whilst further radial velocity data from \citet{Husn12} demonstrated that this eccentricity is not significantly non-zero, as initially inferred, it does not necessitate a circular orbit and as such the true cause of the inflation has yet to be definitively determined. \citet{Doyl13} refine the bulk properties of the host star WASP-6 through spectroscopy, providing measurements of T$_\textrm{eff}$ = 5375 $\pm$ 65$\,$K, log($g$) = 4.61 $\pm$ 0.07 and [Fe/H] = -0.15 $\pm$ 0.09. Finally, \citet{Treg15} demonstrated that fluctuations in multiple transit light curves of archival photometry of WASP-6b could be attributed to a single star spot anomaly. This enabled a more precise measurement of the sky-projected spin-orbit alignment, $\lambda = 7.2^\circ \pm 3.7^\circ$, in agreement with \citet{Gill09}.
The atmosphere of WASP-6b was initially probed spectrophotometrically in the optical with the ground-based IMACS instrument on the 6.5-m \textit{Magellan Telescope} by \citet{Jord13}, who observed a decrease in transit depth as a function of wavelength, characteristic of a scattering haze, and no evidence of the Na {\sc i} and K {\sc i} absorption lines. Subsequent observations performed in the optical with \textit{HST}'s Space Telescope Imaging Spectrograph (STIS) and \textit{Spitzer}'s InfraRed Array Camera (IRAC) \citep{Niko15} also demonstrated evidence of a scattering haze, however the Na {\sc i} and K {\sc i} lines were resolved in this case with significance levels of 1.2$\sigma$ and 2.7$\sigma$ respectively. WASP-6b's atmosphere has also been observed at secondary eclipse, as the planet passes behind its host star from our point of view, with \textit{Spitzer} IRAC, providing day side temperature estimates of 1235$\substack{ +70 \\ -77 }\,$K and 1118$\substack{ +68 \\ -74 }\,$K for the 3.6 and 4.5 $\mu$m channels respectively \citep{Kamm15}.
We present new spectrophotometric observations from 1.1 to 1.7 $\mu$m using the \textit{HST} Wide Field Camera 3 (WFC3) instrument with the G141 grism for the exoplanet WASP-6b, the final object in the \citet{Sing16} study without observations in this wavelength range. Additionally, we present new spectrophotometric observations from 0.4 to 0.8 $\mu$m performed from the ground using \textit{VLT} FORS2. Recent photometric observations of WASP-6b performed from space with the Transiting Exoplanet Survey Satellite (\textit{TESS}) \citep{Rick14} are also included in our study. These datasets were analysed in tandem with a reanalysis of the archival STIS and \textit{Spitzer} datasets on a common Gaussian Process (GP) framework \citep{Gibs12}. We also perform light-curve corrections to account for the effects of stellar heterogeneity on the perceived transmission spectrum of WASP-6b, the presence of which can act to mimic the signatures of scattering hazes \citep{Mccu14, Rack18, Pinh18, Alam18, Rack19}.
Descriptions of our observations and the necessary data reduction are shown in Section \ref{obs}. All light curve fitting and analysis is presented in Section \ref{lightcurves}. An accounting of the effects of stellar heterogeneity is shown in Section \ref{stelact}. The resultant transmission spectra and the conclusions drawn from them using both forward and retrieval based models are described in Section \ref{disc}. Finally, we summarise our results in Section \ref{conc}.
\section{Observations and Data Reduction}\label{obs}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{grismthroughput.png}
\vspace*{-5mm}
\caption{Representative observed spectra for the FORS2 G600B, FORS2 G600RI and WFC3 G141 grisms, the thicker coloured lines indicate spectra of WASP-6 whilst thinner grey lines correspond to that of the reference star, both target and reference spectra are normalised to the maximum of the target spectrum for that observation. Shaded bands indicate the selected wavelength binning for each grism. }
\label{grismthroughput}
\end{figure*}
\subsection{\textit{VLT} FORS2}\label{vltobs}
We obtained observations of two primary transits of WASP-6b using the \textit{VLT} FORS2 GRIS600B (G600B) and GRIS600RI (G600RI) grisms in multi-object spectroscopy mode on 2015 October 3 and 2015 November 9 respectively as part of program 096.C-0765 (PI: Nikolov). These observations utilise a mask with broad slits centred on WASP-6 and a nearby reference star (2MASS J23124095-2243232), all slits had a width of 25", the slit lengths used in the G600B and G600RI observations were 31" and 90" respectively. On the night of the G600B observations conditions began clear (less than 10 per cent of the sky covered in clouds, transparency variations under 10 per cent) and moved to photometric (no clouds, transparency variations under 2 per cent) approximately half way through the observations. The exposure time was set at 100 seconds per exposure for a total of 152 exposures. During this night observations were halted for $\sim$30 minutes during transit ingress as the target passed through the zenith and was outside the observable region of the telescope. On the night of the G600RI observations, conditions began clear but moved to photometric for the bulk of the observation and the exposure time was set to 60 seconds per exposure for a total of 184 exposures. Towards the end of the transit an earthquake caused a guide star loss and as such observations were halted for $\sim$15 minutes.
We begin the data reduction by performing bias- and flat-field corrections on the raw data frames, followed by cosmic ray correction using two iterations of the \texttt{L.A.Cosmic} algorithm \citep{Dokk01}. Background flux subtraction for each spectrum was conducted using the median of a box of pixels outside of each spectral trace. Spectra were then extracted using the \texttt{APALL} procedure within the \texttt{IRAF} package \citep{Tody93}. Aperture widths for the spectral extraction were varied and values of 14 and 15 pixels were selected as they minimised the dispersion in the out-of-transit flux for the G600B and G600RI white light curves respectively. We produce a wavelength solution for both observations using the spectra of an emission lamp taken with the calibration mask following each observation. In particular, a low-order Chebyshev polynomial was fit to a multitude of emission lines, the centres of which were determined through individual Gaussian fits. This wavelength solution was then applied to a single data frame to produce a reference spectrum for each observation. Finally, each extracted spectrum was then cross-correlated against its respective reference in order to account for sub-pixel shifts in the dispersion direction, the maximum resultant shifts were $\sim$1.2 pixels and $\sim$0.3 pixels for the G600B and G600RI datasets respectively. Representative spectra of both WASP-6 and the reference star are shown in Figure \ref{grismthroughput} for both the G600B and G600RI observations.
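The cross-correlation and sub-pixel refinement step can be summarised in a short sketch (our own illustration, not the reduction code itself; the sign convention of the returned shift depends on the direction of \texttt{np.roll}):
\begin{verbatim}
import numpy as np

def measure_shift(spec, ref, max_lag=5):
    # cross-correlate a spectrum against the reference at integer lags
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.sum(np.roll(spec, k)*ref) for k in lags])
    k = int(np.argmax(cc))
    k = min(max(k, 1), len(cc) - 2)   # keep the 3-point stencil in range
    # parabolic interpolation of the peak for sub-pixel precision
    num = cc[k-1] - cc[k+1]
    den = cc[k-1] - 2.0*cc[k] + cc[k+1]
    return lags[k] + 0.5*num/den
\end{verbatim}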
\subsection{\textit{HST} WFC3}\label{hstobs}
A primary transit of WASP-6b was also observed using the \textit{HST} WFC3 G141 grism on 2017 May 6 as part of General Observer (GO) program 14767 (PI: Sing and L\'opez-Morales). All exposures were taken in sequence across five \textit{HST} orbits, with 13 exposures per orbit, except for the first orbit which only consisted of 10 exposures. Each exposure was performed in forward spatial scanning mode \citep{mccu12}, where the telescope slews in the cross dispersion axis during the exposure, allowing for longer exposure times whilst avoiding saturation on the detector. For the first orbit the exposure times were set to $\sim$184 seconds, whilst the remaining orbits had exposure times of $\sim$138 seconds. All exposures employed the SPARS25 readout mode and used a scan rate of $\sim$0.46 pixels per second.
Reduction of the spectra began with the \texttt{.ima} files output from the \texttt{CALWF3} pipeline. Each \texttt{.ima} file contains multiple reads for each individual spatial scan, up to the final full scan image. We do not, however, perform spectral extraction on the final frame of each scan but rather on the sum of differenced frames, following \citet{Demi13}. This has the advantage of reducing the impact of cosmic rays and hot pixels, whilst also reducing the overall sky background. For each differenced read, pixels beyond a mask of 35 pixels above and below the centre of the spectral trace were zeroed before extraction of the differenced frame, following \citet{Evan16}. Finally, we then sum all of the differenced frames for each spatial scan to produce a final differenced frame scan.
To perform cosmic ray correction these frames were stacked into a single cube so that the variation of each pixel could be tracked as a function of time. Each pixel was smoothed temporally with a Gaussian filter and pixel deviations between this and the initial datacube larger than 8$\sigma$ were flagged as cosmic rays. Static bad pixels were also flagged by searching for deviations greater than 10$\sigma$ between each individual unsmoothed pixel and the median of a span of 5 pixels in the cross-dispersion direction, centred on the initial pixel. These cosmic rays and static pixels were then replaced by a linear interpolation of the pixel to the PSF of the same median span. Using a second mask of 50 pixels above and below the centre of the final scans, the 2D spectra were summed along the cross-dispersion axis to produce a 1D spectrum for each scan. This mask width was selected as it provided the minimal white light curve out-of-transit scatter across a range of 30 to 80 pixels in steps of 5 pixels. The background was subtracted from each spectrum using the median of a box of pixels in a region of the detector unpolluted by the diffuse light from the edges of the spatial scan.
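The temporal flagging step lends itself to a compact implementation. The sketch below is our own and is illustrative only; for brevity it replaces flagged pixels with the temporally smoothed value rather than the cross-dispersion interpolation described above:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def clip_cosmics(cube, nsigma=8.0, width=3.0):
    """cube: (n_frames, ny, nx) stack of differenced scans."""
    smooth = gaussian_filter1d(cube, sigma=width, axis=0)
    resid = cube - smooth
    sigma = resid.std(axis=0, keepdims=True)  # per-pixel temporal scatter
    mask = np.abs(resid) > nsigma*sigma       # flagged cosmic-ray hits
    return np.where(mask, smooth, cube), mask
\end{verbatim}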
Wavelength solutions were obtained by cross-correlating each individual spectrum with an \texttt{ATLAS}\footnote{\url{http://kurucz.harvard.edu/}} \citep{Kuru93} stellar spectrum, with parameters similar to WASP-6 (T$_\textrm{eff}$=5500K, log($g$)=4.5, [M/H]=-0.2), convolved with the throughput of the G141 grism. Before cross-correlation, both spectra were smoothed with a Gaussian filter to inhibit the effects of spectral lines and focus the correlation on the steep edges of the G141 throughput. This process revealed shifts in the dispersion direction across the course of observation within $\sim$0.12 pixels. An example 1D spectrum from the G141 observations is shown in Figure \ref{grismthroughput}.
\subsection{\textit{TESS}}\label{tessobs}
The \textit{Transiting Exoplanet Survey Satellite} (\textit{TESS}) is currently performing an all sky search for transiting exoplanets in a single broadband filter from 0.6 to 1.0 $\mu$m \citep{Rick14}. Due to its broad 24$^\circ$ $\times$ 96$^\circ$ field of view, \textit{TESS} holds enormous potential not only for discovering new exoplanets, but also for observing transits of already known transiting systems. With the public release of the \textit{TESS} Sector 2 data, 7 clear transits of WASP-6b can be readily identified from 2018 Aug 23 to 2018 Sep 19.
To obtain the \textit{TESS} light curve spanning this time period we initially used the pre-calibrated and extracted light curve held in the \texttt{lc.fits} file. However, on closer inspection we found indications of a non-optimal pipeline correction and as such chose to perform our own correction on the uncorrected light curve in the same file. We follow a Pixel Level Decorrelation (PLD) systematics removal method on the raw data as implemented by the \texttt{lightkurve} Python package \citep{lkurv}. PLD has already been used successfully as a systematics correction technique on both \textit{Spitzer} \citep{Demi15} and K2 data \citep{Luge16, Luge18} and we refer the reader to these references for further information on the PLD technique itself. Finally, to prepare for the transit light curve analysis, we extract seven separate portions from the complete light curve, each centred on one of the observed transits. Each individual extracted light curve spans from roughly 5 hours pre-transit to 5 hours post-transit, in order to facilitate an effective out-of-transit baseline determination.
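For reference, a PLD correction of this kind can be reproduced in only a few lines with \texttt{lightkurve}; the sketch below is ours and is illustrative only (the exact class and function names depend on the \texttt{lightkurve} version, and the epoch \texttt{t0} is a placeholder rather than a fitted value):
\begin{verbatim}
import numpy as np
import lightkurve as lk
from lightkurve.correctors import PLDCorrector

# Sector 2 target pixel file for WASP-6
tpf = lk.search_targetpixelfile("WASP-6", mission="TESS",
                                sector=2).download()

# detrend the raw photometry with pixel level decorrelation
lc = PLDCorrector(tpf).correct().normalize()

# extract ~5 hour windows around each transit
P, t0 = 3.36100239, 0.0   # t0: placeholder epoch, same time system as lc
phase = ((lc.time.value - t0 + 0.5*P) % P) - 0.5*P
windows = lc[np.abs(phase) < 5.0/24.0]
\end{verbatim}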
\subsection{Archival Data}
In order to fully exploit the data that are available to us we opt to perform a reanalysis of the previously reported \textit{HST} STIS and \textit{Spitzer} IRAC data \citep{Niko15}. Specifically, there were two spectroscopic transit observations with the STIS G430L grism from 0.33-0.57 $\mu$m, one spectroscopic transit using the STIS G750L grism from 0.55-1.03 $\mu$m, and one photometric transit for each of the \textit{Spitzer} IRAC 3.6 and 4.5 $\mu$m bandpasses. Performing such a reanalysis can account for transit depth baseline offsets between these datasets and those in this study by fitting all light curves under a common set of prior system parameters. Furthermore, the implementation of a stellar heterogeneity correction, and its changes to the system parameters (Section \ref{stelact}) necessitates further light curve fitting. A complete reanalysis ensures that any comparisons between the spot corrected and uncorrected datasets are not influenced by the differing light curve fitting methodologies of this study and that of \citet{Niko15}.
With respect to the data reduction of the observations themselves, all light curves were extracted following the same methodology outlined in \citet{Niko15}. For the STIS data this involves spectral extraction following the \texttt{APALL} procedure in \texttt{IRAF} \citep{Tody93}, and photometry is performed for the \textit{Spitzer} data through time-variable aperture extraction. For the \textit{Spitzer} IRAC light curves there are thousands of independent photometric measurements throughout each observation and to reduce the computational intensity of the light curve fitting procedure described in Section \ref{lightcurves} we bin each light curve into 1000 bins, corresponding to a cadence of $\sim$15 and $\sim$16 seconds for the 3.6 and 4.5 $\mu$m bands respectively.
\section{Light Curve Analysis}\label{lightcurves}
White light curves for the G600B, G600RI and G141 datasets were produced by summing the flux for each individual spectrum along the dispersion axis from 0.449 to 0.617 $\mu$m, 0.529 to 0.833 $\mu$m, and from 1.0 to 1.8 $\mu$m respectively. Spectrophotometric light curves were produced for the G600B, G600RI, and G141 datasets by summing the flux within 12, 34, and 28 respective bins across the wavelength ranges displayed in Figure \ref{grismthroughput}.
Below $\sim$0.45 $\mu$m the G600B flux levels are the lowest of both of the FORS2 datasets and inherently contain a limited amount of information due to the higher photon error. Whilst using a larger bin size could alleviate this, the contribution of differential extinction due to a spectral type mismatch between the target and reference must also be considered. In the case of our observations such a mismatch is evident in the different spectral profiles of the target and reference star in Figure \ref{grismthroughput}. The flux of the reference star from 0.40 to 0.45 $\mu$m is $\sim$50 per cent that of the target, whereas at 0.6 $\mu$m this value is $\sim$80 per cent. Therefore, the data below 0.45 $\mu$m not only contain the lowest flux levels of our FORS2 observations, but their accuracy is impacted the most by the differential extinction. Furthermore, including such a wavelength range would also impart further differential extinction effects on every other spectrophotometric bin in the G600B dataset due to the nature of the common-mode, white-light correction performed during the light curve fitting. In an effort to mitigate the impact of differential extinction on our final transmission spectra we therefore choose to exclude the G600B data below $\sim$0.45 $\mu$m.
In the case of the G600B and G600RI observations, all light curves were also produced for the reference star. Before fitting any of the G600B or G600RI light curves, we first correct for dominant atmospheric effects by dividing the raw flux of the target by that of the corresponding wavelength range reference. The spectrophotometric bins for all observations are displayed in Figure \ref{grismthroughput}. As the \textit{TESS} observations are photometric they hold no spectral information and were treated as white light curves in terms of fitting. Finally, we obtain the archival STIS and \textit{Spitzer} light curves across identical wavelength ranges as described in \citet{Niko15}.
During both the G600B and G600RI observations, the target needed to be reacquired and as such all light curves suffer from incomplete phase coverage; this also results in the separate pieces of each light curve exhibiting differing systematic effects. Throughout our analyses we were unable to accurately and effectively account for these systematic offsets due to the significant, or complete, absence of in-transit observations for one piece of each light curve. As such, in the analysis presented here we exclude the pre-ingress data for the G600B observation and the post-egress data for the G600RI observation. The first orbit, and first spectrum of all other orbits, of the G141 observation exhibit much stronger systematics than the other obtained spectra due to charge trapping in the detector \citep{Zhou17}. We therefore opt to remove these data from our analysis, in line with many other studies (e.g. \citealt{Knut14, Sing16, Wake18, Evan19}) that have been performed since the first spatial scanning WFC3 transit observations were made \citep{Demi13}.
\begin{table}
\centering
\begin{tabular}{c c c c}
\hline
\hline
& Uncorrected & \textit{TESS} Corrected & \textit{AIT} Corrected \\
\hline
$i$ ($^{\circ}$) & 88.78$\substack{ +0.13 \\ -0.13 }$ & 88.73$\substack{ +0.13 \\ -0.12 }$ & 88.72$\substack{ +0.13 \\ -0.12 }$ \\[1ex]
$a/R_*$ & 11.154$\substack{ +0.049 \\ -0.072 }$ & 11.135$\substack{ +0.050 \\ -0.072 }$ & 11.123$\substack{ +0.050 \\ -0.072 }$ \\
\hline
\end{tabular}
\caption{Weighted average values of the orbital inclination and normalised semi-major axis for the uncorrected and spot corrected light curve analyses.}
\label{bestfitpars}
\end{table}
\subsection{White Light Curves}\label{wlcs}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{whitelightcurve_2.png}
\vspace*{-5mm}
\caption{Normalised white light curves and residuals of WASP-6b for the G600B, G600RI and G141 grism observations as labelled. \textit{Left:} Data shown from top to bottom are: the raw light curve following reference star correction (grey squares indicating the excluded sections of the light curve) with the black line indicating the GP transit plus systematic model fit, the light curve after removal of the GP systematic component overplotted with the best fitting transit model from \citet{Mand02}, and the computed common-mode correction following division of the raw data by the best fitting transit model. \textit{Centre:} As in the left panel. \textit{Right:} The upper light curve is the raw flux with the black line indicating the GP transit plus systematic model fit, whilst the lower is the light curve after removal of the GP systematic component overplotted with the best fitting transit model from \citet{Mand02}. All lower panels display residuals following subtraction of the corresponding corrected light curves by their respective best fitting models.}
\label{whitelightcurve}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{tesslightcurve.png}
\vspace*{-5mm}
\caption{Normalised \textit{TESS} photometric light curves multiplied by an arbitrary constant. \textit{Left:} Raw extracted light curves with black lines indicating the GP transit plus systematic model fits. \textit{Centre:} Light curves after removal of GP systematic component. The best fitting transit models from \citet{Mand02} are displayed in grey. \textit{Right:} Residuals after subtraction of best fitting models from the GP systematic corrected light curves. }
\label{tesslightcurves}
\end{figure*}
To perform all light curve fitting we follow \citet{Gibs12}, accounting for the transit and instrumental signals simultaneously by treating the data for each light curve as a Gaussian process (GP) using the Python library \texttt{George} \citep{Ambi14}. GP fitting methodologies have been successfully applied to a range of transit observations \citep{Gibs12, Gibs12b, Gibs17, Evan13, Evan15, Evan16, Evan17, Evan18, Evan19, Cart17, Kirk17, Kirk18, Kirk19, Loud17, Seda17} from both the ground and space thus far and enable the measurement of the systematic signal without assuming any prior knowledge of its functional form. We obtain the best fit model to each light curve by marginalising over the constructed GP likelihoods using Markov chain Monte Carlo (MCMC) as implemented by the Python library \texttt{emcee} \citep{Fore13}. When executing each MCMC, we first initialised a group of 150 walkers near the maximum likelihood solution, identified using a Nelder-Mead simplex algorithm as implemented by the \texttt{fmin} function within the \texttt{scipy} library. We ran this group for 500 samples and then used the best solution to initialise a second group of 150 walkers in a narrow space around it. This second group was then run for 3000 samples, with the first 500 samples being discarded as burn-in.
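As an illustrative sketch of this procedure (not the exact code used in our analysis), the following assumes a single light curve stored in hypothetical arrays \texttt{t}, \texttt{flux} and \texttt{err} read from a placeholder file, samples only the GP hyperparameters under flat priors, and abbreviates the two-stage walker initialisation to a single stage:
\begin{verbatim}
import numpy as np
import emcee
import george
from george import kernels
from scipy.optimize import fmin

# Hypothetical data arrays: time (d), normalised flux and errors.
t, flux, err = np.loadtxt("lightcurve.txt", unpack=True)

# Matern nu=3/2 kernel in time; the amplitude and length scale
# below are placeholder starting values, sampled over in the MCMC.
gp = george.GP(1e-6 * kernels.Matern32Kernel(0.1, ndim=1))
gp.compute(t, err)

def log_prob(theta):
    # theta packs george's internal (log) hyperparameters; the
    # transit mean-function parameters are omitted for brevity.
    gp.set_parameter_vector(theta)
    return gp.log_likelihood(flux, quiet=True)

# Maximum-likelihood solution via a Nelder-Mead simplex, then
# walkers started in a narrow ball around it.
p_ml = fmin(lambda p: -log_prob(p), gp.get_parameter_vector(),
            disp=False)
nwalkers, ndim = 150, len(p_ml)
start = p_ml + 1e-4 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, 3000)
samples = sampler.get_chain(discard=500, flat=True)  # drop burn-in
\end{verbatim}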
We list the individual subtleties of the GP fitting procedure for each dataset below; however, some aspects remained unchanged regardless of the dataset. For the GP covariance amplitudes of all datasets we utilise gamma priors of the form $p(A) \propto e^{-100A}$ as in \citet{Evan18} in order to favour smaller correlation amplitudes and reduce the effects of outliers. Additionally, we follow previous studies and fit for the natural logarithm of the inverse length scale hyperparameters (e.g. \citealt{Evan17, Gibs17, Evan18}), but limit the corresponding length scales with a uniform prior ranging between the cadence of the observation and twice the length of the observation. This prescription encourages the GP to fit the broader systematic variations that occur during the transit, with shorter variations described by white noise and longer variations accounted for by the linear baseline trend. Finally, in all cases the orbital period was held fixed to the value of $P$ = 3.36100239 d from \citet{Niko15} and the eccentricity was held fixed to the value of $e$ = 0.041 from \citet{Husn12}.
\subsubsection{G600B \& G600RI}
To describe the mean function of the GP we use the model transit light curves of \citet{Mand02} generated using the \texttt{batman} Python library \citep{Krei15} multiplied by a linear airmass baseline trend. We initially tested a linear time baseline trend; however, we found that this restricted the final GP fitting of higher frequency variations within the light curves. By utilising a linear airmass baseline trend, the non-linear sloping of the light curves was better matched and the GP had more freedom to fit these higher frequency variations. Whilst the observed airmass trend can be included in the GP directly as a decorrelation parameter, we found this necessitated stricter priors on the length scale hyperparameters and did not measurably improve the fitting. As such, we opt to include this term through the baseline trend. To construct the covariance matrix of the GP we use the Mat\'ern $\nu$ = 3/2 kernel, with time as the decorrelation parameter. Other decorrelation parameters were also tested both individually and in combination, such as: spectral dispersion drift, cross-dispersion drift, full-width half maximum, ambient temperature, ambient pressure, and telescope rotation angle. Despite this, no clear correlations were observed and we therefore excluded these parameters from the final analysis.
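For concreteness, a mean function of this form can be sketched as below; the transit parameter values are placeholders, the argument of periastron is an assumed value, and the linear airmass trend coefficients \texttt{b0} and \texttt{b1} would be fit alongside the transit parameters:
\begin{verbatim}
import batman

def mean_model(t, airmass, rp, t0, inc, a_rs, u1, u2, b0, b1):
    """Mandel & Agol transit (via batman) multiplied by a linear
    airmass baseline trend; all inputs are placeholders."""
    params = batman.TransitParams()
    params.t0 = t0                 # transit central time (d)
    params.per = 3.36100239        # fixed orbital period (d)
    params.rp = rp                 # Rp/R*
    params.a = a_rs                # a/R*
    params.inc = inc               # inclination (deg)
    params.ecc = 0.041             # fixed eccentricity
    params.w = 90.0                # argument of periastron (assumed)
    params.u = [u1, u2]            # quadratic limb darkening
    params.limb_dark = "quadratic"
    transit = batman.TransitModel(params, t).light_curve(params)
    return transit * (b0 + b1 * airmass)
\end{verbatim}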
Unlike the other datasets, for the FORS2 analysis we account for limb-darkening following the two-parameter quadratic law. The treatment is different as these observations were performed from the ground, where the Earth's atmosphere acts as a filter for the incoming light. Crucially, the response of the atmosphere is a function of wavelength and varies with the zenith distance, which itself varies throughout the observations. Instead of making explicit assumptions about this atmospheric transmission and including it directly in our determination of precomputed limb darkening coefficients, we choose to fit for the coefficients themselves. We select the quadratic limb darkening law in order to improve computational efficiency by reducing the number of fit parameters whilst still providing an accurate description of the true limb darkening of WASP-6 given its temperature \citep{Espi16}.
For the G600RI observation we allow the transit depth $R_\textrm{p} / R_*$, inclination $i$, normalised semi-major axis $a/R_*$, transit central time $T_0$, linear trend parameters and quadratic limb darkening parameters $u_1$ and $u_2$ to vary throughout the fit. However, in the case of the G600B observation, we found that the paucity of transit coverage provided imprecise determinations of $i$ and $a/R_*$ and as such we perform a simpler fit after retrieving the weighted average best fit parameters, see Section \ref{bfm_para}.
The presence of high frequency variations from $\sim$2-3 hours and $\sim$0-1 hours after mid-transit for the G600B and G600RI light curves, respectively, strongly constrains the hyperparameters of the GP fit, which leads to overfitting of other variations within the light curve. In order to assess the impact on the fit transit parameters we restricted the priors on these hyperparameters such that the high frequency variations could no longer bias the GP fitting. Whilst this significantly reduced the perceived overfitting, we find that all fit transit parameters are unaffected by this change and lie within 1$\sigma$ of the original fit. Therefore, and in light of the lack of prior knowledge on these hyperparameters, we opt not to perform such a restriction for any of the final white light curve fits.
\subsubsection{STIS \& G141}
The mean function of the GP is described identically to the G600B and G600RI mean functions, except using a linear time baseline trend. To construct the covariance matrix of the GP we use the Mat\'ern $\nu$ = 3/2 kernel, with \textit{HST} orbital phase, dispersion shift and cross-dispersion shift identified as the optimal decorrelation parameters. Limb darkening was accounted for through the four-parameter non-linear law. During the fitting we allow the transit depth $R_\textrm{p} / R_*$, inclination $i$, normalised semi-major axis $a/R_*$, transit central time $T_0$ and linear trend parameters to vary, and fix all four non-linear limb darkening coefficients to values calculated from the ATLAS model described in Section \ref{obs}, following \citet{Sing10}. Finally, as there are two independent light curves in the STIS G430L observations we performed a joint fit between them, only allowing the transit central time for each light curve to vary independently.
\subsubsection{Spitzer}
The mean function of the GP is described identically to the G600B and G600RI mean functions, except using a linear time baseline trend. We construct the covariance matrix following \citet{Evan15}. Specifically, we construct a kernel $k = k_{xy} + k_t$ where $k_{xy}$ is a squared exponential kernel, with the photometric centroid $x$ and $y$ coordinates as the decorrelation parameters, and $k_t$ is a Mat\'ern $\nu$ = 3/2 kernel, with time as the decorrelation parameter. Constructing such a kernel allows us to account for the smooth variations in pixel sensitivities as well as residual correlated noise in the light curve. Limb darkening was accounted for through the four-parameter non-linear law. During the fitting we allow the transit depth $R_\textrm{p} / R_*$, inclination $i$, normalised semi-major axis $a/R_*$, transit central time $T_0$ and linear trend parameters to vary, and fix all four non-linear limb darkening coefficients similarly to the STIS and G141 observations.
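A sketch of such a composite kernel in \texttt{George}, assuming hypothetical arrays \texttt{t}, \texttt{x\_cen} and \texttt{y\_cen} for the time stamps and photometric centroid positions, might read:
\begin{verbatim}
import numpy as np
import george
from george import kernels

# Stack the inputs so each kernel acts on its own dimensions.
X = np.column_stack([x_cen, y_cen, t])

# k_xy: squared exponential over the centroid coordinates,
# capturing smooth intra-pixel sensitivity variations.
k_xy = 1e-6 * kernels.ExpSquaredKernel([0.1, 0.1], ndim=3,
                                       axes=[0, 1])
# k_t: Matern nu=3/2 in time, capturing residual correlated noise.
k_t = 1e-6 * kernels.Matern32Kernel(0.1, ndim=3, axes=2)

gp = george.GP(k_xy + k_t)
gp.compute(X, err)  # err: per-point flux uncertainties
\end{verbatim}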
\subsubsection{TESS}
The mean function of the GP is described identically to the G600B and G600RI mean functions, except using a linear time baseline trend. To construct the covariance matrix of the GP we use the Mat\'ern $\nu$ = 3/2 kernel, with time as the decorrelation parameter. Limb darkening was accounted for through the four-parameter non-linear law. During the fitting we allow the transit depth $R_\textrm{p} / R_*$, inclination $i$, normalised semi-major axis $a/R_*$, transit central time $T_0$ and linear trend parameters to vary, and fix all four non-linear limb darkening coefficients similarly to the STIS and G141 observations.
\subsubsection{Best Fit Models}\label{bfm_para}
In order to obtain the best fit model to each dataset we determine the weighted average values of the orbital inclination and the normalised semi-major axis (Table \ref{bestfitpars}). Using these values we performed the fit to the G600B dataset, where we allowed the transit depth $R_\textrm{p} / R_*$, transit central time $T_0$, linear trend parameters, and the quadratic limb darkening parameters $u_1$ and $u_2$ to vary. In addition, we repeat the fit for each other light curve, with the orbital inclination and normalised semi-major axis fixed to the weighted average values and the transit central time to that of its respective original fit. The G600B, G600RI and G141 light curves, alongside the systematics corrected light curves are displayed in Figure \ref{whitelightcurve}, all \textit{TESS} light curves are displayed in Figure \ref{tesslightcurves}, all STIS light curves are displayed in Figure \ref{stis_wlcs}, all \textit{Spitzer} light curves are displayed in Figure \ref{spitzer_wlcs}, and all relevant MCMC results are displayed in Table \ref{wlcparams}.
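As a simple sketch of the averaging step, the weighted averages can be formed as inverse-variance weighted means; here the asymmetric posterior errors are approximated by symmetrised values, which is an assumption of this illustration:
\begin{verbatim}
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    w = 1.0 / np.asarray(errors) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# e.g. inclinations from the individual white light curve fits:
# i_avg, i_err = weighted_average(i_values, i_errors)
\end{verbatim}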
\input{wlc_table}
\subsection{Spectrophotometric Light Curves}\label{slcs}
Prior to the full spectrophotometric fits, we correct all of the spectrophotometric light curves for wavelength independent (common-mode) systematics. In the case of the G600B and G600RI datasets we follow \citet{Niko16} and determine a common-mode correction by dividing each uncorrected transit white light curve by its final best fit transit model. To apply the correction we divide all spectrophotometric light curves by the common-mode calculated from their parent white light curve. For the G141 dataset we correct for common-mode systematics following the shift-and-fit method of \citet{Demi13}. In this case a reference spectrum was first produced by averaging all of the out-of-transit spectra. Each individual spectrum was then matched against this reference through stretching vertically in flux and shifting horizontally in wavelength following a linear least-squares optimisation. We then separate the spectral residuals of this fit into 28 wavelength bins spanning 1.13 to 1.65 $\mu$m. Each spectrophotometric residual was then added to a transit model constructed using the best fit parameters from the white light curve fit and limb darkening calculated for the relevant wavelength bin to produce the spectrophotometric light curves. All corrections can be seen under each systematics corrected light curve in Figure \ref{whitelightcurve}.
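The G600B/G600RI common-mode step amounts to the following sketch, where \texttt{white\_flux}, \texttt{white\_model} and the per-bin dictionary \texttt{spec\_flux} are hypothetical names:
\begin{verbatim}
# Common-mode signal: uncorrected white light curve divided by
# its best fit transit model (leaving the systematics only).
common_mode = white_flux / white_model

# Divide every spectrophotometric bin by the common-mode signal
# of its parent white light curve.
spec_corrected = {wbin: flux / common_mode
                  for wbin, flux in spec_flux.items()}
\end{verbatim}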
All spectrophotometric light curves were then fit following the same process as their corresponding white light curves. In each case however, the inclination and normalised semi-major axis were fixed to the weighted average values calculated from the white light curve fits and the transit central time was fixed to that of each respective white light curve fit. Additionally, for the G600B and G600RI light curves the quadratic limb darkening parameter $u_2$ was fixed to a value calculated from the ATLAS model described in Section \ref{obs} for each individual wavelength bin. The results for all best fit transit depths are displayed in Tables \ref{slcparams} and \ref{slcparams2} and all spectrophotometric light curves for the G600B, G600RI, G141 and STIS datasets are displayed in Figures \ref{spectro_b}, \ref{spectro_ri}, \ref{spectro_w} and \ref{stis_slcs} respectively.
The initial transmission spectrum of these spectrophotometric light curves revealed an offset in transit depth between the G600B and G600RI datasets. Whilst activity of the host star can lead to such offsets, the stellar variability monitoring performed in \citet{Niko15} shows that potential offsets are of a magnitude $\Delta R_\textrm{p}/R_* \simeq 0.00022$, much too small to account for the observed offset of $\Delta R_\textrm{p}/R_* \sim 0.002$. Furthermore, the very good agreement of the G600RI dataset with the STIS measurements (Section \ref{archivalcomparison}) of \citet{Niko15} demonstrates that the cause of this offset most likely lies with the G600B dataset. Due to the poor phase coverage of the G600B dataset there are almost no observations during ingress; this produces a large uncertainty in the transit central time, and subsequently the absolute transit depth, which may be responsible for the offset we see. Therefore, to account for this offset we apply a vertical shift to the G600B dataset by performing a weighted least-squares minimisation on the difference between the spectrophotometric bins in the overlapping region between the G600B and G600RI datasets, leaving the relative vertical shift of the G600B dataset as a free parameter in the minimisation. This results in a shift of $\Delta R_\textrm{p}/R_* = 0.00248$, equivalent to $\sim1.5\sigma$ of the error on the transit depth of the G600B white light curve. A full transmission spectrum with this offset included is shown in Figure \ref{reduced_transpec}.
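This shift amounts to a one-parameter weighted least-squares problem over the overlapping bins; a sketch with hypothetical depth and error arrays follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# depth_b, err_b and depth_ri, err_ri: hypothetical Rp/R* values
# and uncertainties for the overlapping G600B and G600RI bins.
def chi2(offset):
    resid = (depth_b + offset) - depth_ri
    return np.sum(resid ** 2 / (err_b ** 2 + err_ri ** 2))

shift = minimize_scalar(chi2).x  # vertical shift applied to G600B
\end{verbatim}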
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{spectro_B.png}
\vspace*{-5mm}
\caption{Normalised spectrophotometric light curves for the G600B dataset of WASP-6b, light curves are offset from one another by an arbitrary constant. \textit{Left:} Raw light curves following reference star correction. \textit{Centre-Left:} Light curves after common-mode correction with black lines indicating the best GP transit plus systematic model fit. \textit{Centre-Right:} Light curves after common-mode correction and removal of GP systematic component. The best fitting transit models from \citet{Mand02} are displayed in grey. \textit{Right:} Residuals following subtraction of best fitting model.}
\label{spectro_b}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{spectro_RI.png}
\vspace*{-5mm}
\caption{As in Figure \ref{spectro_b}, but for the G600RI dataset.}
\label{spectro_ri}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{spectro_W.png}
\vspace*{-5mm}
\caption{Normalised spectrophotometric light curves for the G141 dataset of WASP-6b, light curves are offset from one another by an arbitrary constant. \textit{Left:} Raw extracted light curves with black lines indicating the GP transit plus systematic model fit. \textit{Centre:} Light curves after removal of GP systematic component. The best fitting transit models from \citet{Mand02} are displayed in grey. \textit{Right:} Residuals following subtraction of best fitting model.}
\label{spectro_w}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{seperated_transpec.png}
\vspace*{-5mm}
\caption{The measured transmission spectrum of WASP-6b obtained from the G600B, G600RI, \textit{TESS}, STIS, G141 and \textit{Spitzer} datasets.}
\label{reduced_transpec}
\end{figure}
\section{Correcting for Stellar Heterogeneity}\label{stelact}
Stellar activity leads to the presence of heterogeneities on stellar surfaces through the magnetically driven formation of cooler regions known as star spots and hotter regions known as faculae. The presence of spots (or faculae) on the surface of a star results in a wavelength dependent variation in the stellar baseline flux due to the respective differences in the emission profiles of the relatively cool spot (or relatively hot faculae) and the stellar surface itself. As the stellar baseline flux is crucial in determining transit depth, the presence of an unocculted star spot during a transit observation will necessarily produce a wavelength dependent variation in the measured transit depth \citep{Rack18, Rack19}. If significant enough, this variation can produce an artificial slope in the optical region of the final measured transmission spectrum, potentially mimicking the effects of haze in the atmosphere \citep{Pont08, Sing11b, Mccu14, Alam18, Pinh18}. These wavelength dependent variations can also impact individual spectral features due to the differential emission of specific stellar lines. Previous studies have displayed small decreases in the amplitude of Na {\sc{i}} absorption following a stellar heterogeneity correction (e.g. \citealt{Sing11b, Alam18}), however this effect is typically secondary to the artificially induced optical slope.
To estimate the impact surface stellar heterogeneities may have on our observations we obtained a proxy of the magnetic activity level of WASP-6 using a measurement of $\log(R^\prime_{HK})$. This value has been previously quoted without uncertainties as -4.741 in \citet{Sing16}, however analysis of the emission cores of the Ca {\sc ii} H and K lines in the HARPS spectra of \citet{Gill09} results in a direct measurement of -4.511 $\pm$ 0.037, indicating that WASP-6 is a moderately active star compared to the broader population of cool stars \citep{Saik18}. We therefore endeavour to account for the effects of unocculted star spots following the methodology of \citet{Alam18}.
\subsection{Photometric Monitoring of WASP-6}\label{photom}
We estimate the long baseline variability of WASP-6 by considering all 18,317 images from the \textit{TESS} observations previously described in Section \ref{tessobs} in addition to 435 $R$-band images from the Tennessee State University 14-inch Celestron \textit{Automated Imaging Telescope} (\textit{AIT}) taken from September 2011 to January 2019 (Figure \ref{spotgps}). Initially, we also incorporated 738 $V$-band images taken from November 2013 to July 2018 as part of The Ohio State University \textit{All-Sky Automated Survey for Supernovae} (\textit{ASAS-SN}) \citep{Shap14, Jaya18} into our photometric monitoring dataset as in \citet{Alam18}. However, on comparing the contemporaneous \textit{ASAS-SN} and \textit{TESS} data we find a $\sim$4 times larger photometric scatter in the \textit{ASAS-SN} dataset compared to the more precise \textit{TESS} sample and, as such, exclude it from our analysis in order to avoid influencing the variability amplitude estimation with such a noise-dominated dataset.
\subsection{The Stellar Rotation Period}\label{period}
In order to perform an accurate fit to the photometric monitoring data, it is necessary to have a measurement of the stellar rotation period. However, a range of rotation periods have been reported for WASP-6. In particular, \citet{Jord13} find a period of 16 $\pm$ 3 d based on the $v \sin I$ = 2.4 $\pm$ 0.5 km $\textrm{s}^{-1}$ measurement from \citet{Doyl13}, \citet{Niko15} determine a period of 23.6 $\pm$ 0.5 d from a portion of their \textit{AIT} photometric monitoring, and by tracking transit star spot crossings \citet{Treg15} find a period of 23.80 $\pm$ 0.15 d, assuming the star had rotated only once between successive observed crossings.
We also perform a measurement of this rotation period by virtue of the very high cadence \textit{TESS} observations. Even from an initial inspection of the light curve shown in Figure \ref{spotgps}, a clear sinusoidal variation can be seen. In order to determine that this variation is not due to an instrumental effect we inspect the light curves and background flux of the four closest neighbouring stars to WASP-6 with \textit{TESS} light curve observations. We find that none of these stars exhibit the same sinusoidal variation as WASP-6, whilst they all exhibit similar variations in their background flux. To determine the rotation period itself, we perform a least-squares minimisation using a simplistic sinusoidal model on the data with all transit events removed. This resulted in an inferred period of 12.18 $\pm$ 0.05 d.
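A sketch of this simplistic fit, assuming hypothetical out-of-transit monitoring arrays \texttt{t\_mon} and \texttt{f\_mon} and rough placeholder starting values:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amp, period, phase, offset):
    """Single-sinusoid variability model."""
    return amp * np.sin(2.0 * np.pi * t / period + phase) + offset

p0 = [1e-3, 12.0, 0.0, 1.0]  # rough initial guesses
popt, pcov = curve_fit(sinusoid, t_mon, f_mon, p0=p0)
period, period_err = popt[1], np.sqrt(pcov[1, 1])
\end{verbatim}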
Even though this method of model fitting is quite rudimentary, the determined period is clearly in contradiction with current estimates of the stellar rotation period. This contradiction suggests that the variability observed is likely not that of a single spot feature rotating with a period equal to that of the stellar rotation period. Alternatively, the perceived \textit{TESS} period can be explained by the spot coverage during the \textit{TESS} epoch being concentrated on opposite hemispheres of the star, rather than one single hemisphere. During a period of \textit{AIT} photometry performed shortly after the \textit{TESS} observations, from September 2018 to January 2019, we find a standard deviation of 3.8 mmag, in contrast to previous seasons where this reached up to 8.1 mmag. This reduced variability further supports the measured \textit{TESS} period being a result of hemispherically varying star spot coverage and not intrinsic to the \textit{TESS} instrument itself. Further high-quality photometric monitoring will likely be necessary to fully resolve the discrepancy between these observations. For subsequent analysis, however, we adopt the stellar rotation period of 23.6 $\pm$ 0.5 d from \citet{Niko15} as this estimate was made over much longer timescales compared to the estimates of \citet{Jord13} and \citet{Treg15}.
\subsection{Modelling and Correction of Unocculted Star Spots}\label{stelcorr}
The variability of WASP-6 was modelled following the methodology of \citet{Alam18}. We perform a Gaussian process (GP) regression model fit to the photometric monitoring data constructed with a three component kernel which models: the quasi-periodicity of the data, irregularities in the amplitude, and stellar noise. A gradient based optimisation routine was used to locate the best-fit hyperparameters and a uniform prior was placed on the stellar rotation period, centred on the value of 23.6 $\pm$ 0.5 d from \citet{Niko15} with a width three times that of the standard deviation. The \textit{TESS} bandpass ranges from 0.6-1.0 $\mu$m and is less susceptible to activity-induced photometric variations than the \textit{AIT} $R$-band observations. This should not affect the wavelength dependence of our determined spot correction, however, as the estimated variability amplitude is ultimately used as a reference to normalise the true model wavelength-dependent correction factor (Equation \ref{wdcf}). Despite this, the discrepancy of the measured \textit{TESS} period from the measured period in other studies \citep{Jord13, Niko15, Treg15}, and the reduced variation in a subset of \textit{AIT} data described in Section \ref{period}, does indicate that the variability of the star as a whole was also lower during this epoch. Because the variability amplitude is crucial in determining the spot correction, we opt to perform separate fits to the \textit{TESS} and \textit{AIT} datasets. To avoid influencing the GP fitting with the lower variance \textit{AIT} data, we exclude 41 measurements obtained shortly after the \textit{TESS} epoch which correspond to the subset described in Section \ref{period}. Due to the large size of the \textit{TESS} dataset ($\sim$18,000 data points) we bin the data down by a factor of 10 in order to make the GP fitting computationally tractable.
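A kernel in this spirit can be sketched with \texttt{George} as below; the hyperparameter values are placeholders and this is not the exact parametrisation of \citet{Alam18}:
\begin{verbatim}
import numpy as np
import george
from george import kernels

# Quasi-periodic term (periodic kernel modulated by a squared
# exponential envelope) plus a fitted white-noise jitter term.
k_qp = (0.01 * kernels.ExpSine2Kernel(gamma=10.0,
                                      log_period=np.log(23.6))
             * kernels.ExpSquaredKernel(100.0))
gp = george.GP(k_qp, white_noise=np.log(1e-6),
               fit_white_noise=True)
gp.compute(t_mon, f_err)  # monitoring times and flux errors
\end{verbatim}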
Whilst the \textit{TESS} data are well sampled and more precise than the \textit{AIT} data, we may be perceiving a lower level of variability due to the \textit{TESS} bandpass or the lower intrinsic variability of WASP-6 during the \textit{TESS} epoch (Section \ref{period}). Comparatively, the \textit{AIT} data have a much broader temporal coverage and could therefore be more indicative of the longer-term variability of WASP-6, though as there are no measurements contemporaneous with the \textit{TESS} dataset their accuracy is not guaranteed. The \textit{TESS} and \textit{AIT} model fits therefore provide more conservative and more realistic estimates of the true stellar variability, respectively. All such fits to the photometric monitoring data are displayed in Figure \ref{spotgps}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{spotgps.png}
\vspace*{-5mm}
\caption{Photometric monitoring and modelling of the stellar variability of WASP-6. \textit{Left}: \textit{AIT} monitoring data prior to the \textit{TESS} epoch (purple dots) with the best fit GP model represented by the solid black line; the shaded area represents the 1$\sigma$ confidence region. Additional vertical lines are plotted corresponding to the best fit transit central times of each observation as shown in Table \ref{wlcparams}; the broader green region nearest the latest observations corresponds to the full \textit{TESS} epoch. \textit{Right:} Unbinned (grey) and binned (cyan) \textit{TESS} monitoring data with the best fit GP model represented by the solid black line; the shaded area represents the 1$\sigma$ confidence region. For both the \textit{AIT} and \textit{TESS} datasets the flux has been normalised such that the maximum stellar flux obtained from their respective GP model fits corresponds to unity.}
\label{spotgps}
\end{figure*}
We are then able to correct for the unocculted spots in the transit light curves following \citet{Huit13}. Under the assumption that there is always some level of spot coverage on the stellar surface, the maximum observed stellar flux does not correspond to the flux emitted by an entirely unspotted surface. Using the amplitude of the GP fit to both the \textit{TESS} and \textit{AIT} photometric monitoring data we determine different estimates for the unspotted stellar flux $F'$ = max($F$)+$k\sigma$, where $F$ is the observed photometric monitoring data, $\sigma$ is the dispersion of these photometric measurements, and $k$ is a value fixed to unity. Whilst an accurate value of $k$ can be difficult to determine, $k$ = 1 has been shown to be suitable for active stars \citep{Aigr12}. Furthermore, varying the chosen value of $k$ does not significantly influence the wavelength dependence of the correction and mainly influences the offset of the transmission spectrum baseline \citep{Alam18}. For each estimate, the fractional dimming due to stellar spots was then calculated as $f_\textrm{norm} = \overline{F/F'}$, giving the amplitude of the spot correction at the variability monitoring wavelength as $\Delta f_0$ = 1 - $f_\textrm{norm}$.
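In code, this step reduces to a few lines; \texttt{F} is the hypothetical array of monitoring fluxes:
\begin{verbatim}
import numpy as np

k = 1.0                                # fixed to unity
F_unspotted = F.max() + k * F.std()    # F' = max(F) + k*sigma
f_norm = np.mean(F / F_unspotted)      # fractional dimming
delta_f0 = 1.0 - f_norm                # correction amplitude
\end{verbatim}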
In order to determine each wavelength dependent spot correction we must compute the wavelength-dependent correction factor shown in \citet{Sing11}:
\begin{equation}\label{wdcf}
f(\lambda ,T) = \bigg{(} 1- \frac{F_{\lambda ,T_{\textrm{spot}}}}{F_{\lambda ,T_{\textrm{star}}}} \bigg{)} \Bigg{/} \bigg{(} 1- \frac{F_{\lambda_{0} ,T_{\textrm{spot}}}}{F_{\lambda_{0} ,T_{\textrm{star}}}} \bigg{)}
\end{equation}
where $F_{\lambda ,T_{\textrm{spot}}}$ is the wavelength dependent stellar flux at temperature $T_\textrm{spot}$, $F_{\lambda ,T_{\textrm{star}}}$ is the wavelength dependent stellar flux at temperature $T_\textrm{star}$, $F_{\lambda_{0} ,T_{\textrm{spot}}}$ is the stellar flux at the wavelength of the photometric monitoring data at temperature $T_\textrm{spot}$, and $F_{\lambda_{0} ,T_{\textrm{star}}}$ is the stellar flux at the wavelength of the photometric monitoring data at temperature $T_\textrm{star}$. In order to determine the stellar and spot fluxes described we use the \texttt{ATLAS} stellar model described in Section \ref{obs}. The spot model differs from the stellar model only in temperature, being 1500 K cooler, as assumed from an empirically determined relation \citep{Berd05}. Finally, we compute wavelength dependent spot corrections based on both the \textit{AIT} and \textit{TESS} photometry following $\Delta f = \Delta f_0 \times f(\lambda ,T)$ (Figure \ref{spotcorrfig}).
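A sketch of Equation \ref{wdcf}, assuming hypothetical model flux arrays \texttt{F\_star} and \texttt{F\_spot} on a common wavelength grid \texttt{wave}, with \texttt{wave0} the monitoring wavelength:
\begin{verbatim}
import numpy as np

def correction_factor(wave, F_star, F_spot, wave0):
    """Wavelength-dependent correction factor defined above."""
    ratio = 1.0 - F_spot / F_star
    i0 = np.argmin(np.abs(wave - wave0))  # monitoring wavelength
    return ratio / ratio[i0]

# delta_f = delta_f0 * correction_factor(wave, F_star, F_spot,
#                                        wave0)
\end{verbatim}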
Each spot correction was then independently applied to both the white and spectrophotometric light curves using:
\begin{equation}
y_\textrm{corr} = y + \frac{\Delta f}{(1-\Delta f)}\overline{y_{\textrm{oot}}}
\end{equation}
where $y_\textrm{corr}$ is the corrected light curve flux, $y$ is the uncorrected flux, and $\overline{y_{\textrm{oot}}}$ is the out-of-transit mean flux. These corrected light curves, informed by either the \textit{TESS} or \textit{AIT} photometry, were then refit following the same method as demonstrated in Section \ref{lightcurves} and are hereafter defined as the \textit{TESS} corrected or \textit{AIT} corrected datasets. Both the \textit{TESS} and \textit{AIT} corrected G600B spectrophotometric light curves exhibited offsets comparable to the uncorrected dataset (Section \ref{slcs}), of $\Delta R_\textrm{p}/R_* $ = 0.00244 and 0.00242 respectively, and thus similar vertical shifts were performed. All best fit parameters from the white light curve fits are displayed in Tables \ref{bestfitpars} and \ref{wlcparams}, and all best fit spectrophotometric transit depths are displayed in Tables \ref{slcparams} and \ref{slcparams2}.
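For completeness, the correction itself is a one-line operation per light curve; the \texttt{oot\_mask} selecting the out-of-transit points is a hypothetical name:
\begin{verbatim}
import numpy as np

def spot_correct(y, delta_f, oot_mask):
    """Apply the unocculted spot correction given above."""
    y_oot = np.mean(y[oot_mask])  # out-of-transit mean flux
    return y + delta_f / (1.0 - delta_f) * y_oot
\end{verbatim}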
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{spotcorrection_2.png}
\vspace*{-5mm}
\caption{Calculated spot corrections based on the \textit{TESS} (teal, bottom) and \textit{AIT} (purple, top) photometric data. Regions of wavelength coverage for all observations performed in this study are also shown; the photometric \textit{TESS} and \textit{Spitzer} data points are represented as lines at the centre of their respective bandpasses.}
\label{spotcorrfig}
\end{figure}
\section{Discussion}\label{disc}
The observed transmission spectrum of WASP-6b reveals a variety of spectroscopic features present in both the uncorrected and spot corrected analyses (Figure \ref{grid_transpec}). In particular, the broad absorption feature at 1.4 $\mu$m indicates the presence of H$_2$O in the atmosphere. Additionally, narrow band absorption features at 0.589 and 0.767 $\mu$m due to Na {\sc i} and K {\sc i} are also evident in the optical. Finally, a distinct increase in transit depth across optical wavelengths is seen, indicative of a scattering haze and in agreement with \citet{Niko15}. The primary difference between the uncorrected and spot corrected datasets is the presence of a vertical offset across the full wavelength range. This offset is not wavelength independent, however, and the spot correction acts to slightly reduce the gradient of the optical slope. This wavelength dependence is clearly identified by the difference in transit depth between the uncorrected and \textit{AIT} corrected datasets at the shortest wavelength bin compared to that at the longest wavelength (Figure \ref{grid_transpec}).
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[\label{Full}]{%
\includegraphics[clip, width=\textwidth]{grid_transpec.png}}
\vspace{-41pt}
\subfloat[\label{Full}]{%
\includegraphics[clip, width=\textwidth]{grid_transpec_zoomed.png}}
\vspace*{-5mm}
\caption{\textit{Top}: The uncorrected (orange circles) and \textit{AIT} spot corrected (purple triangles) transmission spectra of WASP-6b as determined from the performed G600B, G600RI, G141, \textit{TESS}, and archival STIS and \textit{Spitzer} observations with the best fit models from the \citet{Goya18} forward grid. For reasons of clarity the \textit{TESS} spot corrected dataset is not shown; however, the best fit model is displayed in order to demonstrate the differences in transit depth. \textit{Bottom}: As in the top panel, except zoomed in to the wavelength region spanning the Na {\sc i} and K {\sc i} lines.}
\label{grid_transpec}
\end{figure*}
\subsection{Archival Data Comparisons}\label{archivalcomparison}
The transmission spectrum of WASP-6b had already been measured using the available \textit{HST} STIS and \textit{Spitzer} IRAC datasets \citep{Niko15}. In order to compare our independent reduction against these results we overplot the uncorrected transit depths from this study with those from the prior published study (Figure \ref{stiscomp_transpec}). The different reductions agree quite well, with all measurements within 1$\sigma$ of one another. A minor discrepancy in transit depth is seen for the longest wavelength STIS bins and the \textit{Spitzer} photometry. These discrepancies are likely due to the slightly different measured system parameters which were held fixed during the independent fittings, in addition to slight differences in the adopted stellar limb darkening parameters. The error bars for the reduction performed in this study are larger than those from the original reduction, primarily due to the difference between the model marginalisation and Gaussian process approaches towards light curve fitting.
As the STIS and \textit{VLT} FORS2 datasets have a broad overlapping wavelength range we reproduce the \textit{VLT} FORS2 transmission spectrum using an identical wavelength binning to the \textit{HST} STIS measurements to facilitate a comparison between the results (Figure \ref{stiscomp_transpec}). It is evident from this comparison that whilst our results agree very well at the shortest and longest wavelengths, there is a small disparity in the measurements centred around the Na {\sc i} absorption line. We calculate a weighted average transit depth across 5 wavelength bins centred on the Na {\sc i} absorption line for the G600RI dataset and the STIS dataset, resulting in values of $R_\textrm{p}/R_*$ of 0.14628$\pm$0.00031 and 0.14520$\pm$0.00043 respectively. We exclude the G600B dataset from this calculation to avoid any bias due to the applied vertical shift described in Section \ref{lightcurves}.
As the offset reduces proportionally with separation from the Na {\sc i} line centre, this signal could be indicative of an observation of the pressure-broadened wings of the full Na {\sc i} feature in the FORS2 datasets. Such wings have recently been definitively observed in the atmosphere of the hot Jupiter WASP-96b \citep{Niko18}. Given these wings are not present in the STIS dataset, this could suggest we are observing variability in the atmosphere of WASP-6b. However, an instrumental or systematic origin for this offset cannot be excluded, particularly as the FORS2 observations are taken from the ground where systematic variations are not as well understood and are harder to model. The possibility that this discrepancy has been caused by the STIS observations in particular also cannot be excluded, as there exists robust evidence that systematics in STIS observations resulted in a spurious detection of K in WASP-31b \citep{Gibs17, Gibs19}. The true cause of the discrepancy, be it physical or systematic, cannot be determined with these data and additional observations at higher signal to noise and over long timescales will be required to investigate this further.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{stiscomparison.png}
\vspace*{-5mm}
\caption{\textit{Top}: A comparison of the measured STIS and \textit{Spitzer} transit depths from this study (grey stars/brown crosses) and those published in \citet{Niko15} (teal squares). A small wavelength offset has been added to the literature datasets for clarity. \textit{Middle}: The measured uncorrected transit depths of the STIS (grey stars) dataset in comparison to the G600B (blue circles) and G600RI (orange squares) datasets, binned down to an identical resolution where possible. \textit{Bottom}: Differenced transit depths following subtraction of the STIS dataset from the G600B and G600RI datasets, a slight disparity is seen within the Na {\sc i} line.}
\label{stiscomp_transpec}
\end{figure}
\subsection{Goyal Forward Models}\label{goyalmodels}
In order to explore the bulk properties of WASP-6b we fit the observed transmission spectrum to a grid of forward models \citep{Goya18, Goya19a}. These models are generated using the 1D radiative-convective equilibrium code \texttt{ATMO}. Initially we opted to use the more recent generic model grid \citep{Goya19a} in our analysis as it allowed for a broader coverage of the parameter space than the WASP-6b specific grid from \citet{Goya18}. However, as sub-solar metallicity forward models have yet to be implemented into the generic grid, our ability to accurately fit the observed data was ultimately restricted. As such, we used the WASP-6b specific grid \citep{Goya18} in order to cover the sub-solar metallicity range of parameter space.
With the arrival of the \textit{Gaia} Data Release 2 \citep{Gaia18} the distance to WASP-6 has been more accurately determined as $d = 197.1\substack{ +0.4 \\ -1.6 }$ pc \citep{Bail18}, significantly different to the prior measurement of 307 pc. This re-estimation has significant effects on the inferred stellar radius of WASP-6, which in turn affects the estimation of planetary radius from the observed transit depths. A mismeasurement of the planetary radius naturally leads to a mismeasurement of the planetary gravity, a currently fixed parameter for the planet specific forward model grid of \citet{Goya18}. Following the methodology of \citet{Morr19}, we performed spectral energy distribution (SED) fitting on WASP-6 using NUV, optical and NIR broadband photometry. The fitted integrated flux allows us to measure its luminosity, and the shape of the SED determines its so-called $T_{\rm SED}$ \citep[see][for details]{Morr19}. By combining this with the revised distance measurement, we obtained an updated estimate of the radius of WASP-6, and subsequently the radius of WASP-6b. This radius results in a new value for the planetary gravity of $g = 10.55\substack{ +0.19 \\ -0.39 }$ ms$^{-2}$, notably different from the previous estimate of $g = 8.71 \pm 0.55$ ms$^{-2}$ \citep{Gill09}. Changes in gravity can have significant effects on the computed forward models \citep{Goya18, Goya19a}; we therefore fit our observed data with an updated forward model grid for WASP-6, identical to the original of \citet{Goya18} except recomputed for a value of $g=10.5$.
The model grid used consists of 3920 different transmission spectra varying in temperature, metallicity, C/O ratio, scattering haze and uniform cloud. The scattering haze is implemented through the use of a haze enhancement factor $\alpha_{\textrm{haze}}$ which simulates an increase in the total scattering of small aerosol particles in the atmosphere. Similarly, the uniform cloud is implemented through a variable cloudiness factor $\alpha_{\textrm{cloud}}$, which produces the effects of a cloud deck through a modification to the wavelength dependent scattering using the strength of grey scattering due to H$_2$ at 350 nm. Irrespective of the true cloud composition, implementing a grey cloud is appropriate for our observations as at the observed wavelengths Mie scattering predicts essentially grey scattering profiles \citep{Wake15}. Further details on the grid parameters, including their ranges and implementations, can be found in \citet{Goya18}.
Each model spectrum was fit in turn by producing a binned version of the spectrum which matches the selected spectrophotometric bands from the data reduction, averaging to produce a single value of transit depth in each bin. A $\chi^2$ measurement between the observed and model data was then computed following a least-squares minimisation scheme with a varying wavelength-independent vertical offset. These fits were performed for the uncorrected and both spot corrected transmission spectra, and the best fitting models for each are presented in Figure \ref{grid_transpec}.
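Because the vertical offset enters linearly, it can be minimised analytically for each model; a sketch follows, where \texttt{bin\_model} is a hypothetical helper that bins a model spectrum to the observed bands:
\begin{verbatim}
import numpy as np

def chi2_with_offset(model_depths, obs_depths, obs_errs):
    """Chi-squared against a binned model, minimised analytically
    over a wavelength-independent vertical offset."""
    w = 1.0 / obs_errs ** 2
    offset = np.sum(w * (obs_depths - model_depths)) / np.sum(w)
    return np.sum(w * (obs_depths - model_depths - offset) ** 2)

# best = min(grid, key=lambda m: chi2_with_offset(
#     bin_model(m), obs_depths, obs_errs))
\end{verbatim}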
\begin{figure}
\centering\captionsetup[subfloat]{labelfont=bf}
\subfloat[\label{Uncorrected}]{%
\includegraphics[clip, width=\columnwidth]{goyalgrid_uncorr.pdf}}
\vspace{-10pt}
\subfloat[\label{TESS Corrected} ]{%
\includegraphics[clip, width=\columnwidth]{goyalgrid_acorr_tess.pdf}}
\vspace{-10pt}
\subfloat[\label{AIT Corrected}]{%
\includegraphics[clip, width=\columnwidth]{goyalgrid_acorr_ait.pdf}}\\
\caption{\label{all_gridplots} $\chi^2$ contour maps produced when fitting the complete transmission spectrum of WASP-6b to forward model grids of \citet{Goya18} considering \textbf{(a)} no correction for stellar heterogeneity, \textbf{(b)} correction using \textit{TESS} photometry, and \textbf{(c)} correction using \textit{AIT} photometry. Shaded regions indicate models in the parameter space which are at least $N-\sigma$ from the best fit model. Preferences towards the lowest metallicity, highest haze enhancement factors, and lower C/O ratios are present for the uncorrected dataset, whereas this is not the case for the \textit{TESS} or \textit{AIT} spot corrected datasets.}
\end{figure}
For the uncorrected and the \textit{TESS} corrected transmission spectra we find a best fitting model of $T = 1334\,$K, sub-solar metallicity [M/H] = $-1.0$, slightly super-solar C/O ratio of [C/O] = $0.70$, moderate hazes $\alpha_\textrm{haze} = 10$ and no evidence of clouds $\alpha_\textrm{cloud} = 0$, corresponding to $\chi^2_\nu$ = 1.10 and 0.98 respectively. For the \textit{AIT} corrected transmission spectrum, however, we find a best fitting model of $T = 1334\,$K, sub-solar metallicity [M/H] = $-1.0$, solar C/O ratio of [C/O] = $0.56$, moderate hazes $\alpha_\textrm{haze} = 10$ and no evidence of clouds $\alpha_\textrm{cloud} = 0$, corresponding to a $\chi^2_\nu$ = 0.99. To explore the discrepancies and commonalities between the grid fits to the uncorrected and corrected datasets we produce $\chi^2$ contour maps \citep{Madh09} as shown in Figure \ref{all_gridplots}. We begin by constructing 2D grids of every possible pair of model parameters. In each separate grid, and at every individual grid point dictated by the resolution of the model parameters, we vary all the remaining model parameters in turn and determine the model with the smallest $\chi^2$. Across these new $\chi^2$ spaces we determine contours which correspond to models in the parameter space which are $N$-$\sigma$ from the overall best fit model, following \citet{Goya18}.
The primary differences between the datasets are the existence of subsets of model fits more favoured by the lowest metallicities and the highest haze enhancement factors for only the uncorrected dataset. These subsets are present because the wavelength dependence of stellar heterogeneity acts to increase the gradient of the optical slope in the observed data, an effect that is somewhat degenerate with lower metallicity and hazy atmospheres \citep{Goya18}. Whilst the lowest metallicities and highest haze enhancement factors are not as favoured in tandem, they both correspond to model fits favouring a lower C/O ratio. This is because both low metallicity and a high haze enhancement factor act to suppress the H$_2$O absorption features beyond the constraints set by the G141 dataset, and as such the C/O ratio must be reduced in order to re-inflate the H$_2$O features to match the observations. In summary, the $\chi^2$ contour map for even the conservative \textit{TESS} corrected dataset indicates that these highest haze enhancement factors, lowest metallicities, and lowest C/O ratios are likely effects of stellar heterogeneity on the transmission spectrum of WASP-6b and not truly symptomatic of its atmosphere. However, a moderate haze enhancement of at least $\alpha_\textrm{haze} = 10$ is strongly constrained, and a preference towards sub-solar metallicities is still evident, independent of the addition of a spot correction.
Whether or not a spot correction is used, temperatures of 1334$\,$K are primarily preferred for each grid fit. Comparatively, the measured dayside temperatures for WASP-6b are 1235$\substack{ +70 \\ -77 }\,$K and 1118$\substack{ +68 \\ -74 }\,$K from the 3.6 and 4.5 $\mu$m \textit{Spitzer} IRAC channels respectively \citep{Kamm15}. As these values are within $\sim1\sigma$ they do not suggest a disagreement; however, it is worthwhile assessing the source of the slight preference of the grid model fits towards limb temperatures higher than that measured from the dayside. As the model grid varies in temperature steps of 150$\,$K the model cannot settle on a precise temperature estimate and is therefore likely to be somewhat discrepant from the true value. However, there are models at a temperature $T = 1184\,$K which should in theory match the true temperature of WASP-6b's limb more accurately. Looking to Figure \ref{all_gridplots}, the preferred temperature is strongly constrained below the 1484$\,$K grid models, as at approximately this temperature absorption features due to TiO and VO start to become significant in the optical \citep{Fort08} and are strongly disfavoured by the observed FORS2 and STIS datasets. As temperature acts to increase the gradient of the optical slope \citep{Goya18} it is also degenerate with the effects of stellar heterogeneity. Therefore the models at 1334$\,$K are the most favoured as this is the highest temperature, and thus steepest slope, that the model grid can produce without generating conflicting TiO and VO features. Figure \ref{all_gridplots} demonstrates this as the model preferences for the highest temperatures are slightly reduced upon application of the spot corrections, with the most significant difference being for the \textit{AIT} corrected dataset. As the best fit temperature for the \textit{AIT} correction is still beyond what we would expect given the dayside temperatures already reported, it could even suggest that the spot correction used has been underestimated. However, a subset of $1184\,$K models are comfortably within the 2$\sigma$ region for every dataset and therefore conclusively determining the true effect of stellar heterogeneity on the best fit model temperature will require further investigation with observations at a higher signal to noise.
To determine the significance of the perceived detections of the Na {\sc i} and K {\sc i} features we begin by performing a quadratic interpolation of the baseline of the best fit model to each dataset from 0.4-0.9 $\mu$m, using regions of the optical slope with no clear absorption features as anchors for the interpolation. The interpolation then served as a comparison against the weighted mean value of the G600B, G600RI, STIS 430 and STIS 750 data contained within the Na {\sc i} and K {\sc i} lines. Detection significances are summarised in Table \ref{nak_sigma}; these values indicate at least a 3$\sigma$ detection of the Na {\sc i} and K {\sc i} narrow line signatures in the atmosphere of WASP-6b, irrespective of an applied spot correction.
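A sketch of this procedure for the Na {\sc i} line, with the anchor regions and the data arrays hypothetical:
\begin{verbatim}
import numpy as np

# Quadratic baseline through feature-free anchor regions of the
# best fit model's optical slope.
coeffs = np.polyfit(wave_anchor, depth_anchor, deg=2)
baseline = np.polyval(coeffs, wave_na)  # baseline at the Na I bins

# Weighted mean of the in-line transit depths versus the baseline.
w = 1.0 / err_na ** 2
d_mean = np.sum(w * depth_na) / np.sum(w)
d_err = np.sqrt(1.0 / np.sum(w))
significance = (d_mean - np.mean(baseline)) / d_err
\end{verbatim}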
\afterpage{
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{aitretrieval.png}
\vspace*{-5mm}
\caption{The measured \textit{AIT} spot corrected transmission spectrum of WASP-6b (white diamonds) in addition to the best fit \texttt{ARC} retrieval model (yellow line) and its corresponding 1, 2 and 3$\sigma$ bounds (purple shaded regions).}
\label{retrieval_spectrum}
\end{figure*}
\input{retrievaltable_v2}
}
\subsection{\texttt{ATMO} Retrieval Modelling}\label{retrieval}
The previously available transmission spectrum of WASP-6b has been the subject of multiple retrieval based model analyses thus far. First, \citet{Bars17} utilised the \texttt{NEMESIS} retrieval code to demonstrate that the atmosphere of WASP-6b is best described by Rayleigh scattering clouds at high altitudes. In addition, \citet{Pinh18} performed a retrieval using the \texttt{AURA} code, demonstrating that the atmosphere of WASP-6b is best described by a combination of the effects of stellar heterogeneity and atmospheric hazes. However, in an effort to fit the widely disparate STIS and \textit{Spitzer} points, this retrieval predicts a very low H$_2$O abundance, a claim that has not been possible to verify or refute until the recent acquisition of the \textit{HST} WFC3 data in this study.
\begin{table}
\centering
\begin{tabular}{l c c}
\hline
\hline
Dataset & Na {\sc i} Significance & K {\sc i} Significance \\
\hline
Uncorrected & 4.2 $\sigma$ & 3.5 $\sigma$ \\
\textit{TESS} Corrected & 3.9 $\sigma$ & 3.2 $\sigma$ \\
\textit{AIT} Corrected & 3.9 $\sigma$ & 3.4 $\sigma$ \\
\hline
\end{tabular}
\caption{Sigma confidence levels of the Na {\sc i} and K {\sc i} line detections with respect to the model baseline level. }
\label{nak_sigma}
\end{table}
Due to the wealth of new data available with the addition of the FORS2, WFC3 and \textit{TESS} observations we perform our own atmospheric retrieval on the uncorrected and spot corrected datasets using the \texttt{ATMO} Retrieval Code (\texttt{ARC}), which has already been used for a variety of transmission spectra to date \citep{Wake17b, Wake18, Niko18, Spak18, Evan18}. For each retrieval model, equilibrium chemistry was calculated on the fly, with input elemental abundances initially assumed to be solar. We allowed for non-solar scaled elemental compositions by fitting the carbon, oxygen, sodium, and potassium elemental abundances ([C/H], [O/H], [Na/H], [K/H]), which can potentially all be constrained by the transmission spectrum. We fit all remaining species by varying a single quantity for the trace metallicity, [M$_\textrm{trace}$/H]. Throughout this study, all abundances are quoted as [X/H], which is logarithmic relative to the Sun, with all solar abundances taken from \citet{Aspl09}. The resulting chemical network consisted of 175 neutral gas phase species, 93 condensates, and the ionised species e$^{-}$, H$^+$, H$^{-}$, He$^+$, Na$^+$, K$^+$, C$^+$, Ca$^+$, and Si$^+$. By varying both C and O separately, we mitigate several important modelling deficiencies and assumptions compared to varying the C/O ratio as a single parameter \citep{Drum19}. For the spectral synthesis, we included the spectroscopically active molecules of H$_2$, He, H$_2$O, CO$_2$, CO, CH$_4$, NH$_3$, Na, K, Li, TiO, VO, FeH, and Fe. The temperature was assumed to be isothermal, fit with one parameter, and we also included a uniform haze fit with the enhancement factor. A differential-evolution MCMC was used to infer the posterior probability distribution, which was then marginalised \citep{East13}. We ran twelve chains, each for 30,000 steps, discarding the burn-in before combining them into a single chain. Uniform priors were adopted, with the log$_{10}$ abundances allowed to vary between -12 and -1.3.
The resulting best fit retrieval models for the uncorrected, \textit{TESS} corrected, and \textit{AIT} corrected datasets all provide good fits to the data, with $\chi^2$ = 75, 71, and 73 respectively for 86 degrees of freedom. We show a visual representation of the retrieval for the \textit{AIT} corrected dataset in Figure \ref{retrieval_spectrum} and the mean values for each individual retrieval are shown in Table \ref{retrieval_results}. To facilitate comparisons between the uncorrected and corrected datasets we plot the retrieval posteriors for each together in Figure \ref{retrieval_posteriors}. As with the forward model grid fits shown in Section \ref{goyalmodels} there are clear differences between the uncorrected and spot corrected datasets, particularly for the temperature, radius, and haze opacity. The difference in radius is a natural result of performing the spot correction, as this results in a wavelength dependent shift of the transmission baseline to lower transit depths. Given that the square root of the transit depth is $\sqrt{\delta} = R_\textrm{p}/R_*$, and that the stellar radius is fixed during the retrieval, any decrease in the transit depth will subsequently produce a decrease in the estimated planetary radius. In a similar fashion to the forward model grid fits, the highest temperatures and highest levels of haze opacity are favoured by the uncorrected dataset, the cause being the degeneracy between these properties and the effects of stellar heterogeneity on the uncorrected transmission spectrum. Upon performing a spot correction, the best fit temperature and haze opacity fall as the gradient of the optical slope is reduced. However, at least a moderate amount of haze is still required irrespective of the spot correction.
Due to the freedom of the retrieval analyses we were also able to investigate the specific elemental abundances inferred from the measured transmission spectra. Firstly, as the C, O, Na, and K abundances were fit independently throughout the retrieval analysis, the measured metallicity only encompasses the other elemental constituents of the atmosphere. The sub-solar metallicity measured across all retrieval analyses therefore shows that no other substantial absorber is required to fit the measured transmission spectra. The \textit{Spitzer} data points are the only observations sensitive to carbon bearing species in the atmosphere such as CH$_4$, CO and CO$_2$; however, given their non-negligible uncertainties and minimal relative offset the retrieved carbon abundance is largely unconstrained and merely represents an upper limit. This is true across all datasets as the addition of a stellar heterogeneity correction has a marginal effect towards the infrared. We constrain the carbon abundance to sub-solar at 3$\sigma$ for the uncorrected and \textit{AIT} datasets, and at 2$\sigma$ for the \textit{TESS} dataset. Our limit on the carbon abundance suggests that H$_2$O is the primary oxygen-bearing species, and from the observed feature we constrain the oxygen abundance to a sub-solar value, irrespective of a spot correction. For the best fit retrieval model to the \textit{AIT} corrected dataset our oxygen abundance corresponds to a water abundance of log(H$_2$O) = -4.87. Given the lack of WFC3 data available to previous studies of WASP-6b, this water abundance is the first to be informed by an observed water absorption feature in transmission. Furthermore, given the extensive optical data from FORS2 and STIS, this result is robust to previously observed degeneracies between water abundance and reference pressure \citep{Grif14, Pinh18}. Contrasting with oxygen, the Na and K abundances are relaxed to lower values following the application of a spot correction, as the lone Na {\sc i} and K {\sc i} absorption features lie in the optical region where stellar heterogeneity has a significant effect on the observed slope. Upon a reduction in the slope opacity, these abundances must necessarily drop to fit the observed data. Specifically for the \textit{AIT} correction, we see variations in sodium from super-solar, [Na/H] = $1.33^{+0.42}_{-0.67}$, to solar/super-solar, [Na/H] = $0.83^{+0.67}_{-0.80}$, and in potassium from solar/super-solar, [K/H] = $ 0.22^{+0.65}_{-0.74}$, to sub-solar/solar, [K/H] = $ -0.12^{+0.71}_{-0.74}$. Given the measurement precision we cannot explicitly quantify the impact of the correction as both the [Na/H] and [K/H] abundances lie within 1$\sigma$ of their inferred uncorrected abundances. Despite this, the broader shifts of their full retrieved distributions (Figure \ref{retrieval_posteriors}) indicate that neglecting to account for the effects of stellar heterogeneity in future, higher precision, observations may lead to incorrect determinations of their abundances.
As the metallicity we retrieve excludes C, O, Na, and K, we cannot perform a comparison to the [M/H] distributions obtained as part of the forward model analysis in Section \ref{goyalmodels}. However, comparing the retrieved [O/H] to the forward model [M/H] we see similar distributions indicating a sub-solar metallicity. Additionally, whilst the slightly super-solar abundances of [Na/H] and [K/H] do not completely agree with the sub-solar [M/H] the large uncertainties of these distributions indicate that an overall sub-solar metallicity cannot be ruled out.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{retrieval_posteriors_v2.png}
\vspace*{-5mm}
\caption{Retrieval posteriors from the \texttt{ARC} analysis of the uncorrected (orange, dotted line), \textit{TESS} spot corrected (teal, dashed line) and \textit{AIT} spot corrected (purple, solid line) datasets for WASP-6b. The metallicity and abundances of Na, K, C, and O are given with reference to solar values as taken from \citet{Aspl09}. All distributions have been normalised so that their integral is equal to unity.}
\label{retrieval_posteriors}
\end{figure*}
\subsection{WASP-6b In Context}\label{context}
Our determined, spot corrected, oxygen abundance of [O/H] = $-0.84^{+0.40}_{-0.39}$ and sodium abundance of [Na/H] = $0.83^{+0.67}_{-0.80}$ are somewhat discrepant with the determined sub-solar metallicity of the host star of [Fe/H] = $-0.15 \pm 0.09$ \citep{Doyl13}, whilst the potassium abundance is in good agreement at [K/H] = $ -0.12^{+0.71}_{-0.74}$. Variations in these elemental abundances relative to the host star could be indicative of formation history (e.g. \citealt{Ober11}); however, in the case of WASP-6b the current uncertainties are not sufficiently constrained to make such determinations, with all values lying within 2$\sigma$ of the host star metallicity. Further observations of the atmosphere of WASP-6b will be necessary to provide more detailed constraints on these elemental abundances. In particular, due to the presence of carbon-bearing molecular features beyond 2 $\mu$m such as CO, CO$_2$, and CH$_4$, spectroscopic observations with the upcoming \textit{James Webb Space Telescope} (\textit{JWST}) will provide stronger constraints on the carbon abundance, of which this study could only provide an upper limit. This in turn will enable robust constraints on the C/O ratio and progress our understanding of the formation history of WASP-6b.
Irrespective of the application of a stellar heterogeneity correction, both the forward and retrieval models require some level of haze opacity enhancement in order to describe the steep optical slope of the transmission spectrum. In the context of hot Jupiter atmospheres, this haze is often thought of as either photochemically produced, or condensate dust, scattering species within the atmosphere \citep{Marl13}. In the case of the condensate species it is thought that the lofting of particles from deeper atmospheric cloud decks can serve to populate the upper atmosphere and lead to the scattering we observe (e.g. \citealt{ Parm13}). Despite this, the most recent simulations of condensate particle formation in the atmosphere of the hot Jupiter HD 189733b \citep{Bouc05} fail to fully reproduce its observed scattering slope \citep{Lee17, Powe18}. At the temperature of WASP-6b, generation of hydrocarbons through photochemistry was initially thought to be inhibited \citep{Lian04} and whilst sulphur photochemistry may play a role \citep{Zahn09}, it primarily induces a scattering slope below 0.45 $\mu$m, whereas the observed slope of WASP-6b extends further into the optical. However, recent laboratory experiments have shown that hydrocarbons may form not just in cool exoplanet atmospheres \citep{Hors18, He18}, but also in hot atmospheres beyond 1000 K with a sufficiently high [C/O] = 1 \citep{Fleu19}, a possibility our observations cannot definitively rule out. Additionally, the effects of wind-driven chemistry act to homogenise the atmospheres of tidally locked hot Jupiters such as WASP-6b and can lead to significant increases in the abundance of CH$_4$ compared to standard equilibrium models \citep{Drum18a, Drum18b}. Given that photolysis of CH$_4$ can drive the formation of haze precursors \citep{Lavv08}, this increase in abundance may naturally lead to their more efficient production. Furthermore, of the well characterised hot Jupiter atmospheres, WASP-6b and HD 189733b present an interesting comparison as they have similar temperatures, both orbit active stars ($\log(R^\prime_{HK})$ = -4.511 and -4.501 respectively), and both exhibit strong haze scattering slopes across the optical \citep{Sing16}. Recent simulations of HD 189733b by \citet{Lavv17} have shown that the formation of photochemical haze ``soots'' higher in the atmosphere is not excluded and can match its observed transmission spectrum. Moreover, the increased UV flux that these two planets are subject to due to their large host star activity levels is likely acting to enhance the rate of photochemical haze production in their atmospheres \citep{Kawa19}. Possible evidence for this conclusion is seen in the potential trend towards stronger scattering haze signatures with reducing $\log(R^\prime_{HK})$ (increasing activity) observed in the hot Jupiter population study of \citet{Sing16}. An exact determination of whether the haze produced in the atmosphere of WASP-6b is of photochemical origin, condensate dust origin, or a combination of the two, was not possible as part of this study due to their similar opacities at the wavelengths of these observations (e.g. \citealt{Niko15}). In future analyses however, the relative contributions of both photochemical and condensate haze components should be considered in order to describe this observed scattering.
Amongst the population of spectroscopically studied exoplanets, the atmosphere of WASP-6b is one of the haziest. Previous studies of its atmosphere predicted a small-amplitude H$_2$O feature at 1.4 $\mu$m \citep{Niko15, Sing16}, however the feature observed as part of this study is slightly larger than anticipated. This increase is likely due to the seemingly small \textit{Spitzer} transit depths biasing the model estimates prior to the acquisition of the FORS2 and WFC3 datasets. To quantify the size of the H$_2$O feature relative to an assumed clear atmosphere for WASP-6b we determine the scaled amplitude of the water feature following \citet{Wake19}. Specifically, we begin by taking a clear atmosphere forward model from the grid used throughout this paper \citep{Goya18} with: the equilibrium temperature of WASP-6b, solar metallicity, solar C/O ratio, and no haze or cloud opacity components. We then scale this model to fit the data using a model defined as $S_1 = (S_0\times p_0) + p_1$, where $S_0$ is the clear atmosphere model, $p_0$ is the model amplitude scale factor and $p_1$ is a baseline offset. For the \textit{AIT} corrected dataset we determine $p_0$ = 64 $\pm$ 12 per cent, in contrast to the median amplitude across the observed population of $p_0$ = 33 $\pm$ 24 per cent \citep{Wake19}. These new observations indicate that despite the presence of haze, WASP-6b remains a favourable target for atmospheric characterisation, particularly with \textit{JWST}. This potential for \textit{JWST} to characterise hazy hot Jupiters such as WASP-6b is in contrast to those that exhibit flat, cloudy spectra such as WASP-31b \citep{Gibs17} and WASP-101b \citep{Wake17}.
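As a schematic illustration of this amplitude scaling, the two-parameter fit $S_1 = (S_0\times p_0) + p_1$ can be performed with a standard least-squares routine; in the sketch below the wavelength grid, the stand-in clear-atmosphere model and the noise level are arbitrary placeholders rather than the actual WASP-6b data.
\begin{verbatim}
# Illustrative sketch of the feature-amplitude scaling S1 = S0*p0 + p1
# (Wakeford et al. 2019); all arrays are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

wav = np.linspace(1.1, 1.7, 25)                 # WFC3 wavelengths (micron)
S0 = 0.021 + 5e-4 * np.sin(2 * np.pi * wav)     # stand-in clear model
rng = np.random.default_rng(0)
depth = 2e-4 + 0.64 * S0 + rng.normal(0.0, 5e-5, wav.size)
err = np.full(wav.size, 5e-5)                   # uniform uncertainties

def scaled(S0, p0, p1):                         # S1 = S0*p0 + p1
    return S0 * p0 + p1

popt, pcov = curve_fit(scaled, S0, depth, sigma=err, p0=[1.0, 0.0])
print("p0 = %.2f +/- %.2f" % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}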
\section{Conclusions}\label{conc}
We present the most complete optical to infrared transmission spectrum of the hot Jupiter WASP-6b to date utilising new observations performed with \textit{HST} WFC3, \textit{VLT} FORS2 and \textit{TESS} in addition to reanalysed existing \textit{HST} STIS and \textit{Spitzer} IRAC data. The impact of host star heterogeneity on the transmission spectrum was investigated and we correct the observed light curves to account for these effects under different assumptions for the level of stellar activity. All reduced transmission spectra then undergo a retrieval analysis in addition to being fit to a grid of forward atmospheric models.
Across all datasets we find clear evidence for Na {\sc i}, K {\sc i} and H$_2$O within the atmosphere of WASP-6b in addition to a steep increase in transit depth towards the optical. After applying both forward model and retrieval analyses we find that at least a moderate haze enhancement is required to describe the optical slope; however, when neglecting even a conservative stellar heterogeneity correction, higher and potentially erroneous haze enhancement factors are preferred. An analogous effect is also seen in the estimated temperature, where higher and potentially unphysical temperatures are preferred when there is no stellar heterogeneity correction. Both of these effects likely stem from the degeneracy of these properties and the impact of stellar heterogeneity towards increasing the optical slope of the transmission spectrum.
Whilst the precision of current observations is not sufficient to definitively estimate the impact of stellar heterogeneity on the transmission spectrum of WASP-6b, the parameter differences observed upon the application of a stellar heterogeneity correction indicate that its effect should not be neglected for future observations of exoplanetary atmospheres around moderately active stars. Despite the presence of haze in its atmosphere, WASP-6b remains a favourable target for further characterisation. Contemporaneous and broader wavelength measurements of its transmission spectrum with missions such as \textit{JWST} will enable a more detailed characterisation of its atmosphere in addition to precisely determining the effects stellar heterogeneity has on its appearance.
\section*{Acknowledgements}
This work is based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere under European Southern Observatory programme 196.C-0765 in addition to observations associated with program GO-14767 made with the NASA/ESA \textit{Hubble Space Telescope} that were obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-2655. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center / California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Based on observations made with the NASA Galaxy Evolution Explorer. \textit{GALEX} is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. ALC and SM are funded by a UK Science and Technology Facilities Council (STFC) studentship. MKA acknowledges support by the National Science Foundation through a Graduate Research Fellowship. HRW acknowledges support from the Giacconi Prize Fellowship at STScI, operated by AURA. This research has made use of NASA's Astrophysics Data System and the Python modules \texttt{NumPy}, \texttt{Matplotlib}, and \texttt{SciPy}.
\FloatBarrier
\bibliographystyle{mnras}
The existence of an endohedral fullerene, i.e.\ one or several atoms encapsulated in a fullerene molecule, was originally
inferred from an analysis of mass spectra of LaCl$_\text{3}$-impregnated graphite and led to the proposal of
La@C$_\text{60}$ \cite{1}. At present the study of endohedral metallofullerenes M$_x$@C$_n$, $x=1,2,3,4$ and
$n=66, 68, 72, 74,\hdots, 100$, where M are group II and III metals such as Sc, Y, $\hdots$ or lanthanides
Ce, $\hdots$, Lu, is a subject of interdisciplinary research \cite{2} in physics, chemistry and materials
sciences. By now one is able to produce materials where not only single atoms but clusters of atoms are
encapsulated \cite{3}.
Due to charge transfer between the cluster and the surrounding carbon cage it is possible to obtain molecular-like
complexes which do not exist otherwise (i.e.\ in absence of encapsulation) and which have unusual properties.
Not only clusters with metal atoms of
a same kind, such as the Sc$_\text{3}$ trimer in Sc$_\text{3}$@C$_\text{82}$ are produced, but also clusters
composed of different kinds of atoms. A remarkable case is the production of
Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ in crystalline powder form \cite{4}.
The powder crystal is composed of crystallites where the Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ units are
arranged with average space
group symmetry $Fm\overline{3}m$. From spectroscopic and structural
characterization by NMR- and synchrotron X-ray diffraction experiments \cite{4} it follows that the
Sc$_\text{2}$C$_\text{2}$ complex is encaged as a rigid unit with point group symmetry $D_{2h}$ in a
C$_\text{84}$ fullerene of symmetry D$_{2d}$ (isomer III, number 23 \cite{5}). The center of mass of
Sc$_\text{2}$C$_\text{2}$ coincides with the center of mass of the molecule.
The two Sc atoms are located at a distance of $4.29(2)$ {\AA} on the long
$C_\text{2}$ ($S_4$) axis of C$_\text{84}$.
The two C atoms of Sc$_\text{2}$C$_\text{2}$ are located in the plane containing the two $C_\text{2}$ axes
perpendicular to the long axis of the C$_\text{84}$ molecule and
have a calculated distance of $1.28$ {\AA}. This distance lies between
those of typical double and triple carbon bonds, and is consistent with the experimental and calculated C--C
stretching frequency of the C$_\text{2}$ unit (exp.\ $1745$ cm$^{-1}$, calc.\ $1742$ cm$^{-1}$) \cite{6}. In the following we will
speak of this C--C bond as a
C$_\text{2}$ unit or molecule. Indeed low energy Raman spectra \cite{6} on powder
samples of Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ in a temperature range $25$ -- $150$ K (Kelvin)
have revealed the existence of quantized rotational states of the C$_\text{2}$ unit.
The Raman lines' positions
reflect
transitions between energy levels of a C$_\text{2}$ planar quantum rotor in a fourfold static potential due to
the surrounding C$_\text{84}$ cage.
Therefore, one can speak of a quantum gyroscope.
In Ref. \onlinecite{6} the potential parameters of the encaged quantum rotor
were obtained from density functional calculations using the
VASP (Vienna ab initio simulation package) code \cite{9}. The energy levels were then determined from the solution of the Schr\"{o}dinger
equation. Within this approach, the Raman spectra consist of infinitely
sharp lines while experimentally the lines are
broadened and have a characteristic temperature behavior. A reason for this shortcoming is the restriction of
the role of the encapsulating C$_\text{84}$ molecule to a purely static body.
Since the measured transition frequencies are in the range of $10$ -- $80$ cm$^{-1}$ and since the line broadening
is of the order of a few cm$^{-1}$, any involvement of internal vibrational modes
of the C$_\text{84}$ cage as well as of stretching modes Sc--C$_\text{84}$
can be excluded. The latter are of
higher frequencies and have been measured in C$_\text{84}$ and in Sc$_\text{2}$@C$_\text{84}$ by infrared and
Raman techniques \cite{Krause2,Krause3}. However the low-frequency external rotational modes of
the C$_\text{84}$ molecule and their superposition with the transitions of the quantum rotor should be retained:
indeed the encapsulated Sc$_\text{2}$C$_\text{2}$ gyroscope is
dragged by
the classical rotational motion of the C$_\text{84}$ molecule and this dragging will affect the Raman spectrum of the
C$_\text{2}$ unit. It
follows that an experimental probe such as Raman scattering which couples to the encapsulated species in an
endohedral complex, in casu Sc$_\text{2}$C$_\text{2}$, also yields information on the dynamics of the
encapsulating molecule, in casu C$_\text{84}$.
In the present paper we will extend the theoretical interpretation given in Ref. \onlinecite{6} and develop a
unified theory where the quantum
mechanical motion of the Sc$_\text{2}$C$_\text{2}$ complex
is coupled to the thermally excited classical rotational
motion of the C$_\text{84}$ fullerene.
The coupling results from the fact that the long axis of the quantum gyroscope coincides with the $S_4$ axis of
the surrounding C$_\text{84}$ molecule.
In a given crystallite the C$_\text{84}$ molecules are randomly
oriented with their long $C_2$ axis in equivalent $\langle 100 \rangle$ directions of the face-centered cubic (fcc)
unit
cell \cite{4}. We call this situation meroaxial disorder (this terminology seems to be more appropriate than merohedral
disorder).
We start from a model where at low temperature the meroaxially oriented
C$_\text{84}$ molecules in the fcc crystal perform uniaxial rotational diffusions about their long axis.
Such a classical motion can be seen as a
time-dependent modulation of the fourfold potential experienced by the quantum rotor and causes a
temperature-dependent broadening of the quantum levels.
An additional broadening effect is to be expected from the stochastic reorientations of the
C$_\text{84}$ molecules among the meroaxial directions which should become increasingly
important at higher temperature. In
addition, the stochastic reorientations lead to the appearance of a temperature-dependent
quasi-elastic peak in the Raman spectrum.
The content of the paper is as follows. In Section II we write down the Raman scattering law for the
C$_\text{2}$-unit belonging to the Sc$_\text{2}$C$_\text{2}$ complex encapsulated by the C$_\text{84}$ molecule
in Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ fullerite. We start from a single crystal with $Fm\overline{3}m$
structure and static meroaxial disorder of the C$_\text{84}$ molecules. Assuming a quantum mechanical
rotational motion of the Sc$_\text{2}$C$_\text{2}$ complexes and a classical rotational motion of the
C$_\text{84}$ molecules, the dynamic polarizability-polarizability correlation function is decoupled in a
product of correlation functions for the rotational dynamics of Sc$_\text{2}$C$_\text{2}$ and C$_\text{84}$
respectively. The scattering law is obtained as a convolution of these correlation functions
in Fourier space.
Next (Sect.\ III) we calculate the correlation functions, using a quantum mechanical tunneling model for the
Sc$_\text{2}$C$_\text{2}$ complex and a uniaxial rotational diffusion model for the surrounding C$_\text{84}$
molecule. The rotational diffusion motion of the encapsulating C$_\text{84}$ molecule leads to a linear
temperature-dependent broadening of the energy
transition lines of the C$_\text{2}$ planar rotor. In Sect.\ IV we extend the theory to a powder crystal
which consists of arbitrarily oriented crystallites, each with $Fm\overline{3}m$ space group symmetry and static
meroaxial disorder. In the following (Sect.\ V) we consider the case of dynamic
meroaxial disorder, describing the reorientations of C$_\text{84}$ molecules among the three meroaxial
directions by a stochastic jump model.
This model yields an exponential temperature-dependent broadening of the transition lines.
In the last Section VI we give a numerical evaluation of the Raman
scattering law for a Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ powder crystal where quantum mechanical tunneling
of the encapsulated Sc$_\text{2}$C$_\text{2}$ units is superimposed by uniaxial rotational diffusion and dynamic
meroaxial disorder of the
C$_\text{84}$ molecules. The temperature dependence of the line intensities and of the line broadenings is
discussed.
\section{Raman scattering law}\label{secRaman}
We will derive the Raman scattering law where we limit ourselves to the interaction of the incident laser light
with the plane rotational motion of the induced dipole of the C--C bond belonging to the Sc$_\text{2}$C$_\text{2}$ complex of
Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$.
This means that we consider the low frequency part of the spectrum (say $\le 100$ cm$^{-1}$).
The Sc$_\text{2}$C$_\text{2}$ complex is centered in the origin (center-of-mass position) of C$_\text{84}$.
The long axis of Sc$_\text{2}$C$_\text{2}$ coincides with the $S_4$ axis of C$_\text{84}$. The C--C bond of
Sc$_\text{2}$C$_\text{2}$ lies in the plane containing the secondary $C_2$ axes
of C$_\text{84}$ and rotates about the $S_4$ axis.
In that respect we will consider the C--C bond as a C$_\text{2}$ planar rotor which experiences a fourfold
potential inside the C$_\text{84}$ molecule.
Our formulation of the Raman scattering law is an extension of the conventional theory \cite{10,11} in as much as
we describe a situation where the rotational motion of the induced dipole with respect to the laboratory-fixed
frame is a superposition of the quantum motion of the C$_\text{2}$ planar rotor inside the C$_\text{84}$ molecule
and of the classical motion of the C$_\text{84}$ molecule in the laboratory frame.
We start with considering a single crystal of Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ units with static meroaxial
disorder. We assume that the Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ units are statistically independent, hence it
will be sufficient to consider one single representative unit.
The cubic crystal axes $(X',Y',Z')$ are chosen to coincide with the
laboratory-fixed cubic coordinate system $(X,Y,Z)$. We consider a cubic system of axes $(\xi,\eta,\zeta)$ fixed in the
C$_\text{84}$ molecule such that the $\xi$ axis coincides with the $S_4$ axis while $\eta$ and $\zeta$ coincide
with the secondary twofold axes (Fig.\ \ref{figC84}).
The meroaxial orientations of the C$_\text{84}$ molecules correspond to the situation where the $\xi$ axes are
randomly oriented along the $X'$, $Y'$ or $Z'$ crystal axes. The C$_\text{2}$ units then rotate in the planes
$(Y',Z')$, $(Z',X')$ or $(X',Y')$ respectively (Fig.\ \ref{figaxes3D}).
We say that the C$_\text{84}$ molecule is in
standard orientation if the $S_4$ axis coincides with the laboratory-fixed
$X$ axis and the plane containing the secondary $C_2$ axes coincides
with the laboratory
$(Y,Z)$ plane. The $\zeta$ axis forms an angle $\nu$ with the $Z$ axis, while the C--C bond forms an
angle $\tau$ with the $\zeta$ axis. Hence the polar angle $\theta$
of the C--C bond with the laboratory $Z$ axis (Fig.\ \ref{figaxes2D}) is a sum of two terms:
\begin{align}
\theta = \nu + \tau. \label{angles}
\end{align}
Since the C$_\text{2}$ rotor is confined to the $(Y,Z)$ plane, the azimuthal angle $\phi$ measured away from $X$
has value $\pi/2$. The distinction of two contributions to the angle $\theta$ is essential.
\begin{figure}
\resizebox{5cm}{!}
{\includegraphics{fig1.eps}}
\caption{View of the C$_\text{84}$ molecule along the $S_4$ axis.}
\label{figC84}
\end{figure}
\begin{figure}
\resizebox{8cm}{!}{\includegraphics{fig2.eps}}
\caption{The Sc$_\text{2}$C$_\text{2}$ complex in the crystal-fixed cubic
coordinate system $(X',Y',Z')$ while the C$_\text{84}$ molecule is in standard
orientation.
}
\label{figaxes3D}
\end{figure}
\begin{figure}
\resizebox{8cm}{!}{\includegraphics{fig3.eps}}
\caption{Orientation of the C$_\text{2}$ bond of Sc$_\text{2}$C$_\text{2}$ in
the rotatory reflection plane of the C$_\text{84}$ molecule (C$_\text{84}$ in
standard orientation).}
\label{figaxes2D}
\end{figure}
In the following we will assign the angular variable $\tau$ to the quantum mechanical tunneling of the
Sc$_\text{2}$C$_\text{2}$ complex about its long axis inside the C$_\text{84}$ cage and the angular variable
$\nu$
to the thermally excited classical rotation of the C$_\text{84}$ molecule about the $S_4$ axis.
The assumption of classical uniaxial rotational diffusion motion as a first approximation to the dynamics of the
C$_\text{84}$ molecule at low temperature is motivated by the structural results of meroaxial disorder \cite{4}.
It is also inspired from the dynamics of solid C$_\text{70}$ in the rhombohedral and monoclinic
phases. There the importance of uniaxial rotational diffusion about the long axis of the C$_\text{70}$ molecule
has been probed by muon spin spectroscopy \cite{Dennis,Mcrae}, nuclear magnetic resonance \cite{Maniwa,
Blinc, Tycko} and inelastic neutron scattering \cite{Christides}.
We treat the C--C bond of Sc$_\text{2}$C$_\text{2}$ as a rigid cylindrical rod with longitudinal and transverse
static polarizability $\alpha_\parallel$ and $\alpha_\perp$ respectively.
The Raman scattering law for incident
and scattered radiation in $Z$ direction is given by the Fourier transform of the time-dependent autocorrelation
function of the polarizability $\alpha_{ZZ}$:
\begin{align}
R_{ZZZZ}(\omega) = \frac{N}{2\pi}\int_{-\infty}^{+\infty} dt\,
e^{i\omega t}\langle\alpha_{ZZ}(t)\alpha_{ZZ}(0)\rangle. \label{RZZZZ}
\end{align}
Here $N$ is the number of Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ units and $\omega$ is the frequency difference
of incident and scattered radiation. The polarizability has to be
understood as an average over the three meroaxial molecular orientations. In the following we label these
orientations by a superscript $(i)$, $i=1,2,3$.
If the C$_\text{84}$ molecule is in standard orientation ($\xi$ axis $\parallel X$),
or in orientation $\xi \parallel Y$, the corresponding orientation-dependent polarizabilities $\alpha_{ZZ}^{(1)}$
and $\alpha_{ZZ}^{(2)}$ are equal and given by \cite{10}
\begin{align}
\alpha_{ZZ}^{(1)} = \alpha_{ZZ}^{(2)} = \frac{\alpha_\perp + \alpha_\parallel}{2}
+\frac{\alpha_\parallel - \alpha_\perp}{2}\cos 2\theta, \label{alphaZZ1}
\end{align}
while with $\xi\parallel Z$ one has
\begin{align}
\alpha_{ZZ}^{(3)} = \alpha_\perp, \label{alphaZZ3}
\end{align}
independent of $\theta$. In the case of meroaxial disorder, the average polarizability is given by
\begin{align}
\alpha_\text{ZZ} = \frac{1}{3}\sum_{i=1}^3 \alpha_\text{ZZ}^{(i)} = a + \frac{2}{3}b\cos 2\theta \label{alphaZZ}
\end{align}
where we have defined
\begin{xalignat}{2}
a = \frac{\alpha_\parallel + 2\alpha_\perp}{3}, & & b = \frac{\alpha_\parallel - \alpha_\perp}{2}.
\end{xalignat}
Hence the time-dependent correlation function reads
\begin{align}
\langle \alpha_{ZZ}(t) \alpha_{ZZ}(0) \rangle =
a^2 + \frac{4b^2}{9}\bigl\langle \cos 2\theta(t) \cos 2\theta(0)\bigr \rangle. \label{avalphaalphaZZ}
\end{align}
Similarly we obtain for incident radiation in $Z$
direction and scattered radiation in $Y$ direction
\begin{align}
R_{ZYZY}(\omega) = \frac{N}{2\pi}\int_{-\infty}^{+\infty} dt\,
e^{i\omega t}\langle\alpha_{ZY}(t)\alpha_{ZY}(0)\rangle. \label{RZYZY}
\end{align}
If the C$_\text{84}$ molecule is in standard orientation,
\begin{align}
\alpha_{ZY}^{(1)} = b\sin 2\theta, \label{alphaZY1}
\end{align}
while for $\xi \parallel Y$, $\alpha_{ZY}^{(2)} = 0$, and $\xi \parallel Z$, $\alpha_{ZY}^{(3)} = 0$.
The average polarizability for the case of meroaxial disorder reads
\begin{align}
\alpha_{ZY} = \frac{b}{3}\sin 2\theta, \label{alphaZY}
\end{align}
and the correlation function becomes
\begin{align}
\langle \alpha_{ZY}(t) \alpha_{ZY}(0) \rangle =
\frac{b^2}{9}\bigl\langle \sin 2\theta(t) \sin 2\theta(0)\bigr \rangle. \label{avalphaalphaZY}
\end{align}
The problem of determining the scattering laws $R_{ZZZZ}(\omega)$ and $R_{ZYZY}(\omega)$ consists in the
calculation of the orientation-orientation thermal correlation functions
\begin{align}
C(t) & = \langle\cos 2\theta(t)\cos 2\theta(0)\rangle, \label{Ctdef} \\
S(t) & = \langle\sin 2\theta(t)\sin 2\theta(0)\rangle. \label{Stdef}
\end{align}
Taking into account the basic relation Eq.\ (\ref{angles}), we expand in terms of $\cos 2\tau$, $\sin 2\tau$, $\cos 2\nu$ and $\sin
2\nu$ thereby obtaining correlation functions of the form
\begin{align}
C^\text{cccc}(t) & = \langle\cos 2\tau(t)\cos 2\nu(t)\cos 2\tau(0)\cos 2\nu(0)\rangle, \\
S^\text{cscs}(t) & = \langle\cos 2\tau(t)\sin 2\nu(t)\cos 2\tau(0)\sin 2\nu(0)\rangle,
\end{align}
and similarly for $C^\text{ssss}(t)$ and $S^\text{scsc}(t)$.
Observing that $\tau$ refers to quantum dynamics of C$_\text{2}$ and $\nu$ to classical dynamics of
C$_\text{84}$, we decouple the thermal averages:
\begin{xalignat}{2}
C^\text{cccc}(t) = Q^\text{cc}(t)F^\text{cc}(t), && C^\text{ssss}(t) = Q^\text{ss}(t)F^\text{ss}(t), \\
S^\text{cscs}(t) = Q^\text{cc}(t)F^\text{ss}(t), && S^\text{scsc}(t) = Q^\text{ss}(t)F^\text{cc}(t).
\end{xalignat}
Here the correlation functions
\begin{align}
Q^\text{cc}(t) & = \langle\cos 2\tau(t)\cos 2\tau(0)\rangle, \\
Q^\text{ss}(t) & = \langle\sin 2\tau(t)\sin 2\tau(0)\rangle,
\end{align}
describe the quantum dynamics of the C$_\text{2}$ unit while the correlation functions
\begin{align}
F^\text{cc}(t) & = \langle\cos 2\nu(t)\cos 2\nu(0)\rangle, \label{Fcct} \\
F^\text{ss}(t) & = \langle\sin 2\nu(t)\sin 2\nu(0)\rangle, \label{Fsst}
\end{align}
describe the classical dynamics of the surrounding C$_\text{84}$ molecule.
Finally quantum and classical dynamics occur as products of correlation functions:
\begin{align}
C(t) = Q^\text{cc}(t)F^\text{cc}(t) + Q^\text{ss}(t)F^\text{ss}(t), \label{Ct} \\
S(t) = Q^\text{ss}(t)F^\text{cc}(t) + Q^\text{cc}(t)F^\text{ss}(t). \label{St}
\end{align}
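The decoupled forms (\ref{Ct}) and (\ref{St}) rest on the elementary angle-addition expansions together with the factorization of quantum and classical averages. The trigonometric step can be verified symbolically; the short check below is purely illustrative.
\begin{verbatim}
# Symbolic check: with theta = nu + tau,
# cos 2theta = cos 2tau cos 2nu - sin 2tau sin 2nu, and analogously for sin.
import sympy as sp

nu, tau = sp.symbols("nu tau", real=True)
pairs = [
    (sp.cos(2*(nu + tau)),
     sp.cos(2*tau)*sp.cos(2*nu) - sp.sin(2*tau)*sp.sin(2*nu)),
    (sp.sin(2*(nu + tau)),
     sp.sin(2*tau)*sp.cos(2*nu) + sp.cos(2*tau)*sp.sin(2*nu)),
]
for lhs, rhs in pairs:
    print(sp.simplify(lhs - rhs))   # prints 0 twice
\end{verbatim}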
Defining Fourier transforms
\begin{align}
Q(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\, e^{i\omega t}Q(t), \\
F(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\, e^{i\omega t}F(t), \label{Fomega}
\end{align}
and using Eqs.\ (\ref{Ct}), (\ref{St}), (\ref{avalphaalphaZZ}) and (\ref{avalphaalphaZY}), we rewrite the Raman scattering law
in terms of convolutions of Fourier-transformed quantum and classical correlation functions, thereby obtaining
\begin{align}
R_{ZZZZ}(\omega) = N\bigl[a^2\delta(\omega)
+\frac{4b^2}{9}C(\omega)\bigr], \label{RZZZZ2}
\end{align}
with scattering function
\begin{align}
C(\omega) = \int_{-\infty}^{+\infty}d\omega'\, \bigl[Q^\text{cc}(\omega - \omega')F^\text{cc}(\omega')
+Q^\text{ss}(\omega - \omega')F^\text{ss}(\omega')\bigr],
\label{Comega}
\end{align}
and
\begin{align}
R_{ZYZY}(\omega) = N\frac{b^2}{9}S(\omega), \label{RZYZY2}
\end{align}
with scattering function
\begin{align}
S(\omega) = \int_{-\infty}^{+\infty}d\omega'\, \bigl[Q^\text{ss}(\omega - \omega')F^\text{cc}(\omega')
+Q^\text{cc}(\omega - \omega')F^\text{ss}(\omega')\bigr].
\label{Somega}
\end{align}
The first term in brackets on the right-hand side of Eq.\ (\ref{RZZZZ2}) corresponds to the unshifted Rayleigh line of the
spectrum while the function $C(\omega)$ (as also $S(\omega)$ in Eq.\ (\ref{RZYZY2})) accounts for the inelastic part.
Expressions (\ref{Comega}) and (\ref{Somega}) which are convolutions in Fourier space
show that the quantum motion of the C$_\text{2}$ rotor is modulated
by the classical rotational motion of the surrounding C$_\text{84}$ cage.
This is an example of ``direct coupling'' of two motions through the detection process \cite{YvinecPick},
in contradistinction to the
``indirect coupling'' through a Hamiltonian. The origin of the direct coupling here is due to the fact that the
detection angle $\theta$ is a sum of two terms, Eq.\ (\ref{angles}).
In the next section we will calculate the quantum mechanical and classical orientational correlation functions for
C$_\text{2}$ and C$_\text{84}$ respectively.
\section{Dynamic correlations}\label{secDynamic}
\subsection{C$_\text{2}$ quantum rotor}
The quantum mechanics of a diatomic molecular rotor in crystals goes
back to Pauling \cite{7}. A still valid review of the subject of single particle rotations in molecular
crystals has been given by W. Press \cite{8}. We will calculate the orientational autocorrelation functions $Q^\text{cc}$ and $Q^\text{ss}$ by starting from the
model of the C$_\text{2}$ planar quantum rotor in the fourfold potential due to the C$_\text{84}$ cage. We will
refer to this motion as rotational tunneling \cite{8,Hueller}.
We will show that the resonances of the correlation functions $Q^{cc}(\omega)$ and $Q^{ss}(\omega)$ are due to
transitions between tunneling energy levels.
The sole degree
of freedom is the angle $\tau$ which accounts for the rotatory motion of C$_\text{2}$ with respect to the cage. The
corresponding Schr\"{o}dinger equation reads
\begin{align}
\left[-B \frac{d^2}{d\tau^2} + \frac{V_0}{2}(1-\cos 4\tau)\right]\psi(\tau) = E\psi(\tau). \label{Schroedinger}
\end{align}
Here $B = \hbar^2/2I$ is the rotational constant and $I$ the moment of inertia of C$_\text{2}$, $V_0$ is the
barrier height of the potential.
The rotational constant has the dimension of an energy, from experiment \cite{6} one deduces $B = 1.73$ cm$^{-1}$
(wave number units) and $V_0 = 8B$. These values are supported by ab initio density functional calculations
\cite{6}.
Equation (\ref{Schroedinger}) which is an extension of Mathieu's equation
\cite{7,12} is
also called Hill's equation \cite{13}. With the definitions
\begin{xalignat}{2}
\alpha = \frac{1}{B}\left(E-\frac{V_0}{2}\right), && q = \frac{V_0}{4B},
\end{xalignat}
Eq.\ (\ref{Schroedinger}) reads
\begin{align}
\left[\frac{d^2}{d\tau^2} + \alpha + 2q\cos 4\tau\right]\psi(\tau) = 0. \label{Schroedinger2}
\end{align}
From symmetry considerations (nuclear spin is zero for $^\text{12}$C, electron wave function of
C$_\text{2}^{2-}$ is totally symmetric), it follows that the rotational wave function $\psi(\tau)$ must be
symmetric with respect to the operation $\tau\longrightarrow\tau - \pi$. For even
periodic solutions one makes the ansatz
\begin{align}
\psi^+(\tau) = \sum_{m=0}^\infty A_{2m}\cos(2m\tau),
\end{align}
$m=0,1,2,\hdots$. Equation (\ref{Schroedinger2}) then leads to an infinite system of homogeneous equations for
the coefficients $A_{2m}$. Truncation of this system for a given value $m=N$ leads to $N+1$ equations which
separate into two systems: a first one for $\bigl\{A_0,A_4,\hdots,A_{2N}\bigr\}$ and a second one for
$\bigl\{A_2,A_6,\hdots,A_{2N-2}\bigr\}$ (we take $N$ even). Solving for the two discriminants yields the roots
$\alpha^+_{2m}(q)$ for $m=0,2,\hdots,N$ and $m=1,3,\hdots,N-1$. In case of zero potential, i.e.\ $q=0$, these
solutions reduce to the free planar rotor energies $\bigl\{E^+_{2m}(q=0)\bigr\} =
\bigl\{0,\hdots,(2m)^2B,\hdots\bigr\}$ with normalized
eigenfunctions
\begin{align}
\bigl\{\psi^+_{2m}(\tau)\bigr\} =
\left\{\frac{1}{\sqrt{\pi}},\hdots,\frac{\cos(2m\tau)}{\sqrt{\pi/2}},\hdots\right\}
\end{align}
in the interval $0\le\tau\le\pi$. The ansatz for odd periodic solutions reads
\begin{align}
\psi^-(\tau) = \sum_{m=1}^\infty B_{2m}\sin(2m\tau).
\end{align}
Proceeding as before one determines the roots $\alpha_{2m}^-(q)$. In case of zero potential the eigenfunctions
are
\begin{align}
\bigl\{\psi^-_{2m}(\tau)\bigr\} = \left\{\frac{\sin
2\tau}{\sqrt{\pi}},\hdots,\frac{\sin(2m\tau)}{\sqrt{\pi/2}},\hdots\right\}.
\end{align}
In the following we will label the energy eigenfunctions and eigenvalues by the double index $(\sigma,2m)$,
$\sigma=\pm$,
also in the case of nonzero potential.
In Fig.\ \ref{tunneling} we show plots of $\frac{E_{2m}^\sigma(q)}{B} = \alpha_{2m}^\sigma(q) + 2q$.
\begin{figure}
\resizebox{10cm}{!}{\includegraphics{fig4.eps}}
\caption{Energy levels of the C$_\text{2}$ planar quantum rotor in the fourfold
molecular potential as a function of the potential strength $q$ (dimensionless
units). Only levels up to $2m = 8$ are shown.
}
\label{tunneling}
\end{figure}
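The truncated homogeneous systems for the coefficients $A_{2m}$ and $B_{2m}$ are equivalent to diagonalizing Eq.\ (\ref{Schroedinger2}) in the free-rotor basis. A minimal numerical sketch, with the basis size $N$ chosen arbitrarily, reads:
\begin{verbatim}
# Eigenvalues alpha of psi'' + (alpha + 2q cos 4tau) psi = 0, obtained by
# diagonalization in the truncated pi-periodic free-rotor basis.
import numpy as np

def rotor_levels(q, N=30):
    # even block, basis {1/sqrt(pi), cos(2m tau)/sqrt(pi/2)}, m = 0..N-1
    He = np.diag([(2.0 * m) ** 2 for m in range(N)])
    He[0, 2] = He[2, 0] = -np.sqrt(2.0) * q   # <0|2q cos 4tau|2>
    He[1, 1] += -q                            # m = n = 1 self-coupling
    for m in range(1, N - 2):
        He[m, m + 2] = He[m + 2, m] = -q
    # odd block, basis sin(2m tau)/sqrt(pi/2), m = 1..N
    Ho = np.diag([(2.0 * m) ** 2 for m in range(1, N + 1)])
    Ho[0, 0] += q                             # m = n = 1 self-coupling
    for i in range(N - 2):
        Ho[i, i + 2] = Ho[i + 2, i] = -q
    return np.linalg.eigvalsh(He), np.linalg.eigvalsh(Ho)

q = 2.0
ae, ao = rotor_levels(q)
print("E/B, even:", np.round(ae[:3] + 2 * q, 2))   # cf. Fig. 4 at q = 2
print("E/B, odd: ", np.round(ao[:3] + 2 * q, 2))
\end{verbatim}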
We next perform a spectral decomposition of the correlation functions $Q^\text{cc}(t)$ and $Q^\text{ss}(t)$
in terms of eigenfunctions and eigenvalues of the Schr\"{o}dinger equation (\ref{Schroedinger}). In general form
the result reads
\begin{align}
Q^\text{cc}(t) = \frac{1}{Z}\sum_{i,j}e^{-E_i/T}|C_{ij}|^2e^{i(E_i-E_j)t/\hbar}, \label{Qcct} \\
Q^\text{ss}(t) = \frac{1}{Z}\sum_{i,j}e^{-E_i/T}|S_{ij}|^2e^{i(E_i-E_j)t/\hbar}, \label{Qsst}
\end{align}
where
\begin{align}
C_{ij} = \langle i|\cos 2\tau |j\rangle, \\
S_{ij} = \langle i|\sin 2\tau |j\rangle.
\end{align}
Here the label $i$ ($j$) stands for the double index $(\sigma,2m)$ of the solutions of the
Schr\"{o}dinger
equation. We calculate the matrix elements $C_{ij}$ and $S_{ij}$ with the free planar rotor
energies. Symmetry implies that only functions of the same parity $(+,+)$ or $(-,-)$ contribute to $C_{ij}$
while $S_{ij}$ differs from zero only for functions $i,j$ with different parity.
For instance
\begin{align}
C^{++}_{2m2n} = \int_0^\pi d\tau\, \frac{\cos(2m\tau)}{\sqrt{\pi/2}}\cos 2\tau\frac{\cos(2n\tau)}{\sqrt{\pi/2}}
= \frac{1}{2}\delta_{m,n\pm 1}, \label{sr1} \\
S^{+-}_{2m2n} = \int_0^\pi d\tau\, \frac{\cos(2m\tau)}{\sqrt{\pi/2}}\sin 2\tau\frac{\sin(2n\tau)}{\sqrt{\pi/2}}
= \frac{1}{2}\delta_{m,n\pm 1}. \label{sr3}
\end{align}
These matrix elements imply selection rules for transitions between energy levels.
We take Fourier transforms of
Eqs.\ (\ref{Qcct}) and (\ref{Qsst}), using the identity
\begin{align}
\frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\, e^{i\omega t}e^{i(E_i - E_j)t/\hbar} =
\delta\left(\omega-\left(\frac{E_j - E_i}{\hbar}\right)\right).
\end{align}
We insert the energies
\begin{align}
E^\sigma_{2m} = B\alpha^\sigma_{2m}(q) + \frac{V_0}{2},
\end{align}
and take into account the selection rules (\ref{sr1}) -- (\ref{sr3}). Defining the frequency transfer
\begin{align}
\omega^{\sigma\sigma'}_{mn} = \frac{E_{2n}^{\sigma'} - E_{2m}^{\sigma}}{\hbar},
\end{align}
we obtain
\begin{multline}
Q^\text{cc}(\omega) = \frac{1}{2Z}\Biggl\{e^{-E^+_0/T}\delta(\omega - \omega^{++}_{01})
+\sum_{m=1}^\infty\frac{e^{-E^+_{2m}/T}}{2}\bigl[\delta(\omega -
\omega^{++}_{mm+1}) + \delta(\omega - \omega^{++}_{mm-1})\bigr] \\
+\frac{e^{-E^-_{2}/T}}{2}\delta(\omega - \omega^{--}_{12})
+\sum_{m=2}^\infty\frac{e^{-E^-_{2m}/T}}{2}\bigl[\delta(\omega -
\omega^{--}_{mm+1}) + \delta(\omega - \omega^{--}_{mm-1})\bigr]\Biggr\}
\label{Qccomega},
\end{multline}
with
\begin{align}
Z = e^{-E^+_0/T} + \sum_{m=1}^\infty \left(e^{-E^+_{2m}/T} + e^{-E^-_{2m}/T}\right). \label{partitionsum}
\end{align}
Similarly we get
\begin{multline}
Q^\text{ss}(\omega) = \frac{1}{2Z}\Biggl\{e^{-E^+_0/T}\delta(\omega - \omega^{+-}_{01}) \\
+\frac{e^{-E^+_{2}/T}}{2}\delta(\omega - \omega^{+-}_{12})
+\sum_{m=2}^\infty\frac{e^{-E^+_{2m}/T}}{2}\bigl[\delta(\omega -
\omega^{+-}_{mm+1}) + \delta(\omega - \omega^{+-}_{mm-1})\bigr] \\
+\sum_{m=1}^\infty\frac{e^{-E^-_{2m}/T}}{2}\bigl[\delta(\omega -
\omega^{-+}_{mm+1}) + \delta(\omega - \omega^{-+}_{mm-1})\bigr]\Biggr\}.
\label{Qssomega}
\end{multline}
We notice that in absence of the uniaxial rotation of the C$_\text{84}$ cage, i.e.\ for $\nu = 0$, the
correlation functions Eqs.\ (\ref{Fcct}) and (\ref{Fsst}) reduce to constants: $F^\text{cc} = 1$,
$F^\text{ss} = 0$. Hence the spectral functions $C(\omega)$ and $S(\omega)$ entering the Raman scattering laws
Eqs.\ (\ref{RZZZZ2}) and (\ref{RZYZY2}) reduce to $Q^\text{cc}(\omega)$ and $Q^\text{ss}(\omega)$ and exhibit
infinitely sharp $\delta$-peaks which account for transitions between quantized planar rotor states.
In Table I we have quoted some values ($m\le 4$) of $\omega_{m n}^{\sigma \sigma'}$ for $q = 0$ (free rotor) and $q = 2$
(value of the potential strength taken from experiment in Ref.\ \onlinecite{6}).
\begin{table}
\caption{Tunneling frequency transfers
$\omega_{m n}^{\sigma \sigma'}$, $n = m\pm 1$, $\sigma,\sigma'=\pm$, in units cm$^{-1}$.
}
\label{hbaromega}
\begin{ruledtabular}
\begin{tabular}{rrrrrrrrr}
$m$ & $\omega_{mm+1}^{++}$ & $\omega_{mm+1}^{--}$ & $\omega_{mm+1}^{+-}$ & $\omega_{mm+1}^{-+}$
& $\omega_{mm-1}^{++}$ & $\omega_{mm-1}^{--}$ & $\omega_{mm-1}^{+-}$ & $\omega_{mm-1}^{-+}$ \\
\hline
\multicolumn{9}{c}{$q=0$} \\
\hline
$0$ & $6.92$ & & $6.92$ & & & & & \\
$1$ & $20.76$ & $20.76$ & $20.76$ & $20.76$ & $-6.92$ & & & $-6.92$ \\
$2$ & $34.60$ & $34.60$ & $34.60$ & $34.60$ & $-20.76$ & $-20.76$ & $-20.76$ & $-20.76$ \\
$3$ & $48.44$ & $48.44$ & $48.44$ & $48.44$ & $-34.60$ & $-34.60$ & $-34.60$ & $-34.60$ \\
$4$ & $62.28$ & $62.28$ & $62.28$ & $62.28$ & $-48.44$ & $-48.44$ & $-48.44$ & $-48.44$ \\
\hline
\multicolumn{9}{c}{$q=2$} \\
\hline
$0$ & $4.10$ & & $10.99$ & & & & & \\
$1$ & $25.12$ & $17.39$ & $24.28$ & $18.23$ & $-4.10$ & & & $-10.99$ \\
$2$ & $34.00$ & $34.87$ & $34.02$ & $34.84$ & $-25.12$ & $-17.39$ & $-18.23$ & $-24.28$ \\
$3$ & $48.40$ & $48.38$ & $48.40$ & $48.38$ & $-34.00$ & $-34.87$ & $-34.84$ & $-34.02$ \\
$4$ & $62.26$ & $62.26$ & $62.26$ & $62.26$ & $-48.40$ & $-48.38$ & $-48.38$ & $-48.40$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
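The relative intensities of the lines in Eqs.\ (\ref{Qccomega}) and (\ref{Qssomega}) are controlled by the Boltzmann factors of the initial levels. As an illustration, the populations of the five lowest levels at $q=2$ can be evaluated from the cumulative transition frequencies of Table \ref{hbaromega}; the temperature grid below is an arbitrary choice.
\begin{verbatim}
# Boltzmann populations of the lowest tunneling levels (sigma, 2m) at q = 2;
# energies in cm^-1, accumulated from the transition frequencies of Table I.
import numpy as np

kB = 0.6950                                  # cm^-1 per Kelvin
E = {"(+,0)": 0.00, "(+,2)": 4.10, "(-,2)": 10.99,
     "(-,4)": 28.38, "(+,4)": 29.22}
for T in (20.0, 60.0, 120.0):
    w = {k: np.exp(-v / (kB * T)) for k, v in E.items()}
    Z = sum(w.values())
    print("%5.0f K" % T, {k: round(x / Z, 3) for k, x in w.items()})
\end{verbatim}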
\subsection{C$_\text{84}$ uniaxial rotational diffusion}\label{s3B}
In order to calculate the classical correlation functions $F^{cc}(t)$ and $F^{ss}(t)$ we
treat the C$_\text{84}$ molecule as a classical uniaxially diffusing rotor with rotation axis $S_4$ in coincidence with a
cubic crystal axis, in casu the $X'$ axis. The corresponding rotation angle $\nu$ is measured away from the $Z'$
axis.
Equivalently one considers the $S_4$ axis along $Y'$ and $Z'$ (meroaxial disorder).
Given the $S_4$ axis it would be tempting to study this
motion in a crystal field potential of fourfold symmetry. Such a study can be carried out along the lines of
Ref.\ \onlinecite{14} and leads to a continued fraction expansion in terms of frequency moments of the orientational
variables. It is adequate in the case of a strong crystal field potential since then one can limit the continued
fraction to a few steps. However this approximation is not valid in the case of weak potentials. Since
the equator of the C$_\text{84}$ molecule for rotations about $S_4$ deviates only slightly from circular
shape, we prefer to consider the rotator about the $S_4$ axis in the rotational-diffusion
approximation. This model has the obvious advantage of simplicity and leads to a linear temperature-dependent
broadening of the tunneling transition lines. Within the uniaxial diffusion model
the C$_\text{84}$ molecule experiences a random rotational torque (also called Brownian motion
torque) about its $S_4$ axis. This torque is caused by the thermal motion of the surrounding lattice (heat bath).
In that respect the present problem is different from the situation of the heavy symmetrical top with
gravitational torque since on the molecular scale the effect of gravitation is negligible in comparison with the
heat bath.
The idea of rotational diffusion goes back to Debye \cite{15} who applied the concept of rotational
Brownian motion to the theory of dielectric relaxation (see also \cite{8} and \cite{16}). In Appendix \ref{appA} we give some details for the present
problem.
As a result we obtain
\begin{align}
F^\text{cc}(t) = \bigl\langle\cos 2\nu(t) \cos 2\nu(0)\bigr\rangle = \frac{1}{2}e^{-4D_\text{R}t},
\label{Fcctb} \\
F^\text{ss}(t) = \bigl\langle\sin 2\nu(t) \sin 2\nu(0)\bigr\rangle = \frac{1}{2}e^{-4D_\text{R}t}. \label{Fsstb}
\end{align}
Here the rotational diffusion coefficient $D_\text{R}$ is given by the Einstein
relation
\begin{align}
D_\text{R} = \frac{k_\text{B}T}{\zeta}, \label{diffcoeff}
\end{align}
where $\zeta$ is the friction coefficient and $T$ the temperature.
The equality of $F^\text{cc}(t)$ and $F^\text{ss}(t)$ is a consequence of our neglect of the crystal field
potential within the large-friction approximation. From Eqs.\ (\ref{Ct}) and (\ref{St}) one sees that then
\begin{align}
C(t) = S(t). \label{alsosees}
\end{align}
In the following we will neglect the superscripts $ss$ and $cc$ on $F^{cc}$ and $F^{ss}$ and write just $F$.
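The exponential decays (\ref{Fcctb}) and (\ref{Fsstb}) can also be checked by a direct Brownian-dynamics simulation of free uniaxial diffusion; the sketch below uses arbitrary units, and the time step and ensemble size are arbitrary choices.
\begin{verbatim}
# Brownian-dynamics check of F(t) = <cos 2nu(t) cos 2nu(0)> = 0.5 exp(-4 D_R t)
import numpy as np

rng = np.random.default_rng(0)
D_R, dt, nmol = 1.0, 1e-3, 50000
nu = rng.uniform(0.0, 2.0 * np.pi, nmol)     # equilibrium initial angles
c0 = np.cos(2.0 * nu)
for step in range(1, 501):
    # each Euler step adds Gaussian noise of variance 2 D_R dt
    nu += np.sqrt(2.0 * D_R * dt) * rng.standard_normal(nmol)
    if step % 100 == 0:
        t = step * dt
        print(t, np.mean(c0 * np.cos(2.0 * nu)), 0.5 * np.exp(-4.0 * D_R * t))
\end{verbatim}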
The Fourier transform is obtained from Eq.\ (\ref{Fsstb}) with the result
\begin{align}
F(\omega) = \frac{1}{2\pi}\left[\frac{4D_R}{\omega^2 + 16D_\text{R}^2}\right].
\label{resFccomega}
\end{align}
We rewrite the scattering functions Eqs.\ (\ref{Comega}) and (\ref{Somega}) as
\begin{align}
C(\omega) = S(\omega) = \int_{-\infty}^{+\infty}d\omega'\,\bigl[Q^\text{ss}(\omega-\omega') +
Q^\text{cc}(\omega-\omega')\bigr]F(\omega'). \label{integral}
\end{align}
Using expressions (\ref{Qccomega}) and (\ref{Qssomega}) we carry out the integral over $\omega'$ and obtain
\begin{align}
C(\omega) = C_{++}(\omega) + C_{--}(\omega) + C_{+-}(\omega) + C_{-+}(\omega) \label{resComega}
\end{align}
where
\begin{align}
C_{++}(\omega) & = \frac{1}{2Z}\left\{e^{-E_0^+/T}F(\omega - \omega_{01}^{++})
+ \sum_{m=1}^\infty \frac{e^{-E_{2m}^+/T}}{2}\bigl[F(\omega - \omega_{mm+1}^{++})
+ F(\omega - \omega_{mm-1}^{++})\bigr]\right\}, \label{Cpp} \\
C_{--}(\omega) & = \frac{1}{2Z}\left\{\frac{e^{-E_2^-/T}}{2}F(\omega - \omega_{12}^{--})
+ \sum_{m=2}^\infty \frac{e^{-E_{2m}^-/T}}{2}\bigl[F(\omega - \omega_{mm+1}^{--})
+ F(\omega - \omega_{mm-1}^{--})\bigr]\right\}, \\
C_{+-}(\omega) & = \frac{1}{2Z}\left\{e^{-E_0^+/T}F(\omega - \omega_{01}^{+-})
+ \frac{e^{-E_2^+/T}}{2}F(\omega - \omega_{12}^{+-})\right. \nonumber \\
& \phantom{=\frac{1}{2Z}\left\{\right.}\left. + \sum_{m=2}^\infty \frac{e^{-E_{2m}^+/T}}{2}\bigl[F(\omega - \omega_{mm+1}^{+-})
+ F(\omega - \omega_{mm-1}^{+-})\bigr]\right\}, \\
C_{-+}(\omega) & = \frac{1}{2Z}\left\{
\sum_{m=1}^\infty \frac{e^{-E_{2m}^-/T}}{2}\bigl[F(\omega - \omega_{mm+1}^{-+})
+ F(\omega - \omega_{mm-1}^{-+})\bigr]\right\}. \label{Cmp}
\end{align}
We see that $C(\omega)$ is a sum of weighted Lorentzians
\begin{align}
F(\omega - \omega_{mm\pm 1}^{\sigma\sigma'}) = \frac{1}{2\pi}\left[ \frac{4D_\text{R}}{(\omega - \omega_{mm\pm
1}^{\sigma\sigma'})^2 + 16D_\text{R}^2} \right] \label{Lorentzian}
\end{align}
centered
around the allowed frequency transfers $\omega=\omega^{\sigma\sigma'}_{mm\pm 1}$ and of width
$8D_\text{R}$ (full width half maximum).
Since $D_\text{R}$ has dimension s$^{-1}$, it follows from Eq.\ (\ref{diffcoeff})
that $\zeta$ has the dimension of an action.
We write $\zeta = \zeta_n h$, where $\zeta_n$ is a dimensionless number taken as parameter. We then obtain
$D_\text{R} = 2.08\times 10^{10}(T/\zeta_n)$ s$^{-1}$ where $T$ is measured in Kelvin. Equivalently,
$D_\text{R}=0.694(T/\zeta_n)$ cm$^{-1}$.
Since to our knowledge there are so far no direct measurements of the orientational dynamics of the C$_\text{84}$
molecule in Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$, we will choose a value of $D_\text{R}$ such that the
correlation time $\tau_c = (4D_\text{R})^{-1}$ has a value that is intermediate between the values of $2$ ns and
$5$ ps measured by NMR experiments for the C$_\text{70}$ molecule in the low-temperature monoclinic and
high-temperature fcc phases of solid C$_\text{70}$, respectively \cite{Tycko}.
Assuming that $\zeta_n=100$ is a realistic value (then $D_\text{R} =
10^{10}$ s$^{-1}$ at $T = 50$ K), we have plotted the scattering function $C(\omega)$ for several temperatures in
Fig.\ \ref{Comegaplot}. The resonances are centered at the frequency transfers $\omega_{mn}^{\sigma\sigma'}$ for
the potential strength $q=2$. The spectra reflect the characteristic asymmetries for $\omega>0$ and $\omega<0$
due to anti-Stokes and Stokes processes, respectively. In our calculations, we have included transitions with
the values $m,n=0,\hdots,19$.
\begin{figure}
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig5av2.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig5bv2.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig5cv2.eps}}}
\caption{Scattering
function $C(\omega)$ for $T=20$ K, $60$ K and $120$ K. The width $8D_\text{R}$ is the same for all resonance
lines and increases linearly
from $1.11$ cm$^{-1}$ to $6.66$ cm$^{-1}$ for $T=20$ K and $120$ K, respectively.
The inset at $T=20$ K shows the splitting of the $\omega_{12}^{--}$ and $\omega_{12}^{+-}$ resonances.
}
\label{Comegaplot}
\end{figure}
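The gross features of Fig.\ \ref{Comegaplot} can be reproduced with a few lines of code by keeping only the lowest tunneling levels; the sketch below retains five levels at $q=2$, reconstructed from Table \ref{hbaromega}, so the weak high-frequency lines are absent, and the temperature is an arbitrary choice.
\begin{verbatim}
# Sketch of C(omega) as a sum of Boltzmann-weighted Lorentzians of FWHM 8 D_R,
# severely truncated to five levels (parity, m); energies in cm^-1 (Table I).
import numpy as np

kB, T = 0.6950, 60.0                 # cm^-1 per Kelvin; temperature in K
D_R = 0.694 * T / 100.0              # cm^-1, with zeta_n = 100 as in the text

lev = {('+', 0): 0.00, ('+', 1): 4.10, ('-', 1): 10.99,
       ('+', 2): 29.22, ('-', 2): 28.38}
Z = sum(np.exp(-E / (kB * T)) for E in lev.values())

def F(w, w0):                        # Lorentzian of FWHM 8 D_R
    return (4.0 * D_R / (2.0 * np.pi)) / ((w - w0) ** 2 + 16.0 * D_R ** 2)

w = np.linspace(-40.0, 40.0, 4001)
C = np.zeros_like(w)
for (si, mi), Ei in lev.items():     # Q^cc + Q^ss: all Delta m = +/-1 lines
    for (sj, mj), Ej in lev.items():
        if abs(mi - mj) != 1:
            continue
        wgt = np.exp(-Ei / (kB * T)) / (2.0 * Z)
        if mi > 0:                   # excited initial level: extra factor 1/2
            wgt *= 0.5
        C += wgt * F(w, Ej - Ei)
print("C at the 4.10 cm^-1 line:", C[np.argmin(np.abs(w - 4.10))])
\end{verbatim}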
We notice that if one artificially excludes the tunneling motion of the C$_\text{2}$ unit by taking a fixed
value, say $0$, for the angle $\tau$, one finds that $Q^{ss}(\omega)=0$ and $Q^{cc} = \delta(\omega)$. Then Eq.\
(\ref{integral}) becomes $C(\omega) = F(\omega)$. Since the C$_\text{2}$-unit is dragged along with the classical
rotational diffusion of the encapsulating C$_\text{84}$ molecule, its polarizability is changing accordingly with
time and
the Raman scattering laws $R_{ZZZZ}(\omega)$ and $R_{ZYZY}(\omega)$ will exhibit a
Lorentzian $F(\omega)$ of full width at half maximum $8D_\text{R}$ centered at $\omega = 0$.
\section{Powder averages}\label{secPowder}
In Sect.\ \ref{secDynamic} we have considered
a cubic crystal with crystal axes $(X',Y',Z')$ in coincidence with the laboratory-fixed cubic axes $(X,Y,Z)$.
Since experiments are performed on powder samples, we will extend the previous results. The powder sample
consists of a large number of arbitrarily oriented cubic crystallites, each crystallite has symmetry $Fm\overline{3}m$ where the
Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ units are meroaxially disordered \cite{4}.
We first will consider one single crystallite where the crystal-fixed system of axes is related to the
laboratory-fixed system of axes by the Euler angles $(\alpha,\beta,\gamma)$. The C$_\text{2}$ rotors are now
moving in planes normal to the $X'$, $Y'$, $Z'$ axes of the rotated coordinate system. This means that the
polarizabilities $\alpha_{ZZ}^R$ or $\alpha_{ZY}^R$ measured in the laboratory-fixed coordinate system will depend on
the Euler angles of the given crystallite.
In Appendix \ref{appB} we have calculated the polarizability components $\alpha_{ZZ}^{(i)R}$ which are
obtained from $\alpha_{ZZ}^{(i)}$ by application of the rotation operation $R(\alpha,\beta,\gamma)$:
\begin{align}
\alpha_{ZZ}^{(i)R} = R(\alpha,\beta,\gamma)\alpha_{ZZ}^{(i)}.
\end{align}
The meroaxial average
\begin{align}
\alpha_{ZZ}^R = \frac{1}{3}\sum_{i=1}^3 \alpha_{ZZ}^{(i)R}
\end{align}
is obtained as
\begin{align}
\alpha_{ZZ}^R = a +
\frac{b}{3}\sum_{i=1}^3\bigl[A_{ZZ}^{(i)}(\beta,\gamma) + B_{ZZ}^{(i)}(\beta,\gamma)\sin 2\theta +
C_{ZZ}^{(i)}(\beta,\gamma)\cos 2\theta\bigr]. \label{alphaZZRmeroaxial}
\end{align}
where $i=1$ refers to $\xi\parallel X'$, $i=2$ to $\xi \parallel Y'$ and $i=3$ to $\xi \parallel Z'$.
The coefficients $A_{ZZ}^{(i)}(\beta,\gamma)$, $B_{ZZ}^{(i)}(\beta,\gamma)$ and
$C_{ZZ}^{(i)}(\beta,\gamma)$ are derived in Appendix \ref{appB}, they are found to depend on only two Euler
angles.
In the present section we assume that the meroaxial disorder is static or equivalently there are no
reorientations of the C$_\text{84}$ molecules among the three meroaxial directions in a given crystallite. The
angle $\theta$ is then the sole dynamic quantity. The time-dependent polarizability correlation function per
molecule in the given crystallite is obtained as
\begin{align}
\bigl\langle \alpha_{ZZ}^R(t) \alpha_{ZZ}^R(0)\bigr\rangle
= a^2 + \frac{b^2}{9}\sum_{i,j}\bigl[ A_{ZZ}^{(i)}A_{ZZ}^{(j)}
+ B_{ZZ}^{(i)}B_{ZZ}^{(j)}S(t)
+ C_{ZZ}^{(i)}C_{ZZ}^{(j)}C(t)
\bigr]. \label{alphaZZRmeroaxialt}
\end{align}
The correlation functions $C(t)$ and $S(t)$, defined by Eqs.\ (\ref{Ctdef}) and (\ref{Stdef}) respectively, have been calculated in
Sects.\ II and III.
The powder average for a function $F(\beta,\gamma)$ is defined as
\begin{align}
\overline{F} =
\frac{1}{4\pi}\int_0^{2\pi}d\gamma\,\int_0^{\pi}d\beta\,\sin\beta F(\beta,\gamma). \label{Fbetagammaaverage}
\end{align}
The results for the products $\overline{A_{ZZ}^{(i)}A_{ZZ}^{(j)}},
\overline{B_{ZZ}^{(i)}B_{ZZ}^{(j)}},\hdots$ are quoted in Appendix \ref{appB}. The
powder-averaged polarizability correlation function per molecule reads
\begin{align}
\overline{\bigl\langle\alpha_{ZZ}^R(t)\alpha_{ZZ}^R(0)\bigr\rangle} & = a^2 +
\frac{b^2}{9}\left[\frac{8}{15}S(t) + \frac{12}{15}C(t)\right]. \label{refaZZR}
\end{align}
Taking into account $S(t) = C(t)$, Eq.\ (\ref{alsosees}), we obtain the Raman scattering law for a
powder-averaged sample with meroaxial disorder:
\begin{align}
\overline{R_{ZZZZ}(\omega)} & = N\left(a^2\delta(\omega) + \frac{4}{27}b^2C(\omega)\right).
\end{align}
The expression for a single crystal with meroaxial disorder has been given by Eq.\ (\ref{RZZZZ2}).
In an analogous way one calculates
\begin{align}
\alpha_{ZY}^R = \frac{1}{3}\sum_{i=1}^3\alpha_{ZY}^{(i)R}
\end{align}
with the result
\begin{align}
\alpha_{ZY}^R = \frac{b}{3}\sum_{i=1}^3\bigl[A_{ZY}^{(i)}(\alpha,\beta,\gamma)
+ B_{ZY}^{(i)}(\alpha,\beta,\gamma)\sin 2\theta + C_{ZY}^{(i)}(\alpha,\beta,\gamma)
\cos 2\theta\bigr]. \label{alphaZYRmeroaxial}
\end{align}
The coefficients $A_{ZY}^{(i)},\hdots,C_{ZY}^{(i)}$ are given in Appendix \ref{appB}.
The time-dependent polarizability correlation function per molecule reads
\begin{align}
\bigl\langle \alpha_{ZY}^R(t)\alpha_{ZY}^R(0)\bigr\rangle
= \frac{b^2}{9}\sum_{i,j}\bigl[A_{ZY}^{(i)}A_{ZY}^{(j)}
+ B_{ZY}^{(i)}B_{ZY}^{(j)}S(t)
+ C_{ZY}^{(i)}C_{ZY}^{(j)}C(t)
\bigr]. \label{alphaZYRmeroaxialt}
\end{align}
The powder average of a function $F(\alpha,\beta,\gamma)$ is defined by
\begin{align}
\overline{F} =
\frac{1}{8\pi^2}\int_0^{2\pi}d\alpha\,\int_0^{2\pi}d\gamma\,\int_0^{\pi}d\beta\,\sin\beta F(\alpha,\beta,\gamma).
\label{Falphabetagammaaverage}
\end{align}
Taking into account the powder averages
$\overline{A_{ZY}^{(i)}A_{ZY}^{(j)}}$ etc., calculated in Appendix \ref{appB}, we obtain
\begin{align}
\overline{\bigl\langle\alpha_{ZY}^R(t)\alpha_{ZY}^R(0)\bigr\rangle} & = \frac{b^2}{9}\left[\frac{11}{15}S(t)
+ \frac{4}{15}C(t)\right]. \label{refaZYR}
\end{align}
The Raman scattering law then reads
\begin{align}
\overline{R_{ZYZY}(\omega)} & = N\frac{b^2}{9}C(\omega),
\end{align}
where again we have used $S(t) = C(t)$, Eq.\ (\ref{alsosees}). We see that the powder-averaged
expression is the same as the one for a single crystal
with meroaxial disorder, Eq.\ (\ref{RZYZY2}).
\section{Dynamic Meroaxial Disorder}
So far we have assumed that the orientation of the long axis ($S_4$) of the C$_\text{84}$ molecule in a given
cubic crystallite along the equivalent $\langle 100\rangle$ directions is random but static.
The sole effect of the heat bath was the uniaxial rotational diffusion studied in Sect.\ \ref{s3B}.
While this
situation
of static meroaxial disorder is realistic at temperatures inferior to say $100$ K, it becomes less valid at higher
$T$.
Here again we refer to the situation in solid C$_\text{70}$ where with increasing temperature it is found that
the uniaxial rotation axis flips between different symmetry equivalent orientations such that the rotational
motion becomes more and more isotropic \cite{Dennis,Mcrae,Maniwa,Blinc,Tycko,Christides}.
We therefore will extend the previous model and take into account the situation where a molecule at a given
lattice site in one crystallite changes orientation with the $S_4$ axis jumping randomly between equivalent potential minima in
$\langle 100 \rangle$ directions. Here the heat bath causes stochastic torques about axes perpendicular to the
long axis of the C$_\text{84}$ molecule or equivalently perpendicular to the rotation axis of the encapsulated
Sc$_\text{2}$C$_\text{2}$ quantum gyroscope. We recall that accordingly the normal to the plane of the
C$_\text{2}$
quantum rotor will change its orientation. Within a simple three-site stochastic jump model (see e.g.\ \cite{8}), the conditional
probability $p(i,t|j,0)$ to find a C$_\text{84}$ molecule in an orientation $i=1,2,3$ at time $t\ge 0$ when it was in orientation $j=1,2,3$
at time $0$ is obtained by solving a system of three linear differential equations. One obtains
\begin{align}
p(i,t|j,0) & = \frac{1}{3}(1 + 2e^{-3w t})\text{, }i=j, \label{ieqj} \\
p(i,t|j,0) & = \frac{1}{3}(1 - e^{-3w t}) \text{, }i\ne j, \label{ineqj}
\end{align}
where $w$ is the transition rate for a molecular reorientation. We associate the
transition rate with the inverse of a relaxation time:
\begin{align}
w = \frac{1}{\tau} = \frac{1}{\tau_0}e^{-E_\text{a}/T}.
\end{align}
Here we have assumed an Arrhenius-type law, known from reaction rate theory \cite{18,16}, where $1/\tau_0$ is an attempt frequency and $E_\text{a}$ an activation
energy for meroaxial reorientations of the Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ complex as a whole.
The equilibrium value of the conditional probability is independent of the initial and final orientation and
corresponds to an a priori probability:
\begin{align}
\lim_{t\longrightarrow \infty} p(i,t|j,0) = \frac{1}{3}.
\end{align}
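These expressions follow from exponentiating the $3\times 3$ rate matrix of the underlying master equation; a short numerical verification, with an arbitrary rate $w$, reads:
\begin{verbatim}
# Three-site jump model: p(i,t|j,0) from dp/dt = W p, with off-diagonal
# rates w and diagonal entries -2w.
import numpy as np
from scipy.linalg import expm

w, t = 1.0, 0.3
W = w * (np.ones((3, 3)) - 3.0 * np.eye(3))   # jump rate matrix
P = expm(W * t)                               # conditional probabilities
print(P[0, 0], (1.0 + 2.0 * np.exp(-3.0 * w * t)) / 3.0)   # i = j
print(P[1, 0], (1.0 - np.exp(-3.0 * w * t)) / 3.0)         # i != j
\end{verbatim}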
In the previous section the meroaxial orientations within a given crystallite
have been characterized by the coefficients $\bigl\{
A_{ZZ}^{(i)},B_{ZZ}^{(i)},C_{ZZ}^{(i)}\bigr\}$, $\bigl\{A_{ZY}^{(i)},B_{ZY}^{(i)},C_{ZY}^{(i)}\bigr\}$ in Eqs.\
(\ref{alphaZZRmeroaxial}) and (\ref{alphaZYRmeroaxial}) of the polarizabilities $\alpha_{ZZ}^R$ and
$\alpha_{ZY}^R$. Treating these coefficients as dynamic stochastic quantities we obtain instead of Eqs.\
(\ref{alphaZZRmeroaxialt}) and (\ref{alphaZYRmeroaxialt})
\begin{align}
\bigl\langle\alpha_{ZZ}^R(t)\alpha_{ZZ}^R(0)\bigr\rangle & = a^2 + b^2\Bigl[
\bigl\langle A_{ZZ}(t)A_{ZZ}(0)\bigr\rangle \nonumber \\
&\phantom{a^2 + b^2\Bigl[} + \bigl\langle B_{ZZ}(t) B_{ZZ}(0)\bigr\rangle S(t)
+ \bigl\langle C_{ZZ}(t) C_{ZZ}(0)\bigr\rangle C(t)
\Bigr], \label{aZZ} \\
\bigl\langle\alpha_{ZY}^R(t)\alpha_{ZY}^R(0)\bigr\rangle & = b^2\Bigl[
\bigl\langle A_{ZY}(t)A_{ZY}(0)\bigr\rangle \nonumber \\
&\phantom{b^2\Bigl[} + \bigl\langle B_{ZY}(t) B_{ZY}(0)\bigr\rangle S(t)
+ \bigl\langle C_{ZY}(t) C_{ZY}(0)\bigr\rangle C(t)
\Bigr]. \label{aZY}
\end{align}
The correlation functions $\bigl\langle A_{ZZ}(t)A_{ZZ}(0)\bigr\rangle,\hdots,
\bigl\langle C_{ZY}(t)C_{ZY}(0)\bigr\rangle$ which refer to meroaxial reorientations are calculated within the
frame of the stochastic jump model. For instance for a given set $\bigl\{A^{(i)},i=1,2,3\bigr\}$ one has
\begin{align}
\bigl\langle A(t)A(0)\bigr\rangle = \frac{1}{3}\sum_{i,j}A^{(i)}A^{(j)}p(i,t|j,0),
\end{align}
where the conditional probabilities $p(i,t|j,0)$ are given by Eqs.\ (\ref{ieqj}) and (\ref{ineqj}), while the factor $1/3$
accounts for the equilibrium initial probability. Since
the coefficients $A^{(i)}$
depend on the Euler angles which specify the orientation of a given crystallite (Sect.\ \ref{secPowder}), the
powder-averaged correlation functions are obtained by averaging over the Euler angles:
\begin{align}
\overline{\bigl\langle A(t)A(0)\bigr\rangle} = \frac{1}{3}\sum_{i,j}\overline{A^{(i)}A^{(j)}}p(i,t|j,0).
\label{AtA0}
\end{align}
Taking into account the values of the powder-averaged products given in Appendix \ref{appB}, we obtain:
\begin{align}
\overline{\bigl\langle A_{ZZ}(t)A_{ZZ}(0)\bigr\rangle} & = \frac{4}{45}e^{-3t/\tau}, \\
\overline{\bigl\langle B_{ZZ}(t)B_{ZZ}(0)\bigr\rangle} & = \frac{8}{135}\left[1 + 2e^{-3t/\tau}\right], \\
\overline{\bigl\langle C_{ZZ}(t)C_{ZZ}(0)\bigr\rangle} & = \frac{12}{135}\left[1 + 3e^{-3t/\tau}\right].
\end{align}
The powder average of Eq.\ (\ref{aZZ}) then reads
\begin{align}
\overline{\bigl\langle \alpha_{ZZ}^R(t)\alpha_{ZZ}^R(0)\bigr\rangle} = a^2 + \frac{4b^2}{27}D(t),
\end{align}
where the function $D(t)$ is given by
\begin{align}
D(t) = \bigl[C(t) + \frac{3}{5}e^{-3t/\tau} + \frac{13}{5}C(t)e^{-3t/\tau}\bigr]. \label{Dt}
\end{align}
Here we have used again $C(t) = S(t)$, Eq.\ (\ref{alsosees}).
The first term on the right-hand side, $C(t)$, accounts for the superposition of the quantum motion (tunneling) of
the C$_\text{2}$ rotor
and the uniaxial rotational diffusion of the encapsulating C$_\text{84}$ molecule; the second term, $\propto e^{-3t/\tau}$, accounts
for the classical motion of the C$_\text{2}$ rotor when its plane of motion is changing with the meroaxial
reorientations of the encapsulating C$_\text{84}$ molecule; finally, the third term, $\propto C(t)e^{-3t/\tau}$,
accounts for the interference of the two motions of the C$_\text{84}$ molecule with the tunneling of the
C$_\text{2}$ rotor.
Similarly, using again Eq.\ (\ref{AtA0}) and the powder-averaged products $\overline{\bigl(A^{(i)}_{ZY}\bigr)^2}$ etc.\ in
Appendix \ref{appB}, we find
\begin{align}
\overline{\bigl\langle A_{ZY}(t)A_{ZY}(0)\bigr\rangle} & = \frac{1}{15}e^{-3t/\tau}, \\
\overline{\bigl\langle B_{ZY}(t)B_{ZY}(0)\bigr\rangle} & = \frac{11}{135}\left[1 + 2e^{-3t/\tau}\right], \\
\overline{\bigl\langle C_{ZY}(t)C_{ZY}(0)\bigr\rangle} & = \frac{1}{135}\left[4 + 17e^{-3t/\tau}\right],
\end{align}
and hence
\begin{align}
\overline{\bigl\langle \alpha_{ZY}^R(t)\alpha_{ZY}^R(0)\bigr\rangle} = \frac{b^2}{9}D(t),
\end{align}
with $D(t)$ again given by Eq.\ (\ref{Dt}). In the limit of small relaxation time we recover Eq.\ (\ref{refaZYR}) for
static meroaxial disorder. The Raman scattering laws are given by
\begin{align}
\overline{R_{ZZZZ}(\omega)} = N\bigl(a^2\delta(\omega) + \frac{4}{27}b^2 D(\omega)\bigr), \label{f518}
\end{align}
and
\begin{align}
\overline{R_{ZYZY}(\omega)} = N\frac{b^2}{9}D(\omega).
\end{align}
The Fourier transform of $D(t)$ leads to the scattering function
\begin{align}
D(\omega) = C(\omega) + \frac{3}{5}J(\omega) + \frac{13}{5}G(\omega). \label{Domega}
\end{align}
The spectral function $C(\omega)$ is given by Eqs.\ (\ref{resComega}) -- (\ref{Lorentzian}) while
\begin{align}
J(\omega) = \frac{1}{\pi}\frac{(3/\tau)}{\omega^2 + (3/\tau)^2}, \label{Jomega}
\end{align}
is the Fourier transform of the relaxation function $e^{-3t/\tau}$. The Fourier transform of the interference
term
\begin{align}
G(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\,e^{-i\omega t}C(t)e^{-3|t|/\tau}
\end{align}
is rewritten as
\begin{align}
G(\omega) = \int_{-\infty}^{+\infty}d\omega'\,C(\omega - \omega')J(\omega'). \label{Gomega}
\end{align}
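The structure of this convolution is conveniently illustrated numerically:
convolving a sum of Lorentzians of half-width $4D_\text{R}$ with the
Lorentzian $J(\omega)$ of half-width $3/\tau$ again yields Lorentzians with
additive half-widths. The following Python sketch (line positions and weights
are purely illustrative and do not correspond to the actual transition
frequencies) assembles a toy version of Eqs.\ (\ref{Gomega}) and (\ref{Domega}):
\begin{verbatim}
import numpy as np

def lorentzian(w, w0, gamma):
    """Normalized Lorentzian of half-width gamma centered at w0."""
    return gamma / (np.pi * ((w - w0)**2 + gamma**2))

w = np.linspace(-60.0, 60.0, 4001)         # frequency grid
dw = w[1] - w[0]
four_DR, three_over_tau = 0.4, 0.6         # half-widths, toy values

# toy C(omega): a few tunneling resonances broadened by 4 D_R
lines = [(-20.0, 0.2), (-5.0, 0.5), (5.0, 0.5), (20.0, 0.2)]
C = sum(a * lorentzian(w, w0, four_DR) for w0, a in lines)
J = lorentzian(w, 0.0, three_over_tau)

G = np.convolve(C, J, mode="same") * dw    # the convolution; widths add
D = C + (3/5) * J + (13/5) * G             # low-frequency Raman response
\end{verbatim}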
Using Eqs.\ (\ref{resComega}) -- (\ref{Lorentzian}) and (\ref{Jomega}) we obtain the scattering function
\begin{align}
G(\omega) = G_{++}(\omega) + G_{--}(\omega) + G_{+-}(\omega) + G_{-+}(\omega).
\end{align}
The functions $G_{++}(\omega),\hdots,G_{-+}(\omega)$ have the same structure as
$C_{++}(\omega),\hdots,C_{-+}(\omega)$, Eqs.\ (\ref{Cpp}) -- (\ref{Cmp}), respectively, but where the Lorentzians
$F(\omega - \omega_{mm\pm 1}^{\sigma\sigma'})$, Eq.\ (\ref{Lorentzian}), are replaced by
\begin{align}
H(\omega - \omega_{mm\pm 1}^{\sigma\sigma'}) = \frac{1}{2\pi}\left[\frac{\Gamma}
{(\omega - \omega_{mm\pm 1}^{\sigma\sigma'})^2 + \Gamma^2} \right].
\end{align}
Similarly to $C(\omega)$, Eq.\ (\ref{resComega}), the function $G(\omega)$ is a sum of weighted Lorentzians
centered around
$\omega = \omega_{mm\pm 1}^{\sigma\sigma'}$ but of width $2\Gamma$ where
\begin{align}
\Gamma = 4D_\text{R} + 3/\tau. \label{gammaeq}
\end{align}
The broadening of
the transition frequencies of the quantum rotor with increasing temperature
is now due to the uniaxial rotational diffusion and the meroaxial
reorientations of the encapsulating C$_\text{84}$ molecule. Notice that both contributions depend
on temperature. In Fig.\ \ref{fig6} we have plotted the function $G(\omega)$ for several temperatures. The
parameters describing the dynamics of the C$_\text{84}$ molecule are $\zeta_n=100$ for the rotational diffusion
model and $\tau_0^{-1} =
3\times 10^{12}$ s$^{-1}$ (attempt frequency) and $E_a = 580$ K (activation energy) for the thermally activated
meroaxial reorientations.
Comparable values of the activation energy, namely $32(7)$ meV and $35(15)$ meV, have been deduced from neutron
scattering studies in solid C$_\text{70}$ \cite{Christides} and solid C$_\text{60}$ \cite{solidC60}, respectively.
While for $T=20$ K and $60$
K the contribution of $3/\tau$ to the half width $\Gamma$ is negligible in comparison to $4D_\text{R}$, both
(additive) contributions become comparable at $150$ K.
At higher $T$ the thermally-activated reorientations are dominant
and lead to a smearing out of the low-frequency resonances
in the scattering function $G(\omega)$.
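The strong temperature hierarchy of the thermally activated contribution is
apparent from the Arrhenius law itself. The following Python sketch evaluates
$3/\tau$ for the parameters quoted above (the conversion of this rate to
cm$^{-1}$ depends on the angular- versus ordinary-frequency convention adopted
for the spectra and is therefore left aside here):
\begin{verbatim}
import numpy as np

tau0_inv = 3.0e12     # attempt frequency 1/tau_0 (1/s)
E_a = 580.0           # activation energy in temperature units (K)

for T in (20.0, 60.0, 120.0, 150.0):
    rate = 3.0 * tau0_inv * np.exp(-E_a / T)   # 3/tau in 1/s
    print(f"T = {T:5.1f} K   3/tau = {rate:.3e} 1/s")
\end{verbatim}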
\begin{figure}
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig6av2.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig6bv2.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig6cv2.eps}}}
\caption{Scattering function $G(\omega)$ for $T=20$ K, $60$ K and $120$ K.
}
\label{fig6}
\end{figure}
\begin{figure}
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig7av2.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig7bv2.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{fig7cv2.eps}}}
\caption{Spectral function $D(\omega)$ of the low-frequency Raman scattering laws for $T=20$ K, $60$ K, $120$ K.
}
\label{fig7}
\end{figure}
If the tunneling motion were artificially excluded, the function $D(\omega)$ entering the Raman scattering
laws of the C$_\text{2}$ unit would reduce to a superposition of Lorentzians centered at $\omega = 0$:
\begin{align}
D(\omega) = F(\omega) + \frac{3}{5}J(\omega) + \frac{13}{5}H(\omega).
\end{align}
The first term on the right-hand side [given by Eq.\ (\ref{resFccomega})] accounts solely for the rotational
uniaxial diffusion, the
second term for the meroaxial reorientations and the last term for the interference of these classical motions of
the encapsulating C$_\text{84}$ molecule.
\section{Discussion}
It has been shown that the low-frequency (rotational) part of the Raman scattering spectrum of a powder crystal of
Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ fullerite reflects the superposition of the quantum tunneling motion of
the encapsulated Sc$_\text{2}$C$_\text{2}$ complex about its long axis and the random
classical rotational motion of the
surrounding
C$_\text{84}$ molecule.
The effect of the C$_\text{84}$ molecule on the dynamics of Sc$_\text{2}$C$_\text{2}$ is twofold.
Firstly, since the long axis of the Sc$_\text{2}$C$_\text{2}$ gyroscope
coincides with the $S_4$ axis of the
molecule, the rotation of Sc$_\text{2}$C$_\text{2}$ about this axis corresponds to the motion of the
C$_\text{2}$ bond as a planar quantum rotor in a fourfold potential \cite{6}. Secondly,
any rotation
of the C$_\text{84}$ molecule caused by torques due to the thermal lattice environment
leads to a dragging of the enclosed Sc$_\text{2}$C$_\text{2}$ unit and
hence affects the spectrum of the C$_\text{2}$ quantum rotor seen in the laboratory frame.
The low-frequency Raman spectra resulting
from the interaction of the scattering radiation with the induced dipole of the C$_\text{2}$ rotor reflect these
features.
In analogy with the dynamics of the C$_\text{70}$ molecule in solid C$_\text{70}$ \cite{Christides}, we have
assumed that
the rotational motion of the C$_\text{84}$ molecule at a lattice site in a given crystallite
is composed of two parts:
uniaxial rotational diffusion about the $S_4$ axis and stochastic jumps of the $S_4$ orientation among $\langle 100
\rangle$ directions.
The superposition of the tunneling motion of the planar quantum rotor with the classical rotations of the
C$_\text{84}$ molecule leads to the spectral function
\begin{align}
D(\omega) = C(\omega) + \frac{3}{5}J(\omega) + \frac{13}{5}G(\omega), \label{61}
\end{align}
given by Eq.\ (\ref{Domega}), in the Raman scattering laws $R_{ZZZZ}(\omega)$ and $R_{ZYZY}(\omega)$.
The function $C(\omega)$, defined by Eqs.\ (\ref{integral}) --
(\ref{Lorentzian}), accounts for tunneling transitions between the energy levels of the
encapsulated C$_\text{2}$ rotor.
The spectrum consists of a series of resonances described by Lorentzians centered at the transition frequencies
(Table \ref{hbaromega}, $q=2$)
and broadened by the uniaxial rotational diffusion (half width $4D_\text{R}$) of the surrounding C$_\text{84}$
molecule.
Since the hindering potential for the rotational diffusion about the $S_4$ axis is weak, this motion affects the
spectrum already at low $T$.
The temperature
dependence of the spectrum has been studied in Fig.\ \ref{Comegaplot}.
The term $J(\omega)$ in Eq.\ (\ref{61}) accounts for the Raman spectrum of the radiation-induced
C$_\text{2}$ dipole while the Sc$_\text{2}$C$_\text{2}$
unit is dragged along by the classical reorientations of the C$_\text{84}$ molecule among its three meroaxial
directions. This motion, which reflects the changes of the orientation of the C$_\text{2}$ rotor plane, is
described by a three-site stochastic jump model characterized by a thermally
activated relaxation time $\tau = \tau_0e^{E_a/T}$. Notice that $J(\omega)$ leads to a central resonance of
half-width $3/\tau$ in the Raman scattering law even in the absence of any quantum-mechanical tunneling of
C$_\text{2}$. The width of this central resonance (quasi-elastic peak) becomes appreciable at $T\ge 100$ K. In
the scattering law $\overline{R_{ZZZZ}}(\omega)$, Eq.\ (\ref{f518}), this quasi-elastic peak is present in addition to the
elastic Rayleigh peak. We suggest that future low-energy Raman experiments devote additional attention
to the possible identification of this temperature-dependent quasi-elastic peak.
The last term $G(\omega)$ in Eq.\ (\ref{61}) is due to the interference between the
uniaxial diffusion-modulated tunneling
motion described by $C(\omega)$ and the stochastic jump model accounted for by $J(\omega)$. The function
$G(\omega)$ is a convolution of $C$ and $J$ [see Eq.\ (\ref{Gomega})]. While at low $T$ the spectra of
$C(\omega)$ and $G(\omega)$ are very similar (compare the plots for $T=20$ K, $T=60$ K in Fig.\ \ref{Comegaplot}
and Fig.\ \ref{fig6}) they become different at higher $T$ (see the $120$ K plots) where the increasing
influence of the stochastic jumps adds to the line broadening.
The width $2\Gamma$ of the individual resonances, Eq.\ (\ref{gammaeq}), increases from
$1.11$ cm$^{-1}$ at $T=20$ K to $3.37$ cm$^{-1}$ at $T=60$ K and $11.43$ cm$^{-1}$ at $T=120$ K.
This broadening leads to an overlap of the low-frequency resonances with increasing $T$.
Finally, the sum $D(\omega)$ of these contributions, which corresponds to the low-frequency Raman response function,
is shown in Fig.\ \ref{fig7}. The quasi-elastic peak centered at
$\omega = 0$ becomes important with increasing temperature. In addition the
growing importance of $G(\omega)$ smears out the low-frequency resonances with increasing $T$ while the higher
frequency resonances remain prominent.
The overall shape of the spectral function $D(\omega)$ and its temperature evolution agree very well with the
low-frequency Raman scattering results of Ref.\ \onlinecite{6}. There is quantitative agreement with the position
of the resonance lines. The smearing out of the low-frequency resonances and the prominence of the
higher-frequency resonances with increasing $T$ (Fig.\ 3 of Ref.\ \onlinecite{6}) are well reproduced by the
present theory. In addition to the positions of the resonance lines, the theory accounts for their
temperature-dependent broadening.
In Fig.\ \ref{figexpthe} we confront the theoretical spectra $D(\omega)$ with the experimental Raman spectra,
for both $T=60$ K and $T = 120$ K.
We notice that the experimental spectra are contaminated by a plasma line at $-\omega = 29.6$ cm$^{-1}$ \cite{6}.
Note that the central parts of the
experimental spectra have been
omitted in order to remove the effect of the unshifted Rayleigh peak.
On the other hand, the theoretical spectrum
exhibits a quasi-elastic peak which is an intrinsic effect due to the
meroaxial stochastic reorientations of the Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ complex
[contribution $J(\omega)$ in $D(\omega)$]. Complementary to the present work, it would be useful to
measure the dynamics of the C$_\text{84}$ molecule in solid Sc$_\text{2}$C$_\text{2}$@C$_\text{84}$ directly,
e.g.\ by NMR, neutron, or $\mu$-spin spectroscopy.
\begin{figure}
\subfigure{\resizebox{8cm}{!}
{\includegraphics{the_exp_60Kcnvincd.eps}}} \\
\subfigure{\resizebox{8cm}{!}
{\includegraphics{the_exp_120Kcnvincd.eps}}} \\
\caption{Comparison of theoretical scattering law $D(\omega)$ (solid line),
calculated at $T = 60$ K and $T = 120$ K, with
experimental Raman results (dotted line) taken at the respective temperatures.}
\label{figexpthe}
\end{figure}
\acknowledgments
The theoretical work has been supported by the Bijzonder Onderzoeksfonds, Universiteit Antwerpen (BOF-UA).
B.V.\ is a Postdoctoral Fellow of the Research Foundation - Flanders (FWO).
The experimental work has been supported by the EU Project NANOTEMP and by the Austrian FWF
(17345-PHY).
\section*{Introduction}
A linear matrix polynomial is a symmetric matrix whose entries are real linear
polynomials in $n$ variables. Such a matrix can be evaluated in
any point of $\R^n$, and the set of points where it is positive
semidefinite is a closed convex subset of $\R^n.$ If the matrix is
diagonal, the resulting set is a polyhedron. Since sets defined by
general linear matrix polynomials inherit certain properties from
polyhedra, they are called \emph{spectrahedra}. The term
\emph{LMI (representable) sets} has sometimes also been used.
Spectrahedra have long been of interest in applications, see for
example the book of Boyd, El Ghaoui, Feron, and Balakrishnan
\cite{MR1284712}. Most importantly, spectrahedra are the feasible
sets of semidefinite programs, which have been much studied in
recent years, as explained for example in Vandenberghe and Boyd
\cite{MR1379041}. Semidefinite programming is a generalization of
linear programming for which there exist efficient algorithms.
Projections of spectrahedra will be called \emph{semidefinitely
representable sets}. They are still useful for optimization.
Indeed, instead of optimizing a linear function on the projection,
one can optimize the same function on the higher dimensional
spectrahedron itself.
In recent years, the fundamental question to characterize
spectrahedra and their projections geometrically has gained a lot
of attention. Helton and Vinnikov have introduced the notion of
rigid convexity, which is an obvious property of spectrahedra.
They show that in dimension two this property characterizes
spectrahedra, and conjecture that the same is true in arbitrary
dimension \cite{MR2292953}. As for semidefinitely representable
sets, the only known property besides convexity is that they are
semialgebraic, i.e. described by a boolean combination of
polynomial inequalities. Indeed, Helton and Nie conjecture that
every convex semialgebraic set is semidefinitely representable
\cite{HeltonNieNecSuffSDP}. Lasserre proposed a construction to
approximate convex semialgebraic sets by semidefinitely
representable sets \cite{MR2505746}. Under certain
conditions this approximation is exact, i.e.~the original set is
semidefinitely representable itself. Helton and Nie have shown that these
conditions are satisfied for a surprisingly large class of sets, see
\cite{MR2292953} Theorem 5.1. They also prove that
Lasserre's method can be applied locally for compact sets. This
allows them to show semidefinite representability for an even
larger class of sets.
In this work, we investigate the facial geometry of spectrahedra,
rigidly convex sets and semidefinitely representable sets. It is
known that all faces of a spectrahedron are exposed. We review
this fact in Section \ref{seczwei} and prove the same for
rigidly convex sets, as a consequence of Renegar's result for hyperbolicity cones \cite{MR2198215}. Our main result is Theorem \ref{main} in Section \ref{secdrei}. We prove that Lasserre's construction can
only be exact if all faces of the considered convex set are
exposed. This is a necessary condition which complements the
sufficient conditions from the above mentioned literature. We use real algebra, basic model theory, and convex geometry in our
proof.
\section{Preliminaries}
Let $\R[\ul t]$ denote the polynomial ring in $n$ variables
$\ul t=(t_1,\dots,t_n)$ with coefficients in $\R$. A subset $S$ of
$\R^n$ is called \emph{basic closed} if there exist polynomials
$p_1,\dots,p_m\in\R[\ul t]$ such that
\[
S=\sS(p_1,\dots,p_m)=\bigl\{x\in\R^n\:\bigl|\: p_1(x)\ge 0,\dots,p_m(x)\ge 0\bigr\}.
\]
A \emph{linear matrix polynomial} (of dimension $k$ in the
variables $\ul t$) is a linear polynomial whose coefficients are
real symmetric $k\times k$-matrices, i.e.~an expression $A(\ul
t)=A_0+t_1A_1+\cdots +t_nA_n$ with $A_0,\dots,A_n\in {\rm
Sym_k(\R)}$. A subset $S$ of $\R^n$ is called a
\emph{spectrahedron}, if it is defined by a linear matrix
inequality, i.e. if there exists a linear matrix polynomial $A(\ul
t)$ such that
\[
S=\sS(A)=\bigl\{x\in\R^n\:\bigl|\: A(x)=A_0+x_1A_1+\cdots+x_nA_n\succeq 0\bigr\},
\]
where $\succeq 0$ denotes positive semidefiniteness. It is obvious
that spectrahedra are closed and convex. They are also basic
closed: A real symmetric matrix is positive semidefinite if and
only if the coefficients of its characteristic polynomial have
alternating signs; write
\[
\det(A(\ul t)-sI_k)=c_0(\ul t)+c_1(\ul t)s+\cdots+c_{k-1}(\ul t)s^{k-1}+(-1)^ks^k
\]
with $c_i\in\R[\ul t]$, then
\[
\sS(A)=\sS(c_0,-c_1,\dots,(-1)^{k-1}c_{k-1}).
\]
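This equivalence between positive semidefiniteness and the alternating sign
pattern of the characteristic-polynomial coefficients is easily stress-tested
numerically. The following Python sketch (with randomly generated coefficient
matrices; agreement holds up to numerical tolerance at the boundary) compares
the sign test against an eigenvalue test on sample points:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A0, A1, A2 = (0.5 * (M + M.T) for M in rng.standard_normal((3, 4, 4)))
A0 = A0 + 2.5 * np.eye(4)        # so that A(0) is positive definite

def psd_by_eigenvalues(X, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(X) >= -tol))

def psd_by_sign_pattern(X, tol=1e-9):
    # np.poly gives det(sI - X) = s^k + b[1] s^(k-1) + ... + b[k];
    # X is psd iff (-1)^j b[j] >= 0 for all j (alternating signs)
    b = np.poly(X)
    return all((-1)**j * b[j] >= -tol for j in range(1, len(b)))

for x in rng.uniform(-2.0, 2.0, size=(1000, 2)):
    X = A0 + x[0] * A1 + x[1] * A2
    assert psd_by_eigenvalues(X) == psd_by_sign_pattern(X)
\end{verbatim}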
A further property of spectrahedra is their rigid convexity: A
polynomial $p\in\R[\ul t]$ is called a \emph{real zero polynomial
with respect to $e\in\R^n$ (RZ$_e$-polynomial)} if $p(e)>0$ and all
zeros of the univariate polynomial $p(e+sv)\in\R[s]$ are real, for
every $v\in\R^n\setminus \{0\}$. A set $S$ is called \emph{rigidly
convex} if there exists $e\in S$ and an RZ$_e$-polynomial $p$ such
that $S$ is the closure of the connected component of
$\{x\in\R^n\:|\: p(x)>0\}$ containing $e$. Rigid convexity was
introduced and studied by Helton and Vinnikov \cite{MR2292953}.
Rigidly convex sets are convex (see Section 5.3 in
\cite{MR2292953}); they are also basic closed (see Remark
\ref{Rem:RigidConvexBasicClosed} below). Furthermore, any
spectrahedron with non-empty interior is rigidly convex. The
principal reason is that if $A(\ul t)$ is a linear matrix
polynomial with $A_0\succ 0$, then $p(\ul t)=\det(A(\ul t))$ is an
RZ$_0$-polynomial defining $\sS(A)$ (see \cite{MR2292953},
Thm.~2.2). A much harder question is whether every rigidly convex
set is a spectrahedron. This has been shown for $n=2$ and
conjectured in general by Helton and Vinnikov in \cite{MR2292953}.
The question is closely related to the famous Lax-conjecture.
A subset $S$ of $\R^n$ is called \emph{semidefinitely representable}
if it is the image of a spectrahedron $S'$ in $\R^m$ under a
linear map $\R^m\into\R^n$. A linear matrix representation of $S'$
together with the linear map is called a \emph{semidefinite
representation} of $S$. In contrast to spectrahedra, no necessary
conditions other than convexity are known for a semialgebraic set
to be semidefinitely representable.
Various sufficient conditions
have recently been given by Lasserre \cite{MR2505746} as well as
Helton and Nie \cite{MR2533752},
\cite{HeltonNieNecSuffSDP}. Moreover, it has been shown that various operations, like
taking the interior or taking the convex hull of a finite union, preserve semidefinite representability, see \cite{TimRainer} and \cite{Tim}.
\section{Faces of spectrahedra and rigidly convex
sets}\label{seczwei}
In this section, we study the facial structure of spectrahedra and
rigidly convex sets (see also \cite{MR2322886} for a discussion of facial structures
in a more abstract setting). We review the result of Ramana and Goldman
that every spectrahedron has only exposed faces. We then discuss
how the same result can be proven for rigidly convex sets, mostly
by going back to Renegar's corresponding result for hyperbolicity
cones.
\begin{Defs}
Let $S$ be a closed convex subset of $\R^n$ with non-empty
interior. A \emph{supporting hyperplane} of $S$ is an affine
hyperplane $H$ in $\R^n$ such that $S\cap H\neq\emptyset$ and
$S\setminus H$ is connected (equivalently, the zero set of a
linear polynomial $0\neq \ell\in\R[\ul t]$ such that $\ell\ge 0$
on $S$ and $\{\ell=0\}\cap S\neq\emptyset$).
A \emph{face} of $S$ is a non-empty convex subset $F\subseteq S$ with
the following property: For every $x,y\in S$, $\lambda\in (0,1)$, if
$\lambda x+(1-\lambda) y\in F$, then $x,y\in F$.
A face $F$ of $S$ is called \emph{exposed} if either $F=S$ or there exists a
supporting hyperplane $H$ of $S$ such that $H\cap S=F$. The hyperplane
$H$ is said to \emph{expose} $F$.
The \emph{dimension} of a face $F$ is the dimension of its affine
hull.
\end{Defs}
\begin{Remarks}\item \label{Remark:Faces}
\begin{enumerate}
\item $H\cap S$ is an exposed face of $S$ for any supporting
hyperplane $H$ of $S$.
\item For every face $F\subsetneq S$ there exists a supporting
hyperplane $H$ of $S$ such that $F\subseteq H$.
\item Every face of $S$ is closed (since $S$ is closed).
\item If $F_1,F_2$ are faces of $S$ with $F_1\subsetneq F_2$, then
$\dim(F_1)<\dim(F_2)$.
\item Let $F$ be a face of $S$, and take $x_0$ in the relative
interior of $F$. For any two points $x\neq y\in\R^n$, let $g(x,y)$
denote the line passing through $x$ and $y$. Then $F$ consists
exactly of $x_0$ and those points $x\in S\setminus\{x_0\}$ such
that $x_0$ lies in the relative interior of $g(x,x_0)\cap S$.
\end{enumerate}
\end{Remarks}
The following is a combination of Theorem~1 and Corollary~1 in
\cite{MR1342934} (see Corollary~1 in \cite{MR2322886} for a more general statement).
\begin{Thm}[Ramana and Goldman]
Let $A(\ul t)$ be a linear matrix polynomial of dimension $k$, $S=\sS(A)$.
For every linear subspace $U$ of $\R^k$, the set
\[
F_U=\bigl\{x\in S\:|\: U\subseteq \ker\bigl(A(x)\bigr)\bigr\}
\]
is a face of $S$ or empty, and every face of $S$ is of this form.
Furthermore, every face of $S$ is exposed.
\end{Thm}
A similar result can be proven for rigidly convex sets, by
reducing to the results of Renegar on hyperbolicity cones that we
now describe: A homogeneous polynomial $P$ in $n+1$ variables is
called \emph{hyperbolic with respect to
$e\in\R^{n+1}\setminus\{0\}$} if $P(e)>0$ and all zeros of the
univariate polynomial $P(x-se)\in\R[s]$ are real, for every
$x\in\R^{n+1}$. The \emph{hyperbolicity cone of $P$} is the
connected component of $\{P>0\}$ containing $e$. It is a convex
cone in $\R^{n+1}$. Its closure is called the \emph{closed
hyperbolicity cone of $P$}.
\begin{Thm}[Renegar \cite{MR2198215}, Thm.~23]\label{Thm:RenegarExposedFaces}
The faces of a closed hyperbolicity cone are exposed.
\end{Thm}
\begin{Cor}
The faces of a rigidly convex set are exposed.
\end{Cor}
\begin{proof}
It is well-known and easy to see that a polynomial $p\in\R[\ul t]$ is
an $RZ_e$-polynomial if and only if the homogenisation $P(\ul
t,u)=u^dp(\frac{\ul t}{u})$ is hyperbolic with respect to $\wt
e=(e,1)$. Furthermore, the rigidly convex set $S\subseteq\R^n$ defined
by $p$ (i.e.~the closure of the connected component of $\{p>0\}$
containing $e$) is the intersection of $C$, the closed hyperbolicity
cone of $P$ in $\R^{n+1}$, with the hyperplane $H=\{u=1\}$.
Let $F_0$ be a face of $S$. For any two points $x\neq
y\in\R^{n+1}$, let $g(x,y)$ denote the line passing through $x$
and $y$. Take $x_0$ in the relative interior of $F_0$, and let $F$
be the set of all points $z\in C$ such that $x_0$ lies in the
relative interior of $g(z,x_0)\cap C$. One checks that $F$ is a
face of $C$ and that $F\cap H=F_0$ (see Remark \ref{Remark:Faces}
(5)). Since $F$ is exposed by Thm.~\ref{Thm:RenegarExposedFaces},
so is $F_0$.
\end{proof}
The idea of the proof of Renegar's theorem is the following: Let $P$ be a
homogeneous polynomial in $n+1$ variables $(\ul t,u)$ that is
hyperbolic with respect to $e\in\R^{n+1}\setminus\{0\}$, and let $C$
be the closed hyperbolicity cone of $P$. For every $k\ge 0$, put
\[
P^{(k)}(\ul t,u)=\frac {d^k}{ds^k} P\bigl((\ul t,u)+se\bigr)\biggl|_{s=0}.
\]
The polynomials $P^{(k)}$ are again hyperbolic with respect to $e$ (by Rolle's
theorem) and the corresponding closed hyperbolicity cones $C^{(k)}$ form an
ascending chain $C=C^{(0)}\subseteq C^{(1)}\subseteq C^{(2)}\subseteq\cdots$.
For $x\in C$, define ${\rm mult}(x)$ as the multiplicity of $0$
as a zero of the univariate polynomial $P(x+se)\in\R[s]$. If ${\rm
mult}(x)=m$, then $x$ is a boundary point of $C^{(m-1)}$ and a regular
point of $\{P^{(m-1)}=0\}$, i.e.~$(\nabla P^{(m-1)})(x)\neq
0$. Now if $F$ is a face of $C$ and $x$ is in the relative interior of
$F$, then the tangent space of $P^{(m-1)}$ in $x$ exposes $F$ as a face
of $C^{(m-1)}$ and hence as a face of $C$.
This translates into the setting of rigid convexity as follows: Let
$p\in\R[\ul t]$ be an $RZ_0$-polynomial of degree $d$, and let $S$ be
the corresponding rigidly convex set; write $p=\sum_{i=0}^d p_i$ with
$p_i$ homogeneous of degree $i$, and put $P(\ul t,u)=u^dp(\frac {\ul
t}{u})=\sum_{i=0}^d p_{d-i}(\ul t)u^i$. Define $P^{(k)}$ for $k\ge
0$ as above and put $p^{(k)}(\ul t)=P^{(k)}(\ul t,1)$, so that
\[
p^{(k)}(\ul t)=\sum_{i=k}^d \frac{i!}{(i-k)!} p_{d-i}(\ul t).
\]
The polynomials $p^{(k)}$ are again RZ$_0$-polynomials and the
corresponding rigidly convex sets form an ascending chain
$S=S^{(0)}\subseteq S^{(1)}\subseteq S^{(2)}\subseteq\cdots$. For any
$x\in S$, we find that ${\rm mult}(x)$ is the multiplicity of $0$ as a
zero of the univariate polynomial $\sum_{i=0}^d
p_{d-i}(x)(1+s)^i\in\R[s]$. A simple computation shows that ${\rm
mult}(x)$ is also the multiplicity of $1$ as a zero of the
univariate polynomial $p(sx)\in\R[s]$.
Now let $F$ be a face of $S$, let $x$ be a point in the relative
interior of $F$, and put $m={\rm mult}(x)$. Then $x$ is a boundary point of
$S^{(m-1)}$ and a regular point of $\{p^{(m-1)}=0\}$. The tangent
space $\{x+v\:|\: (\nabla p^{(m-1)}(x))^tv=0\}$ exposes $F$
as a face of $S$.
\begin{Remark}\label{Rem:RigidConvexBasicClosed}
It follows from Renegar's construction that closed hyperbolicity
cones and rigidly convex sets are basic closed semialgebraic sets.
Namely, if $C$ is the closed hyperbolicity cone of a hyperbolic
polynomial $P$ of degree $d$, then
$C=\sS(P,P^{(1)},\dots,P^{(d-1)})$; similarly, if $S$ is a rigidly
convex set corresponding to an RZ$_0$-polynomial $p$ of degree
$d$, then $S=\sS(p,p^{(1)},\dots,p^{(d-1)})$.
Alternatively, one can use the fact that the closed hyperbolicity cone of
$P$ coincides with the set of all $ x\in\R^{n+1}$ such that all
zeros of $P(x-se)\in\R[s]$ are nonnegative. This translates to an
alternating sign condition on the coefficients with respect to
$s$, as explained in Section 1.
\end{Remark}
\begin{Example}
Let $p=t_1^3-t_1^2-t_1-t_2^2+1\in\R[t_1,t_2]$. One checks
that $p$ is an irreducible RZ$_0$-polynomial. The corresponding
rigidly convex set, i.e.~the closure of the connected component of $\{p>0\}$
containing $0$, is the basic closed set $S=\sS(p,1-t_1)$.
We have ${\rm mult}(x)=1$ for every boundary point $x\in\partial
S\setminus\{(1,0)\}$, and ${\rm mult}(1,0)=2$. Furthermore,
$p^{(1)}=-t_1^2-t_2^2-2t_1+3$, $p^{(2)}=6-2t_1$. Every
$x\in\partial S\setminus\{(1,0)\}$ is a regular point of $\{p=0\}$
and is exposed as a face of $S$ by the tangent line to $\{p=0\}$
in $x$. The point $(1,0)$ is a regular point of $\{p^{(1)}=0\}$
and is exposed as a face of $S$ by the tangent line to that curve
in $(1,0)$, which is $t_1=1$. We also see that
$S=\sS(p,p^{(1)},p^{(2)})$ (though $p^{(2)}$ is redundant):
\begin{center}
\begin{tikzpicture}
\begin{scope}
\clip (-4,-2.2) rectangle (7,2.2);
\pgfsetstrokecolor{FireBrick};
\pgfsetfillpattern{north east lines}{FireBrick};
\filldraw[smooth,domain=0:2,samples=\nos] plot({\x-1},{0-(sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1))});
\draw[smooth,domain=2:4,thick] plot({\x-1},{0-(sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1))});
\filldraw[smooth,domain=0:2,samples=\nos] plot({\x-1},{sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1)});
\draw[smooth,domain=2:4,thick] plot({\x-1},{sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1)});
\pgfsetstrokecolor{DarkBlue};
\draw[thick] (-1,0) circle (2);
\pgfsetstrokecolor{DarkOliveGreen};
\draw[thick] (3,-4) -- (3,4);
\end{scope}
\draw[->] (-4,0) -- (7,0) node[right]{$t_1$};
\draw[->] (0,-2.5) -- (0,2.5) node[above]{$t_2$};
\fill[color=DarkGreen] (1,0) circle (2pt);
\end{tikzpicture}
\end{center}
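The data of this example are easily reproduced symbolically. The following
Python sketch (using SymPy, with the homogenization convention introduced
above) recovers $p^{(1)}$, $p^{(2)}$ and ${\rm mult}(1,0)=2$:
\begin{verbatim}
import sympy as sp

t1, t2, u, s = sp.symbols("t1 t2 u s")
p = t1**3 - t1**2 - t1 - t2**2 + 1

P = sp.expand(u**3 * p.subs([(t1, t1 / u), (t2, t2 / u)]))  # homogenize
p1 = sp.expand(sp.diff(P, u).subs(u, 1))      # p^(1)
p2 = sp.expand(sp.diff(P, u, 2).subs(u, 1))   # p^(2)
print(p1)   # -t1**2 - 2*t1 - t2**2 + 3
print(p2)   # -2*t1 + 6, i.e. 6 - 2*t1

# mult(1,0): multiplicity of s = 1 as a zero of p(s*x) for x = (1,0)
print(sp.factor(p.subs([(t1, s), (t2, 0)])))  # (s - 1)**2*(s + 1)
\end{verbatim}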
By the theorem of Helton and Vinnikov, $S$ is a spectrahedron.
Explicitly, let $A(t_1,t_2)=A_0+t_1A_1+t_2A_2$ with
\[
A_0=
\left(
\begin{array}{ccc}
2 & 0 & 1\\
0 & 1 & 0\\
1 & 0 & 1
\end{array}
\right),
\quad
A_1=
\left(
\begin{array}{ccc}
-2 & 0 & -1\\
0 & -1 & 0\\
-1 & 0 & 0
\end{array}
\right),
\quad
A_2=
\left(
\begin{array}{ccc}
0 & 1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{array}
\right).
\]
For the characteristic polynomial, one finds
$\chi_A(s)=c_0+c_1s+c_2s^2-s^3$ with $c_0=p$,
$c_1=-t_1^2+5t_1+t_2^2-4$, $c_2=4-3t_1$. One checks that
$S=\sS(A)=\sS(c_0,-c_1,c_2)=\sS(c_0,-c_1)$. This gives an alternative
description of $S$ as a basic closed set.
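Both descriptions are straightforward to verify symbolically; the following
Python/SymPy sketch recomputes the coefficients $c_0,c_1,c_2$ of the
characteristic polynomial of $A(t_1,t_2)$:
\begin{verbatim}
import sympy as sp

t1, t2, s = sp.symbols("t1 t2 s")
A0 = sp.Matrix([[2, 0, 1], [0, 1, 0], [1, 0, 1]])
A1 = sp.Matrix([[-2, 0, -1], [0, -1, 0], [-1, 0, 0]])
A2 = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
A = A0 + t1 * A1 + t2 * A2

chi = sp.expand((A - s * sp.eye(3)).det())   # c0 + c1*s + c2*s^2 - s^3
c0, c1, c2 = (chi.coeff(s, k) for k in range(3))

p = t1**3 - t1**2 - t1 - t2**2 + 1
assert sp.expand(c0 - p) == 0
assert sp.expand(c1 - (-t1**2 + 5*t1 + t2**2 - 4)) == 0
assert sp.expand(c2 - (4 - 3*t1)) == 0
\end{verbatim}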
\begin{center}
\begin{tikzpicture}
\begin{scope}
\clip (-4,-2.2) rectangle (7,2.2);
\pgfsetstrokecolor{FireBrick};
\pgfsetfillpattern{north east lines}{FireBrick};
\filldraw[smooth,domain=0:2,samples=\nos] plot({\x-1},{0-(sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1))});
\draw[smooth,domain=2:4,thick] plot({\x-1},{0-(sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1))});
\filldraw[smooth,domain=0:2,samples=\nos] plot({\x-1},{sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1)});
\draw[smooth,domain=2:4,thick] plot({\x-1},{sqrt((\x-1)^3-(\x-1)^2-(\x-1)+1)});
\pgfsetstrokecolor{DarkBlue};
\draw[smooth,domain=-5:1,samples=\nos,thick] plot({\x},{sqrt(\x^2-5*\x+4)});
\draw[smooth,domain=4:8,samples=\nos,thick] plot({\x},{sqrt(\x^2-5*\x+4)});
\draw[smooth,domain=-5:1,samples=\nos,thick] plot({\x},{0-sqrt(\x^2-5*\x+4)});
\draw[smooth,domain=4:8,samples=\nos,thick] plot({\x},{0-sqrt(\x^2-5*\x+4)});
\pgfsetstrokecolor{DarkOliveGreen};
\draw[thick] (4/3,-4) -- (4/3,4);
\end{scope}
\draw[->] (-4,0) -- (7,0) node[right]{$t_1$};
\draw[->] (0,-2.5) -- (0,2.5) node[above]{$t_2$};
\fill[color=DarkGreen] (1,0) circle (2pt);
\end{tikzpicture}
\end{center}
\end{Example}
\section{Exposed faces and Lasserre relaxations}\label{secdrei}
For a certain class of convex semialgebraic sets, Lasserre has
given an explicit semidefinite representation
\cite{MR2505746} (see \cite{MR1940975} for a less well known but related construction), as follows: Let $\ul p=(p_1,\dots,p_m)$
be an $m$-tuple of real polynomials in $n$ variables $\ul t$, and
set $p_0=1$. Let $\QM(\ul p)$ be the quadratic module generated by
$\ul p$, i.e.~
\[
\QM(\ul p)=\left\{\sum_{i=0}^m \sigma_i p_i\:\bigl|\:
\sigma_i\in\sum\R[\ul t]^2\right\}
\]
where $\sum\R[\ul t]^2=\{f_1^2+\cdots +f_r^2\:|\: r\ge 0,
f_1,\dots,f_r\in\R[\ul t]\}$. We denote by $\R[\ul t]_d$ the
finite-dimensional vector space of polynomials of degree at most $d$,
and write $\R[\ul t]_d^\vee$ for its (algebraic) dual. Define
\[
\QM(\ul p)_d=\left\{\sum_{i=0}^m \sigma_i p_i\:\bigl|\:
\sigma_i\in\sum\R[\ul t]^2;\: \sigma_ip_i\in\R[\ul t]_d\right\}.
\]
Note that the inclusion $\QM(\ul p)_d\subseteq\QM(\ul p)\cap\R[\ul
t]_d$ is in general not an equality. Let
\[
\sL(\ul p)_d=\bigl\{L\in\R[\ul t]_d^\vee\:\bigl|\: L|_{\QM(\ul
p)_d}\ge 0, L(1)=1\bigr\}.
\]
It is well-known that $\sL(\ul p)_d$ is a spectrahedron in $\R[\ul
t]_d^\vee$ (see for example Marshall \cite{MR2383959}, 10.5.4).
Now consider the projection $\pi\colon\R[\ul t]_d^\vee\into\R^n$,
$L\mapsto (L(t_1),\dots,L(t_n))$ and put
\[
S(\ul p)_d=\pi\bigl(\sL(\ul p)_d\bigr),
\]
a semidefinitely representable subset of $\R^n$. The idea is to
compare $S(\ul p)_d$ with $S=\sS(\ul p)$, the basic closed set
determined by $\ul p$. Note first that $S(\ul p)_d$ contains $S$
and therefore its convex hull: For if $x\in S$, let
$L_{x}\in\R[\ul t]_d^\vee$ denote evaluation in $x$; then
$L_x\in\sL(\ul p)_d$ and $\pi(L_x)=x$. Note also that the sets
$S(\ul p)_d$ form a decreasing sequence, i.e.
$$S(\ul p)_{d+1}\subseteq S(\ul p)_d$$ holds for all $d$.
\medskip We call the set $S(\ul p)_d$ the $d$-th
\emph{Lasserre relaxation} of ${\rm conv}(S)$ with respect to $\ul
p$. If there exists $d\ge 0$ such that $S(\ul p)_d={\rm conv}(S)$,
we say that ${\rm conv}(S)$ possesses an \emph{exact Lasserre
relaxation} with respect to $\ul p$. The existence of an exact
Lasserre relaxation is a sufficient condition for the semidefinite
representability of ${\rm conv}(S)$.
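For concreteness, the construction can be coded directly as a semidefinite
program. The following Python sketch (using the CVXPY modeling package; a toy
instance chosen for brevity, not one of the examples discussed below) builds
$\sL(\ul p)_2$ for the closed unit disk, where the relaxation is known to be
exact, and maximizes the linear functional $L(t_1)$ over it:
\begin{verbatim}
import cvxpy as cp

# order d = 2 relaxation for S = {(t1, t2) : 1 - t1^2 - t2^2 >= 0};
# Y[i, j] = L(b_i * b_j) for the monomial basis b = (1, t1, t2), so
# Y >> 0 encodes L >= 0 on squares of linear polynomials
Y = cp.Variable((3, 3), PSD=True)
constraints = [
    Y[0, 0] == 1,                  # L(1) = 1
    1 - Y[1, 1] - Y[2, 2] >= 0,    # L(sigma*(1 - t1^2 - t2^2)) >= 0,
]                                  # sigma constant by degree counting
prob = cp.Problem(cp.Maximize(Y[0, 1]), constraints)   # max L(t1)
prob.solve()
print(round(prob.value, 4))        # 1.0: the relaxation is exact here
\end{verbatim}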
A characterization for exactness of Lasserre relaxations is the
following proposition. The implication (2)$\Rightarrow$(1) is
\cite{MR2505746}, Thm.~2.
\begin{Prop}\label{Lasserre:Exact}
Assume that $S=\sS(\ul p)$ has non-empty interior. For $d\in\N$,
the following are equivalent:
\begin{enumerate}
\item ${\rm conv}(S)\subseteq S(\ul p)_d\subseteq \ol{{\rm conv}(S)}$;
\item Every $\ell\in\R[\ul t]_1$ with $\ell|_S\ge 0$ is contained in
$\QM(\ul p)_d$.
\end{enumerate}
\end{Prop}
\begin{proof}
We include the proof of (2)$\Rightarrow$(1) for the sake of
completeness. So assume that (2) holds and suppose that there
exists $x\in S(\ul p)_d\setminus \ol{{\rm
conv}(S)}$. Thus there is $\ell\in\R[\ul t]_1$ with $\ell|_S\ge 0$ and
$\ell(x)<0$. Furthermore, there exists a linear functional $L\colon\R[\ul
t]_d\into\R$ such that $L|_{\QM(\ul p)_d}\ge 0$, $L(1)=1$, and
$x=\bigl(L(t_1),\dots,L(t_n)\bigr)$. By assumption, $\ell$ belongs to
$\QM(\ul p)_d$, so $0\le L(\ell)=\ell(L(t_1),\dots,L(t_n))=\ell(x)<0$, a
contradiction.
For the converse, assume that (1) holds, and suppose that there
exists $\ell\in\R[\ul t]_1$ with $\ell|_S\ge 0$ but
$\ell\notin\QM(\ul p)_d$. Since $S$ has non-empty interior,
$\QM(\ul p)_d$ is a closed convex cone in $\R[\ul t]_d$ (see for
example Marshall \cite{MR2383959}, Lemma 4.1.4, or Powers and
Scheiderer \cite{MR1823953}, Proposition 2.6). Thus there exists a linear
functional $L\colon\R[\ul t]_d\into\R$ such that $L|_{\QM(\ul
p)_d}\ge 0$, $L(1)=1$, and $L(\ell)<0$ (note that $L(1)=1$ is
non-restrictive; see the little trick in Marshall \cite{MR2011395}, proof
of Theorem 3.1). Since $x=(L(t_1),\dots,L(t_n))\in S(\ul
p)_d\subseteq\ol{{\rm conv}(S)}$, we have $0\le
\ell(x)=L(\ell)<0$, a contradiction.
\end{proof}
An immediate consequence is that if ${\rm conv}(S)$ is closed (for
example if $S$ is compact or convex), then (2) implies that ${\rm
conv}(S)$ is semidefinitely representable. Lasserre shows that (2) is satisfied
for certain classes of sets, for example if all $p_i$ are linear
or concave and quadratic. These results have been extended
substantially by Helton and Nie
\cite{MR2533752,HeltonNieNecSuffSDP}.
\medskip In the following, we will give a necessary condition for (2)
in the case that $S$ is convex. Namely, all faces of $S$ must be
exposed. The following lemma and its proof are a special case of
Prop.~II.5.16 in Alfsen \cite{MR0445271}.
\begin{Lemma}
Let $S$ be a closed convex subset of $\R^n$. A face $F$ of $S$ is
exposed if and only if for every $x\in S\setminus F$ there exists a
supporting hyperplane $H$ of $S$ with $F\subseteq H$ and $x\notin H$.
\end{Lemma}
\begin{proof}
Necessity is obvious. To prove sufficiency, write $F=\bigcap_{k\ge 1}
U_k$ with $U_k$ open subsets of $\R^n$ such that $\R^n\setminus U_k$
is compact for every $k\ge 1$ (note that $F$ is closed by Remark
\ref{Remark:Faces} (3)). Fix $k\ge 1$. For each $x\in S\setminus
U_k$, we can choose by hypothesis a linear polynomial $\ell_x\in\R[\ul
t]$ such that $\{\ell_x=0\}$ is a supporting hyperplane of $S$ with
$\ell_x|_F=0$ and $\ell_x(x)>0$. Since $S\setminus U_k$ is compact, we may
choose $x_1,\dots,x_m\in S\setminus U_k$ such that $\ell_k:=\sum_{i=1}^m
\ell_{x_i}$ is strictly positive on $S\setminus U_k$. Clearly,
$\ell_k|_F=0$. Put
\[
\ell:=\sum_{k=1}^\infty \frac{\ell_k}{2^k\cdot ||\ell_k||},
\]
where $||\cdot||$ is a norm on the space of linear polynomials. Then
$\{\ell=0\}$ is a supporting hyperplane of $S$ that exposes $F$.
\end{proof}
\begin{Lemma}\label{Lemma:ExposedDimReduction}
Let $S$ be a closed convex subset of $\R^n$ with non-empty interior. A
face $F$ of $S$ is exposed if and only if $F\cap U$ is an exposed face
of $S\cap U$ for every affine-linear subspace $U$ of $\R^n$ containing
$F$ with $\dim(U)=\dim(F)+2$ and $U\cap\interior(S)\neq\emptyset$.
\end{Lemma}
\begin{proof}
Note first that the condition is empty if $F$ is of dimension $\ge
n-1$. Indeed, $F$ is always exposed in that case by Remark
\ref{Remark:Faces} (2),(4). Thus we may assume
that $n\ge 2$ and $\dim(F)\le n-2$.
If $H$ exposes $F$ and $U\cap\interior(S)$ is non-empty, then $H\cap
U$ exposes $F$ in $S\cap U$. Conversely, assume that $F\cap U$ is an
exposed face of $S\cap U$ for every $U$ satisfying the hypotheses. We
want to apply the preceding lemma. Let $x\in S\setminus F$, then we
must produce a supporting hyperplane $H$ of $S$ containing $F$ with
$x\notin H$. Choose $U$ to be an affine-linear subspace of $\R^n$ of
dimension $\dim(F)+2$ containing $F$ such that $x\in U$ and
$U\cap\interior(S)\neq\emptyset$. By hypothesis, there exists a
supporting hyperplane $G$ of $S\cap U$ in $U$ that exposes $F$ as a
face of $S\cap U$. In particular, $x\notin G$. Since $G\cap S=F$, it
follows that $G\cap\interior(S)=\emptyset$, hence by separation of
disjoint convex sets (see e.g.~Barvinok \cite{MR1940576},
Thm.~III.1.2), there exists a hyperplane $H$ that satisfies
$G\subseteq H$ and $H\cap \interior(S)=\emptyset$. Since
$U\cap\interior(S)\neq\emptyset$, it follows that $G\subseteq H\cap
U\subsetneq U$, hence $G=H\cap U$. Thus $H$ is a supporting hyperplane
of $S$ containing $F$ with $x\notin H$.
\end{proof}
We need the following technical lemma.
\begin{Lemma}\label{Lemma:ExtSep}
Let $S$ be a convex subset and $U$ be an affine-linear subspace of $\R^n$ intersecting
the interior of $S$.
Suppose that $\ell\colon\R^n\to\R$ is an affine linear function such that
$\ell\ge 0$ on $S\cap U$. Then there exists an affine linear function
$\ell'\colon\R^n\to\R$ such that $\ell'\ge 0$ on $S$ and $\ell'|_U=\ell|_U$.
\end{Lemma}
\begin{proof}
Let $N:=\{x\in U\mid\ell(x)<0\}$ and $S'$ be the convex hull of
$\{x\in U\mid\ell(x)\ge 0\}\cup S$. Then $N$ and $S'$ are convex sets
that we now prove to be disjoint.
Assume for a contradiction that
there are $\lambda\in[0,1]$, $x\in U$ and $y\in S$ such that $\ell(x)\ge0$ and
$\lambda x+(1-\lambda)y\in N$. Since neither $x$ nor $y$ lies in $N$, we have
$\lambda\not\in\{0,1\}$. Since $U$ is an affine linear subspace,
$\lambda x+(1-\lambda)y\in U$ now implies $y\in U$ and therefore $\ell(y)\ge 0$, leading
to the contradiction
$0>\ell(\lambda x+(1-\lambda)y)=\lambda\ell(x)+(1-\lambda)\ell(y)\ge 0$.
Without loss of generality $N\neq\emptyset$ (otherwise $\ell|_U=0$ and we can take
$\ell'=0$). Then by separation of non-empty disjoint convex sets
(e.g., Thm.~III.1.2 in Barvinok \cite{MR1940576}), we get an affine linear
$\ell'\colon\R^n\to\R$, not identically zero, such that
$\ell'\ge 0$ on $S'$ and $\ell'\le 0$ on $N$. In particular, $\ell'\ge0$ on $S$ and
$\ell'$ cannot vanish at an interior point of $S$.
Since $U$ intersects by hypothesis the interior of $S$, it is not possible that $\ell'$
vanishes identically on $U$. Moreover, all $x\in U$ with $\ell(x)=0$ lie at the same
time in $S'$ and in the closure of $N$, implying that $\ell'(x)=0$. This shows that
the restrictions of $\ell$ and $\ell'$ on $U$ are the same up to a positive factor
which we may assume to be $1$ after rescaling.
\end{proof}
\noindent We are now ready for the main result:
\begin{Thm}\label{main}
Let $S=\sS(\ul p)$ be a basic closed convex subset of $\R^n$ with
non-empty interior. Suppose that there exists $d\ge 1$ such that
the $d$-th Lasserre relaxation of $S$ with respect to $\ul p$ is
exact, i.e.
$$S(\ul p)_d=S$$ holds. Then all faces of $S$ are exposed.
\end{Thm}
In view of Proposition \ref{Lasserre:Exact}, we have the following
equivalent formulation of the same theorem:
\begin{Thm*}[Alternative formulation]
Let $S=\sS(\ul p)$ be a basic closed convex subset of $\R^n$ with
non-empty interior. Suppose that there exists $d\ge 1$ such that
every linear polynomial $\ell$ with $\ell\ge 0$ on $S$ is
contained in $\QM(\ul p)_d$. Then all faces of $S$ are exposed.
\end{Thm*}
\begin{proof}
We begin by showing that it is sufficient to prove that all faces of
dimension $n-2$ are exposed. Let $F$ be a face of $S$ of dimension
$e$. For $e\ge n-1$ there is nothing to show, so assume $e\le n-2$. If
$F$ is not exposed, then by Lemma \ref{Lemma:ExposedDimReduction}
there exists an affine-linear subspace $U$ of $\R^n$ containing $F$
with $\dim(U)=e+2$ and $U\cap\interior(S)\neq\emptyset$ and such that
$F$ is a non-exposed face of $S\cap U$. Furthermore, by Lemma
\ref{Lemma:ExtSep}, for every linear
polynomial $\ell$ that is psd on $S\cap U$ there exists a linear
polynomial $\ell'$ that is psd on $S$ and agrees with $\ell$ on $U$.
Upon replacing $\R^n$ by $U$ and $S$
by $S\cap U$, we reduce to the case $e=n-2$.
Now assume for contradiction that $d\ge 1$ as in the statement exists
and that $F$ is a face of dimension $n-2$ that is not exposed.
\emph{Step 1.} There is exactly one supporting hyperplane $H$ of $S$
that contains $F$. For if $\ell_1,\ell_2$ are non-zero linear polynomials with
$\ell_i|_F=0$ and $\ell_i|_S\ge 0$, put $W:=\{\ell_1=0\}\cap\{\ell_2=0\}$. Then
$\ell:=\ell_1+\ell_2$ defines a supporting hyperplane $\{\ell=0\}$ of $S$ with
$\{\ell=0\}\cap S=W\cap S$. If $\ell_1,\ell_2$ are linearly independent, then
$\dim(W)=n-2=\dim(F)$, hence $F=\{\ell=0\}\cap S$, contradicting the fact
that $F$ is not exposed.
\smallskip We may assume after an affine change of coordinates that
$H=\{t_1=0\}$, $t_1\ge 0$ on $S$, and that $0$ lies in the relative
interior of $F$. Note that any supporting hyperplane of $S$ containing
$0$ must contain $F$ and therefore coincide with $H$.
Since $F$ is not exposed, $F_0=H\cap S$ is a face of dimension $n-1$
with $F$ contained in its relative boundary. In particular, it follows
that $F$ is also contained in the closure of $\partial S\setminus H$.
\medskip
\emph{Step 2.} By the curve selection lemma (see e.g.~Thm.~2.5.5.~in
Bochnak, Coste, and Roy \cite{MR1659509}), we may choose a continuous
semialgebraic path $\gamma\colon [0,1]\into\partial S$ such that
$\gamma(0)=0\in F$, $\gamma\bigl((0,1]\bigr)\cap H=\emptyset$. We
relabel $p_0,\dots,p_m$ into two groups $f_1,\dots,f_r$,
$g_1,\dots,g_s$ as follows:
\begin{center}
\begin{tabular}{ll}
$f_i|_{\gamma([0,1])}=0$ & ($i=1,\dots,r$)\\
$g_j|_{\gamma((0,1])}>0$ & ($j=1,\dots,s$)\\
\end{tabular}
\end{center}
(Indeed, after restricting $\gamma$ to $[0,\alpha]$ for suitable
$\alpha\in (0,1]$ and reparametrizing, we can assume that each $p_i$
falls into one of the above categories.)
We claim that there exists an expression
\[
t_1=\sum_{i=1}^r \rho_i f_i + \sum_{j=1}^s \sigma_j g_j
\leqno{(\ast)}
\]
with $\rho_i,\sigma_j\in\sum\R[\ul t]^2$ and such that
$\sigma_j(0)=0$ for all $j=1,\dots,s$.
To prove the existence of the expression $(\ast)$, consider the
following statement:
\begin{itemize}
\item[($\dagger$)] For each $\lambda\in (0,1]$ there exists a
linear polynomial $\ell_\lambda\in\R[\ul t]_1$ such that
$\ell_\lambda(\gamma(\lambda))=0$, $\ell_\lambda\ge 0$ on $S$, and
$||\ell_{\lambda}||=1$. For this $\ell_\lambda$, there exist
$\rho_i^{(\lambda)},\sigma_j^{(\lambda)}\in\sum\R[\ul t]^2_d$ such
that
\[
\ell_\lambda=\sum_{i=1}^r \rho_i^{(\lambda)} f_i + \sum_{j=1}^s
\sigma_j^{(\lambda)} g_j
\]
and such that
\[
\sigma_j^{(\lambda)}(\gamma(\lambda))=0
\]
for all $j=1,\dots,s$.
\end{itemize}
The statement ($\dagger$) is true, with $d\ge 1$ not depending on
$\lambda$: For $\lambda\in (0,1]$, let $\ell_\lambda\in\R[\ul
t]_1$ be such that $\{\ell_\lambda=0\}$ is a supporting hyperplane
of $S$ passing through $\gamma(\lambda)$, and such that
$||\ell_\lambda||=1$ and $\ell_{\lambda}|_S\ge 0$. By hypothesis,
$\ell_\lambda\in\QM(\{f_i\},\{g_j\})_d$ with $d$ not depending on
$\lambda$, which yields the desired representation. Note that
$\sigma_j^{(\lambda)}(\gamma(\lambda))=0$ is automatic, since
$g_j(\gamma(\lambda))\neq 0$, but
$\ell_\lambda(\gamma(\lambda))=0$.
Furthermore, because the degree-bound $d$ is
fixed, ($\dagger$) can be expressed as a first-order formula in the
language of ordered rings. Thus ($\dagger$) holds over any real closed
extension field $R$ of $\R$, by the model-completeness of the theory of
real closed fields. Let $R$ be any proper (hence non-archimedean)
extension field and let $\epsilon\in R$, $\epsilon>0$, be an
infinitesimal element with respect to $\R$. We apply $(\dagger)$ with
$\lambda=\epsilon$ and get
\[
\ell_\epsilon=\sum_{i=1}^r \rho_i^{(\epsilon)} f_i + \sum_{j=1}^s
\sigma_j^{(\epsilon)} g_j \leqno{(\ddagger)}
\]
with
\[
\sigma_j^{(\epsilon)}(\gamma(\epsilon))=0
\]
for all $j=1,\dots,s$. Let $\mathcal{O}$ be the convex hull of
$\R$ in $R$, a valuation ring with maximal ideal $\fm$. Since
$\interior(S)\neq\emptyset$, the quadratic module
$\QM(\{f_i\},\{g_j\})$ has trivial support. As
$||\ell_\epsilon||=1$, it follows that all coefficients of the
polynomials in ($\ddagger$) must lie in $\mathcal{O}$ (see
e.g.~the proof of Lemma 8.2.3 in Prestel and Delzell
\cite{MR1829790}). We can therefore apply the residue map
$\mathcal{O}\into\mathcal{O}/\fm\isom\R$, $a\mapsto \overline{a}$
to the coefficients of ($\ddagger$). From the uniqueness of the
supporting hyperplane $H=\{t_1=0\}$ in $0$ (Step 1), it follows
that $\overline{\ell_\epsilon}=c\cdot t_1$ for some $c\in\R_{>0}$.
This yields the desired expression $(\ast)$.
\medskip
\emph{Step 3.} The existence of ($\ast$) leads to a contradiction:
Substituting $t_1=0$ in ($\ast$) gives
\[
0=\sum_{i=1}^r \rho_i(0,\ul t') f_i(0,\ul t') + \sum_{j=1}^s
\sigma_j(0,\ul t') g_j(0,\ul t')
\]
in $\R[\ul t']$, with $\ul t'=(t_2,\dots,t_n)$. Since all
$f_i(0,\ul t'), g_j(0,\ul t')$ are non-negative on $F_0$, which
has non-empty interior in $H$, it follows that $\rho_i(0,\ul
t')=0$ whenever $f_i(0,\ul t')\neq 0$. In other words, if $t_1$
does not divide $f_i$, then $t_1^2$ divides $\rho_i$ in $\R[\ul
t]$.
Going back to ($\ast$) and substituting $t_2=\cdots=t_n=0$ now gives
\[
t_1=\sum_{i=1}^r \rho_i(t_1,0) f_i(t_1,0) + \sum_{j=1}^s
\sigma_j(t_1,0) g_j(t_1,0)
\]
Since $\sigma_j(0)=0$ for all $j=1,\dots,s$, we now know that
$t_1^2$ divides all terms on the right-hand side, except possibly
$\rho_i(t_1,0)f_i(t_1,0)$ for such $i$ where $t_1|f_i$. In the
latter case, write $f_i=t_1\wt{f}_i$ and note that $\wt{f}_i$
vanishes on $\gamma((0,1])$ since $f_i$ does and $t_1$ does not.
Thus $\wt{f}_i(0)=0$ by continuity which implies
$t_1|\wt{f}_i(t_1,0)$, so $t_1^2|f_i(t_1,0)$ after all. It follows
that $t_1^2$ divides $t_1$, a contradiction.
\end{proof}
\begin{Remarks}
\begin{enumerate}
\item
Note that whether the faces of $S$ are exposed is a purely geometric
condition, independent of the choice of the polynomials $\ul p$. Thus
if $S$ has a non-exposed face, there do not exist polynomials $\ul p$
defining $S$ that yield an exact Lasserre relaxation for $S$.
\item The theorem does \emph{not} imply that a basic closed convex
set with a non-exposed face cannot be semidefinitely representable,
as we will see in the example below. We have only shown that
Lasserre's explicit approach does not work in that case.
\end{enumerate}
\end{Remarks}
\begin{Example}
Consider the basic closed semialgebraic set $S$ defined by
$p_1=t_2-t_1^3$, $p_2=t_1+1$, $p_3=t_2$, $p_4=1-t_2$.
\begin{center}
\begin{tikzpicture}
\begin{scope}
\clip (-1.5,-1.5) rectangle (1.5,1.5);
\pgfsetstrokecolor{FireBrick};
\pgfsetfillpattern{north east lines}{FireBrick};
\draw[smooth,domain=-1.2:1.2] plot({\x},{\x^3});
\draw (-2,1) -- (2,1);
\draw (-1,-1.7) -- (-1,1.7);
\filldraw[smooth,domain=0:1] plot({\x},{\x^3}) -- (-1,1) -- (-1,0) -- (0,0);
\end{scope}
\draw[->] (-2,0) -- (2,0) node[right]{$t_1$};
\draw[->] (0,-1.5) -- (0,1.5) node[above]{$t_2$};
\fill[color=DarkGreen] (0,0) circle (2pt);
\draw[-latex,color=DarkBlue,snake=coil] (1.3,-0.8)
node[right]{non-exposed face} -- (0,0);
\end{tikzpicture}
\end{center}
The point $(0,0)$ is a non-exposed face of $S$ since the only
supporting hyperplane of $S$ passing through $(0,0)$ is the
horizontal line $\{t_2=0\}$, whose intersection with $S$ is strictly
bigger than $\{(0,0)\}$. Therefore, there do not exist polynomials
$\ul p$ with $S=\sS(\ul p)$ such that all linear polynomials that
are non-negative on $S$ belong to $\QM(\ul p)_d$ for some
\emph{fixed} value of $d$. On the other hand, the preordering
generated by $p_1,p_2,p_3,p_4$ as above (i.e.~the quadratic module
generated by all products of the $p_i$) contains all polynomials
that are non-negative on $S$. This follows from results of
Scheiderer. Indeed, by the local-global principle \cite[Corollary
2.10]{MR2223624} it suffices to show that the preordering
generated by the $p_i$ is locally saturated. At the origin this
follows from the results in \cite{lpo2} (in particular, Theorem
6.3 and Corollary 6.7). At all other points it follows already
from \cite{MR2223624}, Lemma 3.1.
However, from the result of Helton and Nie, we can deduce that $S$
is in fact semidefinitely representable: For $S$ is the (convex hull
of the) union of the sets $S_1=[-1,0]\times[0,1]$ and
$S_2=\sS(t_2-t_1^3,t_1,1-t_2)$. The set $S_1$ is obviously
semidefinitely representable (even a spectrahedron), while $S_2$
possesses an exact Lasserre-relaxation: More precisely, we claim
that $\QM(t_2-t_1^3,t_1,1-t_2)_3$ contains all linear polynomials
$\ell\in\R[t_1,t_2]$ such that $\ell|_{S_2}\ge 0$. It suffices to show
this for the tangents $\ell_a=t_2-3a^2t_1+2a^3$ to $S_2$ passing
through the points $(a,a^3)$, $a\in [0,1]$ (The claim then follows
from Farkas's lemma). Write
$\ell_a=t_1^3-3a^2t_1+2a^3+(t_2-t_1^3)$. The polynomial
$t_1^3-3a^2t_1+2a^3\in\R[t_1]$ is non-negative on $[0,\infty)$ and
is therefore contained in $\QM(t_1)_3\subseteq\R[t_1]$ (see
Kuhlmann, Marshall, and Schwartz \cite{MR2174483}, Thm.~4.1),
which implies the claim.
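The underlying certificate can be written down and checked explicitly. The
following Python/SymPy sketch verifies the decomposition
$\ell_a = 2a(t_1-a)^2 + (t_1-a)^2\,t_1 + (t_2-t_1^3)$, which for $a\ge 0$
exhibits $\ell_a$ as an element of $\QM(t_2-t_1^3,t_1,1-t_2)_3$:
\begin{verbatim}
import sympy as sp

t1, t2, a = sp.symbols("t1 t2 a", real=True)
ell = t2 - 3*a**2*t1 + 2*a**3      # tangent line through (a, a^3)

# sigma_0 = 2a*(t1 - a)^2, sigma_1 = (t1 - a)^2 (multiplying t1),
# and the generator t2 - t1^3 with unit coefficient
cert = 2*a*(t1 - a)**2 + (t1 - a)**2 * t1 + (t2 - t1**3)
assert sp.expand(cert - ell) == 0

# equivalently: t1^3 - 3*a^2*t1 + 2*a^3 = (t1 - a)^2 * (t1 + 2*a)
assert sp.expand((t1 - a)**2*(t1 + 2*a)
                 - (t1**3 - 3*a**2*t1 + 2*a**3)) == 0
\end{verbatim}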
\end{Example}
\begin{Remark} \label{clopen} We do not know if the conclusion of Theorem
\ref{main} remains true for ${\rm conv }(S)$ in place of $S$, if
$S$ is not assumed to be convex. It seems unlikely that our proof
can be extended to that case. More generally, is every face of any
Lasserre relaxation exposed?
\end{Remark}
\noindent
{\bf Note added in proof:} Jo\~ao Gouveia \cite{Joao} showed that our Theorem \ref{main} is optimal in the sense that the questions in Remark \ref{clopen} have negative answers. He also gave an alternative proof of our main theorem, which is as yet unpublished.
\section{Introduction}
The hypothesis that dark matter (DM) may consist in weakly interacting massive
particles (WIMP) is currently being tested by various experiments including
direct and indirect DM probes, as well as colliders searches. In this
article,
we study a minimal setup in which a real scalar DM particle $S$ couples to the
Standard Model (SM) through interactions with a vector-like fermion. Such a
vector-like portal scenario has been the object of several previous studies,
which have focused on couplings either to light leptons~\cite{Toma:2013bka,%
Giacchino:2013bta,Giacchino:2014moa,Ibarra:2014qma} or to light quarks~\cite{%
Giacchino:2015hvk}. A distinctive feature of this class of portals is that
radiative corrections tend to play a major role in DM annihilation
phenomenology. In particular, virtual bremsstrahlung or annihilations into
mono-energetic photons and gluons may be the dominant mechanism driving the DM
relic abundance. By the same token, these may give rise to remarkable spectral
features, like a gamma-ray line that consists of a smoking gun for many DM
searches. For physical, but also technical reasons, previous studies have
nevertheless been limited to couplings to light SM leptons or quarks. In this
work we
complement these studies by considering a scenario in which the DM
particle solely couples, at tree-level, to the top quark through interactions
with a vector-like quark $T$. We explore different approaches to
investigate the phenomenological viability of the model from both collider and
cosmology standpoints.
In our predictions, we take into account several
higher-order corrections that include the QCD Sommerfeld effect and
next-to-leading-order (NLO) QCD corrections to the DM annihilation cross
section, both in the Early Universe and for what concerns indirect searches. For
the latter purpose, we have in particular computed the ${\cal O}(\alpha_s)$
corrections to the $S S \rightarrow t\bar t$ annihilation process, which
involves contributions from gluon emission both by the final-state top quarks
and by the virtual intermediate $t$-channel vector-like mediator. Although
the treatment of the associated infrared and collinear divergences is more
involved for heavy quarks than when the DM candidate is coupled to light
fermions, we only comment briefly on the associated difficulties and refer
instead to Ref.~\cite{Colucci:2018qml} and Ref.~\cite{Bringmann:2017sko} for details on
the scalar DM and Majorana DM cases, respectively. We complement these constraints stemming from the relic density of DM
and its indirect detection null results by a study of the relevance of existing
direct DM probes. Our calculations take into account the effective coupling
of the dark scalar $S$ to gluons through loops involving top quarks and $T$
mediators~\cite{Hisano:2010ct}.
Along different lines, we estimate how collider
searches for both the vector-like partner $T$ and the DM particle $S$
restrict the model. We extend a previous study relying on simplified model
results from the Run~1 of the Large Hadron Collider (LHC)~\cite{Kraml:2016eti}
by considering more recent LHC Run~2 supersymmetry searches that can be recast
to constrain any model featuring strongly-interacting quark partners decaying
into a final state comprising missing energy and several SM objects~\cite{Chala:2017xgc}. We
moreover
include NLO QCD corrections through the computation of the corresponding matrix
elements and match the fixed-order predictions with parton showers~\cite{%
Fuks:2016ftf}, so that a state-of-the-art modeling of the LHC signals is used.
We additionally investigate the reach of the dedicated DM searches at
the LHC in the mono-X channels, where the final-state signature consists of a
pair of DM particles recoiling against a very hard SM object X.
The plan of this article is as follows. In section \ref{sec:model} we define the
model and the associated parameter space. In section~\ref{sec:relic}, we discuss
our calculation of the DM relic abundance and how the latest results constrain
the parameter space. In section \ref{sec:astro_bounds} we further derive bounds
stemming from DM direct and indirect detection searches, and we finally address
the collider phenomenology of the model in section \ref{sec:lhc}. We emphasize
the complementarity of the different approaches in section \ref{sec:summary}, in
which we summarize the various cosmological and collider bounds that we have
obtained.
\section{Theoretical framework}
\label{sec:model}
We consider a simplified top-philic DM setup in which we extend the
Standard Model with a real scalar DM candidate $S$ with a mass $m_S$
and whose interactions with the Standard Model are mediated by exchanges of a
heavy vector-like quark $T$ of mass $m_T$. The $T$ quark is as usual considered
as lying in the fundamental representation of the QCD gauge group $SU(3)_c$, and
we focus on a minimal option where it is a weak isospin singlet with an
hypercharge quantum number set to 2/3. In order for the $S$ particle to be a
stable DM candidate, we impose a $\mathbb{Z}_2$ discrete symmetry under which
all Standard Model fields are even and the new physics states are odd.
Provided the $\mathbb{Z}_2$ symmetry is unbroken, it
prevents the $S$ field from mixing with the Standard Model Higgs doublet $\Phi$
and forbids the mixing of the $T$ quark with the Standard Model
up-type quark sector.
Our model is described by the Lagrangian
\begin{equation}\bsp
{\cal L} =&\ {\cal L}_{\rm SM}
+ i \bar T \slashed{D} T - m_T \bar T T
+ \frac 12 \partial_\mu S \partial^\mu S\\
&\ - \frac 12 m_S S^2 + \Big[ \tilde{y}_t\, S\ \bar T P_R t + {\rm h.c.} \Big]
-\frac12 \lambda S^2 \Phi^\dag \Phi\ ,
\esp\label{eq:lag}\end{equation}
where $P_R$ denotes the right-handed (RH) chirality projector and $t$ the top quark
field. The interaction strength between the mediator $T$, the DM and the SM
sector (or equivalently the top quark) is denoted by $\tilde{y}_t$. Like the DM, the vector-like mediator field $T$ is odd under $\mathbb{Z}_2$ but otherwise transforms as the RH top field under $SU(3)\times SU(2) \times U(1)$ (and so has electric charge $Q = + 2/3$).
A similar effective
Lagrangian has been considered in Ref.~\cite{Giacchino:2015hvk} in the
case of a DM particle coupling to light quarks, and, more recently, in
Refs.~\cite{Baek:2016lnv,Baek:2017ykw} for a coupling to the top
quark. Compared to these last two studies, our analysis
differs in the treatment of the radiative corrections
that are relevant for the relic abundance, DM indirect and direct
detection as well as for the modeling of the collider signals.
The core of this work focuses on the phenomenological implications of the
presence of a colored vector-like $T$ particle mediating the interactions
of dark matter with the Standard Model. We therefore assume that the coupling of
the DM particle to the Higgs boson $\lambda$ appearing in the Lagrangian of
Eq.~\eqref{eq:lag} can be neglected, so that we set $\lambda = 0$. We moreover
assume that any loop contribution to $\lambda$ can be absorbed in the
renormalization procedure and thus ignored. Details on
departures from this hypothesis can be found in Ref.~\cite{Baek:2016lnv}.
This contrasts with the analogous case in which the dark matter particle
is a Majorana fermion that couples to the SM top quark through a scalar
colored mediator, as in the latter new physics setup, an effective DM-Higgs
coupling arises at the one-loop level, is calculable and
finite~\cite{Garny:2018icg}.
The relevant model parameter space is therefore defined by three parameters,
namely the two new physics masses $m_T$ and $m_S$, and the Yukawa
coupling $\tilde{y}_t$.
\section{Dark matter relic density}
\label{sec:relic}
\subsection{Radiative corrections}
\label{sec:bremsstr-radi-corr}
It has been recently shown that radiative corrections to the DM annihilation
cross section play a significant role in the phenomenology of a real scalar DM
candidate, either through internal bremsstrahlung or via new channels that open
up (like for instance when DM annihilates into a pair of
monochromatic gluons or photons)~\cite{Giacchino:2013bta,Toma:2013bka,%
Giacchino:2015hvk,Giacchino:2014moa,Ibarra:2014qma}. All these analyses have
however been restricted to scenarios featuring a DM particle coupling
to light SM
quarks or leptons, so that the corresponding fermion masses could be neglected
and the calculation performed in the so-called chiral limit. When non-vanishing
SM fermion masses are accounted for, the computation of the radiative
corrections to the annihilation cross section is plagued by infrared
divergences that must be consistently handled, as studied in detail
for Majorana DM~\cite{Bringmann:2015cpa}. The scalar DM case has been thoroughly
analyzed by some of us~\cite{Colucci:2018qml}, so that we summarize in this section the
points that are the most relevant for our study.
The calculation of the annihilation cross section associated with the
$S S \rightarrow t\bar t$ process at ${\cal O}(\alpha_s)$ involves
contributions both from final state radiation (FSR) and from virtual internal
bremsstrahlung (VIB) diagrams. The corresponding amplitudes exhibit a specific
dependence on the kinematics, which reflects in distinguishable features in the
spectrum of radiated photons or gluons. In particular, VIB tends to yield a
final-state energy spectrum that peaks at high energies $E_{\gamma,g} \lesssim
m_S$. Whilst FSR contributions also lead to the emission of a hard gluon or
photon~\cite{Birkedal:2005ep}, the related spectral feature is less pronounced
than in the VIB case, unless VIB is relatively suppressed. For a fixed
DM mass $m_S$, the relative FSR and VIB weights are controlled by the mass of
the vector-like mediator $m_T$ and by the final state quark mass ({\it i.e.},
the mass of the top quark $m_t$). FSR turns out to be less important as
$m_t/m_S$ decreases, since the contribution to the annihilation cross section is
proportional to the leading order (LO) result which, in an $s$-wave
configuration, is helicity suppressed. In the chiral limit, $m_t/m_S \to 0$ and
FSR can thus be neglected. On the other hand, the VIB spectral features are
controlled by the $m_T/m_S$ mass ratio, and the energy spectrum peaks toward
$E_{\gamma,g} \sim m_S$ as $T$ and $S$ become more mass-degenerate.
For generic particle masses, both FSR and VIB features are present and must be
accounted for. This consequently requires a consistent handling of the infrared
and collinear divergences of the FSR amplitude, that are only cancelled out
after including the virtual contributions as guaranteed by the Kinoshita-Lee-%
Nauenberg theorem. The associated computations are facilitated when
carried out in an effective approach (with a contact $SS t\bar t$ interaction)
that suits well for the annihilation of non-relativistic DM particles in the
soft and collinear limit~\cite{Colucci:2018qml,Bringmann:2015cpa}. The hard part of the
spectrum is then described by the $SS \rightarrow t \bar t g$ (or $t\bar t
\gamma$) contribution as calculated from the full theory of Eq.~\eqref{eq:lag},
and the two results are matched by using a cutoff on the energy of the radiated
gluon (photon). This approach allows us to get a
regularized expression for the total $SS$ annihilation cross section at the NLO
accuracy that is valid for a broad range of parameters~\cite{Colucci:2018qml}.
The procedure outlined above justifies the fact that for a large part of the
parameter space, one may rely on a simple approximation for the total
annihilation cross section,
\begin{equation} \label{eq:svttgall}
\renewcommand{\arraystretch}{1.3}
\sigma v_{t\bar t} |_{\rm NLO} \approx
\left\{
\begin{array}{ll}
\sigma v_{t\bar t} & m_S<300 \,{\rm GeV,}\\
\sigma v_{t\bar tg}\vert_{m_t=0}+\sigma v_{t\bar t} &m_S>300 \,{\rm GeV.}
\end{array} \right.
\end{equation}
In this expression, $\sigma v_{t\bar t}$ is the $s$-wave contribution to the LO
annihilation cross section,
\begin{equation}
\label{eq:SStoqq}
\sigma v_{t \bar t}^{s{\rm-wave}} = \frac{3 \tilde{y}_t^4 }{4 \pi m_S^3} \frac{m_t^2\, (m_S^2 - m_t^2)^{3/2}}{(m_S^2+m_T^2- m_t^2)^2} \ ,
\end{equation}
and $\sigma v_{t\bar tg}\vert_{m_t=0}$ is the ($s$-wave) annihilation
cross section as obtained in the chiral limit and when a single gluon
radiation is included. Its explicit form can be found in
Refs.~\cite{Ibarra:2014qma,Giacchino:2013bta,Colucci:2018qml}. The difference with
respect to the exact result is only large for $m_S \simeq m_t$, and reaches at most
30\% beyond this regime (see Fig.~\ref{fig:svtt-g-ratios} discussed in the
framework of section~\ref{sec:parameter-space}). When $m_S \rightarrow
m_t$, the treatment used for the derivation of $\sigma v_{t\bar tg}$
breaks down due to threshold corrections that affect the production
of a top-antitop system nearly at rest~\cite{Drees:1990dq}, an artefact that is
visible in Fig.~\ref{fig:svtt-g-ratios} in the region below $m_S$ of about
300~GeV. For $m_S \sim 300$ GeV,
$\sigma v_{t\bar t}|_{\rm NLO}\approx\sigma v_{t\bar t}$.
Whereas the procedure allowing one to deal with threshold effects relevant for this
mass configuration is in principle well-known~\cite{Drees:1989du}, its
implementation goes beyond the scope of our work. Those effects not
only concern a narrow range of parameters, but, in the absence of toponium bound
states, they are also expected to yield small and sub-leading corrections to the
LO annihilation cross section~\cite{Colucci:2018qml}. For this reason, the LO
annihilation cross section $\sigma v_{t \bar t}$ is used for scalar masses below
about 300~GeV. For larger scalar masses, we additionally include the
contribution of internal bremsstrahlung, calculated in the massless quark limit.
Such an approximation provides a smooth transition to the mass regime in which
gluon emission constitutes the dominant contribution to the annihilation cross
section, {\it i.e.}, for $m_S$ of a few TeV~\cite{Colucci:2018qml}.
\subsection{Relic abundance}
\label{sec:parameter-space}
\begin{figure*}
\begin{center}
\hspace*{-1.cm}
\begin{tabular}{cc}
\includegraphics[width=8cm]{fint-m-y-1.png}&
\includegraphics[width=8cm]{fint-m-r-1-1.png}
\end{tabular}
\end{center}
\caption{Region of our parameter space for which one can accommodate a relic
abundance of $\Omega h^2=0.12$. The results are shown in the $(m_S,
\tilde{y}_t)$ plane (left) and $(m_S,r-1)$ plane (right), the color code
being associated with the value of the $r-1$ and $\tilde{y}_t$ parameters
respectively. For comparison, the dotted black contour in the right panel
represents the expected parameter space coverage in the case of a scalar DM
particle coupling to right-handed up quarks $u_R$.}
\label{fig:viab}
\end{figure*}
In order to determine the relic abundance of the dark $S$ particle, we
consider the freeze-out mechanism for DM production in the Early
Universe and make use of the {\sc MicrOMEGAs}
code~\cite{Belanger:2014vza}, which we have modified in order to
accommodate some of the particularities of our model. These include dark
matter annihilations into a $tWb$ three-body final state once the DM mass lies
below the top threshold, the radiative corrections mentioned in
section~\ref{sec:bremsstr-radi-corr} and Sommerfeld effects. The
latter especially affect vector-like fermion annihilation and dark
matter co-annihilation with a mediator, these corrections affecting
the relic abundance by at most 15\% (see
appendix~\ref{sec:somm-corr}). In addition, DM annihilations
into a $tWb$ system play a non-negligible role for DM masses lying in
the $[(m_t+m_W)/2, m_t]$ mass window, and we have included these
contributions by evaluating them numerically with {\sc
CalcHEP}~\cite{Belyaev:2012qa}. Finally, we have added the
loop-induced $SS\to gg$ and $SS\to\gamma\gamma$ processes in the
computation of the DM annihilation cross
section~\cite{Giacchino:2014moa,Ibarra:2014qma}. The annihilation into
gluons is in particular significant for DM masses below the
top threshold.
We present the results in Fig.~\ref{fig:viab}, under the form of two
two-dimensional projections of our three-dimensional parameter
space. In the left panel of the figure, we show the region of the
$(m_S, \tilde{y}_t)$ plane for which there exists a mediator mass
value yielding a relic density $\Omega h^2=0.12$ compatible with the Planck
results~\cite{Ade:2013zuv}.
The gradient of colors in Fig.~\ref{fig:viab} is associated with the
relative mass difference between the DM and the mediator, given by
$r-1$ with
\begin{equation}
r = \frac{m_T}{m_S} \, .
\end{equation}
Similarly, we present in the right panel of Fig.~\ref{fig:viab} the region
of the $(m_S,r-1)$ plane for which there exists a $\tilde{y}_t$ coupling value,
shown through a color code, leading to the observed relic abundance. The Yukawa
coupling is enforced to lie in the $[10^{-4}, 6]$ window, the upper bound being
an extreme value at the limit of the
perturbative regime (defined by $\tilde y_t g_s/4\pi < 1$) and the lower bound
guaranteeing the correct treatment of the co-annihilation processes by {\sc
MicrOMEGAs}. For $\tilde{y}_t > 10^{-4}$, co-annihilation processes like $St \to
Tg$ occur in chemical equilibrium, and the DM abundance is determined by a
single Boltzmann equation involving an effective annihilation cross section
accounting for co-annihilations~\cite{Edsjo:1997bg}. For smaller $\tilde{y}_t$
values, thermal freeze-out could still yield the observed DM abundance, but a
larger system of Boltzmann equations involving the abundance of both the $T$ and
$S$ particles has to be accounted for in order to precisely determine the
departure from chemical equilibrium~\cite{Garny:2017rxs,Garny:2018icg}. This
issue is left for a possible future work.
The two panels of Fig.~\ref{fig:viab} provide complementary information. In the
$(m_S, \tilde y_t)$ plane, one observes parameter space regions in which the DM
abundance is driven by co-annihilation processes and so feature little
dependence on the $\tilde y_t$ value. They correspond to setups for which
$m_T/m_S-1$ is at most of ${\cal O}(0.1)$, and which are represented by the thin
dark blue region in the complementary $(m_S, m_T/m_S-1)$ plane. For the sake of
comparison, we also superimpose in
Fig.~\ref{fig:viab} the limits of the viable parameter space (black
dotted contour) of a model for which the DM couples to the
right-handed up quark $u_R$. We refer to Ref.~\cite{Giacchino:2015hvk}
for more details.
The viable part of the parameter space can be divided into three distinct
regions according to the DM mass $m_S$.
\begin{figure}
\includegraphics[width=8cm]{fint-m-svrat-1.png}
\caption{Ratio of the exact NLO DM annihilation cross section
$\sigma v_{ t \bar tg}|_{\rm NLO}$~\cite{Colucci:2018qml} to the two-body LO cross
section $\sigma v_{t\bar t}$. This shows that gluon radiation constitutes the
dominant component of the annihilation cross section for DM masses
satisfying $m_S \gtrsim 5$~TeV.
In the figure, all points correspond to models matching the correct
DM abundance and the color code represents the value of $r-1$.}
\label{fig:svtt-g-ratios}
\end{figure}
\noindent $\bullet$ \underline{$\mathbf{m_S>5}$ {\bf TeV}}. For very
heavy DM, the mass of the top quark only plays a subleading role. This
is clearly visible in the right panel of Fig.~\ref{fig:viab}, where
the viable region of the parameter space of the top-philic scenario
matches the one expected in the up-philic case. In this regime,
$m_S\gg m_t$ and the chiral limit approximation for the DM
annihilation cross section is valid. Moreover, VIB
corrections are large, as illustrated in Fig.~\ref{fig:svtt-g-ratios}
where we show, for all benchmark points giving rise to the right DM
abundance in Fig.~\ref{fig:viab}, the ratio of the exact NLO
result~\cite{Colucci:2018qml} to the LO predictions $\sigma v_{t\bar t}$.
The importance of the
NLO corrections will be further discussed in the context of DM
indirect detection bounds in section~\ref{sec:indirect}.
\vspace{.5cm}
\noindent $\bullet$ \underline{$\mathbf{m_t <m_S <5}$ {\bf TeV}}. In
this regime where the DM mass is moderate, the tree-level $s$-wave
$SS\to t\bar t$ contribution to the annihilation cross section
dominates, as additionally illustrated in Fig.~\ref{fig:svtt-g-ratios}
where the NLO to LO ratio is close to 1. Notice that the feature
observed for $m_S \sim m_t$ in Fig.~\ref{fig:svtt-g-ratios} is
spurious as correct predictions must include threshold effects that we
have ignored. The LO annihilation into a pair of quarks is, in
contrast, completely negligible in the light quark case for which the
relic density is driven by loop-induced annihilations into
gluons~\cite{Giacchino:2015hvk}. The phenomenologically viable region of
the parameter space in the top-philic scenario consequently strongly
deviates from the corresponding one in the up-philic model, as shown
in the right panel of Fig.~\ref{fig:viab}. Given that finite quark
mass effects are significant, larger $r$ parameters are found
acceptable for a given DM mass in the top-philic case.
\noindent $\bullet$ \underline{$\mathbf{m_S < m_t}$}. In this regime,
the DM abundance is driven either by annihilations into a $tWb$ system
via a virtual top quark (for $m_S \lesssim m_t$), by
loop-induced annihilations into gluons, or by co-annihilations
with the mediator. Any other potential contribution, like DM
annihilations into pairs of SM particles through the Higgs portal (as
it occurs in the scalar singlet DM
scenario~\cite{Cline:2013gha,Athron:2017kgt}) is here irrelevant since we have set
the $\lambda$ quartic coupling in Eq.~\eqref{eq:lag} to
zero. Co-annihilations particularly play an important role near
$m_T+m_S\simeq m_t$, as the $ST\to t\to tg$ channel is resonantly
enhanced. This corresponds to the light-yellow region in the left
panel of Fig.~\ref{fig:viab} for $m_S\sim70-80$ GeV, and to the blue
peak in the right panel of the figure for the same $m_S$
values. Annihilations into monochromatic gluons are only important
when the mass of the mediator is large enough to close all
co-annihilation channels, and annihilations into a $tWb$ three-body
system are only relevant close to threshold, for $m_S\in [(m_t+m_W)/2,
m_t]$.
\section{Direct and indirect constraints}
\label{sec:astro_bounds}
\subsection{Direct detection constraints}
\label{sec:direct-detect-constr}
\begin{figure}
\begin{center}
\includegraphics[width=.32\columnwidth]{Fig3-left.pdf}
\includegraphics[width=.32\columnwidth]{Fig3-center.pdf}
\includegraphics[width=.32\columnwidth]{Fig3-right.pdf}
\end{center}
\caption{Feynman diagrams relevant for DM-nucleon scattering.}
\label{fig:diagDD}
\end{figure}
In the limit in which the quartic coupling of $S$ to the Higgs boson vanishes,
the DM nucleon scattering cross-section can be computed from the evaluation of
the one-loop diagrams shown in Fig.~\ref{fig:diagDD}. This allows one to derive
an effective Lagrangian for the DM coupling to gluons,
\begin{equation}
\mathcal{L}_{g} = C_S^g \, \frac{\alpha_s}{\pi} S^2 \, G^{\mu \nu} G_{\mu \nu} \,, \label{eq:Lg_eff}
\end{equation}
where the Wilson coefficient $C_S^g$ includes both short and long-distance
contributions (relative to the momentum scale involved in the
loop)~\cite{Hisano:2010ct,Gondolo:2013wwa}. The resulting effective
spin-independent coupling $f_N$ of the scalar DM particle $S$ to a
nucleon $N$ of mass $m_N$ is then given by~\cite{Drees:1993bu}
\begin{equation}
\frac{f_N}{m_N} = - \frac89 C_S^g f_{T_G}^{(N)}
\ \ \text{with}\ \
f_{T_G}^{(N)}=1-\sum_{q=u,d,s}f_{T_q}^{(N)} \ ,
\label{eq:fN} \end{equation}
where the quark mass fractions $f_{T_q}^{(N)}$ and the analytical expression for
$C_S^g$ can be found in Ref.~\cite{Hisano:2015bma}.
We compute the total spin-independent cross section $\sigma_A$ for DM
scattering off a nucleus with charge $Z$ and a mass number $A$ by taking the
coherent sum of the proton and the neutron contributions,
\begin{equation}
\sigma_A = \frac{m_A^2}{\pi (m_S+m_A)^2} \bigg[ Z f_p + (A-Z)f_n \bigg]^2 \,,
\label{eq:sigma_general}
\end{equation}
where $f_p$ and $f_n$ denote the DM couplings to a proton and a
neutron, derived from Eq.~\eqref{eq:fN} with $N=p$ and $n$ respectively, and
$m_A$ is the nucleus mass.
\begin{figure}
\centering
\includegraphics[width=.98\columnwidth]{fint-r-m-SI-1.png}
\caption{DM-proton spin-independent scattering cross section as a function of
the DM mass $m_S$. For each scenario, the coupling to the top quark and the
value of the $r-1$ parameter (shown through the color code) are fixed
to reproduce the observed relic density. The continuous red line represents
the 90\% confidence level exclusion of the Xenon 1T experiment~\cite{%
Aprile:2017iyp}, the orange dashed line the Xenon 1T reach~\cite{%
Aprile:2015uzo} and the red dashed line the neutrino
floor~\cite{Billard:2013qya}.}
\label{fig:DD1}
\end{figure}
In Fig.~\ref{fig:DD1}, we present the dependence of the DM scattering
cross section on protons $\sigma_{SI}$, calculated as described above, for all DM
scenarios of Fig.~\ref{fig:viab}. For $m_S\lesssim m_t$, the models
featuring the largest $\sigma_{SI}$ values are those with the largest
$\tilde{y}_t$ value and for which the relic density is driven by
annihilations into a pair of gluons. As in the left panel of
Fig.~\ref{fig:viab}, the yellow region around $m_S \sim 80$~GeV
corresponds to scenarios for which resonant co-annihilations of the
$S$ and $T$ particles into a top quark play a leading role. Above the
top mass threshold, the Yukawa coupling required to match a correct
relic abundance drops, and so does the elastic scattering cross
section. The figure finally exhibits a bump above $m_S \gtrsim
2.5$~TeV, which corresponds to setups in which $m_S + m_t \sim
m_T$. The $C_S^g$ coefficient is consequently enhanced, which
directly impacts the elastic cross section~\cite{%
Hisano:2015bma}.
For most DM models, however, $\sigma_{SI}$ lies below the neutrino
floor, except for some scenarios with a DM candidate lighter than the
top quark. The constraints originating from the results of the
Xenon 1T experiment after 34 days of exposure~\cite{Aprile:2017iyp}
are also indicated, together with predictions under the assumption of
2.1 years of data acquisition~\cite{Aprile:2015uzo}. Although a large
part of the parameter space region lying above the neutrino floor is within the reach of Xenon~1T, a
significant fraction of it will stay unconstrained in the near future
by DM direct detection searches. The corresponding excluded region
projected in the $(m_S, r-1)$ plane is presented in the summary of
Fig.~\ref{fig:summary}, after
accounting for the latest bounds from the Xenon 1T
experiment (red region), together with the region
that could be tested up to the neutrino floor (red dashed contour).
\subsection{Indirect detection constraints}
\label{sec:indirect}
\begin{figure}
\begin{center}
\includegraphics[width=0.91\columnwidth]{fint-m-svtt-2-1.png}\\[-.1cm]
\includegraphics[width=0.91\columnwidth]{fint-m-svttNLO-2-1.png}\\[-.1cm]
\includegraphics[width=0.91\columnwidth]{fint-m-svttg0-mg300-2-1.png}
\caption{LO (upper panel) and NLO (central panel) $SS \to t\bar t$
annihilation cross sections at zero velocity, where the NLO
results are evaluated using the approximation of
Eq.~\eqref{eq:svttgall}, as well as the $SS\rightarrow \bar t t
g$ annihilation cross section in the chiral
limit~\cite{Giacchino:2013bta} (lower panel). We superimpose on
our results the indirect detection limits obtained from the
cosmic ray (CR) analysis of Ref.~\cite{Cuoco:2017iax} (green
continuous line), as well as the bounds that could be expected
after 15 years of Fermi-LAT running when dwarf spheroidal galaxy
data in the $b\bar b$ channel is analyzed~\cite{%
Charles:2016pgz} (dot-dashed orange line). An
estimate of the upper limits expected from gamma-ray line
H.E.S.S. data~\cite{Rinchiuso:2017kfn} is also presented (see
the text for details) in the lower panel (gray). The color code
represents the value of the $r-1$ parameter and we have
considered DM models satisfying the relic density
constraints of section~\ref{sec:relic}.
\label{fig:svtt-g}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.98\columnwidth]{fint-m-svgg-2-1.png}\\
\caption{Predictions for the $SS\rightarrow gg$ annihilation cross
section, to which we superimpose the upper limits extracted from
the cosmic ray analysis of dwarf spheroidal galaxy data in the
$b\bar b$ channel from Fermi-LAT~\cite{Ackermann:2015zua} using
current results (dark green continuous line) and projected results
assuming 15 years of data acquisition (orange dot-dashed
line)~\cite{Charles:2016pgz}. We also indicate the upper limits
obtained from the gamma-ray line analysis of
Fermi-LAT~\cite{Ackermann:2015lka} and
H.E.S.S.~\cite{Rinchiuso:2017kfn} by gray dotted and double-dot-dashed
lines, respectively (see the text for details). The color code
represents the value of the $r-1$ parameter and we have
considered DM models satisfying the relic density constraints
of section~\ref{sec:relic}.
\label{fig:svg}}
\end{center}
\end{figure}
In Figs.~\ref{fig:svtt-g} and \ref{fig:svg}, we present, for all scenarios
satisfying the relic density constraints of section~\ref{sec:relic}, the value
of the DM annihilation cross section at zero velocity into varied final states
and using different approximations. In the upper panel of Fig.~\ref{fig:svtt-g},
we show the LO contribution to the $SS\to t\bar t$ channel, whilst the NLO
corrections, computed in the approximation of Eq.~\eqref{eq:svttgall}, are
included in the central panel. In the lower panel of the figure, we only show the gluon emission contributions, $SS\to t \bar t g$, computed in the
chiral limit for $m_S>300$ GeV. The (loop-induced) contributions of the $SS\to gg$ channel to the
annihilation cross section are evaluated in Fig.~\ref{fig:svg}.
Comparing the upper and central panels of Fig.~\ref{fig:svtt-g}, we
observe that QCD emissions play a significant role for $m_S>2$ TeV, as
already visible in Fig.~\ref{fig:svtt-g-ratios} (in which the exact
NLO results from Ref.~\cite{Colucci:2018qml} have been employed). In contrast,
Fig.~\ref{fig:svg} shows that annihilations into pairs of gluons are
only relevant for $m_S < m_t$ (see also section~\ref{%
sec:relic}). Moreover, $\sigma v_{gg}$ exhibits a minimum around
$m_S\sim 280$~GeV independently of the value of $r$. This minimum is
connected to a change of sign at the level of the loop-amplitude that
always happens for $m_S \in [270,290]$~GeV (see
appendix~\ref{sec:ggloop} for an analytic expression of $\sigma
v_{gg}$). As in Fig.~\ref{fig:viab}, the yellow region around $m_S \sim
60-70$ GeV in Fig.~\ref{fig:svg} corresponds to models with a DM abundance
dominated by the resonant co-annihilation of a $TS$ system into a
top quark.
\begin{figure}
\begin{center}
\includegraphics[width=0.98\columnwidth]{bbbar-gg-tWb-150.pdf}
\includegraphics[width=0.98\columnwidth]{bbbar-ttbar-250.pdf}
\caption{Gamma-ray spectra as obtained with
{\sc Pythia}~8~\cite{Sjostrand:2014zea} for different
DM annihilation mechanisms. We consider a DM mass of $m_S = 150$~GeV
(upper panel) and 250~GeV (lower panel).
\label{fig:spectra}}
\end{center}
\end{figure}
We superimpose on our predictions limits extracted from various
observations. DM annihilations into top-antitop systems can be
constrained with antiproton cosmic ray data~\cite{Cuoco:2017iax}
(continuous green lines in Fig.~\ref{fig:svtt-g}). We also show
Fermi-LAT gamma-ray constraints from dwarf spheroidal analysis for
annihilations into a $b \bar b$ final state~\cite{Ackermann:2015zua}
(continuous dark green line in Fig.~\ref{fig:svg}) and the
corresponding prospects from 15 years of Fermi-LAT
running~\cite{Charles:2016pgz} (dot-dashed orange lines). Whilst the
Fermi-LAT collaboration has not published any specific limits for the
gamma-ray spectrum arising from DM annihilations into the
$t \bar t$ and $gg$ final states, both spectra are expected to show a
behavior similar to that of annihilations into a $b\bar b$ system, as
illustrated in Fig.~\ref{fig:spectra} for $m_S$ below (upper panel) and above
(lower panel) the top
mass. An estimate of the limits for $t \bar t$ and $gg$ final states can
be obtained following the methodology advocated
in Ref.~\cite{Bringmann:2012vr}, using exclusion limits from DM
annihilations into $b \bar b$ pairs that are rescaled using
\begin{equation}
\sigma v_{gg, t\bar t} = \sigma v_{b \bar b} \frac{N_\gamma^{b\bar{b}}}
{N_\gamma^{gg,t \bar t}} \ ,
\end{equation}
where $N_\gamma^{X}$ is the number of photons expected from an $X$
final state. We have nevertheless verified that $N_\gamma^{b \bar b} <
N_\gamma^{gg,t \bar t}$ by determining $N_\gamma^{X}$ using the hadronization
model of {\sc Pythia}~8~\cite{Sjostrand:2014zea}, so that the obtained
bounds can be seen as conservative.
\begin{figure}
\begin{center}
\includegraphics[width=0.98\columnwidth]{MS2000MT2200.pdf}
\includegraphics[width=0.98\columnwidth]{MS10000MT11000.pdf}
\caption{Gamma ray spectrum originating from the annihilation of a pair of
$S$ particles of mass $m_S = 2$~TeV (upper panel) and 10~TeV (lower panel),
and for a mediator mass fixed through the $r$ parameter that is set to
$r=1.1$.
Our predictions include (virtual and final state) gluon and photon
emissions from a $t\bar t$ final state, as well as the direct one-loop
contributions issued from annihilations into a pair of monochromatic photons
and gluons.
\label{fig:svtt-spect1}}
\end{center}
\end{figure}
The shape of the gamma-ray spectrum could also potentially be used to
get hints on DM, as radiative corrections may give rise to specific
gamma-ray spectral imprints such as line-like features. However, these
are most of the time overwhelmed by the continuum originating from the
hadronization of the annihilation products. There are nevertheless
two regimes in which they may be important, namely in the low mass range
($m_S<m_t$) where annihilations into a photon pair could be relevant,
and in the multi-TeV regime where radiative emission is crucial (as
shown on the different panels of Fig.~\ref{fig:svtt-g}). The typical
gamma-ray spectral signature of the annihilation of a pair of very
heavy $S$ particles into $t\bar t$, $\gamma\gamma$ and $gg$ systems is
presented in Fig.~\ref{fig:svtt-spect1}, our predictions being derived
as sketched in appendix~\ref{sec:QCDcorrections}.
\noindent$\bullet$ \underline{$\mathbf{m_S \gtrsim 5}$ {\bf TeV}}. This regime is the one for which
VIB emissions play a significant role and for which the approximation of
Eq.~\eqref{eq:svttgall} holds. DM annihilations into a top-antitop system
produced in association with a photon can then be simply deduced,
\begin{equation}
{\sigma v_{t \bar t \gamma} \over \sigma v_{t \bar t g}} = {2 N_c Q^2 \alpha \over (N_c^2-1)\alpha_s}\approx 2.3\cdot 10^{-2} \ ,
\label{eq:rat-ttgamg}
\end{equation}
where $N_c=3$ denotes the number of
colors. Moreover, $\alpha$ and $\alpha_s$ stand for the electromagnetic
and strong coupling constants and we use $Z$-pole values as references,
$\alpha=1/128$ and $\alpha_s=0.112$. Although results from the
H.E.S.S. collaboration can potentially constrain the model, there is
no official analysis dedicated to VIB, and one must thus refer to the
independent analysis of Ref.~\cite{Ibarra:2013eda} and the recent
constraints that can be extracted from the gamma ray spectrum issued
from the galactic center~\cite{Rinchiuso:2017kfn}. This suggests that
the annihilation cross section can be of at most $\sigma v_{t\bar
t\gamma} \sim 10^{-27}$~cm$^3/$s for DM masses of about 10~TeV,
which can be translated as $\sigma v_{t\bar t g} \sim
10^{-25}$~cm$^3/$s. This is illustrated in the lower panel of
Fig.~\ref{fig:svtt-g} where we show the H.E.S.S. constraints derived
in Ref.~\cite{Rinchiuso:2017kfn}, after including both the rescaling
factor of Eq.~\eqref{eq:rat-ttgamg} and a factor of $2$ accounting for
the photon multiplicity.
\noindent$\bullet$ \underline{$\mathbf{m_S < m_t}$}. In this regime, $\sigma v_{gg}$ can be as
large as about $2 \cdot 10^{-26}$~cm$^3/$s (see Fig.~\ref{fig:svg}), and there
is a well defined prediction for annihilations into a pair of
photons~\cite{Chu:2012qy},
\begin{equation}
{\sigma v_{\gamma\gamma} \over \sigma v_{gg}} = \frac{4Q^4\alpha^2N_c^2}{\alpha_S^2\left(N_c^2-1\right)} \approx 4.3 \cdot 10^{-3} \,.
\label{eq:gamgrat}
\end{equation}
The strongest constraints on the production of
gamma-ray lines at energies around and below $m_t$ originate from the
Fermi-LAT collaboration~\cite{Ackermann:2015lka} and we indicate them
in Fig.~\ref{fig:svg} after including the rescaling factor of
Eq.~\eqref{eq:gamgrat} (gray dotted line). H.E.S.S. bounds at larger
DM masses are also indicated, following Ref.~\cite{Rinchiuso:2017kfn}
(double-dot-dashed line). In both cases, we use the limits associated
with an Einasto DM density profile.
To conclude this section, we project the DM indirect detection constraints from
the cosmic ray analysis (green region at large mass) and existing (dark green
region at low mass) and future (orange region with a dot-dashed contour)
Fermi-LAT constraints from the gamma-ray continuum from dwarf
spheroidal galaxies in the summary of Fig.~\ref{fig:summary}.
The color code is the same as in
Figs.~\ref{fig:svtt-g} and~\ref{fig:svg}. A substantial part of the
parameter space, in the $m_S< 1$~TeV region, turns out to be constrained by
probes of the gamma-ray continuum and antiproton cosmic
rays. Moreover, for moderately heavy DM candidates, these constraints
are complementary to those originating from direct DM searches studied
in Sec.~\ref{sec:direct-detect-constr}. As for the relic density,
annihilations into pairs of gluons are relevant for light DM ($m_S <
m_t$) whilst annihilations into top-antitop systems help to test
heavier candidates with masses ranging up to $m_S\sim 400$~GeV and
$450$~GeV when observations based on gamma rays and antiprotons are
respectively used. The major difference with the relic density
considerations is that close to the top-antitop threshold, the
non-zero DM velocity at the freeze-out time allows for DM
annihilations into a $t \bar t$ pair, which is kinematically forbidden
today. A three-body $tWb$ final state must therefore be considered
instead, which does not yield further constraints. Finally, the
predicted annihilation cross sections $\sigma v_{gg}$ and $\sigma
v_{t\bar t g}$ appear to be too small to allow us to constrain the
models using searches of specific features in the gamma-ray spectrum
(considering an Einasto DM density profile).
\section{Collider constraints}
\label{sec:lhc}
Searches for new physics have played an important role in past, current and
future physics programs at colliders. In the context of the class of scenarios
investigated in this work, in which the Standard Model is extended by a bosonic
DM candidate and a fermionic vector-like mediator, the results of many
collider analyses can be reinterpreted to constrain the model.
In our model, the extra scalar particle is rendered stable (and thus a viable
candidate for DM) by assuming a $\mathbb{Z}_2$ symmetry under
which all new states are odd and all Standard Model states are even. As a
consequence, the collider signatures of the model always involve final states
containing an even number of odd particles that each decay into Standard Model
particles and a DM state. This guarantees the presence of a large
amount of missing transverse energy as a generic model signature.
For top-philic models, the relevant signatures can be classified into two
classes, the model-independent mono-X searches that target the production of a
pair of DM particles in association with a single energetic visible
object X, and the production of a pair of top-antitop quarks
in association with missing energy.
Before going through the most recent constraints originating from LHC searches
for DM, we will account for LEP results. In electron-positron
collisions, top partners can be produced electroweakly,
\begin{equation}
e^+ e^- \to \gamma^*, Z \to T \bar T \to t \bar t + \slashed{E}_T \ ,
\end{equation}
and yield a signature made of a pair of top-antitop quarks and missing
transverse energy $\slashed{E}_T$. Reinterpreting the results of the
LEP searches for the supersymmetric partner of the top quark,
vector-like (top) partners are essentially excluded if their mass
satisfies $m_T\lesssim 100$~GeV~\cite{Abbiendi:2002mp}. This excludes
the lower left corner of the viable parameter space of the summary
of Fig.~\ref{fig:summary} (magenta region) corresponding to DM
masses of typically $m_S<78$ GeV.
\begin{figure}
\centering
\includegraphics[width=0.32\columnwidth]{Fig10-left.pdf}
\includegraphics[width=0.32\columnwidth]{Fig10-center.pdf}
\includegraphics[width=0.32\columnwidth]{Fig10-right.pdf}
\caption{Representative Feynman diagrams corresponding to the collider
signatures of the model.
We consider the pair production of vector-like quarks that
then decay each into a DM state and a top quark (leftmost and
central diagrams) and loop-induced monojet production (rightmost diagram).}
\label{fig:collider_graphs}
\end{figure}
At the LHC, pairs of mediators can be copiously produced by virtue of the strong
interaction. The corresponding signature is top-antitop production in association
with missing energy, as each mediator then decays, with a 100\%
branching fraction, into a system comprised of a top quark and a DM
particle,
\begin{equation}
p p \to T \bar T \to t S \ \bar t S\ .
\label{eq:pp2TT}\end{equation}
Contributions to this process are illustrated by the first two Feynman diagrams
of Fig.~\ref{fig:collider_graphs}.
Such a top-antitop plus missing energy signature has been widely studied by
both the ATLAS and CMS collaborations, in particular in Run~2 searches
for the superpartners of the top quark (assuming a decay into a top
quark and missing energy carried by a neutralino)~\cite{Aaboud:2017aeu,%
Aaboud:2017ayj,Aaboud:2017wqg,Aaboud:2017nfd,Aaboud:2017dmy,Sirunyan:2017kqq,%
Sirunyan:2017wif,Sirunyan:2017xse,Sirunyan:2017leh} and in dedicated DM
searches~\cite{Sirunyan:2017xgm}.
Additionally, the model can also be probed through classical DM
searches using mono-X probes. Amongst all mono-X searches, we
focus on the monojet one given the relative magnitude of the strong coupling
with respect to the strength of the electroweak interactions. In this case, the
considered signature exhibits the presence of a hard QCD jet recoiling against
a large quantity of missing energy carried away by a pair of DM
particles. Such a process,
\begin{equation}
p p \to S S j\ ,
\label{eq:monoj}\end{equation}
is loop-induced in our model, as illustrated by the last Feynman diagram of
Fig.~\ref{fig:collider_graphs}. Although early monojet analyses were vetoing
events featuring any extra hadronic activity through additional hard jets, it
has been demonstrated that the latter could provide useful handles to get a
better sensitivity to the signal~\cite{Buchmueller:2015eea}. For this reason,
recent ATLAS and CMS monojet analyses now include several signal regions in
which more than one hard jet is allowed~\cite{Aaboud:2016tnv,Aaboud:2017phn,%
Sirunyan:2017hci,Sirunyan:2017jix}.
\subsection{Simulation details}
\label{sec:simu}
In order to reinterpret relevant results of the LHC in the context of the
considered top-philic DM scenario and to determine their impact,
we have implemented the Lagrangian of
Eq.~\eqref{eq:lag} into the {\sc FeynRules} program~\cite{Alloul:2013bka}. With
the help of a joint usage of the NLOCT~\cite{Degrande:2014vpa} and
{\sc FeynArts}~\cite{Hahn:2000kx} packages, we have analytically evaluated the
ultraviolet and so-called $R_2$ counterterms required for numerical one-loop computations
in four dimensions. The information has been exported under the
form of an NLO UFO model~\cite{Degrande:2011ua} containing, in addition to
the tree-level model information, the $R_2$ and NLO counterterms.
We rely on the
{\sc MadGraph~5}\_aMC@NLO~\cite{Alwall:2014hca} platform for the generation of
hard-scattering events, at the NLO accuracy in QCD for the vector-like quark
pair production process of Eq.~\eqref{eq:pp2TT} and at the LO accuracy
for the loop-induced monojet process of Eq.~\eqref{eq:monoj}. In our simulation
chain, we respectively convolute the LO and NLO matrix elements with the LO and
NLO sets of NNPDF~3.0 parton distribution functions~\cite{Ball:2014uwa}, that we
access through the LHAPDF~6 library~\cite{Buckley:2014ana}. Moreover, the
unphysical scales are always set to half the sum of the transverse mass of all
final-state particles.
The decay of the heavy $T$ quark into DM and a top quark,
\begin{equation}
T \to t S \ ,
\end{equation}
is factorized from the production processes and is handled with the
{\sc Mad\-Spin}~\cite{Artoisenet:2012st} and {\sc Mad\-Width}~\cite{Alwall:2014bza} programs, together
with those of all Standard Model heavy particles. For each considered new
physics setup, we have consequently checked that the narrow-width approximation
could be safely and consistently used, which is guaranteed by the fact that the
mediator decay width satisfies $\Gamma_T/m_T<0.2$.
The resulting
partonic events are matched with parton showers by relying on the
{\sc Pythia}~8 code~\cite{Sjostrand:2014zea} and the MC@NLO
prescription~\cite{Frixione:2002ik}. Whilst hadronization is also taken care of
by {\sc Pythia}, we simulate the response of the ATLAS and CMS detectors
by means of the {\sc Del\-phes}~3 program~\cite{deFavereau:2013fsa} that
internally relies on the anti-$k_T$ jet algorithm~\cite{Cacciari:2008gp} as
implemented in the {\sc Fast\-Jet}~software~\cite{Cacciari:2011ma} for object
reconstruction. For each of the analyses that we have recast, the {\sc Del\-phes}\
configuration has been tuned to match the detector setup described in the
experimental documentation. We have used the {\sc Mad\-A\-na\-ly\-sis}~5\
framework~\cite{Conte:2012fm,Conte:2014zja,Dumont:2014tja} to calculate the
signal efficiencies for the different considered search strategies and to derive
95\% confidence level (CL) exclusions with the CL$_s$ method~\cite{Read:2002hq}.
\subsection{Reinterpreted LHC analyses}
\label{se:reinter}
In order to assess the reach of LHC searches for DM in top-antitop
quark production in association with missing energy ($pp\to t\bar t +
\slashed{E}_T$), we reinterpret a CMS
analysis of collision events featuring a pair of leptons of opposite
electric charge~\cite{Sirunyan:2017leh}. While other final states in the single
lepton and fully hadronic decay mode of the top-antitop pair are
relevant as well, all these LHC searches are so far found to
yield similar bounds. For this reason, we have chosen to
focus on a single one of those channels, namely the cleaner dileptonic decay mode of
the top-antitop pair.
The CMS-SUS-17-001 analysis of Ref.~\cite{Sirunyan:2017leh} focuses on the
analysis of 35.9~fb$^{-1}$ of LHC collisions featuring the presence of a system
of two isolated
leptons of opposite electric charges which is compatible neither with a low-mass
hadronic resonance nor with a $Z$ boson. The presence of
at least two hard jets is required, at least one of them being $b$-tagged, as
well as a large amount of missing transverse energy. The latter is required to
possess a large significance and to be well separated from the two leading jets.
After this preselection, the analysis defines three aggregated signal regions
depending on the value of the missing energy and of the transverse
mass $m_{T2}$~\cite{Lester:1999tx,Cheng:2008hk} reconstructed from the two
leptons and the missing momentum.
In addition, we include in our investigations the CMS-SUS-16-052 analysis which
is dedicated to probing the more compressed regions of the parameter space with
35.9~fb$^{-1}$ of LHC collisions~\cite{CMS:2017odo}. In this analysis, it is
assumed that the top partner cannot decay on-shell into a top quark plus missing
energy system, so that the search strategy is optimized for top partners
decaying into systems made of three mostly soft fermions (including $b$-quarks)
and missing energy via an off-shell top quark. Event selection requires the
presence of one hard
initial-state-radiation jet and of at most a second jet well separated from the
first one. Moreover, one asks for a single identified lepton, a large amount of
missing energy and significant hadronic activity. The threshold values that are
imposed and the detailed properties of the lepton, the missing energy and the
hadronic activity allow one to define two classes of three signal regions targeting
various new physics configurations.
We have also confronted the process of Eq.~\eqref{eq:pp2TT} with the LHC
Run~1 results, and in particular to the null results of the 8~TeV search
labeled CMS-B2G-14-004~\cite{Khachatryan:2015nua,Arina:2016cqj,%
inspire-cms-b2g-14-004}. This search focuses on
singly-leptonic final states containing at least three jets (including at least
one $b$-tagged jet) and a large amount of missing energy well separated from the
jets. The event selection moreover constrains the transverse mass of the system
comprised of the lepton and of the missing transverse momentum, as well as the
$m_{T2}^W$ transverse variable~\cite{Bai:2012gs}.
For the reinterpretation of the LHC search results for mono-X DM
signals, we have considered two ATLAS analyses targeting
a monojet-like topology, {\it i.e.} at least one very hard jet recoiling
against some missing momentum and a subleading jet activity. Although those
analyses~\cite{Aaboud:2016tnv,Aaboud:2016zdn} focus on a small integrated
luminosity of LHC collisions (3.2~fb$^{-1}$), they are already limited by the
systematics so that the constraints derived from early Run~2 data
are not expected to get more severe in the future~\cite{Banerjee:2017wxi}.
In the ATLAS-EXOT-2015-03 analysis~\cite{Aaboud:2016tnv,%
inspire-atlas-exot-2015-03}, the target consists of
a monojet-like topology where the subleading jet activity is rather limited, the
event selection being allowed to contain only up to three additional jets.
Seven inclusive and seven exclusive signal regions are defined, the
differences between them being related to various requirements on the missing
energy. In contrast, the ATLAS-SUSY-2015-06 analysis~\cite{Aaboud:2016zdn,%
inspire-atlas-susy-2015-06} allows both for a small and larger subleading jet
activity, the event selection being dedicated to final states containing
two to six jets. Seven signal regions are defined, depending on the number and
on the kinematic properties of the jets and on the missing momentum.
All the above analyses are implemented and validated in the {\sc Mad\-A\-na\-ly\-sis}~5\ framework, and
have thus been straightforwardly and automatically used within the simulation
chain depicted in section~\ref{sec:simu}. We consider a new physics signal
including contributions from both processes of Eq.~\eqref{eq:pp2TT} and
Eq.~\eqref{eq:monoj}, although vector-like quark pair production largely
dominates for perturbative $\tilde{y}_t$ values.
\subsection{Collider constraints}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{viab-lin.pdf}
\caption{Collider constraints on our top-philic DM model expressed, together
with the relic density and DM direct detection bounds, in the $(m_S, m_T)$
mass plane. \label{fig:coll}}
\end{center}
\end{figure}
In Fig.~\ref{fig:coll}, we report our findings in the $(m_T, m_S)$ mass plane.
As the vector-like-quark production process of Eq.~\eqref{eq:pp2TT}
dominates regardless of the actual value of the $\tilde{y}_t$ coupling, the
latter is irrelevant for what concerns the constraints stemming from the LHC. This is
induced by the fact that the vector-like mediator only couples to the top
quark, which contrasts with scenarios in which couplings to lighter quarks
exist. Those interactions with the first and second generation SM quarks indeed
yield extra contributions featuring a direct dependence on the Yukawa
couplings~\cite{Giacchino:2015hvk}. Coming back to the considered top-philic
scenario, all results can thus be represented in the $(m_S, m_T/m_S-1)$
two-dimensional plane. In the
figure, we superimpose on the cosmology considerations discussed in the previous
sections (namely the relic density and direct detection bounds, the indirect
bounds being not reproduced, so as to avoid cluttering of the figure) the
constraints
that can be obtained by reinterpreting the results of the LHC searches for new
physics discussed in section~\ref{se:reinter}. The white region corresponds to
configurations for which the experimentally-observed relic abundance is
reproduced and which are not excluded by current cosmological data. In the
(excluded) light gray region, a correct abundance would imply going beyond the
perturbative regime, whilst in the dark gray region, the dark matter particle
$S$ is unstable, as it is not the lightest $\mathbb{Z}_2$-odd particle.
For each new physics configuration and each signal region of each considered
analysis, we evaluate the number of signal events $s$ surviving the selection
with {\sc MadAnalysis}~5 and extract a CL$_s$ exclusion from the
observed number of events $n_{\rm data}$ populating the region and
the expected number of background events $\hat b\pm \Delta b$. To this aim, we
undertake 100,000 toy experiments in which we generate the actual number of
background events $b$ by assuming that the corresponding distribution is a
Gaussian of mean $\hat b$ and width $\Delta b$. We then consider two Poisson
distributions of parameters $b$ and $b+s$ to evaluate the $p$-values of the
signal-plus-background and background-only hypotheses, knowing that
$n_{\rm data}$ events have been observed. From these $p$-values, we
derive the associated CL$_s$ value.
The colored regions shown in Fig.~\ref{fig:coll} are excluded at the 95\% CL by
at least one signal region of the considered analyses. The dark blue region
corresponds to what we obtain with the reinterpretation of the results of the
two CMS searches for DM in the top-antitop plus missing energy channel,
namely CMS-SUS-17-001 and CMS-SUS-16-052. Whilst our results only focus on Run~2
data, we have verified that the obtained limits are compatible with the less
stringent Run~1 constraints derived from the results of the CMS-B2G-14-004
analysis. The light blue area depicted on the figure corresponds to bounds that
can be extracted from the reinterpretation of the results of the
ATLAS-EXOT-2015-03 and ATLAS-SUSY-2015-06 searches for new physics in the
multijet plus missing energy channel.
We have found that mediator masses ranging up to 1~TeV are excluded, provided
that the DM mass is light enough that sufficient phase space is available to
guarantee the decay of the mediator into a DM particle and a top quark
in a far-from-threshold regime. Whilst generic multijet plus missing energy
searches are quite sensitive when the DM mass is small, they quickly
lose any sensitivity for larger $m_S$ values. This stems from the monojet-like
selection of the considered analyses, that can only be satisfied if enough
phase space is available for the $T$ decay process.
As soon as the $T\to t S$ decay channel is closed, the $T$ quark becomes
long-lived enough to hadronize before decaying and it could potentially travel
over macroscopic distances in the detector. Whilst the unknown modeling of
vector-like quark hadronization would introduce uncontrolled uncertainties on
the predictions, none of the currently available computer tools allows for
a proper handling of long-lived colored particles. Moreover, all considered LHC
analyses have been designed for being sensitive to promptly-decaying new-physics
states, and are thus expected to lose sensitivity when new physics particles are
long-lived. For this reason, we restrict ourselves to providing LHC constraints in
the region of the parameter space where the $T$ quark can promptly decay into a
top quark and a DM particle.
\section{Summary}
\label{sec:summary}
\begin{figure}
\centering
\includegraphics[width=.98\columnwidth]{viab-all.pdf}
\caption{Phenomenologically-viable region of our model parameter space,
presented in the $(m_S, r-1)$ plane, on which we project constraints from
DM direct and indirect detection and collider experiments. The grey regions
correspond to regions for which the relic density cannot be
accommodated. {\it Direct DM searches:} The red region is excluded
by the Xenon 1T~\cite{Aprile:2017iyp} experiment at the 90\%
confidence level while the region delimited by the red dashed line
is in principle testable by DM direct detection searches as lying
above the neutrino floor~\cite{Billard:2013qya}. {\it Indirect DM
searches:} the dark green (at low mass) and light green
(at large mass) regions are excluded by Fermi-LAT gamma-rays
constraints~\cite{Ackermann:2015zua} and by the CR analysis
of Ref.~\cite{Cuoco:2017iax}. The orange region delimited by a dot-dashed
line is the projected sensitivity of Fermi-LAT after 15
years of exposure~\cite{Charles:2016pgz}. {\it Collider searches:}
constraints on top partner production at LEP~\cite{Abbiendi:2002mp} and the
LHC~\cite{Sirunyan:2017leh,CMS:2017odo} are respectively shown by the
magenta and light blue regions, whilst
mono-X bounds~\cite{Aaboud:2016tnv,Aaboud:2016zdn} are indicated by the dark
blue region.}
\label{fig:summary}
\end{figure}
The WIMP paradigm is being tested by various experiments, in
astrophysics and cosmology as well as at colliders. At the same time, there is
significant interest in top-philic new physics, as the top quark is
widely considered, due to its large mass, as a perfect laboratory for
the study of the electroweak symmetry breaking mechanism. In this
work, we have extensively investigated a simple DM scenario that
brings naturally these topics together. It is based on a real scalar
particle coupled to the top quark through a Yukawa coupling with a
heavy vector-like quark. As the top quark has the largest coupling to
the Higgs boson in the SM sector, it is at least conceivable that
it also features the largest coupling to a new dark sector. The model
rests only on very few parameters (one coupling strength and two
masses), so that it provides a good starting point to compare the
impact of different experimental results from varied origins. In the
present case, we focus on DM direct and indirect detection searches,
as well as on collider probes. We have studied the constraints on
the DM model, paying special attention to the potential impact of the
QCD radiative corrections on all the considered bounds ({\it i.e.},
the DM relic abundance, the DM direct and indirect searches and the
collider searches). In this way, our study complements and extends similar
earlier works based on Majorana DM candidates~\cite{Ibarra:2015nca,%
Bringmann:2015cpa,Garny:2018icg}.
Our analysis reveals that, although there is a complementarity between
the different searches, only a small fraction of the viable parameter
space of this very simple DM scenario is tested by the current
experiments. This is illustrated in Fig.~\ref{fig:summary} which
summarizes our results and complements the information provided in
Fig.~\ref{fig:coll}. In the long term, the most fruitful strategy
to further test such a DM scenario would be to increase the energy reach at
colliders.
\label{sec:2}
\subsection{Notations and Preliminaries}
In this part we briefly describe the notations used throughout the paper, and the t-SVD structure proposed in \cite{Braman2010,Kilmer2011641,KBN}.
A tensor is a multidimensional array of numbers. For example, vectors are first order tensors and matrices are second order tensors. Tensors of size $n_1 \times n_2 \times n_3$ are called third order tensors. In this paper, third order tensors are represented in bold script font, e.g., $\T{A}$.
A \emph{\textbf{Slice}} of an $n$-th order tensor is a $2$-D section defined by fixing all but two indices. For a third order tensor $\T{A}$, we will use the Matlab notation $\T{A}(k, :, :)$ , $\T{A}(:, k, :)$ and $\T{A}(:, :, k)$ to denote the $k$-th horizontal, lateral and frontal slices. $\T{A}^{(k)}$ is particularly used to represent $\T{A}(:, :, k)$, and $\overrightarrow{\T{A}}_k$ represents $\T{A}(:,k,:)$. We also call such $\overrightarrow{\T{A}}_k$ \textbf{\emph{tensor columns}}.
A \emph{\textbf{Fiber}} (or \emph{\textbf{Tube}}) is a $1$-D section obtained by fixing all indices but one. For a third order tensor, $\T{A}(:, i, j)$, $\T{A}(i, :, j)$ and $\T{A}(i, j, :)$ denote the $(i, j)$-th mode-$1$, mode-$2$ and mode-$3$ fiber. Specifically we let $\vec{a} \in \mathbb{R}^{1 \times 1 \times n_3}$ denote an $n_3$-tube.
The approach in \cite{Braman2010,KBN,Kilmer2011641} rests on defining a multiplication operation, referred to as the tensor-product (t-product) between two third order tensors. This is done by using a commutative operation, in particular circular convolution between tensor tubes as defined below.
\begin{defn} \textbf{(t-product)} The t-product between $\T{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\T{B} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ is an $n_1 \times n_4 \times n_3$ tensor $\T{C}$ whose $(i,j)$-th tube $\T{C}(i,j,:)$ is given by
\begin{equation}
\T{C}(i,j,:) = \sum_{k=1}^{n_2} \T{A}(i,k,:) * \T{B}(k,j,:)
\end{equation}
\end{defn}
\noindent where $i=1,2,...,n_1$, $j=1,2,...,n_4$. When a third order tensor is viewed as a matrix of tubes along the third dimension, the t-product is analogous to the matrix multiplication except that the multiplication between numbers is replaced by the circular convolution between tubes.
\begin{remark}
From the relationship between circular convolution and the Discrete Fourier Transform (DFT), the t-product of $\T{A}$ and $\T{B}$ can be computed efficiently in the Fourier domain. Specifically, let $\widehat{\T{A}} = \mbox{\tt fft} (\T{A},[\hspace{2mm}],3)$ and $\widehat{\T{B}} = \mbox{\tt fft} (\T{B},[\hspace{2mm}],3)$ be the tensors obtained by taking the Fast Fourier Transform (FFT) along the tube fibers in the third dimension of $\T{A}$ and $\T{B}$. Then we can compute the t-product of $\T{A}$ and $\T{B}$ through the following,
\begin{equation}
\nonumber
\begin{aligned}
\widehat{\T{C}}(:,:,i) = &\widehat{\T{A}}(:,:,i)*\widehat{\T{B}}(:,:,i),i=1,2,...,n_3\\
&\T{C} = \mbox{\tt ifft} (\widehat{\T{C}} ,[\hspace{2mm}],3)
\end{aligned}
\end{equation}
\end{remark}
\begin{defn} \textbf{(Tensor transpose)} The conjugate transpose of a tensor $\T{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is an $n_2 \times n_1 \times n_3$ tensor $\T{A}\Tra$ obtained by taking the conjugate transpose of each frontal slice of $\T{A}$, then reversing the order of transposed frontal slices $2$ through $n_3$.
\end{defn}
\begin{defn}\textbf{(Identity tensor)}
The identity tensor $\T{I} \in \mathbb{R}^{n \times n \times n_3}$ is defined as follows,
\begin{equation}
\T{I}(:,:,1) = I_{n \times n}, \hspace{5mm} \T{I}(:,:,k) = 0, \hspace{2mm} k =2,3,...,n_3
\end{equation}
where $I_{n \times n}$ is the identity matrix of size $n \times n$.
\end{defn}
\begin{defn}\textbf{(Orthogonal Tensor)} A tensor $\T{Q} \in \mathbb{R}^{n \times n \times n_3}$ is orthogonal if it satisfies
\begin{equation}
\T{Q}\Tra * \T{Q} = \T{Q}* \T{Q}\Tra = \T{I}
\end{equation}
\end{defn}
\begin{defn}\textbf{(f-diagonal Tensor)} A tensor is called f-diagonal if each frontal slice of this tensor is a diagonal matrix.
\end{defn}
\subsection{Tensor Singular Value Decomposition (t-SVD)}
We now define the tensor Singular Value Decomposition using the t-product introduced in the previous section.
\begin{defn}The t-SVD of a third-order tensor $\T{M} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is given by
\begin{equation}
\T{M} = \T{U}*\T{S}*\T{V}\Tra
\end{equation}
where $*$ denotes the t-product, $\T{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\T{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors. $\T{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is a rectangular f-diagonal tensor.
\end{defn}
\begin{figure}[htb]
\centering \makebox[0in]{
\begin{tabular}{c}
\includegraphics[height = 0.85in, width = 3.2in]{figs/tSVD.png}
\end{tabular}}
\caption{t-SVD of an $n_1 \times n_2 \times n_3$ tensor.}
\label{fig:tSVD}
\end{figure}
Figure~\ref{fig:tSVD} illustrates the t-SVD of a third order tensor. As with the t-product, the t-SVD can be computed in the Fourier domain; see Algorithm~\ref{alg:tSVD}.
\begin{algorithm}
\caption{T-SVD of third order tensors}
\begin{algorithmic}
\STATE \textbf{Input: } $\T{M} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$
\STATE \textbf{Output: } $\T{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$, $\T{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ and $\T{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ such that $\T{M} = \T{U}*\T{S}*\T{V}\Tra$.
\vspace{1mm}
\STATE ${\widehat{\T{M}}} = \rm{fft}(\T{M},[\hspace{1mm}],3)$;
\FOR{$i = 1 \hspace{2mm} \rm{to} \hspace{2mm} n_3$}
\STATE $ [\M{U}, \M{S}, \M{V}] = \textbf{SVD}(\widehat{\T{M}}(:,:,i))$
\STATE $ {\widehat{\T{U}}}(:,:,i) = \M{U}; \hspace{1mm} {\widehat{\T{S}}}(:,:,i) = \M{S}; \hspace{1mm} {\widehat{\T{V}}}(:,:,i) = \M{V}; $
\ENDFOR
\STATE $\T{U} = \rm{ifft}(\widehat{\T{U}},[\hspace{1mm}],3)$, $\T{S} = \rm{ifft}(\widehat{\T{S}},[\hspace{1mm}],3)$, $\T{V} = \rm{ifft}(\widehat{\T{V}},[\hspace{1mm}],3)$.
\end{algorithmic}
\label{alg:tSVD}
\end{algorithm}
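A direct NumPy transcription of Algorithm~\ref{alg:tSVD} is sketched below; the names are ours, and for simplicity no use is made of the conjugate symmetry of the FFT of a real tensor, which would roughly halve the work:
\begin{verbatim}
def t_svd(M):
    # t-SVD of M (n1 x n2 x n3): one matrix SVD per frontal slice in the
    # Fourier domain (Algorithm 1), then an inverse FFT of each factor.
    n1, n2, n3 = M.shape
    Mh = np.fft.fft(M, axis=2)
    Uh = np.empty((n1, n1, n3), dtype=complex)
    Sh = np.zeros((n1, n2, n3), dtype=complex)
    Vh = np.empty((n2, n2, n3), dtype=complex)
    k = min(n1, n2)
    for i in range(n3):
        U, s, Vt = np.linalg.svd(Mh[:, :, i])
        Uh[:, :, i] = U
        Sh[np.arange(k), np.arange(k), i] = s  # f-diagonal slice
        Vh[:, :, i] = Vt.conj().T
    return (np.fft.ifft(Uh, axis=2).real,
            np.fft.ifft(Sh, axis=2).real,
            np.fft.ifft(Vh, axis=2).real)
\end{verbatim}
One can verify $\T{M} \approx \T{U}*\T{S}*\T{V}\Tra$ using \texttt{t\_product} together with a tensor transpose (implemented in the next sketch).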
As discussed in \cite{cvprzemin}, the t-SVD has several advantages over classical tensor decompositions such as CANDECOMP/PARAFAC \cite{parafac1970} and Tucker \cite{Tuck1966c}. For example, for a fixed rank, computing the CANDECOMP/PARAFAC decomposition can be numerically unstable, since calculating the rank-$1$ components of this model is difficult. Similarly, finding the best Tucker multi-rank $\vec{r}$ approximation to a tensor is computationally expensive and often does not yield the best fit to the original tensor. In contrast, the t-SVD is straightforward to compute, requiring only a sequence of matrix SVDs as shown in Algorithm~\ref{alg:tSVD}. Another important property is the optimality of the truncated t-SVD approximation \cite{Kilmer2011641}, described in the following.
\begin{theorem}
\label{thm:optimality}
Let $\T{M} = \T{U} * \T{S} *\T{V}\Tra$ be the t-SVD of $\T{M} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. For $k<\min(n_1,n_2)$, define $\T{M}_k = \sum_{i=1}^{k} \T{U}(:,i,:) * \T{S}(i,i,:) *\T{V}(:,i,:)\Tra$. Then
\begin{equation}
\nonumber
\T{M}_k = \arg \min_{\tilde{\T{M}} \in \mathbb{M}} \|\T{M} - \tilde{\T{M}}\|_F
\end{equation}
where $\mathbb{M} = \{\T{X}*\T{Y} | \T{X} \in \mathbb{R}^{n_1 \times k \times n_3}, \T{Y} \in \mathbb{R}^{k \times n_2 \times n_3}\}$.
\end{theorem}
If we define the \emph{\textbf{tensor tubal rank}} of $\T{M}$ to be the number of non-zero diagonal tubes in $\T{S}$ \cite{cvprzemin}, then this theorem states that $\T{M}_k$ is the closest tensor to $\T{M}$ in Frobenius norm among all tensors of tensor tubal rank at most $k$.
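In code, given a tensor \texttt{M} and a target tubal rank \texttt{k}, the approximation $\T{M}_k$ of Theorem~\ref{thm:optimality} is a one-liner using the sketches above; \texttt{t\_transpose} below follows the tensor transpose definition (transpose each frontal slice, then reverse slices $2$ through $n_3$):
\begin{verbatim}
def t_transpose(A):
    # transpose each frontal slice, then reverse slices 2..n3
    At = A.transpose(1, 0, 2)
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

U, S, V = t_svd(M)
M_k = t_product(t_product(U[:, :k, :], S[:k, :k, :]),
                t_transpose(V[:, :k, :]))
\end{verbatim}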
\subsection{t-linear Combination of Tensor Dictionaries and Coefficients}
As in the matrix case, given an overcomplete dictionary $D \in \mathbb{R}^{n \times K}$ whose $K$ columns are prototype signal-atoms, a signal $y \in \mathbb{R}^{n}$ can be represented as a linear combination of the columns of $D$,
\begin{equation}
\label{eq:linear_combination}
y=Dx
\end{equation}
\noindent where $x \in \mathbb{R}^{K}$ is called the representation coefficient vector of $y$. This setup extends naturally to third order tensors using the framework outlined in the previous section. Given $K$ tensor columns (or dictionary atoms) $\overrightarrow{\T{D}}_k \in \mathbb{R}^{n_1 \times 1 \times n_3}$, we represent a tensor signal $\overrightarrow{\T{X}} \in \mathbb{R}^{n_1 \times 1 \times n_3}$ as the \emph{\textbf{t-linear combination}} of the given tensor dictionary atoms as follows,
\begin{equation}
\label{eq:t_linear_combination}
\overrightarrow{\T{X}} = \sum_{k=1}^{K} \overrightarrow{\T{D}}_k * \vec{c}_k = \T{D} * \overrightarrow{\T{C}}
\end{equation}
\noindent where $\{\vec{c}_k\}_{k=1}^K$ are tubes of size $1 \times 1 \times n_3$; $\overrightarrow{\T{C}}\in \mathbb{R}^{K \times 1 \times n_3}$, called the coefficient tensor, is obtained by stacking the tubes $\vec{c}_k$; and $\T{D} = [\overrightarrow{\T{D}}_1, \overrightarrow{\T{D}}_2, ..., \overrightarrow{\T{D}}_K] \in \mathbb{R}^{n_1 \times K \times n_3}$ is the tensor dictionary. The representation (\ref{eq:t_linear_combination}) may either be exact or approximate, satisfying
\begin{equation}
\|\overrightarrow{\T{X}} - \T{D} * \overrightarrow{\T{C}} \| \le \epsilon
\end{equation}
\noindent for some $\epsilon >0$. When $K>n_1$, we say the tensor dictionary $\T{D}$ is overcomplete.
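As a small illustration of the t-linear combination (\ref{eq:t_linear_combination}), the sketch below synthesizes a signal from a random tensor dictionary with three non-zero coefficient tubes, reusing \texttt{t\_product} from the earlier sketch (all sizes and names are illustrative only):
\begin{verbatim}
n1, K, n3 = 64, 256, 10
D = np.random.randn(n1, K, n3)        # tensor dictionary
C = np.zeros((K, 1, n3))              # coefficient tensor
C[[3, 17, 42], 0, :] = np.random.randn(3, n3)  # 3 non-zero tubes
X = t_product(D, C)                   # tensor signal, n1 x 1 x n3
\end{verbatim}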
\begin{figure}[htb]
\centering \makebox[0in]{
\begin{tabular}{c}
\includegraphics[height = 2in, width = 3.2in]{figs/t_linear_combination.png}
\end{tabular}}
\caption{A tensor signal represented by a t-linear combination of $K$ tensor dictionary atoms.}
\label{fig:t_linear_combination}
\end{figure}
\section{Conclusion}
\vspace{-0.5mm}
In this paper, we presented K-TSVD, a new tensor dictionary learning algorithm based on the t-SVD framework. Our main contribution lies in explicitly integrating the sparse coding of third order tensors in the t-SVD sense, and on this basis generalizing the K-SVD dictionary learning model to higher order tensors. The experimental results show that our approach yields very good performance on video completion and multispectral image denoising. Possible future work includes applying the grouping technique used in BM3D and DNMDL to process groups of similar patches separately.
\section{Introduction}
Sparsity driven signal processing has been widely used in many areas across computer vision and image analysis, such as image restoration and classification \cite{ksvd,superresolution,dlclassification}. The main principle driving the gains is the idea of sparse coding, i.e.~the underlying signal is compactly represented by a few large coefficients in an overcomplete dictionary, while the noise and the sampling process are incoherent.
Since performance relies heavily on the chosen dictionary, many dictionary learning algorithms have been developed to obtain dictionaries that are better adapted to the signal than predefined ones such as wavelets and the DCT. In \cite{ksvd}, Aharon \emph{et al}.~proposed an algorithm called K-SVD, which efficiently learns an overcomplete dictionary from a set of training signals. The method of optimal directions (MOD) \cite{MOD} shares the same effective sparse coding principle for dictionary learning as K-SVD. The discriminative K-SVD algorithm (D-KSVD) proposed in \cite{dksvd} improved the K-SVD method by unifying the dictionary and classifier learning processes. \cite{bomp} accelerated the K-SVD algorithm and reduced its memory consumption using a batch orthogonal matching pursuit method.
When the signal is not limited to two dimensions, traditional methods generally embed the high dimensional data into a vector space by vectorizing the data points, so that conventional matrix based approaches can still be used. Such vectorization, however, leads to a poor sparse representation, since it breaks the original multidimensional structure of the signal and reduces the reliability of post processing. To this end, dictionary learning techniques have been explored based on different tensor decompositions such as the CP decomposition \cite{kcpd,CP2}, the Tucker decomposition \cite{tucker1,tucker2,tucker3} and the tensor-SVD \cite{tsvddl}. In \cite{kcpd}, the authors developed an algorithm called K-CPD which learns higher order dictionaries based on the CP decomposition. \cite{tucker1} proposed a tensor dictionary learning algorithm based on the Tucker model with sparsity constraints on its core tensor, applying gradient descent to learn overcomplete dictionaries along each mode of the tensor (see \cite{Tuck1966c} for the definition of tensor modes). Peng et al. \cite{tucker2} presented a tensor dictionary learning algorithm based on the Tucker model with a group-block-sparsity constraint on the core tensor, with good performance.
In this paper, we present a novel multidimensional dictionary learning approach based on the notion of tensor-SVD proposed in \cite{Braman2010,KBN,Kilmer2011641}. Essentially, the t-SVD rests on an operator theoretic interpretation of third order tensors \cite{Braman2010} as linear operators over the set of 2-D matrices. This framework has recently been used for dictionary learning for 2-D images in \cite{tsvddl}, but the authors there employ a different algorithm and consider the problem of tomographic image reconstruction. Moreover, we also consider the problem of filling in missing data by sparse coding using the learned dictionary.
The paper is organized as follows. In Section 2 we review the definitions and notation, and point out the main differences and advantages over other tensor decomposition methods. Section 3 formulates the objective function for the tensor dictionary learning problem using t-SVD, by introducing the ``tubal sparsity'' of third-order tensors. Our tensor dictionary learning model and the detailed algorithm used to solve it are presented in Section 4. In Section 5 we show experimental results on third order tensor completion and denoising. Finally, we conclude the paper in Section 6.
\section{Problem Formulation}
In this section, we introduce our tensor dictionary learning model and the related algorithm.
\subsection{From Matrix to Tensor Dictionary Learning}
Given an overcomplete dictionary $D \in \mathbb{R}^{n \times K}$ with $K>n$, if $D$ is full rank, there are infinitely many solutions to the representation problem (\ref{eq:linear_combination}); therefore, to constrain the solution set, one common approach is to enforce sparsity. The classic dictionary learning model, first designed for the purpose of reconstruction, adaptively learns an overcomplete dictionary from the training data, leading to the best possible representation of the data under sparsity constraints. Specifically, given training data $\{y_i\}_{i=1}^{n} \in \mathbb{R}^{d}$, where $d$ is the dimensionality and $n$ is the total number of training samples, dictionary learning methods aim to find an overcomplete dictionary $D\in \mathbb{R}^{d \times K}$ with $K>d$, and a coefficient matrix $X = [x_1,x_2,...,x_n] \in \mathbb{R}^{K \times n}$, by solving the following optimization problem,
\begin{equation}
\label{eq:DL}
\begin{aligned}
\min_{D,X} \hspace{2mm} &\sum_{i=1}^{n}\|y_i - Dx_i\|_F^2 \\
\mbox{subject to } \hspace{2mm} &\|x_i\|_q \le T,\hspace{2mm} i=1,2,...,n
\end{aligned}
\end{equation}
\noindent where $\|\cdot\|_q$ is the $\ell_q$ norm; different choices of $q$ (typically $q=0$ or $q=1$) yield different sparsity regularizations.
Using the t-SVD structure discussed in the previous section, we generalize this dictionary learning model to higher dimensional cases. Given training data as tensor columns $\{ {\overrightarrow{\T{Y}}}_i \}_{i=1}^{n} \in \mathbb{R}^{d \times 1 \times n_3}$, we want to find a dictionary $\T{D} \in \mathbb{R}^{d \times K \times n_3}$ with $K>d$, and ``\emph{\textbf{tubal sparse}}'' tensor coefficients $\{ \overrightarrow{\T{X}}_i \}_{i=1}^{n} \in \mathbb{R}^{K \times 1 \times n_3}$ that represent the training data via the t-product. The tubal sparsity of a tensor column is defined in \cite{cvprzemin} as follows.
\begin{defn} \textbf{(tensor tubal sparsity)} Given a tensor column $\overrightarrow{{\T{X}}}$, the tensor tubal sparsity $\|\cdot\|_\text{TS}$ is defined as the number of non-zero tubes of $\overrightarrow{{\T{X}}}$ in the third dimension.
\end{defn}
Then we can construct our dictionary learning model:
\begin{equation}
\label{eq:tensor DL}
\begin{aligned}
\min_{\T{D},\overrightarrow{\T{X}}_i} \hspace{2mm} &\sum_{i=1}^{n}\|\overrightarrow{\T{Y}}_i - \T{D}*\overrightarrow{\T{X}}_i\|_F^2 \\
\mbox{subject to}\hspace{2mm} &\|\overrightarrow{\T{X}}_i\|_{\text{TS}} \le T,\hspace{2mm}i = 1,2,...,n
\end{aligned}
\end{equation}
\noindent or equivalently,
\begin{equation}
\label{eq:tensor DL2}
\begin{aligned}
\min_{\T{D},\T{X}} \hspace{2mm} &\|\T{Y} - \T{D}*\T{X}\|_F^2 \\
\mbox{subject to}\hspace{2mm} &\|\T{X}\|_{\text{TS}} \le T_0
\end{aligned}
\end{equation}
\noindent where $\T{Y} = \left[\overrightarrow{\T{Y}}_1, \overrightarrow{\T{Y}}_2,...,\overrightarrow{\T{Y}}_n \right] \in \mathbb{R}^{d \times n \times n_3}$ and $\T{X} = \left[\overrightarrow{\T{X}}_1, \overrightarrow{\T{X}}_2,...,\overrightarrow{\T{X}}_n \right] \in \mathbb{R}^{K \times n \times n_3}$. Figure~\ref{fig:tensor_sparse_coding} illustrates the tensor sparse coding model. Note that if the $j$th tube $\overrightarrow{\T{X}}_i(j,1,:)$ is zero, then the $j$th dictionary atom $\T{D}(:,j,:)$ is not used in the representation of $\overrightarrow{\T{Y}}_i$.
\begin{figure}[htb]
\centering \makebox[0in]{
\begin{tabular}{c}
\includegraphics[height = 1.2in, width = 3.2in]{figs/tensor_sparse_coding.png}
\end{tabular}}
\caption{Data in the form of tensor columns represented by the t-product of a tensor dictionary and tubal-sparse coefficient tensors. The red tubes in the coefficient tensors denote non-zero tubes and the white ones zero tubes.}
\label{fig:tensor_sparse_coding}
\end{figure}
\vspace{-1mm}
\subsection{K-TSVD}
We now discuss our tensor dictionary learning model in detail. Our model is called K-TSVD since it generalizes the classic K-SVD to higher order tensors via the t-SVD. Like the K-SVD algorithm, K-TSVD consists of two stages: the tensor sparse coding stage and the tensor dictionary update stage. Consider first the sparse coding stage, where the tensor dictionary $\T{D}$ is fixed; we need to solve
\begin{equation}
\label{eq:sparse_coding}
\begin{aligned}
\min_{\T{X}} \hspace{2mm} &\|\T{Y} - \T{D}*\T{X}\|_F^2 \\
\mbox{subject to}\hspace{2mm} &\|\T{X}\|_{\text{TS}} \le T_0
\end{aligned}
\end{equation}
\noindent or alternatively we can work with the equivalent form,
\begin{equation}
\label{eq:sparse_coding_2}
\min_{\T{X}}\|\T{Y} - \T{D}*\T{X}\|_F^2 + \lambda \|\T{X}\|_{\text{TS}}
\end{equation}
\noindent for some positive $\lambda$. Since the tubal sparsity measure is computationally intractable in both the matrix and tensor cases, we instead use the $\|\cdot\|_{1,1,2}$ norm \cite{cvprzemin} as a convex relaxation of the tubal sparsity, where the $\|\cdot\|_{1,1,2}$ norm of a $3$rd order tensor $\T{X}$ is defined as
\begin{equation}
\nonumber
\|\T{X}\|_{1,1,2} = \sum_{i,j} \|\T{X}(i,j,:)\|_F
\end{equation}
\noindent If we regard a third-mode tube $\vec{x} \in \mathbb{R}^{1 \times 1 \times n_3}$ as an $n_3 \times 1$ column vector, then the $\ell_{1,1,2}$ norm of $\T{X}$ is simply the sum of the $\ell_2$ norms of all such tubes along the third dimension of $\T{X}$.
Replacing the tubal sparsity with the $\ell_{1,1,2}$ norm, the problem becomes
\begin{equation}
\label{eq:sparse_coding_3}
\min_{\T{X}} \|\T{Y} - \T{D}*\T{X}\|^2_F + \lambda \|\T{X}\|_{1,1,2}
\end{equation}
In order to solve this problem, one more definition is needed here. For a third order tensor $\T{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, define the block diagonal form $\xbar{\T{A}}$ in Fourier domain as follows,
\begin{equation}
\label{eq:blkdiag}
\xbar{\T{A}} = \text{blkdiag}(\widehat{\T{A}}) =
\left[\begin{array}{cccc}\widehat{{\T{A}}}^{(1)}& & & \\
& \widehat{{\T{A}}}^{(2)} & & \\
& &\ddots & \\
& & & \widehat{{\T{A}}}^{(n_3)}\end{array} \right]
\end{equation}
\noindent where $\widehat{\T{A}} = \mbox{\tt fft} (\T{A},[\hspace{2mm}],3)$ and $\widehat{\T{A}}^{(i)}$ is the $i$th frontal slice of $\widehat{\T{A}}$. Then (\ref{eq:sparse_coding_3}) can be equivalently reformulated in the Fourier domain as
\begin{equation}
\nonumber
\min_{\xbar{\T{X}}} \|\xbar{\T{Y}} - \xbar{\T{D}} \xbar{\T{X}}\|^2_F + \lambda \sqrt{n_3} \|\widehat{\T{X}}\|_{1,1,2}
\end{equation}
where the $\sqrt{n_3}$ factor comes from the fact that $\|\T{X}\|_F = \|\widehat{\T{X}}\|_F/\sqrt{n_3}$ \cite{cvprzemin}. Using the general framework of the Alternating Direction Method of Multipliers (ADMM) \cite{ADMM}, we can solve this optimization problem iteratively as follows:
\begin{align}
\label{eq:admm1}
\xbar{\T{X}}_{k+1} &= \arg\min_{\xbar{\T{X}}} \|\xbar{\T{Y}} - \xbar{\T{D}}\xbar{\T{X}}\|_F^2 + \text{tr}\left( \xbar{\T{Q}}_k\Tra \xbar{\T{X}}\right) + \frac{\rho}{2}\|\xbar{\T{X}} - \xbar{\T{Z}}_k\|_F^2 \\
\label{eq:admm2}
\T{Z}_{k+1} &= \arg\min_{\T{Z}} \|\T{Z}\|_{1,1,2} + \frac{\rho}{2 \lambda} \|\T{X}_{k+1} + \frac{1}{\rho}\T{Q}_k - \T{Z}\|_F^2 \\
\label{eq:admm3}
\T{Q}_{k+1} &= \T{Q}_k + \rho(\T{X}_{k+1} - \T{Z}_{k+1})
\end{align}
\noindent where $\rho>0$. (\ref{eq:admm1}) is essentially a least squares problem and can be solved separately in each frontal slice of $\widehat{\T{X}}$ (or equivalently, each diagonal block of $\xbar{\T{X}}$). Letting $\T{C}_{k+1} = \T{X}_{k+1} + \T{Q}_k/\rho$, the update (\ref{eq:admm2}) is given by
\begin{equation}
\label{eq:sol_admm2}
\begin{aligned}
\T{Z}_{k+1}(i,j,:) = &\left( 1-\frac{\lambda}{\rho \|\T{C}_{k+1}(i,j,:)\|_F} \right)_+ \T{C}_{k+1}(i,j,:) \\
& \forall i=1,2,...,K,\hspace{1mm}j = 1,2,...,n
\end{aligned}
\end{equation}
\noindent where $(\cdot)_+ = \max (0,\cdot) $.
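The two non-trivial ADMM updates translate directly into NumPy; the sketch below is our reading of (\ref{eq:admm1}) and (\ref{eq:sol_admm2}), with Fourier-domain variables carrying an \texttt{h} suffix and all names ours:
\begin{verbatim}
def admm_x_update(Yh, Dh, Zh, Qh, rho):
    # solves (9) slice-by-slice in the Fourier domain:
    # (2 Dh^H Dh + rho I) Xh = 2 Dh^H Yh - Qh + rho Zh
    K, n3 = Dh.shape[1], Dh.shape[2]
    Xh = np.empty((K, Yh.shape[1], n3), dtype=complex)
    I = np.eye(K)
    for i in range(n3):
        Di = Dh[:, :, i]
        lhs = 2 * Di.conj().T @ Di + rho * I
        rhs = (2 * Di.conj().T @ Yh[:, :, i]
               - Qh[:, :, i] + rho * Zh[:, :, i])
        Xh[:, :, i] = np.linalg.solve(lhs, rhs)
    return Xh

def shrink_tubes(C, tau):
    # tube-wise shrinkage (12): scale each tube C(i,j,:) by
    # (1 - tau / ||C(i,j,:)||_2)_+ ; here tau = lambda / rho
    norms = np.linalg.norm(C, axis=2, keepdims=True)
    return np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12)) * C
\end{verbatim}
The dual update (\ref{eq:admm3}) is the usual one-line residual step.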
The second stage of our tensor dictionary learning model is the dictionary update. Given $\T{X}$ and all atoms of $\T{D}$ other than the $k$-th fixed, we can decompose the error term as follows,
\begin{equation}
\nonumber
\begin{aligned}
&\|\T{Y} - \T{D}*\T{X}\|^2_F \\
= &\left\| \T{Y} - \sum_{j=1}^{K} \overrightarrow{\T{D}}_j*\T{X}(j,:,:) \right\|_F^2 \\
= &\left\| \left(\T{Y} - \sum_{j \neq k}\overrightarrow{\T{D}}_j*\T{X}(j,:,:)\right) - \overrightarrow{\T{D}}_k*\T{X}(k,:,:)\right\|_F^2 \\
= &\|\T{E}_k - \overrightarrow{\T{D}}_k*\T{X}(k,:,:) \|_F^2 \\
= &\|\T{E}_k - \T{D}(:,k,:)*\T{X}(k,:,:) \|_F^2
\end{aligned}
\end{equation}
\noindent $\T{E}_k$ here stands for the representation error when the $k$-th atom $\T{D}(:,k,:)$ is removed from the dictionary. The next step is to find $\T{D}(:,k,:) * \T{X}(k,:,:)$ which best approximates $\T{E}_k$, so that the error term is minimized. This amounts to computing the best tubal rank-$1$ approximation via Theorem~\ref{thm:optimality}. Since we need to maintain the tubal sparsity of $\T{X}$ and do not want to fully fill $\T{X}(k,:,:)$, we let $w_k = \{i \,|\, \T{X}(k,i,:) \neq 0,\ i=1,2,...,n\}$ be the set of indices of the data columns in $\T{Y}$ that use the atom $\T{D}(:,k,:)$, and restrict $\T{E}_k$ to the tensor columns corresponding to $w_k$, obtaining $\T{R}_k$ with $\T{R}_k(:,i,:) = \T{E}_k(:,w_k(i),:)$, $i= 1,2,...,|w_k|$. Following Theorem~\ref{thm:optimality}, we apply the t-SVD to $\T{R}_k$ to get $\T{U},\T{S}$ and $\T{V}$, take the first tensor column of $\T{U}$ to update $\T{D}(:,k,:)$, and use $\T{S}(1,1,:)*\T{V}(:,1,:)\Tra$ to update the coefficient tubes that use the $k$-th atom. To accelerate the algorithm, we compute only approximate rank-$1$ SVDs in the Fourier domain when computing the t-SVD of $\T{R}_k$. The complete algorithm is presented in Algorithm~\ref{alg:k-tsvd}.
\begin{algorithm} [thb]
\caption{K-TSVD}
\label{alg:k-tsvd}
\textbf{Input }: Observed tensor data $\T{Y} = \{\overrightarrow{\T{Y}}_i\}_{i=1}^{n_2} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, $\lambda >0$.\\
\textbf{Initialize}: Dictionary $\T{D}_0 \in \mathbb{R}^{n_1 \times K \times n_3}$ \\
\textbf{Repeat until convergence}:\\
\vspace{-5mm}
\begin{algorithmic}[1]
\STATE Compute the sparse coefficient tensor using (\ref{eq:admm1})-(\ref{eq:admm3}):
\begin{equation}
\nonumber
\T{X} = \arg\min_{\T{X}} \hspace{2mm}\|\T{Y} - \T{D}*\T{X}\|_F^2 + \lambda \|\T{X}\|_{1,1,2}
\end{equation}
\FOR{$k=1,2,...,K$}
\STATE Let $w_k = \{i| \T{X}(k,i,:) \neq 0\}$ be the set of indices where data $\T{Y}$ uses dictionary $\T{D}(:,k,:)$.
\STATE Compute $\T{E}_k = \T{Y} - \sum_{j \neq k} \T{D}(:,j,:)*\T{X}(j,:,:)$, the overall representation error without the $k$-th dictionary atom $\T{D}(:,k,:)$.
\STATE Restrict $\T{E}_k$ by choosing only the tensor columns corresponding to $w_k$ and obtain $\T{R}_k$:
\begin{equation}
\T{R}_k(:,i,:) = \T{E}_k(:,w_k(i),:)
\end{equation}
for $i= 1,2,...,|w_k|$.
\STATE Compute the t-SVD of $\T{R}_k$:
\begin{equation}
\nonumber
\T{R}_k = \T{U} * \T{S} * \T{V}\Tra.
\end{equation}
\vspace{-5mm}
\STATE Update $\T{D}(:,k,:) = \T{U}(:,1,:)$.
\STATE Update $\T{X}(k,w_k,:) = \T{S}(1,1,:)*\T{V}(:,1,:)\Tra$.
\ENDFOR
\end{algorithmic}
\textbf{Output}: Trained tensor dictionary $\T{D}$.\\
\end{algorithm}
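For reference, a compact NumPy sketch of one atom update (steps 2--7 of Algorithm~\ref{alg:k-tsvd}) is given below, reusing \texttt{t\_product}, \texttt{t\_svd} and \texttt{t\_transpose} from the earlier sketches; it computes the exact t-SVD rather than the accelerated approximate rank-$1$ version, and all names are ours:
\begin{verbatim}
def update_atom(Y, D, X, k):
    # columns whose coefficient tube for atom k is non-zero
    w = np.where(np.linalg.norm(X[k], axis=1) > 0)[0]
    if len(w) == 0:
        return D, X
    # residual without atom k, restricted to the columns in w
    Ek = Y - t_product(D, X) + t_product(D[:, k:k+1, :],
                                         X[k:k+1, :, :])
    Rk = Ek[:, w, :]
    U, S, V = t_svd(Rk)
    D[:, k, :] = U[:, 0, :]                      # new atom
    X[k:k+1, w, :] = t_product(S[0:1, 0:1, :],   # new coefficients
                               t_transpose(V[:, 0:1, :]))
    return D, X
\end{verbatim}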
\section{Experiment Results}
\subsection{Filling Missing Pixels in Tensors}
In this section we consider the application of filling in missing pixels in third order tensors. Suppose we are given a video with dead pixels, i.e.~pixel values that are deleted or missing at some fixed positions of each frame. Specifically, let $\Omega$ indicate the set of indices of the remaining pixels and $\T{M}$ be the data tensor; then $\T{M}(i,j,:) = 0$ for all $(i,j) \notin \Omega$.
Our goal is to recover such tensors with missing pixels. Suppose $\T{D}$ is the overcomplete dictionary learned on the training data, and define $P_\Omega$ as the orthogonal projector such that $P_\Omega(\T{M})(i,j,:) = \T{M}(i,j,:)$ if $(i,j) \in \Omega$, and $0$ otherwise. Then for each patch $\overrightarrow{\T{M}}_k$ in the test data, the reconstruction of this patch is $\T{D}*\overrightarrow{\T{C}}_k$, where $\overrightarrow{{\T{C}}}_k$ is the solution to
\begin{equation}
\label{eq:complete}
\min_{\overrightarrow{\T{C}}_k} \|P_\Omega(\overrightarrow{{\T{M}}}_k) - P_\Omega(\T{D}*\overrightarrow{\T{C}}_k )\|_F^2 + \lambda \|\overrightarrow{\T{C}}_k\|_{1,1,2}
\end{equation}
\noindent which can be solved in the same manner as (\ref{eq:sparse_coding_3}).
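One simple alternative for solving (\ref{eq:complete}) is proximal-gradient descent, sketched below under our assumptions (the paper itself solves it in the same ADMM manner as (\ref{eq:sparse_coding_3})). Here \texttt{mask} is $1$ at observed $(i,j)$ positions and $0$ elsewhere, constant along the third mode as in the definition of $P_\Omega$, and the step size \texttt{eta} must be small enough for the gradient step to be stable:
\begin{verbatim}
def inpaint_code(M, D, mask, lam, eta=1e-3, iters=200):
    # M: patches (d x npatch x n3); mask: (d x npatch) of 0/1
    K = D.shape[1]
    C = np.zeros((K, M.shape[1], M.shape[2]))
    Dt = t_transpose(D)
    for _ in range(iters):
        resid = mask[:, :, None] * (t_product(D, C) - M)
        grad = 2.0 * t_product(Dt, resid)  # adjoint of D* is Dt*
        C = shrink_tubes(C - eta * grad, eta * lam)
    return C
\end{verbatim}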
We utilized a basketball video to apply the K-TSVD algorithm and reconstruct $\T{M}$ from missing pixels. The video has $40$ frames, each with a resolution of $144 \times 256$. To learn the overcomplete dictionary using K-TSVD, we randomly took $9000$ overlapping block patches of size $8 \times 8 \times 10$ from the first $30$ frames, saved them as tensor columns of size $64 \times 1 \times 10$, and obtained our training data $\T{Y}$ of total size $64 \times 9000 \times 10$. All these patches were used to train a tensor dictionary with $K=256$ atoms. The last $10$ frames of the video were used for testing: we took all $576$ disjoint $8 \times 8 \times 10$ blocks in the last $10$ frames, saved each block into a tensor column, and obtained our testing data of size $64 \times 576 \times 10$.
We investigated the performance of K-TSVD by comparing it with K-SVD and DCT. For a fair comparison, K-SVD was likewise trained on $10000$ randomly chosen block patches of size $8 \times 8$ from the first $30$ frames for each test frame. We visualize an example of the overcomplete DCT dictionary, the K-SVD learned dictionary and the K-TSVD learned dictionary in Figure~\ref{fig:dictionary_basketball}. One frame with $50\%$ and $70\%$ missing pixels and its reconstructions are shown in Figure~\ref{fig:fill_missing_basketball}. As one can see, the reconstruction based on the K-TSVD learned dictionary has better quality. Figure~\ref{fig:fill_missing_compare} shows the reconstruction error (RE) comparison of the three approaches, where the error is computed via $\text{RE} = \sqrt{\|\T{X} - \T{X}_{\text{rec}}\|_F^2/N}$, with $N$ the total number of pixels in the data. When the percentage of missing pixels is small, all three methods perform equally well; with more missing pixels, K-TSVD outperforms the other two methods.
\begin{figure}[htbp]
\centering \makebox[0in]{
\begin{tabular}{c c}
\includegraphics[height = 1.4in, width = 1.6in]{figs/DCT_dictionary.png}
\includegraphics[height = 1.4in, width = 1.6in]{figs/basketball_dic_KSVD.png}\\
\includegraphics[height = 1.4in, width = 1.6in]{figs/basketball_dic_1.png}
\includegraphics[height = 1.4in, width = 1.6in]{figs/basketball_dic_3.png}
\end{tabular}}
\caption{\textbf{Upper left}: The overcomplete DCT dictionary. \textbf{Upper right}: Dictionary learned on the first frame of the basketball video using K-SVD. \textbf{Lower left}: The first frontal slice $\T{D}(:,:,1)$ of the tensor dictionary learned by K-TSVD. \textbf{Lower right}: The $3$rd frontal slice $\T{D}(:,:,3)$ of the learned tensor dictionary.}
\label{fig:dictionary_basketball}
\end{figure}
\begin{figure}[htbp]
\centering \makebox[0in]{
\begin{tabular}{c c}
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_missing_50.png}
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_DCT_50.png}\\
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_missing_70.png}
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_DCT_70.png}\\
\noindent\footnotesize{(a) Frame with missing pixels~~~~~~~~~~~~~~~
(b) DCT reconstruction~~~~~~~~}\\
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_KSVD_50.png}
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_KTSVD_50.png}\\
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_KSVD_70.png}
\includegraphics[width=.49\linewidth]{figs/fill_missing/fill_KTSVD_70.png}\\
\noindent\footnotesize{(c) K-SVD reconstruction ~~~~~~~~~~~~~~~
(d) K-TSVD reconstruction~~}\\
\end{tabular}}
\caption{The reconstruction result from missing pixels on the basketball video. The different rows are for $50\%$ and $70\%$ of missing pixels respectively.}
\label{fig:fill_missing_basketball}
\end{figure}
\begin{figure}
\centering
\includegraphics[height = 2in, width = 3in]{figs/fill_missing/fill_compare.png}
\caption{Reconstruction error of DCT, K-SVD and K-TSVD as the fraction of missing pixels varies from $10\%$ to $80\%$.}
\label{fig:fill_missing_compare}
\end{figure}
\vspace{-4mm}
\subsection{Multispectral Image and Video Denoising}
To further test the proposed method, we applied our algorithm to multispectral/hyperspectral image and video denoising. In the first experiment the multispectral data came from the \textbf{Columbia datasets} \footnote{\url{http://www1.cs.columbia.edu/CAVE/databases/multispectral/}}; each dataset contains 31 real-world images of size $512 \times 512$, collected from $400$nm to $700$nm in $10$nm steps. In our experiment we resized each image to $205 \times 205$ and took the images of the last $10$ bands to speed up the training of tensor dictionaries; the total size of the tensor data used here is therefore $205 \times 205 \times 10$. Further work is required to fully deploy the algorithm in large-scale, higher order tensor applications.
For the noise model we consider fixed-location defects with unknown noisy positions, which commonly occur in video and multispectral images: in the image of each band, some fixed pixel locations are corrupted with very high noise, and our task is to recover the image. Specifically, in our experiment we picked a sparse set of pixel locations and added Gaussian noise at these positions of each image. Let $\Omega$ indicate the set of noisy pixel locations; then for each $(i,j) \in \Omega$ and $k=1,2,...,10$ we set $\T{Y}(i,j,k) = \T{Y}(i,j,k)+w_{ijk}$, where $\T{Y}$ is the clean tensor and $w_{ijk} \sim \mathcal{N}(0,\sigma^2)$ is additive Gaussian noise with standard deviation $\sigma$.
To learn the dictionaries, similarly to the previous experiment, we randomly took $10000$ overlapping patches of size $8\times 8 \times 10$ from the noisy tensor data and saved each patch as a tensor column of size $64 \times 1 \times 10$; the training tensor $\T{Y}$ was therefore of size $64 \times 10000 \times 10$. Since the total number of overlapping patches is $(205-7)^2 = 39204$, we trained on only about a quarter of all overlapping patches to limit computation time. If the data get larger, more patches are needed to ensure an accurate dictionary. For a fair comparison, in K-SVD we also randomly selected $10000$ overlapping patches of size $8 \times 8$ within each noisy image. The dictionaries trained by K-SVD and K-TSVD on the noisy tensor data are shown in Figure~\ref{fig:dictionary_toy}.
\begin{figure}[htbp]
\centering \makebox[0in]{
\begin{tabular}{c c}
\includegraphics[height = 1.4in, width = 1.6in]{figs/toy_KSVD_dic.png}
\includegraphics[height = 1.4in, width = 1.6in]{figs/toy_dic_1.png}
\end{tabular}}
\caption{\textbf{Left}: The dictionary learned on the first image using K-SVD. \textbf{Right}: The first frontal slice $\T{D}(:,:,1)$ of the tensor dictionary learned by K-TSVD.}
\label{fig:dictionary_toy}
\end{figure}
The denoising process of our method is a tensor sparse coding stage based on the learned tensor dictionary: we extracted each $8 \times 8 \times 10$ patch of the noisy multispectral images and solved the tensor sparse coding problem (\ref{eq:sparse_coding_3}) to obtain the denoised patch. Following a similar idea to \cite{ksvddenoise}, we then averaged all the denoised patches, with some relaxation obtained by averaging with the original noisy data, to obtain the denoised tensor.
To test the performance of our method, we compared K-TSVD to the following methods: K-SVD (band-wise) \cite{ksvd,ksvddenoise}, 3D K-SVD \cite{ksvddenoise}, BM3D (band-wise) \cite{bm3d}, LRTA \cite{LRTA}, DNMDL \cite{tucker2} and PARAFAC \cite{parafac}. BM3D is a non-local denoising method based on an enhanced sparse representation in the transform domain, achieved by grouping similar patches into 3D data arrays. DNMDL is a Tucker dictionary learning based method which, like BM3D, first groups the 3D patches and then uses a Tucker dictionary learning approach within each group to denoise. These two methods take the non-local similarity of different patches into consideration and have very good denoising performance in some cases. LRTA is a Tucker3 based method which simply uses a low-rank Tucker3 approximation as the denoised image. Similarly, PARAFAC is a CANDECOMP/PARAFAC based approach which obtains its denoising result from a low CP-rank approximation; these two methods can therefore be regarded as the same type of denoising approach. K-SVD, 3D K-SVD and our method K-TSVD perform denoising by learning an overcomplete dictionary on the noisy data and reconstructing the image using sparse coding, which is different from the other methods. The result with $\sigma = 100$ and a noisy-pixel sparsity of $10\%$ is shown in Figure~\ref{fig:denoise_toy}. The detailed PSNR comparison of these methods at different noise levels is in Table~\ref{tab:toy_compare}. Our algorithm outperforms the competing methods in most cases.
\begin{figure}[htbp]
\centering \makebox[0in]{
\begin{tabular}{c c}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_clean.png}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_noisy.png}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_KSVD.png}\\
\noindent\footnotesize{~~~~~(a) Clean image ~~~~~~~~~~~
(b) Noisy image ~~~~~~~~
(c) Bandwise K-SVD ~~}\\
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_3DKSVD.png}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_bm3d.png}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_LRTA.png}\\
\noindent\footnotesize{(d) 3DK-SVD ~~~~~~~~~~
(e) Bandwise BM3D ~~~~~~~~~~~~~~~
(f) LRTA ~~~~}\\
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_TuckerDL.png}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_parafac.png}
\includegraphics[width=.33\linewidth]{figs/denoise/toy_2_KTSVD.png}\\
\noindent\footnotesize{~~(g) DNMDL ~~~~~~~~~~~~~~~~~
(h) PARAFAC ~~~~~~~~~~~~~~~~~
(i) \textbf{K-TSVD} ~~}\\
\end{tabular}}
\caption{Denoised images at the $610$nm band of the chart and stuffed toy data. The sparsity of the noisy pixels is $10$\% and the locations of the noisy pixels are the same in each band. The additive noise is Gaussian with $\sigma = 100$.}
\label{fig:denoise_toy}
\end{figure}
\begin{table}
\begin{center}
\caption{PSNR(dB) of chart and stuffed toy images.}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Sparsity& $5\%$ & $10\%$ & $15\%$ & $10\%$ & $10\%$ \\
Noise level& $100$ & $100$ & $100$ & $150$ & $200$ \\ \hline
Noisy image& 20.96 & 18.18 & 16.35 &14.75 &12.10 \\ \hline
K-SVD &22.73& 22.60 & 22.49 &22.38 &22.00 \\ \hline
3DK-SVD &22.61 &22.53&22.47 &22.41 &22.20 \\ \hline
BM3D & 26.95& 26.62 & 26.36 &25.23 &24.29 \\ \hline
LRTA & 23.54 & 26.84 & 26.65 &23.90 &22.03 \\ \hline
DNMDL & 24.07 & 23.73& 25.16 &17.89 &16.83 \\ \hline
PARAFAC & 27.07& 26.86 & 26.72 &26.13 & 25.24\\ \hline
\textbf{KTSVD} & \textbf{27.19} & \textbf{26.98} & \textbf{26.79} &\textbf{26.18} & \textbf{25.44} \\ \hline
\end{tabular}
\label{tab:toy_compare}
\end{center}
\end{table}
The second dataset we used was a set of hyperspectral images of natural scenes \cite{hyperspectral}. As before, we took only the images from bands $630$nm to $720$nm, obtaining a clean tensor of size $205 \times 268 \times 10$. We trained the dictionary on $10000$ patches from the noisy data and performed the denoising process with the same technique. The performance is shown in Figure~\ref{fig:denoise_scene} and the PSNR comparison at different noise levels is given in Table~\ref{tab:scene_compare}. On this dataset our algorithm again gives good denoising performance in most cases. Among the tensor based approaches, LRTA gives the best PSNR in the case of $15\%$ sparsity with noise standard deviation $100$, and PARAFAC works well when the sparsity equals $10\%$ and the noise level is $200$.
\vspace{-1mm}
\begin{figure}[htbp]
\centering \makebox[0in]{
\begin{tabular}{c c}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_clean.png}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_noisy.png}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_KSVD.png}\\
\noindent\footnotesize{~~~~~(a) Clean image ~~~~~~~~~~~
(b) Noisy image ~~~~~~~~
(c) Bandwise K-SVD ~~}\\
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_3DKSVD.png}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_bm3d.png}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_LRTA.png}\\
\noindent\footnotesize{(d) 3DK-SVD ~~~~~~~~~~
(e) Bandwise BM3D ~~~~~~~~~~~~~~~
(f) LRTA ~~~~}\\
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_TuckerDL.png}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_parafac.png}
\includegraphics[width=.33\linewidth]{figs/denoise_scene/scene_8_KTSVD.png}\\
\noindent\footnotesize{~~(g) DNMDL ~~~~~~~~~~~~~~~~~
(h) PARAFAC ~~~~~~~~~~~~~~~~~
(i) \textbf{K-TSVD} ~~}\\
\end{tabular}}
\caption{Denoised images at the $700$nm band of hyperspectral images of a natural scene. The sparsity of the noisy pixels is $10$\% and the locations of the noisy pixels are the same in each band. The additive noise is Gaussian with $\sigma = 100$.}
\label{fig:denoise_scene}
\end{figure}
\begin{table}
\begin{center}
\caption{PSNR(dB) of natural scene images.}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Sparsity& $5\%$ & $10\%$ & $15\%$ & $10\%$ & $10\%$ \\
Noise level& $100$ & $100$ & $100$ & $150$ & $200$ \\ \hline
Noisy image& 21.29 & 18.02 & 16.45 &14.62 &12.19 \\ \hline
K-SVD &22.81& 22.70 & 22.64 &22.51 &22.28 \\ \hline
3DK-SVD &22.78 &22.73&22.71 &22.66 &22.58 \\ \hline
BM3D & 24.93& 24.56 & 24.37 &23.56 &22.90 \\ \hline
LRTA & 25.64 & 25.68 & \textbf{26.12} &23.76 &21.96 \\ \hline
DNMDL & 22.01 & 23.40& 24.62 &20.68 &18.47 \\ \hline
PARAFAC & 24.57& 24.48 & 24.39 &24.21 & \textbf{23.60} \\ \hline
\textbf{KTSVD} & \textbf{25.94} & \textbf{25.73} & 25.53 &\textbf{24.96} &23.55 \\ \hline
\end{tabular}
\label{tab:scene_compare}
\end{center}
\end{table}
We also applied the K-TSVD algorithm to video denoising. The video used here was footage from a stationary camera viewing a traffic intersection \footnote{\url{www.changedetection.net}}. The resolution of each frame is $175 \times 328$, and we applied our method to groups of $10$ frames. Figure~\ref{fig:denoise_video} shows one frame of the denoising result with sparsity $10\%$ and noise level $100$. In this experiment both LRTA and K-TSVD perform well.
\vspace{-2mm}
\begin{figure}[htbp]
\centering \makebox[0in]{
\begin{tabular}{c c}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_clean.png}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_noisy.png}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_KSVD.png}\\
\noindent\footnotesize{~~~~~(a) Clean image ~~~~~~~~~~~
(b) Noisy image ~~~~~~~~
(c) Bandwise K-SVD ~~}\\
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_3DKSVD.png}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_BM3D.png}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_LRTA.png}\\
\noindent\footnotesize{(d) 3DK-SVD ~~~~~~~~~~
(e) Bandwise BM3D ~~~~~~~~~~~~~~~
(f) LRTA ~~~~}\\
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_TuckerDL.png}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_PARAFAC.png}
\includegraphics[width=.33\linewidth]{figs/denoise_video/video_KTSVD.png}\\
\noindent\footnotesize{~~(g) DNMDL ~~~~~~~~~~~~~~~~~
(h) PARAFAC ~~~~~~~~~~~~~~~~~
(i) \textbf{K-TSVD} ~~}\\
\end{tabular}}
\caption{Video denoising result. The sparsity is $10\%$ and $\sigma = 100$.}
\label{fig:denoise_video}
\end{figure}
\vspace{-5mm}
\label{S:intro}
\setcounter{footnote}{0}
The extragalactic background light (EBL) is a measure of the
integrated radiation produced by stellar nucleosynthesis and
gravitational accretion over cosmic history. The EBL must contain
the radiation produced during the epoch of reionization (the
reionization-EBL, or simply the REBL). The REBL comes from
the UV and optical photons emitted by the first
ionizing stars and stellar remnants, radiation that is now redshifted
into the near-infrared (NIR). The REBL is expected to peak at $1{-}2 \, \mu$m
due to the redshifted Lyman-$\alpha$ and Lyman-break features. Furthermore,
while the brightness of the REBL must be sufficient to initiate and sustain
ionization, the individual sources may be quite faint \citep{Salvaterra2011}.
We have developed a specialized imaging instrument to measure REBL
spatial fluctuations, consisting of two wide-field cameras that are part
of the Cosmic Infrared Background Experiment (CIBER; \citealt{Bock2006}),
developed to measure the absolute intensity, spectrum, and spatial
properties of the EBL. CIBER's imaging cameras are combined with a
low-resolution spectrometer (LRS; \citealt{Tsumura2012}) designed to
measure the absolute sky brightness at wavelengths $0.75 < \lambda <
2.1 \, \mu$m, and a narrow-band spectrometer (NBS; \citealt{Korngut2012})
designed to measure the absolute ZL intensity using the $854.2 \,$nm Ca$\,$II
Fraunhofer line. A full description of the CIBER payload, including the overall
mechanical and thermal design, and detailed descriptions of the focal plane housings,
calibration lamps, shutters, electronic systems, telemetry and data handling,
laboratory calibration equipment, flight events, and flight thermal performance,
is given in \citet{Zemcov2012}. The observation sequence and science targets from
the first flight are available in \citet{Tsumura2010}.
In this paper, we describe the scientific background of EBL fluctuation measurements
in sections \ref{sS:science} and \ref{sS:drivers}, the instrument design in section
\ref{S:camera}, laboratory instrument characterization in section \ref{S:characterization},
modifications following the first flight in section \ref{S:mods}, and performance in
the second flight in section \ref{S:performance}. Sensitivity calculations are given
in a short appendix.
\subsection{Science Background}
\label{sS:science}
Searching for the REBL appears to be more tractable in a multi-color
fluctuations measurement than by absolute photometry. Absolute photometry,
measuring the sky brightness with a photometer and removing local foregrounds,
has proven to be problematic in the NIR, where the main difficulty is subtracting
the Zodiacal light (ZL) foreground, which is a combination of scattered sunlight and
thermal emission from interplanetary dust grains in our solar system. However,
absolute photometry studies give consistent results in the far-infrared
(\citealt{Hauser1998}, \citealt{Fixsen1998}, \citealt{Juvela2009},
\citealt{Matsuura2011}, \citealt{Penin2011}). These far-infrared
measurements are close to the EBL derived from galaxy counts though statistical
and lensing techniques that probe below the confusion limit (\citealt{Marsden2009},
\citealt{Zemcov2010}, \citealt{Bethermin2010}, \citealt{Berta2010}).
However in the NIR, at wavelengths appropriate for a REBL search, absolute EBL
measurements are not internally consistent (\citealt{Cambresy2001}, \citealt{Dwek1998},
\citealt{Matsumoto2005}, \citealt{Wright2001}, \citealt{Levenson2008}). A
significant component of this disagreement is related to the choice of model used
to subtract ZL (\citealt{Kelsall1998}, \citealt{Wright2001}). Furthermore, some
absolute EBL measurements (\citealt{Cambresy2001}, \citealt{Matsumoto2005}) are
significantly higher than the integrated galaxy light derived from source counts
(\citealt{Madau2000}, \citealt{Totani2001}, \citealt{Levenson2007}, \citealt{Keenan2010}).
The current disagreement between absolute measurements and galaxy counts is difficult
to reconcile with theoretical calculations \citep{Madau2005} or TeV absorption
measurements from blazars (\citealt{Gilmore2011}, \citealt{Aharonian2006},
\citealt{Schroedter2005}). However TeV constraints on the NIR EBL require an
assumption about the intrinsic blazar spectrum \citep{Dwek2005}. Furthermore cosmic
rays produced at the blazar are not attenuated by the EBL and can produce secondary
gamma rays that may explain the current TeV data without placing a serious constraint
on the NIR EBL \citep{Essey2010}.
Instead of measuring the absolute sky brightness, it is possible to
detect or constrain the REBL by studying the spatial properties of the
background (\citealt{Cooray2004}, \citealt{Kashlinsky2004}). A spatial
power spectrum of the EBL contains a REBL clustering component, evident
at an angular scale of approximately 10 arcminutes as shown in Figure
\ref{fig:pwrspec}, that is related to the underlying power spectrum of
dark matter. Numerical simulations of first galaxy formation indicate the
effects of non-linear clustering are significant \citep{Fernandez2010}.
There are also REBL fluctuations from the Poisson (unclustered shot noise)
component, but the amplitude of this term is more difficult to predict
as it is related to the number counts of the first galaxies, that is,
the brightness distribution and surface density of sources. In
addition, REBL fluctuations are thought to have a characteristic
electromagnetic spectrum, peaking at the redshift-integrated
Lyman-$\alpha$ emission feature. If reionization occurs at $z \sim
10$, this emission peak is redshifted into the NIR, with a spectral
shape that depends on the luminosity and duration of the epoch of
reionization.
Early measurements with the Diffuse Infrared Background Experiment
(DIRBE; \citealt{Kashlinsky2000}) and the Infrared Telescope in Space
(IRTS; \citealt{Matsumoto2005}) used fluctuations as a tracer of the
total EBL. A first detection of REBL fluctuations was reported by
\citet{Kashlinsky2005} using the \textit{Spitzer} Infrared Array
Camera (IRAC; \citealt{Fazio2004}) in the 3.6 and $4.5 \, \mu$m bands
in $5 \times 5$ arcminute regions, corresponding to the IRAC field of
view. The authors observe a departure from Poisson noise on $1{-}5$
arcminute scales which they attribute to first-light galaxies, after
ruling out Zodiacal, Galactic, and galaxy clustering foregrounds.
The observed brightness of the fluctuations is approximately constant
at 3.6 and $4.5 \, \mu$m. This analysis was later extended to $10 \times 10
\,$arcmin fields, giving similar results \citep{Kashlinsky2007}.
\citet{Thompson2007a} studied a $144 \times 144 \,$arcsec field with
the Hubble Space Telescope (\textit{HST}) at 1.1 and $1.6 \, \mu$m,
finding no evidence for $z > 8$ galaxies contributing to the
\textit{HST} or the \textit{Spitzer} fluctuations
\citep{Thompson2007b}. Finally, \citet{Matsumoto2011} report
first-light galaxy fluctuations with \textit{AKARI} at 2.4, 3.2 and
$4.1 \, \mu$m in a 10 arcminute field. Their reported spectrum shows
a strong increase from 4.1 to $2.4 \, \mu$m, consistent with a
Rayleigh-Jeans spectrum.
In Figure \ref{fig:pwrspec} we show two predictions related to the angular power spectrum
of REBL anisotropies. The lower prediction (solid red line) is from \citet{Cooray2012},
derived from the observed luminosity functions of Lyman dropout galaxies at redshifts
of 6, 7 and 8 \citep{Bouwens2008} at the bright end. The reionization history involves
an optical depth to electron scattering of 0.09, consistent with
the WMAP 7-year measurement of $\tau=0.088 \pm 0.014$ \citep{Komatsu2011}. The absolute REBL
background is $0.3 \, \mathrm{nW \, m^{-2} \, sr^{-1}}$ at $3.6 \, \mu$m for this model. \citet{Cooray2012}
improved on previous predictions \citep{Cooray2004} by accounting for non-linear clustering at
small angular scales with a halo model for reionization galaxies at $z > 6$. Note that the
REBL fluctuation power is similar at 1.6 and 1.1 $\mu$m given that reionization
occurs at around $z \sim 10$.
The upper prediction (dashed red line) is normalized to the anisotropy amplitude level
reported by {\it Spitzer}-IRAC at 3.6 $\mu$m \citep{Kashlinsky2005}. This power spectrum
requires an absolute REBL background between 2 and 3 $\mathrm{nW \, m^{-2} \, sr^{-1}}$ at $3.6 \, \mu$m. We
scale the power spectra to shorter wavelengths based on a Rayleigh-Jeans spectrum, consistent
with the combined measurements of {\it Spitzer} and {\it AKARI} \citep{Matsumoto2011}.
\begin{figure*}[ht]
\epsfig{file=fig_pwrspec1.eps,width=0.5\textwidth}
\epsfig{file=fig_pwrspec2.eps,width=0.5\textwidth}
\caption{Power spectra of REBL and foreground fluctuations
at $1.1 \, \mu$m (left) and $1.6 \, \mu$m (right). In both cases
the clustering power spectra of local ($z<3$) galaxies, for sources
brighter than two different magnitude cutoffs, are shown as the blue
solid and dashed lines. These galaxy clustering power spectra are based
on measured fluctuations as a function of cutoff magnitude from \citet{Sullivan2007}
and are consistent with the predictions by \citet{Helgason2012} based on
a large compilation of galaxy luminosity functions between $z=0$ and 4.
The two red lines correspond to two expectations on the REBL
anisotropy power spectrum as described in section \ref{sS:science}. Upper
limits to the ZL fluctuation power, shown in black, are scaled from experimental upper
limits at longer wavelengths by the ZL spectrum. The predicted CIBER sensitivities
in both bands are shown in orange. These are calculated with the instrument parameters
listed in Table \ref{tab:imagerprops} assuming that the detector noise
given in Table \ref{tab:imagersens} is uncorrelated and Gaussian over
the array and using the $\Delta C_{\ell}$ formalism in \citet{Knox1995}.}
\label{fig:pwrspec}
\end{figure*}
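For reference, the band-power uncertainty used for the predicted sensitivities in Figure~\ref{fig:pwrspec} follows the standard \citet{Knox1995} form; a minimal Python sketch, assuming Gaussian, uncorrelated white detector noise as stated in the caption, is given below (the function name and the example numbers are illustrative only):
\begin{verbatim}
import numpy as np

def knox_errors(ell, C_ell, N_ell, fsky, delta_ell):
    # Knox (1995) band-power error: sample plus noise variance over
    # (2*ell + 1) * fsky * delta_ell modes per band
    n_modes = (2.0 * ell + 1.0) * fsky * delta_ell
    return np.sqrt(2.0 / n_modes) * (C_ell + N_ell)

# Example: a 2 x 2 degree field covers fsky = 4 / 41253 of the sky
fsky = 4.0 / 41253.0
\end{verbatim}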
Fluctuation measurements are only feasible if the contributions from
foregrounds can be removed. Fortunately, it appears easier to remove
foregrounds in fluctuation measurements than in absolute
photometry measurements. The largest foreground, ZL, is known to be
spatially uniform on spatial scales smaller than a degree
(\citealt{Abraham1997}, \citealt{Kashlinsky2005}, \citealt{Pyo2011}).
Furthermore, any spatial variations in ZL can be monitored and removed
by observing a field over a period of time, as the view through the
interplanetary dust cloud changes annually. Galaxies and stars give
spatial fluctuations from Poisson variations and clustering. These
can be eliminated by masking sources from the image, either through
detection or by using an external catalog of known sources. Galaxy
clustering, arguably the most serious of these potential contaminants,
requires a sufficiently deep source cutoff to reduce the clustering
spectrum below the level of REBL fluctuations by masking sources.
\subsection{Theoretical Design Drivers}
\label{sS:drivers}
These early fluctuation results call for a next generation of improved
measurements at shorter wavelengths, spanning the expected peak of the
REBL electromagnetic spectrum, with wide angular coverage, to definitively
measure the expected peak in the REBL spatial power spectrum. In order to
make a definitive REBL fluctuations measurement, we require: (1) a wide field
of view to allow measurements of the characteristic REBL spatial power spectrum,
(2) observations in multiple NIR bands in order to characterize the REBL
electromagnetic spectrum and distinguish it from potential foregrounds,
and (3) arcsecond angular resolution to remove galaxies to a sufficient depth
to minimize the galaxy clustering foreground signal.
High-fidelity spatial imaging on degree scales is problematic in the
NIR due to airglow emission from the Earth's atmosphere, which is some
$200 {-} 1500$ times brighter than the astrophysical sky in the NIR J,
H and K bands \citep{Allen1976}. Airglow emission has time-variable
structure \citep{Ramsay1992} with spatial variations that increase on
larger angular scales, especially from $1^{\circ}$ to $10^{\circ}$
\citep{Adams1996}. We therefore conduct observations on a sounding
rocket flight, above the atmospheric layers responsible for airglow
emission, which lie at characteristic altitudes of $\sim 100 \,$km.
To measure the $\sim 10'$ peak in the REBL spatial power
spectrum, it is necessary to image an area of sky on the order of a
square degree. While one can image a large field with a mosaic
using a small field of view, this requires a highly stable instrument.
A wide field of view allows a measurement using single exposures in
the short time available on a sounding rocket flight.
The REBL electromagnetic spectrum is predicted to peak at $1{-}2 \,
\mu$m (\citealt{Cooray2004}, \citealt{Kashlinsky2004}) due to the
redshift-integrated Lyman-$\alpha$ emission feature, with a decreasing
spectrum at longer wavelengths that depends on the history of
reionization and the presence of free-free emission from ionized gas
surrounding the first galaxies. Observations in the optical and
near-IR should detect this spectrum, which is distinct from that of
local foregrounds, namely ZL, stars, galaxies, scattered starlight
(i.e.~diffuse galactic light), and other Galactic emission. Though
ideally the wavelength coverage would extend out to $\sim 5 \, \mu$m,
the key wavelengths for REBL science bracket the $1{-}2 \, \mu$m peak.
Longer wavelength information can be obtained by cross-correlating CIBER
data with overlapping wide-field \textit{Spitzer} and \textit{AKARI} maps.
The local-galaxy fluctuations foreground is mitigated by masking galaxies
down to a given flux threshold. The masking depth needed depends on the
residual clustering and Poisson fluctuations of galaxies below the
cutoff flux. \citet{Sullivan2007} measured galaxy clustering as a
function of cutoff from a wide-field ground-based NIR survey catalog.
We note that the REBL is best discriminated from low-redshift galaxy
clustering and Poisson fluctuations at $\sim 10$ arcminutes, as is evident
in Figure \ref{fig:pwrspec} by comparing the REBL and galaxy
clustering power spectra. Thus wide-field observations are also helpful
for discriminating REBL from local galaxy fluctuations.
The flux cutoff needed to separate the optimistic REBL model from local galaxy
fluctuations is $\sim 17^{th}$ Vega magnitude at $1.6 \, \mu$m, as is evident from
the curves in Figure \ref{fig:pwrspec}. The spatial density of galaxies brighter
than $17^{th}$ Vega magnitude is $N(>S) = 500$ galaxies per square
degree. The cutoff required to remove galaxies well below the expected CIBER instrument
sensitivity is $\sim 23^{rd}$ Vega magnitude at $1.6 \, \mu$m, corresponding
to $N(>S) = 1.5 \times 10^{5}$ galaxies per square degree. Thus we find that an angular
resolution of $4 {-} 80$ arcseconds is needed to remove galaxies while losing
less than $25\%$ of the pixels to masking.
Galaxy masking can be accomplished using ancillary observations with greater point
source depth, masking pixels in the CIBER images below the CIBER point source sensitivity.
The fields observed in the first two flights of CIBER, listed in Table \ref{tab:ancfields},
allows source masking using deep companion catalogs obtained in ground based NIR observations.
Details on first flight observations of these fields is available in \citet{Tsumura2010}.
These fields have also been observed in a search for REBL fluctuations by \textit{AKARI} and
\textit{Spitzer} at longer wavelengths, allowing for a cross-correlation analysis with CIBER.
\begin{table*}[htb]
\centering
\caption{CIBER Survey Fields and Ancillary Data Depths.}
\begin{tabular}{llccccl}
\hline
CIBER Field & Ancillary & $\lambda$ & Field Coverage &
\multicolumn{2}{c}{Ancillary Depth} & Reference \\
& Coverage & ($\mu$m) & (\%) & (Vega mag) & ($\sigma$) & \\ \hline
Bo\"{o}tes & NDWFS & 0.83 & 100 & 25.5 & 5 & \citet{Jannuzi1999} \\
& NEWFIRM & 1.0 & 100 & 22.0 & 5 & \citet{Gonzalez2011} \\
& NEWFIRM & 1.6 & 100 & 20.8 & 5 & \citet{Gonzalez2011} \\
& NEWFIRM & 2.4 & 100 & 19.5 & 5 & \citet{Gonzalez2011} \\
& \textit{Spitzer}-SDWFS & 3.6 & 100 & 19.7 & 5 & \citet{Ashby2009} \\
North Ecliptic Pole & Maidanak & 0.9 & 60 & 21.9 & 5 & \citet{Jeon2010} \\
& CFHT & 1.2 & 50 & 24 & 4 & \citet{Hwang2007} \\
& 2MASS & 1.6 & 100 & 17.9 & 10 & \citet{Cutri2003} \\
& \textit{AKARI} & 2.4 & 98 & 19.7 & 5 & \citet{Lee2009} \\
ELIAS-N1 & UKIDSS-DR6 & 0.9 & 75 & 22.3 & 5 & \citet{Lawrence2007} \\
& INT & 0.9 & 100 & 21.9 & 5 & \citet{GS2011} \\
& 2MASS & 1.6 & 100 & 17.8 & 10 & \citet{Cutri2003} \\
& \textit{Spitzer}-SWIRE & 3.6 & 100 & 18.6 & 10 & \citet{Lonsdale2003} \\
\hline
\end{tabular}
\label{tab:ancfields}
\end{table*}
\section{Instrument Design}
\label{S:camera}
The Imager instrument consists of two wide-field refracting NIR
telescopes each with an $11 \,$cm aperture, combined with
band-defining filters, a cold shutter, and a $1024 \times 1024$ HgCdTe
$2.5 \, \mu$m Hawaii-1\footnote{Manufactured by Teledyne Scientific \&
Imaging, LLC.} focal plane array. The Imager optics were designed
and built by Genesia Corporation using the cryogenic index of
refraction measurements of \citet{Yamamuro2006}. A schematic of the
assembly is shown in Figure \ref{fig:imageroptics}. The assembly
housing the Imager optics is constructed from aluminum alloy 6061,
and the lenses are made from anti-reflection coated Silica,
S-FPL53 and S-TIL25 glass. The assembly is carefully designed to maintain
optical alignment and focus through launch acceleration and vibration.
The aluminum housing is hard black anodized to reduce reflections
inside the cryogenic insert and telescope assembly, with the exception
of the static baffle at the front of the assembly which is gold plated
on its external surface and Epner laser black coated\footnote{This is
a proprietary process of Epner Technology, Inc.} on its inner
surface. This scheme serves to reduce the absorptivity of the baffle
on the side facing warm components at the front of the payload
section, and increase the absorptivity to NIR light on the inside.
At the other end of the camera, a focal plane assembly is mounted to
the back of the optical assembly and thermally isolated using Vespel
SP-1 standoffs. The assembly includes a cold shutter and active
thermal control for each detector. In addition, a calibration lamp
system illuminates the focal plane in a repeatable way to provide a
transfer standard during flight. The design of the calibration lamp
system is common to all of the CIBER instruments and is presented in
\citet{Zemcov2012}.
\begin{figure*}[htb]
\centering
\epsfig{file=fig_imagerphoto.eps,width=0.9\textwidth}
\caption{Schematic and photograph of the CIBER imaging camera. Light
enters the optical system at left and is imaged to the focal plane
at right. A fixed baffle is used to reduce scattering on the first
optic. The Imager assembly employs a fiber-fed calibration lamp
system, band-defining and blocking filters, and a focal plane
assembly as described in \citet{Zemcov2012}. Both Imager assemblies
used in CIBER are identical except for their band defining filters,
set with $\Delta \lambda / \lambda \sim 0.5$ bandpasses centered at
$1.1 \, \mu$m and $1.6 \, \mu$m, roughly corresponding to
astronomical I and H band. The photograph shows a fully assembled
Imager in the lab. The entire assembly mounts to the CIBER optical
bench when installed in the payload and operates at $\sim 80 \,$K.}
\label{fig:imageroptics}
\end{figure*}
The optical transmittances of the two Imager filters are shown in
Figure \ref{fig:filters}. The filter stack is located behind the
optical elements and in front of the focal plane assembly and cold
shutter as shown in Figure \ref{fig:imageroptics}. Each lens provides
additional filtering for wavelengths that are out of band for both
instruments, as their anti-reflection coatings transmit less than
1.5\% of light with wavelengths shorter than $0.75 \, \mu$m or longer
than $2.0 \, \mu$m.
\begin{figure}[htb]
\epsfig{file=fig_filterprofiles.eps,width=0.48\textwidth}
\caption{$1.1 \, \mu$m and $1.6 \, \mu$m Imager filter responses.
These curves represent the transmission of the optical stack which
includes band defining and blocking filters as well as 5 anti-reflection
coated lenses. This response does not include the response of the detector
array, which typically cuts off at $\sim 900 \,$nm for a Hawaii-1
array with a sapphire substrate (Mark Farris, private
communication).}
\label{fig:filters}
\end{figure}
Table \ref{tab:imagerprops} summarizes the design properties of the
optics and detector system, and the measured efficiencies, bands, and
read noise for the two cameras. The optical efficiency is the product
of the reflectance and absorption of the anti-reflection coated lenses
taken from witness samples. The instrument performance is calculated
in the appendix based on data from Table \ref{tab:imagerprops} and
presented in Table \ref{tab:imagersens}.
\begin{table}[ht]
\centering
\caption{Imager Instrument Properties.}
\begin{tabular}{lccc}
\hline
& 1.1 $\mu$m Band & 1.6 $\mu$m Band & Units \\
\hline
Wavelength Range & $900{-}1320 \ast$ & $1150{-}2040$ & nm\\
Pupil Diameter & 110 & 110 & mm \\
F\# & 4.95 & 4.95 \\
Focal Length & 545 & 545 & mm \\
Pixel Size & $7 \times 7$ & $7 \times 7$ & arcsec\\
Field of View & $2.0 \times 2.0$ & $2.0 \times 2.0$ & deg\\
Optics Efficiency & 0.90 & 0.90 & \\
Filter Efficiency & 0.92 & 0.89 & \\
Array QE & 0.51 & 0.70 & $\ast \ast$ \\
Total Efficiency & 0.42 & 0.56 & \\
Array Format & $1024^{2}$ & $1024^{2}$ \\
Pixel Pitch & 18 & 18 & $\mu$m\\
Read Noise (CDS) & 10 & 9 & e$^{-}$\\
Frame Interval & 1.78 & 1.78 & s\\
\hline
\multicolumn{4}{l}{$\ast$ We assume a $900 \,$nm cut-on wavelength
from the} \\
\multicolumn{4}{l}{Hawaii-1 substrate.} \\
\multicolumn{4}{l}{$\ast \ast$ Array QE is estimated from QE measured at
$2.2 \, \mu$m} \\
\multicolumn{4}{l}{for each array and scaled based on the response of}
\\
\multicolumn{4}{l}{a typical Hawaii-1.} \\
\end{tabular}
\label{tab:imagerprops}
\vspace{5pt}
\end{table}
Once assembled, the cameras mount to an optical bench shared with the
LRS and NBS. The completed instrument section is then inserted into
the experiment vacuum skin. Like the other CIBER instruments, the
Imager optics are cooled to $\sim 80 \,$K to reduce their in-band
emission using a liquid nitrogen cryostat system.
\citet{Zemcov2012} describes the various payload configurations used
in calibration and in flight which allow both dark and optical testing in
the laboratory.
\section{Instrument Characterization}
\label{S:characterization}
REBL fluctuation measurements place demanding requirements on the
instrument, including the detector noise properties, linearity and
transient response, optical focus, control of stray radiation, and
knowledge of the flat field response. We have carried out a series of
laboratory measurements to characterize these properties.
\subsection{Dark Current}
\label{sS:darkcurrent}
The detector dark current is measured in both flight and laboratory
configurations by closing the cold shutters, which attenuate the optical
signal by a measured factor of $\sim 10^{3}$. Array data are acquired
at $6.8 \, \mu$s per pixel sample, so that the full array is read in
$1.78 \,$s. The pixels are read non-destructively, and integrate
charge until reset. The integration time may be selected, but the
flight integrations are typically $\sim 50\,$s. To maximize the
signal-to-noise ratio, for each pixel we fit the measured output voltage
to a slope and an offset as described in \citet{Garnett1993}. All CIBER
Imager data are analyzed using this method, except where noted.
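As an aside, the per-pixel estimate amounts to an ordinary least-squares line fit to the non-destructive reads; a minimal sketch (ours; \citet{Garnett1993} treat the optimal weighting) is given below, with variable names that are our own.
\begin{verbatim}
# Sketch: least-squares slope/offset fit to one pixel's ramp.
import numpy as np

def fit_slope(reads, dt=1.78):
    """reads: non-destructive samples of one pixel [e-];
    dt: frame interval [s]; returns (slope [e-/s], offset [e-])."""
    t = dt * np.arange(len(reads))
    A = np.vstack([t, np.ones_like(t)]).T
    slope, offset = np.linalg.lstsq(A, reads, rcond=None)[0]
    return slope, offset

ramp = 5.0*1.78*np.arange(28) + np.random.normal(0, 10, 28)  # fake ~50 s ramp
print(fit_slope(ramp))   # slope should be close to 5 e-/s
\end{verbatim}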
The measured dark current also depends on the detector thermal
stability. For the Imagers we require dark current stability of $0.1
\,$e$^{-}/$s, which is equivalent to $\pm 100 \, \mu$K/s given a
temperature coefficient of $1000 \,$e$^{-}$/K. The Imager detector
arrays are controlled to $\pm 10 \, \mu$K/s both in the lab and in
flight, exceeding this specification \citep{Zemcov2012}. In the
flight configuration with the cold shutter closed and the focal plane
under active thermal control, we achieve $\sim 0.3 \,$e$^{-}/$s\ mean dark
current, as shown in Figure \ref{fig:darkcurrent}. The dark current
is measured frequently before launch as a monitor of the instrument
stability and is entirely consistent with the dark current measured in
the laboratory. The stability of the dark current from run to run
indicates the dominant contributor to dark current is the array
itself, as opposed to temperature or bias drift.
\begin{figure}[htb]
\epsfig{file=fig_darkcurrent.eps,width=0.48\textwidth}
\caption{CIBER Imager dark currents for both cameras. The mean dark
current is $0.3 \,$e$^{-}/$s, which is consistent with the manufacturer's
specifications for Hawaii-1 arrays operating near LN$_{2}$
temperature.}
\label{fig:darkcurrent}
\end{figure}
\subsection{Noise Performance}
\label{sS:noise}
Measuring the REBL spatial power spectrum requires a precise
understanding of the noise properties of the array. The array noise
introduces a bias that must be accounted for and removed in
auto-correlation analysis, and determines the uncertainty in the
measured power spectrum. The instrument sensitivity shown in Figure
\ref{fig:pwrspec} assumes the noise over the array is uncorrelated
between pixels. Unfortunately, HgCdTe arrays exhibit correlated
noise, as described by \citet{Moseley2010}. This noise is associated
with pickup from the clock drivers to the signal lines, with $1/f$ noise
in the multiplexer readout, and depending on the implementation, with
$1/f$ noise on the bias and reference voltages supplied to the array.
\subsubsection{Noise Model}
We characterized array noise using dark laboratory images and data
obtained just prior to flight. We first took a series of dark
integrations, similar to the $\sim 50\,$s integrations used in flight,
to characterize the noise behavior. In the left hand panels of Figure
\ref{fig:powerspectrum} we show the two dimensional power spectrum of
the difference of two consecutive $50 \,$s laboratory integrations.
The spectrum shows enhanced noise at low spatial frequencies along the
read direction that is largely independent of the cross-read spatial
frequency, symptomatic of correlated noise in the readout.
\begin{figure*}[ht]
\epsfig{file=fig_darknoise.eps,width=0.99\textwidth}
\epsfig{file=fig_darkspec.eps,width=0.99\textwidth}
\caption{Images (top) and two dimensional power spectra (bottom) of the
difference between two dark images, each obtained in a $50 \,$s integration.
The upper and lower left hand panels show the image and power spectra of data
taken minutes before flight, while the two right hand panels show the same for
random realizations using the noise model presented in Section \ref{sS:noise}.
The spatial scale of these images has been restricted to $250 \times 250$
pixels to better show the spatial structure. In both cases the read direction
is horizontal along pixel rows. The vertical structure in the two
dimensional power spectra shows increased noise power in the read
direction on scales $> 50$ pixels. The noise model accurately captures
this behavior, both in real and Fourier space.}
\label{fig:powerspectrum}
\end{figure*}
We then generate an estimate of the noise by constructing time streams
for the array readout. First, we determine the best fit slope and
offset for each pixel. We then subtract this estimate of the photo
current signal in each pixel in each frame. Finally, we form a
sequence of data for each of the four readout quadrants in the order
that the readout addresses individual pixels. An example of
time-ordered data and its noise spectrum is shown in Figure
\ref{fig:todpowerspectrum}, exhibiting excess noise behavior similar
to that described in \citet{Moseley2010}.
\begin{figure}[ht]
\epsfig{file=fig_tsfft.eps,width=0.48\textwidth}
\caption{The upper panel shows $30 \,$ms of signal-subtracted time ordered
data from the $1.6 \, \mu$m Imager. The lower panel shows the noise
spectrum derived from a longer such time series of reads over $50 \,$s.
The noise increases at $\sim 10 \,$kHz, visible in the time stream in
the upper panel as the characteristic scale of the noise at $\sim 0.5 \,$ms.
The ringing visible in the power spectrum below $10 \,$kHz corresponds to
the harmonics of the clock signals used to address the array.}
\label{fig:todpowerspectrum}
\end{figure}
The correlated noise in the readout may reduce the in-flight
sensitivity, and must be modeled to remove noise bias in the
auto-correlation power spectra. While a full description of a noise
model of the flight data is outside the scope of this paper, we can
generate a model confined to the noise properties of the arrays
observed in laboratory testing. This model is generated by producing
a Gaussian noise realization of the power spectrum given in Figure
\ref{fig:todpowerspectrum}. This is used to generate random
realizations of time ordered data. These data are mapped back into
raw frames, and fit to slopes and offsets to determine the images for
a full $50 \,$s integration. To generate images like those shown in
Figure \ref{fig:powerspectrum}, we generate multiple images and
display the difference of two $50 \,$s images. This formalism will be
extended to the flight data by adding photon shot noise from the
astrophysical sky, and correcting for source masking, in a future
publication.
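The steps of this procedure can be summarized in the following sketch (ours, not the pipeline code); the spectrum, frame geometry, and read ordering are simplified stand-ins for the measured quantities.
\begin{verbatim}
# Sketch: dark-frame realizations from a read-noise power spectrum.
import numpy as np

n_frames, n_side = 28, 256             # reduced geometry for the sketch
n_samp = n_frames * n_side * n_side
f = np.fft.rfftfreq(n_samp, d=6.8e-6)  # 6.8 us per pixel sample
P = 100.0 * (1.0 + (f / 1.0e4)**2)     # toy stand-in for the measured
                                       # time-stream noise spectrum

# Gaussian realization of the time stream (normalization schematic)
phases = np.exp(2j * np.pi * np.random.rand(f.size))
tod = np.fft.irfft(np.sqrt(P) * phases, n=n_samp)

# Re-map into frames (raster read order assumed), fit per-pixel slopes
frames = tod.reshape(n_frames, n_side, n_side)
t = 1.78 * np.arange(n_frames)
tbar, var_t = t.mean(), (t**2).mean() - t.mean()**2
slope = ((frames * t[:, None, None]).mean(0) - tbar * frames.mean(0)) / var_t
# Two such realizations are differenced to emulate the dark difference
# images and their two-dimensional power spectra.
\end{verbatim}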
\subsubsection{Estimated Flight Sensitivity}
To calculate the effect of correlated noise on the final science
sensitivity, we take our sequence of dark laboratory images, calculate
the two dimensional power spectrum, and apply a two-dimensional
Fourier mask that removes modes sensitive to the excess low frequency
noise. We remove these modes because they have a phase coherence in
real data that is not fully captured by the Gaussian noise model.
After Fourier masking, we calculate the spatial power in logarithmic
multipole bins. We then evaluate the standard deviation in the
spatial power among eight dark images, and refer this to sky
brightness units using the measured calibration factors in Table
\ref{tab:imagersens}. Because the laboratory data do not have
appreciable photon noise, we add an estimate of uncorrelated
photon noise from the flight photo currents. We compare this empirical
determination of the noise with the na\"{\i}ve sensitivity calculation
in Figure \ref{fig:newpssensitivity} \citep{Knox1995}. The empirical
noise is close to the na\"{\i}ve calculation on small spatial
scales, but is degraded by correlated noise on large spatial
scales. However, the instrument is still sufficiently sensitive to
easily detect the optimistic REBL power spectrum. For future experiments,
one may address the reference pixels in Hawaii-2RG arrays to mitigate
the effects of correlated noise.
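The corresponding estimator can be summarized in the following sketch (ours): a two-dimensional FFT of a difference image, a Fourier mask on the low read-direction frequencies, and azimuthal averaging in logarithmic $\ell$ bins; the mask cut and binning below are placeholders.
\begin{verbatim}
# Sketch: Fourier-masked, azimuthally binned power spectrum.
import numpy as np

def binned_power(diff, pix_arcsec=7.0, nbins=12):
    n = diff.shape[0]
    P2 = np.abs(np.fft.fftshift(np.fft.fft2(diff)))**2 / n**2
    fx = np.fft.fftshift(np.fft.fftfreq(n))          # cycles/pixel
    FX, FY = np.meshgrid(fx, fx)
    keep = np.abs(FX) > 2.0 / n                      # placeholder mask cut
    pix_rad = pix_arcsec * np.pi / (180.0 * 3600.0)
    ell = 2.0*np.pi*np.hypot(FX, FY) / pix_rad
    bins = np.logspace(np.log10(ell[keep].min()),
                       np.log10(ell.max()), nbins)
    idx = np.digitize(ell[keep], bins)
    return bins, np.array([P2[keep][idx == i].mean()   # NaN if bin empty
                           for i in range(1, nbins)])
\end{verbatim}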
\begin{figure*}[ht]
\epsfig{file=fig_pwrspec3.eps,width=0.48\textwidth}
\epsfig{file=fig_pwrspec4.eps,width=0.48\textwidth}
\caption{The Imager sensitivity to REBL fluctuations. The left hand
panel shows the estimated sensitivity for the $1.1 \, \mu$m channel,
and the right for the $1.6 \, \mu$m channel. In addition to the
curves taken from Figure \ref{fig:pwrspec}, we show the sensitivity
derived from laboratory data for both bands as described in the text
using the same $\ell$ binning as the na\"{\i}ve sensitivity estimate
shown by the orange curve. The black curve is an estimate of the flight
sensitivity, combining measured laboratory noise from an ensemble of $50 \,$s
integrations, added with uncorrelated photon noise derived from the
flight photo currents. This estimate is for a single $50 \,$s integration,
and does not include the effects of noise in the flat field or the
loss of pixels from galaxy masking.}
\label{fig:newpssensitivity}
\end{figure*}
\subsection{Detector Non-linearity and Saturation}
\label{sS:effects}
The Imager detectors have a dynamic range over which the response
tracks the source brightness in a linear fashion. As is typical for
Hawaii-1 detectors, the full well depth is measured to be $\sim 10^{5}
\,$e$^{-}$; however, the detectors begin to deviate from linearity
well before this. In order to flag detector non-linearity, we find
pixels with different illumination levels and track their behavior
during an integration. Figure \ref{fig:linearity} shows the typical response
of a pixel to a bright $\sim 3500 \,$e$^{-}/$s\ source over time. This plot shows
a deviation from the linear model which is large at half the full well
depth. Except for a few bright stars, Imager flight data are well within
the linear regime. Pixels with an integrated charge greater than $7000 \, e^{-}$,
corresponding to a non-linearity of $\sim 1 \%$, are simply flagged and removed from further
analysis, amounting to a pixel loss of $< 0.5 \%$ over the array.
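Schematically, with the threshold quoted above, this flagging step might read:
\begin{verbatim}
# Sketch: flag pixels whose integrated charge exceeds the ~1%
# non-linearity threshold quoted in the text.
import numpy as np

Q_MAX = 7000.0    # e-

def linearity_mask(final_charge):
    """final_charge: 2-D integrated charge [e-]; True = keep pixel."""
    keep = final_charge < Q_MAX
    print("flagged fraction: %.4f" % (1.0 - keep.mean()))
    return keep
\end{verbatim}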
\begin{figure}[ht]
\epsfig{file=fig_linearity.eps,width=0.48\textwidth}
\caption{Integrated signal as a function of time for a typical Imager
pixel. The black data show subsequent reads of the Imager detector
for an incident brightness of $\sim 3500 \,$e$^{-}/$s. The dashed red
line shows the linear model matching the slope of the first $10 \,$s
of the integration. Finally, the blue line is a fit to the model
from \cite{Bies2011}, which agrees well with the data.}
\label{fig:linearity}
\end{figure}
\subsection{Focus and Point Spread Function}
\label{sS:focus}
CIBER is focused in the laboratory by viewing an external collimated
source through a vacuum window. Early on in focus testing we found
that the best focus position depended on the temperature of the
optics. Thermal radiation incident on the cameras can heat the front
of the optics and affect their optical performance due to both
differential thermal expansion and the temperature-dependent refractive
index of the lenses. We reduced the incident thermal radiation
by installing two fused silica windows in front of the cameras for
laboratory testing. The cold windows themselves are $125 \,$mm
diameter, $5 \,$mm thick SiO$_{2}$, operating at a temperature of $120
\,$K, and have $1/10$-wave surface flatness and $< 5''$ wedge. As described
in \citet{Zemcov2012}, these windows are thermally connected to the
radiation shield to direct the absorbed thermal power to the
liquid nitrogen tank instead of routing the power through the optical
bench where it would produce a temperature gradient across the optics.
With the cold windows in place, we measure focus using a collimator
consisting of an off-axis reflecting telescope with a focal length of
$1900 \,$mm, a $235 \,$mm unobstructed aperture, and an $8 \, \mu$m pinhole
placed at prime focus. Since the focus position of the instruments is fixed, we
scan the pinhole through the focus position of the collimator to find the displacement
from collimator best focus at which each Imager has its best focus. This
procedure is repeated at the center of the array, the corner of each
quadrant, and in the center again as a check of consistency. Figure
\ref{fig:labpsf} shows data from such a test. If the best focus
position is found to lie outside the $\pm 80 \, \mu$m focal depth of the
Imagers, we mechanically shim the focal plane assembly to the best focus
position and remeasure the focus. We verify the focus position
before and after pre-flight vibration testing, performed for each flight,
to ensure that the focus will not change in flight.
\begin{figure}[ht]
\epsfig{file=fig_labpsf.eps,width=0.48\textwidth}
\caption{The variation of the PSF width measured in the laboratory as
a function of collimator focus position $\Delta x$ shifted away from
its best focus position. At each collimator position we measure the
PSF by fitting a Gaussian and determining its full width at half
maximum (FWHM) and uncertainty. The points show the data and the
black line the best fit parabola to the points, yielding the best
estimate of the focus position of the Imager instrument. The curve
is consistent with the $f/4.95$ focal ratio, where the array pixels
are $18 \times 18 \, \mu$m and subtend $7 \times 7$ arcseconds on
the sky.}
\label{fig:labpsf}
\end{figure}
We measure the point spread function (PSF) in flight using stars as
point sources. Given the large number of sources detected in each
field, a measurement of the average PSF across the array can be
obtained by fitting all of the bright sources. In fact, because the
astrometric solution of the images allows us to determine source
positions more accurately than a single pixel, and because the pixels
undersample the PSF of the optics, stacking sources gives a more
accurate determination of the central PSF. To generate the stack, the
region containing each source is re-gridded to be $3 \times$ finer than
the native resolution. The finer resolution image is not interpolated
from the native image, rather, the nine pixels which correspond to a
single native pixel all take on the same value. However, when we stack
the re-gridded point source images we center each image based on the known
source positions, and thus the stacked PSF is improved using this sub-pixel
prior information.
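In schematic form, the stacking procedure might read as follows (our sketch; catalog inputs and cutout size are hypothetical, and image edges and masking are ignored):
\begin{verbatim}
# Sketch: stack sources on a 3x-finer grid using catalog positions.
import numpy as np

def stack_psf(image, xs, ys, half=10, up=3):
    """image: 2-D array; xs, ys: sub-pixel source positions from the
    astrometric solution; half: cutout half-width [native pixels]."""
    size = 2*half*up + 1
    stack = np.zeros((size, size))
    for x, y in zip(xs, ys):
        ix, iy = int(round(x)), int(round(y))
        cut = image[iy-half:iy+half+1, ix-half:ix+half+1]
        fine = np.kron(cut, np.ones((up, up)))   # replicate, no interpolation
        dx = int(round((x - ix) * up))           # sub-pixel recentering
        dy = int(round((y - iy) * up))
        stack += np.roll(np.roll(fine, -dy, 0), -dx, 1)[:size, :size]
    return stack / len(xs)
\end{verbatim}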
To measure the full PSF, we combine data from bright sources, which
saturate the PSF core, with faint sources that accurately measure the
core. We generate the core PSF by stacking sources between 16.0 and 16.1 Vega
magnitudes from the 2MASS catalog \citep{Skrutskie2006}, which provides
a set of sources that are safely in the linear regime of the detector. The
source population is a combination of stars and galaxies, however with $7''$
pixels, galaxies are unresolved. As a check, this same analysis was repeated
for sources between 15.0 and 15.1, and 17.0 and 17.1 Vega magnitudes. The PSF
generated from these magnitude bands agreed with the nominal PSF.
To measure the extended PSF, we stack bright sources between 7 and 9 Vega
magnitudes from the 2MASS catalog. Since these bright sources are heavily
saturated, the Gaussian is fit only to the outer wings for
normalization. After the core and extended PSFs are created, we find they
agree well in the region between $r \sim 13 \,$arcsec, inside of which the
bright sources are saturated, to $r \sim 30\,$arcsec, where the faint sources
are limited by noise.
\begin{figure}[htb]
\epsfig{file=fig_Iband_2dpsf.eps,width=0.57\textwidth}
\epsfig{file=fig_Hband_2dpsf.eps,width=0.57\textwidth}
\caption{The $1.1 \, \mu$m (left) and $1.6 \, \mu$m (right) Imager PSFs measured
using stacked flight images from a combination of bright and faint sources as
described in the text. The Imager PSF has a bright core with a faint
extension to $r \sim 1'$, and is circularly symmetric.}
\label{fig:flightpsf}
\end{figure}
We synthesize the full PSF by matching the amplitudes of the
core and extended PSFs in the overlap region, producing the smooth two
dimensional PSF shown in Figure~\ref{fig:flightpsf}. The radial
average of this full PSF is shown in Figure \ref{fig:ringpsf} and
highlights that the core PSF is consistent with the laboratory focus data.
However, the extended PSF deviates significantly from this Gaussian
approximation and is better described by a Voigt profile, characteristic of
scattering in the optical components.
\begin{figure}[htb]
\epsfig{file=fig_psf_cross_sectionI.eps,width=0.48\textwidth}
\epsfig{file=fig_psf_cross_sectionH.eps,width=0.48\textwidth}
\caption{The radial profile of the $1.1 \, \mu$m (left) and $1.6 \, \mu$m
(right) Imager flight PSFs from Figure \ref{fig:flightpsf} (blue circles).
The red curve shows the best fit Gaussian to the PSF core, while the black
curve shows a best fit Voigt (i.e.~the convolution of a Gaussian and
Lorentzian) function to the extended PSF. This is indicative of scattering
in the optical components. Finally, the black dash-dotted line shows
the HWHM of the PSF, which matches the value measured in the
laboratory.}
\label{fig:ringpsf}
\end{figure}
The extended PSF is essential for determining the appropriate mask to
apply for bright sources. The diameter of the PSF mask is adjusted
based on the brightness of the source, and pixels above a given flux
are cut. The cut is calculated by simulating all sources in either
the 2MASS or \textit{Spitzer}-NDWFS catalogs using their known fluxes
and the Imager PSF. The cut mask is generated by finding all points
on this simulation with fluxes $> 3.3 \,$nW/m$^2$/sr\ and $1.8 \,$nW/m$^2$/sr\ at
$1.1$ and $1.6 \, \mu$m, respectively. This masking algorithm retains
$\sim 50 \,$\% of the pixels for a cutoff of $18$ Vega mag, and
$\sim$30\% of the pixels for a cutoff of $20$ Vega mag. To test the
cutoff threshold, we simulate an image of stars and galaxies and find
that, cutting to $20$ mag, the residual spatial power from masked
sources is $< 8 \times 10^{-2} \, \mathrm{nW^{2}\,m^{-4}\,sr^{-2}}$ at $\ell
= 10^{4}$, comparable to the instrument sensitivity shown in Figure
\ref{fig:newpssensitivity}.
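A schematic version of this mask construction (ours; the PSF kernel and catalog inputs are stand-ins) is:
\begin{verbatim}
# Sketch: simulate catalog sources through the PSF and mask pixels
# above the surface-brightness cut (3.3 / 1.8 nW/m^2/sr in the text).
import numpy as np

def source_mask(shape, xs, ys, peaks, psf, thresh):
    """xs, ys: source pixel positions; peaks: peak surface brightness
    of each source [nW/m^2/sr]; psf: peak-normalized 2-D kernel."""
    sim = np.zeros(shape)
    h = psf.shape[0] // 2
    for x, y, p in zip(xs, ys, peaks):
        ix, iy = int(round(x)), int(round(y))
        sim[iy-h:iy+h+1, ix-h:ix+h+1] += p * psf   # edges ignored
    return sim < thresh                            # True = keep pixel
\end{verbatim}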
\subsection{Off-axis response}
\label{sS:offaxis}
The Imagers must have negligible response to bright off-axis sources,
including the ambient-temperature rocket skin and shutter door, and
the Earth. As described in \citet{Zemcov2012}, we added an extendable
baffle to eliminate thermal emission from the rocket skin and
experiment door, heated during ascent by air friction, from illuminating
the inside of the Imager baffle tube and scattering to the focal plane.
We measured the off-axis response of the full baffle system following
the methodology in \citet{Bock1995}. We replaced the Hawaii-1 focal
plane array with a single optical photo diode\footnote{Hamamatsu Si
$10 \times 10\,$mm$^{2}$ detector part number S10043.} detector and
measured the response to a distant chopped source (see
\citealt{Tsumura2012} for a complete treatment of the measurement).
The telescope gain function,
\begin{equation}
\label{eq:gth}
g(\theta) = \frac{4 \pi}{\Omega} G(\theta) ,
\end{equation}
where $\Omega$ is the solid angle of the detector and $G(\theta)$ is
the normalized response to a point source, is the quantity of interest
for immunity to off-axis sources in surface brightness measurements
(\citealt{Page2003}), and is independent of the optical field of view.
The gain function was measured for three baffle configurations and is
shown in Figure \ref{fig:offaxis}. The improvement from blackening
the baffle tube and adding an extendable baffle section is notable for
angles $\theta > 20^{\circ}$. The stray light level from the Earth
is given by
\begin{equation}
\label{eq:Istray}
I_{\mathrm{stray}} = \frac{1}{4 \pi}\int g(\theta)
I_{\earth}(\theta,\phi)d \Omega ,
\end{equation}
where $I_{\earth}$ is the surface brightness of the Earth, and
$I_{\mathrm{stray}}$ is the apparent surface brightness of stray light
referred to the sky. Following the calculation described in
\citet{Tsumura2012}, we estimate that during the second-flight CIBER
observations of the fields listed in Table \ref{tab:ancfields}, where the
Earth's limb is $> 72^{\circ}$ off-axis, the stray light level is
$2 \,$nW/m$^2$/sr\ and $1 \,$nW/m$^2$/sr\ in the $1.6$ and $1.1 \, \mu$m
channels, respectively.
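Equation (\ref{eq:Istray}) is straightforward to evaluate numerically once $g(\theta)$ is tabulated; a minimal sketch, assuming azimuthal symmetry and a uniform Earth brightness beyond the limb angle (both assumptions ours, with a toy gain function), is:
\begin{verbatim}
# Sketch: I_stray = (1/4pi) int g(theta) I_earth dOmega, assuming
# azimuthal symmetry; g(theta) is a toy stand-in for the measured
# telescope gain function.
import numpy as np

theta = np.radians(np.linspace(0.0, 180.0, 3601))
g = 10.0**(2.0 - 6.0*theta/np.pi)            # placeholder gain function
I0 = 3.0e7                                   # Earth brightness [nW/m^2/sr]
I_earth = np.where(theta > np.radians(72.0), I0, 0.0)

integrand = g * I_earth * 2.0*np.pi*np.sin(theta)  # dOmega = 2pi sin dtheta
I_stray = np.trapz(integrand, theta) / (4.0*np.pi)
print("I_stray = %.2f nW/m^2/sr" % I_stray)
\end{verbatim}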
\begin{figure}[htb]
\epsfig{file=fig_offaxis.eps,width=0.48\textwidth}
\caption{The Imager telescope gain function, measured with the
anodized fixed black baffle tube used in the first flight (dotted
black line), an improved fixed baffle tube with a better laser black
optical coating (Epner Technology Inc., dashed blue line), and the
combination of the improved fixed baffle with an extendable baffle
used in the second flight (solid red line). Details of the optical
baffling can be found in \citet{Zemcov2012}.}
\label{fig:offaxis}
\end{figure}
This level of stray light is quite small but not completely
negligible, and potentially problematic in an anisotropy measurement
depending on its morphology over the field of view. To quantify how
stray light affects our measurements, we calculated the spatial power
spectrum of the difference between two images: Bo\"{o}tesA $-$
Bo\"{o}tesB, which are separated by only $2^{\circ}$ on the sky and
taken at nearly the same Earth limb avoidance angle, and Bo\"{o}tesA $-$
NEP, from second flight data (see Section \ref{S:performance}). We
find that the power spectra of these differences are the same to
within statistical noise, and that the spatial fluctuations of the
stray light signal are negligible.
We plan to observe these fields again in future flights at different
Earth limb avoidance angles, including angles greater than
$90^{\circ}$. The cross-correlation of such images from different
flights is highly immune to residual stray light.
\subsection{Flat Field Response}
The instrumental flat field, which is the relative response of each
detector pixel to a uniform illumination at the telescope aperture, is
determined in flight by averaging observations of independent fields.
Additionally, the flat field can be independently measured in the laboratory
before and after flight as a check for systematic error. The laboratory flat
field response is measured by illuminating the full aperture of a camera with
the output of an integrating sphere. The sphere is illuminated with a
quartz-tungsten halogen lamp which is filtered to produce an approximately solar
spectrum at the output of the sphere, mimicking the spectrum of ZL.
The sphere was measured by the manufacturer to have uniformity as a function of
angle to better than $5 \times 10^{-3}$ over $10^{\circ} \times 10^{\circ}$. We
scanned a small collimating telescope with a single-pixel detector over the aperture, and
determined that the sphere has angular uniformity to better than $1 \times 10^{-3}$
over the $2^{\circ} \times 2^{\circ}$ Imager field of view. We also measured the
spatial uniformity over the output port and saw no evidence of non-uniformity to
$< 7 \times 10^{-3}$ over an 11 cm aperture.
To eliminate any effects from vacuum and thermal windows, we house the integrating
sphere inside a vacuum chamber which mates to the front of the cryostat in place
of the shutter door (see \citet{Zemcov2012} for details). Light is fed into the
sphere from outside of the vacuum box so that the lamp can be chopped at the
source, allowing us to remove the thermal background. An example flat field
measurement for the $1.1 \, \mu$m camera is shown in Figure \ref{fig:flatfield}.
The laboratory data are fitted over a limited period of the integration following
array reset so as to avoid an appreciable error from non-linearity, as described in
Section \ref{sS:effects}, taking into account the minimum well depth of all pixels
in the array. The instruments have a residual response to thermal infrared radiation
in the laboratory with a typical photo current of $600 \, e^{-}/s$ in the $1.6 \, \mu$m
array, which therefore limits the linear integration period to $\sim 5$ s. We obtained
interleaved data with the source on and off to monitor and subtract this thermal
background. After accounting for these effects, the final statistical accuracy of
the laboratory flat field images shown in Figure~\ref{fig:flatfield} is $1.6 \%$
per pixel. Laboratory flat fields were measured before and after the second flight
to quantify the reproducibility of the lab flat field response. We binned
$1.6 \, \mu$m camera laboratory flat field images into 64 patches of
$15 \times 15$ arcminutes in order to reduce statistical noise, and found the binned images
agree to $< 1 \%$ ($1 \, \sigma$). The agreement between the flight and laboratory flat fields
requires a full reduction of the flight data and will be presented in a future science paper.
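The patch-binned comparison can be reproduced schematically as follows (our sketch; an $8 \times 8$ grid of patches on a $2^{\circ}$ field gives the 64 patches of $\sim 15 \times 15$ arcminutes quoted above):
\begin{verbatim}
# Sketch: block-average two flat fields into coarse patches and compare.
import numpy as np

def bin_patches(ff, n_patch=8):
    s = ff.shape[0] // n_patch
    return ff[:n_patch*s, :n_patch*s].reshape(
        n_patch, s, n_patch, s).mean(axis=(1, 3))

def flat_agreement(ff_pre, ff_post):
    a, b = bin_patches(ff_pre), bin_patches(ff_post)
    return np.std(a / b - 1.0)     # fractional rms; < 1% in the text
\end{verbatim}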
\begin{figure*}[ht]
\epsfig{file=fig_iband_flatfield.eps,width=0.48\textwidth}
\epsfig{file=fig_hband_flatfield.eps,width=0.48\textwidth}
\caption{The $1.1 \, \mu$m and $1.6 \, \mu$m Imager flat fields as
measured in the lab using the apparatus described in
\citet{Zemcov2012}. The average response has been scaled to $1.0$
in this image, which shows the typical relative responsivity
performance of the Hawaii-1 arrays in conjunction with the optics.
The RMS variation in the pixel responsivities is $0.09$ at $1.1 \,
\mu$m and $0.12$ at $1.6 \, \mu$m.}
\label{fig:flatfield}
\end{figure*}
\section{Modifications Following the First Flight}
\label{S:mods}
The Imagers were flown on the CIBER instrument on a Terrier Black
Brant sounding rocket flight from White Sands Missile Range in
2009 February. Many aspects of the experiment worked well, including
the focus, arrays and readout electronics, shutters, and calibration
lamps. However, we also found several anomalies that led to
modifications for subsequent flights.
\subsection{Thermal Emission from the Rocket Skin}
\label{sS:thermalemission}
The instruments showed an elevated photon level during the flight due
to thermal emission from the rocket skin, heated by air friction during
ascent, scattering into the optics. The edge of the skin near the shutter
door can directly view the first optic and the inside of the static baffle.
This thermal response was pronounced at long wavelengths, as traced by the LRS
\citep{Tsumura2010}. The $1.6 \, \mu$m Imager was more affected by thermal
emission than the $1.1 \, \mu$m Imager, as expected from its longer wavelength
response, giving 40 and 7 times the predicted photo current, respectively.
The measured thermal spectrum with the LRS should not produce a significant
photo-current in the $1.1 \, \mu$m Imager, as the band is supposed to cut off
at $1.32 \, \mu$m. The excess photo-current indicates the $1.1 \, \mu$m Imager
has some long wavelength response. The array response may continue somewhat
beyond $2.5 \, \mu$m, as the band-defining filters provide blocking only
out to $2.5 \, \mu$m and then open up. Also, as with the NBS \citep{Korngut2012},
the filters may not attenuate scattered light at large incident angles
as effectively as at normal incidence. The brightness observed by the
$1.6 \, \mu$m Imager is 6 times higher than the band-averaged LRS brightness.
This could be due to a combination of the higher stray light response in
the $1.6 \, \mu$m Imager, and the filter blocking issues mentioned above.
We installed an additional blocking filter providing $< 0.1 \%$ transmittance
from $2.4 \, \mu$m to $3.0 \, \mu$m for both imagers.
We modified the front of the experiment section to better control the
thermal and radiative environment at the telescope apertures. Most
notably, we added extendable baffles to each of the instruments to
eliminate all lines of sight from the skin to the optics or the inside
surfaces of the baffle tubes. \citet{Zemcov2012} details the design
of these baffles and the other changes made to the experiment section
front end. Thermal emission is not detectable in the Imagers in the
second flight, and is at least 100 times smaller than the first flight
in the LRS data.
\subsection{Rings and Ghosts from Bright Sources}
\label{sS:imagerrings}
During analysis of the first flight data, we discovered that bright
objects outside of the Imager field of view create diffuse rings in
the final images, as shown in Figure \ref{fig:bootesrings}. Upon
further analysis, we found that each of these rings was centered on a
bright star outside the geometric field of view. The rings were
caused by reflections off internal elements of the telescope assembly,
as illustrated in Figure \ref{fig:ringraytrace1}. There are two
general classes of rings in the first flight images, though the second
class contains two distinct populations; we denote these ring
populations 1, 2 and 3 below. Table \ref{tab:rings} gives details of
the ring populations including their angular extent and coupling
coefficients.
\begin{figure}[htb]
\epsfig{file=fig_bootesrings.eps,width=0.48\textwidth}
\caption{$1.1 \, \mu$m image of the Bo\"{o}tes A field from CIBER's
first flight showing rings which were later traced to reflections
off components inside the Imagers, namely the lens mounts and
instrument walls. As a guide the brightest rings are indicated with
arrows. There are three separate populations of reflections which
produce these rings. All sources which fall into their angular
response regions will produce a ring, though only sources brighter
than magnitude $\sim 4$ produce rings which are visible by eye.
These rings produce excess power in the science power spectrum, but
were eliminated by modifying the optics for the second flight.}
\label{fig:bootesrings}
\end{figure}
\begin{table*}[htb]
\centering
\caption{First Flight Imager Ring Parameters.}
\begin{tabular}{lccccc}
\hline
Ring Type & $\theta_{\mathrm{min}}$ & $\theta_{\mathrm{max}}$ &
\multicolumn{3}{c}{$\int d \phi I_{\mathrm{ring}}(\phi) / \int I_{0}$} \\
& & & Pre-fix & Post-fix ($3 \sigma$) & Reduction in $C_{\ell}$ \\ \hline
$1.1 \mu$m Imager \\ \hline
1 & $3.4^{\circ}$ & $6.6^{\circ}$ & $2.2 \times 10^{-3}$ & $< 2.6 \times 10^{-6}$ & $ > 7 \times 10^{5}$ \\
2 & $6.7^{\circ}$ & $8.8^{\circ}$ & $2.7 \times 10^{-4}$ & $< 1.5 \times 10^{-6}$ & $> 3 \times 10^{4}$ \\
3 & $11.2^{\circ}$ & $13.2^{\circ}$ & $6.6 \times 10^{-4}$ & $< 1.6 \times 10^{-6}$ & $> 1 \times 10^{5}$ \\ \hline
$1.6 \mu$m Imager \\ \hline
1 & $3.4^{\circ}$ & $6.6^{\circ}$ & $4.1 \times 10^{-3}$ & $< 3.0 \times 10^{-6}$ & $ > 1 \times 10^{6} $ \\
2 & $6.7^{\circ}$ & $8.8^{\circ}$ & $3.5 \times 10^{-4}$ & $< 1.9 \times 10^{-6}$ & $> 3 \times 10^{4}$ \\
3 & $11.2^{\circ}$ & $13.2^{\circ}$ & $1.3 \times 10^{-3}$ & $< 4.3 \times 10^{-6}$ & $> 9 \times 10^{4}$ \\
\hline
\end{tabular}
\label{tab:rings}
\end{table*}
Population 1 rings are generated by reflections off a lens mounting
flange (Figure \ref{fig:ringraytrace1}), and are produced by bright
sources between $3.4^{\circ}$ and $6.6^{\circ}$ off-axis. These rings
also have the strongest optical coupling, with an integrated flux in
the ring of a few tenths of a percent of the incident source flux. Given
their large acceptance angle, stars brighter than $4^{\mathrm{th}}$
magnitude are sufficiently abundant to generate multiple bright rings.
\begin{figure}[htb]
\epsfig{file=fig_ringraytrace.eps,width=0.48\textwidth}
\caption{Ray trace from an off-axis source which produces the rings
observed at the focal plane. The first class of rings (labeled as
$5^{\circ}$ in the Figure) are caused by glancing reflections off a
flange supporting the back lens. The second class of rings (labeled
as $7.5^{\circ}$) is produced by glancing reflections off flanges
and lens holders in the front set of optics. For the second flight,
these surfaces were cut back and grooved to reduce the glancing
reflectance, removing the rings to a negligible level, as verified
by laboratory measurements.}
\label{fig:ringraytrace1}
\end{figure}
Following their discovery in the first flight data, we measured the
population 1 rings and searched for other optical reflections in the
laboratory. We illuminated each Imager aperture with collimated light
and then scanned the angle of incidence of the collimated beam up to
$25^{\circ}$ off-axis. The first set of measurements confirmed the
existence of the population 1 rings, and allowed the discovery of the
second class of fainter rings.
The second class of rings is comprised of two sub-populations which
are both generated by reflections off the lens tube and lens support
fixtures at the front of the optics assembly (Figure
\ref{fig:ringraytrace1}). These rings have flux coupling coefficients
similar to, but slightly less than, the population 1 rings, but have
much larger solid angles on the array and so produce smaller per pixel
brightness. Together, population 2 and 3 rings are caused by bright
sources $6.7^{\circ}$ to $13.2^{\circ}$ off-axis. These
rings are not readily visible in the images from the first flight,
though their presence was verified in the lab after flight.
Given the acceptance angles, star number counts and the quality of the
ancillary data, the first set of rings are sufficiently bright to be
modeled and masked from the first flight images. However, the second set
of rings have a more complex morphology and fainter surface brightness,
and are more difficult for us to confidently account for in the images.
To understand the systematic error associated with the population 2
and 3 rings, we modeled their effect by convolving the measured
laboratory response with an off-axis star catalog for each field, and
calculated the spatial power spectrum of the resulting images. These
rings, if left unmasked, produce power above the instrument
sensitivity level, as shown in Figure \ref{fig:rings}.
\begin{figure*}[htb]
\epsfig{file=fig_Ibrings.eps,width=0.5\textwidth}
\epsfig{file=fig_Hbrings.eps,width=0.5\textwidth}
\caption{Simulated power spectra for the second class of rings for
both Imager instruments, $1.1 \, \mu$m (left) and $1.6 \, \mu$m (right).
These spectra were computed given the ring parameters in Table
\ref{tab:rings} and the known star fluxes and positions near
the CIBER fields. The instrument sensitivity is the same
as modeled in Figure \ref{fig:pwrspec}. The amplitude of the
power spectrum of the rings is different for each field because
of the differing stellar populations near each, but similar between
the bands because of the typical color of stars. For the second
flight, the level of ring contamination is well below the instrument
sensitivity, based on upper limits obtained in the laboratory following
the modifications to the optics described in the text. The upper limit
is shown for SWIRE, the most demanding field.}
\label{fig:rings}
\end{figure*}
To remove the rings entirely, we carried out the optical simulation shown in
Figure \ref{fig:ringraytrace1}. Following characterization of the
rings, the Imager optical assemblies were disassembled. The
components responsible for the rings were grooved or cut back and
re-anodized. The Imager optics were then reassembled, and the
off-axis measurements were repeated. We did not observe any rings
following these modifications. We place upper limits on the ring
coupling factors shown in Table \ref{tab:rings} which are based on the
uncertainty in the integrated surface brightness over the nominal ring
solid angles from the laboratory measurements. We propagated these
upper limits through the model to produce synthetic images and then
power spectra. The estimated reduction in the power spectrum for
each class of rings is given in Table \ref{tab:rings}. We find that
the effect on the power spectrum is negligible compared with the
instrument sensitivity after the optics modifications.
\section{Instrument Performance from the Second Flight}
\label{S:performance}
The Imagers were flown on the CIBER instrument on a second sounding
rocket flight in 2010 July. All aspects of the experiment performed
well. We found no evidence of bright thermal emission from the rocket
skin in either of the Imagers. We did not observe rings in the flight images.
While the science data are still being analyzed, we summarize the observed
brightness and array photo-currents in Table \ref{tab:imagersens}. Unfortunately,
it is difficult to estimate the full in-flight sensitivity in the power
spectrum without a noise estimator that accounts for correlated noise
in the presence of sources and masking. Therefore we estimate the
in-flight per-pixel sensitivities by evaluating the noise in the
flight difference images (see Section \ref{sS:noise}). The
corresponding per pixel surface brightness sensitivities, and point
source sensitivities using a $2 \times 2$ pixel aperture, are listed
in Table \ref{tab:imagersens}. Our estimated sensitivity to the spatial
power spectrum is shown in Figure \ref{fig:newpssensitivity} based on the
variance of the power spectra of an ensemble of dark laboratory images
combined with flight photon noise.
\begin{table*}[ht]
\centering
\caption{Calculated and Second Flight Sensitivities in a $50 \,$s
Observation.}
\begin{tabular}{lccccc}
\hline
& \multicolumn{2}{c}{$1.1 \, \mu$m Imager} &
\multicolumn{2}{c}{$1.6 \, \mu$m Imager} & \\
& Predicted & Achieved & Predicted & Achieved & Units \\ \hline
Sky brightness & 450 & 420 & 300 & 370 & nW m$^{-2}$ sr$^{-1}$\\
Photo current & 4.4 & 4.9 & 8.2 & 11.0 & e$^{-}/$s\\
Responsivity & 10 & 11 & 28 & 31 & me$^{-}$ s$^{-1}$ / nW m$^{-2}$ sr$^{-1}$\\
Current Noise & 0.31 & 0.35 & 0.41 & 0.45 & e$^{-}$ s$^{-1}$ ($ 1
\sigma /$pix) \\
$\delta \lambda I_{\lambda}$ & 31.7 & 33.1 & 15.1 & 17.5 & nW
m$^{-2}$ sr$^{-1}$ ($ 1 \sigma /$pix) \\
$\delta F_{\nu}$ & 18.5 & 18.4 & 18.2 & 17.8 & Vega Mag $(3 \sigma)$\\
\hline
\end{tabular}
\label{tab:imagersens}
\end{table*}
We scale the photo currents in Table \ref{tab:imagersens} to sky
brightness units using a calibration based on point sources observed
in flight. We stacked sources with flux between 16.0 and
16.1 Vega magnitudes in the 2MASS catalog and integrated over the stacked
image to account for the extended PSF. We converted this point source
calibration to surface brightness using the pixel solid angle, giving
the calibration factors in Table \ref{tab:imagersens}.
\section{Conclusions}
We have designed and tested an imaging instrument optimized to search
for the predicted spatial and spectral signatures of fluctuations from
the epoch of reionization. The instrument demonstrates the sensitivity
needed to detect, or place interesting limits upon, REBL fluctuations
in the short observing time available in a sounding rocket flight. We
have carried out a comprehensive laboratory characterization program
to confirm the focus, characterize the flat field response, perform an
end-to-end calibration, and measure the stray light response and
detailed noise properties. After a first sounding
rocket flight in 2009 February, we modified the instrument to
eliminate response to thermal radiation from ambient portions of the
payload, and to reduce stray light to bright stars outside of the field
of view. Scientific data from the second flight in 2010 July are currently
under analysis, and the instrument demonstrated sensitivity close to
design expectations. The instrument characterization shows that systematic
errors from the extended PSF, stray light, and correlated noise over the
array are controlled sufficiently to allow a deep search for REBL spatial
fluctuations. We recently completed a third flight in 2012 March that allows
us to cross-correlate images at different seasons to directly assess any ZL
fluctuations. The flight and recovery were successful, and a fourth flight
is now planned. A successor instrument, with 3 or more simultaneous spectral
bands and with higher sensitivity using a 30 cm telescope and improved
Hawaii-2RG arrays, is currently in development.
\section{Appendix}
The calculated sensitivities in Table \ref{tab:imagersens}, Figure \ref{fig:pwrspec}
and Figure \ref{fig:newpssensitivity} are based on a $50 \,$s integration with the
instrument parameters given in Table \ref{tab:imagerprops}. The estimated photo
current $i_{\mathrm{phot}}$ is given by:
\begin{equation}
\label{eq:Iph}
i_{\mathrm{phot}} \simeq \lambda I_{\lambda} \left( \frac{\eta A \Omega}{h \nu}
\frac{\Delta \lambda}{\lambda} \right) \hspace{0.5cm}
[\mathrm{e}^{-}/\mathrm{s}],
\end{equation}
where $A \Omega$ is the pixel throughput, $\eta$ is the total
efficiency, $\lambda I_{\lambda}$ is the sky intensity, and $\Delta
\lambda$ is the integral bandwidth. The term in brackets in Equation
\ref{eq:Iph} gives the surface brightness calibration from e$^{-}/$s\ to
nW/m$^2$/sr. The current noise over an integration with continuous sampling
is given by:
\begin{equation}
\label{eq:deltaI}
\delta i_{\mathrm{phot}} = \sqrt{\frac{i_{\mathrm{phot}}}{T} + \delta
Q_{\mathrm{CDS}}^{2}\frac{6 T_{0}}{T^{3}}} \hspace{0.5cm}
[\mathrm{e}^{-}/\mathrm{s}],
\end{equation}
where $\delta Q_{\mathrm{CDS}}$ is the correlated double sample read
noise, $T = 50 \,$s is the integration time, and the frame interval is
$T_{0}=1.78 \,$s. The surface brightness sensitivity is therefore:
\begin{equation}
\label{eq:deltanuInu}
\delta \lambda I_{\lambda} = \delta i_{\mathrm{phot}} \frac{h \nu}{A \Omega \eta
\Delta \lambda / \lambda} \hspace{0.5cm} [\mathrm{nW \; m}^{-2} \;
\mathrm{sr}^{-1}].
\end{equation}
Finally, the point source sensitivity is given by:
\begin{equation}
\label{eq:deltaF}
\delta \lambda F_{\lambda} = \delta i_{\mathrm{phot}} \frac{\sqrt{N_{pix}} h
\nu}{A \eta \Delta \lambda / \lambda} \hspace{0.5cm} [\mathrm{nW \;
m}^{-2}],
\end{equation}
where $N_{pix}$ is the effective number of pixels that must be
combined to detect a point source, and we have assumed $N_{pix} = 4$.
These per-pixel sensitivities are used to estimate the sensitivity
on the power spectrum in Figure \ref{fig:pwrspec} and Figure \ref{fig:newpssensitivity}
using the formalism in \citet{Cooray2004}. The calculation assumes
the noise in each pixel is independent, and ignores errors from source
removal and flat-field estimation.
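For concreteness, a direct transcription of Equations (\ref{eq:Iph})--(\ref{eq:deltanuInu}) with the $1.1 \, \mu$m values from Table \ref{tab:imagerprops} approximately reproduces the predicted entries of Table \ref{tab:imagersens}; this sketch is ours and is not the code used to generate the tables.
\begin{verbatim}
# Sketch: sensitivity estimate for the 1.1 um band from Eqs. (5)-(7).
import numpy as np

h, c = 6.626e-34, 2.998e8
lam = 1.1e-6                        # m
dlam_over_lam = (1320.0 - 900.0) / 1100.0
eta = 0.42                          # total efficiency (Table 1)
A = np.pi * (0.110 / 2)**2          # m^2, 11 cm pupil
Omega = (7.0 * np.pi/180/3600)**2   # sr, 7" pixels
T, T0, dQ = 50.0, 1.78, 10.0        # integration, frame time, CDS noise
lamIlam = 450e-9                    # sky brightness [W/m^2/sr] (Table 3)

cal = eta * A * Omega * dlam_over_lam / (h * c / lam)  # (e-/s)/(W/m^2/sr)
i_phot = lamIlam * cal                                 # ~4.4 e-/s
di = np.sqrt(i_phot / T + dQ**2 * 6.0 * T0 / T**3)     # ~0.31 e-/s
print("dlamIlam = %.1f nW/m^2/sr (1 sigma/pix)" % (1e9 * di / cal))
\end{verbatim}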
\section*{Acknowledgments}
This work was supported by NASA APRA research grants NNX07AI54G,
NNG05WC18G, NNX07AG43G, NNX07AJ24G, and NNX10AE12G. Initial support
was provided by an award to J.B.~from the Jet Propulsion Laboratory's
Director's Research and Development Fund. Japanese participation in
CIBER was supported by KAKENHI (20$\cdot$34, 18204018, 19540250,
21340047 and 21111004) from Japan Society for the Promotion of Science
(JSPS) and the Ministry of Education, Culture, Sports, Science and
Technology (MEXT). Korean participation in CIBER was supported by the
Pioneer Project from Korea Astronomy and Space science Institute
(KASI).
This publication makes use of data products from the Two Micron All
Sky Survey (2MASS), which is a joint project of the University of
Massachusetts and the Infrared Processing and Analysis
Center/California Institute of Technology, funded by the National
Aeronautics and Space Administration and the National Science
Foundation. This work made use of images and/or data products
provided by the NOAO Deep Wide-Field Survey (NDWFS), which is
supported by the National Optical Astronomy Observatory, operated by
AURA, Inc., under a cooperative agreement with the National Science
Foundation.
We would like to acknowledge the dedicated efforts of the sounding
rocket staff at the NASA Wallops Flight Facility and the White Sands
Missile Range. We also acknowledge the work of the Genesia
Corporation for technical support of the CIBER optics. Our thanks to
Y.~Gong for sharing the REBL curves shown in Figure 1.
A.C.~acknowledges support from an NSF CAREER award, B.K.~acknowledges
support from a UCSD Hellman Faculty Fellowship, K.T.~acknowledges
support from the JSPS Research Fellowship for Young Scientists, and
M.Z.~acknowledges support from a NASA Postdoctoral Program Fellowship.
{\it Facility:} CIBER
\section{Introduction}
We follow the analysis of Dimock \cite{Di1}, \cite{Di2} concerning the construction of the quantum Hilbert space of the Bosonic string and of the superstring, our purpose being to present the facts in as elementary a manner as possible while remaining rigorous.
To be able to do that we start with a very simple method of constructing
representations of the Virasoro and Kac-Moody algebras acting in Bosonic
and Fermionic Fock spaces. We present a different way of computing things
based on Wick theorem. Then we remind the main ingredients of the
light-cone formalism and we prove the Poincar\'e invariance of the
string and superstring systems. Most of the results obtained in this paper
are known in the standard literature \cite{GRT}, \cite{GGRT}, \cite{GS},
\cite{GSW}, \cite{GH} but we offer some new simple proofs. There are various attempts to clarify the main mathematical aspects of this topic (see the references) but we are closest to the spirit of \cite{Lu}, \cite{Ot}, \cite{Ne} and \cite{Di1},\cite{Di2}.
To establish the equivalence between the light-cone and the covariant formalisms one needs the so-called DDF operators. Usually DDF operators are introduced using formal series which are elements of the so-called vertex algebras \cite{FB}. We present here an elementary derivation in the Bosonic case without using vertex algebras. Next we show that the covariant construction is equivalent (in the sense of group representation theory) to the light-cone construction, using the Hilbert space fiber-bundle formalism \cite{Va}. Finally we give an elementary proof of the BRST quantization procedure for all the string models considered previously. In particular we are able to find very explicit formulas for the cohomology of the BRST operator.
\section{Quadratic Hamiltonians in Fock spaces \label{quad}}
\subsection{Bose systems of oscillators\label{bose}}
We consider the Hilbert space
${\cal H}$
with the scalar product
$<\cdot,\cdot>$
generated by $N$ Bose oscillators; the creation
and annihilation operators are
$
a_{m}, a_{m}^{+}, \quad m = 1,\dots,N
$
and verify
\begin{equation}
[ a_{m}, a_{n}^{+} ] = \delta_{mn} \cdot I \quad
[ a_{m}, a_{n} ] = 0, \quad [ a^{+}_{m}, a^{+}_{n} ] = 0,
\quad \forall m, n = 1,\dots,N.
\end{equation}
If
$
\Omega \in {\cal H}
$
is the vacuum state we have
$
a_{m} \Omega = 0, \quad m > 0.
$
As usual \cite{GSW} it is more convenient to introduce the operators
$
\alpha_{m}, m \in \{\pm 1,\dots,\pm N\}
$
according to:
\begin{equation}
\alpha_{m} = \sqrt{m}~a_{m}, \forall m > 0 \qquad
\alpha_{m} = \sqrt{-m}~a_{-m}^{+}, \forall m < 0
\end{equation}
Then the canonical commutation relation from above can be compactly
written as follows:
\begin{equation}
[ \alpha_{m}, \alpha_{n} ] = m~\delta_{m+n} \cdot I,
\quad \forall m, n \not= 0
\label{com-alpha}
\end{equation}
where
$
\delta_{m} = \delta_{m,0}
$
and we also have
\begin{eqnarray}
\alpha_{m} \Omega = 0, \quad m > 0
\nonumber \\
\alpha_{m}^{+} = \alpha_{-m} \quad \forall m.
\end{eqnarray}
To apply Wick theorem we will also need the $2$-point function;
we easily derive
\begin{equation}
<\Omega, \alpha_{m} \alpha_{n}\Omega> = \theta(m)~m~\delta_{m+n},
\quad \forall m, n \not= 0
\label{vac-alpha}
\end{equation}
where
$\theta$
is the usual Heaviside function. The main result is the following
\begin{thm}
Let us consider operators of the form
\begin{equation}
H(A) \equiv {1\over 2} A_{mn} :\alpha_{m}\alpha_{n}:
\end{equation}
where $A$ is a symmetric matrix
$A^{T} = A$
and the double dots give the Wick ordering. Then:
\begin{equation}
[ H(A), H(B) ] = H([A,B]) + \omega_{\alpha}(A,B) \cdot I
\label{hh-alpha}
\end{equation}
where the commutator
$
[A,B] = A\cdot B - B\cdot A
$
is computed using the following matrix product
\begin{equation}
(A\cdot B)_{pq} \equiv \sum_{m \not= 0} m~A_{pm}~B_{-m,q}
\label{multi}
\end{equation}
and we have defined
\begin{equation}
\omega_{\alpha}(A,B) \equiv {1\over 2} \sum_{m,n > 0} mn~A_{mn}~B_{-n,-m}
- (A \leftrightarrow B).
\end{equation}
\label{H-alpha}
\end{thm}
{\bf Proof:} It is elementary and relies on computing the expression
$H(A)H(B)$
using Wick theorem:
\begin{eqnarray}
H(A) H(B) = :H(A)H(B):
\nonumber \\
+ {1\over 4} A_{mn} B_{pq} [ <\Omega, \alpha_{m}\alpha_{p}\Omega>
:\alpha_{n}\alpha_{q}: + (m \leftrightarrow n) +
(p \leftrightarrow q) + (m \leftrightarrow n, p \leftrightarrow q) ]
\nonumber \\
+ {1\over 4} A_{mn} B_{pq} [ <\Omega, \alpha_{m}\alpha_{p}\Omega> <\Omega, \alpha_{n}\alpha_{q}\Omega>
+ <\Omega, \alpha_{m}\alpha_{q}\Omega> <\Omega, \alpha_{n}\alpha_{p}\Omega> ] \cdot I
\end{eqnarray}
If we use the $2$-point function we easily arrive at the formula from the
statement.
$\blacksquare$
Now we extend the previous result to the case when we have an infinite number
of oscillators. We consider that
$m \in \mathbb{Z}^{*}$
and the matrix $A$ is {\it semi-finite} i.e. there exists
$N > 0$
such that
\begin{equation}
A_{mn} = 0 \quad {\rm for} \quad |m + n| > N.
\label{semi}
\end{equation}
We note that if $A$ and $B$ are semi-finite then
$A \cdot B$
is also semi-finite. We will need the {\it algebraic Fock space}, which is the subspace
${\cal D}_{0} \subset {\cal H}$
with a finite number of particles. The elements of
${\cal D}_{0}$
are, by definition, finite linear combinations of vectors of the type
$
a^{+}_{m_{1}}\dots a^{+}_{m_{k}}\Omega;
$
the subspace
${\cal D}_{0}$
is dense in
${\cal H}$.
Then one can prove easily that the operator
$H(A)$
is well defined on
${\cal D}_{0}$,
leaves
${\cal D}_{0}$
invariant and formula (\ref{hh-alpha}) remains true in
${\cal D}_{0}$.
We will need an extension of this result, namely we want to consider the case when
the index $m$ takes the value
$m = 0$
also i.e.
$m \in \mathbb{Z}$
and we preserve the commutation relation (\ref{com-alpha}).
We note that the relation (\ref{vac-alpha}) is not valid if one (or both) of the indices is zero, so the previous proof does not
work. It can be proved, however, directly that the statement of the theorem remains
true if we extend accordingly the definition of the matrix product to include the
value $0$ as well, i.e.\ in (\ref{multi}) we drop the restriction
$m \not= 0$.
In general, the Hilbert space in this case will not be entirely
of Fock type: the operators
$\alpha_{m},~m \not= 0,$
will live in a Fock space tensored with another Hilbert space in which
the operators
$\alpha_{0}$ live.
\subsection{A System of Fermi Oscillators\label{fermi-b}}
We consider the Hilbert space
${\cal H}$
with the scalar product
$<\cdot,\cdot>$
generated by $N$ Fermi oscillators; the creation
and annihilation operators are
$
b_{m}, b_{m}^{+}, \quad m = 1,\dots,N
$
and verify
\begin{equation}
\{ b_{m}, b_{n}^{+} \} = \delta_{mn} \cdot I \quad
\{ b_{m}, b_{n} \} = 0, \quad \{ b^{+}_{m}, b^{+}_{n} \} = 0,
\quad \forall m, n = 1,\dots,N.
\end{equation}
If
$
\Omega \in {\cal H}
$
is the vacuum state we have
$
b_{m} \Omega = 0, \quad m > 0.
$
As above it is more convenient to introduce the operators
$
b_{m}, m \in \{\pm 1,\dots,\pm N\}
$
according to:
\begin{equation}
b_{m} = b_{m}, \forall m > 0 \qquad
b_{m} = b_{-m}^{+}, \forall m < 0
\end{equation}
and the canonical anti-commutation relation from above can be compactly
rewritten as follows:
\begin{equation}
\{ b_{m}, b_{n} \} = \delta_{m+n} \cdot I,
\quad \forall m, n \not= 0.
\label{com-b}
\end{equation}
We also have
\begin{eqnarray}
b_{m} \Omega = 0, \quad m > 0
\nonumber \\
b_{m}^{+} = b_{-m}.
\end{eqnarray}
The $2$-point function is in this case:
\begin{equation}
<\Omega, b_{m} b_{n}\Omega> = \theta(m) \delta_{m+n},
\quad \forall m, n \not= 0.
\label{vac-b}
\end{equation}
The main result is the following
\begin{thm}
Let us consider operators of the form
\begin{equation}
H(A) \equiv {1\over 2} A_{mn} :b_{m}b_{n}:
\end{equation}
where $A$ is an antisymmetric matrix
$A^{T} = - A$
and the double dots give the Wick ordering. Then:
\begin{equation}
[ H(A), H(B) ] = H([A,B]) + \omega_{b}(A,B) \cdot I
\end{equation}
where the commutator
$
[A,B] = A\cdot B - B\cdot A
$
is computed using the following matrix product
\begin{equation}
(A\cdot B)_{pq} \equiv \sum_{m \not= 0} A_{pm} B_{-m,q}
\end{equation}
and we have defined
\begin{equation}
\omega_{b}(A,B) \equiv {1\over 2} \sum_{m,n > 0} A_{mn} B_{-n,-m}
- (A \leftrightarrow B).
\end{equation}
\label{H-b}
\end{thm}
The proof is similar to that of the preceding theorem. The previous result can be extended
to the case when we have an infinite number of oscillators i.e.
$m \in \mathbb{Z}^{*}$
and the matrix $A$ is semi-finite: the operator
$H(A)$
is well defined on the corresponding algebraic Fock space
${\cal D}_{0}$,
leaves invariant this subspace and the previous theorem remains true.
\subsection{Another System of Fermi Oscillators\label{fermi-d}}
We extend the previous results to the case when the value
$m = 0$
is allowed i.e. the Hilbert space
$
{\cal H}
$
is generated by the operators
$
d_{m}, \quad m = -N,\dots,N
$
and verify
\begin{eqnarray}
\{ d_{m}, d_{n} \} = \delta_{m+n} \cdot I \quad \forall m, n
\nonumber \\
d_{m}^{\dagger} = d_{-m}\quad \forall m
\nonumber \\
d_{m}\Omega = 0 \quad \forall m > 0
\label{dd}
\end{eqnarray}
where
$
\Omega \in {\cal H}
$
is the vacuum state. One can realize this construction if one takes
$
{\cal H} = {\cal F} \otimes^{s} {\cal C}
$
where
$
{\cal F}
$
is the Fock space from the preceding Section,
$
{\cal C}
$
is the Clifford algebra generated by the element
$
b_{0}
$
verifying
$
b_{0}^{\dagger} = b_{0} \qquad b_{0}^{2} = 1/2
$
and the skew tensor product
$
\otimes^{s}
$
is chosen such that the operators
\begin{equation}
d_{m} \equiv b_{m} \otimes^{s} I_{2} \quad \forall m \not= 0 \qquad
d_{0} \equiv I_{1} \otimes^{s} b_{0}
\end{equation}
verify (\ref{dd}). Another, more explicit construction is to consider the Hilbert space generated by the creation and annihilation operators
$
b_{m}, b_{m}^{+}, \quad m = 0,\dots,N
$
such that we have
$
b_{m} \Omega = 0, \quad m \geq 0
$
and to define the operators
$
d_{m}, m = \{-N,\dots,N\}
$
according to:
\begin{eqnarray}
d_{m} = \cases{ b_{m}, & for m $>$ 0 \cr
{1\over \sqrt{2}} (b_{0} + b_{0}^{+}), & for m = 0 \cr
b_{-m}^{+}, & for m $<$ 0 \cr}
\label{d-b}
\end{eqnarray}
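As a simple consistency check of the definition (\ref{d-b}), one can verify the relations (\ref{dd}) directly from the canonical anticommutation relations of the operators $b_{m}, b_{m}^{+}$; this is only a short computation. For instance
\begin{equation}
d_{0}^{2} = {1\over 2}~(b_{0} + b_{0}^{+})^{2} = {1\over 2}~\{ b_{0}, b_{0}^{+} \} = {1\over 2} \cdot I
\qquad
\{ d_{m}, d_{-m} \} = \{ b_{m}, b_{m}^{+} \} = I \quad (m > 0)
\end{equation}
so $\{ d_{m}, d_{n} \} = \delta_{m+n} \cdot I$ in all cases; in particular $<\Omega, d_{0} d_{0}\Omega> = {1\over 2}$, and this factor ${1\over 2}$ is the origin of the modified Heaviside function below.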
The $2$-point function is in this case:
\begin{equation}
<\Omega, d_{m} d_{n}\Omega> = \theta_{+}(m) \delta_{m+n},
\quad \forall m, n
\label{vac-d}
\end{equation}
where we have introduced the modified Heaviside function
\begin{eqnarray}
\theta_{+}(m) = \cases{ 1, & for m $>$ 0 \cr
{1\over 2}, & for m = 0 \cr
0, & for m $<$ 0 \cr}.
\end{eqnarray}
It follows that the main result is similar to the previous one:
\begin{thm}
Let us consider operators of the form
\begin{equation}
H(A) \equiv {1\over 2} A_{mn} :d_{m}d_{n}:
\end{equation}
where $A$ is an antisymmetric matrix
$A^{T} = - A$
and the double dots give the Wick ordering. Then:
\begin{equation}
[ H(A), H(B) ] = H([A,B]) + \omega_{d}(A,B) \cdot I
\end{equation}
where the commutator
$
[A,B] = A\cdot B - B\cdot A
$
is computed using the following matrix product
\begin{equation}
(A\cdot B)_{pq} \equiv \sum_{m = -N}^{N} A_{pm} B_{-m,q}
\end{equation}
and we have defined
\begin{equation}
\omega_{d}(A,B) \equiv {1\over 2} \sum_{m,n \geq 0} \theta_{+}(m) \theta_{+}(n)
A_{mn} B_{-n,-m}
- (A \leftrightarrow B).
\end{equation}
\label{H-d}
\end{thm}
The proof is similar to that of the preceding Section. The previous result can be extended
to the case when we have an infinite number of oscillators i.e.
$m \in \mathbb{Z}$
and the matrix $A$ is semi-finite: the operator
$H(A)$
is well defined on the corresponding algebraic Fock space
${\cal D}_{0}$,
leaves invariant this subspace and the previous theorem remains true.
\begin{rem}
It follows easily that the expressions
$
\omega_{\alpha},\omega_{b},\omega_{d}
$
are $2$-cocycles. They are quantum obstructions (or anomalies) because they do not appear
if we work in classical field theory replacing the commutators by Poisson brackets.
\end{rem}
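More precisely, from the Jacobi identity for the operators $H(A)$ and the Jacobi identity for the matrix commutator one obtains, for $\omega$ any of the expressions above, the $2$-cocycle identity
\begin{equation}
\omega([A,B],C) + \omega([B,C],A) + \omega([C,A],B) = 0;
\end{equation}
the antisymmetry $\omega(A,B) = - \omega(B,A)$ is obvious from the explicit formulas.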
\section{Virasoro Algebras in Fock Spaces\label{vir}}
We have constructed some Fock spaces for which we have a nice commutation relation
of the bilinear operators. In all these cases we will be able to construct
representations of the Virasoro algebra taking convenient expressions for the
matrix $A$. We give the details corresponding to the structure of the closed
strings.
\subsection{Bose Case\label{vir-bose}}
\begin{thm}
In the conditions of theorem \ref{H-alpha} the operators given by the formulas
\begin{equation}
L_{m} \equiv {1\over 2}~\sum_{n \not= 0,m}~:\alpha_{m-n} \alpha_{n}:
\label{vir-alpha}
\end{equation}
are well defined on the algebraic Fock subspace, leave invariant this subspace and verify the following relations:
\begin{eqnarray}
[ L_{m}, L_{n} ] = (m - n) L_{m+n} + {m (m^{2} - 1) \over 12}~\delta_{m+n}~\cdot I
\nonumber \\
L_{m}^{+} = L_{-m} \quad \forall m \in \mathbb{Z}
\nonumber \\
L_{0} \Omega = 0.
\end{eqnarray}
\end{thm}
{\bf Proof:} We consider the matrices
$A_{m}$
given by
$
(A_{m})_{pq} \equiv \delta_{p+q-m}
$
and we are in the conditions of theorem \ref{H-alpha}: the matrices
$A_{m}$
are symmetric and semi-finite. It remains to prove that
$
[ A_{m}, A_{n} ] = (m -n) A_{m+n}
$
and to compute the $2$-cocycle
$\omega_{\alpha}(A_{m}, A_{n}) = {m(m^{2} - 1) \over 12} \delta_{m+n}$
and we obtain the commutation relation from the statement.
$\blacksquare$
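The value of the cocycle can also be obtained by an elementary vacuum computation, which we sketch for the convenience of the reader. For $m > 0$ we have
$
L_{m}\Omega = 0, \quad L_{0}\Omega = 0, \quad
L_{-m}\Omega = {1\over 2}~\sum_{k=1}^{m-1} \alpha_{k-m}\alpha_{-k}\Omega
$
so, taking the vacuum average of the commutation relation:
\begin{equation}
\omega_{\alpha}(A_{m}, A_{-m}) = <\Omega, [ L_{m}, L_{-m} ]\Omega>
= \| L_{-m}\Omega \|^{2}
= {1\over 2}~\sum_{k=1}^{m-1} k~(m - k) = {m(m^{2}-1)\over 12}
\end{equation}
where we have used
$
<\alpha_{k-m}\alpha_{-k}\Omega, \alpha_{l-m}\alpha_{-l}\Omega> = k~(m-k)~(\delta_{kl} + \delta_{l,m-k}).
$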
One can express everything in terms of the original creation and annihilation operators
$a_{m}^{\#}$;
if we use the holomorphic representation for the harmonic oscillator operators
$
a_{m}^{+} = z_{m} \quad a_{m} = {\partial \over \partial z_{m}}
$
we obtain the formula (7.2.10) from \cite{Ne}.
It is important that we can extend the previous results to the case when
$\alpha_{0} \not= 0$
(see the end of Subsection \ref{bose}). To preserve (\ref{com-alpha}) we impose
\begin{equation}
[ \alpha_{0}, \alpha_{m} ] = 0 \quad \forall m \in \mathbb{Z}^{*}
\end{equation}
and we keep the relation (\ref{vir-alpha}) without the restrictions
$n \not= 0, m$; explicitly:
\begin{equation}
L_{m} = \dots + \alpha_{m} \alpha_{0} \quad \forall m \not= 0,
\qquad
L_{0} = \dots + {1\over 2}~\alpha_{0}^{2}
\end{equation}
where by $\dots$ we mean the expressions from the preceding theorem corresponding to
$\alpha_{0} \equiv 0$.
In general we have to consider a larger Hilbert space containing as a subspace the Fock space generated by the operators
$\alpha_{n}\quad n \not= 0$.
By direct computations we can prove that the statement of the theorem remains true; also
we have
\begin{equation}
[ L_{m}, \alpha_{n} ] = - n~\alpha_{m+n}.
\label{l-a}
\end{equation}
In the following we will use only the case when
$\alpha_{0} \not= 0$.
\subsection{First Fermi Case\label{vir-ns}}
We have a similar result for the Fermi operators of type $b$: we will consider
that these operators are
$b_{r}$
indexed by
$r \in {1\over 2} + \mathbb{Z}$
and they verify:
\begin{eqnarray}
\{ b_{r}, b_{s} \} = \delta_{r+s} \cdot I,
\quad \forall r,s \in {1\over 2} + \mathbb{Z}
\nonumber \\
b_{r} \Omega = 0, \quad r > 0
\nonumber \\
b_{r}^{+} = b_{-r}.
\end{eqnarray}
Then:
\begin{thm}
In the conditions of theorem \ref{H-b} the operators given by the formulas
\begin{equation}
L_{m} \equiv {1\over 2}~\sum_{r \in 1/2 + \mathbb{Z}}~\left(r + {m\over 2}\right)
:b_{-r} b_{m+r}: = {1\over 2}~\sum_{r \in 1/2 + \mathbb{Z}}~r~:b_{-r} b_{m+r}:
\label{vir-b}
\end{equation}
are well defined on the algebraic Fock subspace, leave invariant this subspace and verify the following relations:
\begin{eqnarray}
[ L_{m}, L_{n} ] = (m - n) L_{m+n} + {m (m^{2} - 1) \over 24}~\delta_{m+n}~\cdot I.
\nonumber \\
~[ L_{m}, b_{r} ] = - \left(r + {m\over 2}\right)~b_{m+r}
\nonumber \\
L_{m}^{+} = L_{-m}
\nonumber \\
L_{0} \Omega = 0.
\end{eqnarray}
\end{thm}
{\bf Proof:} We consider the matrices
$A_{m}$
given by
$
(A_{m})_{rs} \equiv {1\over 2} (s - r) \delta_{r+s-m}
$
and we are in the conditions of theorem \ref{H-b}: the matrices
$A_{m}$
are anti-symmetric and semi-finite. It remains to compute the $2$-cocycle
$\omega_{b}$
to obtain the commutation relation from the statement.
$\blacksquare$
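For the convenience of the reader we sketch the evaluation of the cocycle. The matrix $(A_{m})_{rs}$ is supported on $r + s = m$ so, for $m > 0$, the term $(A \leftrightarrow B)$ in the definition of $\omega_{b}$ does not contribute (we would need $(A_{-m})_{rs} \not= 0$ for some $r, s > 0$, which is impossible) and we get
\begin{equation}
\omega_{b}(A_{m}, A_{-m}) = {1\over 2}~\sum_{r, s > 0} (A_{m})_{rs}~(A_{-m})_{-s,-r}
= {1\over 8}~\sum_{0 < r < m} (m - 2r)^{2} = {m (m^{2} - 1) \over 24}
\end{equation}
where the sums run over $r, s \in {1\over 2} + \mathbb{Z}$. We also note that the second form of (\ref{vir-b}) follows from the antisymmetry of the normal-ordered product: the relabelling $r \rightarrow -(m + r)$ gives $\sum_{r} :b_{-r} b_{m+r}: = 0$.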
If we use the representation in terms of Grassmann variables
$
b_{r}^{+} = \xi_{r} \quad b_{r} = {\partial \over \partial \xi_{r}}
$
for these operators we obtain the formulas from \cite{Ne}, p. 225.
\subsection{Second Fermi Case\label{vir-r}}
Finally we have a similar result for the Fermi operators of type $d$.
\begin{thm}
In the conditions of theorem \ref{H-d} the operators given by the formulas
\begin{equation}
L_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~\left(n + {m\over 2}\right)
:d_{-n} d_{m+n}: = {1\over 2}~\sum_{n \in \mathbb{Z}}~n~:d_{-n} d_{m+n}:
\label{vir-d}
\end{equation}
are well defined on the algebraic Fock subspace, leave invariant this subspace and verify the following relations:
\begin{eqnarray}
[ L_{m}, L_{n} ] = (m - n) L_{m+n} + {m (m^{2} + 2) \over 24}~\delta_{m+n}~\cdot I. \nonumber \\
~[ L_{m}, d_{n} ] = - \left(n + {m\over 2}\right)~d_{m+n}
\nonumber \\
L_{m}^{+} = L_{-m}
\nonumber \\
L_{0} \Omega = 0.
\end{eqnarray}
\end{thm}
{\bf Proof:} We consider the matrices
$A_{m}$
given by
$
(A_{m})_{pq} \equiv {1\over 2} (q - p) \delta_{p+q-m}
$
and we are in the conditions of theorem \ref{H-d}: the matrices
$A_{m}$
are anti-symmetric and semi-finite. It remains to compute the $2$-cocycle
$\omega_{d}$
to obtain the commutation relation from the statement.
$\blacksquare$
We observe that in the commutation relation of the preceding theorem the expression
of the cocycle is different from the usual form
$
c~{m(m^{2} - 1)\over 12};
$
we can fix this inconvenience immediately if we define:
\begin{equation}
\tilde{L}_{m} \equiv L_{m} \quad \forall m \not= 0
\qquad
\tilde{L}_{0} \equiv L_{0} + {1\over 16} \cdot I;
\end{equation}
we obtain in this case:
\begin{equation}
[ \tilde{L}_{m}, \tilde{L}_{n} ] = (m - n) \tilde{L}_{m+n}
+ {m (m^{2} - 1) \over 24}~\delta_{m+n}~\cdot I.
\end{equation}
and
\begin{equation}
\tilde{L}_{0} \Omega = {1\over 16}~\Omega.
\end{equation}
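Indeed, only the commutators $[ \tilde{L}_{m}, \tilde{L}_{-m} ]$ are affected by this redefinition and for these the central term becomes
\begin{equation}
{m (m^{2} + 2) \over 24} - 2 \cdot {m \over 16} = {m^{3} + 2m - 3m \over 24}
= {m (m^{2} - 1) \over 24}.
\end{equation}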
\subsection{Multi-Dimensional Cases\label{vir-multi}}
In the preceding Subsections we have obtained three representations of the Virasoro algebra corresponding to
$
(c,h) = (1, 0), \left({1\over 2}, 0\right), \left({1\over 2}, {1\over 16} \right).
$
The previous results can be easily extended to a more general case. Let
$
\eta^{jk} = \eta_{jk}, \quad j,k = 1,\dots,D
$
be a diagonal matrix with the diagonal elements
$
\epsilon_{1},\dots,\epsilon_{D} = \pm 1.
$
In the Bose case we can consider that we have the family of operators:
$
\alpha^{j}_{m}, m \in \mathbb{Z}, j = 1,\dots,D
$
acting in the Hilbert space
$
{\cal F}^{(\alpha)}
$
such that:
\begin{eqnarray}
[ \alpha^{j}_{m}, \alpha^{k}_{n} ] = m~\eta_{jk}\delta_{m+n} \cdot I,
\quad \forall m, n
\nonumber \\
\alpha^{j}_{m} \Omega = 0, \quad m > 0
\nonumber \\
(\alpha^{j}_{m})^{+} = \alpha^{j}_{-m} \quad \forall m.
\end{eqnarray}
We can define
\begin{equation}
L^{(\alpha)}_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~\eta_{jk}:\alpha^{j}_{m-n} \alpha^{k}_{n}:
\label{vir-alpha-D}
\end{equation}
and we have:
\begin{equation}
[ L^{(\alpha)}_{m}, L^{(\alpha)}_{n} ] = (m - n) L^{(\alpha)}_{m+n}
+ D~{ m (m^{2} - 1) \over 12}~\delta_{m+n}~\cdot I.
\end{equation}
In the first Fermi case we have the operators:
$
b^{j}_{r}, r \in {1\over 2} + \mathbb{Z}, j = 1,\dots,D
$
acting in the Hilbert space
$
{\cal F}^{(b)}
$
such that:
\begin{eqnarray}
\{ b^{j}_{r}, b^{k}_{s} \} = \eta_{jk}~\delta_{r+s} \cdot I,
\quad \forall r,s
\nonumber \\
b^{j}_{r} \Omega = 0, \quad r > 0
\nonumber \\
(b^{j}_{r})^{+} = b^{j}_{-r} \quad \forall r.
\end{eqnarray}
We define
\begin{equation}
L^{(b)}_{m} \equiv {1\over 2}~\sum_{r \in 1/2 + \mathbb{Z}}~
\left(r + {m\over 2}\right)~\eta_{jk}
:b^{j}_{-r} b^{k}_{m+r}:
\label{vir-b-D}
\end{equation}
These operators are well defined and verify the following relation:
\begin{equation}
[ L^{(b)}_{m}, L^{(b)}_{n} ] = (m - n) L^{(b)}_{m+n}
+ D~{m (m^{2} - 1) \over 24}~\delta_{m+n}~\cdot I.
\end{equation}
Finally, in the second Fermi case we have the operators
$
d^{j}_{m}, m \in \mathbb{Z}, j = 1,\dots,D
$
acting in the Hilbert space
$
{\cal F}^{(d)}
$
such that
\begin{eqnarray}
\{ d^{j}_{m}, d^{k}_{n} \} = \eta_{jk}~\delta_{m+n} \cdot I
\nonumber \\
d^{j}_{m} \Omega = 0, \quad m > 0
\nonumber \\
(d^{j}_{m})^{+} = d^{j}_{-m}\quad \forall m.
\end{eqnarray}
We can define
\begin{equation}
L^{(d)}_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~\left(n + {m\over 2}\right)~\eta_{jk}
:d^{j}_{-n} d^{k}_{m+n}:
\label{vir-d-D}
\end{equation}
and we have the following relations:
\begin{equation}
[ L^{(d)}_{m}, L^{(d)}_{n} ] = (m - n) L^{(d)}_{m+n}
+ D~{m (m^{2} + 2) \over 24}~\delta_{m+n}~\cdot I.
\end{equation}
We redefine
\begin{equation}
\tilde{L}^{(d)}_{m} \equiv L^{(d)}_{m} \quad \forall m \not= 0
\qquad
\tilde{L}^{(d)}_{0} \equiv L^{(d)}_{0} + {D\over 16} \cdot I;
\label{shift}
\end{equation}
we obtain in this case:
\begin{equation}
[ \tilde{L}^{(d)}_{m}, \tilde{L}^{(d)}_{n} ] = (m - n) \tilde{L}^{(d)}_{m+n}
+ D~{m (m^{2} - 1) \over 24}~\delta_{m+n}~\cdot I.
\end{equation}
and
\begin{equation}
\tilde{L}^{(d)}_{0} \Omega = {D\over 16}~\Omega.
\end{equation}
In all these cases the $2$-cocycle gets multiplied by $D$. The Hilbert space has a
positive-definite scalar product only in the case
$
\epsilon_{1} = \cdots = \epsilon_{D} = 1.
$
We can combine the Bose and Fermi cases as follows. We consider the Hilbert spaces
$
{\cal F}^{(NS)} \equiv {\cal F}^{(\alpha)} \otimes {\cal F}^{(b)}
$
and
$
{\cal F}^{(R)} \equiv {\cal F}^{(\alpha)} \otimes {\cal F}^{(d)}
$
respectively; the Virasoro operators are
\begin{eqnarray}
L^{(NS)}_{m} \equiv L^{(\alpha)}_{m} \otimes I_{2} + I_{1} \otimes L^{(b)}_{m}
\nonumber \\
L^{(R)}_{m} \equiv L^{(\alpha)}_{m} \otimes I_{2} + I_{1} \otimes \tilde{L}^{(d)}_{m}
\end{eqnarray}
and we have in both cases:
\begin{equation}
[ L^{(NS,R)}_{m}, L^{(NS,R)}_{n} ] = (m - n) L^{(NS,R)}_{m+n}
+ D~{m (m^{2} - 1) \over 8}~\delta_{m+n}~\cdot I.
\label{lll}
\end{equation}
These two constructions are called {\it Neveu-Schwarz} and {\it Ramond}, respectively.
In these cases one can extend the Virasoro algebra to a {\it super-Virasoro} algebra
\cite{GSW}.
We conclude this Subsection with some simple propositions. First we have a natural
representation of the rotation group in the Fock space:
\begin{prop}
Suppose that the signature of $\eta$ is
$(r,s)$;
then we can define in the corresponding Hilbert spaces a representation of the Lie
algebra
$so(r,s)$
according to:
\begin{eqnarray}
J^{(\alpha)jk} \equiv - i~\sum_{m > 0} {1\over m}
\alpha_{-m}^{j} \alpha_{m}^{k} - (j \leftrightarrow k)
\nonumber \\
J^{(b)jk} \equiv - i~\sum_{r > 0} b_{-r}^{j} b_{r}^{k} - (j \leftrightarrow k)
\nonumber \\
J^{(d)jk} \equiv - i~\sum_{m > 0} d_{-m}^{j} d_{m}^{k} - (j \leftrightarrow k)
\label{j}
\end{eqnarray}
respectively.
\end{prop}
Indeed, we can obtain directly from the (anti)commutation relations in all the cases:
\begin{equation}
[ J^{kl}, J^{pq} ] = - i~( \eta^{lp}~J^{kq} + \eta^{kq}~J^{lp}
- \eta^{kp}~J^{lq} - \eta^{lq}~J^{kp}).
\end{equation}
We also note that the Virasoro operators are rotational invariant: in all cases
\begin{equation}
[ J^{kl} , L_{m} ] = 0.
\end{equation}
Next, we have a proposition which will be important for proving the Poincar\'e
invariance:
\begin{prop}
If
$\Psi \in {\cal D}_{0}$
is an arbitrary vector from the algebraic Fock space then we have in all cases
\begin{equation}
L_{m} \Psi = 0
\end{equation}
for sufficiently large
$m > 0$.
\label{aaP}
\end{prop}
{\bf Proof:} We consider only the one-dimensional Bose case; the other cases are similar. If
$\Psi$
is a vector in the algebraic Fock space it is clear that we have
\begin{equation}
\alpha_{m} \Psi = 0
\label{aP}
\end{equation}
for $m$ sufficiently large. This implies immediately that all the sums in
(\ref{vir-alpha}) are finite (it is better to re-express everything in terms of the original creation and annihilation operators). It is clear that
$
\sum_{n > 0} \dots a^{+}_{n} a_{m+n} \Psi = 0
$
for sufficiently large $m$ because of (\ref{aP}). Also
$
\sum_{n = 1}^{m-1} \dots a_{n} a_{m-n} \Psi = 0
$
for sufficiently large $m$; indeed, all the indices in the preceding sum are
larger than
$
{m\over 2}
$
so again we can apply (\ref{aP}).
$\blacksquare$
It is known that any $2$-cocycle of the Virasoro algebra is cohomologous to a standard form
$
c~{m(m^{2} - 1)\over 12};
$
however, we can always add a trivial cocycle.
\begin{prop}
Suppose that we have
\begin{equation}
[ L_{m}, L_{n} ] = (m - n) L_{m+n} + c~{m (m^{2} - 1) \over 12}~\delta_{m+n}~\cdot I
\end{equation}
and we redefine
\begin{equation}
L_{m}(a) \equiv L_{m} \quad \forall m \not= 0
\qquad
L_{0}(a) \equiv L_{0} - a \cdot I; \qquad a \in \mathbb{R}.
\end{equation}
Then we have:
\begin{equation}
[ L_{m}(a), L_{n}(a) ] = (m - n) L_{m+n}(a)
+ \left[ c~{m (m^{2} - 1) \over 12} + 2 a m \right]~\delta_{m+n}~\cdot I.
\label{ll-c}
\end{equation}
\end{prop}
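Indeed, only the commutators with $m + n = 0$ are affected by the redefinition: substituting $L_{0} = L_{0}(a) + a \cdot I$ we get
\begin{equation}
[ L_{m}(a), L_{-m}(a) ] = 2m~L_{0} + c~{m (m^{2} - 1) \over 12} \cdot I
= 2m~L_{0}(a) + \left[ c~{m (m^{2} - 1) \over 12} + 2am \right] \cdot I
\end{equation}
which is exactly (\ref{ll-c}).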
\section{Kac-Moody Algebras in Fock Spaces\label{km}}
We use here the techniques from Section \ref{quad}. We suppose that we have
$
t_{A}, A = 1,\dots,r
$
an $N$-dimensional representation of the Lie algebra
$\mathfrak{g}$:
\begin{equation}
[ t_{A}, t_{B} ] = f_{ABC} t_{C};
\end{equation}
here
$
f_{ABC}
$
are the {\it structure constants} of
$\mathfrak{g}$
and we need also the {\it Killing-Cartan form} of the representation $t$:
\begin{equation}
g_{AB} \equiv Tr(t_{A}t_{B}).
\end{equation}
We will suppose that the matrices
$
t_{A}
$
are antisymmetric:
\begin{equation}
t_{A}^{T} = - t_{A}.
\end{equation}
Finally, we will need the {\it contragredient} representation
\begin{equation}
\tilde{t}_{A} = - t^{+}_{A} = \bar{t}_{A} \quad \forall A.
\end{equation}
We will construct some representations of the Kac-Moody algebras acting in
${\cal F}^{(b)}$
and
${\cal F}^{(d)}$
respectively.
\subsection{Neveu-Schwarz Case}
In this case we have:
\begin{thm}
In the conditions of theorem \ref{H-b} the operators given by the formulas
\begin{equation}
K^{A}_{m} \equiv {1\over 2}~(t_{A})_{jk}~\sum_{r \in 1/2 + \mathbb{Z}}~:b^{j}_{-r} b^{k}_{m+r}:
\label{km-b}
\end{equation}
are well defined on the algebraic Fock subspace, leave invariant this subspace and verify the following relation:
\begin{equation}
[ K^{A}_{m}, K^{B}_{n} ] = f_{ABC} K^{C}_{m+n} + {1 \over 2}~m~g_{AB}\delta_{m+n}~\cdot I.
\label{kk-ns}
\end{equation}
We have in this case also the Hermiticity property:
\begin{equation}
(K^{A}_{m})^{+} = - \tilde{K}^{A}_{-m}
\label{herm-km-b}
\end{equation}
where
$
\tilde{K}^{A}_{m}
$
is associated to the representation
$
\tilde{t}_{A}.
$
The following commutation relation is true:
\begin{equation}
[ K^{A}_{m}, L_{n} ] = m~K^{A}_{m+n}.
\label{kl-b}
\end{equation}
If
$
\Psi \in {\cal D}_{0}
$
then we have
\begin{equation}
K^{A}_{m} \Psi = 0
\label{kp-ns}
\end{equation}
for
$m > 0$
large enough.
\end{thm}
{\bf Proof:} We consider the matrices
$A^{A}_{m}$
given by
$
A^{A}_{m} \equiv \tilde{A}_{m} \otimes t_{A}
$
where
$
(\tilde{A}_{m})_{rs} \equiv \delta_{r+s-m}
$
and we are in the conditions of theorem \ref{H-b}: the matrices
$A^{A}_{m}$
are anti-symmetric (because
$\tilde{A}_{m}$
are symmetric and
$t_{A}$
are anti-symmetric) and semi-finite. We can easily prove that
$
[ A^{A}_{m}, A^{B}_{n} ] = f_{ABC} A^{C}_{m+n};
$
it remains to compute the $2$-cocycle
$\omega_{b}$
to obtain the commutation relation (\ref{kk-ns}). The relation (\ref{kl-b})
follows from the same theorem \ref{H-b} if we take
$
A \rightarrow A_{m}^{A}, \quad B \rightarrow A_{n} \otimes I_{2};
$
in this case the cocycle
$\omega_{b}$
is null because
$
Tr(t_{A}) = 0.
$
The relation (\ref{kp-ns}) is proved as in Proposition \ref{aaP}.
$\blacksquare$
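We also make explicit, as a short sketch, the matrix computations used in the proof. Because the reflection of the index in the matrix product acts only on the oscillator indices we have
\begin{equation}
A^{A}_{m} \cdot A^{B}_{n} = (\tilde{A}_{m} \cdot \tilde{A}_{n}) \otimes t_{A} t_{B}
= \tilde{A}_{m+n} \otimes t_{A} t_{B}
\Longrightarrow
[ A^{A}_{m}, A^{B}_{n} ] = \tilde{A}_{m+n} \otimes [ t_{A}, t_{B} ] = f_{ABC}~A^{C}_{m+n}.
\end{equation}
For the central term, the trace over the internal indices produces the factor $g_{AB} = Tr(t_{A} t_{B})$ and the sum over the $m$ pairs $r, s > 0$ with $r + s = m$ produces the factor ${1\over 2}~m$, in agreement with (\ref{kk-ns}).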
We have an important particular case:
\begin{prop}
Let us consider the case
$\mathfrak{g} = so(D)$
with the fundamental representation
\begin{equation}
(E_{jk})_{pq} = \delta_{jp} \delta_{kq} - \delta_{jq} \delta_{kp}.
\end{equation}
Then the associated operators, denoted by
$
K^{jk}_{m}
$
verify the following relations:
\begin{equation}
[ K^{jk}_{m}, K^{pq}_{n} ] = \delta_{kp} K^{jq}_{m+n} - \delta_{kq} K^{jp}_{m+n}
+ \delta_{jq} K^{kp}_{m+n} - \delta_{jp} K^{kq}_{m+n}
+ m~( \delta_{kp} \delta_{jq} - \delta_{kq} \delta_{jp})~\delta_{m+n} \cdot I
\label{kkk}
\end{equation}
\begin{equation}
(K^{jk}_{m})^{+} = - K^{jk}_{-m}
\label{herm-km-b-D}
\end{equation}
\end{prop}
{\bf Proof:}
We apply theorem \ref{H-b}; in our case we have by direct computation
\begin{equation}
[ E_{jk}, E_{pq} ] = \delta_{kp} E_{jq} - \delta_{kq} E_{jp} +
\delta_{jq} E_{kp} - \delta_{jp} E_{kq}
\end{equation}
and the Killing-Cartan form
\begin{equation}
g_{jk,pq} = 2 ( \delta_{kp} \delta_{jq} - \delta_{kq} \delta_{jp}).
\end{equation}
Then we apply the preceding theorem.
$\blacksquare$
From the commutation relation of the preceding Proposition we observe that
(see (\ref{j})):
$
J^{(b)jk} = - i~K^{jk}_{0}
$
so we have another proof that these operators give a representation of the algebra
$
so(D).
$
Finally we give the following technical result \cite{GS}:
\begin{prop}
In the preceding conditions, the following formula is true
\begin{eqnarray}
\sum_{m > 0} m~[ K^{jl}_{-m} K^{kl}_{m} - (j \leftrightarrow k) ]
+ 2 \sum_{m > 0} ( K^{jk}_{-m} L_{m} + L_{-m}K^{jk}_{m})
+ 2 K^{jk}_{0} L_{0}
\nonumber \\
= {8 - D \over 8} \sum_{r > 0} (4 r^{2} - 1) [ :b^{j}_{-r} b^{k}_{r}:
- (j \leftrightarrow k) ] + ( 1 - 2a )~K^{jk}_{0};
\label{kkl-ns}
\end{eqnarray}
here
$
L_{m} = L_{m}(a).
$
\end{prop}
{\bf Proof:} First we note that all the sums are in fact finite when applied on the
algebraic Fock space. Then one substitutes the expressions for $K$'s and $L$'s
and applies Wick theorem. The computation is tedious but straightforward.
$\blacksquare$
Let us note that we have a quantum anomaly in the right-hand side which vanishes
for
$
D = 8, \quad a = {1\over 2}.
$
\subsection{Ramond Case}
\begin{thm}
In the conditions of theorem \ref{H-d} the operators given by the formulas
\begin{equation}
K^{A}_{m} \equiv {1\over 2}~(t_{A})_{jk}~\sum_{n \in \mathbb{Z}}~:d^{j}_{-n} d^{k}_{m+n}:
\label{km-d}
\end{equation}
are well defined on the algebraic Fock subspace, leave invariant this subspace and verify the same relations as in the preceding theorem.
\end{thm}
{\bf Proof:} We consider the matrices
$A^{A}_{m}$
given by
$
A^{A}_{m} \equiv \tilde{A}_{m} \otimes t_{A}
$
where
$
(\tilde{A}_{m})_{pq} \equiv \delta_{p+q-m}
$
and we are in the conditions of theorem \ref{H-d}.
$\blacksquare$
The technical result is in this case:
\begin{prop}
In the preceding conditions, the following formula is true
\begin{eqnarray}
\sum_{m > 0} m~[ K^{jl}_{-m} K^{kl}_{m} - (j \leftrightarrow k) ]
+ 2 \sum_{m > 0} ( K^{jk}_{-m} L_{m} + L_{-m}K^{jk}_{m})
+ 2 K^{jk}_{0} L_{0}
\nonumber \\
= {8 - D \over 2} \sum_{n > 0} n^{2} [ :d^{j}_{-n} d^{k}_{n}:
- (j \leftrightarrow k) ] + \left( {D \over 8} - 2a\right)~K^{jk}_{0}.
\label{kkl-r}
\end{eqnarray}
\end{prop}
Again we have in the right-hand side a quantum anomaly which vanishes for
$
D = 8, \quad a = {1\over 2}.
$
\begin{rem}
We note that in the relations (\ref{kkl-ns})
and (\ref{kkl-r}) the appearance of the constant $a$ is in fact spurious: if we
express the Virasoro operators
$
L_{m}
$
in terms of the operators
$\alpha, b$
or $d$ then the constant $a$ drops out.
\end{rem}
\section{Light-Cone Coordinates}
In this Section we recall the basic ingredients of the light-cone description of
the representations of the Poincar\'e group. We consider this group in $D$ dimensions
and we denote the indices by
$
\mu, \nu = 0,1,\dots,D-1
$.
Let
$
\eta^{\mu\nu}
$
be the Minkowski diagonal form with
$
diag(\eta) = (1,-1,\dots,-1).
$
Then the algebra of the Poincar\'e group is generated by the basis elements
$
P^{\mu},\quad J^{\mu\nu}
$
where
$
J^{\mu\nu}
$
is anti-symmetric. The Lie brackets are:
\begin{eqnarray}
~[ P^{\mu}, P^{\nu} ] = 0
\nonumber \\
~[ P^{\mu}, J^{\nu\rho} ] = i~\eta^{\mu\nu}~P^{\rho} - i~\eta^{\mu\rho}~P^{\nu}
\nonumber \\
~[ J^{\mu\nu}, J^{\rho\lambda} ]
= i~\eta^{\nu\rho}~J^{\mu\lambda} - i~\eta^{\mu\rho}~J^{\nu\lambda}
- i~\eta^{\nu\lambda}~J^{\mu\rho} + i~\eta^{\mu\lambda}~J^{\nu\rho}.
\label{p1}
\end{eqnarray}
Let us change the basis
$
(P^{\mu}, J^{\mu\nu}) \rightarrow (P^{\pm}, P^{j}, J^{+-}, J^{j\pm}, J^{jk})
$
where
$j = 1,\dots,D-2$
and
\begin{eqnarray}
P^{\pm} \equiv {1\over \sqrt{2}} (P^{0} \pm P^{D-1})
\nonumber \\
J^{j\pm} \equiv {1\over \sqrt{2}} (J^{j0} \pm J^{j,D-1})
\nonumber \\
J^{+-} \equiv J^{0,D-1}.
\end{eqnarray}
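As an illustration of how the brackets in the new basis follow from (\ref{p1}), we check one of them (the others are obtained in the same way); this is only a sketch. From (\ref{p1}) we have $[ P^{0}, J^{0,D-1} ] = i~P^{D-1}$ and $[ P^{D-1}, J^{0,D-1} ] = i~P^{0}$, so
\begin{equation}
[ P^{\pm}, J^{+-} ] = {1\over \sqrt{2}}~[ P^{0} \pm P^{D-1}, J^{0,D-1} ]
= {i\over \sqrt{2}}~( P^{D-1} \pm P^{0} ) = \pm~i~P^{\pm}
\end{equation}
i.e. $[ P^{\epsilon}, J^{+-} ] = i~\epsilon~P^{\epsilon}$.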
Then in the new basis the Lie brackets are:
\begin{eqnarray}
~[ P^{\epsilon_{1}}, P^{\epsilon_{2}} ] = 0 \quad
[ P^{\epsilon}, P^{j} ] = 0 \quad
[ P^{j}, P^{k} ] = 0
\nonumber \\
~[ P^{\epsilon}, J^{jk} ] = 0 \quad
[ P^{j}, J^{+-} ] = 0
\nonumber \\
~[ P^{\epsilon}, J^{+-} ] = i~\epsilon~P^{\epsilon}
\nonumber \\
~[ P^{\epsilon_{1}}, J^{j\epsilon_{2}} ] = - {i \over 2} (1 - \epsilon_{1}\epsilon_{2})~P^{j}
\nonumber \\
~[ P^{j}, J^{kl} ] = - i~\delta_{jk}~P^{l} + i~\delta_{jl}~P^{k}
\nonumber \\
~[ P^{j}, J^{k\epsilon} ] = - i~\delta_{jk}~P^{\epsilon}
\nonumber \\
~[ J^{+-}, J^{jk} ] = 0
\nonumber \\
~[ J^{+-}, J^{j\epsilon} ] = - i~\epsilon~J^{j\epsilon}
\nonumber \\
~[ J^{j\epsilon_{1}}, J^{k\epsilon_{2}} ]
= {i\over 2}~(-1 +\epsilon_{1}\epsilon_{2})~J^{jk}
+ {i\over 2}~(\epsilon_{2} - \epsilon_{1})~\delta_{jk}~J^{+-}
\nonumber \\
~[ J^{j\epsilon}, J^{kl} ]
= - i~\delta_{jk}~J^{l\epsilon} + i~\delta_{jl}~J^{k\epsilon}
\nonumber \\
~[ J^{kl}, J^{pq} ]
= - i~\delta_{lp}~J^{kq} + i~\delta_{kp}~J^{lq}
+ i~\delta_{lq}~J^{kp} - i~\delta_{kq}~J^{lp};
\label{p2}
\end{eqnarray}
here
$j,k,l = 1,\dots,D-2$
and
$\epsilon = \pm$.
We will need the representation of mass
$m > 0$
and spin $0$ in light-cone coordinates. The Hilbert space is
\begin{equation}
{\cal H}^{[m,0]} \equiv {\cal H}^{[m]}
\equiv L^{2}\left( \mathbb{R}^{D-1}, {d{\bf p}\over 2p^{0}}\right)
\end{equation}
where
$
d{\bf p} \equiv dp^{1}\dots dp^{D-1},
$
$
p^{0} \equiv \sqrt{{\bf p}^{2} + m^{2}}
$
and
$
{\bf p}^{2} = (p^{1})^{2} + \cdots + (p^{D-1})^{2}.
$
One can easily derive the expressions of the Lie algebra generators; they are:
\begin{eqnarray}
P^{\mu} = p^{\mu}
\nonumber \\
L^{jk} = - i~\left( p^{j} {\partial \over \partial p^{k}}
- p^{k} {\partial \over \partial p^{j}} \right)
\nonumber \\
K^{j} \equiv L^{j0} = i~p^{0}~{\partial \over \partial p^{j}}
\end{eqnarray}
where
$j,k = 1,\dots,D-1$.
These operators are defined on
$
{\cal C}^{\infty}(\mathbb{R}^{D-1})
$
where they are essentially self-adjoint; one can easily verify that on this domain the
relations (\ref{p1}) are valid.
Now we perform a change of variables \cite{Di1}:
\begin{equation}
(p^{1},\dots,p^{D-1}) \rightarrow (p^{+},\tilde{p})
\end{equation}
where
\begin{equation}
p^{+} \equiv {1\over \sqrt{2}} (p^{0} + p^{D-1}) \qquad
\tilde{p} = (p^{1},\dots,p^{D-2});
\end{equation}
here, as before, we have denoted
$
p^{0} \equiv \sqrt{{\bf p}^{2} + m^{2}}.
$
We will also denote
$
\tilde{\bf p}^{2} = (p^{1})^{2} + \cdots + (p^{D-2})^{2}.
$
The inverse of this change of variables is:
\begin{equation}
p^{D-1} = {1\over \sqrt{2}} (p^{+} - p^{-})
\end{equation}
where we have introduced the notation
\begin{equation}
p^{-} = p^{-}(p^{+},\tilde{\bf p}) \equiv { \tilde{\bf p}^{2} + m^{2} \over 2 p^{+}}.
\end{equation}
Now we re-express the measure
$
{d{\bf p}\over 2p^{0}}
$
in the new variables and easily get:
\begin{equation}
{d{\bf p}\over 2p^{0}} = {d\tilde{p} dp^{+}\over 2p^{+}}.
\label{measure}
\end{equation}
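Indeed, at fixed $\tilde{p}$ we have $p^{D-1} = {1\over \sqrt{2}}~(p^{+} - p^{-})$ and ${\partial p^{-} \over \partial p^{+}} = - {p^{-} \over p^{+}}$, so
\begin{equation}
dp^{D-1} = {1\over \sqrt{2}}~\left( 1 + {p^{-} \over p^{+}} \right)~dp^{+}
= {p^{+} + p^{-} \over \sqrt{2}~p^{+}}~dp^{+}
\end{equation}
and (\ref{measure}) follows because on the mass shell $2 p^{0} = \sqrt{2}~(p^{+} + p^{-})$.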
Let us note that for
$
m > 0
$
the variable
$
p^{+}
$
takes values in the whole positive real axis, so in the new variables the Hilbert space is
\begin{equation}
{\cal H}^{[m]}
\equiv L^{2}\left( \mathbb{R}_{+} \times \mathbb{R}^{D-2}, {dp^{+} d\tilde{p} \over 2p^{+}}\right)
\label{h-m}
\end{equation}
where as usual
$
\mathbb{R}_{+} \equiv (0, \infty ).
$
The unitary transformation
$
V: L^{2}\left( \mathbb{R}^{D-1}, {d{\bf p}\over 2p^{0}}\right) \rightarrow
L^{2}\left( \mathbb{R}_{+} \times \mathbb{R}^{D-2}, {dp^{+} d\tilde{p} \over 2p^{+}}\right)
$
connecting the two representations is:
\begin{equation}
(Vf)(p^{+},\tilde{p}) = f\left(\tilde{p}, {p^{+} - p^{-} \over \sqrt{2}}\right);
\label{V}
\end{equation}
the inverse of this transformation can be easily provided.
It is a straightforward exercise to derive the expressions of the Lie algebra generators
in the light-cone coordinates:
\begin{eqnarray}
P^{j} = p^{j} \qquad
P^{+} = p^{+} \qquad
P^{-} = p^{-} \equiv { \tilde{\bf p}^{2} + m^{2} \over 2 p^{+}}
\nonumber \\
L^{jk} = - i~\left( p^{j} {\partial \over \partial p^{k}}
- p^{k} {\partial \over \partial p^{j}} \right) \qquad
L^{j+} = i~ p^{+} {\partial \over \partial p^{j}}
\nonumber \\
L^{j-} = i~\left( p^{-} {\partial \over \partial p^{j}}
+ p^{j} {\partial \over \partial p^{+}} \right) \qquad
L^{+-} = - i~p^{+}~{\partial \over \partial p^{+}}
\label{lc}
\end{eqnarray}
where
$j,k = 1,\dots,D-2$.
These operators are defined on
$
{\cal D}^{[m]} \equiv {\cal C}^{\infty}(\mathbb{R}_{+} \times \mathbb{R}^{D-2})
$
where they are essentially self-adjoint; one can easily verify that on this domain the
relations (\ref{p2}) are valid.
The extensions of the preceding formulas to the case of arbitrary $m$ (i.e.
$m = 0$ and
$m$ purely imaginary)
can be done using Lemma 3 of \cite{Di2}. We define for any
$
j = 1,\dots,D-1
$
expressions of the type
$p^{+}$
namely
\begin{equation}
p^{+}_{j} \equiv {1\over \sqrt{2}} (p^{0} + p^{j})
\end{equation}
and the chart
\begin{equation}
V_{j} = \{ (p^{+}_{j}, p^{1},\dots,p^{j-1},p^{j+1},\dots,p^{D-1})
\in \mathbb{R}_{+} \times \mathbb{R}^{D-2} \}.
\end{equation}
In every such chart there are no singularities and one can consider the corresponding measure defined as in
(\ref{measure}). The union of all these charts is a cover of the whole mass shell
and if we consider a partition of unity subordinate to this cover then we
can obtain the corresponding measure in light-cone coordinates.
\section{Quantum Strings and Superstrings}
In this Section we use the preceding construction to give a proper mathematical
definition for the string and superstring systems.
\subsection{The Quantum Bosonic String\label{b-s}}
We introduce the following construction. First we consider the system of Bose oscillators
$
\alpha_{m}^{j}, m \in \mathbb{Z}^{*}, j = 1,\dots,D-2
$
(i.e. we exclude for the moment the value
$m = 0$). The Hilbert space generated by these operators is
$
{\cal F}^{(\alpha)};
$
we are in the conditions of Subsections \ref{vir-bose} and \ref{vir-multi}. We now consider
the following Hilbert space:
\begin{equation}
{\cal H}^{(\mu,\alpha)} \equiv {\cal H}^{[\mu]} \otimes {\cal F}^{(\alpha)}
\label{h-b}
\end{equation}
with
$
{\cal H}^{[\mu]}
$
defined by (\ref{h-m}); we are denoting the mass by
$
\mu > 0
$
to avoid confusion with the indices
$m \in \mathbb{Z}$.
If we identify the Fock space
$
{\cal F}^{(\alpha)}
$
with its dual (using the scalar product
$
<\cdot,\cdot>_{\alpha}
$)
then we can describe the preceding Hilbert space
as the space of Borel maps from
$\mathbb{R}^{D-1}$
into
$
{\cal F}^{(\alpha)}
$
square integrable with respect to the scalar product
\begin{equation}
< f, g> = \int {d{\bf p} \over 2 p^{0}} <f(p), g(p)>_{\alpha}.
\end{equation}
We define
\begin{equation}
\alpha_{0}^{j} \equiv p^{j} \quad j = 1,\dots,D-2
\end{equation}
and we can define the Virasoro generators by
\begin{equation}
L^{(\alpha)}_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~:\alpha^{j}_{m-n} \alpha^{j}_{n}:
- a \delta_{m} \cdot I
\label{l1}
\end{equation}
(i.e. we are in the conditions from Subsection \ref{vir-multi} with
$
\epsilon_{1} = \dots = \epsilon_{D-2} = 1
$).
More rigorously we have
\begin{eqnarray}
L^{(\alpha)}_{m} = I_{1} \otimes
{1\over 2}~\left(\sum_{n \not= 0,m}~:\alpha^{j}_{m-n} \alpha^{j}_{n}: \right)
+ p^{j} \otimes \alpha_{m}^{j} \quad \forall m \not= 0
\nonumber \\
L^{(\alpha)}_{0} = I_{1} \otimes \sum_{n > 0} :\alpha^{j}_{-n} \alpha^{j}_{n}:
+ {1\over 2} \tilde{\bf p}^{2} \otimes I_{2} - a \cdot I
\label{l0}
\end{eqnarray}
but we can use the more compact definition (\ref{l1}) without risk of confusion.
By a slight abuse of notation we will call
$
{\cal D}_{0} \simeq {\cal H}^{[\mu]} \otimes {\cal D}_{0}
$
the algebraic Fock space. Then we have
\begin{lemma}
The operators
$
E^{j}: {\cal H}^{(\mu,\alpha)} \rightarrow {\cal H}^{(\mu,\alpha)}, \quad j = 1,\dots,D-2
$
are well defined on the algebraic Fock space by the formulas
\begin{equation}
E^{j} \equiv - i \sum_{m > 0} {1\over m}
( \alpha_{-m}^{j}~L_{m} - L_{-m} \alpha_{m}^{j})
\label{e}
\end{equation}
and are formally self-adjoint.
\end{lemma}
{\bf Proof:} We have noticed previously that the vectors
$\alpha_{m}^{j}\Psi$
and
$L_{m}\Psi$
(here
$\Psi \in {\cal D}_{0}$)
are null for sufficiently large
$m > 0$
(see Proposition \ref{aaP}.) It follows that the sums in (\ref{e}) are in fact
finite if we consider vectors of the form
$E^{j}\Psi$.
$\blacksquare$
It is convenient to introduce the {\it Hamiltonian} operator according to
\begin{equation}
H^{(\alpha)} \equiv \sum_{n > 0} :\alpha^{j}_{-n} \alpha^{j}_{n}: =
\sum_{n > 0} n~(a^{j}_{n})^{+} a^{j}_{n}.
\end{equation}
Now we have the main result:
\begin{thm}
Let us define the following operators on
$
{\cal D}_{0}:
$
\begin{eqnarray}
P^{\pm} = p^{\pm} \otimes I_{2} \qquad P^{j} = p^{j} \otimes I_{2}
\nonumber \\
J^{kl} = L^{kl} \otimes I_{2} + I_{1} \otimes J^{(\alpha)kl} \qquad
J^{k+} = L^{k+} \otimes I_{2}
\nonumber \\
J^{k-} = L^{k-} \otimes I_{2} + {1\over p^{+}} E^{k} \qquad
J^{+-} = L^{+-} \otimes I_{2};
\label{p-b}
\end{eqnarray}
here
$k,l = 1,\dots,D-2$
as usual and the operators
$
J^{(\alpha)kl}
$
are those defined by the first formula of (\ref{j}). Then these operators are a
representation of the Poincar\'e Lie algebra iff
$D = 26$
and we consider only the states from the {\bf physical Hilbert space}
\begin{equation}
{\cal H}^{(\mu,\alpha)}_{phys} \equiv \left\{ \Psi \in {\cal H}^{(\mu,\alpha)} |
H^{(\alpha)} \Psi = \left( 1 + {\mu^{2} \over 2} \right) \Psi \right\}.
\label{phys-a1}
\end{equation}
\label{bs-inv}
\end{thm}
{\bf Proof:} We have to check the formulas (\ref{p2}). Of course, we will use the
fact the the operators
$
L^{\dots}
$
verify these relations as stated in the previous Section so the non-trivial
ones must have at least a
$
J^{k-}
$
entry. We are left with the following non-trivial relations to check:
\begin{eqnarray}
~[ J^{+-}, J^{k-} ] = i~J^{k-} \qquad
~[ J^{j+}, J^{k-} ] = - i~J^{jk} - i~\delta_{jk}~J^{+-}
\nonumber \\
~[ J^{j-}, J^{kl} ] = - i~\delta_{jk}~J^{l-} + i~\delta_{jl}~J^{k-} \qquad
~[ J^{j-}, J^{k-} ] = 0.
\end{eqnarray}
The first three preceding relations can be checked by elementary computations and do not
produce anomalies. Only the last relation is highly non-trivial. To
compute the commutator
$
[ J^{j-}, J^{k-} ] = 0
$
we use the elementary formula
\begin{equation}
[ AB, CD] = A [ B, C ] D + AC [ B, D ] + [ A, C ] DB + C [ A, D ] B.
\label{com}
\end{equation}
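Both sides of (\ref{com}) equal $ABCD - CDAB$: expanding the four commutators on the right-hand side, the intermediate terms cancel in pairs,
\begin{equation}
A [ B, C ] D + AC [ B, D ] + [ A, C ] DB + C [ A, D ] B = ABCD - CDAB = [ AB, CD ].
\end{equation}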
We need to compute first
\begin{equation}
[ E^{j}, E^{k} ] = - (C^{jk}_{1} + C^{jk}_{2} - H.c.)
\label{ee}
\end{equation}
where
\begin{equation}
C^{jk}_{1} = \sum_{m,n > 0} {1\over mn} [ \alpha_{-m}^{j} L_{m}, \alpha_{-n}^{k} L_{n} ]
\quad
C^{jk}_{2} = \sum_{m,n > 0} {1\over mn} [ \alpha_{-m}^{j} L_{m}, L_{-n} \alpha_{n}^{k} ]
\end{equation}
and we understand that all operators are acting on vectors from the
algebraic Fock space.
These commutators can be computed using the formulas (\ref{com}) and
(\ref{vir-alpha-D}) with
$
D \rightarrow D - 2.
$
One checks that at
every stage of the computations the sums are in fact finite because we apply the
commutators only on vectors from the algebraic Fock space.
We give only the final results:
\begin{eqnarray}
C^{jk}_{1} = \left( \sum_{ m > 0} {m - 1\over 2} \alpha^{j}_{-m} \alpha^{k}_{m}
+ \sum_{ m > n \geq 0} {1\over m} \alpha^{j}_{-m} L_{m-n} \alpha^{k}_{n} \right)
- ( j \leftrightarrow k)
\nonumber \\
C^{jk}_{2} = \sum_{ m \geq n > 0} {1\over m} \alpha^{j}_{-m} L_{m-n} \alpha^{k}_{n}
+ \sum_{ n \geq m > 0} {1\over n} \alpha^{j}_{-m} L_{m-n} \alpha^{k}_{n}
\nonumber \\
+ \sum_{ m > 0} \left[ m {D - 26\over 2} + {1\over m} \left( 2a - {D - 2 \over 12}
\right) + 1 \right] \alpha^{k}_{-m} \alpha^{j}_{m} + \dots
\end{eqnarray}
where by $\dots$ we mean a term proportional to
$\delta_{jk}$
which disappears from (\ref{ee}). We finally obtain
\begin{eqnarray}
[ E^{j}, E^{k} ] =
\sum_{ m > 0} \left[ m {D - 26\over 12} + {1\over m} \left( 2a - {D - 2 \over 12}
\right) \right] [ \alpha^{j}_{-m} \alpha^{k}_{m} - ( j \leftrightarrow k) ]
\nonumber \\
+ i~(p^{j} E^{k} - p^{k} E^{j})
+ 2 \sum_{ m > 0} {1\over m} [\alpha^{j}_{-m} \alpha^{k}_{m}
- ( j \leftrightarrow k) ] L_{0}.
\label{ee1}
\end{eqnarray}
Let us note that if we use the definition (\ref{l0}) we have:
$
L_{0} = H^{(\alpha)} + {1\over 2} \tilde{\bf p}^{2} - a
$
and we observe that the constant $a$ drops out! Now we insert the
commutator (\ref{ee1}) in the formula
\begin{eqnarray}
[ J^{j-}, J^{k-} ] = [ L^{j-}, L^{k-} ] \otimes I_{2} +
{1\over (p^{+})^{2}} [ E^{j}, E^{k} ]
\nonumber \\
+ \left( L^{j-} {1\over p^{+}} \right) E^{k} - \left( L^{k-} {1\over p^{+}} \right) E^{j}
+ {1\over p^{+}} (L^{j-} E^{k}) - {1\over p^{+}} (L^{k-} E^{j})
\end{eqnarray}
and get the final result
\begin{equation}
[ J^{j-}, J^{k-} ] = {1\over (p^{+})^{2}}
\sum_{ m > 0} [ \alpha^{k}_{-m} \alpha^{j}_{m} - ( j \leftrightarrow k) ]
\left[ m {D - 26\over 12} + {1\over m} \left( 2 H^{(\alpha)} - {D - 2 \over 12}
- {\mu}^{2} \right) \right];
\label{comm-a1}
\end{equation}
equating to zero we obtain the value of $D$ and also the expression
for the physical Hilbert space. It remains to show that the physical Hilbert
space is Poincar\'e invariant i.e. it is left invariant by the operators
(\ref{p-b}) from the statement. This follows from
\begin{equation}
[ H^{(\alpha)}, K^{j} ] = [ L_{0}, K^{j} ] = 0
\end{equation}
and the proof is finished.
$\blacksquare$
Let us remark that the vacuum
$\Omega$
does {\it not} belong to the physical Hilbert space. The preceding system seems to
be closest to what the physical intuition tells us a vibrating string of mass
$\mu$
should be:
the first factor in (\ref{h-b}) describes the translation of the string in
space-time and the second factor the vibrations of the string in the rest frame.
Because the operator
$
H^{(\alpha)}
$
has the spectrum
$
\sigma(H^{(\alpha)}) = \mathbb{N}
$
we obtain that in this case the mass $\mu$ is quantized:
$
\mu^{2} \in 2 \cdot \mathbb{N}.
$
However, a different construction is preferred in the literature and is called the
{\it Bosonic string}. Instead of (\ref{h-b}) we take
\begin{equation}
{\cal H}^{(b)} \equiv
\left( \oplus_{l \in L}{\cal H}^{[\mu_{l}]} \right)\otimes {\cal F}^{(\alpha)};
\label{h-b1}
\end{equation}
where the sum is over an unspecified set $L$ (not necessarily finite) of masses. The extensions
of the formulas (\ref{p-b}) to this case are obvious. (In fact,
$
{\cal H}^{(b)}
$
is a direct sum of Hilbert spaces of the type
$
{\cal H}^{(\mu,\alpha)}
$.)
The same computation as above
leads to
\begin{equation}
[ J^{j-}, J^{k-} ] = {1\over (p^{+})^{2}}
\sum_{ m > 0} [ \alpha^{k}_{-m} \alpha^{j}_{m} - ( j \leftrightarrow k) ]
\left[ m {D - 26\over 12} + {1\over m} \left( 2 H^{(\alpha)} - {D - 2 \over 12}
- p^{2} \right) \right]
\label{comm-a2}
\end{equation}
where now
\begin{equation}
p^{2} = \oplus_{l \in L} \mu^{2}_{l} I_{l}.
\end{equation}
We obtain as before
$D = 26$
but the physical Hilbert space is
\begin{equation}
{\cal H}^{(b)}_{phys} \equiv \left\{ \Psi \in {\cal H}^{(b)} |
2 (H^{(\alpha)} - 1)\Psi = p^{2} \Psi \right\}.
\label{phys-a2}
\end{equation}
In this way we get tachyons in the spectrum of the model (for instance the vacuum
state corresponds to
$p^{2} = -2$).
In this case the expression of
$
J^{kl}
$
from (\ref{p-b}) makes sense only on functions defined in the chart
$
V_{D-1}
$
and of compact support (such that the singularity at
$p^{+} = 0$
is integrable). Similar constructions must be considered in all charts.
We note in the end that the necessity of considering only states lying in the physical
Hilbert space (\ref{phys-a1}) or (\ref{phys-a2}) appears in the standard literature in a different form, e.g. equation (2.3.12) from \cite{GSW}. We also note that the condition
$
a = 1
$
appearing frequently in the literature is not needed. Apparently this fact is known in the literature but we cannot provide an explicit reference on this point.
\subsection{The Neveu-Schwarz Superstring\label{ns-s}}
We generalize the previous arguments for the superstring. In the NS case we consider
the Hilbert space generated by the system of Bose oscillators
$
\alpha_{m}^{j}, m \in \mathbb{Z}^{*}, j = 1,\dots,D-2
$
(i.e. we exclude for the moment the value
$m = 0$) and the Fermi oscillators
$
b_{r}^{j}, r \in {1\over 2} + \mathbb{Z}, j = 1,\dots,D-2.
$
The Hilbert space generated by these operators is
$
{\cal F}^{(NS)};
$
we are in the conditions of Subsections \ref{vir-ns} and \ref{vir-multi}. We now consider
the following Hilbert space:
\begin{equation}
{\cal H}^{(NS)} \equiv {\cal H}^{[\mu]} \otimes {\cal F}^{(NS)}
\label{h-ns}
\end{equation}
with
$
{\cal H}^{[\mu]}
$
defined by (\ref{h-m}). We define as in the previous Subsection
\begin{equation}
\alpha_{0}^{j} \equiv p^{j} \quad j = 1,\dots,D-2
\end{equation}
and we are in the conditions of Subsection \ref{vir-multi} so we can define the
Virasoro generators
\begin{equation}
L^{(\alpha)}_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~:\alpha^{j}_{m-n} \alpha^{j}_{n}:
+ {1\over 2}~\sum_{r \in 1/2 + \mathbb{Z}}~r~:b^{j}_{-r} b^{j}_{m+r}: - a \delta_{m} \cdot I.
\label{l2}
\end{equation}
Then we can define the operators
$
E^{j}, F^{j}: {\cal H}^{(NS)} \rightarrow {\cal H}^{(NS)}, \quad j = 1,\dots,D-2
$
on the algebraic Fock space by the formulas
\begin{equation}
E^{j} \equiv - i \sum_{m > 0} {1\over m}
( \alpha_{-m}^{j}~L_{m} - L_{-m} \alpha_{m}^{j})
\label{e1}
\end{equation}
\begin{equation}
F^{j} \equiv - i \sum_{m \in \mathbb{Z}} K^{jl}_{-m} \alpha_{m}^{l}
= - i \sum_{m > 0} (\alpha_{-m}^{l} K^{jl}_{m} + K^{jl}_{-m} \alpha_{m}^{l})
- i K^{jl}_{0} \alpha_{0}^{l}
\label{f1}
\end{equation}
where the operators
$
K^{jl}_{m}
$
have been defined in Section \ref{km}.
We remark that the expression (\ref{e1}) formally coincides with (\ref{e}) but the
expression of
$
L_{m}
$
is in fact different: we have a Fermi contribution in (\ref{l2}). We define
\begin{equation}
K^{j} \equiv E^{j} + F^{j}
\end{equation}
and all operators
$
E^{j}, F^{j}, K^{j}
$
are formally self-adjoint. The {\it Hamiltonian} operator also has a Fermi contribution:
\begin{equation}
H^{(NS)} \equiv \sum_{n > 0}
:\alpha^{j}_{-n} \alpha^{j}_{n}: + \sum_{r > 0}~r~:b^{j}_{-r} b^{j}_{r}:
= \sum_{n > 0} n~(a^{j}_{n})^{+} a^{j}_{n} + \sum_{r > 0}~r~:b^{j}_{-r} b^{j}_{r}:
\end{equation}
Now we have the main result:
\begin{thm}
Let us define the following operators on
$
{\cal D}^{[m]} \otimes {\cal D}_{0}:
$
\begin{eqnarray}
P^{\pm} = p^{\pm} \otimes I_{2} \qquad P^{j} = p^{j} \otimes I_{2}
\nonumber \\
J^{kl} = L^{kl} \otimes I_{2} + I_{1} \otimes J^{(NS)kl} \qquad
J^{k+} = L^{k+} \otimes I_{2}
\nonumber \\
J^{k-} = L^{k-} \otimes I_{2} + {1\over p^{+}} K^{k} \qquad
J^{+-} = L^{+-} \otimes I_{2};
\label{p-ns}
\end{eqnarray}
here
$k,l = 1,\dots,D-2$
as usual and the operators
$
J^{(NS)kl} = J^{(\alpha)kl} + J^{(b)kl}
$
are those defined by the formula (\ref{j}). Then these operators are a
representation of the Poincar\'e Lie algebra iff
$D = 10$
and we consider only the states from the {\bf physical Hilbert space}
\begin{equation}
{\cal H}^{(NS)}_{phys} \equiv \left\{ \Psi \in {\cal H}^{(NS)} |
H^{(NS)} \Psi = {1\over 2} ( 1 + \mu^{2}) \Psi \right\}.
\end{equation}
\end{thm}
{\bf Proof:} As in the previous Subsection we check the formulas (\ref{p2}) and the
obstructions can come only from the commutator
$
[ J^{j-}, J^{k-} ].
$
The commutator
$
[ E^{j}, E^{k} ]
$
can be obtained from the corresponding formula of the preceding Subsection with the
substitution
$
{D-2 \over 12} \rightarrow {D-2 \over 8}
$
in the commutators of
$L_{m}$'s
as it follows by comparing (\ref{vir-alpha-D}) to (\ref{lll}). We get in this way easily:
\begin{eqnarray}
[ E^{j}, E^{k} ] =
\sum_{ m > 0} \left[ m {D - 10\over 8} + {1\over m} \left( 2a - {D - 2 \over 8}
\right) \right] \alpha^{k}_{-m} \alpha^{j}_{m} - ( j \leftrightarrow k)
\nonumber \\
+ i~(p^{j} E^{k} - p^{k} E^{j})
+ 2 \sum_{ m > 0} {1\over m} [ \alpha^{j}_{-m} \alpha^{k}_{m}
- ( j \leftrightarrow k) ] L_{0}.
\end{eqnarray}
To obtain the expression
$
[ K^{j}, K^{k} ]
$
we still have to compute the commutators
$
[ F^{j}, F^{k} ]
$
and
$
[ E^{j}, F^{k} ]
$
for which we use again (\ref{com}). After a tedious but straightforward algebra we get
\begin{eqnarray}
[ J^{j-}, J^{k-} ] = {1\over (p^{+})^{2}}
\sum_{ m > 0} [\alpha^{k}_{-m} \alpha^{j}_{m} - ( j \leftrightarrow k) ]
\left[ m {D - 10\over 8} + {1\over m} \left( 2 H^{(NS)} - {D - 2 \over 8}
- \mu^{2} \right) \right]
\nonumber \\
+ {D - 10\over 4} \sum_{ r > 0} r (2r -1)
\left[ b^{j}_{-r} b^{k}_{r} - ( j \leftrightarrow k) \right]
+ {1\over p^{+}}~(2 H^{(NS)} - 1 - \mu^{2})~K^{jk}_{0};
\end{eqnarray}
to obtain this formula we use in an essential way the formula (\ref{kkl-ns}) of Section \ref{km}. Now we equate to zero the right hand side and the theorem follows.
$\blacksquare$
The vacuum
$\Omega$
does {\it not} belong to the physical Hilbert space in this case either. As in the
preceding Subsection, a different construction is preferred in the literature and
is called the {\it Neveu-Schwarz superstring}. Instead of (\ref{h-ns}) we take
\begin{equation}
{\cal H}^{(NS)} \equiv
\left( \oplus_{l \in L}{\cal H}^{[\mu_{l}]} \right)\otimes {\cal F}^{(NS)};
\label{h-ns1}
\end{equation}
where the sum is over an unspecified set $L$ of masses. The extensions
of the formulas (\ref{p-ns}) to this case are obvious. The same computation as above
leads to
\begin{eqnarray}
[ J^{j-}, J^{k-} ] = {1\over (p^{+})^{2}}
\sum_{ m > 0} [\alpha^{k}_{-m} \alpha^{j}_{m} - ( j \leftrightarrow k) ]
\left[ m {D - 10\over 8} + {1\over m} \left( 2 H^{(NS)} - {D - 2 \over 8}
- p^{2} \right) \right]
\nonumber \\
+ {D - 10\over 4} \sum_{ r > 0} r (2r -1)
\left[ b^{j}_{-r} b^{k}_{r} - ( j \leftrightarrow k) \right]
+ {1\over p^{+}}~(2 H^{(NS)} -1 - p^{2})~K^{jk}_{0};
\end{eqnarray}
where now
\begin{equation}
p^{2} = \oplus_{l \in L} \mu^{2}_{l} I_{l}.
\end{equation}
We obtain as before
$D = 10$
but the physical Hilbert space is \cite{GSW}
\begin{equation}
{\cal H}^{(NS)}_{phys} \equiv \left\{ \Psi \in {\cal H}^{(NS)} |
(2 H^{(NS)} - 1)\Psi = p^{2} \Psi \right\}.
\end{equation}
In this way we get tachyons in the spectrum of the model (for instance the vacuum state
corresponds to
$p^{2} = -1$).
One can eliminate the tachyons by imposing the GSO condition \cite{GSW}, namely considering that
the physical Hilbert space is the subspace of
$
{\cal H}^{(NS)}_{phys}
$
generated by odd numbers of $b$ oscillators and arbitrary numbers of
$\alpha$
oscillators; this subspace is again Poincar\'e invariant.
The parameter $a$ remains unconstrained in this case also.
\subsection{The Ramond Superstring\label{r-s}}
In the Ramond case we consider the Hilbert space generated by the system of Bose oscillators
$
\alpha_{m}^{j}, m \in \mathbb{Z}^{*}, j = 1,\dots,D-2
$
(i.e. we exclude for the moment the value
$m = 0$) and the Fermi oscillators
$
d_{m}^{j}, m \in \mathbb{Z}, j = 1,\dots,D-2;
$
the Hilbert space generated by these operators is
$
{\cal F}^{(R)};
$
we are in the conditions of Subsections \ref{vir-r} and \ref{vir-multi}. We consider
the following Hilbert space:
\begin{equation}
{\cal H}^{(R)} \equiv {\cal H}^{[\mu]} \otimes {\cal F}^{(R)}
\label{h-r}
\end{equation}
with
$
{\cal H}^{[\mu]}
$
defined by (\ref{h-m}). We define
\begin{equation}
\alpha_{0}^{j} \equiv p^{j} \quad j = 1,\dots,D-2
\end{equation}
and we are in the conditions of Subsection \ref{vir-multi} so we can define the
Virasoro generators
\begin{equation}
L^{(\alpha)}_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~:\alpha^{j}_{m-n} \alpha^{j}_{n}:
+ {1\over 2}~\sum_{n \in \mathbb{Z}}~n~:d^{j}_{-n} d^{j}_{m+n}:
+ \left( {D\over 16} - a \right)~\delta_{m} \cdot I;
\label{l3}
\end{equation}
we remark that we have included the shift (\ref{shift}) of
$L_{0}$
such that we have the canonical form for the $2$-cocycle of the Virasoro algebra.
Then we can define the operators
$
E^{j}, F^{j}, K^{j}: {\cal H}^{(R)} \rightarrow {\cal H}^{(R)}, \quad j = 1,\dots,D-2
$
on the algebraic Fock space by the same formulas as in the preceding Subsection
(however the Virasoro operators are different).
In this case the {\it Hamiltonian} operator also has a Fermi contribution:
\begin{equation}
H^{(R)} \equiv \sum_{n > 0}
:\alpha^{j}_{-n} \alpha^{j}_{n}: + \sum_{n > 0}~n~:d^{j}_{-n} d^{j}_{n}:
= \sum_{n > 0} n~(a^{j}_{n})^{+} a^{j}_{n} + \sum_{n > 0}~n~:d^{j}_{-n} d^{j}_{n}:
\end{equation}
Now the main result is:
\begin{thm}
Let us define the following operators on
$
{\cal D}^{[m]} \otimes {\cal D}_{0}:
$
\begin{eqnarray}
P^{\pm} = p^{\pm} \otimes I_{2} \qquad P^{j} = p^{j} \otimes I_{2}
\nonumber \\
J^{kl} = L^{kl} \otimes I_{2} + I_{1} \otimes J^{(R)kl} \qquad
J^{k+} = L^{k+} \otimes I_{2}
\nonumber \\
J^{k-} = L^{k-} \otimes I_{2} + {1\over p^{+}} K^{k} \qquad
J^{+-} = L^{+-} \otimes I_{2};
\label{p-r}
\end{eqnarray}
here
$k,l = 1,\dots,D-2$
as usual and the operators
$
J^{(R)kl} = J^{(\alpha)kl} + J^{(d)kl}
$
are those defined by the formula (\ref{j}). Then these operators are a
representation of the Poincar\'e Lie algebra iff
$D = 10$
and we consider only the states from the {\bf physical Hilbert space}
\begin{equation}
{\cal H}^{(R)}_{phys} \equiv \left\{ \Psi \in {\cal H}^{(R)} |
H^{(R)} \Psi = {\mu^{2}\over 2} \Psi \right\}.
\end{equation}
\end{thm}
{\bf Proof:} Formally, the content of this theorem coincides with that of the previous theorem.
Similar computations, making use of (\ref{kkl-r}) this time, lead to:
\begin{eqnarray}
[ J^{j-}, J^{k-} ] = {1\over (p^{+})^{2}}
\sum_{ m > 0} [ \alpha^{k}_{-m} \alpha^{j}_{m} - ( j \leftrightarrow k) ]
\left[ m {D - 10\over 8} + {1\over m} \left( 2H^{(R)} - \mu^{2} \right) \right]
\nonumber \\
+ {D - 10\over 2} \sum_{ n > 0} n^{2}
\left[ d^{j}_{-n} d^{k}_{n} - ( j \leftrightarrow k) \right]
+ {1\over p^{+}}~\left(2 H^{(R)} - \mu^{2} \right)~K^{jk}_{0}
\end{eqnarray}
and equating to zero the right-hand side, the theorem follows.
$\blacksquare$
In the Ramond model the vacuum
$\Omega$
belongs to the physical Hilbert space. As in the
preceding Subsection, a different construction is preferred in the literature and
is called the {\it Ramond superstring}. Instead of (\ref{h-r}) we take
\begin{equation}
{\cal H}^{(R)} \equiv
\left( \oplus_{l \in L}{\cal H}^{[\mu_{l}]} \right)\otimes {\cal F}^{(R)};
\label{h-r1}
\end{equation}
where the sum is over an unspecified set $L$ of masses. The extensions
of the formulas (\ref{p-r}) to this case are obvious. We obtain as before
$D = 10$
but the physical Hilbert space is
\begin{equation}
{\cal H}^{(R)}_{phys} \equiv \left\{ \Psi \in {\cal H}^{(R)} |
H^{(R)} \Psi = {p^{2}\over 2} \Psi \right\}.
\end{equation}
In this way we do not get tachyons in the spectrum of the model.
\subsection{Other Superstring Models}
From the preceding two Subsections it is clear that formulas of the type
(\ref{kkl-ns}) and (\ref{kkl-r}) are essential for establishing Lorentz
invariance. We investigate now if such formulas can be valid for more
general cases. More precisely, suppose that we have a $N$-dimensional
representation
$
\sigma_{jk}, j,k = 1,\dots, D
$
of the algebra
$
so(D)
$
such that
\begin{equation}
\sigma_{jk}^{T} = - \sigma_{jk};
\end{equation}
then we can define the associated Kac-Moody algebras
$
K^{jk}_{m}(\sigma)
$
according to the formulas (\ref{km-b}) and (\ref{km-d}) respectively.
We are interested whether in some special cases formulas of the type
(\ref{kkl-ns}) and (\ref{kkl-r}) hold. A necessary condition is that
the terms quadratic in the operators $b$ (resp. $d$) cancel identically.
It is not very difficult to prove that this requirement is equivalent to
\begin{equation}
\sum_{r_{1} + \dots + r_{4} = 0} X^{jk}_{p_{1}r_{1};\dots;p_{4}r_{4}}
:b^{p_{1}}_{r_{1}} \dots b^{p_{4}}_{r_{4}}: = 0
\label{xb}
\end{equation}
where
\begin{equation}
X^{jk}_{p_{1}r_{1};\dots;p_{4}r_{4}} \equiv
(r_{1} + r_{2}) (\sigma_{jl})_{p_{1}p_{2}} (\sigma_{kl})_{p_{3}p_{4}}
+ (r_{3} - r_{4}) (\sigma_{jk})_{p_{1}p_{2}}~\delta_{p_{3}p_{4}}
\label{x}
\end{equation}
in the Neveu-Schwarz case and similar relations for the Ramond case.
The relation (\ref{xb}) is equivalent to
\begin{equation}
X^{jk}_{p_{1}r_{1};\dots;p_{4}r_{4}} - (1 \leftrightarrow 3)
- (1 \leftrightarrow 4) - (2 \leftrightarrow 3) - (2 \leftrightarrow 4)
+ (1 \leftrightarrow 3, 2 \leftrightarrow 4) = 0.
\end{equation}
One inserts here the definition (\ref{x}) and eliminates
$
r_{4} = - (r_{1} + r_{2} + r_{3});
$
the result is an equation of the form
\begin{equation}
r_{1} E^{(1)jk}_{p_{1}\dots;p_{4}} + r_{2} E^{(2)jk}_{p_{1}\dots;p_{4}}
+ r_{3} E^{(3)jk}_{p_{1}\dots;p_{4}} = 0
\end{equation}
so we obtain three relations
\begin{equation}
E^{(a)jk}_{p_{1}\dots;p_{4}} = 0 \quad a = 1,2,3.
\label{eee}
\end{equation}
One can easily see that the relation
\begin{equation}
E^{(1)} + E^{(2)} - E^{(3)} = 0
\end{equation}
is equivalent to
\begin{equation}
(\sigma_{jl})_{ab} (\sigma_{kl})_{cd} - (j \leftrightarrow k) =
\delta_{bd} (\sigma_{jk})_{ac} - (a \leftrightarrow b)
- (c \leftrightarrow d) + (a \leftrightarrow b, c \leftrightarrow d)
\label{repr}
\end{equation}
and conversely (\ref{repr}) implies (\ref{eee}). Moreover, if we have
(\ref{repr}) then one can prove that relations of the type (\ref{kkl-ns})
and (\ref{kkl-r}) hold and we have Lorentz invariance theorems as in
the preceding Subsection in $10$ dimensions. So the key relation
(\ref{repr}) must be analyzed in the case
$D = 8$.
We note that in this case one should
modify in an appropriate way the expression (\ref{j}) for the generators
of the rotations
$J^{(b)jk}$
and
$J^{(d)jk}$.
One can obtain an important consequence of (\ref{repr}) if we take
$b = c$
and sum over
$b = 1, \dots, N$.
We obtain
\begin{equation}
[ \sigma_{jl}, \sigma_{kl} ] = (2 - N) \sigma_{jk};
\end{equation}
on the other hand we have from the representation property of
$\sigma_{jk}$
\begin{equation}
[ \sigma_{jl}, \sigma_{kl} ] = (2 - D) \sigma_{jk}
\end{equation}
so we conclude that the representation
$
\sigma_{jk}
$
should be $D$-dimensional, i.e. we need to consider only the
representations of dimension $8$ of the algebra
$so(8)$.
It is known that there are four (non-equivalent) such representations:
the vector representation (which we have already used in the preceding
Subsection), the adjoint representation and the two spinor
representations
${\bf 8}_{s}$
and
${\bf 8}_{c}$
of opposite chirality. It seems that the identity (\ref{repr}) is valid
for the spinor representations also but the details are not
easily found in the literature so we provide an elementary analysis.
First, it is clear that if we multiply (\ref{repr}) with
$
M_{dc}
$
and make the summation we obtain an equivalent relation
\begin{equation}
Tr(\sigma_{jl} M) \sigma_{kl} - (j \leftrightarrow k) = [ M - M^{T}, \sigma_{jk} ], \quad \forall M.
\end{equation}
If $M$ is symmetric then the preceding relation is an identity. So (\ref{repr}) is equivalent to
\begin{equation}
Tr(\sigma_{jl} M) \sigma_{kl} - (j \leftrightarrow k) = 2~[ M, \sigma_{jk} ]
\label{m}
\end{equation}
for all antisymmetric
$N \times N$-
matrices $M$. Now the number of linearly independent
antisymmetric $N \times N$
matrices is
$
{N(N-1) \over 2};
$
on the other hand the number of matrices
$
\sigma_{jk}
$
is
$
{D(D-1) \over 2}.
$
But we have already established that
$N = D$
so if the matrices
$
\sigma_{jk}
$
are linearly independent the relation (\ref{m}) is equivalent to
\begin{equation}
Tr(\sigma_{jl} \sigma_{pq}) \sigma_{kl} - (j \leftrightarrow k) = 2~[ \sigma_{pq}, \sigma_{jk} ],
\quad \forall p,q.
\label{m1}
\end{equation}
In particular it is easy to see that the fundamental representation
$E_{jk}$
verifies the preceding identity. We check the identity for the spinor representations.
According to \cite{GSW} one can describe the spinor representations of the algebra
$
so(2n)
$
considering the Fermi Fock space $S$ generated by the operators
$
b_{j}, b_{j}^{*}, j = 1,\dots,n;
$
we have the CAR algebra:
\begin{equation}
\{ b_{j}, b_{k} \} = 0 \quad \{ b_{j}, b^{*}_{k} \} = \delta_{jk}.
\end{equation}
Next, we define the operators
\begin{eqnarray}
\gamma_{j} = b_{j} + b_{j}^{*}, \quad j = 1, \dots, n
\nonumber \\
\gamma_{j} = -i~(b_{j-n} - b_{j-n}^{*}), \quad j = n+1, \dots, 2n
\end{eqnarray}
and prove immediately that they form a
$2^{n}$-
dimensional representation of the Clifford algebra
$C(2n,0)$
i.e. we have
\begin{equation}
\{ \gamma_{j}, \gamma_{k} \} = 2~\delta_{jk} \cdot I.
\end{equation}
Then a representation of the algebra
$so(2n)$
is given by the operators
\begin{equation}
\sigma_{jk} = {1\over 4} [ \gamma_{j}, \gamma_{k} ].
\end{equation}
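One can verify directly, using the Clifford relations above, that
\begin{equation}
[ \sigma_{jk}, \gamma_{l} ] = \delta_{kl}~\gamma_{j} - \delta_{jl}~\gamma_{k}
\end{equation}
and it follows that the commutators $[ \sigma_{jk}, \sigma_{pq} ]$ close on the operators $\sigma$ with the structure constants of $so(2n)$.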
This representation is not irreducible. In fact let us denote by
$
S_{+}
$
(resp.
$
S_{-})
$
the subspaces of $S$ generated by applying an even (resp. odd) number of creation operators
$b_{j}^{*}$
on the vacuum. The projectors on these subspaces will be denoted by
$P_{\pm}$.
It is easy to see that these two subspaces are left invariant by the representation
$
\sigma_{jk}
$
so it makes sense to define the restrictions
$
\sigma^{\pm}_{jk}.
$
The operators
$
\sigma^{\pm}_{jk}
$
are immediately seen to be linearly independent. It is also easy to prove that
$
dim(S_{+}) = dim(S_{-}) = 2^{n-1}
$
i.e. both representations
$
\sigma^{\pm}_{jk}
$
are of the same dimension
$2^{n-1}$;
these are, by definition, the spinor representations of the algebra
$so(2n)$.
Finally we prove:
\begin{equation}
Tr(\sigma^{\pm}_{jk} \sigma_{pq}^{\pm}) = 2^{n-3}~(\delta_{kp} \delta_{jq} - \delta_{jp} \delta_{kq}).
\label{lamb}
\end{equation}
Indeed, because the left hand side is a
$SO(2n)$-
invariant tensor and because of the antisymmetry properties we know that it
must have the form
$
\lambda~(\delta_{kp} \delta_{jq} - \delta_{jp} \delta_{kq});
$
to determine the constant
$\lambda$
we consider a particular case, say
$
p = j, q = k \quad j \not= k
$
and we obtain
\begin{equation}
\lambda = - Tr(\sigma_{jk}^{2} P_{\pm}) = {1\over 4} Tr(P_{\pm})
= {1\over 4} dim(S_{\pm}) = 2^{n-3}.
\end{equation}
It follows that only for
$n = 4$
we have
$\lambda = 2$
and in this case if we use (\ref{lamb}) in (\ref{m1}) we obtain an identity. It follows
that the two spinor representations of
$so(8)$
verify the identity (\ref{m}) so they can be used to construct supersymmetric string
models as in the preceding two Subsections. These models are considered more consistent
because the oscillators quantized with Fermi statistics pertain to spinor representations,
in agreement with the spin-statistics correspondence. Moreover, these models
also exhibit supersymmetry invariance.
There is yet another possibility of constructing consistent models, namely by modifying
the Bosonic string from Subsection \ref{b-s}. We consider that we have another
representation of the Virasoro algebra
$
L_{m}^{c}
$
with central charge $c$ acting in the Hilbert space
$
{\cal H}^{c};
$
we consider the Hilbert space
$
{\cal H}^{\mu,\alpha} \otimes {\cal H}^{c}
$
where
$
{\cal H}^{\mu,\alpha}
$
is given by (\ref{h-b}) and modify the Virasoro algebra given by (\ref{l1}) according to
$
L_{m} \rightarrow L_{m} + L_{m}^{c}.
$
Because the central charges are additive the new Virasoro algebra will have the
central charge
$D - 2 + c$
so the consistency condition is in this case
\begin{equation}
c = 26 - D
\end{equation}
and the expression of the physical Hilbert space from theorem \ref{bs-inv} remains the same.
In particular if we want a model in
$D = 10$
dimensions we must find a representation of the Virasoro algebra with central charge
$c = 16$.
It is known that such representations can be found for the groups
$SO(32)$
and
$E_{8} \times E_{8}$
using the Sugawara construction \cite{Ot}. This possibility is used in the construction of the heterotic string models.
For the description of closed strings a doubling of the Bose oscillators
$
\alpha_{m}^{j}
$
appears, corresponding to the left and right oscillator modes. By composing the models described above in various ways one can obtain the well-known string models of type I, IIA, IIB and heterotic.
\section{Covariant Quantization of Strings and Superstrings}
One can construct a manifestly covariant formalism also \cite{GSW}. The idea
is to take in Subsection \ref{vir-multi} the case
$
\epsilon_{0} = - 1, \epsilon_{1} = \dots = \epsilon_{D-1} = 1.
$
In this way the Hilbert space will have states of negative or zero norm.
So we consider that we have the family of operators:
$
\alpha^{\mu}_{m}, m \in \mathbb{Z}^{*}, \mu = 0,\dots,D-1
$
acting in the Hilbert space
$
{\cal F}^{(\alpha)}_{\rm cov}
$
such that:
\begin{eqnarray}
[ \alpha^{\mu}_{m}, \alpha^{\nu}_{n} ] = - \eta_{\mu\nu}~m~\delta_{m+n} \cdot I,
\quad \forall m, n
\nonumber \\
\alpha^{\mu}_{m} \Omega = 0, \quad m > 0
\nonumber \\
(\alpha^{\mu}_{m})^{+} = \alpha^{\mu}_{-m} \quad \forall m;
\end{eqnarray}
the Hermitian form on this Hilbert space will not be positive definite. Define the Virasoro operators
\begin{equation}
\bar{L}_{m} \equiv - {1\over 2}~\eta_{\mu\nu}~\sum_{n \not= 0,m}~
:\alpha^{\mu}_{m-n} \alpha^{\nu}_{n}:
\label{vir-bar}
\end{equation}
and we have the following commutation relations:
\begin{equation}
[ \bar{L}_{m}, \bar{L}_{n} ] = (m - n) \bar{L}_{m+n}
+ D~{ m (m^{2} - 1) \over 12}~\delta_{m+n}~\cdot I.
\end{equation}
\begin{prop}
We consider
$
k \in \mathbb{R}^{D}
$
and recursively define the operators
$
U_{n}(k), n \in \mathbb{N}
$
according to
\begin{equation}
U_{0} = I \qquad
U_{n}(k) = {1 \over n} \sum_{l=1}^{n} U_{n-l}(k) k\cdot \alpha_{l}.
\end{equation}
For convenience we define
$
U_{n} = 0 \quad \forall n < 0.
$
Then the following relation is valid:
\begin{equation}
[ \alpha^{\mu}_{m}, U_{n}(k) ] = \theta(-m)~k^{\mu}~U_{m+n}(k)
\end{equation}
where
$
\theta
$
is the usual Heaviside function; the commutator is non-zero only for
$
m < 0,
$
as it must be, since
$
U_{n}(k)
$
is built only from the operators
$
\alpha_{l}, \quad l \geq 1.
$
\end{prop}
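The key step of the induction is the following computation (a sketch, in the conventions above; we take
$
m = - m^{\prime} < 0,
$
the commutator being zero for
$
m > 0,
$
and use
$
[ \alpha^{\mu}_{-m^{\prime}}, k\cdot \alpha_{l} ] = m^{\prime}~k^{\mu}~\delta_{l,m^{\prime}} \cdot I
$):
\begin{equation}
[ \alpha^{\mu}_{-m^{\prime}}, U_{n}(k) ] = {k^{\mu} \over n} \left(
\sum_{l=1}^{n} U_{n-m^{\prime}-l}(k)~k\cdot \alpha_{l} + m^{\prime}~U_{n-m^{\prime}}(k) \right)
= k^{\mu}~U_{n-m^{\prime}}(k)
\end{equation}
where in the last equality we used the recurrence relation once more.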
The full proof is an easy induction on $n$. Let us remark that the expressions
$
U_{n}(k)
$
are the coefficients of the formal series in
$
z \in \mathbb{C}:
$
\begin{equation}
U(z,k) \equiv e^{A(z,k)} \qquad
A(z,k) \equiv \sum_{n \geq 1} {1\over n} k\cdot \alpha_{n} z^{n}.
\end{equation}
The recurrence relation from the statement of the proposition can be found if we compute the derivative of
\begin{equation}
U(z,k) = \sum_{n \geq 0} U_{n}(k) z^{n}
\end{equation}
in two ways. The explicit relation is
\begin{equation}
U_{n}(k) = \sum_{p \geq 0} {1\over p!} \sum_{i_{1},\dots,i_{p} > 0 \atop i_{1}+\cdots+i_{p} = n}~{1 \over i_{1} \cdots i_{p}}
(k\cdot \alpha_{i_{1}}) \cdots (k\cdot \alpha_{i_{p}})
\end{equation}
but it is convenient to work with the recurrence relation and not with the explicit expression given above.
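For orientation, the first coefficients following from the recurrence are
\begin{equation}
U_{1}(k) = k\cdot \alpha_{1} \qquad
U_{2}(k) = {1\over 2}~k\cdot \alpha_{2} + {1\over 2}~(k\cdot \alpha_{1})^{2}
\end{equation}
in agreement with the explicit expression above.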
The operators
$
U_{n}(k)
$
leave the algebraic Fock space
$
{\cal D}_{0}
$
invariant and moreover for every
$
\Psi \in {\cal D}_{0}
$
we have
\begin{equation}
U_{n}(k) \Psi = 0
\end{equation}
for sufficiently large $n$. It is useful to express the formal series relations:
\begin{eqnarray}
[ U(z,k)^{\dagger}, U(z,k^{\prime}) ] = 0 \quad {\rm for} \quad k\cdot k^{\prime} = 0
\nonumber \\
U(z,k) U(z,k^{\prime}) = U(z,k+k^{\prime}).
\end{eqnarray}
in terms of the
$
U_{p}(k)
$
operators. In particular we have
\begin{equation}
\sum_{p \in \mathbb{Z}} U_{n-p}(k)~U_{p}(k^{\prime}) = U_{n}(k+k^{\prime}).
\end{equation}
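(As an elementary check at
$n = 1$,
using
$U_{p} = 0$
for
$p < 0$:
the left hand side reduces to
$U_{0}(k) U_{1}(k^{\prime}) + U_{1}(k) U_{0}(k^{\prime}) = (k + k^{\prime})\cdot \alpha_{1} = U_{1}(k + k^{\prime}).$)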
Less elementary are the following sum relations:
\begin{eqnarray}
\sum_{p \in \mathbb{Z}} p~U_{n-p}(k)~U_{m+p}(k^{\prime})
= \sum_{l > 0} U_{m+n-l}(k+k^{\prime})~k^{\prime}\cdot\alpha_{l} - m~U_{m+n}(k+k^{\prime})
\nonumber \\
\sum_{p \in \mathbb{Z}} p~U_{n-p}(k)~U_{m+p}(-k)
= -\theta(m+n) k\cdot\alpha_{m+n} - m~\delta_{m+n} \cdot I
\nonumber \\
\sum_{p \in \mathbb{Z}} p~U_{n-p}(\alpha k)~U_{m+p}(\beta k)
= {n\beta - m \alpha \over \alpha + \beta}~U_{m+n}((\alpha + \beta) k)
\qquad \alpha, \beta \in \mathbb{R}, \alpha + \beta \not= 0.
\label{sum-u}
\end{eqnarray}
Now we have
\begin{prop}
The operators
$
V_{n}(k), n \in \mathbb{Z}
$
are well defined on the algebraic Fock space according to the relations
\begin{equation}
V_{n}(k) = \sum_{p \in \mathbb{Z}} U_{p-n}(-k)^{\dagger}~U_{p}(k).
\end{equation}
\end{prop}
Indeed the sum over $p$ is in fact finite because we have
$
U_{n}(k) \Psi = 0
$
for sufficiently large $n$ if
$
\Psi \in {\cal D}_{0}.
$
The expressions
$
V_{n}(k)
$
are the coefficients of the formal series
\begin{equation}
V(z,k) \equiv U(z,-k)^{\dagger}~U(z,k)
\end{equation}
We have analogous elementary properties: the operators
$
V_{n}(k)
$
leave the algebraic Fock space
$
{\cal D}_{0}
$
invariant and moreover for every
$
\Psi \in {\cal D}_{0}
$
we have
\begin{equation}
V_{n}(k) \Psi = 0
\end{equation}
for sufficiently large $n$. We have
\begin{eqnarray}
V_{n}(k)^{\dagger} = V_{-n}(-k).
\nonumber \\
~[ \alpha^{\mu}_{m}, V_{n}(k) ] = (1 - \delta_{m})~k^{\mu}~V_{m+n}(k).
\nonumber \\
\sum_{p \in \mathbb{Z}} V_{n-p}(k)~V_{p}(k^{\prime}) = V_{n}(k+k^{\prime}).
\end{eqnarray}
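For instance, the second relation follows directly from the commutation property of the
$
U_{n}(k)
$
(a short computation; using
$
(\alpha^{\mu}_{m})^{\dagger} = \alpha^{\mu}_{-m}
$
one gets
$
[ \alpha^{\mu}_{m}, U_{q}(-k)^{\dagger} ] = \theta(m)~k^{\mu}~U_{q-m}(-k)^{\dagger}
$):
for
$m > 0$
only the factor
$U(-k)^{\dagger}$
contributes, for
$m < 0$
only the factor
$U(k)$,
and in both cases a shift of the summation index produces
$k^{\mu}~V_{m+n}(k)$;
for
$m = 0$
both contributions vanish, whence the factor
$(1 - \delta_{m})$.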
Sum relations of the type (\ref{sum-u}) can be found for the
$
V_{p}(k)
$
operators.
Now we can derive the conformal properties of these operators i.e. the commutation relations with the Virasoro operators.
\begin{prop}
The following relation is true:
\begin{equation}
[ \bar{L}_{m}, V_{n}(k) ] = - (c_{m} k^{2} + m + n)~V_{m+n}
\end{equation}
where we have defined
\begin{equation}
c_{m} \equiv \cases{ {m - 1\over 2}, & for m $>$ 0 \cr
{m + 1\over 2}, & for m $<$ 0 \cr
0, & for m = 0 \cr}.
\end{equation}
In particular if
$
k^{2} = 0
$
we have
\begin{equation}
[ \bar{L}_{m}, V_{n}(k) ] = - (m + n)~V_{m+n}
\end{equation}
i.e. the operators
$
V_{n}(k)
$
have conformal dimension $0$.
\end{prop}
The computation is straightforward: we first compute the commutations relation of the Virasoro operators with
$
U_{n}(k)
$
and then we use the definition of the operators
$
V_{n}(k).
$
We only note a discrepancy with the standard literature, where it is asserted that these operators have a well-defined conformal dimension for any
$
k \in \mathbb{R}^{D};
$
the origin of this discrepancy can be traced to the coefficient
$
c_{m}
$
which is different from the standard literature. Fortunately, only the case
$
k^{2} = 0
$
is needed for the construction of the DDF operators.
We are approaching the definition of the DDF operators. First we define the operators
\begin{equation}
\bar{V}^{\mu}_{n}(k) \equiv \sum_{p > 0}
~[ \alpha^{\mu}_{-p} V_{n+p}(k) + V_{n-p}(k) \alpha^{\mu}_{p} ]
\end{equation}
and we have
\begin{prop}
Let
$
k \in \mathbb{R}^{D}, \quad k^{2} = 0.
$
Let us define the following operators:
\begin{equation}
\bar{A}^{\mu}_{m} \equiv \bar{V}^{\mu}_{m}(mk).
\end{equation}
Then the following relations are verified:
\begin{eqnarray}
[ \bar{A}^{\mu}_{m}, \bar{A}^{\nu}_{n} ] = - \eta^{\mu\nu}~m~\delta_{m+n}\cdot I
+ k^{\mu}~\bar{V}^{\nu}_{m,n}(k) - k^{\nu}~\bar{V}^{\mu}_{n,m}(k)
\nonumber \\
~[ \bar{L}_{m}, \bar{A}^{\mu}_{n} ] = - n~\bar{V}^{\mu}_{m+n}
+ {m(m-1)\over 2} k^{\mu}~V_{m+n}(nk)
\nonumber \\
\bar{A}_{n}^{\mu}(k)^{\dagger} = \bar{A}^{\mu}_{-n}(-k)
\nonumber \\
\bar{A}^{\mu}_{n} \Omega = 0 \quad \forall n > 0
\nonumber \\
\bar{A}^{\mu}_{0} = 0
\end{eqnarray}
where the explicit expressions
$
\bar{V}^{\nu}_{m,n}(k)
$
are not important.
\end{prop}
To construct the DDF operators we have to include the kinematic degrees of freedom also.
We define the Hilbert space
$
{\cal H}^{[\mu,\alpha]}_{\rm cov} \equiv
{\cal H}^{[\mu]} \otimes {\cal F}^{(\alpha)}_{\rm cov}
$
where
$
{\cal H}^{[\mu]}
$
is the Hilbert space of a particle of mass
$
\mu
$
and spin $0$ and
$
{\cal F}^{(\alpha)}_{\rm cov}
$
is the Fock space defined at the beginning of this Section. We use the convention
\begin{equation}
\alpha^{\mu}_{0} = p^{\mu}
\end{equation}
and define the covariant Virasoro operators
\begin{equation}
L^{(\alpha)}_{m} \equiv - {1\over 2}~\eta_{\mu\nu}~\sum_{n \in \mathbb{Z}}~
:\alpha^{\mu}_{m-n} \alpha^{\nu}_{n}: - a\delta_{m} \cdot I
\label{vir-cov}
\end{equation}
such that we have the following commutation relations:
\begin{equation}
[ L^{(\alpha)}_{m}, L^{(\alpha)}_{n} ] = (m - n) L^{(\alpha)}_{m+n}
+ D~{ m (m^{2} - 1) \over 12}~\delta_{m+n}~\cdot I.
\end{equation}
In this Hilbert space we have a natural action of the Poincar\'e
algebra. This Hilbert space will have states of negative or zero norm.
Now we can define the DDF operators:
\begin{thm}
Let
$
k \in \mathbb{R}^{D}
$
be such that
$
k^{2} = 0.
$
Let us define in
$
{\cal H}^{[\mu,\alpha]}_{\rm cov}
$
the operators
\begin{equation}
V^{\mu}_{n}(k) \equiv \bar{V}^{\mu}_{n}(k) + p^{\mu}~V_{n}(k)
\end{equation}
and
\begin{equation}
A^{\mu}_{n} \equiv V^{\mu}_{n}(nk)
\end{equation}
Then the following relations are true:
\begin{eqnarray}
[ A^{\mu}_{m}, A^{\nu}_{n} ] = - \eta^{\mu\nu}~m~\delta_{m+n}\cdot I
+ k^{\mu}~V^{\nu}_{m,n}(k) - k^{\nu}~V^{\mu}_{n,m}(k)
\nonumber \\
~[ L^{(\alpha)}_{m}, A^{\mu}_{n} ] = - n~(1 + k\cdot p)~V^{\mu}_{m+n}
+ {m(m-1)\over 2}~n~k^{\mu}~V_{m+n}(nk)
\end{eqnarray}
where the explicit expressions
$
V^{\nu}_{m,n}(k)
$
are not important. In particular consider that
$
k \in \mathbb{R}^{D}
$
also verifies
$
k^{j} = 0, \quad j = 1,\cdots,D-2, \quad k\cdot p = - 1
$
(e.g.
$
k^{+} = 0, \quad k^{-} = - {1\over p^{+}}, \quad k^{j} = 0, \quad j = 1,\cdots,D-2
$)
then the operators
$
A^{j}_{n}, \quad j = 1,\cdots,D-2
$
verify
\begin{eqnarray}
[ A^{j}_{m}, A^{k}_{n} ] = \delta_{jk}~m~\delta_{m+n}\cdot I
\nonumber \\
~[ L^{(\alpha)}_{m}, A^{j}_{n} ] = 0, \quad m \not= 0 \qquad
[ L^{(\alpha)}_{0}, A^{j}_{n} ] = - n~A^{j}_{n}
\nonumber \\
(A_{n}^{j})^{\dagger} = A^{j}_{-n}
\nonumber \\
A^{j}_{n} \Omega = 0 \quad \forall n > 0
\nonumber \\
A^{j}_{0} = p^{j}
\label{A-b}
\end{eqnarray}
so they verify the same algebra as the operators
$
\alpha^{j}_{m}.
$
\label{DDF-b}
\end{thm}
The {\it DDF operators}
$
A^{j}_{n}
$
are the $z$-independent component of the vertex operator
\begin{equation}
\dot{X}^{j}(z)~e^{i n k\cdot X(z)}
\end{equation}
where
\begin{equation}
X^{\mu}(z) \equiv \sum_{n \not= 0} {1\over n} \alpha^{\mu}_{n}~z^{n} + p^{\mu}~\ln(z)
\qquad
\dot{X}^{j}(z) \equiv \sum_{n \not= 0} \alpha^{j}_{n}~z^{n} + p^{j}.
\end{equation}
\section{The Covariant Quantum Bosonic String\label{b-s-cov}}
We describe the Bosonic string (see Section \ref{b-s}) using the Hilbert space bundle formalism \cite{Va}. First we consider the system of Bose oscillators
$
\alpha_{m}^{j}, m \in \mathbb{Z}^{*}, j = 1,\dots,D-2
$
(i.e. we exclude for the moment the value
$m = 0$). The Hilbert space generated by these operators is
$
{\cal F}^{(\alpha)};
$
we are in the conditions of Subsections \ref{vir-multi} and \ref{b-s}. We now consider
the following Hilbert space:
\begin{equation}
{\cal H}^{(\mu,\alpha)} \equiv {\cal H}^{[\mu]} \otimes {\cal F}^{(\alpha)}
\label{h-b-cov}
\end{equation}
with
$
{\cal H}^{[\mu]}
$
defined as above. We define
\begin{equation}
\alpha_{0}^{j} \equiv p^{j} \quad j = 1,\dots,D-2
\end{equation}
and the {\it transversal Virasoro generators} by
\begin{equation}
L^{T}_{m} \equiv {1\over 2}~\sum_{n \in \mathbb{Z}}~:\alpha^{j}_{m-n} \alpha^{j}_{n}:
- a \delta_{m} \cdot I
\label{l1-trans}
\end{equation}
which verify the Virasoro algebra with central charge
$
D - 2.
$
Then we define similarly to (\ref{e}) the operators
$
E^{j}(p): {\cal H}^{(\mu,\alpha)} \rightarrow {\cal H}^{(\mu,\alpha)}, \quad j = 1,\dots,D-2
$
on the algebraic Fock space according to the formulas
\begin{equation}
E^{j}(p) \equiv - i \sum_{m > 0} {1\over m}
( \alpha_{-m}^{j}~L^{T}_{m} - L^{T}_{-m} \alpha_{m}^{j});
\label{e-cov}
\end{equation}
we can now construct the generators of the Poincar\'e group as in (\ref{p-b}) and the physical Hilbert space
$
{\cal H}^{(\mu,\alpha)}_{\rm phys}
$
as in (\ref{phys-a1}); we take
$
D = 26
$
such that the Poincar\'e algebra closes. The Hilbert space bundle
$
{\cal B}^{[\mu,\alpha]}
$
is made of couples
$
(p,f)
$
where
$
p = (p^{+},\tilde{p}) \in \mathbb{R}_{+} \times \mathbb{R}^{D-2}
$
is a chart on the mass-shell and
$
f \in {\cal H}^{(\mu,\alpha)}_{\rm phys};
$
there is a natural fibration over the mass shell given by the canonical projection on the first component. On this bundle we have the following action of the Lorentz algebra:
\begin{equation}
\xi \cdot (p,f) = (\xi\cdot p, \xi\cdot f) \quad \forall \xi \in {\rm Lie}({\cal L})
\end{equation}
where
\begin{equation}
j^{\mu\nu}\cdot p = L^{\mu\nu} \cdot p
\label{kin}
\end{equation}
and
\begin{equation}
j^{kl} = J^{(\alpha)kl}
\quad
j^{k+} = 0
\quad
j^{k-} = {1\over p^{+}} E^{k}(p)
\quad
j^{+-} = 0;
\label{p-b-bundle}
\end{equation}
here
$
k,l = 1,\dots,D-2.
$
The scalar product in the fiber over $p$ is simply the scalar product from
$
{\cal F}^{(\alpha)}.
$
It is easy to verify all the axioms of a Hilbert space bundle. As is well known, the representations of the Poincar\'e group are induced by representations of the stability subgroup of any point on the mass-shell. If we take the point
$
p^{(0)}
$
with coordinates
$
p^{+} = {\mu\over \sqrt{2}}, p^{j} = 0 \quad (j = 1,\cdots,D-2)
$
it is easy to get from (\ref{kin}) that the stability subgroup is
$
SO(D-1)
$
and the infinitesimal generators are
$
j^{kl} \quad k,l = 1,\dots,D-1.
$
Next we get from (\ref{p-b-bundle}) that the representation of
$
SO(D-1)
$
inducing the representation of the Poincar\'e group from the theorem is
\begin{equation}
j^{kl} = J^{(\alpha)kl}
\quad
j^{k+} = 0
\quad
j^{k-} = - {1\over \mu} E^{k}(p^{(0)})
\quad
j^{+-} = 0;
\end{equation}
one can check the representation property using the definition of
$
{\cal H}^{(\mu,\alpha)}_{\rm phys}.
$
We give now the covariant description of the preceding construction. We define the Hilbert space bundle
$
{\cal B}^{[\mu,\alpha]}_{\rm cov}
$
of couples
$
(p,\Psi)
$
where $p$ is on the positive mass-shell
$
p \in \mathbb{R}^{D} \quad p^{0} > 0 \quad p^{2} = \mu^{2}
$
and
$
\Psi \in {\cal F}^{(\alpha)}_{\rm cov}
$
verifies the supplementary restrictions
\begin{equation}
L_{m} \Psi = 0 \quad m \geq 0
\end{equation}
where the Virasoro operators
$
L_{m} = L_{m}^{(\alpha)}
$
are given by (\ref{vir-cov}) for the value
$
a = 1.
$
The Hermitian form in the fiber over $p$ is the form defined on
$
{\cal F}^{(\alpha)}_{\rm cov}.
$
We want to obtain an isomorphism with the previously constructed physical Hilbert space
$
{\cal H}^{(\mu,\alpha)}_{\rm phys}.
$
We present briefly the usual argument with some simplifications. We first define the
{\it DDF states} as linear combinations of states of the form:
\begin{equation}
f = A^{j_{1}}_{m_{1}} \dots A^{j_{l}}_{m_{l}}\Omega \quad m_{1},\dots,m_{l} < 0.
\end{equation}
It is useful to introduce the notation
\begin{equation}
K_{m} \equiv k\cdot \alpha_{m}
\end{equation}
and we easily obtain
\begin{equation}
[ K_{m}, K_{n} ] = 0 \qquad [K_{m}, L_{n} ] = m~K_{m+n};
\label{kl}
\end{equation}
we also have for any DDF state:
\begin{equation}
K_{m} f = 0 \quad \forall m > 0.
\end{equation}
Next we have the following technical result for which we present a simpler proof:
\begin{prop}
The vectors of the type
\begin{equation}
\Psi_{\lambda,\mu,f} \equiv L^{\lambda_{1}}_{-1} \cdots L^{\lambda_{m}}_{-m}
K^{\mu_{1}}_{-1} \cdots K^{\mu_{n}}_{-n}~f
\end{equation}
where
$
\lambda_{1},\dots,\lambda_{m},\mu_{1},\dots,\mu_{n} \in \mathbb{N}^{*}
$
and $f$ is a DDF state, are linearly independent and generate the whole space
$
{\cal F}^{(\alpha)}_{\rm cov}.
$
\label{basis}
\end{prop}
{\bf Proof:} (i) We know that the Hilbert space
$
{\cal F}^{(\alpha)}_{\rm cov}
$
is generated by the operators
$
\alpha^{\mu}_{-m}, \quad m > 0, \quad \mu = 0,\dots,D-1
$
applied on the vacuum. It is convenient to work with the operators
\begin{equation}
\alpha^{\pm}_{m} \equiv {1\over \sqrt{2}} (\alpha^{0}_{m} \pm \alpha^{D-1}_{m})
\qquad
\alpha^{j}_{m} \quad j = 1,\dots,D-2.
\end{equation}
If we take
$
k \in \mathbb{R}^{D}
$
as in the construction of the DDF operators i.e.
$
k^{+} = 0,~k^{j} = 0~(j = 1,\dots,D-2)
$
we have
\begin{equation}
K_{m} = k^{-} \alpha^{+}_{m}.
\end{equation}
So if we apply on the vacuum operators of the form
$
K_{-1},\dots,K_{-m}
$
we obtain all the states of the form
$
\Psi^{+} = P(\alpha^{+}_{-1},\dots,\alpha^{+}_{-m})\Omega
$
with $P$ a polynomial. Now we easily compute
\begin{equation}
A^{j}_{-1}\Omega = \alpha^{j}_{-1}\Omega + p^{j} V_{-1}(k)\Omega.
\end{equation}
Because the second vector is of the type
$
\Psi^{+}
$
we can generate the states
$
\alpha^{j}_{-1}\Omega
$
using the DDF operators and the
$
K_{-m}
$
operators. Afterwards, using the $K$ operators we can generate all the states of the form
$
\alpha^{j}_{-1}P(\alpha^{+}_{-1},\dots,\alpha^{+}_{-m})\Omega.
$
Using now
$
2,3,\dots
$
DDF operators we can establish by induction that all states of the form
$
P(\alpha^{j}_{-1},\alpha^{+}_{-1},\dots,\alpha^{+}_{-m})\Omega
$
can be obtained using only DDF and $K$ operators. Next we suppose that we can
create all states of the form
$
P(\alpha^{j_{1}}_{-1},\dots,\alpha^{j_{n-1}}_{-(n-1)},
\alpha^{+}_{-1},\dots,\alpha^{+}_{-m})\Omega
$
using only DDF and $K$ operators and extend the result to $n$ by applying DDF operators
of the form
$
A^{j}_{-n}
$
on such a state. Finally, we note that we have
\begin{equation}
L^{\lambda_{1}}_{-1} \cdots L^{\lambda_{m}}_{-m}
= {\rm const} (\alpha^{-}_{-1})^{\lambda_{1}+\cdots+\lambda_{m}} + \cdots
\end{equation}
where by $\cdots$ we mean terms containing
$
\alpha^{-}_{-1}
$
at a power strictly smaller than
$
\lambda_{1}+\cdots+\lambda_{m}.
$
If we choose the preceding sum conveniently we can generate all states with
$
\alpha^{-}_{-1}
$
factors. In the same way we obtain the states with
$
\alpha^{-}_{-l} \quad l > 1
$
factors. It follows that the states of the form
$
\Psi_{\lambda,\mu,f}
$
are really generating the whole Hilbert space
$
{\cal F}^{(\alpha)}_{\rm cov}.
$
(ii) We must prove that there are no linear dependencies between such vectors. If we use
the well-known relation
\begin{equation}
[ \bar{L}_{0}, \alpha^{\mu}_{-m} ] = m~\alpha^{\mu}_{-m}
\end{equation}
we easily obtain that the vector
\begin{equation}
\Psi = \prod_{n,\rho} (\alpha^{\rho}_{-n})^{\epsilon_{n,\rho}}\Omega,
\label{vect1}
\end{equation}
where the product runs over a finite set of indices, is an eigenvector of
$
\bar{L}_{0}
$
corresponding to the eigenvalue
\begin{equation}
\lambda = \sum_{n,\rho} n \epsilon_{n,\rho}.
\end{equation}
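For instance, the vector
$
\alpha^{\mu}_{-1} \alpha^{\nu}_{-2} \Omega
$
corresponds to
$
\epsilon_{1,\mu} = \epsilon_{2,\nu} = 1
$
and is an eigenvector of
$
\bar{L}_{0}
$
with the eigenvalue
$
\lambda = 1 + 2 = 3
$
(the level of the state).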
We will denote by
$
{\cal F}_{n}
$
the eigenspace of
$
\bar{L}_{0}
$
corresponding to the eigenvalue $n$. One can prove that
\begin{equation}
{\cal F}_{m} \cap {\cal F}_{n} = \{ 0 \}
\end{equation}
for
$
m \not= n
$
using a Vandermonde determinant. Because the subspaces
$
{\cal F}_{n}, \quad n \geq 0
$
generate the whole
$
{\cal F}^{(\alpha)}_{\rm cov}
$
we have the direct sum decomposition
\begin{equation}
{\cal F}^{(\alpha)}_{\rm cov} = \oplus_{n \geq 0} {\cal F}_{n}.
\end{equation}
Now we use the relations
\begin{equation}
[ \bar{L}_{0}, L_{-m} ] = m~L_{-m} \qquad
[ \bar{L}_{0}, K_{-m} ] = m~K_{-m} \qquad
[ \bar{L}_{0}, A^{j}_{-m} ] = m~A^{j}_{-m}
\end{equation}
and find out that the vector
\begin{equation}
\Psi^{\prime} = \prod L_{-m}^{\lambda_{m}} \prod K_{-n}^{\mu_{n}}
\prod (A^{j}_{-p})^{\beta_{p,j}}\Omega
\label{vect2}
\end{equation}
is an eigenvector of
$
\bar{L}_{0}
$
corresponding to the eigenvalue
\begin{equation}
\lambda^{\prime} = \sum_{n} n (\lambda_{n} + \mu_{n} + \sum_{j} \beta_{n,j}).
\end{equation}
Let us fix
$
N \in \mathbb{N}.
$
Then
$
{\cal F}_{N}
$
is generated by vectors of the type (\ref{vect1}) with
$
\lambda = N;
$
on the other hand the vectors of the type (\ref{vect2}) are also generating the whole Hilbert space but only those corresponding to
$
\lambda^{\prime} = N
$
are in
$
{\cal F}_{N}.
$
It follows that
$
{\cal F}_{N}
$
is generated by the vectors of the type (\ref{vect2}) corresponding to
$
\lambda^{\prime} = N.
$
Because the (finite) number of vectors of the type (\ref{vect1}) corresponding to
$
\lambda = N
$
is the same as the number of the vectors of the type (\ref{vect2}) corresponding to
$
\lambda^{\prime} = N
$
it means that the vectors of the type (\ref{vect2}) corresponding to
$
\lambda^{\prime} = N
$
must be linearly independent.
$\blacksquare$
We note that in our proof we did not have to compute the complicated determinant used in the proof from \cite{GSW}.
The rest of the proof is standard and can be found in \cite{GSW}. The final result is:
\begin{prop}
Let
$
D = 26
$
and the vector
$
\Psi \in {\cal F}^{(\alpha)}_{\rm cov}
$
verifying
\begin{equation}
L_{m}\Psi = 0 \quad \forall m \geq 0;
\end{equation}
then we can uniquely write it in the form
\begin{equation}
\Psi = f + s
\end{equation}
where $f$ is a DDF state,
$
s \in S
$
and we have
\begin{equation}
L_{0} f = f \qquad L_{0} s = s \qquad L_{m} s = 0 \quad (\forall m > 0).
\label{fs}
\end{equation}
\end{prop}
The conclusion of this analysis is:
\begin{thm}
The Hermitian form on the Hilbert space bundle
$
{\cal B}^{[\mu,\alpha]}_{\rm cov}
$
is positive semi-definite. If we factor out the states of null norm we obtain a representation of the Poincar\'e group equivalent to the representation in the bundle
$
{\cal B}^{[\mu,\alpha]}.
$
\end{thm}
{\bf Proof:} (i) If we use the preceding proposition we can write any element in the fiber over $p$ as
$
\Psi = f + s
$
where $f$ is a DDF state,
$
s \in S
$
and we have the relations (\ref{fs}). From these relations it easily follows that we have
\begin{equation}
< \Psi,\Psi>~=~<f,f>~~\geq 0
\end{equation}
so if we eliminate the null-norm states we end up with a factor Hilbert space bundle with fibres isomorphic to the subspace of DDF states.
(ii) We have to determine the representation of the stability subgroup
$
SO(D-1)
$
of the point
$
p^{(0)}.
$
It is clear that we have
\begin{equation}
[ J^{(\alpha)kl}, A^{j}_{n} ] = i~(\delta_{kj}~A^{l}_{n} - \delta_{lj}~A^{k}_{n}) \quad
j,k,l = 1,\dots,D-2
\end{equation}
so we have for any DDF state
\begin{equation}
J^{(\alpha)kl}f = j^{(\alpha)kl}f.
\label{ddf0}
\end{equation}
We still have to compute the action of the generators
$
J^{(\alpha)k,D-1}
$
on the fiber. It is important to note that from the first relation (\ref{fs}) we have
\begin{equation}
\alpha^{-}_{m}f = {\sqrt{2}\over \mu}~\bar{L}_{m}f \quad \forall m > 0;
\end{equation}
also it is easy to prove that
\begin{equation}
\alpha^{+}_{m}f = 0 \quad \forall m > 0.
\end{equation}
Using these relations it follows that for any two DDF states
$
f, f^{\prime}
$
we have
\begin{equation}
<f^{\prime}, J^{(\alpha)k,D-1}f> = <f^{\prime}, j^{(\alpha)k,D-1}f>
\end{equation}
where in the right-hand side we have the operators (\ref{p-b-bundle}). It is more complicated to extend this relation for
$
f^{\prime} \rightarrow \Psi \in {\cal F}^{(\alpha)}_{\rm cov};
$
for this we have to use the generic form of states
$
\Psi_{\lambda,\mu,f}
$
and commute
$
L_{m}
$
and
$
K_{m}
$
with
$
E^{j}(p^{(0)}).
$
As a result we have for any DDF state
\begin{equation}
J^{(\alpha)k,D-1}f = j^{(\alpha)k,D-1}f.
\label{ddf1}
\end{equation}
Next we note that we have for any DDF state
\begin{equation}
\bar{L}_{0} f = \left(1 + {\mu^{2}\over 2} \right)f;
\end{equation}
from here it follows that
\begin{equation}
<f^{\prime},\bar{L}^{T}_{0} f> = \left(1 + {\mu^{2}\over 2} \right)<f^{\prime},f>
\end{equation}
where
$
\bar{L}^{T}_{0} = L^{T}_{0}
$
is the transversal part of
$
\bar{L}_{0}
$
(i.e. it contains only the modes
$
1,\dots,D-2
$).
As above we can extend the relation for
$
f^{\prime} \rightarrow \Psi \in {\cal F}^{(\alpha)}_{\rm cov}
$
so we have
\begin{equation}
L^{T}_{0} f = \left(1 + {\mu^{2}\over 2} \right)f
\label{ddf2}
\end{equation}
for any DDF state.
From (\ref{ddf1}) and (\ref{ddf2}) it follows that the fiber over
$
p^{(0)}
$
of the fiber bundle
$
{\cal B}^{[\mu,\alpha]}_{\rm cov}
$
coincides with the fiber over the same point of the fiber bundle
$
{\cal B}^{[\mu,\alpha]}.
$
According to the standard theorem 9.20 of \cite{Va} it follows that the two representations of the Poincar\'e group are equivalent.
$\blacksquare$
\section{BRST Quantization of the Bosonic String\label{brst}}
Another possibility is to introduce ghost degrees of freedom and construct a gauge charge operator $Q$ which squares to zero
$
Q^{2} = 0
$
in such a way that there is a canonical isomorphism between the physical
Hilbert space and the factor space
$
Ker(Q)/Im(Q)
$
\cite{KO}, \cite{FO}, \cite{T}, \cite{BP}, \cite{FGZ}, \cite{P}. We provide here an elementary treatment. First we define the ghost Hilbert space
$
{\cal F}^{gh}_{1};
$
by definition it is generated by the operators
$
b_{m}, c_{m} \quad m \in \mathbb{Z}
$
from the vacuum
$\Omega_{gh} \in {\cal F}^{gh}_{1}$;
we assume that
\begin{equation}
b_{m}\Omega_{gh} = 0 \quad c_{m}\Omega_{gh} = 0 \quad \forall m > 0.
\end{equation}
These operators are subject to the following anticommutation relations:
\begin{equation}
\{b_{m}, b_{n}\} = 0 \quad \{c_{m}, c_{n}\} = 0
\quad \{b_{m}, c_{n}\} = \delta_{m+n} \cdot I;
\end{equation}
we also suppose that there is a conjugation operation in
$
{\cal F}^{gh}_{1}
$
such that
\begin{equation}
b_{m}^{\dagger} = b_{-m} \quad c_{m}^{\dagger} = c_{-m}.
\end{equation}
We can give a concrete realization as follows:
$
{\cal F}^{gh}_{1} = {\cal F}_{b,c} \otimes {\cal C}
$
where
$
{\cal F}_{b,c}
$
is the Fock space generated by the operators
$
b_{m}, c_{m} \quad m \in \mathbb{Z}^{*}
$
and
$
{\cal C}
$
is the Clifford algebra generated by
$
b_{0}, c_{0}.
$
\begin{prop}
The following operators
\begin{equation}
l^{(1)}_{m} = \sum_{n \in \mathbb{Z}} (m+n) :b_{m-n}c_{n}:
\end{equation}
are well defined on the algebraic Hilbert space and are verifying:
\begin{eqnarray}
[ l^{(1)}_{m}, b_{n}] = (m-n) b_{m+n} \qquad
[ l^{(1)}_{m}, c_{n}] = - (2m+n) c_{m+n}
\nonumber \\
~[ l^{(1)}_{m}, l^{(1)}_{n}] = (m-n) l^{(1)}_{m+n}
+ {1\over 6} m(1 - 13m^{2}) \delta_{m+n} \cdot I
\nonumber \\
(l^{(1)}_{m})^{\dagger} = l^{(1)}_{-m}.
\end{eqnarray}
\end{prop}
{\bf Proof:} We write
\begin{equation}
l^{(1)}_{m} = l_{m}^{\prime} + m b_{m} c_{0} + 2 m b_{0} c_{m}
\end{equation}
where
$
l_{m}^{\prime}
$
contains only the non-zero modes:
\begin{equation}
l_{m}^{\prime} = \sum_{n \not= 0,m} (m+n) :b_{m-n}c_{n}:
\end{equation}
For the non-zero modes the $2$-point functions are
\begin{eqnarray}
<\Omega_{gh},b_{m}c_{n}\Omega_{gh}> = \theta(m) \delta_{m+n} \quad
<\Omega_{gh},c_{m}b_{n}\Omega_{gh}> = \theta(m) \delta_{m+n} \quad
\nonumber \\
<\Omega_{gh},b_{m}b_{n}\Omega_{gh}> = 0 \quad
<\Omega_{gh},c_{m}c_{n}\Omega_{gh}> = 0
\end{eqnarray}
and we can compute the commutators from the statement using Wick theorem.
$\blacksquare$
Next we have
\begin{cor}
Let us consider in the Hilbert space
$
{\cal H} \equiv {\cal H}^{[\mu,\alpha]}_{cov} \otimes {\cal F}^{gh}_{1}
$
the following operators:
$
L_{m}^{(\alpha)}
$
cf. (\ref{vir-cov}) and
\begin{equation}
{\cal L}_{m}^{(\alpha)} = L_{m}^{(\alpha)} \otimes I_{2} + I_{1} \otimes l_{m}^{(1)};
\end{equation}
then we have
\begin{eqnarray}
[ {\cal L}_{m}^{(\alpha)}, {\cal L}_{n}^{(\alpha)}] = (m-n) {\cal L}_{m+n}^{(\alpha)}
+ m \left( {D-26\over 12} m^{2} + 2a - {D-2\over 12} \right) \delta_{m+n} \cdot I
\nonumber \\
({\cal L}_{m}^{(\alpha)})^{\dagger} = {\cal L}_{-m}^{(\alpha)}.
\end{eqnarray}
\end{cor}
In this enlarged Hilbert space we have \cite{KO}:
\begin{prop}
The following operator
\begin{equation}
Q \equiv \sum L^{(\alpha)}_{-m} c_{m} - {1\over 2} \sum (m-n) :c_{-m} c_{-n} b_{m+n}:
\end{equation}
is well defined on the algebraic Hilbert space and is formally self-adjoint; it verifies
\begin{equation}
Q^{2} = 0
\end{equation}
{\it iff}
$
D = 26
$
and
$a = 1$.
\end{prop}
{\bf Proof:} We separate the non-zero modes as before:
\begin{equation}
Q = Q_{0} + {\cal L}_{0}^{(\alpha)} c_{0} + C^{(1)}_{0} b_{0}
\end{equation}
where
\begin{equation}
Q_{0} \equiv \sum_{m \not= 0} {\cal L}^{(\alpha)}_{-m} c_{m}
- {1\over 2} \sum_{m,n \not= 0} \sum_{m+n \not= 0} (m-n) :c_{-m} c_{-n} b_{m+n}:
\end{equation}
and
\begin{equation}
C^{(1)}_{m} \equiv {1\over 2} \sum_{p+q=m} (p - q) :c_{p} c_{q}:
= {1\over 2} \sum_{p+q=m} (p - q) c_{p} c_{q}
\end{equation}
The most convenient way to prove the theorem is the following. One proves by direct computation (using our preferred method based on Wick theorem) the following formulas:
\begin{eqnarray}
\{Q, b_{m} \} = {\cal L}_{m}^{(\alpha)} \qquad
\{ Q, c_{m} \} = C^{(1)}_{m}
\nonumber \\
~[ Q, {\cal L}_{m}^{(\alpha)} ] = \rho_{m} c_{m} \qquad
~[ Q, K_{m} ] = - m \sum K_{m-n} c_{n}
\label{Q-b1}
\end{eqnarray}
where
\begin{equation}
\rho_{m} \equiv - m \left( {D-26\over 12} m^{2} + 2a - {D-2\over 12}\right).
\end{equation}
We now use the following observation. According to proposition \ref{basis} we can take in
${\cal H}$
the following basis:
\begin{equation}
\Psi^{\prime} = \prod b_{-i} \prod c_{-j} \prod L^{(\alpha)}_{-m} \prod K_{-n}~f
\end{equation}
where $f$ are DDF states, the indices of type $m, n$ are strictly positive and the indices of the type $i,j$ are
$\geq 0$.
It is easy to substitute
$
L^{(\alpha)}_{m} = {\cal L}_{m}^{(\alpha)} - l_{m}^{(1)}
$
and consider the new basis
\begin{equation}
\Psi = \prod b_{-i} \prod c_{-j} \prod {\cal L}^{(\alpha)}_{-m} \prod K_{-n}~f
\label{basis-gh}
\end{equation}
Because
$
{\cal L}_{m}^{(\alpha)}~f = 0\quad \forall m \geq 0
$
we easily find out that
\begin{equation}
Qf = 0
\label{Q-b2}
\end{equation}
for any DDF state $f$. The operator $Q$ is perfectly well defined by (\ref{Q-b1}) and
(\ref{Q-b2}); indeed we know how to act with $Q$ on states of the form (\ref{basis-gh}): we
commute $Q$ using (\ref{Q-b1}) till it hits a DDF state and gives $0$ according to
(\ref{Q-b2}). Now it is easy to obtain from (\ref{Q-b1}):
\begin{equation}
[ Q^{2}, b_{m} ] = \rho_{m} c_{m} \qquad
[ Q^{2}, c_{m} ] = 0 \qquad
[ Q^{2}, {\cal L}_{m}^{(\alpha)} ] = \rho_{m} C^{(1)}_{m} \qquad
[ Q^{2}, K_{m} ] = 0.
\label{Q-b3}
\end{equation}
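(For the first relation one uses the fact that $Q$ is odd, so
$
Q^{2} = {1\over 2} \{ Q, Q \}
$
is even, and the graded Jacobi identity gives
$
[ Q^{2}, b_{m} ] = [ Q, \{ Q, b_{m} \} ] = [ Q, {\cal L}^{(\alpha)}_{m} ] = \rho_{m}~c_{m};
$
the other relations are obtained in the same way.)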
Because we obviously have
$
Q^{2}f = 0
$
it immediately follows that
\begin{equation}
Q^{2} = 0 \Longleftrightarrow \rho_{m} = 0 \Longleftrightarrow D = 26, a = 1
\label{Q-b5}
\end{equation}
i.e. the statement of the theorem.
$\blacksquare$
\begin{rem}
One can directly prove that
\begin{equation}
Q^{2} = {1\over 2} \sum m \left( {D-26\over 12} m^{2} + 2a - {D-2\over 12}\right)
:c_{-m} c_{m}:
\end{equation}
which is another way to obtain the result. Let us also note that if the conditions
$
D = 26, a = 1
$
are meet then we also have no anomalies in the Virasoro algebra:
\begin{equation}
[ {\cal L}_{m}^{(\alpha)}, {\cal L}_{n}^{(\alpha)}] = (m-n) {\cal L}_{m+n}^{(\alpha)}.
\end{equation}
\end{rem}
To analyze the cohomology of the BRST operator $Q$ we need the following result:
\begin{prop}
The operator
$\tilde{Q}$
is well defined on the algebraic Hilbert space through the following formulas:
\begin{equation}
\{\tilde{Q}, b_{m} \} = 0 \qquad
\{ \tilde{Q}, c_{m} \} = \delta_{m} \cdot I \qquad
[ \tilde{Q}, {\cal L}_{m}^{(\alpha)} ] = - m b_{m} \qquad
[ \tilde{Q}, K_{m} ] = 0
\label{tQ-b1}
\end{equation}
and
\begin{equation}
\tilde{Q}f = 0
\end{equation}
for any DDF state $f$. We also have
\begin{equation}
\tilde{Q}^{\dagger} = \tilde{Q} \qquad \tilde{Q}^{2} = 0.
\end{equation}
\end{prop}
{\bf Proof:} Because the operators
$
b_{m}, c_{m}, {\cal L}_{m}^{(\alpha)}, K_{m}
$
are connected by various relations, we have to verify the Jacobi identities of the type:
\begin{equation}
[[X,Y], \tilde{Q} ]_{\rm graded} + {\rm cyclic~permutations} = 0
\end{equation}
where
$X, Y$
are operators from the set
$
b_{m}, c_{m}, {\cal L}_{m}^{(\alpha)}, K_{m}.
$
The non-trivial ones are corresponding to the pairs
$
({\cal L}_{m}^{(\alpha)}, {\cal L}_{n}^{(\alpha)})
$
and
$
({\cal L}_{m}^{(\alpha)}, c_{n})
$
and are easily checked.
$\blacksquare$
The main result is the following
\begin{thm}
If
$
\Psi \in {\cal H}
$
verifies
$
Q \Psi = 0
$
then it is of the form
\begin{equation}
\Psi = Q \Phi + f_{1} + b_{0} f_{2} + c_{0} f_{3}
\end{equation}
where
$
f_{j}
$
are DDF states.
\end{thm}
{\bf Proof:}
A good strategy to determine the cohomology of the operator $Q$ is to mimic Hodge theorem i.e. to find a homotopy operator
$
\tilde{Q}
$
such that the spectrum of the ``Laplacian''
\begin{equation}
\Delta \equiv Q\tilde{Q} + \tilde{Q}Q
\end{equation}
can be easily determined. We take such an operator to be the one determined in the previous proposition. It is now elementary to see that the Laplace operator is alternatively given by:
\begin{eqnarray}
[ \Delta, b_{m} ] = - m b_{m} \quad [ \Delta, c_{m} ] = - m c_{m} \qquad
[ \Delta, {\cal L}^{(\alpha)}_{m} ] = - m {\cal L}^{(\alpha)}_{m} \quad
[ \Delta, K_{m} ] = - m K_{m}
\nonumber \\
\Delta f = 0.
\end{eqnarray}
It follows that we have on states of the form (\ref{basis-gh})
\begin{equation}
\Delta \Psi = (\sum m + \sum n + \sum i + \sum j) \Psi;
\label{delta}
\end{equation}
we observe that the eigenvalues from (\ref{delta}) are
$\geq 0$,
the equality sign being attained only for vectors of the form
$
f_{1} + b_{0} f_{2} + c_{0} f_{3}
$
with
$
f_{j}
$
DDF states. It means that every vector in
${\cal H}$
is of the form
\begin{equation}
\Psi = \Psi_{0} + f_{1} +b_{0} f_{2} + c_{0} f_{3}
\end{equation}
where
$
\Psi_{0}
$
belongs to the subspace
$
{\cal H}^{\prime} \subset {\cal H}
$
of vectors with strictly positive eigenvalues of
$
\Delta.
$
Now suppose that the vector
$\Psi$
verifies the equation from the statement
$
Q\Psi = 0.
$
Then it is easy to see that we also have
$
Q\Psi_{0} = 0
$
so
$
\Delta\Psi_{0} = Q \tilde{Q} \Psi_{0}.
$
We can write
$
\Psi_{0} = \sum \Psi_{\omega}
$
where
$
\Psi_{\omega}
$
are linear independent vectors from
${\cal H}^{\prime}$
corresponding to distinct eigenvalues
$\omega > 0$.
Then the preceding relation is equivalent to
$
\Psi_{\omega} = {1\over\omega}~Q \tilde{Q} \Psi_{\omega};
$
if we define
$
\Phi = \sum {1\over \omega}~\tilde{Q} \Psi_{\omega}
$
then it follows that
$
\Psi_{0} = Q \Phi
$
and this finishes the proof.
$\blacksquare$
We conclude that the cohomology of the operator $Q$ is described by three copies of the physical space of DDF states. To eliminate this tripling we proceed as follows. We construct the Hilbert space
${\cal F}^{gh}_{1}$
such that it also verifies
\begin{equation}
b_{0} \Omega_{gh} = 0;
\end{equation}
in this case
${\cal F}^{gh}_{1}$
is a Fock space. Then we construct ${\cal H}$ as above and consider the subspace
\begin{equation}
{\cal H}_{0} \equiv \{ \Psi \in {\cal H} | b_{0}\Psi = 0 \quad
{\cal L}^{(\alpha)}_{0} \Psi = 0 \}.
\end{equation}
This subspace is left invariant by the operator $Q$; if
$
\Psi \in {\cal H}_{0}
$
verifies
$
Q\Psi = 0
$
then we have, similarly to the preceding theorem,
\begin{equation}
\Psi = Q\Phi + f
\end{equation}
with $f$ some DDF state.
\section{The Covariant Neveu-Schwarz Superstring}
We proceed as before from the Hilbert space
$
{\cal H}^{(NS)}_{\rm cov}
$
generated by the operators
$
\alpha^{\mu}_{m}, m \in \mathbb{Z}, \mu = 0,\dots,D-1
$
and
$
b^{\mu}_{r}, r \in {1\over 2} + \mathbb{Z}, \mu = 0,\dots,D-1
$
such that:
\begin{eqnarray}
[ \alpha^{\mu}_{m}, \alpha^{\nu}_{n} ] = - \eta_{\mu\nu}~m~\delta_{m+n} \cdot I,
\quad \forall m, n
\nonumber \\
\alpha^{\mu}_{m} \Omega = 0, \quad m > 0
\nonumber \\
(\alpha^{\mu}_{m})^{+} = \alpha^{\mu}_{-m} \quad \forall m;
\nonumber \\
\{ b^{\mu}_{r}, b^{\nu}_{s} \} = - \eta_{\mu\nu}~\delta_{r+s} \cdot I,
\quad \forall r, s
\nonumber \\
b^{\mu}_{r} \Omega = 0, \quad r > 0
\nonumber \\
(b^{\mu}_{r})^{+} = b^{\mu}_{-r} \quad \forall r
\end{eqnarray}
and define the covariant Virasoro operators
\begin{equation}
L^{(NS)}_{m} \equiv - {1\over 2}~\eta_{\mu\nu}~\sum_{n \in \mathbb{Z}}~
:\alpha^{\mu}_{m-n} \alpha^{\nu}_{n}:
- {1\over 2}~\eta_{\mu\nu}~\sum_{r \in 1/2+\mathbb{Z}}~r~:b^{\mu}_{-r} b^{\nu}_{m+r}:
- a~\delta_{m} \cdot I
\label{vir-NS}
\end{equation}
and the supersymmetric partners
\begin{equation}
G_{r} \equiv - \eta_{\mu\nu}~\sum_{n \in \mathbb{Z}} \alpha^{\mu}_{-n} b^{\nu}_{n+r}
\end{equation}
such that we have the following relations:
\begin{eqnarray}
[ L^{(NS)}_{m}, L^{(NS)}_{n} ] = (m - n) L^{(NS)}_{m+n}
+ \left[ D~{ m (m^{2} - 1) \over 8} + 2 m a \right]~\delta_{m+n}~\cdot I.
\nonumber \\
~[ L^{(NS)}_{m}, G_{r} ] = \left({m\over 2} - r\right) G_{m+r}
\nonumber \\
\{ G_{r}, G_{s} \} = 2 L^{(NS)}_{r+s}
+ \left[ {D\over 2}~\left(r^{2} - {1\over 4}\right) + 2 a \right]~\delta_{r+s}~\cdot I
\nonumber \\
(L^{(NS)}_{m})^{\dagger} = L^{(NS)}_{-m} \qquad G_{r}^{\dagger} = G_{-r}.
\end{eqnarray}
In this Hilbert space we have an action of the supersymmetric Poincar\'e algebra. Then we can obtain the Neveu-Schwarz case if we take
$D = 10,~a = 1/2$
and restrict the states by the conditions:
\begin{equation}
L_{m}^{(NS)} \Psi = 0 \quad \forall m \geq 0 \qquad
G_{r} \Psi = 0 \quad \forall r > 0.
\end{equation}
The DDF states can be constructed as before. First we have to construct operators
$
A^{j}_{m},~B^{j}_{r}, j = 1,\dots,D-2
$
such that they verify the same algebra as the operators
$
\alpha^{j}_{m}, b^{j}_{r}
$
and they commute with
$
G_{r},\forall r;
$
(this implies that they commute with
$
L_{m}, \forall m.)
$
The DDF states are generated by these operators from the vacuum.
For the BRST description we need to enlarge the ghost space: we consider the Fock space
$
{\cal F}^{gh}_{2}
$
generated by the operators
$
\beta_{r}, \gamma_{r} \quad r \in {1\over 2} + \mathbb{Z}
$
from the vacuum
$\Omega_{gh} \in {\cal F}^{gh}_{2}$;
we assume that
\begin{equation}
\beta_{r}\Omega_{gh} = 0 \quad \gamma_{r}\Omega_{gh} = 0 \quad \forall r > 0.
\end{equation}
These operators are subject to the following commutation relations:
\begin{equation}
[\beta_{r}, \beta_{s} ] = 0 \quad [ \gamma_{r}, \gamma_{s} ] = 0
\quad [ \gamma_{r}, \beta_{s} ] = \delta_{r+s} \cdot I;
\end{equation}
we also suppose that there is a conjugation operation in
$
{\cal F}^{gh}_{2}
$
such that
\begin{equation}
\beta_{r}^{\dagger} = \beta_{-r} \quad \gamma_{r}^{\dagger} = \gamma_{-r}.
\end{equation}
We can define as usual the algebraic Hilbert space (the subspace of vectors generated by a finite number of operators
$
\beta_{r}, \gamma_{r} \quad r \leq 0)
$
and normal ordering in
$
{\cal F}^{gh}_{2}.
$
\begin{prop}
The following operators
\begin{equation}
l^{(2)}_{m} = \sum_{r \in 1/2 + \mathbb{Z}} \left({m\over 2}+r\right) :\beta_{m-r}\gamma_{r}:
\end{equation}
are well defined on the algebraic Hilbert space and are verifying:
\begin{eqnarray}
[ l^{(2)}_{m}, \beta_{r}] = \left({m\over 2}-r\right) \beta_{m+r} \qquad
[ l^{(2)}_{m}, \gamma_{r}] = - \left({3m\over 2}+r\right) \gamma_{m+r}
\nonumber \\
~[ l^{(2)}_{m}, l^{(2)}_{n}] = (m-n) l^{(2)}_{m+n}
+ {1\over 12} m(11m^{2} + 1) \delta_{m+n} \cdot I
\nonumber \\
(l^{(2)}_{m})^{\dagger} = l^{(2)}_{-m}.
\end{eqnarray}
\end{prop}
{\bf Proof:} The $2$-point functions are
\begin{eqnarray}
<\Omega_{gh},\beta_{r}\gamma_{s}\Omega_{gh}> = -\theta(r) \delta_{r+s} \quad
<\Omega_{gh},\gamma_{r}\beta_{s}\Omega_{gh}> = \theta(r) \delta_{r+s} \quad
\nonumber \\
<\Omega_{gh},\beta_{r}\beta_{s}\Omega_{gh}> = 0 \quad
<\Omega_{gh},\gamma_{r}\gamma_{s}\Omega_{gh}> = 0
\end{eqnarray}
and we can compute the commutators from the statement using Wick theorem.
$\blacksquare$
Next we have
\begin{cor}
Let us consider in the Hilbert space
$
{\cal F}^{gh}_{NS} \equiv {\cal F}^{gh}_{1} \otimes {\cal F}^{gh}_{2}
$
the following operators
\begin{equation}
l^{(NS)}_{m} = l^{(1)}_{m} \otimes I_{2} + I_{1} \otimes l^{(2)}_{m};
\end{equation}
then we have
\begin{eqnarray}
[ l_{m}^{(NS)}, l_{n}^{(NS)}] = (m-n) l_{m+n}^{(NS)}
+ {1\over 4}~m (1 - 5m^{2}) \delta_{m+n} \cdot I
\nonumber \\
(l_{m}^{(NS)})^{\dagger} = l_{-m}^{(NS)}.
\end{eqnarray}
\end{cor}
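The central term is simply the sum of the two ghost anomalies (an elementary arithmetic check):
\begin{equation}
{1\over 6}~m (1 - 13 m^{2}) + {1\over 12}~m (11 m^{2} + 1)
= {1\over 12}~m (3 - 15 m^{2}) = {1\over 4}~m (1 - 5 m^{2}).
\end{equation}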
Next, we consider in
$
{\cal F}^{gh}_{NS}
$
the following operators
\begin{equation}
g_{r} \equiv - 2 \sum_{n \in \mathbb{Z}} b_{-n} \gamma_{n+r}
+ \sum_{n \in \mathbb{Z}} \left( {n\over 2} - r\right) c_{-n} \beta_{n+r}
\end{equation}
which are well-defined on the algebraic Fock space. We have
\begin{prop}
The following relations are verified
\begin{eqnarray}
g_{r}^{\dagger} = g_{-r}
\nonumber \\
~[l_{m}^{(NS)}, g_{r} ] = \left( {m\over 2} - r\right) g_{m+r}
\nonumber \\
\{g_{r},g_{s}\} = 2 l_{r+s}^{(NS)} + \left( {1\over 4}- 5r^{2} \right) \delta_{r+s} \cdot I.
\end{eqnarray}
\end{prop}
The proofs of the first two relations are elementary. For the last one we use Wick theorem.
Next we have
\begin{cor}
Let us consider in the Hilbert space
$
{\cal H} \equiv {\cal H}^{(NS)}_{\rm cov} \otimes^{s} {\cal F}^{gh}_{NS}
$
where the skew tensor product
$
\otimes^{s}
$
is such that we have normal (anti)commutation relations i.e. the Fermionic operators
$
b^{\mu}_{r}
$
are anticommuting with
$
b_{m}, c_{m}.
$
We then define the operators
\begin{eqnarray}
{\cal L}_{m}^{(NS)} = L^{(NS)}_{m} \otimes I_{2} + I_{1} \otimes l^{(NS)}_{m}
\nonumber \\
{\cal G}_{r} = G_{r} \otimes I_{2} + I_{1} \otimes g_{r}
\end{eqnarray}
and we have
\begin{eqnarray}
[ {\cal L}_{m}^{(NS)}, {\cal L}_{n}^{(NS)}] = (m-n) {\cal L}_{m+n}^{(NS)}
+ m \left( {D-10\over 8} m^{2} + 2a - {D-2\over 8} \right) \delta_{m+n} \cdot I
\nonumber \\
\{ {\cal G}_{r}, {\cal G}_{s} \} = 2 {\cal L}^{(NS)}_{r+s}
+ \left( {D-10\over 2}~r^{2} + 2 a - {D-2\over 8}\right)~\delta_{r+s}~\cdot I
\nonumber \\
({\cal L}^{(NS)}_{m})^{\dagger} = {\cal L}^{(NS)}_{-m} \qquad
{\cal G}_{r}^{\dagger} = {\cal G}_{-r}.
\end{eqnarray}
The anomalies cancel iff
$D = 10, a = 1/2$.
\end{cor}
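Indeed, for
$D = 10$
and
$a = 1/2$
both anomaly coefficients vanish:
$
{D-10\over 8} = {D-10\over 2} = 0
$
and
$
2a - {D-2\over 8} = 1 - 1 = 0.
$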
In this enlarged Hilbert space we have:
\begin{prop}
The following operator
\begin{eqnarray}
Q = Q^{(NS)} \equiv \sum L^{(NS)}_{-m} c_{m}
- {1\over 2} \sum (m-n) :c_{-m} c_{-n} b_{m+n}:
\nonumber \\
+ \sum G_{-r} \gamma_{r} + \sum c_{-m} l^{(2)}_{m} + \sum b_{-m} C^{(2)}_{m}
\end{eqnarray}
where
\begin{equation}
C^{(2)}_{m} \equiv - \sum_{r+s=m} :\gamma_{r}\gamma_{s}: =
- \sum_{r+s=m} \gamma_{r}\gamma_{s}
\end{equation}
is well defined on the algebraic Hilbert space and it is formally self-adjoint; it verifies
\begin{equation}
Q^{2} = 0
\end{equation}
{\it iff}
$
D = 10
$
and
$a = 1/2$.
\end{prop}
{\bf Proof:} It is convenient to denote by
$
Q_{j}, j = 1,\dots,5
$
the five terms in the expression of the BRST charge and write
\begin{equation}
Q = Q^{\prime} + \sum_{j=3}^{5} Q_{j}
\end{equation}
where the sum of the first two terms
$
Q^{\prime}
$
can be obtained from $Q$ of the preceding Section with the substitution
$
L^{(\alpha)}_{m} \longrightarrow L^{(NS)}_{m}
$
so we can use some of the computations performed there. We introduce the notation
\begin{equation}
H_{r} \equiv k_{\mu}~b^{\mu}_{r}
\end{equation}
and we have as before:
\begin{eqnarray}
\{Q, b_{m} \} = {\cal L}_{m}^{(NS)} \qquad
\{ Q, c_{m} \} = C^{(NS)}_{m} \equiv C^{(1)}_{m} + C^{(2)}_{m}
\nonumber \\
~[ Q, \beta_{r} ] = {\cal G}_{r} \qquad
[ Q, \gamma_{r} ] = - \sum \left( {3m\over 2} + r\right) c_{-m} \gamma_{m+r}
\nonumber \\
~[ Q, {\cal L}_{m}^{(NS)} ] = \rho_{m} c_{m} \qquad
\{ Q, {\cal G}_{r} \} = \lambda_{r} \gamma_{r}
\nonumber \\
~[ Q, K_{m} ] = - m \sum K_{m-n} c_{n} + \sum H_{m-r} \gamma_{r}
\nonumber \\
\{ Q, H_{r} \} = - \sum \left( {m\over 2} - r\right) H_{r-m} c_{m} + \sum K_{r-s} \gamma_{s}.
\label{Q-ns1}
\end{eqnarray}
where
\begin{equation}
\rho_{m} \equiv - m \left( {D-10\over 8} m^{2} + 2a - {D-2\over 8}\right)
\qquad
\lambda_{r} \equiv {D-10\over 2} r^{2} + 2a - {D-2\over 8}.
\end{equation}
The only really complicated computation is for the anticommutator
$
\{ Q, {\cal G}_{r} \}.
$
One can prove that we can take in
${\cal H}$
the following basis:
\begin{equation}
\Psi = \prod b_{-i} \prod c_{-j} \prod \beta_{-r} \prod \gamma_{-s}
\prod {\cal L}_{-m}^{(NS)} \prod {\cal G}_{-t} \prod K_{-n} \prod H_{-u}~f
\end{equation}
where $f$ are DDF states, the indices of the type $m,n \in \mathbb{Z}$ and $r,s,t,u \in 1/2 + \mathbb{Z}$
take strictly positive values and the indices of the type $i,j \in \mathbb{Z}$ are $\geq 0$.
Because
$
L_{m}^{(NS)} f = 0 \quad \forall m \geq 0 \qquad
G_{r} f = 0 \quad \forall r > 0
$
we easily find out that
\begin{equation}
Qf = 0
\label{Q-ns2}
\end{equation}
for any DDF state $f$. We argue as before that the operator $Q$ is well defined by (\ref{Q-ns1}) and (\ref{Q-ns2}). Now it is easy to obtain from (\ref{Q-ns1}):
\begin{eqnarray}
[ Q^{2}, b_{m} ] = \rho_{m} c_{m} \qquad
[ Q^{2}, c_{m} ] = 0 \qquad
[ Q^{2}, \beta_{r} ] = \lambda_{r} \gamma_{r}\qquad
[ Q^{2}, \gamma_{r} ] = 0
\nonumber \\
~[ Q^{2}, {\cal L}_{m}^{(NS)} ] = \rho_{m} C^{(NS)}_{m} \qquad
[ Q^{2}, {\cal G}_{r} ] = \lambda_{r} C^{(3)}_{r} \qquad
[ Q^{2}, K_{m} ] = 0\qquad
[ Q^{2}, H_{r} ] = 0.
\label{Q-ns3}
\end{eqnarray}
Because we obviously have
$
Q^{2}f = 0
$
it immediately follows that
\begin{equation}
Q^{2} = 0 \Longleftrightarrow \rho_{m} = 0 \quad \lambda_{r} = 0 \Longleftrightarrow
D = 10, a = 1/2
\label{Q-ns5}
\end{equation}
i.e. the statement of the theorem.
$\blacksquare$
To analyze the cohomology of the BRST operator
$Q$
we construct as before, its homotopy:
\begin{prop}
The operator
$\tilde{Q}$
is well defined on the algebraic Hilbert space through the following formulas:
\begin{eqnarray}
\{\tilde{Q}, b_{m} \} = 0 \qquad
\{ \tilde{Q}, c_{m} \} = \delta_{m} \cdot I \qquad
\{\tilde{Q}, \beta_{r} \} = 0 \qquad
\{ \tilde{Q}, \gamma_{r} \} = 0
\nonumber \\
~[ \tilde{Q}, {\cal L}_{m}^{(NS)} ] = - m b_{m} \qquad
\{ \tilde{Q}, {\cal G}_{r} \} = - r \beta_{r} \qquad
[ \tilde{Q}, K_{m} ] = 0 \qquad
[ \tilde{Q}, H_{r} ] = 0
\label{tQ-ns1}
\end{eqnarray}
and
\begin{equation}
\tilde{Q}f = 0
\end{equation}
for any DDF state $f$. We also have
\begin{equation}
\tilde{Q}^{\dagger} = \tilde{Q} \qquad \tilde{Q}^{2} = 0.
\end{equation}
\end{prop}
{\bf Proof:} We have to verify the Jacobi identities of the type:
\begin{equation}
[[X,Y], \tilde{Q} ]_{\rm graded} + {\rm cyclic~permutations} = 0
\end{equation}
where
$X, Y$
are operators from the set
$
b_{m}, c_{m}, \beta_{r}, \gamma_{r}, {\cal L}_{m}^{(NS)}, {\cal G}_{r}, K_{m}, H_{r}.
$
We have some non-trivial ones corresponding to pairs
$
({\cal L}_{m}^{(NS)}, {\cal L}_{n}^{(NS)}),~
({\cal L}_{m}^{(NS)}, c_{n}),~
({\cal L}_{m}^{(NS)}, {\cal G}_{r}),
({\cal G}_{r}, {\cal G}_{s})
$
and
$
({\cal G}_{r}, \gamma_{s}).
$
$\blacksquare$
The main result is similar to the one in the previous Section:
\begin{thm}
If
$
\Psi \in {\cal H}
$
verifies
$
Q\Psi = 0
$
then it is of the form
\begin{equation}
\Psi = Q\Phi + f_{1} + b_{0} f_{2} + c_{0} f_{3}
\end{equation}
where
$
f_{j}
$
are DDF states.
\end{thm}
{\bf Proof:}
The ``Laplacian'' is, as before,
$
\Delta \equiv Q\tilde{Q} + \tilde{Q}Q.
$
It is now elementary to determine the alternative expression:
\begin{eqnarray}
[ \Delta, b_{m} ] = - m b_{m} \quad [ \Delta, c_{m} ] = - m c_{m} \quad
[ \Delta, \beta_{r} ] = - r \beta_{r} \quad
[ \Delta, \gamma_{r} ] = - r \gamma_{r} \quad
\nonumber \\
~[ \Delta, {\cal L}^{(NS)}_{m} ] = - m {\cal L}^{(NS)}_{m} \quad
[ \Delta, {\cal G}_{r} ] = - r {\cal G}_{r} \quad
[ \Delta, K_{m} ] = - m K_{m} \quad
[ \Delta, H_{r} ] = - r H_{r}
\nonumber \\
\Delta f = 0.
\end{eqnarray}
It follows that we have
\begin{equation}
\Delta \Psi = (\sum m + \sum n + \sum i + \sum j + \sum r + \sum s + \sum t + \sum u ) \Psi;
\label{delta-ns}
\end{equation}
we observe that the eigenvalues from (\ref{delta-ns}) are
$\geq 0$
the equality sign being true only for vectors of the form
$
f_{1} + b_{0} f_{2} + c_{0} f_{3}
$
with
$
f_{j}
$
DDF states, as in the previous Section. From now on the argument is the same as there.
$\blacksquare$
To eliminate this tripling we proceed, again, as in the previous Section: we construct the Fock space
${\cal F}^{gh}_{NS}$
such that it also verifies
\begin{equation}
b_{0} \Omega_{gh} = 0.
\end{equation}
Then we construct ${\cal H}$ as above and consider the subspace
\begin{equation}
{\cal H}_{0} \equiv \{ \Psi \in {\cal H} | \quad b_{0}\Psi = 0 \quad
{\cal L}^{(NS)}_{0} \Psi = 0 \}.
\end{equation}
This subspace is left invariant by the operator $Q$ and if
$Q^{(NS)}\Psi = 0$
then we have, similarly to the preceding theorem,
\begin{equation}
\Psi = Q^{(NS)}\Phi + f
\end{equation}
with $f$ some DDF state.
\section{The Quantum Ramond Superstring}
The modifications with respect to the preceding Section are minimal. We start from the Hilbert space
$
{\cal H}^{(R)}_{\rm cov}
$
generated by the operators
$
\alpha^{\mu}_{m}, m \in \mathbb{Z}, \mu = 0,\dots,D-1
$
and
$
d^{\mu}_{m}, m \in \mathbb{Z}, \mu = 0,\dots,D-1
$
such that:
\begin{eqnarray}
[ \alpha^{\mu}_{m}, \alpha^{\nu}_{n} ] = - \eta_{\mu\nu}~m~\delta_{m+n} \cdot I,
\quad \forall m, n
\nonumber \\
\alpha^{\mu}_{m} \Omega = 0, \quad m > 0
\nonumber \\
(\alpha^{\mu}_{m})^{+} = \alpha^{\mu}_{-m} \quad \forall m;
\nonumber \\
\{ d^{\mu}_{m}, d^{\nu}_{n} \} = - \eta_{\mu\nu}~\delta_{m+n} \cdot I,
\quad \forall m, n
\nonumber \\
d^{\mu}_{m} \Omega = 0, \quad m > 0
\nonumber \\
(d^{\mu}_{m})^{+} = d^{\mu}_{-m} \quad \forall m
\end{eqnarray}
and define the covariant Virasoro operators
\begin{equation}
L^{(R)}_{m} \equiv - {1\over 2}~\eta_{\mu\nu}~\sum_{n \in \mathbb{Z}}~
:\alpha^{\mu}_{m-n} \alpha^{\nu}_{n}:
- {1\over 2}~\eta_{\mu\nu}~\sum_{n \in \mathbb{Z}}~n~:d^{\mu}_{-n} d^{\nu}_{m+n}:
- a~\delta_{m} \cdot I
\label{vir-R}
\end{equation}
and the supersymmetric partners
\begin{equation}
F_{m} \equiv - \eta_{\mu\nu}~\sum_{n \in \mathbb{Z}} \alpha^{\mu}_{-n} d^{\nu}_{n+m}
\end{equation}
such that we have the following relations:
\begin{eqnarray}
[ L^{(R)}_{m}, L^{(R)}_{n} ] = (m - n) L^{(R)}_{m+n}
+ m \left( { D \over 8} m^{2} + 2 a \right)~\delta_{m+n}~\cdot I.
\nonumber \\
~[ L^{(R)}_{m}, F_{n} ] = \left({m\over 2} - n\right) F_{m+n}
\nonumber \\
\{ F_{m}, F_{n} \} = 2 L^{(R)}_{m+n}
+ \left( {D\over 2}~m^{2} + 2 a \right)~\delta_{m+n}~\cdot I
\nonumber \\
(L^{(R)}_{m})^{\dagger} = L^{(R)}_{-m} \qquad F_{m}^{\dagger} = F_{-m}.
\end{eqnarray}
In this Hilbert space we have an action of the supersymmetric Poincar\'e algebra. Then we can obtain the Ramond case if we take
$D = 10, a = 0$
and restrict the states by the conditions:
\begin{equation}
L^{(R)}_{m}\Psi = 0 \quad \forall m \geq 0 \qquad
F_{m} \Psi = 0 \quad \forall m \geq 0.
\end{equation}
The DDF states can be constructed as before. First we have to construct operators
$
A^{j}_{m},~D^{j}_{m}, j=1,\dots,D-2
$
such that they verify the same algebra as the operators
$
\alpha^{j}_{m}, d^{j}_{m}
$
and commute with
$
F_{m}, \forall m
$
(this implies that they commute with
$
L_{m}, \forall m.)
$
The DDF states are generated by these operators from the vacuum.
For the BRST description we need to enlarge the ghost space: we consider the Fock space
$
{\cal F}^{gh}_{3}
$
generated by the operators
$
\beta_{m}, \gamma_{m} \quad m \in \mathbb{Z}
$
from the vacuum
$\Omega_{gh} \in {\cal F}^{gh}_{3}$;
we assume that
\begin{equation}
\beta_{m}\Omega_{gh} = 0 \quad \gamma_{m}\Omega_{gh} = 0 \quad \forall m > 0.
\end{equation}
These operators are subject to the following commutation relations:
\begin{equation}
[\beta_{m}, \beta_{n} ] = 0 \quad [ \gamma_{m}, \gamma_{n} ] = 0 \quad
[ \gamma_{m}, \beta_{n} ] = \delta_{m+n} \cdot I;
\end{equation}
we also suppose that there is a conjugation operation in
$
{\cal F}^{gh}_{3}
$
such that
\begin{equation}
\beta_{m}^{\dagger} = \beta_{-m} \quad \gamma_{m}^{\dagger} = \gamma_{-m}.
\end{equation}
We can define as usual the algebraic Hilbert space (the subspace of vectors generated by a finite number of operators
$
\beta_{m}, \gamma_{m} \quad m \leq 0)
$
and normal ordering in
$
{\cal F}^{gh}_{3}.
$
\begin{prop}
The following operators
\begin{equation}
l^{(3)}_{m} = \sum_{n \in \mathbb{Z}} \left({m\over 2}+n\right) :\beta_{m-n}\gamma_{n}:
\end{equation}
are well defined on the algebraic Hilbert space and are verifying:
\begin{eqnarray}
[ l^{(3)}_{m}, \beta_{n}] = \left({m\over 2}-n\right) \beta_{m+n} \qquad
[ l^{(3)}_{m}, \gamma_{n}] = - \left({3m\over 2}+n\right) \gamma_{m+n}
\nonumber \\
~[ l^{(3)}_{m}, l^{(3)}_{n}] = (m-n) l^{(3)}_{m+n}
+ {1\over 12} m(11m^{2} - 2) \delta_{m+n} \cdot I
\nonumber \\
(l^{(3)}_{m})^{\dagger} = l^{(3)}_{-m}.
\end{eqnarray}
\end{prop}
{\bf Proof:} We proceed as in Section \ref{brst}: first we split
\begin{equation}
l^{(3)}_{m} = \tilde{l}^{\prime}_{m} + {m\over 2} \beta_{m}\gamma_{0}
+ {3m\over 2} \beta_{0}\gamma_{m}
\end{equation}
where the first term includes only the non-zero modes. For these modes the $2$-point functions are
\begin{eqnarray}
<\Omega_{gh},\beta_{m}\gamma_{n}\Omega_{gh}> = -\theta(m) \delta_{m+n} \quad
<\Omega_{gh},\gamma_{m}\beta_{n}\Omega_{gh}> = \theta(m) \delta_{m+n} \quad
\nonumber \\
<\Omega_{gh},\beta_{m}\beta_{n}\Omega_{gh}> = 0 \quad
<\Omega_{gh},\gamma_{m}\gamma_{n}\Omega_{gh}> = 0
\end{eqnarray}
and we can compute the commutators from the statement using Wick theorem.
$\blacksquare$
Next we have
\begin{cor}
Let us consider in the Hilbert space
$
{\cal F}^{gh}_{R} \equiv {\cal F}^{gh}_{1} \otimes {\cal F}^{gh}_{3}
$
the following operators
\begin{equation}
l^{(R)}_{m} = l^{(1)}_{m} \otimes I_{2} + I_{1} \otimes l^{(3)}_{m};
\end{equation}
then we have
\begin{eqnarray}
[ l_{m}^{(R)}, l_{n}^{(R)}] = (m-n) l_{m+n}^{(R)}
- {5\over 4}~m^{3}~\delta_{m+n} \cdot I
\nonumber \\
(l_{m}^{(R)})^{\dagger} = l_{-m}^{(R)}.
\end{eqnarray}
\end{cor}
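Again the central term is just the sum of the two ghost anomalies (an elementary arithmetic check):
\begin{equation}
{1\over 6}~m (1 - 13 m^{2}) + {1\over 12}~m (11 m^{2} - 2)
= - {5\over 4}~m^{3}.
\end{equation}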
Next, we define in
$
{\cal F}^{gh}_{R}
$
the following operators
\begin{equation}
f_{m} \equiv - 2 \sum_{n \in \mathbb{Z}} b_{-n} \gamma_{n+m}
+ \sum_{n \in \mathbb{Z}} \left( {n\over 2} - m\right) c_{-n} \beta_{n+m}
\end{equation}
which are well-defined on the algebraic Fock space. We have
\begin{prop}
The following relations are verified
\begin{eqnarray}
f_{m}^{\dagger} = f_{-m}
\nonumber \\
~[l_{m}^{(R)}, f_{n} ] = \left( {m\over 2} - n\right) f_{m+n}
\nonumber \\
\{ f_{m},f_{n}\} = 2 l_{m+n}^{(R)} - 5m^{2}~\delta_{m+n} \cdot I.
\end{eqnarray}
\end{prop}
The proofs of the first two relations are elementary. For the last one we use Wick theorem.
Next we have
\begin{cor}
Let us consider in the Hilbert space
$
{\cal H} \equiv {\cal H}^{(R)}_{\rm cov} \otimes^{s} {\cal F}^{gh}_{R}
$
where the skew tensor product
$
\otimes^{s}
$
is such that we have normal (anti)commutation relations i.e. the Fermionic operators
$
d^{\mu}_{m}
$
are anticommuting with
$
b_{m}, c_{m}.
$
We then define the following operators
\begin{eqnarray}
{\cal L}_{m}^{(R)} = L^{(R)}_{m} \otimes I_{2} + I_{1} \otimes l^{(R)}_{m}
\nonumber \\
{\cal F}_{m} = F_{m} \otimes I_{2} + I_{1} \otimes f_{m}
\end{eqnarray}
then we have
\begin{eqnarray}
[ {\cal L}_{m}^{(R)}, {\cal L}_{n}^{(R)}] = (m-n) {\cal L}_{m+n}^{(R)}
+ m \left( {D-10\over 8} m^{2} + 2a \right) \delta_{m+n} \cdot I
\nonumber \\
~[ {\cal L}_{m}^{(R)}, {\cal F}_{n}] = \left({m\over 2} - n\right) {\cal F}_{m+n}
\nonumber \\
\{ {\cal F}_{m}, {\cal F}_{n} \} = 2 {\cal L}^{(R)}_{m+n}
+ \left( {D-10\over 8}~m^{2} + 2 a\right)~\delta_{m+n}~\cdot I
\nonumber \\
({\cal L}^{(R)}_{m})^{\dagger} = {\cal L}^{(R)}_{-m} \qquad
{\cal F}_{m}^{\dagger} = {\cal F}_{-m}.
\end{eqnarray}
The anomalies cancel iff
$D = 10, a = 0$.
\end{cor}
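To make the last statement explicit, note that the central term $m \left( {D-10\over 8} m^{2} + 2a \right)$ vanishes for all $m$ iff it vanishes for $m = 1$ and $m = 2$:
\begin{equation}
{D-10\over 8} + 2a = 0 \qquad {D-10\over 2} + 2a = 0;
\end{equation}
subtracting the two conditions gives $D = 10$ and then $a = 0$.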
In this enlarged Hilbert space we have:
\begin{prop}
The following operator
\begin{eqnarray}
Q = Q^{(R)} \equiv \sum L^{(R)}_{-m} c_{m}
- {1\over 2} \sum (m-n) :c_{-m} c_{-n} b_{m+n}:
\nonumber \\
+ \sum F_{-m} \gamma_{m} + \sum c_{-m} l^{(3)}_{m} + \sum b_{-m} C^{(4)}_{m}
\end{eqnarray}
where
\begin{equation}
C^{(4)}_{m} \equiv - \sum_{p+q=m} :\gamma_{p}\gamma_{q}: =
- \sum_{p+q=m} \gamma_{p}\gamma_{q}
\end{equation}
is well defined on the algebraic Hilbert space and is formally self-adjoint; it verifies
\begin{equation}
Q^{2} = 0
\end{equation}
{\it iff}
$
D = 10
$
and
$a = 0$.
\end{prop}
{\bf Proof:} It is convenient to denote by
$
Q_{j}, j = 1,\dots,5
$
the five terms in the expression of the BRST charge and write
\begin{equation}
Q = Q^{\prime} + \sum_{j=3}^{5} Q_{j}
\end{equation}
where the sum of the first two terms
$
Q^{\prime}
$
can be obtained from $Q$ of the preceding Section with the substitution
$
L^{(\alpha)}_{m} \longrightarrow L^{(R)}_{m}
$
so we can use some of the computations performed there. We introduce the notation
\begin{equation}
H_{m} \equiv k_{\mu}~d^{\mu}_{m}
\end{equation}
and we have as before:
\begin{eqnarray}
\{Q, b_{m} \} = {\cal L}_{m}^{(R)} \qquad
\{ Q, c_{m} \} = C^{(R)}_{m} \equiv C^{(1)}_{m} + C^{(4)}_{m}
\nonumber \\
~[ Q, \beta_{m} ] = {\cal F}_{m} \qquad
[ Q, \gamma_{m} ] = - \sum \left( {3n\over 2} + m\right) c_{-n} \gamma_{m+n}
\nonumber \\
~[ Q, {\cal L}_{m}^{(R)} ] = \rho_{m} c_{m} \qquad
\{ Q, {\cal F}_{m} \} = \lambda_{m} \gamma_{m}
\nonumber \\
~[ Q, K_{m} ] = - m \sum K_{m-n} c_{n} + \sum H_{m-n} \gamma_{n}
\nonumber \\
\{ Q, H_{m} \} = - \sum \left( {n\over 2} - m\right) H_{m-n} c_{n} + \sum K_{m-n} \gamma_{n}.
\label{Q-r1}
\end{eqnarray}
where
\begin{equation}
\rho_{m} \equiv - m \left( {D-10\over 8} m^{2} + 2a \right)
\qquad
\lambda_{m} \equiv {D-10\over 2} m^{2} + 2a.
\end{equation}
One can prove that we can take in
${\cal H}$
the following basis:
\begin{equation}
\Psi = \prod b_{-i} \prod c_{-j} \prod \beta_{-p} \prod \gamma_{-q}
\prod {\cal L}_{-m}^{(R)} \prod {\cal F}_{-l} \prod K_{-n} \prod H_{-k}~f
\label{basis-r}
\end{equation}
where $f$ are DDF states, the indices of the type $m,n \in \mathbb{Z}$
take positive values and the indices of the type $i,j, p, q, k, l \in \mathbb{Z}$ are $\geq 0$.
Because
$
L_{m}^{(R)} f = 0 \quad {\cal F}_{m} f = 0 \quad \forall m \geq 0
$
we easily find out that
\begin{equation}
Qf = 0
\label{Q-r2}
\end{equation}
for any DDF state $f$. We argue as before that the operator
$Q$
is well defined by (\ref{Q-r1}) and (\ref{Q-r2}). Now it is easy to obtain from (\ref{Q-r1}) that
$
Q^{2}
$
commutes with all the operators from (\ref{basis-r}). Because we have
$
Q^{2}f = 0
$
it immediately follows that
\begin{equation}
Q^{2} = 0 \Longleftrightarrow \rho_{m} = 0 \quad \lambda_{m} = 0 \Longleftrightarrow D = 10, a = 0
\label{Q-r5}
\end{equation}
i.e. the statement of the proposition.
$\blacksquare$
To analyze the cohomology of the BRST operator
$Q$
we construct its homotopy:
\begin{prop}
The operator
$\tilde{Q}$
is well defined on the algebraic Hilbert space through the following formulas:
\begin{eqnarray}
\{\tilde{Q}, b_{m} \} = 0 \qquad
\{ \tilde{Q}, c_{m} \} = \delta_{m} \cdot I \qquad
~[\tilde{Q}, \beta_{m} ] = 0 \qquad
\nonumber \\
~[ \tilde{Q}, \gamma_{m} ] = 0 \qquad
~[ \tilde{Q}, {\cal L}_{m}^{(R)} ] = - m b_{m} \qquad
\{ \tilde{Q}, {\cal F}_{m} \} = - m \beta_{m}
\nonumber \\
~[ \tilde{Q}, K_{m} ] = 0 \qquad
[ \tilde{Q}, H_{m} ] = 0
\label{tQ-r1}
\end{eqnarray}
and
\begin{equation}
\tilde{Q}f = 0
\end{equation}
for any DDF state $f$. We also have
\begin{equation}
\tilde{Q}^{\dagger} = \tilde{Q} \qquad \tilde{Q}^{2} = 0.
\end{equation}
\end{prop}
{\bf Proof:} We have to verify the Jacobi identities of the type:
\begin{equation}
[[X,Y], \tilde{Q} ]_{\rm graded} + {\rm cyclic~permutations} = 0
\end{equation}
where
$X, Y$
are operators from the set
$
b_{m}, c_{m}, \beta_{m}, \gamma_{m}, {\cal L}_{m}^{(R)}, {\cal F}_{m}, K_{m}, H_{m}.
$
We have some non-trivial ones corresponding to pairs
$
({\cal L}_{m}^{(R)}, {\cal L}_{n}^{(R)}),~
({\cal L}_{m}^{(R)}, c_{n}),~
({\cal L}_{m}^{(R)}, {\cal F}_{r}),
({\cal F}_{m}, {\cal F}_{n})
$
and
$
({\cal F}_{m}, \gamma_{n}).
$
$\blacksquare$
The main result is similar to the one in the previous Section. However, the degeneracy is infinite
in this case, so to avoid this problem we work directly in a smaller Hilbert space: we construct the Fock space
${\cal F}^{gh}_{R}$
such that it also verifies
\begin{equation}
b_{0} \Omega_{gh} = 0 \qquad \beta_{0} \Omega_{gh} = 0.
\end{equation}
Then we construct ${\cal H}$ as above and consider the subspace
\begin{equation}
{\cal H}_{0} \equiv \{ \Psi \in {\cal H} | \quad b_{0}\Psi = 0 \quad \beta_{0} \Psi = 0 \quad {\cal L}^{(R)}_{0} \Psi = 0 \quad {\cal F}_{0} \Psi = 0\}.
\end{equation}
This subspace is left invariant by the operator
$Q^{(R)}$
and we have the following result:
\begin{thm}
If
$
\Psi \in {\cal H}_{0}
$
verifies
$
Q\Psi = 0
$
then it is of the form
\begin{equation}
\Psi = Q\Phi + f
\end{equation}
where
$
f
$
is a DDF state and
$
\Phi \in {\cal H}_{0}.
$
\end{thm}
{\bf Proof:}
First we have to prove that the states from
$
{\cal H}_{0}
$
are obtained by applying the operators
$
b_{-m}, c_{-m}, \beta_{-m}, \gamma_{-m}, {\cal L}^{(R)}_{-m}, {\cal F}_{-m}, K_{-m}, H_{-m}
$
with
$m > 0$.
The ``Laplacian'' is as before
$
\Delta \equiv Q\tilde{Q} + \tilde{Q}Q.
$
It is now elementary to determine that the Laplace operator satisfies:
\begin{eqnarray}
[ \Delta, b_{m} ] = - m b_{m} \quad [ \Delta, c_{m} ] = - m c_{m} \qquad
[ \Delta, \beta_{m} ] = - m \beta_{m} \quad
[ \Delta, \gamma_{m} ] = - m \gamma_{m}
\nonumber \\
~[ \Delta, {\cal L}^{(R)}_{m} ] = - m {\cal L}^{(R)}_{m} \quad
[ \Delta, {\cal F}_{m} ] = - m {\cal F}_{m} \qquad
[ \Delta, K_{m} ] = - m K_{m} \quad
[ \Delta, H_{m} ] = - m H_{m}
\nonumber \\
\Delta f = 0.
\end{eqnarray}
It follows that we have
\begin{equation}
\Delta \Psi = (\sum m + \sum n + \sum i + \sum j + \sum p + \sum q + \sum k + \sum l ) \Psi;
\end{equation}
we observe that the eigenvalues above are
$\geq 0$,
the equality sign being true only for DDF states. From now on the argument is the same as before.
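Indeed, if
$
Q\Psi = 0
$
and
$
\Delta \Psi = \lambda \Psi
$
with
$\lambda > 0$,
then
\begin{equation}
\Psi = {1\over \lambda}\,\Delta \Psi = {1\over \lambda}\,(Q\tilde{Q} + \tilde{Q}Q)\Psi
= Q\left({1\over \lambda}\,\tilde{Q}\Psi\right)
\end{equation}
so $\Psi$ is a coboundary; only the DDF states, corresponding to $\lambda = 0$, survive in the cohomology.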
$\blacksquare$
\section{Conclusions}
The main results of this paper are: a) An elementary treatment of the quantum string models relying only on Wick theorem and paying attention to the domain problems. b) A derivation of the DDF operators without using vertex algebras. c) The clarification of the equivalence between the light-cone and covariant formalism using standard results in induced representation theory; this point seems to be missing from the literature. d) An elementary derivation of the BRST cohomology. A comparison with the standard literature is useful on this point: In \cite{T} one uses a basis of the type (\ref{basis-gh}):
\begin{equation}
\Psi_{I,J,M,N} = b_{-i_{1}} \dots b_{-i_{\beta}} c_{-j_{1}} \dots c_{-j_{\gamma}}
{\cal L}_{-m_{1}} \cdots {\cal L}_{-m_{\lambda}}
K_{-n_{1}} \cdots K_{-n_{\kappa}}~f_{I,J,M,N}
\label{basis-ghT}
\end{equation}
where it can be arranged such that the DDF states
$
f_{I,J,M,N}
$
are completely symmetric in the indices
$
M = \{ m_{1},\dots, m_{\lambda}\}
$
and in the indices
$
N = \{ n_{1},\dots, n_{\kappa}\};
$
of course we have complete antisymmetry in the indices
$
I = \{ i_{1},\dots, i_{\beta}\}
$
and in the indices
$
J = \{ j_{1},\dots, j_{\gamma}\}.
$
Then one decomposes the f's according to Young diagrams (separately for
$
I \cup M
$
and
$
J \cup N).
$
We have in both cases only two projectors: one piece is eliminated by the condition
$Q\Psi = 0$
and the other one can be proved to be a coboundary up to states of the form
$
f_{1} + b_{0} f_{2}.
$
In \cite{FGZ} there are two proofs, one based on a similar idea of Hodge theory (however the expression of the Laplacian seems to be different and the spectral analysis is not provided) and the other proof relies on the use of spectral sequences.
The proof from \cite{KO}, \cite{Oh} makes a convenient rescaling by a parameter
$\beta$
and assumes that the states
$\Psi(\beta)$
are polynomials in this parameter, which is an unjustified restriction. The proof from
\cite{P} is closely related and assumes that a certain infinite series
is convergent. The proof from \cite{FB} relies on the existence of the
operators
$
D_{n}
$
formally given by:
$
:\left(1 - \sum K_{m} z^{m}\right)^{-1}: = \sum z^{n} D_{n};
$
such operators are also used in the construction of the DDF states for the superstring models.
\subsection*{1.1 Spin structure function }
The scale dependence of spin-dependent structure functions is in general described by the DGLAP \cite{D,G,L,A} equations. A lot of work has gone into understanding the nature of these structure functions over the past decades. Computation of exact \cite{AAC,bb,grsv,lss} as well as approximate analytical solutions \cite{6,7,8,9,10,11,12}
of these equations is equally important, as one finds simpler insight into the structure of the nucleon at least in some definite $x,Q^2$ range where the latter are valid.\\
The aim of the present paper is exactly that: we report a set of approximate analytical solutions of the LO DGLAP equation for the non-singlet polarized quark distribution $\Delta q^{NS}(x,Q^2)=(\Delta u+\Delta \bar{u})(x,Q^2)-(\Delta d+\Delta \bar{d})(x,Q^2)$. Based on the solutions obtained by the method of characteristics \cite{moc,moc1} and by Lagrange's method \cite{lagranges}, we make a comparative study of the two methods. While each method has been applied individually \cite{9,10,11,17,18,jksrb,jksrb1,nazir,mdv} earlier, the relative merits and demerits of the two have been reported only recently \cite{13} for the unpolarized structure function $F_2^{NS}(x,t)$. The present work reports a similar analysis for the polarized case, as was reported briefly in ref \cite{12}.\\
Our choice of studying $g_1^{NS}(x,t)$ is due to its simplicity: it evolves independently of the singlet and gluon distributions and hence does not require the solution of coupled DGLAP equations. The analytical solution of such coupled DGLAP equations invariably needs additional ad hoc assumptions relating the two distributions, not proved in QCD, as noted in the literature \cite{10,11,jksrb1,mdv}.
In our work we show that even at the LO level, analytical solutions have several interesting features which need attention.\\
The paper is organised as follows: in Section 2 we give the formalism, in Section 3 we present the results and discussions, and in Section 4 the conclusions are given.
\section{Formalism}
\subsection*{2.1 Polarized Non-singlet DGLAP evolution equation }
The non-singlet polarized DGLAP equation in LO is,
\begin{equation}
\label{eqn:nonsingAP}
\frac{\partial}{\partial t} \Delta q^{NS}(x,t)=\frac{\alpha_s}{2\pi}\int_x^1 \frac{d z}{z}\Delta P_{qq}^{NS}(z)\Delta q^{NS}(\frac{x}{z},t)
\end{equation}
Here $ t=\ln \frac{Q^2}{\Lambda^2}$ and $\Delta P_{qq}^{NS}(z) $ is the polarized splitting function. Introducing a variable $u=1-z$ and following a formalism similar to ref \cite{13}, with two different levels of approximation for small $x$, we can express Eq.(\ref{eqn:nonsingAP}) as two partial differential equations,
\begin{equation}
\label{eqn:pde1}
\frac{\partial \Delta q^{NS}(x,t)}{\partial t}=\frac{2}{\beta_0 t}\left[\frac{4}{3}\left(\log \frac{1}{x}+\frac{1}{2}\right) \Delta q^{NS}(x,t)+\frac{4}{3}\left(1-x-x \log \frac{1}{x}\right)\frac{\partial \Delta q^{NS}(x,t)}{\partial x}\right]
\end{equation}
and
\begin{equation}
\label{eqn:pde2}
\frac{\partial \Delta q^{NS}(x,t)}{\partial t}=\frac{2}{\beta_0 t}\left[\frac{4}{3}\left(\log \frac{1}{x}+\frac{1}{2}\right) \Delta q^{NS}(x,t)+\frac{4}{3}\left(x \log \frac{1}{x}-x+x^2 \right)\frac{\partial \Delta q^{NS}(x,t)}{\partial x}\right]
\end{equation}
Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) are obtained from the same evolution equation Eq.(\ref{eqn:nonsingAP}) with two different levels of approximation, given below by Eq.(\ref{eqn:series1}) and Eq.(\ref{eqn:series2}) respectively. Eq.(\ref{eqn:pde1}) is based on the expansion given by,
\begin{equation}
\label{eqn:series1}
\Delta q^{NS}(\frac{x}{z},t)=\Delta q^{NS}(x,t)+x\sum_{k=1}^\infty u^k \frac{\partial \Delta q^{NS}(x,t)}{\partial x}
\end{equation}
while Eq.(\ref{eqn:pde2}) retains only the first ($k=1$) term of the expansion series, as shown in Eq.(\ref{eqn:series2}),
\begin{equation}
\label{eqn:series2}
\Delta q^{NS}(\frac{x}{z},t)\approx \Delta q^{NS}(x,t)+xu \frac{\partial}{\partial x}\Delta q^{NS}(x,t)
\end{equation}
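Both approximations follow from expanding the argument $\frac{x}{z}$ in powers of $u$: since $\frac{x}{z}=\frac{x}{1-u}=x\sum_{k=0}^\infty u^k$ for $|u|<1$, a first-order Taylor expansion in the shift of the argument gives
\begin{equation}
\Delta q^{NS}(\frac{x}{z},t)\approx \Delta q^{NS}(x,t)+\left(x\sum_{k=1}^\infty u^k\right) \frac{\partial \Delta q^{NS}(x,t)}{\partial x}
\end{equation}
which is Eq.(\ref{eqn:series1}); retaining only the $k=1$ term of the sum gives Eq.(\ref{eqn:series2}).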
We solve these two equations using two powerful methods for solving PDEs: the method of characteristics \cite{moc,moc1} and Lagrange's method \cite{lagranges}.\\
To that end we express Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) as,
\begin{equation}
\label{eqn:pdefrac}
Q(x,t)\frac{\partial \Delta q^{NS}(x,t)}{\partial t}+P(x,t)\frac{\partial \Delta q^{NS}(x,t)}{\partial x}=R(x,t)\Delta q^{NS}(x,t)
\end{equation}
with the forms of $Q(x,t)$, $P(x,t)$ and $R(x,t)$ being different for Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}).
Specifically,
\begin{equation}
Q(x,t)=t
\end{equation}
\begin{equation}
P(x,t)=-\frac{2}{\beta_0} \left[\frac{4}{3}\left(1-x- x\log \frac{1}{x}\right)\right]
\end{equation}
and
\begin{equation}
R(x,t,\Delta q^{NS}(x,t))=R'(x)\Delta q^{NS}(x,t)
\end{equation}
with
\begin{equation}
R'(x)=\frac{2}{\beta_0}\frac{4}{3}(\log \frac{1}{x}+\frac{1}{2})
\end{equation}
for the Eq.(\ref{eqn:pde1}).\\
Similarly,
\begin{equation}
Q(x,t)=t
\end{equation}
\begin{equation}
P(x,t)=-\frac{2}{\beta_0}\left[ \frac{4}{3}\left(x\log \frac{1}{x} -x+x^2\right)\right]
\end{equation}
and
\begin{equation}
R(x,t,\Delta q^{NS}(x,t))=R'(x)\Delta q^{NS}(x,t)
\end{equation}
with
\begin{equation}
R'(x)=\frac{2}{\beta_0}\frac{4}{3}(\log \frac{1}{x}+\frac{1}{2})
\end{equation}
for the Eq.(\ref{eqn:pde2}). The only difference occurs in the structure of $P(x,t)$, Eqs.(8) and (12).\\
\subsection*{2.2 Solution by the Method of Characteristics}
The method of characteristics \cite{moc,moc1}, a strong tool for solving a partial differential equation in two variables, has been well discussed in our recent work \cite{12,13}. In this formalism, the characteristic equations for Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) have the form:
\begin{equation}
\label{eqn:cheqn1}
\frac{d x}{d s}=P(x,t)
\end{equation}
\begin{equation}
\label{eqn:cheqn2}
\frac{d t}{d s}=Q(x,t)
\end{equation}
So, along the characteristic curve the PDEs Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) become an ODE:
\begin{equation}
\label{eqn:checurve}
\frac{d \Delta q^{NS}(s,\tau)}{d s}+c(s,\tau)\Delta q^{NS}(s,\tau)=0
\end{equation}
Here,
\begin{equation}
c(s,\tau)=-\frac{2}{\beta_0}\,\frac{4}{3}\left\{-\log\left[\tau\exp \left(\frac{t}{t_0}\right)^{\frac{8}{3\beta_0}}\right]+\frac{1}{2}\right\}
\end{equation}
and it has the identical form in both the cases. The solutions of the characteristic equations Eq.(\ref{eqn:cheqn1}) and Eq.(\ref{eqn:cheqn2}) for Eq.(\ref{eqn:pde1}) come out as,
\begin{eqnarray}
\label{eqn:solcheqn1}
s=\ln\left(\frac{t}{t_0}\right)\\
\tau =x \exp{[(\frac{t}{t_0})^{8/3\beta_0}]}
\label{eqn:solcheqn2}
\end{eqnarray}
while for Eq.(\ref{eqn:pde2}), these are,
\begin{equation}
\label{eqn:solcheqn3}
s=\ln\left(\frac{t}{t_0}\right)
\end{equation}
\begin{equation}
\label{eqn:solcheqn4}
\tau' =x \exp{[-(\frac{t}{t_0})^{8/3\beta_0}]}
\end{equation}
The solutions Eqs.(19)-(22) are in $(s,\tau)$ space. Using these solutions of the characteristic equations, we can express the solutions of Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) in $(x,t)$ space in a more precise form as,\\
Eq.(\ref{eqn:pde1}), MOC1:
\begin{equation}
\label{eqn:solnspol1}
\Delta q^{NS}(x,t)=\Delta q^{NS}(x,t_0)\left(\frac{t}{t_0}\right)^{\tilde{n_1}(x,t)}
\end{equation}
where
\begin{equation}
\tilde{n_1}(x,t)=\frac{1}{\log \left( \frac{t}{t_0}\right)}\log \left(\frac{\Delta q^{NS}(\tau)}{\Delta q^{NS}(x,t_0)}\right)+\frac{\alpha_1}{\log \left( \frac{t}{t_0}\right)}
\end{equation}
Eq.(\ref{eqn:pde2}), MOC2:
\begin{equation}
\label{eqn:solnspol2}
\Delta q^{NS}(x,t)=\Delta q^{NS}(x,t_0)\left(\frac{t}{t_0}\right)^{\tilde{n_2}(x,t)}
\end{equation}
where
\begin{equation}
\tilde{n_2}(x,t)=\frac{1}{\log \left( \frac{t}{t_0}\right)}\log \left(\frac{\Delta q^{NS}(\tau')}{\Delta q^{NS}(x,t_0)}\right)+\frac{\alpha_1}{\log \left( \frac{t}{t_0}\right)}
\end{equation}
with
\begin{equation}
\alpha_1 =\frac{4}{3\beta_0}\left[1+2\log \frac{1}{x}\right]
\end{equation}
Eq.(\ref{eqn:solnspol1}) and Eq.(\ref{eqn:solnspol2}) are our solutions for $\Delta q^{NS}(x,t)$ obtained from Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}). The exponents $\tilde{n_1}(x,t)$ and $\tilde{n_2}(x,t)$ are different because $\tau$ and $\tau'$, as defined in Eq.(\ref{eqn:solcheqn2}) and Eq.(\ref{eqn:solcheqn4}), are not the same. We have assumed that $\log \frac{1}{x}\gg1$, as well as $x \log \frac{1}{x}\ll1$, in deriving Eq.(\ref{eqn:solnspol1}) and Eq.(\ref{eqn:solnspol2}). Analytical solutions are possible only under these extreme conditions. Thus we see that the two solutions, obtained from the same evolution equation with two different levels of approximation, lead us to two results which differ only through the solutions of the characteristic equations.\\
\subsection*{2.3 Solution by the Lagrange's Auxiliary Method}
To solve Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) by Lagrange's auxiliary method \cite{lagranges}, we start from the form given by Eq.(\ref{eqn:pdefrac}).\\
The general solution of the Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) are obtained by solving the following auxiliary system of ordinary differential equations,
\begin{equation}
\label{auxiliary}
\frac{dx}{P(x)}=\frac{dt}{Q(t)}=\frac{\Delta q^{NS}(x,t)}{R(x,t,\Delta q^{NS}(x,t))}
\end{equation}
If $u(x,t,\Delta q^{NS})=C_1$ and $v(x,t,\Delta q^{NS})=C_2$ are two independent solutions of Eq.(\ref{auxiliary}), then in general the solution of Eq.(\ref{eqn:pdefrac}) is
\begin{equation}
F(u,v)=0
\end{equation}
where $F$ is an arbitrary function of $u$ and $v$. In our recent work we have applied Lagrange's method successfully to the corresponding unpolarized structure function \cite{13}. Using the same approach with physically plausible boundary conditions, Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) give the specific solution for $g_1^{NS}(x,t) $ as,\\
\begin{equation}
\label{eqn:solnlagmethod1}
\Delta q^{NS}(x,t)=\Delta q^{NS}(x,t_0)\left(\frac{t}{t_0}\right)\frac{[X^{NS}(x)-X^{NS}(1)]}{[X^{NS}(x)-(\frac{t}{t_0})X^{NS}(1)]}
\end{equation}
The explicit analytical form of $X^{NS}(x) $ in the leading $(\frac{1}{x})$ approximation for Eq.(\ref{eqn:pde1}) is,
\begin{equation}
\label{Xnsvalue1}
X^{NS}(x)=\exp[\frac{3\beta_0}{8}\log |\log \frac{1}{x}|]
\end{equation}
while for Eq.(\ref{eqn:pde2}), it is,
\begin{equation}
\label{Xns2}
X^{NS}(x)=\exp[\frac{3\beta_0}{8}\log(-1+\log \frac{1}{x})]
\end{equation}
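In closed form these are simply the power laws
\begin{equation}
X^{NS}(x)=\left(\log \frac{1}{x}\right)^{3\beta_0/8} \qquad \mathrm{and} \qquad X^{NS}(x)=\left(\log \frac{1}{x}-1\right)^{3\beta_0/8}
\end{equation}
respectively; the second expression is real only for $\log \frac{1}{x}>1$, i.e. $x<e^{-1}$, while both coincide with $(\log \frac{1}{x})^{3\beta_0/8}$ in the limit $\log \frac{1}{x}\gg1$.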
At this level, the solutions Eq.(\ref{Xnsvalue1}) and Eq.(\ref{Xns2}) are distinctly different, as was the case with the method of characteristics (Eq.(\ref{eqn:solnspol1}) and Eq.(\ref{eqn:solnspol2})). However, as Eq.(\ref{Xns2}) is not real at $x=1$, it will not give a physically plausible solution for $\Delta q^{NS}(1,t)$ and can be ruled out on physical grounds. But in the limit $\log \frac{1}{x}\gg1$, as has been used in the derivation of Eq.(\ref{eqn:solnspol1}) and Eq.(\ref{eqn:solnspol2}) for the method of characteristics, both Eq.(\ref{Xnsvalue1}) and Eq.(\ref{Xns2}) are identical, i.e.,
In both the cases,
\begin{equation}
\label{xns1}
X^{NS}(1)\approx 0.
\end{equation}
Hence we get,
\begin{equation}
\label{solnlagmethod1}
\Delta q^{NS}(x,t)=\Delta q^{NS}(x,t_0)\left(\frac{t}{t_0}\right)
\end{equation}
as the analytical solution using Lagrange's method, for Eq.(\ref{eqn:pde1}) and Eq.(\ref{eqn:pde2}) at small $x$.
Unlike the solutions obtained using the method of characteristics, Eq.(\ref{eqn:solnspol1}) and Eq.(\ref{eqn:solnspol2}), the solutions obtained by using Lagrange's auxiliary method are the same.\\
In the next section we consider the phenomenological utility of our solutions with respect to each other vis-a-vis the available experimental data, and perform a $\chi^2$ test to check their compatibility with the data.\\
\section{Results and discussion}
\label{sec:results}
We now compare our analytical solutions Eq.(\ref{eqn:solnspol1})(MOC1), Eq.(\ref{eqn:solnspol2})(MOC2) and Eq.(\ref{solnlagmethod1})(LM) with the HERMES \cite{HERMES} and COMPASS \cite{compass} data for the polarized non-singlet structure function $g_1^{NS}(x,t) $, related to the non-singlet polarized parton densities $\Delta q^{NS}(x,t)=\Delta u-\Delta d$ by the relation \cite{HERMES},
\begin{equation}
\label{g1ns}
g_1^{NS}(x,t)=\frac{1}{2}\frac{1}{n_f}\sum_{i=1}^{n_f} e_i^{2} \Delta q^{NS}(x,Q^2)
\end{equation}
For any number of flavours, $n_f=2,3,4,5,6$, it turns out to be,
\begin{equation}
g_1^{NS}(x,t)=\frac{1}{9}\Delta q^{NS}(x,Q^2)
\end{equation}
The data of refs \cite{HERMES,compass} are available within the ranges $0.0264 \leq x \leq 0.7311$, $1.12GeV^2 \leq Q^2 \leq 14GeV^2$ for HERMES and $0.0046 \leq x \leq 0.566$, $1.1GeV^2 \leq Q^2 \leq 55GeV^2$ for COMPASS respectively. Although the approximate analytical solutions are derived in the ultra small $x$ limit ($\log \frac{1}{x}\gg1$, as well as $x \log \frac{1}{x}\ll1$), we study whether they are reasonably compatible with the available data in the ranges $x\geq 0.0264$ and $x\geq 0.0046$ respectively \cite{saiful}.\\
We consider data for comparison from 36 and 14 individual kinematic bins from HERMES and COMPASS respectively, as well as the statistical uncertainties for each bin. To evolve our solutions we take the input distribution from LSS05 \cite{lss} and consider $Q_0^2=1GeV^2$ and, in LO, $\Lambda^2=0.393GeV^2$ \cite{PDG}. \\
To derive the final $g_1^{NS}(x,t)$, the following relation is used \cite{HERMES},
\begin{equation}
g_1^{NS}\equiv g_1^P-g_1^n=2\left[g_1^P-\frac{g_1^d}{1-1.5\omega_D}\right]
\end{equation}
where $\omega_D=0.058$ accounts for the D-state admixture in the deuteron wave function \cite{wd}. \\
From Figures 1, 2 and 3, we observe that our analytical models represented by Eq.(\ref{eqn:solnspol2})(MOC2) and Eq.(\ref{solnlagmethod1})(LM) are in good agreement with the experimental data up to $x\leq0.4$. But our solution given by the method of characteristics, Eq.(\ref{eqn:solnspol1})(MOC1), overshoots the data beyond $x\geq 0.28$. Thus we can conclude that the valid range of $x$ for the three analytical models is $x\leq0.28$, above which the small $x$ approximation is no longer valid.\\
Further, we test the compatibility of the three analytical solutions with the $Q^2$ evolution within the valid small $x$ range of the experimental data (both HERMES and COMPASS). For HERMES, the small $x$ data with $x\leq0.28$ lie within the range $Q^2\leq 6.94 GeV^2$, whereas for COMPASS they lie within $Q^2\leq 17.2 GeV^2$. We therefore confine our comparison to the $Q^2$ range $1.1 GeV^2\leq Q^2\leq 17.2GeV^2$.\\
Figures 4, 5 and 6 show the compatibility of our analytical models with COMPASS data, and Figures 7, 8 and 9 show the same for HERMES data separately. From the figures we observe that our analytical models are reasonably consistent within the experimentally accessible small $x$ range of data $0.0046\leq x\leq0.28$ and $Q^2$ range $1.1GeV^2\leq Q^2 \leq17.2 GeV^2$.\\
\begin{figure}[hb]
\begin{center}
\includegraphics[width=4in]{LAG1.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $xg_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $x$ at different $Q^2$ according to Eq.(\ref{solnlagmethod1}). Data from refs \cite{HERMES} and \cite{compass}}
\label{fig:1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{MOC1.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $xg_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $x$ at different $Q^2$ according to Eq.(\ref{eqn:solnspol1}). Data from refs \cite{HERMES} and \cite{compass} }
\label{fig:2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{MOC21.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $xg_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $x$ at different $Q^2$ according to Eq.(\ref{eqn:solnspol2}). Data from refs \cite{HERMES} and \cite{compass}}
\label{fig:3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{4.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $g_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $Q^2$ at different $x$ according to Eq.(\ref{solnlagmethod1}). Data from refs \cite{compass}}
\label{fig:4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{5.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $g_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $Q^2$ at different $x$ according to Eq.(\ref{eqn:solnspol1}). Data from refs \cite{compass}}
\label{fig:5}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{6.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $g_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $Q^2$ at different $x$ according to Eq.(\ref{eqn:solnspol2}). Data from refs \cite{compass}}
\label{fig:6}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{7.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $g_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $Q^2$ at different $x$ according to Eq.(\ref{solnlagmethod1}). Data from refs \cite{HERMES}}
\label{fig:7}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{8.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $g_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $Q^2$ at different $x$ according to Eq.(\ref{eqn:solnspol1}). Data from refs \cite{HERMES}}
\label{fig:8}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{9.EPS}
\end{center}
\vspace{-0.10in}
\caption[Polarized non-singlet structure function $g_1^{NS}(x,t)$]{Polarized non-singlet structure function $g_1^{NS}(x,t)$ as function of $Q^2$ at different $x$ according to Eq.(\ref{eqn:solnspol2}). Data from refs \cite{HERMES}}
\label{fig:9}
\end{figure}
We perform a $ \chi^2$ test using the formula $\chi^2 =\Sigma_i \frac{(X_{th}-X_{ex})^2}{\sigma^2 } $, to obtain a quantitative estimate of the goodness of fit of the solutions obtained by the two analytical methods to the experimental data. In Table 1 we give the $ \chi^2/d.o.f$ for each analytical solution given by Eq.(\ref{eqn:solnspol1}), Eq.(\ref{eqn:solnspol2}) and Eq.(\ref{solnlagmethod1}). The d.o.f. for HERMES and COMPASS are 36 and 14 respectively.\\
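As an illustration, the test amounts to the following computation (a minimal Python sketch; the function name and the per-bin inputs are illustrative, with the statistical uncertainty of each bin taken as $\sigma$):
\begin{verbatim}
def chi2_per_dof(theory, data, sigma):
    # chi^2/d.o.f. for model predictions vs. measured bins
    chi2 = sum((t - x) ** 2 / s ** 2
               for t, x, s in zip(theory, data, sigma))
    return chi2 / len(data)
\end{verbatim}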
\begin{table}[!ht]
\label{table:chisquare}
\begin{center}
\caption[$\chi^2/d.o.f.$ distribution]{$\chi^2/d.o.f.$ values for Eq.(\ref{eqn:solnspol1}), Eq.(\ref{eqn:solnspol2}) and Eq.(\ref{solnlagmethod1})}
\vspace{0.1in}
\begin{tabular}{|l|l|c|c|}
\hline
Method & Solutions & HERMES Collaboration & COMPASS Collaboration \\
\hline
Method of characteristics & MOC1,Eq.(\ref{eqn:solnspol1}) & 0.0218 & 0.57 \\
\hline
Method of characteristics & MOC2,Eq.(\ref{eqn:solnspol2}) & 0.055 & 0.58\\
\hline
Lagrange's Method & Eq.(\ref{solnlagmethod1}) & 0.054 & 0.616\\
\hline
\end{tabular}
\end{center}
\end{table}
From the $\chi^2$ analysis given in Table 1, we infer that the analytical solution given by Eq.(\ref{eqn:solnspol1})(MOC1) fares better than the other two, Eq.(\ref{eqn:solnspol2}) and Eq.(\ref{solnlagmethod1}).\\
Let us now comment on the Bjorken sum rule \cite{Bjorken1} in the context of the present work. As is well known, important information about the spin structure of the nucleon can be extracted from the first moment of the spin structure function $g_1^{NS}$, known as the Bjorken sum rule. The Bjorken integral is defined as,
\begin{equation}
\Gamma_1^{NS}=\int_0^{1}(g_1^{P}(x,Q^2)-g_1^{n}(x,Q^2))dx\equiv \int_0^{1}g_1^{NS}(x,Q^2)dx
\end{equation}
where $g_1^{NS}(x,Q^2)$ is expressed in Eq.(\ref{g1ns}) in terms of $\Delta q^{NS}(x,Q^2)$.\\
To evaluate $\Gamma_1^{NS}$ theoretically, one needs information about $\Delta q^{NS}(x,Q^2)$ in the entire physical region of $x$, $(0\leq x\leq 1)$. From a model whose validity is tested only in a limited $x$ range $(x_{a}\leq x\leq x_{b})$, one can obtain only partial information about $\Gamma_1^{NS}$, i.e.
\begin{equation}
\hat\Gamma_1^{NS}=\int_{x_a}^{x_b}g_1^{NS}(x,Q^2)dx
\end{equation}
which gives the contribution to $\Gamma_1^{NS}$ from the partons having fractional momentum $x_{a}\leq x\leq x_{b}$, and which should invariably be less than the exact experimentally measured value, i.e. $\Gamma_1^{NS} > \hat\Gamma_1^{NS}$. A similar analysis for a partial momentum sum rule \cite{akbari} has been reported recently. It is to be noted that in ref \cite{12}, $\Gamma_1^{NS}$ was itself calculated as $\hat\Gamma_1^{NS}$, even though our model was valid only for the $x$ range $0.02\leq x\leq 0.3$. The E155 collaboration at SLAC found $\Gamma_1^{NS}=0.176\pm0.003\pm0.007$ at $Q^2=5 GeV^2$, which is consistent with the SMC result, $\Gamma_1^{NS}=0.174\pm0.024\pm0.002$ at $Q^2=5 GeV^2$. For the HERMES collaboration \cite{HERMES}, the cumulative value of $\Gamma_1^{NS}$ within the range $(0.21\leq x\leq 0.9)$ at $Q^2=2.5 GeV^2$ is found to be $\Gamma_1^{NS}=0.1477\pm0.0055\pm0.01102$. In a recent analysis for COMPASS \cite{COMPASS}, $\Gamma_1^{NS}$ is evaluated in the range $0.004\leq x \leq 0.7$ to be $\Gamma_1^{NS}=0.175\pm 0.009\pm 0.015$ at $Q^2=3GeV^2$. COMPASS has further measured $\hat\Gamma_1^{NS}$ separately as shown in Table 2 below, where $\Gamma_1^{NS}=0.190 \pm 0.009 \pm 0.015$. Let us now estimate the partial value $\hat\Gamma_1^{NS}$ and compare its contribution to the measured value of $\Gamma_1^{NS}$ for the analytical models.\\
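For reproducibility, the partial integrals quoted below can be estimated by simple quadrature; the sketch below (in Python, with the hypothetical callable \texttt{g1ns} standing for any of the analytical solutions at fixed $Q^2$) uses the trapezoidal rule on a logarithmically spaced grid, which is appropriate for small-$x$ integrands:
\begin{verbatim}
def partial_gamma(g1ns, x_a, x_b, n=400):
    # trapezoidal rule on a log-spaced grid over [x_a, x_b]
    xs = [x_a * (x_b / x_a) ** (i / n) for i in range(n + 1)]
    ys = [g1ns(x) for x in xs]
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(n))
\end{verbatim}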
\begin{table}[!ht]
\label{table:Integrals of $g_1^{NS}$}
\begin{center}
\caption[Partial Integrals of $g_1^{NS}$]{First moment $\Gamma_1^{NS}$ at $Q^2=3GeV^2$ from the COMPASS\cite{COMPASS} data points.}
\vspace{0.1in}
\begin{tabular}{|c|c|}
\hline
$x$ range & $\Gamma_1^{NS}$\\
\hline
0 - 0.004 & 0.0098\\
\hline
0.004 - 0.7 & $0.175 \pm 0.009 \pm 0.015$\\
\hline
0.7 - 1.0 & 0.0048 \\
\hline
0 - 1 & $0.190 \pm 0.009 \pm 0.015$ \\
\hline
\end{tabular}
\end{center}
\end{table}
In Table 3, we show predictions for the partial $\hat\Gamma_1^{NS}$ at $Q^2=2.5 GeV^2$ and $5GeV^2$, taking the contribution from the region $0.0046\leq x\leq0.28$ where the analytical models work reasonably well. The results are far less than the corresponding experimental values of $\Gamma_1^{NS}$. The remaining part of $\Gamma_1^{NS}$ is contributed by the small $x$ and large $x$ partons having $x\leq 0.0046$ and $x\geq 0.28$ respectively. \\
\begin{table}[!ht]
\label{table:Integrals of $g_1^{NS}$}
\begin{center}
\caption[Partial Integrals of $g_1^{NS}$]{Predictions for the partial integrals of $g_1^{NS}$ ($\hat\Gamma_1^{NS}$) obtained for the solutions Eq.(\ref{eqn:solnspol1}), Eq.(\ref{eqn:solnspol2}) and Eq.(\ref{solnlagmethod1}) with $x_a=0.0046$ and $x_b=0.28$ for $Q^2=2.5 GeV^2$ and $5GeV^2$.}
\vspace{0.1in}
\begin{tabular}{|c|c|c|}
\hline
Solutions &$Q^2=2.5GeV^2$ & $Q^2=5GeV^2$\\
\hline
MOC1,Eq.(\ref{eqn:solnspol1}) & 0.08939 & 0.08532 \\
\hline
MOC2,Eq.(\ref{eqn:solnspol2}) & 0.03579 & 0.03426\\
\hline
Lagrange's method Eq.(\ref{solnlagmethod1}) & 0.054 & 0.0667\\
\hline
\end{tabular}
\end{center}
\end{table}
As the present formalism is expected to be valid at small $x$, we also estimate the prediction for $\hat\Gamma_1^{NS}$ for the present models, assuming their validity in the small $x$ range $0\leq x\leq 0.0046$ separately. The results are shown in Table 4 for $Q^2=2.5 GeV^2$ and $5GeV^2$; being below the experimental value \cite{COMPASS}, they suggest that the large $x$ ($x\geq 0.28$) contribution is necessary for our analytical models to account for the experimental value. \\
\begin{table}[!ht]
\label{table:Integrals of $g_1^{NS}$}
\begin{center}
\caption[Integrals of $g_1^{NS}$]{Integrals of $g_1^{NS}$ for analytical solutions Eq.(\ref{eqn:solnspol1}), Eq.(\ref{eqn:solnspol2}), Eq.(\ref{solnlagmethod1}) in the limited small $x$ range $0\leq x\leq 0.0046$ at $Q^2=2.5GeV^2$ and $5GeV^2$. }
\vspace{0.1in}
\begin{tabular}{|l|c|c|}
\hline
Solutions &$Q^2=2.5 GeV^2$ & $Q^2=5 GeV^2$ \\
\hline
MOC1,Eq.(\ref{eqn:solnspol1}) & 0.0008853 & 0.0009036 \\
\hline
MOC2,Eq.(\ref{eqn:solnspol2}) &0.0004845 & 0.0004753\\
\hline
Lagrange's method,Eq.(\ref{solnlagmethod1}) & 0.0001248 & 0.0001529\\
\hline
\end{tabular}
\end{center}
\end{table}
Though our analytical models work reasonably well within the small $(x,Q^2)$ range discussed above, we still check their contribution to $\Gamma_1^{NS}$ in the high $x$ range $(0.0046\leq x \leq0.7)$, for completeness of the comparative study. The respective values of $\Gamma_1^{NS}$ at $Q^2=2.5GeV^2$ and $5GeV^2$ are shown in Table 5 below. The values are lower than the experimental value for COMPASS, $0.175\pm 0.009\pm 0.015$, within the region $0.004\leq x \leq0.7$ at $Q^2=3GeV^2$. But at increased $Q^2$ our analytical solution derived by Lagrange's method gives higher values. However, as our analytical solution Eq.(\ref{eqn:solnspol1}) given by the method of characteristics is not valid beyond $x\geq 0.28$, we do not quote a value of $\Gamma_1^{NS}$ for that model in the high $x$ range.\\
\begin{table}[!ht]
\label{table:Integrals of $g_1^{NS}$}
\begin{center}
\caption[Integrals of $g_1^{NS}$]{Integrals of $g_1^{NS}$ for analytical solutions Eq.(\ref{eqn:solnspol2}), Eq.(\ref{solnlagmethod1}) in the high $x$ range $0.0046\leq x\leq 0.7$ at $Q^2=2.5GeV^2$ and $5GeV^2$ .}
\vspace{0.1in}
\begin{tabular}{|l|c|c|}
\hline
Solutions &$Q^2=2.5 GeV^2$ & $Q^2=5 GeV^2$ \\
\hline
MOC2,Eq.(\ref{eqn:solnspol2}) &0.131 & 0.124\\
\hline
Lagrange's method,Eq.(\ref{solnlagmethod1}) & 0.161 & 0.197\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this work we have calculated the non-singlet spin structure function $g_1^{NS}$ using two analytical methods: Lagrange's method and the method of characteristics. The analytical solutions are in good agreement with the experimental data from both HERMES and COMPASS within a comparatively small $x$ range $0.0046\leq x\leq 0.28$ and $Q^2$ range $1.1 GeV^2 \leq Q^2 \leq 17.2 GeV^2$. From our analysis (both graphical and $\chi^2$) we conclude that our analytical solution Eq.(\ref{eqn:solnspol1}), obtained by the method of characteristics, compares best in the $(x,Q^2)$ range defined above. We have also calculated the partial Bjorken integrals contributed by the small $x$ ($0.0046\leq x\leq 0.28$) non-singlet partons for the analytical models. The behaviour of the analytical models in NLO is currently under study.
\section{Index-Based Local Exploration Algorithms} \label{sec.algo}
In this section, we first introduce a useful core index and the index construction algorithm. Then, we present the index-based intimate-core group search algorithms using local exploration.
\subsection{K-Core Index}
\begin{algorithm}[t]
\small
\caption{Core Index Construction} \label{algo.index}
\begin{flushleft}
\textbf{Input:} A weighted graph $G=(V, E, w)$\\
\textbf{Output:} Coreness $\delta(v)$ for each $v \in V_{G}$\\
\end{flushleft}
\
\begin{algorithmic}[1]
\STATE Sort all nodes in $G$ in ascending order of their degree;
\STATE \textbf{while} $G \neq \emptyset$
\STATE \hspace{0.3cm} Let $d$ be the minimum degree in $G$;
\STATE \hspace{0.3cm} \textbf{while} there exists $deg_{G}(v) \leq d$
\STATE \hspace{0.3cm} \hspace{0.3cm} $\delta(v) \leftarrow d$;
\STATE \hspace{0.3cm} \hspace{0.3cm} Remove $v$ and its incident edges from $G$;
\STATE \hspace{0.3cm} \hspace{0.3cm} Re-order the remaining nodes in $G$ in ascending order of their degree;
\STATE Store $\delta(v)$ in index for each $v \in V_{G}$;
\end{algorithmic}
\end{algorithm}
We start with a useful definition of coreness as follows.
\begin{definition}
[Coreness] The coreness of a node $v\in V$, denoted by $\delta(v)$, is the largest number $k$ such that there exists a connected $k$-core containing $v$.
\end{definition}
Obviously, for a node $q$ with the coreness $\delta(q)= l$, there exists a connected $k$-core containing $q$ where $1\leq k\leq l$; meanwhile, there is no connected $k$-core containing $q$ where $k >l$. The $k$-core index keeps the coreness of all nodes in $G$.
\stitle{K-core index construction.} We apply the existing core decomposition~\cite{batagelj2003m} on graph $G$ to construct the $k$-core index. The algorithm is outlined in Algorithm~\ref{algo.index}. The core decomposition is to compute the coreness of each node in graph $G$. Note that for the self-completeness of our techniques and reproducibility, the detailed algorithm of core decomposition is also presented (lines 1-7).
First, the algorithm sorts all nodes in $G$ in ascending order of their degree. Second, it finds the minimum degree in $G$, denoted $d$. Based on the definition of $k$-core, it then sets the coreness of nodes with $deg_{G}(v)=d$ to $d$ and removes these nodes and their incident edges from $G$. With the deletion of these nodes, the degrees of their neighbors decrease. Any node whose degree drops to at most $d$ cannot belong to the $(d+1)$-core, so it also gets $\delta(v)=d$. The removal of nodes continues until no node has $deg_{G}(v) \leq d$. Then, the algorithm goes back to line 2 and starts a new iteration to compute the coreness of the remaining nodes.
Finally, it stores the coreness of each vertex $v$ in $G$ as the $k$-core index.
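For concreteness, the peeling procedure of Algorithm~\ref{algo.index} can be sketched as follows (a minimal Python sketch assuming an adjacency map from each node to its set of neighbors; a bucket-based implementation as in~\cite{batagelj2003m} achieves $O(|E|)$ time):
\begin{verbatim}
def core_decomposition(adj):
    # adj: dict mapping node -> set of neighbors
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    coreness, remaining, d = {}, set(adj), 0
    while remaining:
        d = max(d, min(deg[v] for v in remaining))
        stack = [v for v in remaining if deg[v] <= d]
        while stack:                  # cascade removals at level d
            v = stack.pop()
            if v not in remaining:
                continue
            coreness[v] = d
            remaining.discard(v)
            for u in adj[v]:
                if u in remaining:
                    deg[u] -= 1
                    if deg[u] <= d:
                        stack.append(u)
    return coreness
\end{verbatim}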
\subsection{Solution Overview}
\begin{algorithm}[t]
\small
\caption{\leks Framework} \label{algo.fm}
\begin{flushleft}
\textbf{Input:} $G=(V, E, w)$, an integer $k$, a set of query vertices $Q$\\
\textbf{Output:} Intimate-core group $H$\\
\end{flushleft}
\
\begin{algorithmic}[1]
\STATE Find a tree $T_Q$ for query nodes $Q$ using Algorithm~\ref{algo.tree} or Algorithm~\ref{algo.path};
\STATE Expand the tree $T_Q$ to a candidate graph $G_{Q}$ in Algorithm~\ref{algo.expand};
\STATE Apply \icgm~\cite{zheng2017querying} on graph $G_{Q}$;
\STATE Return a refined intimate-core group as answers;
\end{algorithmic}
\end{algorithm}
\begin{figure}[!h]
\includegraphics[width=\textwidth]{Figure/fm.eps}
\caption{\leks framework for intimate-core group search}
\label{fig.fm}
\end{figure}
At a high level, our algorithm of \underline{l}ocal \underline{e}xploration based on \underline{k}-core index for intimate-core group \underline{s}earch (\leks) consists of three phases:
\begin{enumerate}
\item \emph{Tree Generation Phase}: This phase invokes the shortest path algorithm to find the distance between any pair of nodes, and then constructs a small-weight tree connecting all query nodes.
\item \emph{Expansion Phase}: This phase expands a tree into a graph. It applies the idea of local exploration to add nodes and edges. Finally, it obtains a connected $k$-core containing all query nodes.
\item \emph{Intimate-Core Refinement Phase}: This phase removes nodes with large weights, and maintains the candidate answer as a connected $k$-core. This refinement process stops until an intimate-core group is obtained.
\end{enumerate}
Fig.~\ref{fig.fm} shows the whole framework of our index-based local exploration algorithm. Note that we compute the $k$-core index offline and apply the above solution for online query processing of intimate-core group search. In addition, we consider $|Q|\geq 2$ for the tree generation phase, and skip this phase if $|Q|=1$. Algorithm~\ref{algo.fm} also depicts our algorithmic framework of \leks.
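A compact sketch of the online pipeline is given below (in Python-style pseudocode; the helper functions are placeholders for the corresponding phases, not part of our implementation, which is written in Java):
\begin{verbatim}
def leks(G, Q, k, coreness):
    if len(Q) >= 2:
        T = build_seed_tree(G, Q, k, coreness)    # Phase 1: tree generation
    else:
        T = set(Q)                                # |Q| = 1: skip Phase 1
    G_Q = expand_to_kcore(G, T, Q, k, coreness)   # Phase 2: tree-to-graph expansion
    return refine_icgm(G_Q, Q, k)                 # Phase 3: ICG-M refinement
\end{verbatim}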
\section{Experiments} \label{sec.exp}
\begin{figure} [t]
\begin{tabular} {@{\hspace{0.02\textwidth}}c@{\hspace{0.02\textwidth}} c@{\hspace{0.02\textwidth}} c}
\centering
\hfil\includegraphics[width=0.32\textwidth]{Figure/gw_k_wiki.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/gw_k_flickr.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/gw_k_dblp.eps} \\
\newline
(a) wiki-vote & (b) Flickr & (c) DBLP\\
\end{tabular}
\caption{Effectiveness evaluation by varying k} \label{fig.quality_k}
\end{figure}
\begin{figure} [t]
\begin{tabular} {@{\hspace{0.02\textwidth}}c@{\hspace{0.02\textwidth}} c@{\hspace{0.02\textwidth}} c}
\centering
\hfil\includegraphics[width=0.32\textwidth]{Figure/time_k_wiki.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/time_k_flickr.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/time_k_dblp.eps}\\
\newline
(a) wiki-vote & (b) Flickr & (c) DBLP\\
\end{tabular}
\caption{Efficiency evaluation by varying k}\label{fig.time_k}
\end{figure}
In this section, we experimentally evaluate the performance of our proposed algorithms. All algorithms are implemented in Java and performed on a Linux server with Xeon E5-2630 (2.2 GHz) and 256 GB RAM.
\stitle{Datasets.} We use three real-world datasets in our experiments. All datasets are publicly available from~\cite{huang2016truss}. The edge weight represents the existence probability of an edge; a smaller weight indicates a higher possibility of the edge existing. The statistics of the datasets are shown in Table~\ref{tab.dataset}. The maximum coreness is $\delta_{max} =\max_{v\in V} \delta(v)$.
\begin{table}[h]
\centering
\caption{Network statistics}\label{tab.dataset}
\begin{tabular}{|l|l|l|l|}
\hline
Datasets & $|V|$ & $|E|$ & $\delta_{max}$ \\
\hline
wiki-vote & 7,115 & 103,689 & 56 \\
Flickr & 24,125 & 300,836 & 225 \\
DBLP & 684,911 & 2,284,991 & 114 \\
\hline
\end{tabular}
\end{table}
\stitle{Algorithms.} We compare 3 algorithms as follows.
\squishlisttight
\item \icgm: is the state-of-the-art approach for finding intimate-core group using bulk deletion~\cite{zheng2017querying}.
\item \ltree: is our index-based search framework in Algorithm~\ref{algo.fm} using Algorithm~\ref{algo.tree} for tree generation.
\item \lpath: is our index-based search framework in Algorithm~\ref{algo.fm} using Algorithm~\ref{algo.path} for tree generation.
\end{list}
We evaluate all algorithms by comparing the running time and the intimate-core group weight. The less running time an algorithm costs, the more efficient it is; the smaller the group weight of the answer, the better the effectiveness.
\stitle{Queries and parameters.} We evaluate all competing approaches by varying the parameters $k$ and $|Q|$. The range of $k$ is \{2, 4, 6, 8\}. The number of query nodes $|Q|$ falls in \{1, 2, 3, 4, 5, 6, 7\}. We randomly generate 100 sets of queries for different $k$ and $|Q|$.
\stitle{Exp-1: Varying $k$.} Fig.~\ref{fig.quality_k} shows the group weight achieved by the three algorithms when varying the parameter $k$ on all datasets. The results show that our local search methods \ltree and \lpath can find intimate groups with lower group weights than \icgm for different $k$. The performances of \ltree and \lpath are similar.
Fig.~\ref{fig.time_k} shows that \lpath performs the best in most cases, and runs significantly faster than \icgm. Interestingly, \icgm can find answers quickly for $k=4$, achieving performance similar to the \leks methods.
\begin{figure} [ht]
\begin{tabular} {@{\hspace{0.02\textwidth}}c@{\hspace{0.02\textwidth}} c@{\hspace{0.02\textwidth}} c}
\centering
\hfil\includegraphics[width=0.32\textwidth]{Figure/gw_q_wiki.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/gw_q_flickr.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/gw_q_dblp.eps}\\
\newline
(a) wiki-vote & (b) Flickr & (c) DBLP\\
\end{tabular}
\caption{Effectiveness evaluation by varying $|Q|$}\label{fig.quality_q}
\end{figure}
\begin{figure} [ht]
\begin{tabular} {@{\hspace{0.02\textwidth}}c@{\hspace{0.02\textwidth}} c@{\hspace{0.02\textwidth}} c}
\centering
\hfil\includegraphics[width=0.32\textwidth]{Figure/time_q_wiki.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/time_q_flickr.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/time_q_dblp.eps}\\
\newline
(a) wiki-vote & (b) Flickr & (c) DBLP\\
\end{tabular}
\caption{Efficiency evaluation by varying $|Q|$}\label{fig.time_q}
\end{figure}
\stitle{Exp-2: Varying $|Q|$.} Fig.~\ref{fig.quality_q} reports the group weight results of the three algorithms for different queries when varying $|Q|$. With increasing $|Q|$, \ltree and \lpath can always find intimate groups with smaller weights than \icgm. \ltree and \lpath perform similarly. Fig.~\ref{fig.time_q} reports the running time results. It shows that our methods are always faster than \icgm.
\stitle{Exp-3: Quality evaluation of candidate intimate-core groups.}
This experiment evaluates the subgraphs of candidate intimate-core groups by all methods, in terms of vertex size and group weight.
\icgm takes the maximal connected $k$-core subgraph containing query nodes as an initial candidate, and iteratively shrinks it.
\ltree and \lpath both generate an initial candidate subgraph locally expanded from a tree, and then iteratively shrink the candidate by removing nodes.
We use $k=6$ and $|Q|=5$.
We report the results of the first 5 removal iterations and the initial candidate at the \#iteration of 0. Fig.~\ref{fig.iteration}(a) shows that the group weight of candidates by our methods is much smaller than \icgm. Fig.~\ref{fig.iteration}(b) reports the vertex size of all candidates at each iteration. The number of vertices in the candidate group by \ltree and \lpath at the \#iteration of 0, is even less than the vertex size of candidate group by \icgm at the \#iteration of 5.
\begin{figure}[ht]
\begin{tabular} {@{\hspace{0.07\textwidth}}c @{\hspace{0.05\textwidth}}c}
\centering
\hfil\includegraphics[width=0.38\textwidth]{Figure/gw.eps} &
\hfil\includegraphics[width=0.38\textwidth]{Figure/vn.eps}\\
\newline
(a) Group weight varied by \#iterations & (b) Number of vertices varied by \#iterations \\
\end{tabular}
\caption{The size and weight of intimate-groups varied by \#iterations}\label{fig.iteration}
\end{figure}
\stitle{Exp-4: Case study on the DBLP network.} We conduct a case study of intimate-core group search on the DBLP collaboration network~\cite{zheng2017querying}. Each node represents an author, and an edge is added between two authors if they have co-authored papers. The weight of an edge $(u,v)$ is the reciprocal of the number of papers the two authors have co-authored; the smaller the weight of $(u,v)$, the closer the intimacy between authors $u$ and $v$. We use the query $Q=$\{``Huan Liu", ``Xia Hu", ``Jiliang Tang"\} and $k=4$. We apply \lpath and \icgm to find 4-core intimate groups for $Q$. The results of \lpath and \icgm are shown in Fig.~\ref{fig.case}(a) and Fig.~\ref{fig.case}(b) respectively. Bolder edges represent smaller weights, indicating closer intimate relationships. Our \leks method discovers a compact 4-core with 5 nodes and 10 edges in Fig.~\ref{fig.case}(a), which has a group weight of 1.6, while \icgm finds a subgraph with 12 nodes, which has a much larger group weight of 16.7 in Fig.~\ref{fig.case}(b). We can see that the nodes on the right side of Fig.~\ref{fig.case}(b) have no co-author connections with the two query nodes ``Xia Hu" and ``Jiliang Tang" at all. This case study verifies that \lpath can successfully find a better intimate-core group than \icgm.
\begin{figure}[ht]
\begin{tabular} {@{\hspace{0.05\textwidth}}c@{\hspace{0.03\textwidth}} c}
\centering
\hfil\includegraphics[width=0.30\textwidth]{Figure/huanliu_path.eps} &
\hfil\includegraphics[width=0.53\textwidth]{Figure/huanliu_ori.eps}\\
\newline
(a) \lpath & (b) \icgm \\
\end{tabular}
\caption{Case study of intimate-core group search on the DBLP network. Here, query $Q=$\{``Huan Liu", ``Xia Hu", ``Jiliang Tang"\} and $k=4$.}\label{fig.case}
\end{figure}
\section{Introduction}
Graphs widely exist in social networks, biomolecular structures, traffic networks, the world wide web, and so on. Weighted graphs carry not only the topological structure but also edge weights. The edge weight is often used to indicate the strength of a relationship, such as the interval between social communications, traffic flow in a transportation network, carbon flow in a food chain, and so on~\cite{newman2001scientific,opsahl2010node,newman2004analysis}. Weighted graphs provide information that better describes the organization and hierarchy of the network, which is helpful for community detection~\cite{newman2004analysis} and community search~\cite{hlx2019community,yuan2017index,fang2019survey,huang2014querying}. Community detection aims at finding all communities in the entire network, which has been studied extensively in the literature. Different from community detection, the task of community search finds only query-dependent communities, which has wide applications in disease infection control, tag recommendation, and social event organization~\cite{sozio2010community,zheng2017querying}. Recently, several community search models have been proposed based on different dense subgraphs such as $k$-core~\cite{batagelj2003m,sariyuce2013streaming} and $k$-truss~\cite{wang2012truss,huang2014querying}.
As a notion of dense subgraph, the $k$-core requires that every vertex has at least $k$ neighbors in the $k$-core. For example, Fig.~\ref{fig.motivate}(a) shows a graph $G$. Subgraphs $G_1$ and $G_2$ are both connected 3-cores, in which each vertex has at least three neighbors. The $k$-core has been widely used in many community search models~\cite{zhu2018k,fang2016effective,li2015influential,sozio2010community,barbieri2015efficient,zhu2018k,medya2019k}. Recently, Zheng et al.~\cite{zheng2017querying} proposed the problem of intimate-core group search in weighted graphs as follows.
\begin{figure}[ht]
\begin{tabular} {@{\hspace{0.1\textwidth}}c @{\hspace{0.06\textwidth}} c}
\includegraphics[width=0.50\textwidth]{Figure/original.eps} &
\includegraphics[width=0.15\textwidth]{Figure/sample-figure1.eps}\\
\newline
(a) Graph $G$ & (b) Intimate-core group\\
\end{tabular}
\caption{An example of intimate-core group search in graph $G$ for $Q=\{v_8, v_{10}\}$ and $k=3$.}
\label{fig.motivate}
\end{figure}
\stitle{Motivating example.} Consider a social network $G$ in Fig.~\ref{fig.motivate}(a). Two individuals have a closer friendship if they have a shorter interval for communication, indicating a smaller weight of the relationship edge. The problem of intimate-core group search aims at finding a densely-connected $k$-core containing query nodes $Q$ with the smallest group weight as an answer. For $Q=\{v_{8}, v_{10}\}$ and $k=3$, the intimate-core group is shown in Fig.~\ref{fig.motivate}(b) with a minimum group weight of 13.
This paper studies the problem of intimate-core group search in weighted graphs. Given an input of query nodes in a graph and a number $k$, the problem is to find a connected $k$-core containing the query nodes with the smallest weight. In the literature, the existing solution proposed in~\cite{zheng2017querying} finds the maximal connected $k$-core and iteratively removes nodes from this subgraph for intimate-core group refinement. However, this approach may take a large number of iterations, which is inefficient for big graphs with a large $k$-core component. Therefore, we propose a local exploration solution that finds a small candidate $k$-core and takes only a few iterations to find answers. To further improve efficiency, we build a $k$-core index, which keeps the structural information of $k$-cores for fast identification. Based on the $k$-core index, we develop a local exploration algorithm \leks for intimate-core group search. Our algorithm \leks first generates a tree to connect all query nodes, and then expands it to a connected subgraph of the $k$-core. Finally, \leks keeps refining candidate graphs into an intimate-core group with a small weight. We propose several well-designed strategies for \leks to ensure the efficient generation of high-quality answers.
\stitle{Contributions.} Our main contributions of this paper are summarized as follows.
\begin{itemize}
\item We investigate and tackle the problem of intimate-core group search in weighted graphs, which has wide applications on real-world networks.
The problem is \NPhard, which brings challenges to developing efficient algorithms.
\item We develop an efficient local exploration framework of \leks based on the $k$-core index for intimate-core group search. \leks consists of three phases: tree generation, tree-to-graph expansion, and intimate-core refinement.
\item In the phase of tree generation, we propose to find a seed tree to connect all query nodes, based on two generation strategies of \emph{spanning tree} and \emph{weighted path} respectively. Next, we develop the tree-to-graph expansion, which constructs a hierarchical structure by expanding a tree to a connected $k$-core subgraph level by level. Finally, we refine a candidate $k$-core to an intimate-core group with a small weight. During the phases of expansion and refinement, we design a protection mechanism for query nodes, which protects critical nodes whose removal would collapse the $k$-core.
\item Our experimental evaluation demonstrates the effectiveness and efficiency of our \leks algorithm on real-world weighted graphs. We show the superiority of our methods in finding intimate groups with smaller weights, against the state-of-the-art \icgm method~\cite{zheng2017querying}.
\end{itemize}
\stitle{Roadmap.} The rest of the paper is organized as follows. Section~\ref{sec.relate} reviews the previous work related to ours. Section~\ref{sec.problem} presents the basic concepts and formally defines our problem. Section~\ref{sec.algo} introduces our index-based local exploration approach \leks. Section~\ref{sec.exp} presents the experimental evaluation. Finally, Section~\ref{sec.con} concludes the paper.
\section{Conclusion} \label{sec.con}
This paper presents a local exploration $k$-core search (\leks) framework for efficient intimate-core group search. \leks generates a spanning tree to connect query nodes in a compact structure, and locally expands it for intimate-core group refinement. Extensive experiments on real datasets show that our approach achieves a higher quality of answers using less running time, in comparison with the existing \icgm method.
\input{ms.bbl}
\end{document}
\subsection{Tree-to-Graph Expansion}
\begin{algorithm} [t]
\small
\caption{Tree-to-Graph Expansion} \label{algo.expand}
\begin{flushleft}
\textbf{Input:} $G=(V, E, w)$, a set of query vertices $Q$, $k$-core index, $T_Q$\\
\textbf{Output:} Candidate subgraph $G_{Q}$\\
\end{flushleft}
\
\begin{algorithmic}[1]
\STATE Identify the maximal connected $k$-core of $C_k$ containing query nodes $Q$;
\STATE $L_{0} \leftarrow \{ v | v \in V_{T_Q} \}$; $L' \leftarrow L_{0}$ ;
\STATE $i \leftarrow 0$; $G_{Q} \leftarrow \emptyset$;
\STATE \textbf{while} $G_{Q} = \emptyset$ \textbf{do}
\STATE \hspace{0.3cm} \textbf{for} each $v \in L_{i}$ \textbf{do}
\STATE \hspace{0.3cm} \hspace{0.3cm} \textbf{for} each $u \in N_{C_{k}}(v)$ and $u \notin L' \cup L_{i+1}$ \textbf{do}
\STATE \hspace{0.3cm} \hspace{0.3cm} \hspace{0.3cm} $L_{i+1} \leftarrow L_{i+1} \cup \{u\}$;
\STATE \hspace{0.3cm} $L' \leftarrow L' \cup L_{i+1}$; $i \leftarrow i+1$;
\STATE \hspace{0.3cm}
Let $G_L$ be the induced subgraph of $G$ by the node set $L'$;
\STATE \hspace{0.3cm} Generate a connected $k$-core of $G_{L}$ containing query nodes $Q$ as $G_{Q}$;
\STATE \textbf{return} $G_{Q}$;
\end{algorithmic}
\end{algorithm}
In this section, we introduce the phase of tree-to-graph expansion. This method expands the obtained tree from Algorithm~\ref{algo.tree} or Algorithm~\ref{algo.path} into a connected $k$-core candidate subgraph $G_{Q}$. It consists of two main steps. First, it adds nodes/edges to expand the tree into a graph layer by layer. Then, it prunes disqualified nodes/edges to maintain the remaining graph as a connected $k$-core. The whole procedure is shown in Algorithm~\ref{algo.expand}.
Algorithm~\ref{algo.expand} first collects all nodes of $T_{Q}$ into $L_0$ (line 2). Let $L_{i}$ be the vertex set at the $i$-th depth of the expansion, with $L_{0}$ the initial set of vertices. We use $L'$ to denote the set of candidate vertices, i.e., the union of all sets $L_{i}$. The iterative procedure consists of three steps (lines 4-10). First, for each vertex $v$ in $L_{i}$, it adds $v$'s unexplored neighbors into $L_{i+1}$ (lines 5-7). Next, it merges $\{ L_0, ..., L_{i+1} \}$ into $L'$ and constructs a candidate graph $G_L$ as the subgraph of $G$ induced by the node set $L'$ (lines 8-9).
Finally, we apply the core decomposition algorithm on $G_L$ to find the connected $k$-core subgraph containing all query nodes, denoted by $G_Q$. If no such $G_Q$ exists, Algorithm~\ref{algo.expand} explores the $(i+1)$-th depth of the expansion and repeats the above procedure (lines 4-10). In the worst case, $G_Q$ is exactly the maximal connected $k$-core subgraph containing $Q$; in practice, however, $G_Q$ is usually much smaller. The time complexity of the expansion is $O(\sum_{i=0}^{l_{max}} \sum_{v\in V(G_i)} \deg(v) )$, where $l_{max}$ is the number of expansion iterations in Algorithm~\ref{algo.expand}.
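To make the procedure concrete, the following minimal Python sketch mirrors the expansion loop of Algorithm~\ref{algo.expand}; it is illustrative only, assuming a \textsf{networkx}-style graph interface, with the helper \texttt{connected\_kcore} returning the connected $k$-core containing all of $Q$ (or \texttt{None}).
\begin{verbatim}
import networkx as nx

def connected_kcore(H, Q, k):
    """Return the connected k-core of H containing all query nodes Q, or None."""
    core = nx.k_core(H, k)                      # largest subgraph of min degree >= k
    for comp in nx.connected_components(core):  # its components are connected k-cores
        if set(Q) <= comp:
            return core.subgraph(comp)
    return None

def tree_to_graph_expansion(C_k, Q, k, tree_nodes):
    """Expand the seed tree T_Q level by level inside the k-core C_k."""
    frontier = set(tree_nodes)   # L_i: the current layer, initialized with V(T_Q)
    explored = set(tree_nodes)   # L': union of all layers generated so far
    while True:
        next_layer = {u for v in frontier for u in C_k.neighbors(v)} - explored
        explored |= next_layer
        frontier = next_layer
        G_Q = connected_kcore(C_k.subgraph(explored), Q, k)
        if G_Q is not None or not next_layer:
            return G_Q   # worst case: the maximal connected k-core containing Q
\end{verbatim}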
\begin{figure}[ht]
\begin{tabular} {@{\hspace{0.1\textwidth}}c @{\hspace{0.1\textwidth}} c@{\hspace{0.1\textwidth}}}
\centering
\hfil\includegraphics[width=0.32\textwidth]{Figure/expansion-1.eps} &
\hfil\includegraphics[width=0.32\textwidth]{Figure/after-expansion.eps} \\
\newline
(a) Expansion & (b) Candidate subgraph construction\\
\end{tabular}
\caption{Tree-to-graph expansion}\label{fig.expand}
\end{figure}
\begin{example} Fig.~\ref{fig.motivate}(a) shows a weighted graph $G$ with query $Q = \{ v_{8}, v_{10} \}$ and $k=3$. We first identify the maximal connected 3-core containing the query nodes $Q$. Since there are only two query nodes, the spanning tree coincides with the shortest path between them, i.e., $T_{Q}=\spath_{C_3} (v_{8}, v_{10})$. Next, we initialize $L_0 = \{v_{8}, v_{10}\}$ and expand the nodes in $L_0$ to their neighbors. The expansion procedure is shown in Fig.~\ref{fig.expand}(a). We put all nodes in Fig.~\ref{fig.expand}(a) into $L'$ and construct the candidate subgraph $G_L$ shown in Fig.~\ref{fig.expand}(b). Since $G_L$ is a connected 3-core subgraph containing the query nodes, the expanded graph $G_Q$ is $G_L$ itself.
\end{example}
\subsection{Intimate-Core Refinement}
This phase refines the candidate connected $k$-core into an answer for the intimate-core group. We apply the existing approach \icgm~\cite{zheng2017querying}, which removes nodes to shrink the candidate graph obtained from Algorithm~\ref{algo.expand}.
This step takes $O(m' \log_{\varepsilon}n')$ time, where $\varepsilon > 0$ is a parameter of the graph-shrinking procedure~\cite{zheng2017querying}. To avoid query nodes being deleted by the removal process of \icgm, we develop a mechanism that protects important query nodes.
\stitle{Protection mechanism for query nodes.} As pointed out in~\cite{zhang2017finding,bhawalkar2015preventing,zhang2017olak}, the $k$-core structure may collapse when critical nodes are removed. Thus, we precompute such critical nodes for the query nodes in the $k$-core and ensure that they are never deleted. We use an example to illustrate the idea. If a query node $q$ has degree exactly $k$, then deleting any of its neighbors leaves no feasible $k$-core containing $q$. Thus, $q$ and all of $q$'s neighbors need to be protected. For example, in Fig.~\ref{fig.expand}(b), assume that $k=3$; then $deg_{G}(v_{10})=k$, and the removal of any node in $N_{G}(v_{10})$ would trigger core decomposition and cause the deletion of $v_{10}$.
This protection mechanism for query nodes can also be used for $k$-core maintenance in the phase of tree-to-graph expansion.
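As a minimal illustration of this mechanism, the protected set for the degree-$k$ case above can be computed as follows (a simplified Python sketch, assuming a \textsf{networkx}-style graph interface; the full mechanism in \leks may protect further critical nodes).
\begin{verbatim}
def protected_nodes(H, Q, k):
    """Each query node is protected; if a query node q has degree exactly k,
    all of q's neighbors are protected as well, since deleting any of them
    would leave no feasible k-core containing q."""
    protected = set(Q)
    for q in Q:
        if H.degree(q) == k:
            protected |= set(H.neighbors(q))
    return protected
\end{verbatim}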
\section{Preliminaries} \label{sec.problem}
In this section, we formally define the problem of intimate-core group search and revisit the existing intimate-core group search approaches.
\subsection{Problem Definition}
Let $G(V, E, w)$ be a weighted and undirected graph, where $V$ is the set of nodes, $E$ is the set of edges, and $w$ is an edge weight function; we use $w(e)$ to denote the weight of an edge $e\in E$.
The number of nodes in $G$ is $n = |V|$ and the number of edges is $m =|E|$. We denote the set of neighbors of a node $v$ by $N_{G}(v) = \{u\in V: (u,v)\in E\}$, and the degree of $v$ by $deg_{G}(v)=|N_{G}(v)|$. For example, Fig.~\ref{fig.motivate}(a) shows a weighted graph $G$. Node $v_{5}$ has two neighbors, $N_{G}(v_{5}) = \{v_{4}, v_{6}\}$, so the degree of $v_{5}$ in $G$ is $deg_{G}(v_{5})=2$. Edge $(v_2, v_3)$ has a weight of $w(v_2, v_3) =1$. Based on the notion of degree, we define the $k$-core as follows.
\begin{definition}
[K-Core~\cite{batagelj2003m}] Given a graph $G$, the $k$-core is the largest subgraph $H$ of $G$ such that every node $v$ has degree at least $k$ in $H$, i.e., $deg_{H}(v)\geq k$.
\end{definition}
For a given integer $k$, the $k$-core of graph $G$ is denoted by $C_{k}(G)$; it is unique by virtue of the largest-subgraph requirement in the definition. For example, the 3-core of $G$ in Fig.~\ref{fig.motivate}(a) has two components, $G_1$ and $G_2$: every node has at least 3 neighbors within $G_1$ and $G_2$, respectively, but the two components are disconnected from each other in the 3-core $C_3(G)$. To incorporate connectivity into the $k$-core, we define the connected $k$-core.
\begin{definition}
[Connected K-Core] Given a graph $G$ and a number $k$, a connected $k$-core $H$ is a connected subgraph of $G$ such that every node $v$ has degree at least $k$ in $H$, i.e., $deg_H(v)\geq k$.
\end{definition}
Intuitively, all nodes are mutually reachable in a connected $k$-core, i.e., there exists a path between any pair of nodes. $G_1$ and $G_2$ are two connected 3-cores in Fig.~\ref{fig.motivate}(a).
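For reference, the $k$-core and its connected components can be computed by the standard peeling procedure; a minimal Python sketch (assuming a \textsf{networkx}-style graph interface) is given below.
\begin{verbatim}
def k_core(G, k):
    """Peeling: repeatedly delete nodes of degree < k; the remainder is C_k(G).
    The connected components of the result are the connected k-cores."""
    H = G.copy()
    while True:
        low = [v for v in H.nodes if H.degree(v) < k]
        if not low:
            return H
        H.remove_nodes_from(low)
\end{verbatim}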
\begin{definition}
[Group Weight] Given a subgraph $H \subseteq G$, the group weight of $H$, denoted by $w(H)$, is defined as the sum of all edge weights in $H$, i.e., $w(H) = \sum_{e\in E(H)} w(e)$.
\end{definition}
\begin{example}
For the subgraph $G_1 \subseteq G$ in Fig.~\ref{fig.motivate}(a), the group weight of $G_1$ is $w(G_1)= \sum_{e\in E(G_1)} w(e)= 1+3+5+2+1+3 = 15$.
\end{example}
On the basis of the definitions of connected $k$-core and group weight, we define the \emph{intimate-core group} in a graph $G$ as follows.
\begin{definition}
[Intimate-Core Group~\cite{zheng2017querying}] Given a weighted graph $G = (V, E, w)$, a set of query nodes $Q$ and a number $k$, the intimate-core group is a subgraph $H$ of $G$ satisfying the following conditions:
\begin{itemize}
\item \textbf{Participation.} $H$ contains all the query nodes $Q$, i.e., $Q \subseteq V_{H}$;
\item \textbf{Connected K-Core.} $H$ is a connected $k$-core, i.e., $deg_{H}(v)\geq k$ for every $v\in V_H$;
\item \textbf{Smallest Group Weight.} The group weight $w(H)$ is the smallest; that is, there exists no $H' \subseteq G$ with $w(H') < w(H)$ that also satisfies the above two conditions.
\end{itemize}
\end{definition}
Condition (1), participation, ensures that the intimate-core group contains all query nodes. Condition (2), connected $k$-core, requires that all group members are densely connected, each with at least $k$ intimate neighbors. Condition (3), smallest group weight, ensures that the group has the minimum total weight, indicating the highest intimacy under the given edge semantics; a small edge weight reflects a high intimacy within the group. Overall, intimate-core groups have several significant advantages: they are small in size, offer personalized search for different queries, and capture close relationships with strong connections.
The problem of \emph{intimate-core group search} studied in this paper is formulated as follows.
\stitle{Problem formulation: } Given an undirected weighted graph $G(V, E, w)$, a number $k$, and a set of query nodes $Q$, the problem is to find the intimate-core group of $Q$.
\begin{example}
In Fig.~\ref{fig.motivate}(a), $G$ is a weighted graph with 12 nodes and 20 edges, where each edge has a positive weight. Given two query nodes $Q = \{ v_{8}, v_{10} \}$ and $k = 3$, the intimate-core group for $Q$ is the subgraph shown in Fig.~\ref{fig.motivate}(b). It is a connected 3-core containing the two query nodes $\{ v_{8}, v_{10} \}$, and it has the minimum group weight among all connected 3-core subgraphs containing $Q$.
\end{example}
\subsection{Existing Intimate-Core Group Search Algorithms}
The problem of intimate-core group search has been studied in the literature~\cite{zheng2017querying}. Two heuristic algorithms, namely \icgs and \icgm, have been proposed to deal with this problem in an online manner. No exact algorithm is known, as the problem has been proven to be \NPhard~\cite{zheng2017querying}; the NP-hardness is shown by reducing the NP-complete clique decision problem to the intimate-core group search problem.
Existing solutions \icgs and \icgm both first identify a maximal connected $k$-core as a candidate, and then remove, at each iteration, the node with the largest weight of its incident edges~\cite{zheng2017querying}. The difference between \icgs and \icgm lies in the node removal: \icgs removes one node per iteration, while \icgm removes a batch of nodes per iteration. Although \icgm can significantly reduce the total number of removal iterations required by \icgs, it still takes a large number of iterations on large networks. The reason is that the initial candidate subgraph connecting all query nodes is the maximal connected $k$-core, which may be too large to shrink efficiently. This, however, is not always necessary: if a small connected $k$-core exists around the query nodes, a few iterations may suffice to find answers. This paper therefore proposes a local exploration algorithm to find a smaller candidate subgraph. On the other hand, both \icgs and \icgm apply core decomposition to identify the $k$-core from scratch, which is also computationally expensive. To improve efficiency, we propose to construct an index offline and retrieve the $k$-core for queries online.
\section{Related Work} \label{sec.relate}
In the literature, numerous studies have investigated community search based on various kinds of dense subgraphs, such as the $k$-core~\cite{batagelj2003m,sariyuce2013streaming}, the $k$-truss~\cite{wang2012truss,huang2014querying} and cliques~\cite{yuan2017index,yuan2016diversified}. Community search has also been studied on various labeled graphs, including weighted graphs~\cite{duan2009community,zheng2017finding,zheng2017querying}, influential graphs~\cite{li2015influential,bi2018optimal}, and keyword-based graphs~\cite{fang2017effective,fang2016effective,huang2017attribute}.
Table~\ref{tab.related} compares different characteristics of existing community search studies and ours.
\vspace{5 pt}
\begin{table}[ht]
\centering
\small
\caption{A comparison of existing community search studies and ours}\label{tab.related}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}*{Method} & Dense Subgraph & Node & Edge & Local & \multirow{2}*{Index-based} & Multiple & \multirow{2}*{\NPhard}\\
~& Model & Type & Type & Search &~ & Query Nodes &~\\
\hline
\cite{yuan2016diversified} & clique & $\times$ & $\times$ & \checkmark & \checkmark & $\times$ & \checkmark\\
\cite{yuan2017index} & clique & $\times$ & $\times$ & $\times$ & \checkmark & \checkmark & \checkmark\\
\cite{huang2015approximate} & $k$-truss & $\times$ & $\times$ & \checkmark & \checkmark & \checkmark & \checkmark\\
\cite{zhu2018k,medya2019k} & $k$-core & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & \checkmark\\
\cite{cui2014local} & $k$-core & $\times$ & $\times$ & \checkmark & $\times$ & $\times$ & \checkmark\\
\cite{sozio2010community} & $k$-core & $\times$ & $\times$ & $\times$ & $\times$ & \checkmark & \checkmark\\
\cite{barbieri2015efficient} & $k$-core & $\times$ & $\times$ & \checkmark & \checkmark & \checkmark & \checkmark\\
\cite{huang2017attribute} & $k$-truss & keyword & $\times$ & \checkmark & \checkmark & \checkmark & \checkmark\\
\cite{fang2016effective} & $k$-core & keyword & $\times$ & \checkmark & \checkmark & $\times$ & \checkmark\\
\cite{li2015influential} & $k$-core & influential & $\times$ & $\times$ & \checkmark & $\times$ & $\times$\\
\cite{bi2018optimal} & $k$-core & influential & $\times$ & \checkmark & $\times$ & $\times$ & $\times$\\
\cite{zheng2017finding} & $k$-truss & $\times$ & weighted & \checkmark & \checkmark & $\times$ & $\times$\\
\cite{zheng2017querying} & $k$-core & $\times$ & weighted & $\times$ & $\times$ & \checkmark & \checkmark\\
Ours & $k$-core & $\times$ & weighted & \checkmark & \checkmark & \checkmark & \checkmark\\
\hline
\end{tabular}
\end{table}
The problem of $k$-core minimization~\cite{barbieri2015efficient,zhu2018k,medya2019k,cui2014local} aims to find a minimal connected $k$-core subgraph containing the query nodes. The minimum Wiener connector problem asks for a small connected subgraph minimizing the sum of all pairwise shortest-path distances between the discovered vertices~\cite{ruchansky2015minimum}. Different from all the above studies, our work aims at finding an intimate-core group containing multiple query nodes in weighted graphs. We propose fast algorithms for intimate-core group search, which outperform the state-of-the-art method~\cite{zheng2017querying} in terms of both quality and efficiency.
\subsection{Tree Generation}
\begin{algorithm}[t]
\small
\caption{Tree Construction} \label{algo.tree}
\begin{flushleft}
\textbf{Input:} $G=(V, E, w)$, an integer $k$, a set of query vertices $Q$, the $k$-core index\\
\textbf{Output:} Tree $T_Q$\\
\end{flushleft}
\
\begin{algorithmic}[1]
\STATE Identify the maximal connected $k$-core $C_k$ containing the query nodes $Q$;
\STATE Let $G_{pw}$ be an empty graph;
\STATE \textbf{for} $q_{1}, q_{2} \in Q$
\STATE \hspace{0.3cm} \textbf{if} there is no path between $q_{1}$ and $q_{2}$ in $C_{k}$ \textbf{then}
\STATE \hspace{0.3cm} \hspace{0.3cm} \textbf{return} $\emptyset$;
\STATE \hspace{0.3cm} \textbf{else}
\STATE \hspace{0.3cm} \hspace{0.3cm} Compute the shortest path between $q_1$ and $q_2$ in $C_{k}$;
\STATE \hspace{0.3cm} \hspace{0.3cm} Add the $\spath_{C_k}(q_1, q_2)$ between $q_1$ and $q_2$ into $G_{pw}$;
\STATE Tree: $T_Q \leftarrow \emptyset$;
\STATE Priority queue: $L \leftarrow \emptyset$;
\STATE \textbf{for} each node $v$ in $G_{pw}$
\STATE \hspace{0.3cm} $\dist(v)\leftarrow \infty$;
\STATE $Q \leftarrow Q -\{q_0\}$; \dist$(q_{0})\leftarrow 0$; $L.push(q_{0}, \dist(q_{0}))$;
\STATE \textbf{while} $Q \neq \emptyset$ \textbf{do}
\STATE \hspace{0.3cm} Extract a node $v$ and its edges with the smallest $\dist(v)$ from $L$;
\STATE \hspace{0.3cm} Insert node $v$ and its edges into $T_Q$;
\STATE \hspace{0.3cm} \textbf{if} $v \in Q$ \textbf{then}
\STATE \hspace{0.3cm} \hspace{0.3cm} $Q \leftarrow Q -\{v\}$;
\STATE \hspace{0.3cm} \textbf{for} $u \in N_{G_{pw}}(v)$ \textbf{do}
\STATE \hspace{0.3cm} \hspace{0.3cm} \textbf{if} $\dist(u) > w(u, v)$ \textbf{then}
\STATE \hspace{0.3cm} \hspace{0.3cm} \hspace{0.3cm} $\dist(u) \leftarrow w(u, v)$;
\STATE \hspace{0.3cm} \hspace{0.3cm} \hspace{0.3cm} Update $(u, \dist(u))$ in $L$;
\STATE \textbf{return} $T_Q$;
\end{algorithmic}
\end{algorithm}
In this section, we present the tree generation phase. Since the $k$-core can be very large in practice, we propose local exploration methods that identify small substructures of the $k$-core as candidates. These approaches produce a tree structure of small weight that connects all query nodes. We develop two algorithms, based respectively on the minimum spanning tree (\MST) and on minimum weighted paths (\MWP).
\stitle{Tree construction.} The tree construction has three major steps. First, the algorithm generates all-pairs shortest paths for the query nodes $Q$ in the $k$-core $C_k$ (lines 1-7). Given a path between nodes $u$ and $v$, the path weight is the total weight of all edges along this path. We use \spath$_{C_k}(u,v)$ to denote the shortest path between nodes $u$ and $v$ in the $k$-core $C_k$. For any pair of query nodes $q_i, q_j \in Q$, our algorithm invokes the well-known Dijkstra's algorithm~\cite{cormen2009introduction} to find the shortest path \spath$_{C_k}(q_i,q_j)$ in the $k$-core $C_k$.
Second, the algorithm constructs a weighted graph $G_{pw}$ for connecting all query nodes (lines 3-8). Based on the obtained all-pairs shortest paths, it collects and merges all these paths together to construct a weighted graph $G_{pw}$ correspondingly.
Third, the algorithm generates a small spanning tree for $Q$ in the weighted graph $G_{pw}$ (lines 9-22), since not all edges of $G_{pw}$ are needed to keep the query nodes connected. This step finds a compact spanning tree connecting all query nodes $Q$, removing unnecessary edges to reduce the weight. Specifically, the algorithm starts from one of the query nodes and expands following Prim's minimum spanning tree algorithm~\cite{cormen2009introduction}; it stops when all query nodes are connected into one component of $G_{pw}$. Compared with the maximal connected $k$-core, our compact spanning tree has three significant features: (1) query-centric: the tree contains all query nodes of $Q$; (2) compactly connected: the tree is a connected and compact structure; (3) small-weighted: the minimum-spanning-tree generation ensures a small weight of the discovered tree.
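A minimal Python sketch of this construction is given below; it is a simplification of Algorithm~\ref{algo.tree} (in particular, the Prim-style growth from a query node is replaced by a minimum spanning tree of $G_{pw}$, which yields a comparable small-weight tree), assuming a \textsf{networkx}-style weighted graph.
\begin{verbatim}
import networkx as nx
from itertools import combinations

def tree_construction(C_k, Q):
    """Merge pairwise shortest paths among Q into G_pw, then extract a
    small-weight spanning tree connecting the query nodes."""
    G_pw = nx.Graph()
    for q1, q2 in combinations(Q, 2):
        try:
            path = nx.dijkstra_path(C_k, q1, q2, weight='weight')
        except nx.NetworkXNoPath:
            return None                      # Q is not connected in the k-core
        for u, v in zip(path, path[1:]):     # add path edges with their weights
            G_pw.add_edge(u, v, weight=C_k[u][v]['weight'])
    return nx.minimum_spanning_tree(G_pw, weight='weight')
\end{verbatim}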
\begin{figure}[ht]
\begin{tabular} {@{\hspace{0.12\textwidth}}c@{\hspace{0.12\textwidth}} c@{\hspace{0.1\textwidth}}}
\centering
\includegraphics[width=0.25\textwidth]{Figure/mst.eps} &
\includegraphics[width=0.25\textwidth]{Figure/mst1.eps} \\
\newline
(a) Find all pairs of shortest path & (b) Spanning tree\\
\end{tabular}
\caption{Tree construction for query nodes $v_{1}$, $v_{2}$, $v_{5}$.}\label{fig.tree}
\end{figure}
\begin{example}
Fig.~\ref{fig.tree}(a) shows a weighted graph $G$ with 6 nodes and 8 weighted edges. Assume that $k=2$; the whole graph then forms the 2-core $C_2$. The query nodes $Q = \{ v_{1}, v_{2}, v_{5}\}$ are colored red in Fig.~\ref{fig.tree}(a). We first find the shortest path between every pair of query nodes in $Q$; all edges along these shortest paths are colored red in Fig.~\ref{fig.tree}(a). For example, the shortest path between $v_1$ and $v_2$ is $\spath_{C_2}(v_{1}, v_{2}) = \{(v_{1}, v_{3}), (v_3, v_2)\}$. Similarly, $\spath_{C_2}(v_{1}, v_{5}) = \{(v_1, v_3), (v_3, v_4), (v_4, v_5)\}$ and $\spath_{C_2}(v_{2}, v_{5}) = \{(v_{2}, v_{5} )\}$. All three paths are merged to construct the weighted graph $G_{pw}$, shown in red in Fig.~\ref{fig.tree}(a). The spanning tree $T_{Q}$ is shown in Fig.~\ref{fig.tree}(b); it connects all query nodes $\{v_{1}, v_{2}, v_{5}\}$ with a small weight of 7.
\end{example}
\begin{algorithm}[t]
\small
\caption{Path-based Construction} \label{algo.path}
\begin{flushleft}
\textbf{Input:} $G=(V, E, w)$, an integer $k$, a set of query vertices $Q$, the $k$-core index\\
\textbf{Output:} Tree $T_{Q}$\\
\end{flushleft}
\
\begin{algorithmic}[1]
\STATE Identify the maximal connected $k$-core $C_k$ containing the query nodes $Q$;
\STATE Let $q_{0}$ be the first query node of $Q$;
\STATE $Q \leftarrow Q - \{q_{0}\}$;
\STATE \textbf{while} $Q \neq \emptyset$ \textbf{do}
\STATE \hspace{0.3cm} \textbf{if} there is no path between $q$ and $q_{0}$ in $C_{k}$ \textbf{then}
\STATE \hspace{0.3cm} \hspace{0.3cm} \textbf{return} $\emptyset$;
\STATE \hspace{0.3cm} \textbf{else}
\STATE \hspace{0.3cm} \hspace{0.3cm} Compute the shortest path between $q$ and $q_0$ in $C_{k}$;
\STATE \hspace{0.3cm} \hspace{0.3cm} Add the $\spath_{C_k}(q, q_0)$ between $q$ and $q_0$ into $T_{Q}$;
\STATE \hspace{0.3cm} \hspace{0.3cm} $q_0 \leftarrow q$, $Q \leftarrow Q - \{q_0\}$;
\STATE \textbf{return} $T_{Q}$;
\end{algorithmic}
\end{algorithm}
\stitle{Path-based construction.} Algorithm~\ref{algo.tree} may incur expensive computation when finding the shortest path between every pair of nodes that are far away from each other. To improve efficiency, we develop a path-based approach that connects all query nodes directly, outlined in Algorithm~\ref{algo.path}. The algorithm starts from one query node $q_0$ and searches for the shortest path to the nearest query node in $Q$ (lines 2-8). After that, it merges the weighted path $\spath_{C_k}(q, q_0)$ into $T_{Q}$ to construct the tree (line 9). Recursively, it restarts from the newly reached query node $q$, treated as the new $q_0$, to find the next nearest query node, until all query nodes in $Q$ have been covered (line 10). The algorithm returns the tree connecting all query nodes.
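The following minimal Python sketch illustrates the path-based construction (again assuming a \textsf{networkx}-style weighted graph; the names are illustrative):
\begin{verbatim}
import networkx as nx

def path_based_construction(C_k, Q):
    """Greedily chain shortest paths from the current node to the nearest
    remaining query node."""
    T_Q = nx.Graph()
    remaining = set(Q)
    current = remaining.pop()                 # q_0: the first query node
    while remaining:
        dist, paths = nx.single_source_dijkstra(C_k, current, weight='weight')
        reachable = [q for q in remaining if q in dist]
        if not reachable:
            return None                       # some query node is unreachable
        q = min(reachable, key=dist.get)      # nearest remaining query node
        for u, v in zip(paths[q], paths[q][1:]):
            T_Q.add_edge(u, v, weight=C_k[u][v]['weight'])
        remaining.remove(q)
        current = q
    return T_Q
\end{verbatim}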
\begin{example} We apply Algorithm~\ref{algo.path} on the graph $G$ in Fig.~\ref{fig.tree}(a) with query $Q = \{ v_{1}, v_{2}, v_{5}\}$ and $k=2$. We start the shortest-path search from $v_{1}$. The nearest query node to $v_1$ is $v_5$, and we find the shortest path $\spath_{C_2}(v_{1}, v_{5}) = \{ (v_1, v_3), (v_3, v_4), (v_4, v_5) \}$. Next, we start from $v_{5}$ and find the shortest path $\spath_{C_2}(v_{5}, v_{2}) = \{(v_{5}, v_{2}) \}$. Finally, we merge the two paths $\spath_{C_2}(v_{1}, v_{5})$ and $\spath_{C_2}(v_{5}, v_{2})$ to construct the tree $T_{Q}$.
\end{example}
\stitle{Complexity analysis.} We analyze the complexity of Algorithm~\ref{algo.tree} and Algorithm~\ref{algo.path}. Assume that the $k$-core $C_k$ has $n_k$ nodes and $m_k$ edges where $n_k\leq n$ and $m_k\leq m$.
For Algorithm~\ref{algo.tree}, an intuitive implementation of all-pairs shortest paths computes the shortest path for every pair of nodes in $Q$, which takes $O(|Q|^2m_k\log n_k)$ time. However, a single-source shortest-path computation from one query node $q\in Q$ reaches all other nodes of $Q$ in $O(m_k\log n_k)$ time. Overall, the all-pairs shortest paths over $Q$ can thus be computed in $O(|Q|m_k\log n_k)$ time. In addition, the weighted graph $G_{pw}$ is a subgraph of $C_k$, so the size of $G_{pw}$ is $O(n_k+m_k) \subseteq O(m_k)$. Identifying the spanning tree of $G_{pw}$ takes $O(m_k\log n_k)$ time. Overall, Algorithm~\ref{algo.tree} takes $O(|Q|m_k\log n_k)$ time and $O(m_k)$ space.
Algorithm~\ref{algo.path} applies $|Q|$ single-source shortest-path computations to identify the nearest query nodes, and thus also takes $O(|Q|m_k\log n_k)$ time and $O(m_k)$ space. In practice, Algorithm~\ref{algo.path} runs faster than Algorithm~\ref{algo.tree} on large real-world graphs, since it avoids the weighted-graph construction and the all-pairs shortest-path computation.
\section{Introduction}\label{sec:intro}
Events with large missing momentum, and in particular those featuring
just one visible recoiling object (a jet, a photon, a weak boson, a top quark), are among
the most promising final states in which to look for signs of new physics at colliders.
Their simplicity and model-independent nature appeal to both theorists and experimentalists.
There are, however, important challenges that have to be faced with such signatures.
The first are of experimental nature: the accurate and precise
determination of the missing momentum
requires detailed control of many aspects, from triggering to jet energy scales,
underlying-event simulation and pile-up mitigation. In addition, in the case of a weak boson or a top quark,
tagging and reconstruction efficiencies for the recoiling object(s) also enter.
The second class of challenges is more of theoretical nature and has to do with maximising the information
that can be extracted from data to constrain new-physics models.
Model-independent searches for dark matter (DM) constitute the most
popular interpretations of mono-jet analyses at the LHC,
both in an effective field theory (EFT) framework as well as in simplified models,
see e.g.~\cite{DiFranzo:2013vra,Abdallah:2014hon,Malik:2014ggr} and the references therein.
Among complete new-physics scenarios leading to mono-object plus large missing momentum
signals, supersymmetric (SUSY) models with a very light (or {\it
superlight}) gravitino play a special role: they offer a concrete
setting where the strengths and limitations of the EFT approach vis-\`a-vis
more UV-complete models can be studied in detail.
Let us look closer at model constraints from mono-object
searches at previous and current colliders.
At the LEP collider, the mono-photon signal was used to set a limit on models of
SUSY with the gravitino as the lightest SUSY particle (LSP) and extra
dimensions~\cite{Abbiendi:2000hh,Heister:2002ut,Achard:2003tx,Abdallah:2003np}.
In some SUSY scenarios the gravitino can be very light,
of order $m_{3/2}\sim{\cal O}(10^{-14}-10^{-12})$~GeV, with all the other SUSY particles
above the TeV threshold~\cite{Nachtmann:1984xu,Brignole:1997sk}. We dub such a scenario the {\it ``gravitino EFT''}.
The only relevant parameter in this case is the gravitino mass, which is directly related to the SUSY breaking scale,
the lower limit being $m_{3/2}>1.35\times10^{-14}$~GeV~\cite{Achard:2003tx,Agashe:2014kda}.
Alternatively, we consider the gravitino LSP in
the minimal supersymmetric standard model
(MSSM) with other sparticles at the TeV scale.
In this scenario, the process of the neutralino--gravitino associated production with the
subsequent neutralino decay into a photon and a gravitino has been used to put a
limit on the gravitino mass as a function of the neutralino and selectron
masses~\cite{Fayet:1986zc,Dicus:1990vm,Lopez:1996gd,Lopez:1996ey,Baek:2002np,Mawatari:2011cu}, e.g.
$m_{3/2}\gtrsim 10^{-14}$~GeV
for $m_{\tilde\chi^0_1}=140$~GeV and $m_{\tilde e}=150$~GeV~\cite{Abdallah:2003np}.
Such a scenario can also be considered as a simplified SUSY model, where only the
gravitinos, the lightest neutralino and the selectrons play a role in the phenomenology at colliders.
At the Tevatron, not only the mono-photon but also the mono-jet signals
constrain models of SUSY~\cite{Affolder:2000ef,Acosta:2002eq} and extra
dimensions~\cite{Acosta:2002eq,Abazov:2008kp,Aaltonen:2008hh}.
Similar to the LEP bound, in the gravitino-EFT limit~\cite{Brignole:1998me} a gravitino is excluded below
$1.1\times10^{-14}$~GeV and $1.17\times10^{-14}$~GeV in the
mono-jet~\cite{Affolder:2000ef} and mono-photon~\cite{Acosta:2002eq} channels, respectively.
At the LHC, besides the mono-photon~\cite{Khachatryan:2014rwa,Aad:2014tda} and
mono-jet~\cite{ATLAS:2012zim,Khachatryan:2014rra} signals, other
mono-object plus missing transverse momentum signals such as a $Z$
boson~\cite{Aad:2014vka}, a lepton~\cite{ATLAS:2014wra,Khachatryan:2014tva}, and
a top quark~\cite{Khachatryan:2014uma} have been investigated mostly in the context
of DM searches and more exotic models. SUSY models have been considered only in the ATLAS mono-jet
analysis~\cite{ATLAS:2012zim}, where the gluino--gravitino~\cite{Dicus:1989gg,Drees:1990vj,Dicus:1996ua,Kim:1997iwa,Klasen:2006kb,Mawatari:2011jy} and squark--gravitino~\cite{Kim:1997iwa,Klasen:2006kb} associated productions were
taken into account to set a limit on the gravitino mass as a function of the squark and gluino masses as, e.g.
\begin{align}
m_{3/2}> 1\times10^{-13}\ (4\times 10^{-14})\ {\rm GeV}
\end{align}
for the degenerate squark and gluino masses at $m_{\tilde q,\tilde g}=500$ (1700)~GeV.
One point relevant for this work is that the above limit may be modified
by the contribution from direct gravitino-pair
production in association with an extra jet, a production channel so far disregarded in the analysis.
Moreover, while the event selection is targeted at the associated
gravitino production, events from squark- and gluino-pair production
may enter the signal region, affecting the results.
We would like to put forward the interpretation of the
mono-object signals in the SUSY context with a very light gravitino
for the LHC.
As mentioned above, extending the gravitino EFT to the full MSSM
(or simplified SUSY models), other production channels can contribute
leading to rather different final state features that in turn depend on the SUSY parameters.
In Ref.~\cite{deAquino:2012ru} we studied the gluino--gravitino and gluino-pair
production in this very same context; however, gravitino-pair
production in association with a jet and
squark--gravitino production were not included there.
In this work we present, for the first time, the complete set of production channels consistently treated
in a unique framework that can provide accurate predictions for the
general case. In addition, although the gravitino in our scenario is too
light to be a cold DM candidate, the approach we have followed is fully general and
can be used as a template for passing from an EFT approach to simplified models in the context of DM searches~\cite{Papucci:2014iwa}.
The plan of the paper is as follows. In sec.~\ref{sec:theory} we focus on the SUSY QCD sector in order to
assess the parameter space relevant for gravitino production processes and
potentially contributing to the mono-jet signature. We explicitly construct a SUSY QCD model in sec.~\ref{sec:model}.
In sec.~\ref{sec:production} we present the three different yet related mechanisms which
produce gravitinos. Gravitino-pair production with one jet has been studied in the gravitino-EFT limit only~\cite{Brignole:1998me}, where exact tree-level results for $2\to3$ matrix elements for
$p\bar p/pp\to\tilde G\gld j$ have been computed only for the quark--antiquark and quark--gluon initial states, but not for gluon--gluon
ones. We obtain such results for generic squark/gluino masses and for
all processes for the first time in this work.
In sec.~\ref{sec:tool} we briefly review the computation/simulation tools used in this article.
In sec.~\ref{sec:jet} we study all the relevant gravitino
production processes in detail for total as well as differential cross sections. As an application of our results, we
recast the ATLAS mono-jet analysis~\cite{ATLAS:2012zim} with inclusive signal samples by merging
matrix elements with parton showers (ME+PS) in order to set a limit on the masses of the SUSY particles.
We suggest improvements to the analysis so to increase the sensitivity
to the gravitino mass when squarks and gluinos are light.
In sec.~\ref{sec:photon}, we consider the associated production with an electroweak (EW) particle, and
study the mono-photon, -$Z$ and -$W$ signals in the very light
gravitino context. Finally, we recast the mono-photon analyses at the
LHC~\cite{Khachatryan:2014rwa,Aad:2014tda} to set a limit on the gravitino
mass. Section~\ref{sec:summary} is devoted to our conclusions.
\section{Light gravitino production at the LHC}
\label{sec:theory}
In this section we start by constructing a SUSY QCD model by using the superspace formalism.
We then present the three mechanisms of light gravitino production at hadron colliders and finally
we briefly describe the simulation tools we employ for our results.
\subsection{SUSY QCD with a goldstino superfield}
\label{sec:model}
In phenomenologically viable SUSY models, SUSY breaking is often
assumed to take place in a so-called hidden sector, and then
transmitted to the visible sector (i.e. the SM particles and their
superpartners) through some mediation mechanism, e.g. gauge mediation or
gravity mediation. As a result, one
obtains effective couplings of the fields in the visible sector to the
goldstino multiplet.
To illustrate the interactions among the physical
degrees of freedom of the goldstino multiplet and the fields in the
visible sector, we introduce an $R$-parity conserving $N=1$ global
supersymmetric model with the $SU(3)_{C}$ gauge group in the
superspace formalism. The model comprises one vector superfield
$V=(A^{\mu},\lambda,D_V)$, describing a gluon $A^{\mu}$ and a gluino
$\lambda$, and two chiral superfields $\Phi_{L}=(\tilde q_{L},q_{L}, F_{L})$
and $\Phi_{R}=(\tilde q^*_{R},q_{R}^{c},F_{R})$, containing the left- and
right-handed quarks $q_{L/R}$ and squarks $\tilde q_{L/R}$, where the color
and generation indices are suppressed. In
addition, we introduce a chiral superfield in the hidden sector
$X=(\phi,\tilde G,F_X)$, containing a sgoldstino $\phi$ and a goldstino
$\tilde G$. $D_V$, $F_{L/R}$ and $F_X$ are auxiliary fields.
The Lagrangian of the visible sector is
\begin{align}
\mathcal{L}_{\rm vis}=
\sum_{i=L,R}{\int d^4 \theta \,}\,\Phi^{\dagger}_ie^{2g_sV}\Phi_i
+\Big(\frac{1}{16g_s^2}{\int d^2 \theta \,}\,W^{\alpha} W_{\alpha} +{\text{h.c.}}\Big),
\label{L_vis}
\end{align}
where $g_s$ is the strong coupling constant,\footnote{The covariant derivative is defined as
$D_{\mu}=\partial_{\mu}+ig_sT^aA_{\mu}^a$.} and
$W_{\alpha}=-\frac{1}{4}\bar{D}\cdot\bar{D}\,e^{-2g_sV}D_{\alpha}\,e^{2g_sV}$ denotes the SUSY
$SU(3)_{C}$ field strength tensor, with $D$ the
superderivative. $\mathcal{L}_{\rm vis}$ contains the kinetic terms as
well as the gauge interactions.
The Lagrangian of the goldstino part is given by
\begin{align}
\mathcal{L}_{X}=
{\int d^4 \theta \,}\,X^{\dagger}X-\Big(F{\int d^2 \theta \,}\,X+{\text{h.c.}}\Big)
-\frac{c_X}{4}{\int d^4 \theta \,}\,(X^{\dagger}X)^2.
\label{L_hid}
\end{align}
The first term gives the kinetic terms of the sgoldstino and the goldstino, while
the second term is a source of SUSY breaking, with
$F\equiv\langle F_X\rangle$ the vacuum expectation value (VEV) of
$F_X$.%
\footnote{Note that we follow the {\sc FeynRules} convention for chiral
superfields
$\Phi(y,\theta)=\phi(y)+\sqrt{2}\,\theta\cdot\psi(y)-\theta\cdot\theta\,F(y)$~\cite{Alloul:2013bka},
which fixes the sign of the Lagrangian so as to give a positive
contribution to the scalar potential.}
The last term is non-renormalizable and provides interactions within
the goldstino multiplet. In addition, this term gives the sgoldstino mass term
when the auxiliary field $F_X$ is replaced by its VEV, and hence we
assign $c_X=m^2_{\phi}/F^2$.
The effective Lagrangian that leads to
the interactions among the (s)goldstinos and the fields in the visible
sector as well as the soft mass terms for the squarks and the gluinos
is given by
\begin{align}
\mathcal{L}_{\text{int}}=-\sum_{i=L,R}c_{\Phi_i}{\int d^4 \theta \,}\,
X^{\dagger}X\Phi_i^{\dagger}\Phi_i
-\Big(\frac{c_V}{16g_s^2}{\int d^2 \theta \,}\, X W^{\alpha} W_{\alpha} +{\text{h.c.}}\Big),
\label{L_int}
\end{align}
where we identify $c_{\Phi_i}=m^2_{\tilde q_i}/F^2$ and $c_V=2m_{\lambda}/F$.
We note that our model is minimal, yet sufficient to generate all the
relevant interactions involving two goldstinos in the final state
for the jet(s)$+\slashed{E}_T$ signal at hadron colliders.
The extension of the model to the SM electroweak (EW) gauge
group is straightforward, and we will study the mono-$\gamma$, -$W$ and -$Z$
signals later.
Let us briefly comment on the
goldstino equivalence theorem. When the global SUSY is promoted to a
local symmetry, the goldstino is absorbed by the gravitino via the so-called
super-Higgs mechanism.
In the high-energy limit, $\sqrt{s}\gg m_{3/2}$,
the interactions of
the helicity-1/2 components are dominant and can be well described by
the goldstino interactions, due to the gravitino--goldstino equivalence
theorem~\cite{Casalbuoni:1988kv,Casalbuoni:1988qd}.
As a
consequence of the super-Higgs mechanism, the gravitino mass is related
to the SUSY breaking scale and the Planck mass
as~\cite{Volkov:1973jd,Deser:1977uq}
\begin{align}
m_{3/2}=\frac{F}{\sqrt{3}\,\overline{M}_{\rm Pl}},
\label{grav_mass}
\end{align}
where $\overline{M}_{\rm Pl}\equiv M_{\rm Pl}/\sqrt{8\pi}\approx 2.4\times10^{18}$~GeV is
the reduced Planck mass. Therefore, low-scale SUSY-breaking scenarios
such as gauge-mediated SUSY breaking (GMSB)
provide a gravitino LSP. In the following, we simply call the goldstino
the ``gravitino''.
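As an illustration, for the reference value $m_{3/2}=2\times10^{-13}$~GeV, used as a benchmark below, eq.~\eqref{grav_mass} corresponds to a SUSY-breaking scale
\begin{align}
\sqrt{F}=\big(\sqrt{3}\,m_{3/2}\,\overline{M}_{\rm Pl}\big)^{1/2}\simeq 0.9~{\rm TeV},
\end{align}
i.e. of the same order as the colored sparticle masses considered in this work.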
\subsection{Light gravitino production}
\label{sec:production}
\begin{figure}
\center
\includegraphics[width=.7\textwidth,clip]{diagram}
\caption{Schematic diagrams for $pp\to\tilde G\gld+0,1,2$ partons. The
first row shows the leading gravitino-pair (red),
gluino--gravitino (blue) and gluino-pair (green)
diagrams. The diagrams are ordered by the number of
additional QCD partons along the rows, and by the total parton
multiplicity along the columns.
}
\label{fig:diagram}
\end{figure}
Given the model we constructed in the previous section,
we now consider light-gravitino production in $R$-parity conserving scenarios
that lead to jet(s) plus missing momentum at the LHC:
\begin{align}
pp\to {\rm jet(s)}+\slashed{E}_T,
\end{align}
where the missing momentum is carried by two LSP gravitinos.
At the leading order in QCD, the relevant processes are:
\begin{enumerate}
\item gravitino-pair production in association with a quark/gluon emission from initial state radiation,
\item gravitino production associated with a squark/gluino with the
subsequent decay into a gravitino and a quark/gluon,
\item SUSY QCD pair production with the subsequent decay into gravitino and a quark/gluon.
\end{enumerate}
The processes are schematically represented in fig.~\ref{fig:diagram}.
The processes in the second column of fig.~\ref{fig:diagram}
contribute in an obvious way to the mono-jet signal. However, the 2-parton final states also contribute, either in the exclusive
1-jet analysis, because one parton might not give rise to a jet, or when the analysis is fully or partly inclusive over additional jets.
In the current ATLAS and CMS mono-jet analyses~\cite{ATLAS:2012zim,Khachatryan:2014rra}, for example, a second jet is allowed and hence such events potentially fall into the signal region. For the mono-$\gamma$, -$Z$ and -$W$ signals, we simply replace the QCD processes by the EW ones, i.e. we replace gluinos by neutralinos and charginos.
We now consider each production channel in more detail.
\subsubsection{Gravitino pair production}
\label{sec:gldgld}
Direct gravitino-pair production at colliders has been studied only in
models where all SUSY particles except for the gravitino are too heavy
to be produced on-shell, i.e. in the gravitino EFT limit~\cite{Nachtmann:1984xu,Brignole:1997sk,Brignole:1998me}.
One of the aims of this article is to extend the previous studies by
taking into account the effect of the other SUSY particles in the spectrum.
This has been done recently for mono-photon signals at future linear colliders~\cite{Mawatari:2014cja},
and we now apply it to the QCD sector for the LHC.
A pair of gravitinos is produced through both the $q\bar q$ and $gg$
initial states,
\begin{align}
pp(q\bar q, gg)\to\tilde G\gld,
\label{process_gldgld}
\end{align}
and can be observed if extra radiation is hard enough to be detected, for instance in the form of one or more jets.
The helicity amplitudes for the above $2\to2$ processes were presented for the $e^+e^-$ and
$\gamma\gamma$ initial states in~\cite{Mawatari:2014cja}. A remarkable feature of this production channel
is that the corresponding total cross section scales as the inverse of the gravitino mass to the fourth power,
\begin{align}
\sigma(\tilde G\gld)\propto 1/m_{3/2}^4.
\label{xsec_gldgld}
\end{align}
Another feature is that the cross section tends to be larger for heavier
squarks and gluinos, which are propagating in the $t$ and $u$ channels.
For the $gg$ channel, there are diagrams featuring an $s$-channel sgoldstino. These
play an important role in the computation of the cross sections even when sgoldstinos are too
heavy to be produced on-shell; see ref.~\cite{Mawatari:2014cja} for more details.
As expected from the colourless nature of the gravitinos, the extra parton in the final state
mainly comes from initial-state radiation and is therefore naturally suppressed by $\alpha_S/p_T^4$.
Hence the mono-jet rate from this process strongly depends on the minimum jet $p_T$ (or, equivalently,
on the minimum missing momentum). We will investigate these effects carefully in sec.~\ref{sec:jet}.
The $2\to3$ processes
\begin{align}
pp(q\bar q, qg, gg)\to\tilde G\gld j,
\label{process_gldgldj}
\end{align}
have been calculated for the $q\bar q$ and $qg$ initial states
in the gravitino EFT limit, while the $gg$
process was estimated only from the $2\to2$ cross section in the limit of
soft and collinear gluon radiation~\cite{Brignole:1998me}.
In this article, as shown later, we consider all the amplitudes at tree level
without any approximation and calculate the full matrix elements numerically.
\subsubsection{Associated gravitino production}
Gravitino production in association with a squark or a gluino and the
subsequent decay into a gravitino and a quark/gluon,
\begin{align}
pp\to\tilde q\tilde G,\tilde g\tilde G\to\tilde G\gld j,
\label{process_gogld}
\end{align}
leads to the $j+\slashed{E}_T$ signal at the leading order (LO), and has been
studied in~\cite{Dicus:1989gg,Drees:1990vj,Dicus:1996ua,Kim:1997iwa,Klasen:2006kb,Mawatari:2011jy}.
The tree-level ME+PS merging technique has also been applied for this
process in~\cite{deAquino:2012ru}.
Unlike the gravitino-pair production in eq.~\eqref{xsec_gldgld}, the
cross section is inversely proportional to the square of the gravitino
mass,
\begin{align}
\sigma(\tilde q\tilde G,\tilde g\tilde G)\propto 1/m_{3/2}^2,
\label{xsec_gogld}
\end{align}
and hence the dependence on the gravitino mass is milder than for
gravitino-pair production. Similarly to $\tilde G\gld$ production, heavier squarks and gluinos in the
$t$ and $u$ channels enhance the cross sections, while heavier squarks and gluinos in the
final state suppress the cross sections due to the reduced phase space.
\subsubsection{Indirect gravitino production}
SUSY QCD pair production, i.e. squark-pair, gluino-pair and
squark--gluino production, has been systematically studied, motivated by
inclusive SUSY searches as well as by searches in simplified SUSY models. On the other hand, it
has not been considered in mono-jet analyses, since more than one jet is expected in the final
state. In particular, when squarks and/or gluinos are the next-to-lightest SUSY
particle (NLSP), their decays can provide
the di-jet plus missing momentum signal at the LO~\cite{Baer:1998pg,Kats:2011qh}:
\begin{align}
pp\to\tilde q\sq,\tilde q\tilde g,\tilde g\go\to\tilde G\gld jj.
\label{process_gogo}
\end{align}
As mentioned above, in the current mono-jet analyses by
ATLAS~\cite{ATLAS:2012zim} and CMS~\cite{Khachatryan:2014rra}, events
with a second jet have been included as the signal typically contains
more jets from QCD radiation. Therefore, depending on cuts, the jets coming from
the decay of heavy SUSY particles may contribute to the signal region.
We also note that, when the gravitino is very light, the $t$-channel
gravitino exchange enhances the cross sections~\cite{Dicus:1989gg,Drees:1990vj,Dicus:1996ua,Kim:1997iwa}.
\begin{figure}
\center
\includegraphics[width=.5\textwidth,clip]{width}
\caption{Contours where the ratio of the total width to the mass equals 25\%, in the squark (gluino)
and gravitino mass plane, for the
$m_{\tilde q}=m_{\tilde g}$ (red), $m_{\tilde q}>m_{\tilde g}$ (blue), and $m_{\tilde q}<m_{\tilde g}$
(black) cases, where the main decay mode is
$\tilde q(\tilde g)\to q(g)+\tilde G$.
For the non-degenerate cases, we take
$m_{\tilde g}=\{4,\,2,\,1/2,\,1/4\}\times m_{\tilde q}$.
}
\label{fig:width}
\end{figure}
Before turning to the collider phenomenology,
it is worth mentioning the decay widths of the squark and the gluino. The partial
decay width of a squark (gluino) into a quark (gluon) and a gravitino is
given by
\begin{align}
\Gamma(\tilde q(\tilde g)\to q(g)+\tilde G)=\frac{m^5_{\tilde q(\tilde g)}}{48\pi\overline{M}_{\rm Pl}^2m^2_{3/2}},
\end{align}
where the gravitino mass in the phase space is neglected. When the gravitino is very light and/or the squarks and gluinos are
heavy, the width of a squark or gluino can become a significant fraction of its mass.
At the same time, the gravitino couplings become strong and the
perturbative calculations are not reliable. To identify a reasonable SUSY parameter space,
in fig.~\ref{fig:width} we show $\Gamma/m=0.25$ lines for
$m_{\tilde q}=m_{\tilde g}$ (red), $m_{\tilde q}>m_{\tilde g}$ (blue), and $m_{\tilde q}<m_{\tilde g}$
(black).%
\footnote{The widths are obtained numerically by the decay
package {\sc MadWidth}~\cite{Alwall:2014bza}.}
We assume that all other SUSY particles are heavier than the squarks and the
gluino. For the $m_{\tilde q}>m_{\tilde g}$ case, the additional $\tilde q\to\tilde g+q$ decay channel
opens up.
For the $m_{\tilde q}<m_{\tilde g}$ case, on the other hand, the gluino can decay into
all the squark channels, and hence its width becomes
significantly larger than the squark one, depending strongly on the
mass difference.
In the following, a benchmark scenario will be identified ($m_{3/2}=2\times10^{-13}$~GeV
with $m_{\tilde q}=m_{\tilde g}=1$~TeV) for which the widths are 28~GeV.
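As a simple cross-check, inserting $m_{\tilde q(\tilde g)}=1$~TeV, $m_{3/2}=2\times10^{-13}$~GeV and $\overline{M}_{\rm Pl}\simeq 2.4\times10^{18}$~GeV into the width formula gives
\begin{align}
\Gamma=\frac{(10^{3})^5}{48\pi\,(2.4\times10^{18})^2\,(2\times10^{-13})^2}~{\rm GeV}\simeq 29~{\rm GeV},
\end{align}
consistent with the 28~GeV quoted above (the small difference comes from the rounding of $\overline{M}_{\rm Pl}$), i.e. $\Gamma/m\simeq 3\%$, well below the 25\% contours of fig.~\ref{fig:width}.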
\subsection{Event simulation tools}
\label{sec:tool}
Here, we briefly describe the event simulation tools we employ in this
article. We follow the strategy for new-physics simulations presented in
ref.~\cite{Christensen:2009jx}.
Similar to the SUSY QED model of ref.~\cite{Mawatari:2014cja},
we have implemented the SUSY QCD Lagrangian with a goldstino supermultiplet
described in sec.~\ref{sec:model} into
{\sc FeynRules2}~\cite{Alloul:2013bka}, which provides the Feynman
rules in terms of the physical component fields and the {\sc UFO} model
file~\cite{Degrande:2011ua,deAquino:2011ub} for matrix-element
generators such as {\sc MadGraph5\_aMC@NLO}~\cite{Alwall:2014hca}.
In this work, instead of employing a dedicated implementation of the four-fermion vertices
involving more than one Majorana particle~\cite{Mawatari:2014cja}, we
introduce auxiliary heavy particles for the multi-jet simulation.
Parton-level events generated by {\sc MadGraph5\_aMC@NLO} are passed to
{\sc Pythia6.4}~\cite{Sjostrand:2006za} for parton shower and hadronisation, to {\sc Delphes3}~\cite{deFavereau:2013fsa} for
detector simulation, and to {\sc MadAnalysis5}~\cite{Conte:2012fm} for sample analyses.
\section{Mono-jet plus missing momentum}\label{sec:jet}
In this section, we first present total and differential cross sections to
illustrate how the three gravitino production processes depend on the
SUSY mass parameters. Then, we recast the ATLAS mono-jet analysis~\cite{ATLAS:2012zim} to
constrain the gravitino mass in cases that go beyond the gravitino-EFT scenario.
In the following, we consider three scenarios where squark and/or gluino masses are
${\cal O}(10)$~TeV and ${\cal O}(1)$~TeV:
\begin{subequations}
\begin{align}
&{\rm A}:\quad m_{\tilde q}=m_{\tilde g}=20~{\rm TeV}\qquad
&&\hspace*{-2.5cm}\text{(the gravitino-EFT limit)}, \\
&{\rm B}:\quad m_{\tilde q}=20~{\rm TeV},\ m_{\tilde g}=1~{\rm TeV}\qquad
&&\hspace*{-2.5cm}\text{(the heavy-squark limit)}, \\
&{\rm C}:\quad m_{\tilde q}=m_{\tilde g}=1~{\rm TeV}, &
\end{align}\label{cases}%
\end{subequations}
while we keep the sgoldstino masses at 20~TeV.
For simplicity, we assume that all the non-colored SUSY particles are heavier
than the colored ones, and hence squarks and gluinos decay only
into gravitinos (plus a quark or a gluon).
Only the gravitino-pair production contributes to the signal for case A, while the
$\tilde g\tilde G$ and $\tilde g\go$ productions can also give the signal for case B.
In case C all the subprocesses can be comparable.
We note that the masses $m_{\tilde q}=m_{\tilde g}=20$~TeV reproduce the results
of the total and differential cross sections
in ref.~\cite{Brignole:1998me}, where all the SUSY particles except
gravitinos are integrated out, i.e. where the computation has been done in the
gravitino-EFT limit.
\subsection{Total rates}
\begin{figure}
\center
\includegraphics[width=.495\textwidth,clip]{xsec_mgld_a}
\includegraphics[width=.495\textwidth,clip]{xsec_mgld_b}
\caption{Total cross sections of
the gravitino-pair production with a QCD radiation ($\tilde G\gld+j$),
the gluino--gravitino associated production ($\tilde g\tilde G$), and
the gluino-pair production ($\tilde g\go$) at $\sqrt{s}=8$~TeV
as a function of the gravitino mass
for case A (left) and B (right).
For the gravitino-pair production kinematical cuts
$p_T^j>120/350$~GeV (solid/dashed) and $|\eta^j|<4.5$ are applied.
}
\label{fig:xsec_mgld}
\end{figure}
\begin{figure}
\center
\includegraphics[width=.495\textwidth,clip]{xsec_mgld_c}
\includegraphics[width=.495\textwidth,clip]{xsec_msqgo}
\caption{Left: Same as fig.~\ref{fig:xsec_mgld}, but for case C.
Right: Total cross sections as a function of the degenerate squark
and gluino masses with the gravitino mass at
$m_{3/2}=2\times10^{-13}$~GeV.
}
\label{fig:xsec_msqgo}
\end{figure}
Figures~\ref{fig:xsec_mgld} and \ref{fig:xsec_msqgo} (left) show the total cross sections as a function
of the gravitino mass for the three scenarios at $\sqrt{s}=8$~TeV.
For the gravitino-pair production plus an extra QCD emission
($\tilde G\gld+j$), we
impose a minimal transverse-momentum cut on the jet of
$p_T^j>120$~GeV or 350~GeV in the region $|\eta^j|<4.5$.
We employ the CTEQ6L1 PDFs~\cite{Pumplin:2002vw} with the factorization and renormalization
scales at $p_T^j$ for the gravitino-pair production,
$(m_{\tilde q,\tilde g}+m_{3/2})/2\sim m_{\tilde q,\tilde g}/2$ for the associated gravitino
production, and $(m_{\tilde q,\tilde g}+m_{\tilde q,\tilde g})/2\sim m_{\tilde q,\tilde g}$ for the
SUSY QCD pair production.
We note that all our results are LO predictions, although it is well
known that higher-order QCD corrections are large.
For example, the $K$ factor of gluino-pair production is about three
for $m_{\tilde g}\sim 1$~TeV at the 8-TeV
LHC~\cite{Beenakker:1996ch,GoncalvesNetto:2012yt}, while
higher-order calculations are not yet available for the gravitino-pair
production and the associated gravitino production.
Our analyses can be redone with different overall normalizations,
and the main features will not change.
One can clearly see the $m_{3/2}^{-4}$ and $m_{3/2}^{-2}$ dependence for
the $\tilde G\gld(+j)$ and $\tilde q\tilde G/\tilde g\tilde G$ processes, respectively,
as discussed in sec.~\ref{sec:production}. For the SUSY QCD pair
productions, $\tilde q\sq/\tilde q\tilde g/\tilde g\go$, the contribution of the $t$-channel
gravitino exchange can be visible if the
gravitino is lighter than about $3\times10^{-13}$~GeV.
We also show the total rates as a function of the degenerate squark and
gluino masses with the fixed gravitino mass at $2\times10^{-13}$~GeV in
fig.~\ref{fig:xsec_msqgo} (right).
For the gravitino-pair production, the cross section increases as the
squarks and
gluinos become heavier.
On the other hand, the cross sections for the associated production and
the SUSY QCD pair production decrease due to phase-space
suppression.
As can be seen in figs.~\ref{fig:xsec_mgld} and \ref{fig:xsec_msqgo},
each contribution to the total rate strongly depends on the SUSY mass
parameters, and
the different contributions can be comparable for certain
parameter choices.
However, the resulting signatures can still be distinguished among the
subprocesses, as shown below.
\subsection{Differential distributions}
\begin{figure}
\center
\includegraphics[width=.328\textwidth,clip]{met_gldpair_a}
\includegraphics[width=.328\textwidth,clip]{met_gldpair_b}
\includegraphics[width=.328\textwidth,clip]{met_gldpair_c}
\caption{Normalized missing transverse energy distributions of the
direct gravitino-pair production with an extra radiation for the three
benchmarks in~\eqref{cases} at the LHC-8TeV.
Parton-shower and detector effects are included for the event
generation and a cut $\slashed{E}_T>120$~GeV is imposed.
The contributions from different initial states are also shown.
}
\label{fig:met_gldpair}
\end{figure}
\begin{figure}
\center
\includegraphics[width=.495\textwidth,clip]{dis_met}
\includegraphics[width=.495\textwidth,clip]{dis_ptj1}\\[2mm]
\includegraphics[width=.495\textwidth,clip]{dis_ptj2}\
\includegraphics[width=.48\textwidth,clip]{dis_dphi}
\caption{Normalized distributions for each signal
subprocess with
$m_{3/2}=2\times10^{-13}$~GeV and $m_{\tilde q,\tilde g}=1$~TeV at the LHC-8TeV.
Parton-shower and detector effects are included for the event
generation, and a cut $\slashed{E}_T>120$~GeV is imposed.
As a reference, the $Z(\to\nu\bar\nu)+j$ background is also shown.}
\label{fig:dis}
\end{figure}
We now consider differential distributions for the direct
gravitino-pair production in detail. These are the first results presented
beyond the gravitino-EFT limit.
Figure~\ref{fig:met_gldpair} shows normalized missing transverse momentum
distributions for the three benchmark scenarios in~\eqref{cases}.
Here parton-shower and detector effects are included, and the detector
acceptance cuts $p_T^j>20$~GeV and $|\eta^j|<4.5$ as well as the missing
transverse momentum cut $\slashed{E}_T>120$~GeV are applied.
Jets are reconstructed employing the anti-$k_T$
algorithm~\cite{Cacciari:2008gp} with a radius parameter of 0.4.
Depending on the mass of the squarks and gluinos exchanged in the $t$ channel,
the contributions from the different initial states can have different relevance.
Moreover, the energy spectra from the $q\bar q$ and $gg$ channels are similar,
while the $qg$ spectrum is harder than the others.
Figure~\ref{fig:dis} presents several kinematical distributions of
all the production channels for case
C, as well as the SM $Z+j$ background. We stress that the purpose of including
the $Z+j$ background is both illustrative and to provide a
``normalisation''
point for experimentalists. Needless to say, many other important sources of background
need to be included for a complete analysis, such as those coming from $W$+jets or (mis-measured) QCD jets.
Most of them, however, can only be meaningfully estimated with a
detailed detector simulation and data validation.
We see that the SUSY signals are harder than the
SM background, even for gravitino-pair production. This is mainly
due to the $2\to3$ kinematics of the
signal, whereas the background essentially has $2\to2$ kinematics.
Among the signal channels,
$\tilde G\gld(+j)$ has the softest spectra,
while $\tilde q\sq/\tilde q\tilde g/\tilde g\go$ lead to the hardest.
The differences in the $p_T$ spectrum of the second-leading jet are rather significant.
The second jet mostly comes from the squark or gluino decay for
the SUSY QCD pair production, while mainly from QCD radiation
in the gravitino-pair and associated productions.
We note that the shapes of the available subprocesses are very similar
among the three scenarios in~\eqref{cases}, while the rates differ, as seen in the
previous subsection.
\subsection{Recasting LHC mono-jet analyses}
ATLAS and CMS have reported searches for new physics in mono-jet plus
missing transverse momentum final states.
The null results have been translated into limits on gauge-mediated SUSY,
large extra dimensions and dark matter models by
ATLAS~\cite{ATLAS:2012zim}, and
on dark matter, large extra dimensions and unparticle models by
CMS~\cite{Khachatryan:2014rra}.
As mentioned in the introduction, the ATLAS analysis considered a light
gravitino scenario, but included only the squark--gravitino and gluino--gravitino
associated productions.
In this section, taking into account all the possible gravitino
production processes described above, we recast the ATLAS 8-TeV mono-jet
analysis with 10.5~fb$^{-1}$ data~\cite{ATLAS:2012zim} to constrain the
gravitino mass for different squark and
gluino masses.
\subsubsection{Selection cuts}
The event selection of the ATLAS analysis~\cite{ATLAS:2012zim} is
\begin{align}
&1.\quad \slashed{E}_T>120~{\rm GeV}, \nonumber\\
&2.\quad \text{leading jet with $p_T^{j_1}>120$~GeV and $|\eta^{j_1}|<2.0$,} \nonumber\\
&3.\quad \text{at most two jets with $p_T^{j}>30$~GeV and
$|\eta^{j}|<4.5$,} \nonumber\\
&4.\quad \Delta\phi(j_2,\slashed{E}_T)>0.5.
\label{cuts}
\end{align}
The third requirement allows a second-leading jet ($j_2$), since signal
events typically contain jets from initial-state radiation, while the last one
reduces the QCD background in which a large $\slashed{E}_T$ originates from the
mis-measurement of $p_T^{j_2}$.
On top of the above requirements, similarly to the ATLAS analysis, we define three signal regions (SRs) with
different $\slashed{E}_T$ and $p_T^{j_1}$ thresholds as%
\footnote{SR2 in the ATLAS analysis uses a 220~GeV
cut~\cite{ATLAS:2012zim}. Our SR2 is instead similar to one of the signal
regions in the CMS analysis~\cite{Khachatryan:2014rra}.}
\begin{align}
&{\rm SR1}:\quad \slashed{E}_T, p_T^{j_1}>120~{\rm GeV}, \nonumber\\
&{\rm SR2}:\quad \slashed{E}_T, p_T^{j_1}>250~{\rm GeV}, \nonumber\\
&{\rm SR3}:\quad \slashed{E}_T, p_T^{j_1}>350~{\rm GeV}.
\label{sr}
\end{align}
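To make the selection concrete, the baseline cuts in~\eqref{cuts} and the
signal-region thresholds in~\eqref{sr} can be encoded as in the following
sketch. The event representation (a $p_T$-ordered list of jet
$(p_T,\eta,\phi)$ triplets, together with the missing-momentum magnitude and
azimuth) and all function names are illustrative choices of ours, not part of
the experimental software.
\begin{verbatim}
import math

def delta_phi(phi1, phi2):
    """Wrap the azimuthal separation into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    return min(dphi, 2 * math.pi - dphi)

def passes_baseline(met, met_phi, jets):
    """Baseline mono-jet selection (the four cuts listed above);
    'jets' is a pT-ordered list of (pt, eta, phi) tuples in GeV."""
    if met <= 120.0:                                   # cut 1
        return False
    selected = [j for j in jets
                if j[0] > 30.0 and abs(j[1]) < 4.5]
    if not selected or len(selected) > 2:              # cut 3
        return False
    pt1, eta1, _ = selected[0]
    if pt1 <= 120.0 or abs(eta1) >= 2.0:               # cut 2
        return False
    if (len(selected) == 2 and                         # cut 4
            delta_phi(selected[1][2], met_phi) <= 0.5):
        return False
    return True

def signal_region(met, jets):
    """Tightest signal region the event enters, or None."""
    pt1 = jets[0][0] if jets else 0.0
    for name, thr in (("SR3", 350.0), ("SR2", 250.0), ("SR1", 120.0)):
        if met > thr and pt1 > thr:
            return name
    return None
\end{verbatim}
Up to detector modelling, looping such functions over the generated events is
all that is needed to reproduce a cut flow like the one in
table~\ref{tab:cutflow}.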
\begin{table}
\center
\begin{scriptsize}
\begin{tabular}{ll||r|rrr|rrrrrr|r}
\hline
&& \multicolumn{1}{c|}{A} & \multicolumn{3}{c|}{B}
& \multicolumn{6}{c|}{C} & \multicolumn{1}{c}{bkg} \\
&& $\tilde G\gld$
& $\tilde G\gld$ & $\tilde g\tilde G$ & $\tilde g\go$
& $\tilde G\gld$ & $\tilde q\tilde G$ & $\tilde g\tilde G$ & $\tilde q\sq$ & $\tilde q\tilde g$ & $\tilde g\go$
& $Z+j$\\
\hline\hline
& $\quad\slashed{E}_T>120$~GeV
& 5257
& 5433 & 1770 & 140
& 1400 & 878 & 353 & 1716 & 938 & 79
& 329893\\
& $+\,p_{T}^{j_1}>120$~GeV
& 3164
& 3291 & 1672 & 139
& 800 & 836 & 336 & 1698 & 929 & 79
& 163270\\
& $+\,$at most 2 jets
& 2776
& 2869 &1108 & 15
& 614 & 550 & 180 & 589 & 138 & 6
& 152532\\
SR1 & $+\,\Delta\phi(j_2,\slashed{E}_T)>0.5$
& 2690
& 2778 & 1061 & 14
& 583 & 508 & 170 & 551 & 128 & 5
& 146548\\
SR1' & $+\,p_T^{j_2}<150$~GeV
& 2652
& 2736 & 959 & 3
& 564 & 455 & 152 & 88 & 23 & 1
& 145954\\
\hline\hline
& SR1$\,+\slashed{E}_T>250$~GeV
& 869
& 914 & 956 & 13
& 229 & 454 & 153 & 497 & 116 & 5
& 12604\\
SR2 & $+\,p_{T}^{j_1}>250$~GeV
& 614
& 654 & 863 & 12
& 170 & 424 & 138 & 487 & 114 & 5
& 7554\\
SR2' & $+\,p_{T}^{j_2}<150$~GeV
& 591
& 628 & 778 & 2
& 157 & 379 & 123 & 75 & 21 & 1
& 7512\\
\hline\hline
& SR2$\,+\slashed{E}_T>350$~GeV
& 340
& 369 & 762 & 11
& 109 & 361 & 120 & 432 & 102 & 4
& 2037\\
SR3 & $+\,p_{T}^{j_1}>350$~GeV
& 254
& 281 & 660 & 10
& 86 & 323 & 103 & 403 & 94 & 4
& 1358\\
SR3' & $+\,p_{T}^{j_2}<150$~GeV
& 243
& 268 & 604 & 2
& 79 & 291 & 93 & 61 & 17 & 1
& 1358\\
\hline
\end{tabular}
\end{scriptsize}
\caption{SUSY signal predictions of the three scenarios in~\eqref{cases}
with $m_{3/2}=2\times10^{-13}$~GeV for the number of events passing
each step of the
selection requirements in~\eqref{cuts} and \eqref{sr}, expected for an integrated luminosity of 10.5~fb$^{-1}$
at
the LHC-8TeV.
$Z(\to\nu\bar\nu)+j$ background is also shown as a reference.}
\label{tab:cutflow}
\end{table}
In table~\ref{tab:cutflow} we present SUSY signal predictions for the
number of events passing each step of the above selection requirements.
As in fig.~\ref{fig:dis}, we generate events for each subprocess
including parton-shower and detector effects.
In addition to the three SUSY benchmark scenarios in~\eqref{cases} with
the gravitino mass at $2\times10^{-13}$~GeV, we show the
$Z(\to\nu\bar\nu)+j$ background prediction, which is the dominant
background, as a reference; see table~2 in the ATLAS analysis~\cite{ATLAS:2012zim} for more
details on the background estimation including other channels.
At the LO parton level, $\slashed{E}_T=p_T^{j_1}$ for the $\tilde G\gld(+j)$
and $\tilde q\tilde G/\tilde g\tilde G$ productions. After the parton shower, the
relation does not hold any more, and the effect of the radiation is quite large for the gravitino
pair production.
As expected, the third selection cut in~\eqref{cuts} hardly affects the
$\tilde G\gld(+j)$ and $\tilde q\tilde G/\tilde g\tilde G$ channels, while it
significantly reduces the SUSY QCD pair contributions, although for case C
these contributions remain substantial and are even dominant in SR3.
We remind the reader that SUSY QCD pair production is insensitive
to the gravitino mass if the gravitino is heavier than
$3\times10^{-13}$~GeV,
and hence these contributions have to be treated as a background when
constraining the gravitino mass.
To reduce this SUSY QCD background, on top of the above signal selection
cuts, we impose a maximal $p_T$ cut
on the second-leading jet in each SR as
\begin{align}
p_T^{j_2}<150~{\rm GeV},
\label{ptj2cut}
\end{align}
denoted as SR1', SR2' and SR3'.
As can also be seen in the $p_T^{j_2}$ distribution in fig.~\ref{fig:dis},
this cut removes a large part of the events coming from $\tilde q\sq$,
$\tilde q\tilde g$ and $\tilde g\go$.
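Table~\ref{tab:cutflow} quantifies this: for case C in SR3, the cut
in~\eqref{ptj2cut} reduces the $\tilde q\sq$ and $\tilde q\tilde g$ yields from
403 to 61 and from 94 to 17 events, respectively, i.e.\ by more than 80\%,
while the $\tilde G\gld$ signal only drops from 86 to 79 events, i.e.\ by less
than 10\%.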
\subsubsection{Merging matrix elements with parton showers}
\begin{figure}
\center
\includegraphics[width=.328\textwidth,clip]{met_a}
\includegraphics[width=.328\textwidth,clip]{met_b}
\includegraphics[width=.328\textwidth,clip]{met_c}
\caption{Missing transverse energy distributions for the three
scenarios in~\eqref{cases} with
$m_{3/2}=2\times 10^{-13}$~GeV, where the inclusive samples of the
$\tilde G\gld+1$ parton in the matrix element (dashed) are compared with the merged
samples containing an extra parton (solid).
Only the cut $\slashed{E}_T>120$~GeV is applied.
}
\label{fig:met}
\end{figure}
So far, in order to identify their characteristics and differences, we have
treated each gravitino-production subprocess independently. Now, to constrain
the SUSY mass parameters, we generate inclusive signal samples by using the
ME+PS merging procedure.
In practice, following ref.~\cite{deAquino:2012ru}, we make use of the
shower-$k_T$ scheme~\cite{Alwall:2008qv} and generate signal events with
parton multiplicity from one to two, $pp\to\tilde G\gld+1,2$ partons, with
merging separation parameters
$Q_{\rm cut}=60$~GeV and $p_{T_{\rm min}}=50$~GeV.
We carefully checked that varying $Q_{\rm cut}$ does not
change the distributions after the minimal missing transverse energy cut $\slashed{E}_T>120$~GeV.
The factorization and renormalization scales are set to the scalar
sum of the $p_T$ of all the partons in the final state.
We note that employing the ME+PS merging procedure allows us to treat
the different contributing processes, i.e. gravitino-pair, associated
gravitino and SUSY QCD pair productions (see also fig.~\ref{fig:diagram}),
within one event simulation and without double counting.
We also note that the interference among the different production
processes is very small since the width of the on-shell squarks and
gluinos is small with our parameter choice.
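Schematically, the accept/reject step of the shower-$k_T$ scheme works as in
the sketch below. This is only a conceptual illustration of the procedure
described in ref.~\cite{Alwall:2008qv}, not the actual
{\sc MadGraph}/{\sc Pythia} implementation, and the function and variable
names are our own.
\begin{verbatim}
def accept_event(first_emission_pt, me_parton_kts, q_cut=60.0,
                 highest_multiplicity=False):
    """Decide whether a showered matrix-element event is kept.

    first_emission_pt : pT of the hardest shower emission (GeV)
    me_parton_kts     : kT measures of the matrix-element partons (GeV)
    """
    if highest_multiplicity:
        # The highest-multiplicity sample may only radiate below the
        # scale of its softest matrix-element parton.
        threshold = min(me_parton_kts)
    else:
        # Harder emissions must come from the matrix element of the
        # next multiplicity, so such events are vetoed here.
        threshold = q_cut
    return first_emission_pt < threshold
\end{verbatim}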
To see the effect of an extra parton in the matrix element,
in fig.~\ref{fig:met} we compare the inclusive samples of the
$\tilde G\gld+1$ parton in the matrix element with the merged samples of
$pp\to\tilde G\gld+1,2$ partons.
For case A, where only the gravitino-pair production contributes, we find
a slightly harder spectrum in the high $\slashed{E}_T$ region for the
merged sample due to the second parton in the matrix element.
For case B, as seen in table~\ref{tab:cutflow}, besides
$\tilde G\gld$, the $\tilde g\tilde G$ production contributes significantly,
leading to a much harder spectrum than in case A.
Again, a harder spectrum for the merged sample is observed as expected.
For case C, with the minimal selection cut $\slashed{E}_T>120$~GeV, the SUSY
QCD pair productions, especially $\tilde q\sq$ and $\tilde q\tilde g$, are
dominant; these contributions do not exist in the $\tilde G\gld+1$ parton
sample. Therefore, the distributions are completely different without and with
an extra parton at the matrix-element level.
\subsubsection{Limit on the gravitino mass}
\begin{figure}
\center
\includegraphics[width=.495\textwidth,clip]{limit_j_a}
\includegraphics[width=.495\textwidth,clip]{limit_j}
\caption{Left: Visible cross sections of the mono-jet
signal for case A at $\sqrt{s}=8$~TeV (solid) and 13~TeV (dotted) as a
function of the gravitino mass, where SR1 and SR3 are shown.
The predictions are compared with the model-independent 95\% confidence level (CL) upper
limits by the ATLAS analysis~\cite{ATLAS:2012zim}.
Right: Same as the left panel, but for all the three scenarios in SR3 (solid)
and SR3' (dashed) at $\sqrt{s}=8$~TeV.
}
\label{fig:limit_j}
\end{figure}
By using the inclusive ME+PS merged samples, we can now recast the
ATLAS-8TeV mono-jet analysis with 10.5~fb$^{-1}$ data
set~\cite{ATLAS:2012zim}. ATLAS
reported a model-independent 95\% confidence level (CL) upper limit on the visible cross
section, defined as the production cross section times kinematical
acceptance times detection efficiency ($\sigma\times A\times\varepsilon$).
The values are $2.8\times10^3$~fb and 50~fb for SR1 and SR3 selections, respectively.
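At the analysed integrated luminosity of 10.5~fb$^{-1}$, these limits
correspond to roughly $2.9\times10^4$ and $5.3\times10^2$ signal events in SR1
and SR3, respectively.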
Figure~\ref{fig:limit_j} (left) presents the visible cross sections for
case A at $\sqrt{s}=8$ and 13~TeV as a function of the gravitino mass.
The horizontal lines show the ATLAS 95\% CL limits.
In SR1 the SM background is huge, and hence only the very light
gravitino case can be constrained.
The constraint in SR3 is slightly better than in SR1, and gravitino
masses below about $1.7\times10^{-13}$~GeV are excluded at 95\% CL in the
gravitino EFT.
This limit is one order of magnitude stronger than the limits from LEP and
the Tevatron~\cite{Agashe:2014kda}.
According to the relation in~\eqref{grav_mass}, the above limit
corresponds to the SUSY breaking scale of about 850~GeV.
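As a rough cross-check, if we take~\eqref{grav_mass} to be the standard
goldstino relation $m_{3/2}=F/(\sqrt{3}\,\overline{M}_{\rm Pl})$ with reduced
Planck mass $\overline{M}_{\rm Pl}\simeq 2.4\times10^{18}$~GeV (an assumption
on our part, as the equation is not repeated here), then
\[
\sqrt{F} \simeq \left[\sqrt{3}\times\bigl(1.7\times10^{-13}~{\rm GeV}\bigr)\times\bigl(2.4\times10^{18}~{\rm GeV}\bigr)\right]^{1/2} \simeq 840~{\rm GeV},
\]
consistent with the quoted scale.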
The coming LHC Run-II with $\sqrt{s}=13$~TeV is expected to explore
heavier gravitinos up to ${\cal O}(10^{-12})$~GeV, i.e. a few TeV of the
SUSY breaking scale.
In fig.~\ref{fig:limit_j} (right), the visible cross sections in SR3 at
$\sqrt{s}=8$~TeV are
shown for case A, B and C.
Roughly speaking, the cross sections for cases A and B scale as
$m_{3/2}^{-4}$ and $m_{3/2}^{-2}$, respectively, as expected.
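This behaviour can be understood from the goldstino couplings: each external
gravitino attaches via a vertex scaling as $1/F\propto 1/m_{3/2}$, so that
schematically
\[
\sigma_{\tilde G\gld}\propto F^{-4}\propto m_{3/2}^{-4},
\qquad
\sigma_{\tilde q\tilde G/\tilde g\tilde G}\propto F^{-2}\propto m_{3/2}^{-2}.
\]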
For case C, on the other hand, no sensitivity of the cross section to the
gravitino mass is observed when the gravitino mass is heavier than about
$3\times10^{-13}$~GeV.
However, by imposing the additional cut on the second-leading jet
in~\eqref{ptj2cut}, the sensitivity to the gravitino mass is recovered even
for heavier gravitinos, since the SUSY QCD pair productions are strongly
suppressed.
The maximal $p_T^{j_2}$ cut hardly affects the signals for case A and B.
\section{Mono-photon, -$Z$, or -$W$ plus missing momentum}
\label{sec:photon}
\begin{figure}
\center
\includegraphics[width=.7\textwidth,clip]{diagram_a}
\caption{Schematic diagrams for $pp\to\tilde G\gld+\gamma$, where
gravitino-pair production with a photon emission (left) and
neutralino--gravitino associated production (right) contribute.
}
\label{fig:diagram_a}
\end{figure}
In an analogous way to the mono-jet signal discussed in the previous section,
superlight gravitino scenarios can provide a mono-$\gamma$, -$Z$, or -$W$
(mono-EW boson) plus missing momentum signature via
\begin{enumerate}
\item gravitino-pair production with a $\gamma$, $Z$, or $W$ emission,
\item gravitino production associated with a neutralino/chargino, with the
subsequent decay into a $\gamma$, $Z$ or $W$ and a gravitino.
\end{enumerate}
The schematic diagrams are shown in fig.~\ref{fig:diagram_a}.
Unlike the $j+\slashed{E}_T$ signal, only the $q\bar q$ initial state can
contribute to the mono-EW boson$+\slashed{E}_T$ signal.
In this section, for simplicity, we consider the heavy
neutralino/chargino limit, where only the gravitino-pair production
contributes.%
\footnote{The mono-photon signal of $\tilde\chi^0_1\tilde G$ production via
the Higgs decay at the LHC was studied in~\cite{Petersson:2012dp}.}
\begin{figure}
\center
\includegraphics[width=.495\textwidth,clip]{pt_azw}
\caption{Ratio of the transverse momentum distributions of the $Z$ or $W$
boson to that of the photon for
$pp\to\tilde G\gld V$ $(V=\gamma,Z,W)$ at $\sqrt{s}=8$~TeV with
$m_{3/2}=1\times10^{-13}$~GeV.
Three scenarios for different left- and right-handed squark masses are
considered.}
\label{fig:pt_azw}
\end{figure}
So far, new physics searches in mono-$\gamma$, -$Z$ and -$W$ signals
at the LHC have been performed independently, but a combined analysis may be
very interesting because it offers the possibility of determining the
left--right handedness of the new physics interactions.
Instead of studying the gravitino-mass constraint in each search
channel, fig.~\ref{fig:pt_azw} shows the ratio of the $p_T$
distributions of the massive gauge boson to that of the photon for
$pp\to\tilde G\gld V$ $(V=\gamma,Z,W)$ at $\sqrt{s}=8$~TeV with
$m_{3/2}=1\times10^{-13}$~GeV.
The production involves $t$-channel squark-exchange diagrams, and for
illustration we take three scenarios for the left- and right-handed squark
masses:
\begin{align}
(m_{\tilde q_L},m_{\tilde q_R})=\{(20,20),\, (20,1),\, (1, 20)\}~{\rm TeV}.
\end{align}
The effect of the gauge-boson mass can be seen as a suppression in the
low-$p_T$ region and an enhancement in the high-$p_T$ region.
Interestingly, the ratios are very sensitive to the mass difference between
$\tilde q_L$ and $\tilde q_R$, especially for the $W$ boson, which only couples to
the left-handed squarks.
Finally, we recast the LHC-8TeV mono-photon
analyses~\cite{Khachatryan:2014rwa,Aad:2014tda}, where non-SUSY models
were studied, to constrain the gravitino mass.
For event selection, we follow the $\gamma+\slashed{E}_T$ analysis by
ATLAS~\cite{Aad:2014tda}.
Events in the signal region are required to have
the missing transverse energy
$\slashed{E}_T>150$~GeV and
a photon with
$p_T>125$~GeV and $|\eta|<1.37$.
The photon and the missing momentum
vector are also required to be well separated as
$\Delta\phi(\gamma,\slashed{E}_T)>0.4$.
Possible jets produced by ISR are
defined by the anti-$k_T$ algorithm~\cite{Cacciari:2011ma} with a
radius parameter of 0.4 and are required to be in the region
$|\eta|<4.5$ with $p_T>30$~GeV.
While events with more than one
jet are rejected,
events with one jet with $\Delta R(\gamma,j)>0.2$ and
$\Delta\phi(\slashed{E}_T,j)>0.4$ are kept for the signal with ISR, where
$\Delta R=\sqrt{(\Delta\eta)^2+(\Delta\phi)^2}$.
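As in the mono-jet case, this selection is straightforward to encode. The
sketch below is purely illustrative: the event representation (a photon and
jets as $(p_T,\eta,\phi)$ triplets, plus the missing-momentum magnitude and
azimuth) and all names are our own assumptions.
\begin{verbatim}
import math

def delta_phi(phi1, phi2):
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    return min(dphi, 2 * math.pi - dphi)

def delta_r(eta1, phi1, eta2, phi2):
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

def passes_photon_selection(met, met_phi, photon, jets):
    """photon: (pt, eta, phi); jets: anti-kT R=0.4 jets with
    pt > 30 GeV and |eta| < 4.5, as (pt, eta, phi) triplets."""
    pt_a, eta_a, phi_a = photon
    if met <= 150.0 or pt_a <= 125.0 or abs(eta_a) >= 1.37:
        return False
    if delta_phi(phi_a, met_phi) <= 0.4:
        return False
    if len(jets) > 1:        # reject events with more than one jet
        return False
    if len(jets) == 1:       # one ISR jet is allowed if well separated
        _, eta_j, phi_j = jets[0]
        if delta_r(eta_a, phi_a, eta_j, phi_j) <= 0.2:
            return False
        if delta_phi(met_phi, phi_j) <= 0.4:
            return False
    return True
\end{verbatim}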
\begin{figure}
\center
\includegraphics[width=.5\textwidth,clip]{pt_a}
\caption{Transverse momentum distributions of the photon for
$pp\to\tilde G\gld\gamma$ at $\sqrt{s}=8$~TeV with $m_{3/2}=1\times10^{-13}$~GeV
for three squark masses. All selection cuts described in the text are
applied except the $p_T^{\gamma}$ and $\slashed{E}_T$ cuts. The
$Z(\to\nu\bar\nu)+\gamma$ background is also shown as a reference.}
\label{fig:pt_a}
\end{figure}
Figure~\ref{fig:pt_a} shows the $p_T$ distributions of the photon for
$pp\to\tilde G\gld\gamma$ at $\sqrt{s}=8$~TeV, where all the above selection
cuts are applied except the $p_T^{\gamma}$ and $\slashed{E}_T$ cuts. The
gravitino mass is fixed at $1\times10^{-13}$~GeV, while the masses of squarks
are taken at 1, 2, and 20~TeV. As discussed for the mono-jet signal, the
cross section for gravitino-pair production becomes larger as the
$t$-channel squark masses increase.
In analogy with the mono-jet case, the SUSY signal is harder than the
SM background mainly due to the kinematics.
We note again that the signal rate strongly
depends on the gravitino mass as $m_{3/2}^{-4}$ and also on the
kinematical cuts.
The ATLAS $\gamma+\slashed{E}_T$ study with 20.3~fb$^{-1}$ of collisions at
$\sqrt{s}=8$~TeV reported a model-independent 95\% CL
upper limit on the fiducial cross section,
$\sigma\times A$. The value is 5.3~fb~\cite{Aad:2014tda}.
Figure~\ref{fig:xsec_a}
presents the visible cross sections for $pp\to\gamma\tilde G\gld$ at
$\sqrt{s}=8$ and 13~TeV as a function of the gravitino mass for three
different squark masses. The horizontal line shows the ATLAS
95\% CL limit, where we take a conservative estimate for
the fiducial reconstruction efficiency $\varepsilon=0.7$~\cite{Aad:2014tda}.
Gravitino masses below about $1.7\times 10^{-13}$~GeV are excluded at 95\%
CL in the heavy SUSY mass limit, which translates into a lower bound
on the SUSY breaking scale of about 850~GeV, similar to the mono-jet
limit. For lighter squark masses the
limits are lower, for example, $m_{3/2}\sim 8.4\times 10^{-14}$~GeV,
i.e. $\sqrt{F}\sim 600$~GeV for 1-TeV squarks. These results
significantly improve previous ones at LEP and the Tevatron, and are
comparable with the recent ATLAS 8-TeV mono-jet
analysis~\cite{ATLAS:2012zim}.%
\footnote{In the ATLAS study, only associated gravitino production with
a gluino or a squark was considered.}
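Since the signal rate in this limit scales as $m_{3/2}^{-4}$, the excluded
mass can be estimated from a single point on the cross-section curve. The
following sketch illustrates the scaling; the reference value is a
hypothetical number chosen only to reproduce the quoted limit, not a measured
or simulated result.
\begin{verbatim}
def excluded_mass(m_ref, sigma_ref, sigma_limit):
    """Mass below which sigma_vis = sigma_ref*(m_ref/m)**4
    exceeds the upper limit sigma_limit."""
    return m_ref * (sigma_ref / sigma_limit) ** 0.25

# Hypothetical reference point sigma_ref = 45 fb at
# m_ref = 1e-13 GeV against the ATLAS limit of 5.3 fb:
print(excluded_mass(1e-13, 45.0, 5.3))   # ~1.7e-13 GeV
\end{verbatim}
The quartic root makes the limit robust: even an ${\cal O}(1)$ change in the
reference cross section shifts the excluded mass by only tens of percent.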
The coming LHC Run-II with $\sqrt{s}=13$~TeV is expected to explore
heavier gravitinos up to ${\cal O}(10^{-12})$~GeV, i.e. a few TeV of the
SUSY breaking scale. We note that we have assumed the heavy neutralino limit
in this section. However, if the neutralino is light enough and decays
promptly, production of an on-shell neutralino can give rise to
characteristically harder photons. This leads to a different production rate
as well as a different $A\times\varepsilon$, and hence the limits can be
modified. The discussion of the mono-jet study in the previous section
carries over to the mono-photon case by replacing gluino/gluon with
neutralino/photon for the $q\bar q$ initial state.
\begin{figure}
\center
\includegraphics[width=.5\textwidth,clip]{limit_a}
\caption{Visible cross sections of the mono-photon signal at
$\sqrt{s}=8$~TeV (solid) and 13~TeV (dotted) as a function of the
gravitino mass for different squark masses.
The predictions are compared with the
model-independent 95\% confidence level (CL) upper limit by
the ATLAS analysis~\cite{Aad:2014tda}.}
\label{fig:xsec_a}
\end{figure}
\section{Summary}\label{sec:summary}
The mono-jet plus missing momentum signal at the LHC is a promising final
state in which to look for new physics. In this work we investigated
the possibility of observing a SUSY signal via a very
light gravitino. Gravitino-pair production with extra radiation and
associated gravitino production with a squark or a gluino contribute both to
mono-jet signals. Moreover, in the current ATLAS and
CMS mono-jet analyses, squark and gluino pair production may contribute
to the signal region. We have carefully investigated the impact of consistently including all three
production channels. We have constructed a SUSY QCD model, lifting previous limitations of
gravitino-EFT models. We have implemented it in the {\sc{FeynRules}} and {\sc{MadGraph5\_aMC@NLO}}
simulation framework, paying special attention to the required Majorana four-fermion interactions.
We discussed the parameter dependence of the signal rate in detail and
showed that the relative importance of the three contributing subprocesses
varies with the gravitino and SUSY particle masses. We also studied the differential distributions to gain a better understanding of the expected shapes for different parameters.
To constrain the gravitino and other SUSY masses we have recast the
LHC-8TeV mono-jet analyses by the ATLAS and CMS collaborations.
Using matrix-element/parton-shower merged samples, we have been able to treat
all three contributing subprocesses within one event simulation and
without double counting. Re-interpreting the reported model-independent 95\%
CL upper limit on the visible cross section, we found that
a gravitino mass below about $1.7\times10^{-13}$ GeV is excluded in the
limit where all SUSY particles except the gravitino are very
heavy. We showed that this limit changes when allowing squarks and gluinos to be relatively
light. To get a better sensitivity to the
gravitino mass, we suggest an additional cut in the analysis which suppresses
contributions from SUSY QCD pair production. We have also discussed prospects
for the LHC Run-II, which is expected to explore gravitino masses up to $\mathcal{O}(10^{-12})$ GeV.
Finally, we also considered production of EW particles and investigated the
mono-photon, -$Z$ and -$W$ plus missing momentum signals. We have performed a detailed
analysis for gravitino-pair production, showing that the ratios of
the different vector bosons in the final state might reveal information about
left- and right-handed couplings. We have reinterpreted the
mono-photon analysis at $\sqrt{s}=8$ TeV, and found a similar
limit as in the mono-jet analysis in the case where all SUSY particles
except the gravitino are very heavy. For lighter squark masses, the
limits are lower. We have concluded by presenting the outlook for the LHC Run-II.
\section*{Acknowledgments}
This work has been performed in the framework of the ERC grant 291377
``LHCtheory: Theoretical predictions and analyses of LHC physics:
advancing the precision frontier'' and of the FP7 Marie Curie Initial Training Network MCnetITN (PITN-GA-2012-315877).
It is also supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37.
The work of AM and FM is supported by the IISN ``MadGraph'' convention
4.4511.10 and the IISN ``Fundamental interactions'' convention 4.4517.08.
KM and BO are supported in part by the Strategic Research Program ``High Energy
Physics'' and the Research Council of the Vrije Universiteit Brussel.
\section{Introduction}
The unit cotangent bundle $ST^*M$ of a Riemannian manifold $M$ is equipped with a canonical contact structure $\xi_{\operatorname{can}}$, given in local coordinates as the kernel of $\alpha = \sum_i p_i dq_i$. The contact manifold $(ST^*M, \xi_{\operatorname{can}})$ is Stein fillable, with one filling given by the disk cotangent bundle $DT^*M$, and it is natural to ask whether other such fillings exist. Our goal in this paper is to study the Stein fillings, and more generally the exact symplectic fillings, of $(ST^*M,\xi_{\operatorname{can}})$ in the case where $M=\Sigma_g$ is a surface of genus $g \geq 2$. We will denote the contact manifold $(ST^*\Sigma_g, \xi_{\operatorname{can}})$ by $(Y_g,\xi_g)$.
In the cases $g=0,1$ the fillings of $(Y_g,\xi_g)$ are already understood, and we know that in fact any minimal symplectic filling must be diffeomorphic to $DT^*\Sigma_g$. McDuff \cite{mcduff-rational-ruled} proved this for $(ST^*S^2 = \mathbb{RP}^3,\xi_{\operatorname{can}})$, and then Hind \cite{hind-rp3} showed that $DT^*S^2$ is the unique Stein filling up to Stein homotopy. Similarly, Stipsicz \cite{stipsicz} proved that a Stein filling of the unit cotangent bundle $ST^*T^2 = T^3$ must be homeomorphic to $DT^*T^2 \cong T^2 \times D^2$, and Wendl \cite{wendl} showed that all of its minimal strong symplectic fillings are symplectically deformation equivalent to $DT^*T^2$.
For $g \geq 2$, however, no such uniqueness results for symplectic fillings are possible. This was observed by Li, Mak, and Yasui \cite[Proposition~3.3]{lmy}, who noted that in this case McDuff \cite{mcduff-disconnected} constructed a symplectic 4-manifold which strongly fills its disconnected boundary, one of whose components is $(Y_g,\xi_g)$. One can glue a symplectic cap with $b_2^+$ arbitrarily large to the remaining component to get an arbitrarily large filling of $(Y_g,\xi_g)$; or, as pointed out by Wendl \cite{wendl-blog}, one can even use this to construct a strong symplectic cobordism from any contact 3-manifold to $(Y_g,\xi_g)$.
Despite this, if we require the fillings in question to be exact or Stein then the situation is drastically simpler. Our main results are the following, which appear as Theorem~\ref{thm:homotopy-equivalent} and Theorem~\ref{thm:exact-homology} respectively.
\begin{theorem} \label{thm:main-stein}
If $(W,J)$ is a Stein filling of $(Y_g,\xi_g) = (ST^*\Sigma_g, \xi_{\operatorname{can}})$, then $W$ is s-cobordant rel boundary to the disk cotangent bundle $DT^*\Sigma_g$.
\end{theorem}
In particular, $W$ is homotopy equivalent rel boundary to $DT^*\Sigma_g$.
\begin{theorem} \label{thm:main-exact}
If $(W,\omega)$ is an exact symplectic filling of $(Y_g,\xi_g) = (ST^*\Sigma_g, \xi_{\operatorname{can}})$, then the homology of $W$ is given by
\begin{align*}
H_1(W;\mathbb{Z}) &\cong \mathbb{Z}^{2g} \oplus \mathbb{Z}/d\mathbb{Z}, & H_2(W;\mathbb{Z}) &\cong \mathbb{Z}, & H_3(W;\mathbb{Z}) &= 0
\end{align*}
for some integer $d$ such that $d^2$ divides $g-1$, and the intersection form on $H_2(W)$ is $\langle \frac{2g-2}{d^2}\rangle$.
\end{theorem}
\begin{remark}
The requirement that $d^2 \mid g-1$ implies that the integral homology and intersection form of an exact filling of $(Y_g,\xi_g)$ are uniquely determined (hence isomorphic to those of $DT^*\Sigma_g$) whenever $g-1$ is square-free. This condition is well-known to hold for a subset of the natural numbers with density $\frac{6}{\pi^2}$.
\end{remark}
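\begin{remark}
To illustrate the remaining ambiguity, the smallest genus for which $g-1$ fails to be square-free is $g=5$: there Theorem~\ref{thm:main-exact} alone allows $d\in\{1,2\}$, giving either $H_1(W;\mathbb{Z})\cong\mathbb{Z}^{10}$ with intersection form $\langle 8\rangle$, or $H_1(W;\mathbb{Z})\cong\mathbb{Z}^{10}\oplus\mathbb{Z}/2\mathbb{Z}$ with intersection form $\langle 2\rangle$.
\end{remark}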
\begin{remark}
Li, Mak, and Yasui have independently proved a stronger version of Theorem~\ref{thm:main-exact}, namely that every exact filling of $(Y_g,\xi_g)$ has the integral homology and intersection form of $DT^*\Sigma_g$, by similar arguments. In other words, every exact filling has $d=1$.
\end{remark}
One notable feature of Theorems~\ref{thm:main-stein} and \ref{thm:main-exact} is that all of the fillings involved have $b_2^+(W)$ positive. As far as we are aware, any classification theorems which have been proved to date for symplectic or Stein fillings of fillable contact 3-manifolds $(Y,\xi)$ have the feature that all of the symplectic fillings have $b_2^+=0$. This is true because the classifications usually follow from one of two starting points: either $(Y,\xi)$ has a symplectic cap containing a symplectic sphere of nonnegative self-intersection, or $(Y,\xi)$ is supported by a planar open book.
In the first of these cases, it follows from McDuff \cite{mcduff-rational-ruled} that any filling embeds into a blow-up of either $\mathbb{CP}^2$ or a ruled surface, and in either case the closed manifold has $b_2^+=1$, with $H_2^+$ generated by the symplectic sphere inside the cap. In the second case, the classifications use work of Wendl \cite{wendl}, who showed that all Stein fillings admit Lefschetz fibrations corresponding to factorizations of the monodromy into positive Dehn twists; but the planarity implies by a result of Etnyre \cite{etnyre} (whose proof relies on \cite{mcduff-rational-ruled}) that the filling is negative definite. These techniques have been applied successfully to many contact structures on lens spaces \cite{mcduff-rational-ruled, lisca-lens, pvhm, kaloti}, links of simple singularities \cite{ohta-ono}, and Seifert fibered spaces \cite{starkston}, among others.
The reason we are able to succeed in the absence of either technique is the use of a Calabi-Yau cap, as defined and studied by Li, Mak, and Yasui \cite{lmy}. We find a Lagrangian $\Sigma_g$ inside a K3 surface with simply connected complement, and a Weinstein tubular neighborhood of this Lagrangian is symplectomorphic to the disk cotangent bundle of $\Sigma_g$, so its complement is a symplectic cap for $(Y_g,\xi_g)$. Gluing this cap to any filling produces a closed 4-manifold $X$ of symplectic Kodaira dimension zero, and the classification of the latter \cite{morgan-szabo,bauer,li-quaternionic} tells us that $X$ must be an integral homology K3. In Section~\ref{sec:exact} we then deduce Theorem~\ref{thm:main-exact} from careful application of the Mayer-Vietoris sequence, and following this we use properties of Stein fillings in Section~\ref{sec:stein} to pin down the fundamental group and prove Theorem~\ref{thm:main-stein}.
Finally, we remark that we would like to strengthen Theorem~\ref{thm:main-stein} by showing that any Stein filling $W$ of $(Y_g,\xi_g)$ is homeomorphic to $DT^*\Sigma_g$, but for now this may be out of reach using our techniques. This would require a proof of the topological s-cobordism theorem for s-cobordisms between 4-manifolds with fundamental group $\pi_1(\Sigma_g)$, which is currently only known when the fundamental group is ``good'' (see Freedman-Quinn \cite{freedman-quinn}), and it is an open question whether surface groups are good.
\subsection*{Acknowledgments}
This work began at the Princeton Low-Dimensional Topology Workshop 2015, and we thank the participants for contributing to a productive environment. We thank Matt Day, John Etnyre, Dave Futer and Yo'av Rieck for helpful conversations. We are especially grateful to Ian Agol for pointing out to us that Proposition~\ref{prop:central-extension} should be true and explaining how it should follow from the RFRS condition. SS was supported by NSF grant DMS-1506157. JVHM was supported in part by Simons Foundation grant No. 279342.
\section{Calabi-Yau caps}
In this section, we will construct and study a certain type of concave filling which was originally used by Li, Mak, and Yasui \cite{lmy} to bound the topology of Stein fillings of a given manifold.
\begin{definition}
Let $(Y,\xi)$ be a contact manifold. A \emph{Calabi-Yau cap} for $(Y,\xi)$ is a symplectic manifold $(W,\omega)$ with concave boundary $(Y,\xi)$ and torsion first Chern class, such that there is a contact form $\alpha$ for $\xi$ and a Liouville vector field $X$ near $Y=\partial W$ satisfying $\alpha = \iota_X \omega|_Y$.
\end{definition}
In this section we will show that $(Y_g,\xi_g)$ admits a simply connected Calabi-Yau cap by finding an embedded Lagrangian $\Sigma_g$ of genus $g$ inside the elliptic surface $E(2)$, which is a K3 surface. The cap $X_g$ is then the complement of a Weinstein tubular neighborhood of $\Sigma_g$, and it is Calabi-Yau because a K3 surface has trivial canonical class. We remark that Li, Mak, and Yasui construct a Calabi-Yau cap for $(Y_g,\xi_g)$ by finding a Lagrangian $\Sigma_g$ inside the standard symplectic $T^4$, but this larger cap enables us to place much stronger restrictions on the possible fillings.
\begin{theorem} \label{thm:K3-lagrangian}
The elliptic surface $E(2)$ contains a Lagrangian surface $\Sigma_g$ of genus $g$ such that the complement $X_g$ of a Weinstein tubular neighborhood of $\Sigma_g$ is simply connected.
\end{theorem}
\begin{proof}
We express the elliptic fibration $\pi: E(2) \to S^2$ as a fiber sum $E(1) \#_{T^2} E(1)$, where if $a$ and $b$ are a pair of curves in the torus $T^2$ which intersect exactly once then each fibration $E(1) \to S^2$ has six singular fibers with vanishing cycle $a$ and six with vanishing cycle $b$, corresponding to the relation $(ab)^6=1$ in the mapping class group of the torus. We can think of the base of the fibration $\pi$ as a connected sum $S^2 = S^2 \# S^2$, with one copy of $E(1)$ over each summand.
Let $\gamma \subset S^2$ be a simple closed curve separating the two $S^2$ summands; then we can arrange for $\gamma$ to have a small collar neighborhood $A = (-\epsilon,\epsilon)\times \gamma \subset S^2$, with no critical values of $\pi$, so that the symplectic form on $\pi^{-1}(A) \cong A \times T^2$ is the product symplectic form induced by area forms on each factor. In particular, if we pick distinct values $t_1,\dots,t_g \in (-\epsilon,\epsilon)$ then the $g$ disjoint tori $T_i = \{t_i\} \times \gamma \times a$ are all Lagrangian.
Now let $c \subset S^2$ be a matching path \cite{seidel} between two critical points, one in either $S^2$ summand of $S^2 = S^2 \# S^2$, which each have vanishing cycle $b$. Then $c$ lifts to a Lagrangian sphere $S \subset E(2)$. We can arrange for $c$ to intersect $\gamma$ transversely in a single point, and if each $t_i$ is sufficiently close to zero it follows that $S$ intersects each $T_i$ transversely in a single point as well, namely the point $a\cap b$ in the fiber above $c \cap (\{t_i\}\times\gamma)$. We surger $S$ and $T_i$ together at each of these points \cite{lalonde-sikorav,polterovich} to produce a Lagrangian $\Sigma_g$ of genus $g$. We now take $X_g = E(2) \smallsetminus N(\Sigma_g)$, where $N(\Sigma_g)$ is a small Weinstein neighborhood of $\Sigma_g$.
It remains to be seen that $X_g$ is simply connected. Since $E(2)$ is simply connected, $\pi_1(X_g)$ is normally generated by the class of a meridian $\mu$ of the Lagrangian $\Sigma_g$. We let $c' \subset S^2$ be a path in one of the two $S^2$ summands with endpoints at a pair of critical values which both have vanishing cycle $a$, such that $c'$ intersects $c$ once transversely and is disjoint from each of the $\{t_i\}\times \gamma$. Then there is a sphere $S' \subset E(2)$ lying above $c'$ (which need not be Lagrangian) such that $S' \cap S$ is the single point $a \cap b$ in the fiber over $c' \cap c$, hence $S'$ intersects $\Sigma_g$ transversely in precisely this point. We arrange for $S'$ to intersect $\overline{N(\Sigma_g)}$ in a single meridional disk $D$ about this point, and then $S' \cap X_g$ is a disk $\overline{S'\smallsetminus D}$ with boundary a meridian of $\Sigma_g$, so $[\mu]=0$ and we are done.
\end{proof}
\begin{proposition} \label{prop:cap-betti}
The cap $X_g$ has Betti numbers $b_2^+(X_g) = 2$ and $b_2^-(X_g)=19$.
\end{proposition}
\begin{proof}
Recall that $b_2^+(K3) = 3$ and $b_2^-(K3) = 19$. If $D_g$ is the Weinstein neighborhood of the surface $\Sigma_g$, so that $D_g$ is symplectomorphic to the disk cotangent bundle of $\Sigma_g$, then $H_2(D_g) = H_2(\Sigma_g) = \mathbb{Z}$, and since $\Sigma_g$ has self-intersection $2g-2 > 0$ the signature of $D_g$ is 1. By Novikov additivity we have $\sigma(D_g) + \sigma(X_g) = \sigma(K3) = -16$, so $X_g$ has signature $-17$. It thus suffices to show that $b_2^+(X_g) = 2$.
Let $V_+ \subset H_2(K3;\mathbb{Q})$ be a positive definite 3-dimensional subspace containing the class $[\Sigma_g]$. We can extend $[\Sigma_g]$ to a rational basis of $V_+$ whose other two classes are orthogonal to $[\Sigma_g]$, and thus integral multiples of those two classes can be represented by surfaces which are disjoint from $\Sigma_g$ and even avoid the neighborhood $D_g$. These surfaces span a positive definite subspace of $H_2(X_g;\mathbb{Q})$, so that $b_2^+(X_g) \geq 2$. However, if $b_2^+(X_g) \geq 3$ then adjoining $[\Sigma_g]$ to a basis of a 3-dimensional positive-definite subspace of $H_2(X_g;\mathbb{Q})$ would yield $b_2^+(K3) \geq 4$, which is absurd.
\end{proof}
We can understand the homology of $X_g$ more precisely by considering the Mayer-Vietoris sequence associated to the decomposition $K3 = D_g \cup_{Y_g} X_g$.
\begin{proposition} \label{prop:cap-homology}
We have $H_1(X_g;\mathbb{Z}) = H_3(X_g;\mathbb{Z}) = 0$ and $H_2(X_g;\mathbb{Z}) \cong \mathbb{Z}^{21} \oplus \mathbb{Z}^{2g}$, where the intersection form on $X_g$ has block form $\left(\begin{smallmatrix}Q_{g}&0\\0&0\end{smallmatrix}\right)$ with respect to this decomposition for some nondegenerate form $Q_{g}$ on $\mathbb{Z}^{21}$.
\end{proposition}
\begin{proof}
The claim that $H_1(X_g) = 0$ follows immediately from $X_g$ being simply connected, so then $H^1(X_g)=0$ as well by the universal coefficient theorem. For $H_3(X_g)$, we use the general fact that if $X$ is an orientable $n$-manifold with nonempty boundary, then $H_{n-1}(X)$ injects into $H_{n-1}(X,\partial X) \cong H^1(X)$: the long exact sequence of the pair $(X,\partial X)$ says that
\[ 0 \to H_{n}(X,\partial X) \to H_{n-1}(\partial X) \to H_{n-1}(X) \to H_{n-1}(X,\partial X) \]
is exact, and the map $H_n(X,\partial X) \to H_{n-1}(\partial X)$ is an isomorphism $\mathbb{Z} \xrightarrow{\sim} \mathbb{Z}$ since it carries the relative fundamental class $[X,\partial X]$ to $[\partial X]$. In general, this implies that $H_{n-1}(X)$ is torsion-free since $H^1(X)$ is; in this case, since $H^1(X_g)=0$ we have $H_3(X_g)=0$ as well.
Since $H_1(K3) = H_3(K3) = 0$ and $H_1(X_g)=0$, a portion of the Mayer-Vietoris sequence is given by
\[ 0 \to H_2(Y_g) \xrightarrow{i_2} H_2(D_g) \oplus H_2(X_g) \xrightarrow{j} H_2(K3) \xrightarrow{\delta} H_1(Y_g) \xrightarrow{i_1} H_1(D_g) \to 0. \]
Now from $H_1(Y_g) \cong \mathbb{Z}^{2g} \oplus \mathbb{Z}/(2g-2)$ we compute $H_2(Y_g) \cong H^1(Y_g) \cong \mathbb{Z}^{2g}$, so that
\[ 0 \to \mathbb{Z}^{2g} \xrightarrow{i_2} \mathbb{Z} \oplus H_2(X_g) \xrightarrow{j} \mathbb{Z}^{22} \xrightarrow{\delta} \mathbb{Z}^{2g} \oplus \mathbb{Z}/(2g-2) \xrightarrow{i_1} \mathbb{Z}^{2g} \to 0 \]
is exact. Any torsion in $H_2(X_g)$ must lie in $\ker(j) = \operatorname{Im}(i_2)$, and since $\operatorname{Im}(i_2) \cong \mathbb{Z}^{2g}$ is torsion-free it follows that $H_2(X_g)$ is as well, hence $H_2(X_g) = \mathbb{Z}^{b_2(X_g)}$. The map $i_1: \mathbb{Z}^{2g} \oplus \mathbb{Z}/(2g-2) \to \mathbb{Z}^{2g}$ is surjective and its target $\mathbb{Z}^{2g}$ is free, so $\ker(i_1)$ must be precisely the torsion subgroup $\mathbb{Z}/(2g-2)$ of the domain. Thus $\operatorname{Im}(\delta) = \mathbb{Z}/(2g-2)$, and we deduce from the above sequence that
\[ 0 \to \mathbb{Z}^{2g} \xrightarrow{i_2} \mathbb{Z} \oplus \mathbb{Z}^{b_2(X_g)} \xrightarrow{j} \mathbb{Z}^{22} \to \mathbb{Z}/(2g-2) \to 0 \]
is exact. It follows that $\operatorname{Im}(j)$ has index $2g-2$ and that $b_2(X_g) = 21+2g$; the latter fact implies that $H_2(X_g) = \mathbb{Z}^{21+2g}$.
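Explicitly, the rank statement follows from the vanishing of the alternating sum of ranks in the exact sequence above,
\[ 2g - \bigl(1 + b_2(X_g)\bigr) + 22 - 0 = 0, \]
since the finite group $\mathbb{Z}/(2g-2)$ has rank zero.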
Next, we note that the image of the natural map $H_2(Y_g) \to H_2(D_g)$ contributes to $b_2^0(D_g)$, since any surface inside $Y_g$ can be displaced inside a collar neighborhood of $\partial D_g$; but $H_2(D_g)$ is torsion-free and positive definite, so the map $H_2(Y_g) \to H_2(D_g)$ must be zero. Since $i_2$ is injective, it follows that $H_2(Y_g) = \mathbb{Z}^{2g}$ injects into $H_2(X_g) = \mathbb{Z}^{21+2g}$. Then $\ker(j) \subset H_2(X_g)$ is isomorphic to $\mathbb{Z}^{2g}$, so that the map $H_2(X_g) \to H_2(K3)$ has rank 21. Since its image is a subgroup of $\mathbb{Z}^{22}$ it is free abelian, hence isomorphic to $\mathbb{Z}^{21}$, and so we have an exact sequence
\[ 0 \to H_2(Y_g) \to H_2(X_g) \to \mathbb{Z}^{21} \to 0 \]
which splits because $\mathbb{Z}^{21}$ is free. Thus we have a direct sum decomposition
\[ H_2(X_g) \cong \mathbb{Z}^{21} \oplus H_2(Y_g) \]
in which the $H_2(Y_g)$ summand lies in the kernel of the intersection form. But we have seen that $b_2^+(X_g)+b_2^-(X_g) = 21$, so the intersection form must be nondegenerate on the $\mathbb{Z}^{21}$ summand and the proof is complete.
\end{proof}
\begin{remark} \label{rem:Xg-sublattice}
The kernel of $j: H_2(D_g) \oplus H_2(X_g) \to H_2(K3)$ is the $H_2(Y_g) \cong \mathbb{Z}^{2g}$ summand of $H_2(X_g)$, so it restricts to an injective map
\[ j': H_2(D_g) \oplus \mathbb{Z}^{21} \to H_2(K3) \]
whose domain is the lattice $\mathbb{Z} \oplus \mathbb{Z}^{21}$ with intersection form $\left(\begin{smallmatrix}2g-2&0\\0&Q_g\end{smallmatrix}\right)$ in block form, and $j'$ embeds this lattice as an index-$(2g-2)$ sublattice of $H_2(K3) \cong \mathbb{Z}^{22} \cong 3H \oplus -2E_8$.
\end{remark}
\section{The topology of exact fillings}
\label{sec:exact}
We can now use the Calabi-Yau cap $(X_g,\omega_g)$ provided by Theorem~\ref{thm:K3-lagrangian} to understand the topology of fillings of $(Y_g,\xi_g)$.
\begin{proposition}
Let $(W,\omega)$ be an exact symplectic filling of $(Y_g,\xi_g)$. Then the closed symplectic manifold
\[ (Z,\omega_Z) = (W,\omega) \cup_{(Y_g,\xi_g)} (X_g,\omega_g) \]
is an integer homology K3, with $H_1(Z;\mathbb{Z})=H_3(Z;\mathbb{Z})=0$ and $H_2(Z;\mathbb{Z}) = \mathbb{Z}^{22}$.
\end{proposition}
\begin{proof}
We can easily verify that $K_Z\cdot[\omega_Z] = 0$, where $K_Z$ is the canonical class of $(Z,\omega_Z)$. Indeed, we can express it as a sum $K_Z|_W \cdot [\omega] + K_Z|_{X_g} \cdot [\omega_g]$, and in the first term we have $[\omega]=0$ since the form $\omega$ is exact, while in the second term we have $K_Z|_{X_g} = 0$ because $X_g$ has trivial canonical class. Moreover, we have $b_2^+(Z) \geq b_2^+(X_g) = 2$, with the latter equality provided by Proposition~\ref{prop:cap-betti}.
Since $b_2^+(Z) \geq 2$ and $K_Z \cdot [\omega_Z] = 0$, it follows from Taubes \cite{taubes-more} that the only Seiberg-Witten basic classes on $Z$ are $\pm K_Z$. Further work of Taubes \cite{taubes-sw-gr} then shows that $K_Z = 0$, hence $0$ is the only basic class: indeed, $K_Z$ is Poincar\'e dual to an embedded symplectic surface $\Sigma$, and we have $K_Z \cdot [\omega_Z] = \int_\Sigma \omega_Z \geq 0$ with equality only if $[\Sigma] = 0$. It follows that $Z$ must be symplectically minimal, since otherwise the blow-up formula \cite{fs-blowup} implies that there would be at least two basic classes.
We have now shown that $(Z,\omega_Z)$ is minimal with trivial canonical class, and this proves that its symplectic Kodaira dimension \cite{li-kodaira-zero} is zero. By work of Morgan--Szab\'o \cite{morgan-szabo}, Bauer \cite{bauer}, and Li \cite{li-quaternionic}, it follows that $Z$ has the rational homology of a K3 surface, an Enriques surface, or a $T^2$-bundle over $T^2$. The latter two cases imply $b_2(Z)=10$ and $b_2(Z) \leq 6$ respectively, and neither of these can happen -- we already know that $b_2(Z) \geq b_2^-(Z) \geq b_2^-(X_g) = 19$ -- so $Z$ is a rational homology K3.
Finally, if $H_1(Z)$ is nontrivial, then it is torsion and so the kernel of the abelianization map $\pi_1(Z) \to H_1(Z)$ has finite index in $\pi_1(Z)$. If the corresponding finite cover $(\tilde{Z},\tilde{\omega}_Z) \to (Z,\omega_Z)$ has degree $n=|H_1(Z)|$, then $(\tilde{Z},\tilde{\omega}_Z)$ has symplectic Kodaira dimension zero and hence signature at least $-16$ \cite{bauer,li-quaternionic}, whereas $\sigma(\tilde{Z}) = -16n$, so we must have $n=1$ and thus $H_1(Z)=0$. It follows from Poincar\'e duality and the universal coefficient theorem that $H_3(Z) \cong H^1(Z) = 0$, and that $H_2(Z) \cong H^2(Z)$ is torsion-free since $H_1(Z)=0$ is.
\end{proof}
\begin{remark} \label{rem:simply-connected}
In the above argument, we see that $Z$ has even intersection form since $K_Z = 0$ is a characteristic class. If we can show that $\pi_1(Z) = 1$, then $Z$ will be a simply connected, even, smooth 4-manifold with $b_2^+(Z)=3$ and $b_2^-(Z)=19$, implying that it is homotopy equivalent and hence homeomorphic to a K3 surface \cite{freedman}.
For example, if $(W,J)$ is a Stein filling of $(Y_g,\xi_g)$ then the inclusion $Y_g \hookrightarrow W$ induces a surjection $\pi_1(Y_g) \to \pi_1(W)$, and the cap $X_g$ is simply connected, so van Kampen's theorem says that $\pi_1(Z) = \pi_1(W) \ast_{\pi_1(Y_g)} 1 = 1$. Thus if $(W,J)$ is a Stein filling then $Z$ is homeomorphic to a K3 surface.
\end{remark}
\begin{corollary} \label{cor:exact-betti}
If $(W,\omega)$ is an exact symplectic filling of $(Y_g,\xi_g)$, then $W$ has the same Betti numbers as the disk cotangent bundle $DT^*\Sigma_g$, namely $b_3(W)=0$, $b_2^+(W)=1$ and $b_2^-(W) = b_2^0(W) = 0$, and $b_1(W)=2g$.
\end{corollary}
\begin{proof}
We glue the cap $(X_g,\omega_g)$ to $(W,\omega)$ to form $Z$, which is a homology K3 and thus has signature $-16$. Novikov additivity says that $-16 = \sigma(W) + \sigma(X_g)$, and from Proposition~\ref{prop:cap-betti} we conclude that $\sigma(W) = 1$. In particular, we have $b_2^+(W) \geq 1$.
Now we consider the Mayer-Vietoris sequence for $Z = W \cup_{Y_g} X_g$ with coefficients in $\mathbb{Q}$: since $b_2(Z)=22$ and $b_2(Y_g)=b_1(Y_g)=2g$, the part of the sequence between $H_3(Z;\mathbb{Q})=0$ and $H_1(Z;\mathbb{Q})=0$ has the form
\[ 0 \to \mathbb{Q}^{2g} \to \mathbb{Q}^{b_2(W)} \oplus \mathbb{Q}^{21+2g} \to \mathbb{Q}^{22} \to \mathbb{Q}^{2g} \to \mathbb{Q}^{b_1(W)} \oplus \mathbb{Q}^0 \to 0. \]
Exactness forces the alternating sum of ranks to vanish, so $b_1(W) = b_2(W) + 2g - 1$, while the surjection $\mathbb{Q}^{2g} \to \mathbb{Q}^{b_1(W)}$ at the end of the sequence gives $b_1(W) \leq 2g$; since we already know that $b_2(W) \geq 1$, it follows that $b_2(W)=1$ and $b_1(W)=2g$. Then $\sigma(W)=1$ implies that $b_2^+(W)=1$ and $b_2^-(W)=b_2^0(W)=0$ as claimed. Similarly, between $H_4(W;\mathbb{Q}) \oplus H_4(X_g;\mathbb{Q}) = 0$ and $H_3(Z;\mathbb{Q}) = 0$, we have
\[ 0 \to \mathbb{Q} \to \mathbb{Q} \to \mathbb{Q}^{b_3(W)} \oplus \mathbb{Q}^0 \to 0 \]
and this can only be exact if $b_3(W) = 0$.
\end{proof}
\begin{theorem} \label{thm:exact-homology}
If $(W,\omega)$ is an exact filling of $(Y_g,\xi_g)$, then for some integer $d$ such that $d^2$ divides $g-1$ we have $H_3(W;\mathbb{Z}) = 0$; $H_2(W;\mathbb{Z}) \cong \mathbb{Z}$, with intersection form $\langle \frac{2g-2}{d^2}\rangle$; and $H_1(W;\mathbb{Z}) \cong \mathbb{Z}^{2g} \oplus \mathbb{Z}/d\mathbb{Z}$.
\end{theorem}
\begin{proof}
We recall from the proof of Proposition~\ref{prop:cap-homology} that $H_3(W)$ is torsion-free, and so $b_3(W)=0$ implies that $H_3(W)=0$.
Now we write $Z = W \cup_{Y_g} X_g$, with $Z$ an integer homology K3, and consider the Mayer-Vietoris sequence over $\mathbb{Z}$:
\[ 0 \to H_2(Y_g) \xrightarrow{i} H_2(W) \oplus H_2(X_g) \xrightarrow{j} H_2(Z) \xrightarrow{\delta} H_1(Y_g). \]
We know that $H_1(Y_g) = \mathbb{Z}^{2g} \oplus \mathbb{Z}/(2g-2)$, hence $H_2(Y_g) = H^1(Y_g) = \mathbb{Z}^{2g}$, and similarly we know the homology of $X_g$ from Proposition~\ref{prop:cap-homology}. Any torsion in $H_2(W)$ must lie in $\ker(j) = \operatorname{Im}(i)$ since $H_2(Z) = \mathbb{Z}^{22}$ is free, but $\operatorname{Im}(i) \cong \mathbb{Z}^{2g}$ is also free, so $H_2(W)$ is torsion-free and thus $H_2(W) = \mathbb{Z}$ by Corollary~\ref{cor:exact-betti}.
Since $H_2(W)$ is positive definite and both $H_2(Y_g)$ and $H_2(W)$ are torsion-free, the map $H_2(Y_g) \to H_2(W)$ is zero, and we know that $H_2(X_g)$ decomposes as $\mathbb{Z}^{21} \oplus H_2(Y_g)$. Thus we can split off the $H_2(Y_g) \xrightarrow{\sim} H_2(Y_g)$ component of $i$ in the above sequence, leaving us with
\[ 0 \to \mathbb{Z} \oplus \mathbb{Z}^{21} \to H_2(Z) \xrightarrow{\delta} H_1(Y_g). \]
Let $\Lambda \subset H_2(Z)$ be the image of $\mathbb{Z} \oplus \mathbb{Z}^{21}$; then $\Lambda$ is a sublattice of rank 22, so it has finite index, which must equal $\lvert\operatorname{Im}(\delta)\rvert$. But from $H_1(Y_g) = \mathbb{Z}^{2g} \oplus \mathbb{Z}/(2g-2)$ it follows that $\operatorname{Im}(\delta)$ is a subgroup of $\mathbb{Z}/(2g-2)$, which then has order $\frac{2g-2}{d}$ for some integer $d \geq 1$. Since $H_1(X_g) = H_1(Z) = 0$, the portion $H_2(Z) \xrightarrow{\delta} H_1(Y_g) \to H_1(W) \to 0$ of the sequence shows that $H_1(W)$ is isomorphic to $H_1(Y_g) / \operatorname{Im}(\delta) \cong \mathbb{Z}^{2g} \oplus \mathbb{Z}/d$.
Let $e_1,\dots,e_{22}$ be an integral basis of $\Lambda$, where $e_1$ generates the direct summand $H_2(W) \cong \mathbb{Z}$ and $e_2,\dots,e_{22}$ is an integral basis of $\mathbb{Z}^{21} \subset H_2(X_g)$, and form a matrix $A$ whose columns are the elements $e_1,\dots,e_{22}$ expressed in an integral basis of $H_2(Z) \cong \mathbb{Z}^{22}$. Letting $Q_{Z}$ be the intersection form on $H_2(Z)$ in this latter basis, then, the intersection form on $\Lambda$ in the basis $\{e_i\}$ is given by $Q_\Lambda =A^{T}Q_{Z}A$, and we have $\det(Q_\Lambda) = \pm \left(\frac{2g-2}{d}\right)^2$ since $Q_{Z}$ is unimodular and $\lvert\det(A)\rvert = [H_2(Z):\Lambda] = \frac{2g-2}{d}$.
On the other hand, we can write $Q_\Lambda$ in block form with respect to this basis as $\left(\begin{smallmatrix}e_1\cdot e_1&0\\0&Q_g\end{smallmatrix}\right)$, where $Q_g$ is the nondegenerate intersection form on $\mathbb{Z}^{21} \subset H_2(X_g)$; note that $\lvert\det(Q_g)\rvert$ does not depend on $W$ but only on the cap $X_g$. From this it is clear that $\det(Q_\Lambda) = (e_1\cdot e_1) \det(Q_g)$, so it follows that $(e_1\cdot e_1) \lvert\det(Q_g)\rvert = \left(\frac{2g-2}{d}\right)^2$. In the case where $W$ is the disk cotangent bundle $DT^*\Sigma_g$ we have $e_1\cdot e_1 = \Sigma_g^2 = 2g-2$ and $d=1$ (see Remark~\ref{rem:Xg-sublattice}), so it follows that $\lvert\det(Q_g)\rvert$ is equal to $2g-2$. We conclude that
\[ e_1\cdot e_1 = \frac{\left(\frac{2g-2}{d}\right)^2}{2g-2} = \frac{2g-2}{d^2}. \]
Since $Z$ is a homology K3 it has an even intersection form, so $e_1\cdot e_1$ must be an even integer and we have $d^2 \mid g-1$, completing the proof.
\end{proof}
\begin{corollary}
If $g-1$ is square-free then any exact filling of $(Y_g,\xi_g)$ has the same homology and intersection form as the disk cotangent bundle $DT^*\Sigma_g$.
\end{corollary}
\section{The topology of Stein fillings}
\label{sec:stein}
\subsection{The homology of a Stein filling}
In this section we further investigate the topology of a filling $(W,\omega)$ of $(Y_g,\xi_g)$ which is not only exact but Stein; in this case we denote it by $(W,J)$ to avoid confusion. In this case $W$ has a handle decomposition consisting of only 0-, 1-, and 2-handles, from which it follows classically that the inclusion $i: Y_g \hookrightarrow W$ induces a surjection $\pi_1(Y_g) \xrightarrow{i_*} \pi_1(W)$. We note that since $Y_g$ is a circle bundle over $\Sigma_g$ with Euler number $2g-2$, its fundamental group has presentation
\[ \pi_1(Y_g) = \left\langle a_1,\dots,a_g,b_1,\dots,b_g,t \mathrel{}\middle|\mathrel{} \prod_{i=1}^g [a_i,b_i] = t^{2g-2}, [a_i,t]=[b_i,t] = 1 \right\rangle, \]
where $t$ represents a circle fiber and is central. We will define $2g+1$ distinguished elements of $\pi_1(W)$ by
\begin{align*}
\alpha_j &= i_*(a_j), & \beta_j &= i_*(b_j), & \tau &= i_*(t)
\end{align*}
for $j=1,\dots,g$. Since $i_*$ is surjective, we know that $\tau$ is central and that these $2g+1$ elements generate $\pi_1(W)$; in fact, it turns out that $\alpha_1,\dots,\alpha_g$ and $\beta_1,\dots,\beta_g$ suffice.
\begin{proposition} \label{prop:stein-meridian}
Suppose that $(W,J)$ is a Stein filling of $(Y_g,\xi_g)$ as above, and let $H \subset \pi_1(Y_g)$ denote the subgroup generated by $a_1,\dots,a_g,b_1,\dots,b_g$. If $i_*: \pi_1(Y_g) \to \pi_1(W)$ is the inclusion-induced map, then $i_*|_H$ is surjective; in other words, $i_*(H) = \pi_1(W)$, and so $\pi_1(W)$ is generated by the elements $\alpha_1,\dots,\alpha_g$ and $\beta_1,\dots,\beta_g$.
\end{proposition}
\begin{proof}
It is not hard to check that $H$ is normal of index $2g-2$, since the only other generator in the above presentation is central (namely $t$) and the quotient $\pi_1(Y_g)/H$ is $\langle t \mid t^{2g-2}=1\rangle$. Since $i_*$ is surjective, it is also easy to see that $i_*(H)$ is a normal subgroup of $\pi_1(W)$. Moreover, $i_*$ induces a map
\[ \pi_1(Y_g)/H \to \pi_1(W)/i_*(H) \]
between the respective quotients, and this map is surjective, so since $\pi_1(Y_g)/H$ is a finite cyclic group generated by $[t]$ it follows that $\pi_1(W)/i_*(H)$ is also a finite cyclic group which is generated by $[\tau]$. Thus $i_*(H)$ is a normal subgroup of $\pi_1(W)$ of some finite index $k \geq 1$ which divides $|\pi_1(Y_g)/H| = 2g-2$.
Let $p:\tilde{W} \to W$ be a finite $k$-fold covering such that $p_*(\pi_1(\tilde{W})) = i_*(H)$. Then $\tilde{W}$ is also a Stein domain, and its boundary $\tilde{Y} = \partial \tilde{W}$ is a $k$-fold cover of $Y_g = \partial W$, which must be connected since Stein domains have connected boundary. Thus $G = (p|_{\tilde{Y}})_*(\pi_1(\tilde{Y}))$ is an index-$k$ subgroup of $\pi_1(Y_g)$. The cover $\tilde{Y} \to Y_g$ is normal, implying that $G$ is moreover a normal subgroup of $\pi_1(Y_g)$: indeed, the cover $\tilde{W} \to W$ is normal since $i_*(H)$ is a normal subgroup of $\pi_1(W)$, so its deck transformations act transitively on each fiber, and these restrict to deck transformations of $\tilde{Y}$, so the latter act transitively on fibers of $\tilde{Y}$.
We now consider the commutative diagram
\[ \xymatrix{
\tilde{Y} \ar[r]^{\tilde{i}} \ar[d]_{p|_{\tilde{Y}}} & \tilde{W} \ar[d]^{p} \\
Y_g \ar[r]^i & W
} \]
where $i$ and $\tilde{i}$ are the respective inclusion maps of each manifold into the Stein domain which it bounds, and thus induce surjections on the respective fundamental groups. We have $p_*(\pi_1(\tilde{W})) = i_*(H)$ by construction, and since $\tilde{i}_*(\pi_1(\tilde{Y})) = \pi_1(\tilde{W})$ we can write
\[ i_*(H) = p_*(\tilde{i}_*(\pi_1(\tilde{Y}))) = i_*((p|_{\tilde{Y}})_*(\pi_1(\tilde{Y}))) = i_*(G). \]
Thus if $t^j \in G$ for some $j$, then we have $\tau^j \in i_*(G) = i_*(H)$, and so $k$ divides $j$.
Now we consider the composition $\varphi: \tilde{Y} \xrightarrow{p} Y_g \to \Sigma_g$, where we are now using $p$ to denote the restriction $p|_{\tilde{Y}}$. The preimage of a point $x \in \Sigma_g$ is a $k$-fold cover of the circle fiber above $x$ in $Y_g$, which we identify with $t$ (at least up to conjugation, since we should pick a base point). If this preimage is disconnected, then one of its components is a circle $\gamma \subset \tilde{Y}$ which is an $l$-fold cover of the circle fiber in $Y_g$ for some $1 \leq l < k$. Thus $p_*(\gamma)$ is conjugate to $t^l$; but $G$ is normal and does not contain $t^l$, so it cannot actually contain $p_*(\gamma)$ either. We conclude that $\varphi^{-1}(x)$ is a circle, and hence that $\tilde{Y}$ is also a circle bundle over $\Sigma_g$. Its Euler number is then $\frac{2g-2}{k}$, though we only need that it is nonzero: if it were zero, then the image under $p$ of a section would give a section of $Y_g$, which has nonzero Euler number.
From the above we see that $b_1(\tilde{Y}) = 2g$, and since $H_1(\tilde{Y})$ surjects onto $H_1(\tilde{W})$ it follows that $b_1(\tilde{W}) \leq 2g$, hence
\[ \chi(\tilde{W}) = 1 - b_1(\tilde{W}) + b_2(\tilde{W}) \geq 1-2g. \]
But we also know that $\chi(\tilde{W}) = k\chi(W) = k(2-2g)$, so we have $k(2-2g) \geq 1-2g$, or equivalently $(k-1)(2-2g) \geq -1$. Since $2-2g \leq -2$, this can only hold if $k=1$; but $k$ is the index of $i_*(H)$ in $\pi_1(W)$, so the two must be equal.
\end{proof}
\begin{theorem} \label{thm:homology-stein}
Let $(W,J)$ be a Stein filling of $(Y_g,\xi_g)$. Then $W$ has the same integral homology and intersection form as the disk cotangent bundle $DT^*\Sigma_g$. In particular, we have $H_1(W) \cong \mathbb{Z}^{2g}$, and the intersection form on $H_2(W) \cong \mathbb{Z}$ is $\langle 2g-2 \rangle$.
\end{theorem}
\begin{proof}
In light of Theorem~\ref{thm:exact-homology} we know that $H_1(W) \cong \mathbb{Z}^{2g} \oplus \mathbb{Z}/d\mathbb{Z}$ for some $d \geq 1$, and that it suffices to show that $d=1$. Now according to Proposition \ref{prop:stein-meridian}, the fundamental group $\pi_1(W)$ is generated by the $2g$ elements $\alpha_1,\dots,\alpha_g,\beta_1,\dots,\beta_g$, hence its abelianization $H_1(W)$ is also generated by the corresponding homology classes. However, if $d>1$ then any generating set of $\mathbb{Z}^{2g} \oplus \mathbb{Z}/d\mathbb{Z}$ has at least $2g+1$ elements, as one sees by tensoring with $\mathbb{Z}/p\mathbb{Z}$ for a prime $p$ dividing $d$, so we must have $d=1$.
\end{proof}
Theorem~\ref{thm:homology-stein} tells us the first group homology of $\pi_1(W)$, since $H_1(\pi_1(W);\mathbb{Z}) = H_1(W;\mathbb{Z})$. The second homology of $\pi_1(W)$ will also be useful later:
\begin{proposition} \label{prop:stein-hurewicz}
If $(W,J)$ is a Stein filling of $(Y_g,\xi_g)$, then $H_2(\pi_1(W); \mathbb{Z}) \cong \mathbb{Z}$.
\end{proposition}
\begin{proof}
Let $\pi = \pi_1(W)$, and recall that $H_2(W) \cong \mathbb{Z}$. The group homology $H_2(\pi;\mathbb{Z}) = H_2(K(\pi,1);\mathbb{Z})$ is classically known to be isomorphic to the cokernel of the Hurewicz map
\[ h: \pi_2(W) \to H_2(W), \]
which is $\mathbb{Z} / \operatorname{Im}(h)$, so $H_2(\pi;\mathbb{Z})$ is $\mathbb{Z}$ if the Hurewicz map is zero and finite otherwise. The $2g$ elements $\alpha_1,\dots,\alpha_g,\beta_1,\dots,\beta_g$ generate $\pi$ by Proposition~\ref{prop:stein-meridian}, so their images generate $H_1(W;\mathbb{Z}) = \mathbb{Z}^{2g}$ and are thus linearly independent over $\mathbb{Q}$.
Supposing that $h$ is nonzero, we have $H_2(\pi;\mathbb{Q}) = 0$. According to Stallings \cite[Theorem~7.4]{stallings-homology}, the linear independence of the $\alpha_i$ and $\beta_i$ in $H_1(\pi)$ and the vanishing of $H_2(\pi;\mathbb{Q})$ guarantee that $\alpha_1,\dots,\alpha_g,\beta_1,\dots,\beta_g$ form a basis of a free subgroup of $\pi$, and we conclude that $\pi$ is the free group $F_{2g}$. The element $\prod_{j=1}^g [\alpha_j,\beta_j]$ of $\pi$ is central, since it equals $\tau^{2g-2}$ and $\tau$ is central; but free groups have trivial center, so $\prod_{j=1}^g [\alpha_j,\beta_j] = 1$ and thus $\pi$ is a nontrivial quotient of $F_{2g}$. Since finitely generated free groups are Hopfian, a nontrivial quotient of $F_{2g}$ cannot be isomorphic to $F_{2g}$, and we have a contradiction.
\end{proof}
Since the class $[t]$ of the circle fiber generates the torsion summand of $H_1(Y_g)$, and Theorem~\ref{thm:homology-stein} says that $H_1(W)$ is torsion-free, we see that $[\tau]=0$ in $H_1(W)$. Thus $\tau$ lies in the commutator subgroup of $\pi_1(W)$. In Section~\ref{ssec:pi_1} we will see that in fact $\tau=1$ in $\pi_1(W)$.
\subsection{The fundamental group of a Stein filling}
\label{ssec:pi_1}
Let $(W,J)$ denote a Stein filling of $(Y_g,\xi_g)$ as usual. Our goal in this section is to explicitly determine its fundamental group:
\begin{theorem} \label{thm:surface-group}
The fundamental group $\pi_1(W)$ is isomorphic to $\pi_1(\Sigma_g)$.
\end{theorem}
Our strategy will be to first show that $\pi_1(W)$ must be an extension of $\pi_1(\Sigma_g)$ by a cyclic group and then use what we know about its group homology to show that this cyclic group is in fact trivial.
Summarizing what we know so far about $\pi_1(W)$, we have seen that it is a quotient of
\[ \left\langle \alpha_1,\dots,\alpha_g,\beta_1,\dots,\beta_g,\tau \mathrel{}\middle|\mathrel{} \prod_{i=1}^g [\alpha_i,\beta_i] = \tau^{2g-2}, [\alpha_i,\tau]=[\beta_i,\tau] = 1 \right\rangle, \]
where the central element $\tau$ is the image of a circle fiber $t \in \pi_1(Y_g)$. Moreover, $H_1(W;\mathbb{Z}) = \mathbb{Z}^{2g}$ is generated by the elements $\alpha_i$ and $\beta_i$, and the central element $\tau$ belongs to the commutator subgroup of $\pi_1(W)$. Thus there is a surjection
\[ p: \pi_1(\Sigma_g) = \left\langle \alpha_1,\dots,\alpha_g,\beta_1,\dots,\beta_g \mathrel{}\middle|\mathrel{} \prod_{i=1}^g [\alpha_i,\beta_i] = 1 \right\rangle \to \pi_1(W)/\langle\tau\rangle \]
through which the abelianization map $\operatorname{ab}: \pi_1(\Sigma_g) \to \mathbb{Z}^{2g}$ factors as
\[ \pi_1(\Sigma_g) \xrightarrow{p} \pi_1(W)/\langle\tau\rangle \xrightarrow{\operatorname{ab}} \mathbb{Z}^{2g}. \]
Such a factorization exists for any surjection $p: \pi_1(\Sigma_g) \to \pi_1(W)/\langle\tau\rangle$: since $\pi_1(\Sigma_g) \xrightarrow{\operatorname{ab}\circ p} \mathbb{Z}^{2g}$ is a map to an abelian group, it factors as $\pi_1(\Sigma_g) \xrightarrow{\operatorname{ab}} \mathbb{Z}^{2g} \xrightarrow{\psi} \mathbb{Z}^{2g}$, and $\psi$ is a surjection $\mathbb{Z}^{2g} \to \mathbb{Z}^{2g}$ since $\operatorname{ab}\circ p$ is onto, so it is an isomorphism and then $\operatorname{ab}: \pi_1(\Sigma_g) \to \mathbb{Z}^{2g}$ is equal to $(\psi^{-1}\circ\operatorname{ab})\circ p$.
If we fix a surjection $\varphi: \mathbb{Z}^{2g} \to \mathbb{Z}/n\mathbb{Z}$ for some $n > 1$, then we have a collection of surjective maps of the form
\[ \xymatrix@!R=5pt{
\pi_1(W) \ar[dr] & & & \\
& \pi_1(W)/\langle\tau\rangle \ar[r]^-{\operatorname{ab}} & \mathbb{Z}^{2g} \ar[r]^-{\varphi} & \mathbb{Z}/n\mathbb{Z} \\
\pi_1(\Sigma_g) \ar[ur]_p & & &
} \]
and the kernels of the maps $\pi_1(W) \to \mathbb{Z}/n\mathbb{Z}$ and $\pi_1(\Sigma_g) \to \mathbb{Z}/n\mathbb{Z}$ define normal, $n$-fold cyclic covers $W'$ and $\Sigma_{g'}$ of $W$ and $\Sigma_g$ respectively, where $2-2g' = n(2-2g)$.
\begin{definition}
Let $(W,J)$ be a Stein filling of $(Y_g,\xi_g)$, and let $p: \pi_1(\Sigma_g) \to \pi_1(W)/\langle\tau\rangle$ be a surjection. If $\Sigma_{g'} \to \Sigma_g$ and $W' \to W$ are the finite cyclic covers produced by the above construction, then we will say that $(\Sigma_{g'}, W')$ is \emph{induced by $(p,\varphi)$}.
\end{definition}
Since $W'$ is a finite cover of a Stein manifold it has a natural Stein structure $J'$, so its boundary $(Y',\xi')$ is connected, and as in the proof of Proposition~\ref{prop:stein-meridian} it follows that $Y'$ is a normal, $n$-fold cyclic cover of $Y_g$.
\begin{lemma} \label{lem:canonical-cyclic-cover}
If $(\Sigma_{g'},W')$ is induced by $(p,\varphi)$, then $(W',J')$ is a Stein filling of the canonical contact structure $(Y',\xi') = (Y_{g'},\xi_{g'})$ on the unit cotangent bundle of $\Sigma_{g'}$.
\end{lemma}
\begin{proof}
The circle fiber $t \in \pi_1(Y_g)$ is in the kernel of $\pi_1(Y_g) \xrightarrow{i_*} \pi_1(W) \to \mathbb{Z}/n\mathbb{Z}$ since it maps to $\tau \in \pi_1(W)$, so it lifts to a closed curve in $Y'$. Its preimage in $Y'$ therefore consists of $n$ disjoint circles, so the orbit space $Y'/S^1$ is an $n$-fold cover of $\Sigma_g$, hence $Y'$ is a circle bundle over $\Sigma_{g'}$. The Euler class of $Y'\to\Sigma_{g'}$ is then $n$ times the Euler class of $Y_g\to\Sigma_g$, namely $-n\chi(\Sigma_g) = -\chi(\Sigma_{g'})$, so in fact $Y'$ is the unit cotangent bundle $Y_{g'}$ of $\Sigma_{g'}$.
Since the contact structure $\xi_g$ is tangent to the fibers of $Y_g \to \Sigma_g$, its cover $\xi'$ is likewise tangent to the fibers of $Y_{g'} \to \Sigma_{g'}$, and the only contact structure on the unit cotangent bundle of $\Sigma_{g'}$ with Legendrian fibers is the canonical one \cite[Proposition~3.3]{giroux-circle-bundles} (cf.\ also \cite{lutz}). Thus $(Y',\xi') = (Y_{g'}, \xi_{g'})$, and so $(W',J')$ is a Stein filling of $(Y_{g'},\xi_{g'})$.
\end{proof}
\begin{proposition} \label{prop:cover-surjection}
Suppose that $(\Sigma_{g'},W')$ is induced by $(p,\varphi)$. Identifying $\pi_1(\Sigma_{g'})$ as a subgroup of $\pi_1(\Sigma_g)$, the map $p$ induces a surjection
\[ p': \pi_1(\Sigma_{g'}) \to \pi_1(W')/\langle\tau'\rangle \]
such that $\ker(p') = \ker(p)$.
\end{proposition}
\begin{proof}
It is clear that $\langle \tau \rangle \subset \pi_1(W')$, viewing the latter as a subgroup of $\pi_1(W)$, since $\tau$ is in the kernel of $\pi_1(W) \to \mathbb{Z}/n\mathbb{Z}$. Moreover, if $\tau' \in \pi_1(W')$ denotes the image of the circle fiber $t' \in \pi_1(Y_{g'})$, then since $t'$ projects to the circle fiber $t \in \pi_1(Y_g)$, the covering map $W' \to W$ sends $\tau'$ to $\tau$, so we have
\[ \langle \tau \rangle \cap \pi_1(W') = \langle \tau' \rangle. \]
Thus the kernel of $\pi_1(W') \hookrightarrow \pi_1(W) \to \pi_1(W)/\langle \tau \rangle$ is $\langle \tau'\rangle$, inducing an injective map
\[ \frac{\pi_1(W')}{\langle \tau' \rangle} \hookrightarrow \frac{\pi_1(W)}{\langle \tau \rangle}, \]
and it follows that $\pi_1(W')/\langle \tau' \rangle$ has index $n$ in $\pi_1(W)/\langle \tau\rangle$. Since $\pi_1(W')$ is by definition the kernel of the map $\pi_1(W) \to \mathbb{Z}/n\mathbb{Z}$, the group $\pi_1(W')/\langle\tau'\rangle$ sits in the kernel of the surjective map $\pi_1(W)/\langle\tau\rangle \xrightarrow{\varphi\circ\operatorname{ab}} \mathbb{Z}/n\mathbb{Z}$, and this kernel has index $n$, so we conclude that
\[ \pi_1(W')/\langle\tau'\rangle = \ker( \varphi\circ\operatorname{ab}: \pi_1(W)/\langle\tau\rangle \to \mathbb{Z}/n\mathbb{Z} ). \]
Since $\pi_1(\Sigma_{g'})$ is the kernel of $(\varphi \circ \operatorname{ab}) \circ p$, it follows that $p(\pi_1(\Sigma_{g'}))$ lies in $\ker(\varphi\circ \operatorname{ab})$, and so $p$ restricts to a map
\[ p': \pi_1(\Sigma_{g'}) \to \pi_1(W')/\langle \tau' \rangle, \]
which is easily seen to be surjective just as $p$ is. Finally, since $p'$ is the restriction of $p$ to $\pi_1(\Sigma_{g'}) \subset \pi_1(\Sigma_{g})$ it follows that $\ker(p') = \ker(p) \cap \pi_1(\Sigma_{g'})$. But $\ker(p) \subset \ker(\varphi\circ\operatorname{ab}\circ p) = \pi_1(\Sigma_{g'})$, and so $\ker(p)=\ker(p')$ as claimed.
\end{proof}
Proposition~\ref{prop:cover-surjection} allows us to characterize $\pi_1(W)$ as a cyclic extension of a surface group:
\begin{proposition}
\label{prop:central-extension}
The fundamental group $\pi_1(W)$ is a central extension of $\pi_1(\Sigma_g)$ by a cyclic group. More precisely, there is a short exact sequence
\[ 1 \to \langle \tau \rangle \to \pi_1(W) \to \pi_1(\Sigma_g) \to 1 \]
with the image of $\langle \tau \rangle$ being central in $\pi_1(W)$.
\end{proposition}
\begin{proof}
It suffices to show that the surjection $p: \pi_1(\Sigma_g) \to \pi_1(W)/\langle\tau\rangle$ is also injective. Supposing otherwise, let $x$ be a nontrivial element of $\ker(p)$. Since surface groups are RFRS \cite{agol} (cf.\ also \cite{hempel}), there is a descending chain of subgroups
\[ \pi_1(\Sigma_g) = G_0 \supset G_1 \supset G_2 \supset \dots \]
such that each $G_{i+1}$ is a normal subgroup of $G_i$ with finite cyclic quotient, defined as the kernel of a map which factors through $G_i \to (G_i)^{\operatorname{ab}}$, and $\bigcap_{i=0}^\infty G_i = \{1\}$. This corresponds to a tower of normal, finite cyclic covers
\[ \dots \to \Sigma_{g_2} \to \Sigma_{g_1} \to \Sigma_{g_0} = \Sigma_g \]
such that $\pi_1(\Sigma_{g_{i+1}}) = \ker\big(\pi_1(\Sigma_{g_i}) \xrightarrow{\operatorname{ab}} \mathbb{Z}^{2g_i} \xrightarrow{\varphi_i} \mathbb{Z}/n_i\mathbb{Z}\big)$ for some $\varphi_i$. Now by induction, since $\operatorname{ab}: \pi_1(\Sigma_{g}) \to \mathbb{Z}^{2g}$ factors through $p_0 = p: \pi_1(\Sigma_g) \to \pi_1(W_0)/\langle\tau_0\rangle$, with $W_0=W$, we can construct for each $i \geq 0$ a normal cyclic cover $(W_{i+1},J_{i+1})$ of $(W_i,J_i)$ as above, with $(\Sigma_{g_{i+1}},W_{i+1})$ induced by $(p_i,\varphi_i)$.
By Lemma~\ref{lem:canonical-cyclic-cover}, $(W_{i+1},J_{i+1})$ is a Stein filling of $(Y_{g_{i+1}}, \xi_{g_{i+1}})$, and Proposition~\ref{prop:cover-surjection} provides a surjection
\[ p_{i+1}: \pi_1(\Sigma_{g_{i+1}}) \to \pi_1(W_{i+1})/\langle\tau_{i+1}\rangle \]
with $\ker(p_{i+1})=\ker(p_i)$. Thus $x \in \ker(p_0)$ implies that $x \in \ker(p_i) \subset G_i$ for all $i$. But since $\bigcap G_i = \{1\}$ it follows that $x\not\in G_k$ for some $k \geq 0$, and this is a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:surface-group}]
The circle fiber $\tau$ generates a $\mathbb{Z}/n\mathbb{Z}$ subgroup for some $n \geq 0$ (with $n=0$ if it is nontorsion), so Proposition~\ref{prop:central-extension} provides a short exact sequence of groups
\[ 1 \to \mathbb{Z}/n\mathbb{Z} \to \pi_1(W) \to \pi_1(\Sigma_g) \to 1 \]
for some $n \geq 0$. We will show that $n=1$, and thus that $\pi_1(W)\to\pi_1(\Sigma_g)$ is an isomorphism.
The homologies of these groups with $\mathbb{Z}$ coefficients are related by the Lyndon/Hochschild-Serre spectral sequence (see e.g.\ \cite{brown}),
\[ E^2_{p,q} = H_p(\pi_1(\Sigma_g); H_q(\mathbb{Z}/n\mathbb{Z}; \mathbb{Z})) \quad \Longrightarrow \quad H_{p+q}(\pi_1(W); \mathbb{Z}). \]
Since $\pi_1(\Sigma_g)$ has cohomological dimension 2, the $E^2$ page is supported in the interval $0\leq p \leq 2$. Moreover, the homology of $\mathbb{Z}/n\mathbb{Z}$ is given by (letting $k \geq 1$):
\begin{align*}
H_q(\mathbb{Z};\mathbb{Z}) &= \begin{cases} \mathbb{Z}, & q=0,1 \\ 0, & q\geq 2,\end{cases} &
H_q(\mathbb{Z}/k\mathbb{Z};\mathbb{Z}) &= \begin{cases} \mathbb{Z}, & q=0 \\ \mathbb{Z}/k\mathbb{Z}, & q\mathrm{\ odd} \\ 0, & q\geq2 \mathrm{\ even}. \end{cases}
\end{align*}
In either case, the differential $d^2: E^2_{p,q} \to E^2_{p-2,q+1}$ must be identically zero with the possible exception of the map $\delta: E^2_{2,0} \to E^2_{0,1}$, since otherwise either the source or the target vanishes. Each of the higher differentials $d^r: E^r_{p,q} \to E^r_{p-r,q+r-1}$ must vanish for $r \geq 3$ because either $p>2$ or $p-r<0$, so the spectral sequence collapses at the $E^3$ page, and we have
\begin{align*}
E^\infty_{0,2} &= 0, & E^\infty_{1,1} &= (\mathbb{Z}/n\mathbb{Z})^{2g}, & E^\infty_{2,0} &= \ker(\delta: \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}).
\end{align*}
The convergence of this spectral sequence means that these are the associated graded groups of a filtration on $H_2(\pi_1(W);\mathbb{Z})$. But the latter group is $\mathbb{Z}$ by Proposition~\ref{prop:stein-hurewicz}, so each associated graded group must be cyclic, and since $E^\infty_{1,1}$ is cyclic we must have $n=1$.
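In more detail, convergence provides a filtration
\[ 0 = F_{-1} \subseteq F_0 \subseteq F_1 \subseteq F_2 = H_2(\pi_1(W);\mathbb{Z}) \cong \mathbb{Z} \]
with $F_0 \cong E^\infty_{0,2} = 0$, $F_1/F_0 \cong E^\infty_{1,1} = (\mathbb{Z}/n\mathbb{Z})^{2g}$, and $F_2/F_1 \cong E^\infty_{2,0}$. In particular $F_1 \cong (\mathbb{Z}/n\mathbb{Z})^{2g}$ embeds in $\mathbb{Z}$, which is torsion-free of rank one; since $g \geq 1$, this rules out both $n \geq 2$ (torsion) and $n = 0$ (rank $2g \geq 2$), leaving $n = 1$.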
\end{proof}
\subsection{The homotopy type of a Stein filling}
So far we have shown that if $(W,J)$ is a Stein filling of $(Y_g,\xi_g)$, then $W$ has the homology and intersection form of the disk cotangent bundle $DT^*\Sigma_g$ (Theorem~\ref{thm:homology-stein}) and that $\pi_1(W) \cong \pi_1(\Sigma_g)$ (Theorem~\ref{thm:surface-group}), with the circle fiber of $Y_g = \partial W$ being nullhomotopic in $W$. In this section we will deduce that $W$ is therefore homotopy equivalent, and thus s-cobordant, to $DT^*\Sigma_g$ rel boundary.
\begin{proposition} \label{prop:tau-1-implies-aspherical}
If $(W,J)$ is a Stein filling of $(Y_g,\xi_g)$, then $W$ is aspherical.
\end{proposition}
\begin{proof}
A decomposition of $W$ into handles of index at most 2, with exactly one 0-handle, necessarily has $2g-1+k$ 1-handles and $k$ 2-handles for some $k\geq 1$ since $\chi(W) = 2-2g$. The corresponding presentation of $\pi_1(W) \cong \pi_1(\Sigma_g)$ has $2g-1+k$ generators and $k$ relations and thus deficiency $2g-1$.
Hillman \cite[Proof of Theorem~2]{hillman-asphericity} showed that if a presentation $P$ of a group $G$ has deficiency $1 + \beta_1(G)$, where $\beta_1$ denotes the first $L^2$-Betti number (see for example \cite{luck}), then the 2-complex corresponding to $P$ is aspherical. In the above case we know that $\beta_1(\pi_1(\Sigma_g)) = 2g-2$, so the 2-complex corresponding to the given presentation of $\pi_1(\Sigma_g)$ is aspherical, and thus $W$ (which retracts onto this complex) is aspherical as well.
\end{proof}
We have now shown that $W$ is a $K(\pi_1(\Sigma_g),1)$, and so it has the homotopy type of $DT^*\Sigma_g$. Since both are compact 4-manifolds with boundary, we can strengthen this to an assertion about manifolds rel boundary as follows.
\begin{theorem} \label{thm:homotopy-equivalent}
If $(W,J)$ is a Stein filling of $(Y_g,\xi_g)$, then $W$ is s-cobordant rel boundary to the disk cotangent bundle $DT^*\Sigma_g$.
\end{theorem}
\begin{proof}
It suffices to find a homotopy equivalence $f: DT^*\Sigma_g \to W$ which restricts to a homeomorphism $\partial(DT^*\Sigma_g) \xrightarrow{\sim} \partial W$. Since $W$ is compact and aspherical with $\pi_1(W)$ a surface group, Khan \cite[Corollary~1.23]{khan-decompositions} showed that $W$ is \emph{topologically s-rigid}, a condition which implies that if such an $f$ exists then $DT^*\Sigma_g$ is s-cobordant to $W$.
To construct $f: DT^*\Sigma_g \to W$, following Stipsicz \cite{stipsicz}, we first take a standard handlebody decomposition of $DT^*\Sigma_g$, with a 0-handle, $2g$ 1-handles, and a single 2-handle, and turn it upside down to build $DT^*\Sigma_g$ from a thickened $Y_g$ by attaching a 2-handle, $2g$ 3-handles, and a 4-handle. We define $f$ by identifying the boundaries, $\partial (DT^*\Sigma_g) \xrightarrow{\sim} \partial W$, in a way which sends a circle fiber to a circle fiber; extending $f$ over the 2-handle of $DT^*\Sigma_g$, which can be done since the attaching curve is identified with the circle fiber in $Y_g = \partial W$ and is thus nullhomotopic in $W$; and then extending $f$ over the 3- and 4-handles of $DT^*\Sigma_g$, since the obstructions to doing so lie in $\pi_2(W)=0$ and $\pi_3(W)=0$.
The map $f$ which we have constructed now induces an isomorphism $f_*: \pi_1(DT^*\Sigma_g) \to \pi_1(W)$, since it induces an isomorphism $\pi_1(\partial(DT^*\Sigma_g)) \xrightarrow{\sim} \pi_1(\partial W)$ which preserves the subgroup generated by the circle fiber, and both groups are quotients of $\pi_1(Y_g)$ by that subgroup. Moreover, $f$ induces an isomorphism on all higher homotopy groups, since these are identically zero, and so $f$ is a homotopy equivalence by Whitehead's theorem.
\end{proof}
\bibliographystyle{alpha}
\label{sec:intro}
Robot personalization to specific user preferences will become increasingly important as robots find their way into our everyday life. Harmonious human-robot interactions build trust and satisfaction with the user \cite{gasteiger_factors_2021}, whereas negative interaction experiences can quickly lead to frustration \cite{kruse_human-aware_2013}.
A cause for negative user experiences can be algorithms that do not reflect personal preferences.
When mobile household robots navigate in the vicinity of a human, basic obstacle avoidance approaches fail to capture individual user preferences.
While collision avoidance is undoubtedly crucial during navigation, the navigation policy should furthermore be human-aware and take into account user preferences regarding proxemics \cite{kruse_human-aware_2013} and privacy, compare Fig.~\ref{fig:motivation} (bottom).
Subjective preferences may vary depending on the environment and social context, e.g., navigation preferences could be reflected in the robot's approaching behavior, or in always driving in front of or behind the human.
In addition, following a certain speed profile and maintaining a certain distance from humans and other obstacles in the environment might play a role.
The resulting navigation objective for the robot is to reach the navigation goal, not necessarily by only following the shortest path, but also by taking personal robot navigation preferences into account.
Recent advances in learning socially-aware navigation behavior from human demonstrations have been made with inverse reinforcement learning, where the parameters of a proxemics-encoding reward function were inferred \cite{kollmitz_learning_2020}. Influenced by the initial shaping of the reward function \cite{ng_policy_1999}, such approaches lack the ability for navigation style personalization beyond the scope of the reward function.
For smooth navigation, reinforcement learning (RL) based continuous control has led to promising results on mobile robots \cite{tai_virtual--real_2017, pfeiffer_reinforced_2018}. Furthermore, off-policy RL methods can be complemented with demonstration data to greatly improve learning speed on a given task, even outperforming the resourcefulness of the original demonstrations \cite{vecerik_leveraging_2018}. However, RL robot navigation policies learn the most efficient trajectories to the goal. These trajectories do not necessarily reflect the original demonstration behavior, which contains user preferences.
To more precisely imitate behavior from demonstrations, behavioral cloning (BC) can be used \cite{argall_survey_2009}. However, the final policy is limited by the quality and amount of demonstration data \cite{ravichandar_recent_2020}. The dataset would need to cover most of the state space to generalize fluently in unseen environments. This poses a problem, as human demonstrators can only provide limited amounts of demonstration data due to their finite patience~\cite{thomaz_reinforcement_nodate}.
The question thus crystallizes: how do we efficiently record personal preferences and teach them to the robot without being limited by the quality and quantity of demonstrations?
\begin{figure}
\centering
\subfloat{%
\includegraphics[width=0.95\linewidth]{figures/motivation_1}}
\\
\subfloat{%
\includegraphics[width=0.95\linewidth]{figures/motivation_2}}
\caption{
\textbf{Top:} We propose a virtual reality (VR) interface to intuitively demonstrate robot navigation preferences by drawing trajectories onto the floor with a handheld controller.
\textbf{Bottom:} User study survey results on the importance of personalized navigation behavior. Participants strongly expressed their preference for personalization of robot navigation behavior, even at the possible cost of longer trajectories.
\label{fig:motivation}}
\end{figure}
In order to solve the aforementioned challenges, we propose a novel navigation learning approach together with a virtual reality (VR) interface to intuitively demonstrate robot navigation preferences by drawing trajectories onto the floor with a handheld controller, see Fig.~\ref{fig:motivation}.
Importantly, the interface does not require expert-level knowledge on robotics, facilitating personalized navigation to a wide range of users.
Our demonstration process is time-efficient, as only a few demonstrations are required.
The demonstrations are leveraged to successfully train a personalized human-aware navigation controller, by combining deep reinforcement learning and behavioral cloning.
We show that our navigation policy closely reflects user preferences from only a few demonstrations while, at the same time, generalizing to unseen states.
In an extensive user study, we evaluate the personalized navigation behavior against classical navigation approaches both in VR and on a real robot.
The three \textbf{main contributions} of our study are:
\begin{itemize}
\item A VR demonstration interface for teaching navigation preferences to robots intuitively.
\item Learning a user-personalized, context-based navigation policy based on the combination of RL and BC.
\item An interactive user study recording user specific navigation preferences, evaluating both the presented interface and learned personalized navigation policies.
\end{itemize}
\section{Related Work}
\label{sec:related}
Extensive research has been done on both human-aware navigation \cite{moller_survey_2021} and on robot personalization \cite{gasteiger_factors_2021, hellou_personalization_2021}, but surprisingly, very little work can be found at the intersection of the two disciplines.
Various studies adapt human-aware navigation behavior either by learning or inferring cost-maps \cite{bungert_human-aware_2021, perez-higueras_teaching_2018, kollmitz_learning_2020}.
These cost-maps usually encode proxemics or environmental characteristics.
To improve navigation in human-robot interaction based on context, Bruckschen \etal \cite{bruckschen_human-aware_2020} leveraged previously observed human-object interactions to predict human navigation goals, which in turn enables foresighted robot navigation and assistance.
Other studies aimed to distinguish between different environment types as context in order to automatically adjust the robot's navigation behavior \cite{xiao_appld_2020, zender_human-and_2007}.
In our work, we consider different environment scenarios as context.
Luber \etal \cite{luber_socially-aware_2012} studied the angle of approach between two individuals to improve human-aware navigation.
Recently, Narayanan \etal \cite{narayanan_proxemo_2020} leveraged the human gait posture as social context for foresighted robot navigation by predicting the human's navigation intent and emotion.
To build upon the aforementioned findings, we as well take the human orientation into account.
To learn personal navigation preferences in a human-robot collaboration scenario from demonstrations, \mbox{Kollmitz~\etal\cite{kollmitz_learning_2020}} learned the parameters of a navigation reward-function from physical human-robot interaction via inverse reinforcement learning.
More specifically, the navigation reward-function was learned from a user pushing the robot away to a desired distance. A limitation of this approach is that the state space is represented by a 2D grid map of the environment, making the approach unsuitable for larger and unknown environments. To overcome this limitation, our state space is robot-centric and continuous, focusing on the vicinity to the human and obstacles.
Xiao \etal \cite{xiao_appld_2020} proposed using teleoperation demonstrations to learn context-based parameters of a conventional planner. Here, the reproduction of demonstration trajectories during navigation is limited by the capabilities of the conventional planner. To ensure a more distinct preference reproduction including certain trajectory profiles, we chose a deep learning-based controller.
To efficiently train a deep learning based navigation controller for robot navigation via reinforcement learning, Pfeiffer \etal \cite{pfeiffer_reinforced_2018} utilized demonstration navigation data gathered from an expert planner algorithm. The demonstration data was used to pre-train the agent via imitation learning, followed by the reinforcement learning. In our work, we use a similar architecture for continuous control learning, but in contrast, we focus on human demonstrations of robot trajectories.
Virtual reality environments have been successfully deployed to simulate human-robot interactions \cite{bungert_human-aware_2021, liu_understanding_2017}, offering a tool for realistic demonstration and evaluation.
As a result, we chose to develop a VR interface that interactively records the user-demonstrated trajectories of a robot.
These demonstrated trajectories give the data required to learn user-specific robot navigation preferences.
The VR interface enables a first-person experience of the navigating robot during demonstration, ensuring a realistic perception of proxemic aspects.
In this regard, a clear benefit over, e.g., real-world robot teleoperation is the easy separation of the demonstration and reevaluation experience in simulation, enabling interactive replay of scenarios.
\section{Problem Definition and Assumptions}
In this work, we consider a differential wheeled robot that has a local navigation goal and navigates in the vicinity of a single human.
Our goal is to create a personalized robot navigation controller that adapts to user preferences by learning from demonstrations of robot trajectories that include a velocity profile.
Hereby, we focus on local human avoidance taking into account user-specific preferences.
Both human and robot are interacting in the same room, which serves as context for the navigation behavior.
We assume that the positions and orientations of the human, the robot, and all obstacles are known.
All parameters above can play a role in the robot navigation preferences of the user and need to be reflected in the robot-centric state space.
\section{Reinforcement Learning from Demonstrations}
\label{sec:rl}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\linewidth]{figures/architecture}
\caption{Schematic representation of the used architecture. \textbf{a)} Demonstration trajectories are drawn by the user and fed into the demonstration buffer. \textbf{b)} A TD3 reinforcement learning architecture with an additional behavioral cloning (BC) loss on the actor trains a personalized navigation policy for the human-robot interaction with continuous control. The learned policy is then evaluated in VR and subsequently transferred to a real robot. \textbf{c)} The robot-centric state space captures the vicinity and orientation of the human and the obstacles as well as the goal direction.}
\label{fig:architecture}
\end{figure*}
We adapted a twin-delayed deep deterministic policy gradient (TD3) architecture consisting of an actor and two critic networks \cite{fujimoto_addressing_2018}.
TD3 was chosen for two reasons:
i) It has a continuous action space allowing smooth robot control, and ii) it is off-policy and thus well suited for use with demonstration data.
The actor network outputs two continuous robot control commands, i.e., forward and angular velocity.
We introduce two modifications to classic TD3, similar to Nair \etal \cite{nair_overcoming_2018}:
i) a behavioral cloning loss on the actor network and ii) a separate buffer to integrate demonstration data.
The introduction of the behavioral cloning loss makes our approach a hybrid of reinforcement and imitation learning.
Fig.~\ref{fig:architecture} depicts a schematic overview of our approach.
\subsection{Twin-Delayed Deep Deterministic Policy Gradient}
Reinforcement learning describes the optimization of transitions from state $s_t$ to $s_{t+1}$ in a Markov decision process, where taking an action $a_t = \pi_{\phi}(s_t)$ at time step $t$ with respect to a policy $\pi_{\phi}$ results in a reward $r_t = r(s_t, a_t)$. The tuples $\left(s_t, a_t, r_t, s_{t+1}\right)$ are referred to as state-action pairs. The optimization objective is to maximize the cumulative return $R = \sum^{T}_{i=t} \gamma^{(i-t)}r_i$ of the $\gamma$-discounted rewards, onward from $t$.
With TD3, we optimize the expected return
\begin{align}
y_t = r_t + \gamma \min_{i=1, 2} Q_{\theta_i^*} \left( s_{t+1}, \pi_{\phi} \left( s_{t+1} \right) + \epsilon_{\theta_i} \right) \text{,}
\end{align}
while using the minimum of two critics $\left( Q_{\theta_1}, Q_{\theta_2}\right)$ to prevent value overestimation. $\theta_{i}$ denote the (network) parameters of critic $i$ and $\phi$ those of the actor. The clipped Gaussian policy noise $\epsilon_{\theta_i}$ stabilizes the Q-value estimation over similar state-action pairs and is controlled by the standard deviation $\sigma_{\epsilon_{\theta_i}}$.
To ensure sufficient exploration, we add Gaussian noise from a process $\mathcal{N}$ with standard deviation $\sigma_{\epsilon_\pi}$ to the actions drawn from the actor, so that $a_t = \pi_{\phi}(s_t) + \mathcal{N}(0, \sigma_{\epsilon_\pi})$.
To update the critic $\theta_{i}$, TD3 optimizes the loss
\begin{align}
\mathcal{L}_{\theta_i} = \frac{1}{b} \sum_{j}^{b} \left( y_j - Q_i \left( s_j, a_j | \theta_{i} \right) \right)^2
\end{align}
over all state-action pairs $j$ in the batch of size $b$. The actor network parameters $\phi$ are updated using the policy gradient:
\begin{align}
\nabla_{\phi} J = \frac{1}{b} \sum_{j}^{b} \nabla_{a} Q_\text{min}(s, a|\theta) |_{s=s_j, a=\pi(s)} \nabla_{\phi} \pi(s|\phi)|_{s_j}
\end{align}
For further details on the learning algorithm, please refer to \cite{fujimoto_addressing_2018} and \cite{silver_deterministic_2014}.
The actor and critic networks share a feed-forward three-layered perceptron architecture with 256 neurons per layer. We normalize both the input (observation space for actor and critic) and the output (action space for the actor) of the networks.
\subsection{Replay and Demonstration Buffer}
In addition to TD3's standard experience replay buffer of size $B_{E}$, we introduce a second replay buffer to solely hold demonstration data, called the demonstration buffer.
As the demonstration data is collected before training begins, its main difference from the experience buffer is that it is not updated during training and thus holds the demonstration data for the entire training duration. Its size $B_D$ is equivalent to the number of demonstration state-action pairs.
We uniformly sample both from the experience replay buffer and the demonstration buffer with batch size $b_{E} = b_{D} = 64$. As both batches are merged, the actor and critic networks are optimized both with the demonstration and the latest experience data at every training step.
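As an illustration, the merged sampling step could be sketched as follows; the buffer representation and names are our own and not tied to a particular library:
\begin{verbatim}
import numpy as np

def sample_merged_batch(exp_buffer, demo_buffer, b_e=64, b_d=64, rng=None):
    # Uniformly sample b_e experience transitions and b_d demonstration
    # transitions and merge them into one training batch. Each buffer is
    # assumed to hold (s, a, r, s_next, done) tuples; the boolean mask
    # marks demonstration rows for the behavioral cloning loss.
    rng = rng or np.random.default_rng()
    idx_e = rng.integers(len(exp_buffer), size=b_e)
    idx_d = rng.integers(len(demo_buffer), size=b_d)
    batch = [exp_buffer[i] for i in idx_e] + [demo_buffer[i] for i in idx_d]
    is_demo = np.array([False] * b_e + [True] * b_d)
    return batch, is_demo
\end{verbatim}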
\subsection{Behavioral Cloning}
\label{sec:bc}
Similar to \cite{nair_overcoming_2018}, we introduce a behavioral cloning loss $\mathcal{L}_\text{BC}$ on the actor network as an auxiliary learning task:
\begin{align}
\mathcal{L}_\text{BC} = \sum_{i=1}^{b_D} || \pi(s_i|\phi) - a_i ||^2
\label{eq:bc_loss}
\end{align}
Only the batch fraction originating from the demonstration replay buffer is processed on the behavioral cloning loss.
The resulting gradient of the actor network is
\begin{align}
\nabla_{\phi} J_\text{total} = \lambda_{\text{RL}} \nabla_{\phi} J - \lambda_{\text{BC}} \nabla_{\phi} \mathcal{L}_\text{BC} \text{.}
\end{align}
Weighing the two gradients against each other via $\lambda_{\text{RL}}$ and $\lambda_{\text{BC}}$ is important to achieve a balance where the navigation policy reproduces demonstration-like behavior around known states~(in demonstration data), but also learns to handle unknown states correctly.
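To make the interplay of the two gradient terms concrete, the following PyTorch-style sketch shows a single hybrid actor update. It is a minimal illustration under our own naming (\texttt{actor}, \texttt{critic\_1}, \texttt{critic\_2}, and the boolean mask \texttt{is\_demo} from the merged batch), not a definitive implementation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def actor_update(actor, critic_1, critic_2, states, actions, is_demo,
                 actor_opt, lam_rl=10/3, lam_bc=20/3):
    # One hybrid actor update: deterministic policy gradient on the full
    # merged batch plus a behavioral cloning term on its demonstration part.
    # states, actions: tensors of the merged batch; is_demo: boolean tensor
    # marking rows sampled from the demonstration buffer.
    pi = actor(states)                                    # a = pi(s | phi)
    q_pi = torch.min(critic_1(states, pi),
                     critic_2(states, pi))                # Q_min(s, pi(s))
    rl_loss = -q_pi.mean()                                # maximize Q -> minimize -Q
    bc_loss = F.mse_loss(pi[is_demo], actions[is_demo],
                         reduction='sum')                 # squared BC action error
    loss = lam_rl * rl_loss + lam_bc * bc_loss
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()
\end{verbatim}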
\subsection{State Space}
\label{sec:statespace}
A visualization of our robot-centric state space is shown in Fig.~\ref{fig:architecture}c.
The state space is kept as minimalist as possible to ensure a fast and reliable training performance.
The functionality of our approach is demonstrated for a single human in the vicinity of the robot.
The state~vector contains the person's distance $d_{H}$ to the robot's position and relative angle $\Delta\alpha_{H}$ to its orientation, facilitating human-awareness.
Furthermore, the relative angle to the navigation goal $\Delta \alpha_{G}$ is provided.
To increase awareness for the human's field of view, the person's body orientation relative to the orientation of the robot $\Delta \psi_{RH}$ is included.
It indicates whether a person faces the robot or not.
To deal with obstacles, we include the closest distance $d_{O_i}$ and relative angle $\Delta \alpha_{O_i}$ from the robot's pose to all environment obstacles $O_i$.
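For concreteness, the state vector could be assembled as in the following sketch; the pose conventions and helper names are illustrative assumptions:
\begin{verbatim}
import numpy as np

def build_state(robot_pose, human_pose, goal_pos, obstacles):
    # Assemble the robot-centric state vector. robot_pose / human_pose:
    # (x, y, heading); goal_pos: (x, y); obstacles: list of closest
    # obstacle points (x, y), one per environment obstacle.
    def wrap(a):
        # normalize an angle to [-pi, pi]
        return (a + np.pi) % (2 * np.pi) - np.pi

    def rel_polar(target):
        # distance and angle of a point relative to the robot's pose
        dx, dy = target[0] - robot_pose[0], target[1] - robot_pose[1]
        return np.hypot(dx, dy), wrap(np.arctan2(dy, dx) - robot_pose[2])

    d_h, alpha_h = rel_polar(human_pose)
    _, alpha_g = rel_polar(goal_pos)                # only the goal direction is used
    psi_rh = wrap(human_pose[2] - robot_pose[2])    # relative body orientation
    state = [d_h, alpha_h, alpha_g, psi_rh]
    for obs in obstacles:
        state.extend(rel_polar(obs))
    return np.array(state, dtype=np.float32)
\end{verbatim}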
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/approaches}
\caption{
\textbf{a)} The demonstrated robot navigation preference trajectories of two participants A and B are shown for different human position-orientation pairs (color-coded).
Note the wall-following preference of user B, whereas user A prefers a smooth curve navigation style.
\textbf{b)}~The personalized controller successfully learned to reflect the individual user preferences. Note that when no specific side preference is given as in the demonstrations in the corridor, the controller reproduces trajectories mainly on one side.
We evaluated our approach against \textbf{c)} the social cost model and \textbf{d)} the Dynamic Window Approach.
A quantitative comparison of the different approaches in both environments reveals \textbf{e)} a higher relative path length (normalized by linear distance) and \textbf{f)} a higher preferred minimum distance.
\textbf{g)} The increased path area for our controller (between the learned trajectory and linear distance) also points to a general preference for earlier deviation from the shortest path in favor for more comfortable trajectories.
}
\label{fig:approaches}
\end{figure*}
\subsection{Reward}
\label{sec:reward}
The reward function is designed to avoid collisions and ensure goal-oriented navigation behavior. We aim to teach user-specific navigation preferences not by complex reward shaping, but only via demonstration data. Consequently, we keep the reward as sparse as possible, besides basic collision penalties and goal rewards. More specifically, the reward function is defined as
\begin{align}
r = r_\text{collision} + r_\text{goal} + r_\text{timeout} \text{.}
\end{align}
We introduce a scaling factor for the reward $c_\text{rew} = 5$ that is used throughout the reward definition below.
When the robot collides with the human or an obstacle during navigation, we penalize with
\begin{align}
r_\text{collision} =
\begin{cases}
- c_\text{rew} & \text{if collision} \\
0 & \text{else.}
\end{cases}
\end{align}
The goal reaching reward is provided to the agent if the robot is located closer than a certain distance to the goal position:
\begin{align}
\label{eq:reward_goal}
r_\text{goal} =
\begin{cases}
+ c_\text{rew} & \text{if goal reached in demonstration data} \\
0 & \text{if goal reached during training} \\
0 & \text{else}
\end{cases}
\end{align}
Note that we give a detailed explanation on the goal reaching reward in Sec.~\ref{sec:demo_reward}. Finally,
the timeout reward encourages the agent to avoid inefficient actions by penalizing behavior where the goal is not reached by the agent after a certain number of steps~$N_\text{ep}$:
\begin{align}
r_\text{timeout} =
\begin{cases}
- \frac{c_\text{rew} }{2} & \text{if episode timeout } (n > N_\text{ep}) \\
0 & \text{else}
\end{cases}
\end{align}
All three conditions above (goal reached, collision, timeout) are end criteria for an episode.
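A direct transcription of this reward logic might read as follows (a sketch; the \texttt{is\_demo} flag marks transitions replayed from the demonstration buffer, cf.\ Sec.~\ref{sec:demo_reward}):
\begin{verbatim}
C_REW = 5.0   # reward scaling factor c_rew

def reward(collision, goal_reached, step, is_demo, n_max=300):
    # Sparse reward; returns (reward, episode_done). The goal-reaching
    # reward is granted only on demonstration transitions, boosting the
    # value of demonstration-like states.
    if collision:
        return -C_REW, True
    if goal_reached:
        return (C_REW if is_demo else 0.0), True
    if step > n_max:                     # episode timeout
        return -C_REW / 2.0, True
    return 0.0, False
\end{verbatim}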
\section{Demonstration and Training Environment}
\begin{figure}[ht!]
\centering
\subfloat{%
\includegraphics[width=0.65\linewidth]{figures/corridor_top}}
\hfill
\subfloat{%
\includegraphics[width=0.325\linewidth]{figures/room_top}}
\caption{Top view of both demonstration environment configurations: Corridor \textbf{(left)} and room \textbf{(right)} of the VR interface for the user study. The human needs to be avoided by the robot navigating to the goal.}
\label{fig:environments_both}
\end{figure}
We propose a novel VR demonstration setup, where the user teaches the robot personal navigation preferences in a virtual reality environment, see Fig.~\ref{fig:architecture}a. The user can see the robot and its navigation goal (green cone). Intuitively, the person uses the handheld controller emitting a beam of light to draw preferred trajectories onto the floor in VR. The trigger on the backside of the controller allows the user to dynamically select the robot speed along the drawn trajectory. The robot executes the demonstrated trajectory right away for reevaluation, allowing the user to either keep or redo it. After the demonstrations have been collected, the training process begins. Finally, the personalized navigation controller is evaluated in VR, before being transferred to the real robot. For the user study conducted, we chose a corridor and a room environment, see Fig.~\ref{fig:environments_both}.
\subsection{Simulator and Robot}
\label{sec:environment-robot}
Our robotic platform is the Kobuki \textit{Turtlebot 2}. As a VR and physics simulator we use Pybullet \cite{coumans_pybullet_2016}, the VR system is a HTC Vive Pro Eye.
A key challenge in using demonstrations for reinforcement learning is bridging the gap between the agent's and the demonstrator's state space.
To do so, we analytically calculate action commands along a demonstration trajectory, so that the robot follows the trajectory by executing successive actions calculated at the control frequency $f$.
The kinematics of a differential wheeled robot are
\begin{align}
v &= \frac{K}{2} \left( u_r + u_l \right) \nonumber\\
\omega &= \frac{K}{L} \left( u_r - u_l \right) \text{,} \label{eq:kinematics}
\end{align}
where $K$ is the wheel radius, $L$ the distance between both wheels, and $v$ the forward velocity. The rotation speeds of the left and right wheel are $u_l$ and $u_r$. By integrating $v$ and $\omega$ over time~$t$, we find a relation for the finite distance ${\Delta d = v \Delta t}$ traveled forward and the change in robot orientation ${\Delta \alpha= \omega \Delta t}$ within a certain time period $\Delta t$:
\begin{align}
\label{eq:wheel_kinematics_discrete}
\frac{v}{\omega} = \frac{\Delta d}{\Delta \alpha}
\end{align}
The time period $\Delta t$ is determined by our chosen control frequency ${f = \frac{1}{\Delta t} = \SI{5}{\hertz}}$ of the robot. Now, given a desired forward velocity $v$, one can analytically calculate the matching angular control command $\omega$ to follow a discrete segment ${(\Delta d, \Delta \alpha)}$ along a trajectory.
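As a minimal sketch, assuming the segment geometry $(\Delta d, \Delta \alpha)$ has been extracted from the interpolated spline, the matching command is:
\begin{verbatim}
def segment_command(v, delta_d, delta_alpha):
    # Angular velocity matching forward velocity v along a discrete
    # trajectory segment of length delta_d and heading change delta_alpha,
    # using v / omega = delta_d / delta_alpha.
    return v, v * delta_alpha / delta_d
\end{verbatim}
For instance, at $f = \SI{5}{\hertz}$ and $v = \SI{0.2}{\meter\per\second}$, a segment with $\Delta d = \SI{0.04}{\meter}$ and $\Delta\alpha = 0.1\,\mathrm{rad}$ yields $\omega = 0.5\,\mathrm{rad/s}$.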
\subsection{Collecting and Processing Demonstration Trajectories}
\label{sec:demonstration}
We use the following steps to process raw demonstration trajectories into state-action pairs contained in the demonstration buffer:
\begin{enumerate}
\item In VR, a user draws a trajectory using the handheld controller. The analogue trigger on the controller backside allows to control the robot speed linearly in the range from $v_\text{min} =\SI{0.1}{\meter\per\second}$ to $ v_\text{max}= \SI{0.25}{\meter\per\second}$ at the drawing location.
\item The drawn trajectory is interpolated and smoothed with a 2D spline, parameterized by $k \in [0, 1]$. Also, the speed information is spline-interpolated.
\item The robot is supposed to follow the demonstrated trajectory. Based on the speed along the spline $v(k)$, we consecutively extract the locations on the spline at which the robot receives a new control command, using $\Delta d = v(k) \Delta t$.
\item Inserting $v(k)$ for all control command locations into \eqref{eq:wheel_kinematics_discrete}, the corresponding angular velocities $\omega$ are calculated.
\item The robot is placed and oriented according to the trajectory's starting point.
\item Successively, the control command tuples $a_t = (v_t, \omega_t)$ are executed and the robot follows the trajectory.
\item Before and after the execution of each action $a_t$, we record the corresponding states $s_t$, $s_{t+1}$ and the reward $r_{t+1}$.
\item Finally, all state-action-reward~pairs~$\left( s_t, a_t, r_t, s_{t+1}\right)$ are stored in the demonstration buffer.
\end{enumerate}
Each demonstration trajectory is checked against possible collisions with the environment.
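A condensed sketch of steps 2) to 4), assuming SciPy's spline routines; the smoothing factor and the dense arc-length approximation are our own illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.interpolate import splprep, splev

def demo_to_actions(points, speeds, f=5.0, smooth=0.01):
    # Convert a drawn trajectory into control commands. points: (N, 2)
    # array of drawn floor positions; speeds: (N,) trigger speeds in
    # [v_min, v_max] (strictly positive). Returns (v, omega) tuples to be
    # executed at control frequency f.
    tck, u = splprep(points.T, s=smooth)      # 2D spline, parameter k in [0, 1]
    # approximate the arc length along the spline by dense sampling
    dense = np.linspace(0.0, 1.0, 1000)
    xy = np.array(splev(dense, tck)).T
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])

    actions, k = [], 0.0
    while True:
        v = float(np.interp(k, u, speeds))    # spline-interpolated speed v(k)
        delta_d = v / f                       # distance until the next command
        s_next = np.interp(k, dense, arc) + delta_d
        if s_next >= arc[-1]:
            break                             # end of the demonstrated trajectory
        k_next = float(np.interp(s_next, arc, dense))
        # heading change over the segment, from the spline tangents
        dx0, dy0 = splev(k, tck, der=1)
        dx1, dy1 = splev(k_next, tck, der=1)
        delta_alpha = np.arctan2(dy1, dx1) - np.arctan2(dy0, dx0)
        delta_alpha = (delta_alpha + np.pi) % (2 * np.pi) - np.pi  # wrap
        actions.append((v, v * delta_alpha / delta_d))  # discrete kinematics
        k = k_next
    return actions
\end{verbatim}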
\subsection{Data Augmentation}
We use data augmentation to increase the data output from a single demonstration trajectory. More specifically, the robot's initial placement is shifted linearly by $\frac{\Delta d}{N_\text{aug}}$ within the distance $\Delta d = v(k_0) \Delta t$ along the spline \mbox{$N_\text{aug} = 15$ times}, where $k_0$ refers to the trajectory spline start. The result is a slightly shifted execution of the trajectory, while the original characteristic of the trajectory is preserved (${\max(\Delta d) = \SI{5}{\centi\meter} \ll }$ environment scale). Steps 5) to 8) are repeated for each data augmentation.
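As a small sketch (the indexing convention is our own reading of the scheme), the shifted start offsets can be generated as:
\begin{verbatim}
def augmented_start_offsets(v_start, f=5.0, n_aug=15):
    # Start offsets along the spline for data augmentation: the initial
    # placement is shifted n_aug times by delta_d / n_aug, where
    # delta_d = v(k_0) / f is at most 5 cm in our environments.
    delta_d = v_start / f
    return [j * delta_d / n_aug for j in range(1, n_aug + 1)]
\end{verbatim}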
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\linewidth]{figures/generalization_2}
\caption{
\textbf{a)} User A demonstrated a distinct speed profile \textbf{(top)} when facing the robot start position in the room environment.
It was successfully adapted by the learned controller \textbf{(bottom)}.
Furthermore, we tested the ability for generalization of the learned controller threefold by showcasing state configurations not covered by the demonstration data:
\textbf{b)} When the robot starts at a random position in the environment, its navigation behavior still reflects the characteristics of the trajectory from the user demonstrations (cf. Fig.~\ref{fig:approaches}a).
\textbf{c)} Even when its goal is randomly placed in the room, the robot exhibits the distinct user preferences.
\textbf{d)} The user's position and orientation was altered to non-demonstration configurations. When the human is obstructing the robot's path while facing the wall, the robot traverses on a straight path behind the human. In all other cases, a distinct distance is kept to the human, as demonstrated by both users.
This shows nicely how the navigation agent improved beyond the limits of the demonstration data provided.
For a legend, please refer to Fig.~\ref{fig:approaches}.
\label{fig:generalization}
}
\label{fig:training}
\end{figure*}
\subsection{Successful Demonstrations}
Reinforcement learning with demonstrations works best when demonstrations are successful, i.e., lead to the goal state. We therefore end each demonstration trajectory with the goal state and hence a positive reward. Even if the goal position is not at the exact end of the trajectory, the goal is retroactively moved to the end of the demonstration trajectory.
\subsection{Value of Demonstration Data}
\label{sec:demo_reward}
To boost the value of demonstration-like behavior for the critics during learning, we exclusively provide the goal-reaching reward on the demonstration part of the batch, see \eqref{eq:reward_goal}.
The motivation behind this is that the agent should navigate on states $s_t$ that are as similar as possible to the states of the demonstrated trajectories, ideally recovering to those whenever useful.
To maximize return, however, the agent generally tries to navigate towards the goal with as few state transitions as possible (due to the discount value~$\gamma$), possibly disregarding demonstrated user preferences.
The resulting behavior corresponds to shortest-path trajectories with maximum speed while barely avoiding the human, promising a faster and higher return $R$.
The demonstration state value boost counteracts this unwanted effect, since the agent is encouraged to follow state transitions from the demonstration data due to their always \textit{higher} return.
\subsection{Training}
\label{sec:training}
We initialize the robot, human, and goal position either around the position from the demonstration configuration with probability $p_\text{env}$ or randomly in the environment to explore the entire state space with probability $(1-p_\text{env})$.
Training starts with pre-initialization of the experience buffer by executing $\num{5e+4}$ randomly sampled actions.
Subsequently, we train for 800~epochs.
Each epoch consists of 5000 environment interactions, while the actor and critic networks are updated every 5 interactions.
Each epoch ends with 10 evaluation episodes.
An episode denotes the trajectory roll-out from initial robot placement until one of the end criteria is satisfied.
We train for each user and environment individually to learn context sensitive controllers.
An overview on all relevant and experimentally obtained training parameters can be found in \tabref{tab:training_settings}.
We found it beneficial for the training performance to adjust the balancing factors to $\lambda_{\text{RL}}= \lambda_{\text{BC}} = \num{0.5e+1}$ at epoch 350, and to reduce the actor's learning rate to $l_a = \num{1e-5}$ at epoch 650.
\begin{table}[!b]
\centering
\caption{Notations and Training settings. \label{tab:training_settings}}
\begin{tabularx}{\linewidth}{llX}
Notation & Value & Description \\
\hline
$p_\text{env}$ & 0.25 & Placement probability: room vs. start position\\
$N_\text{ep}$ & 300 & Maximum number of steps per episode\\
$B_{E}$ & \num{1e+6} & Experience replay buffer size\\
$l_a$ & \num{1e-4} & Learning rate of actor\\
$l_c$ & \num{8e-4} & Learning rate of critic\\
$\gamma$ & 0.99 & Discount factor\\
$\sigma_{\epsilon_\pi}$ & 0.2 & Std. deviation of exploration noise $\epsilon_\pi$ \\
$\sigma_{\epsilon_\theta}$ & 0.05 & Std. deviation of target policy noise $\epsilon_\theta$ \\
$\lambda_{\text{RL}}$ & 10/3 & Weighting factor of RL gradient on actor\\
$\lambda_{\text{BC}}$ & 20/3 & Weighting factor of BC loss gradient on actor\\
\hline
\end{tabularx}
\end{table}
\section{Experimental Evaluation}
\label{sec:exp}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/survey_all}
\caption{User study survey results of both the demonstration and evaluation session. \textbf{a)} The demonstration interface was predominantly appreciated and experienced intuitive by the participants. \textbf{b)} Evaluation: In virtual reality, both the Dynamic Window Approach and the social cost model were outperformed by our personalized controller in various aspects. \textbf{c)} On the real robot, our novel personalized controller was perceived predominantly positive as well. The plot bar's positions are aligned to the neutral score~(3) to indicate overall rating.
\label{fig:survey_all}}
\end{figure}
This section highlights the results of our user study and provides a qualitative and quantitative analysis of the learned personalized navigation controller.
\subsection{User Study}
We conducted a user study with 24 non-expert participants~(13 male, 11 female) to i) record individual navigation preferences (demonstration data), ii) evaluate the navigation behavior learned by our personalized controller, and iii) evaluate the presented VR demonstration interface.
Participants attended two appointments, the first being the demonstration session and the second being the evaluation session.
In the user study section, the values in brackets refer to the mean survey-scores (1-5) and their standard deviation.
\subsubsection{Demonstration Session}
During the demonstration session, trajectories in both environments (corridor and room) were recorded, see Fig.~\ref{fig:approaches}a.
Each environment featured four position-orientation pairs (color-coded) for the participant.
For each pair, between three and five trajectories were recorded. The total time investment was about 20 min.
After the recording session, the participants were asked about their experience with the VR demonstration interface.
The survey questions and results are shown in Fig.~\ref{fig:survey_all}a).
Participants predominantly experienced comfortable interactions with the simulated robot ($\num{4.6 \pm 0.1}$) and found drawing trajectories with our interface very intuitive ($\num{4.5 \pm 0.1}$).
Also, no participants disliked our demonstration environments while the majority liked it \textit{very much} ($\num{4.6 \pm 0.1}$).
\subsubsection{Evaluation Session}
During the second session, our personalized navigation approach was evaluated against two approaches in virtual reality: The Dynamic Window Approach (DWA) \cite{fox_dynamic_1997} using the ROS \textit{move\_base} package \cite{quigley_ros_2009} in combination with a 2D lidar sensor, and a social cost model~(SC) based on the configuration of~\cite{kollmitz_time_2015}.
Each navigation approach was shown in VR (order: SC $\rightarrow$ DWA $\rightarrow$ Ours) in both environments for all four position-orientation pairs (cf. Fig.~\ref{fig:approaches}b-d), followed by an evaluation survey (cf. Fig.~\ref{fig:survey_all}b). Potential ordering effects cannot be completely ruled out. Participants were unaware of the presented approach types.
Pairwise Bonferroni-corrected Wilcoxon signed-rank tests indicated that our personalized approach significantly outperformed both the SC and DWA navigation on all three measures: comfort (Q1), unpleasant closeness (Q2), and preference (Q3) (see \tabref{tab:survey_statistics}).
No significant differences were measured between SC and DWA.
\subsubsection{Real Robot Evaluation}
Our personalized controller was demonstrated on the real robot (room environment) to investigate the participants' transition experience from the simulated to the real robot.
The real robot evaluation was also complemented by a survey, see Fig.~\ref{fig:survey_all}c). As in~VR, the navigation of the real robot was predominantly experienced comfortable ($\num{4.5 \pm 0.1}$) and participants saw their preferences mostly reflected ($\num{4.3 \pm 0.1}$). Furthermore, the transition from the simulated robot experience in~VR to the real robot was mostly experienced as \textit{very natural} ($\num{4.5 \pm 0.1}$).
\begin{table}[!b]
\centering
\caption{Wilcoxon signed-rank tests on mean scores of all approaches \label{tab:survey_statistics}}
\begin{tabularx}{\linewidth}{XXXX}
Question & Ours - SC & Ours - DWA & SC - DWA\\
\hline
Q1: comfort & $z = \num{-4.17}^*$ & $z = \num{-4.01}^*$ & $z = \num{-1.81}$ \\
Q2: closeness & $z = \num{-4.29}^*$ & $z = \num{-4.2}^*$ & $z = \num{-3.61}$ \\
Q3: preference & $z = \num{-4.01}^*$ & $z = \num{-4.06}^*$ & $z = \num{-1.97}$ \\
\hline
\multicolumn{4}{>{\hsize=\dimexpr4\hsize+4\tabcolsep+\arrayrulewidth\relax}X}{Note that statistical significance was always $p<0.001$, as marked with *. All other comparisons did not reach statistical significance.}
\end{tabularx}
\end{table}
\subsection{Qualitative Navigation Analysis}
Fig.~\ref{fig:approaches}a shows demonstration data from two participants in both environments.
In the room environment, the preference of participant A is a smooth curve around their position, while the robot drives in their field of view when approaching from either side.
Interestingly, participant B's preference is a wall-following robot that navigates at higher distance to the human, compared to participant A.
Fig.~\ref{fig:approaches}b shows trajectories of the learned navigation behavior. The learned policy clearly reflects the characteristics of the demonstration trajectories. Furthermore, the robot adjusts its navigation trajectory according to the human orientation. For user A, it learned to traverse in the user's field of view, compare yellow orientation and trajectories. In participant B's demonstration, trajectories from a single position-orientation pair traverse both in front and behind the participant. Here, no specific side preference is given and the controller reproduces trajectories mainly on one side.
Besides trajectory shape, users demonstrated speed profiles along the demonstration trajectories. As an example, Fig.~\ref{fig:generalization}a) depicts how user A demonstrated a distinct speed profile when directly facing the robot start position in the room environment. After the robot slowly approached and passed by, it was allowed to accelerate. As can be seen, this behavior is picked up by the controller during training.
\subsection{Quantitative Navigation Analysis}
Fig.~\ref{fig:approaches}e-g compare quantitative properties of all three evaluation approaches and demonstrations from all 24 study participants.
The personalized navigation trajectories are on average longer than those by DWA or SC, while maintaining a higher minimal distance to the human.
Interestingly, the mean preferred minimum human distance gathered from the user demonstrations is similar in both environments, averaged at $\overline{d_H} = \SI{1.1 \pm 0.2}{\meter}$.
The path area is calculated between the trajectory and linear distance from start to goal. A higher path area reveals earlier deviation from the linear path in favor of personalization, as it is the case for our personalized controller, compare Fig.~\ref{fig:approaches}g. This clearly indicates that users prefer personalized navigation trajectories over shortest path navigation. Furthermore, the large standard deviation of the path area indicates a high trajectory shape variability among the participants.
\subsection{Generalization}
Finally, we tested the ability for generalization of the learned navigation policy, see Fig.~\ref{fig:generalization}b-d. First, the robot started at random positions in the environment not covered by the demonstrations. As can be seen, the controller still reflects the user preferences in the driving style (cf. Fig.~\ref{fig:generalization}b and Fig.~\ref{fig:approaches}a) by either approaching demonstration-like states, or reproducing demonstration-like navigation patterns at slightly different positions in the environment. When appropriate, the robot drives straight to the goal.
Second, we tested random goal positions in the environment~(cf. Fig.~\ref{fig:generalization}c). Interestingly, the robot first drives in accordance with the demonstrated preferences and only turns towards the goal once in its direct vicinity.
Finally, we tested altered human positions~(cf. Fig.~\ref{fig:generalization}d). Human position-orientation pairs not covered in the demonstration data encourage the controller to still keep a preference-like distance.
As demonstrated with these results, our framework can successfully learn a personalized navigation controller that improves beyond the limits of few demonstration trajectories.
\section{Conclusion}
\label{sec:conclusion}
To summarize, we presented both a learning framework and an intuitive virtual reality interface to teach navigation preferences to a mobile robot.
From a few demonstration trajectories, our context-based navigation controller successfully learns to reflect user-preferences and furthermore transfers smoothly to a real robot.
The conducted user study provides evidence that our personalized approach significantly surpasses standard navigation approaches in terms of perceived comfort. Furthermore, the study verifies the demand for personalized robot navigation among the participants.
Our results are a first important step towards personalized robot navigation, made possible by our interface and user study. As a next logical step, we will transfer the framework to more complex and diverse environments.
\bibliographystyle{IEEEtran}
\section{A Constant-Factor Approximation for Graphic Matroids}
\label{sec:graphic}
Given a Bernoulli instance from the matroid polytope, we show how to utilize it to obtain a constant-factor
non-adaptive algorithm for graphic matroids.
A graphic matroid is defined by an undirected graph $G$ with vertices $V$ and
edges $E$. The edges of the graph form the ground set, and the independent sets
$\ensuremath{\mathcal{I}}$ are forests, i.e. cycle-free sets of edges: $\ensuremath{\mathcal{I}} = \{I \subseteq E : I
\text{ contains no cycles}\}$; every spanning tree is a basis. In light of Lemma~\ref{lem:reduction}, each edge $i \in E$
has an associated weight $t_i$ and is active (non-zero) with probability $p_i$, where $\vec{p}\in\mathcal{P}_G$, the matroid polytope for the graphic matroid $G$. The objective is then to select a maximum weight
spanning tree.
As discussed in the previous section, we assume the edges arrive in order with
$t_i \le t_{i+1}$ for all $1 \leq i \leq n-1$; this order obtains the worst-case performance.
Our approach works by considering only a subset of the edges which has the properties that (1) a
significant fraction of the prophet's benchmark is accounted for and yet (2) with constant probability,
elements selected earlier in the ordering do not block later elements.
Specifically, we do this in two steps. First, we show there exists a way to
direct the edges such that every edge has at most a constant probability of
being spanned by edges {\it except} for those leaving the vertex into which it is directed. Then, we take a random
cut in the graph and allow our algorithm to select only edges crossing the cut
in one direction, ensuring that, with constant probability for each vertex, the edges entering it are considered while the edges leaving it are not.
\paragraph{Notation.} We use $b_i(S)$ to denote the probability that element
$i$ is ``blocked'' or \emph{spanned} by the active elements in a set $S$ with
respect to the active probabilities $\vec{p} \in \mathcal{P}_M$. For $\vec{p} \in
\mathcal{P}_M$, let $R_{\vec{p}}(S)$ be the random set containing $i \in S$
independently with probability $p_i$. We call this the ``active'' set.
Formally, $b_i(S) = \ensuremath{\mathrm{Pr}}[i \in \mspan(R_{\vec{p}}(S\setminus \{i\}))]$. Notice
that even if $i \in S$, we do not worry that it would span itself.
One convenience of using the ex-ante relaxation is that, so long as each
element is unblocked with constant probability, that is, $1 - b_i(S) \geq c$,
we obtain a constant-factor approximation.
\subsection{Directing the Graph}
\begin{lemma}
\label{lem:direction}
For $p \in \frac14\mathcal{P}_G$, there exists a way to orient the edges of $G$ such
that for each vertex the total probability mass of incoming edges is at most $1/2$.
\end{lemma}
\begin{proof}
Any vector from the graphic matroid polytope $\mathcal{P}_G$ is a convex combination of bases, i.e., spanning trees. The average vertex degree in any spanning tree is at most 2, so the average fractional degree in a convex combination of spanning trees is at most 2, and hence the average fractional degree under the scaled $\vec{p} \in \frac{1}{4} \mathcal{P}_G$ is at most $\frac{1}{2}$.
Let in-deg$(v)$ denote the fractional in-degree of $v$ in the constructed directed graph. That is, the sum of the ``active'' probabilities for the edges directed into $v$. We can find an orientation of the edges in the graph given probabilities $\vec{p}$ such that in-deg$(v) \leq \frac{1}{2}$ for all vertices $v$: because the average degree is at most $\frac{1}{2}$, there exists some vertex $v$ with degree at most $\frac{1}{2}$. Orient all of the edges incident to $v$ toward $v$, as in-deg$(v) \leq \frac{1}{2}$, and then recurse on the graph among the remaining vertices.
\end{proof}
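The recursive argument translates directly into a greedy peeling procedure; the following sketch (with our own data-structure choices) makes it explicit:
\begin{verbatim}
def orient_edges(vertices, edges, p):
    # Greedy peeling: repeatedly pick a vertex v of minimum remaining
    # fractional degree (at most 1/2 when p lies in (1/4) * P_G), orient
    # all of its remaining incident edges toward v, and remove v.
    # edges: iterable of frozensets {u, v}; p: dict edge -> probability.
    deg = {v: 0.0 for v in vertices}
    incident = {v: set() for v in vertices}
    for e in edges:
        for v in e:
            deg[v] += p[e]
            incident[v].add(e)
    orientation = {}                      # edge -> (tail, head)
    remaining = set(vertices)
    while remaining:
        v = min(remaining, key=lambda u: deg[u])
        for e in list(incident[v]):
            (u,) = e - {v}
            orientation[e] = (u, v)       # in-deg(v) collects mass <= 1/2
            deg[u] -= p[e]
            incident[u].discard(e)
        remaining.discard(v)
    return orientation
\end{verbatim}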
\begin{corollary}
\label{cor:inBlockage}
Given a graph as guaranteed by Lemma~\ref{lem:direction}, let $\gin(v)$ be the
set of incoming edges to vertex $v$ and let $\gout(v)$ be the outgoing edges. For
any $i$, let $v$ be the vertex such that $i \in \gin(v)$. Then for any $S
\subseteq E$,
\[
b_i(S\setminus\gout(v)) \leq \frac12 .
\]
\end{corollary}
\begin{proof}
Observe that for $i \in \gin(v)$, edge $i$ cannot be spanned by a set of active edges that contains no other edge incident to $v$. Thus, in order for $i$ to be spanned in $S\setminus\gout(v)$, at least
one edge in $\gin(v)$ other than $i$ must be active. By construction,
$\sum_{i\in\gin(v)}p_i \leq \frac12$. So, by the union bound, the probability that no other edge in
$\gin(v)$ is active is at least $\frac12$.
\end{proof}
\subsection{Random Cut}
Assume $\vec{p} \in\frac14\mathcal{P}_G$, and direct the graph as described above. Let
$A\subseteq V$ be a random set of vertices such that each vertex is included in
$A$ independently with probability 1/2, and let $\bar{A} = B = V \setminus A$. Let $\widehat{S}$ be the set of directed edges across the cut from $A$ to $B$, formally, $\widehat{S} = \{i: i \in \gout(u) \cap \gin(v), u \in A, v \in B \}$.
\begin{claim}
\label{claim:randomS}
\[
\mathbb{E}_{\widehat{S}}\left[\sum_{i\in\widehat{S}}p_it_i(1-b_i(\widehat{S}))\right] \geq
\frac18\sum_{i\in E}p_it_i.
\]
\end{claim}
\begin{proof}
\begin{align*}
\mathbb{E}_{\widehat{S}}\left[\sum_{i\in\widehat{S}}p_it_i(1-b_i(\widehat{S}))\right]
&= \sum_{(u,v)\in E}p_{uv}t_{uv}\, \ensuremath{\mathrm{Pr}}[(u,v) \in \widehat{S}] \,
\mathbb{E}\left[1-b_{uv}(\widehat{S})\middle| (u,v) \in \widehat{S} \right] \\
&= \sum_{(u,v)\in E}p_{uv}t_{uv}\, \ensuremath{\mathrm{Pr}}[u\in A] \ensuremath{\mathrm{Pr}}[v \in B]\,
\mathbb{E}\left[1-b_{uv}(\widehat{S})\middle| u\in A, v \in B \right] \\
&= \frac{1}{4} \sum_{(u,v)\in E}p_{uv}t_{uv} \,
\mathbb{E}\left[1-b_{uv}(\widehat{S})\middle| u\in A, v \in B \right] \\
&\geq \frac{1}{8} \sum_{(u,v)\in E}p_{uv}t_{uv} \\
\end{align*}
where the last inequality follows from Corollary~\ref{cor:inBlockage}: conditioned on $v \in B$, no edge of $\gout(v)$ crosses the cut from $A$ to $B$, so $\widehat{S} \subseteq E \setminus \gout(v)$, and monotonicity of $b_{uv}$ gives $b_{uv}(\widehat{S}) \leq \frac12$.
\end{proof}
\subsection{Final Algorithm}
For discrete random variables $\vec{X}$, our algorithm is constructive, albeit not
efficient, because we can compute $\vec{p}$ and $\vec{t}$ as guaranteed by
Lemma~\ref{lem:reduction}. (Of course, we can discretize continuous random variables to
arbitrary approximation.)
\begin{algorithm}[ht!]
\begin{algorithmic}[1]
\State Compute $\vec{p}$ and $\vec{t}$ as guaranteed by
Lemma~\ref{lem:reduction}.
\State Direct the graph as outlined in Lemma~\ref{lem:direction}.
\State Choose a cut $(A,B)$ uniformly at random; let $\widehat{S} = \{i: i \in \gout(u) \cap \gin(v), u \in A, v \in B \}$.
\State For all edges $i \in \widehat{S}$, set $T_i = t_i$.
\State For all edges $i \not\in \widehat{S}$, set $T_i = \infty$.
\end{algorithmic}
\end{algorithm}
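The random cut and the threshold assignment (Steps 3--5) are equally short to
express in code. The sketch below is ours and purely illustrative; the
orientation is assumed to come from a procedure such as the one sketched after
Lemma~\ref{lem:direction}, and the $t_i$ are the Bernoulli values of
Lemma~\ref{lem:reduction}.
\begin{verbatim}
import math, random

def set_thresholds(orientation, t, rng=random):
    """orientation: edge index -> (tail, head); t: edge index -> value."""
    vertices = {w for pair in orientation.values() for w in pair}
    A = {w for w in vertices if rng.random() < 0.5}  # one side of the cut
    # keep an edge only if it crosses the cut tail-to-head, i.e. A -> B
    return {i: (t[i] if (u in A and v not in A) else math.inf)
            for i, (u, v) in orientation.items()}

# toy instance: a triangle oriented into vertices 0 and 1
print(set_thresholds({0: (1, 0), 1: (2, 1), 2: (2, 0)},
                     {0: 3.0, 1: 2.0, 2: 5.0}))
\end{verbatim}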
Step 3 can be derandomized using the standard Max-Cut derandomization.
Our main result is that this algorithm gives a $\frac{1}{32}$-approximation.
\begin{theorem}
Let $G$ be a graphic matroid with independent edge weights $\vec{X}$.
Then
\[
32\,\mathbb{E}[{\normalfont\textsc{Alg}}(G,\vec{X})] \ge {\textsc{opt}}\xspace(G,\vec{X}).
\]
\end{theorem}
\begin{proof}
Let $\vec{p}\in \mathcal{P}_G$ and $\vec{t}$ be the probabilities and values
guaranteed by Lemma~\ref{lem:reduction}, and let $p'_i = \frac14 p_i$, so that $\vec{p'} \in \frac14\mathcal{P}_G$. Then our algorithm obtains ${\normalfont\textsc{Alg}} =
\mathbb{E}_{\widehat{S}}\left[\sum_{i\in\widehat{S}}p'_it_i(1-b_i(\widehat{S}))\right]$, which by our construction of $\widehat{S}$ and Claim~\ref{claim:randomS} gives
$${\normalfont\textsc{Alg}}
\geq \frac18\sum_{(u,v)\in E}p'_{uv}t_{uv} = \frac{1}{32} \sum _{i \in E}
p_i t_i \geq \frac{1}{32}\,{\textsc{opt}}\xspace(G,\vec{X}),$$
where the last inequality is the guarantee ${\textsc{opt}}\xspace \le \sum_i p_i t_i$ of Lemma~\ref{lem:reduction}.
\end{proof}
Our approximation factor is, of course, a factor of 16 worse than the dynamic thresholds of \citep{KleinbergW19} and a factor of 10.67 worse than the constrained non-adaptive thresholds of \citep{CHMS}. However, our guarantee holds for fully non-adaptive thresholds, and thus yields truthful mechanisms in multi-parameter mechanism design applications.
\section{Introduction} \label{sec:intro}
We study the classic prophet inequality problem introduced by \citet{KrengelS77}: $n$ items arrive online in adversarial order. A gambler observes the value of each item as it arrives, and in that moment, must decide irrevocably whether to take the item or pass on it forever. He can accept at most one item. The gambler knows in advance the (independent) prior distribution of each item's value. What rule should he use to maximize the value of the item he accepts? In expectation, how does the maximum value that the gambler can guarantee compare to the \emph{prophet}, who knows all of the realized item values in advance and selects the highest valued one?
The prophet inequality is a standard model for online decision making in a stochastic/Bayesian setting and has many applications, particularly to mechanism design and pricing. Over the last few years many variants of the basic single-item setting have been studied. One natural generalization is to allow the gambler to accept more than one item, subject to a feasibility constraint.
Formally, we can represent a feasibility constraint as a collection $\S$ of feasible sets.
Then the gambler and the prophet each select a feasible set of items $S \in \S$; in the single-item setting, the feasible sets are just the singletons. What is the gambler's best algorithm and guarantee?
A seminal result by \citet{Samuel-Cahn} showed that for the basic single-item setting, the online algorithm can obtain at least half of the prophet's value in expectation by determining a single threshold $T$ and accepting the first item with value exceeding $T$. Further, this approximation factor is tight: there exist instances where the gambler can guarantee no better than $\frac{1}{2}$ of the prophet's value. The threshold $T$ is selected such that the probability that the value of \emph{any} of the $n$ items exceeds the threshold is exactly $\frac{1}{2}$.
In 2012\footnote{Their original result appeared in STOC 2012 \citep{KleinbergW12}, but we will cite their journal version from 2019 for the remainder of the paper.}, \citeauthor{KleinbergW12} introduced an alternative approach for setting a single threshold: set $T = \frac{1}{2} {\textsc{opt}}\xspace$. Here, ${\textsc{opt}}\xspace$ is what the prophet can achieve, and this approach guarantees the same $\frac{1}{2}$-approximation for a single item.
\citeauthor{KleinbergW12} showed that this alternate approach generalizes also to \emph{matroid} prophet inequalities: where both the gambler and the prophet are restricted to accepting independent sets in a given matroid. In this setting, the approach of \citeauthor{KleinbergW12} still achieves a factor of $2$ approximation, matching the single item lower bound.
There is a significant qualitative difference between Samuel-Cahn's approach for the single-item prophet inequality and \citeauthor{KleinbergW12}'s approach for the matroid setting. In particular, the former computes a single threshold that is then used for the entire duration of the algorithm. The latter, on the other hand, recomputes thresholds after every decision. The threshold applied to the value of the second item, for example, depends on whether the first item was accepted by the algorithm or not, and in turn on the realized value of the first item. As a consequence, the KW algorithm is more complicated and involves more computation.
In this paper we address a natural problem exposed by this discussion: {\bf Can an online algorithm compete against the prophet using static thresholds under a matroid feasibility constraint?}
There is an inherent connection between prophet inequalities and Bayesian mechanism design. The original problem by \citeauthor{KrengelS77} was formulated as an optimal stopping problem; it was \citet{HKS07} that made the first connection to an economic welfare-maximization problem. \citet{CHMS} studied this connection much more deeply, defining a truthful class of simple mechanisms called ``order-oblivious posted pricings.''
They show that one can translate a prophet inequality for $n$ items with feasibility constraint $\S$ into an order-oblivious posted pricing for an $n$-unit setting with unit-demand buyers and a service feasibility constraint\footnote{A service feasibility constraint $\S$ says that the set of buyers that are served simultaneously must belong to some set $S \in \S$.} corresponding to $\S$; this mechanism is truthful and it yields a revenue guarantee that matches its prophet inequality guarantee.
When $\S$ is a matroid, by \citet{KleinbergW19}, the resulting mechanism yields a $\frac{1}{2}$-approximation to the optimal expected revenue.
This reduction from truthful mechanisms to prophet inequalities crucially
relies on the buyers being unit-demand. If we instead wish to translate the prophet inequality into a mechanism for a single constrained-additive buyer subject to feasibility constraint $\S$ over $n$ heterogeneous items---that is, the buyer is interested in buying
\emph{more} than one item---then adaptive thresholds will not translate to a truthful
mechanism. Instead, they correspond to offering each item one-at-a-time to the
buyer in any order, but prices change as a function of previous purchases.
This update will not generally preserve truthfulness; that is, although the
buyer may wish to purchase the first item offered when considered myopically,
he may be better off declining, in order to avoid price increases on later
items.
In order to fix this reduction for multi-parameter buyers beyond unit-demand, we must use only prophet inequalities with non-adaptive thresholds.
This is our primary motivation: constructing non-adaptive prophet inequalities in order to expand the realm of settings where prophet inequalities can be used for truthful mechanism design (see related work for an understanding of how integral they are as a tool in this field). However, non-adaptive prophet inequalities possess numerous other attractive properties as well. For welfare-maximizing mechanisms, non-adaptive prophet inequalities correspond to prices that are not only order-oblivious, but also \emph{anonymous}, using the same prices on each item regardless of the buyer. Additionally, since the thresholds (prices) are all computed before the items arrive and are never updated, there is much less computation required than for adaptive thresholds.
\subsection{Our Contribution and Roadmap}
We present the first non-adaptive thresholds that give a constant-factor prophet inequality for graphic matroids. We finish Section~\ref{sec:intro} with additional related work, and in Section~\ref{sec:prelim} we introduce mathematical preliminaries. In Section~\ref{sec:motiv}, we discuss why extending non-adaptive algorithms to graphic matroids is such a challenging objective, and why prior methods fail. Section~\ref{sec:tipi} presents the ex-ante relaxation to the matroid polytope: a reduction from a given prophet inequality instance to an alternative setting with convenient properties for designing algorithms. Expert readers can safely skip this section. Then, in this context, Section~\ref{sec:graphic} presents our construction for non-adaptive thresholds.
The ex-ante relaxation takes a given item $i$'s value distribution and converts it into a Bernoulli distribution: with probability $p_i$, item $i$ is ``active,'' that is, non-zero, and takes on value $t_i$. Then, the threshold for item $i$ is implicit (just the non-zero value $t_i$), and the only remaining questions are (1) with what probability should our algorithm consider this item, and (2) with what probability will the item be ``unblocked,'' or feasible to accept, when the gambler reaches it?
In order to obtain a constant-factor approximation, the probabilities of both selection and feasibility must be constant. In a graphic matroid, the elements are the edges and the independent sets are the forests---that is, any set of edges that does not contain a cycle. Depending on the given graph, an edge could be ``blocked'' by many different edges, and it is unclear how to group elements. Our main idea is to orient the graph to have good properties and then exploit them. For an edge $(u,v)$, suppose it is oriented into vertex $v$. Notice that if no other edges incident to $v$ are selected by the algorithm, then $(u,v)$ will certainly be feasible to accept---it cannot possibly form a cycle with the previously accepted edges. We orient the graph such that the edges directed into each $v$ have low enough probability mass that, with probability at least $\frac{1}{2}$, none is active. Then, our algorithm decides which edges to consider such that, with constant probability for every $v$, it considers the edges into $v$ and \emph{not} the edges out of $v$. Hence, with no edges out of $v$ and a good chance that no edges into $v$ will be active, any edge into $v$ can be accepted with constant probability. Our method for determining which edges to consider is simple: we take a random cut and consider only the edges in one direction across the cut. Since every edge is oriented into some vertex $v$, it will be both considered and unblocked with constant probability, as desired.
Unfortunately, this approach is quite specific to a graphic matroid. While some properties of the algorithm might extend to other matroids, we know that it cannot generalize to all matroids: \citet{FeldmanSZ19} prove a lower bound of $\Omega(\frac{\log n}{\log \log n})$ for prophet inequalities that use only non-adaptive thresholds for the class of general matroids. Their lower bound example is a gammoid.
We pose the following two (likely very difficult) open questions for understanding how far non-adaptive constant-factor approximations reach between graphic matroids and the gammoid lower bound.
\begin{openp}
What is the boundary within matroids for non-adaptive constant-factor approximations?
\end{openp}
\begin{openp}
How do approximations decay for non-adaptive thresholds as matroids become more complex?
\end{openp}
\subsection{Additional Related Work} \label{sec:related}
\paragraph{Non-Adaptive Thresholds.} As mentioned, the two predominant
approaches for achieving $\frac{1}{2}$-approximation in the single-item setting
are both non-adaptive~\citep{Samuel-Cahn,KleinbergW19}. \citet{CHMS} provide
non-adaptive $\frac{1}{2}$-approximations to the prophet for both $k$-uniform
and partition matroids; they also give a non-adaptive $O(\log r)$-approximation
for general matroids, where $r$ is the rank of the matroid. Recent work by
\citet{GravinW19} gives a non-adaptive algorithm that guarantees a
$3$-approximation to the prophet for online bipartite matching, which is the
intersection of two matroids. \citet{ChawlaDL20} optimize non-adaptive
thresholds for the $k$-uniform settings depending on the range that $k$ is in,
improving existing guarantees in the $k < 20$ regime. No non-adaptive
algorithms are known beyond uniform and partition matroids and the special case
of bipartite matching.
\paragraph{Constrained Non-Adaptive.} Another class of algorithms uses non-adaptive thresholds and a restricted feasibility constraint. That is, given a feasibility constraint $\S$ and the prior distributions for $n$ items, the algorithms set, prior to the arrival of all items, thresholds $T_i$ for each item and a restricted feasibility constraint $\S'$ such that $\S' \subset \S$. Then, an item is accepted if it exceeds its threshold \emph{and} is feasible with respect to the items already accepted and the subconstraint $\S'$. Notice that an item could exceed its threshold, be feasible with respect to previously accepted items and $\S$, and yet not be accepted because it is not feasible with respect to previously accepted items and $\S'$. In essence, imposing a subconstraint is equivalent to adaptively changing an item's threshold to $T_i = \infty$ if the item is not feasible with respect to the subconstraint.
Why is this different from the case when the gambler rejects an item that exceeds its threshold but is not feasible with respect to $\S$? We can interpret the gambler's value as constrained-additive with respect to $\S$, so the gambler does not have any marginal gain for items that are infeasible with respect to $\S$ and the items he has already accepted. Hence, he has no reason to take items with no positive marginal value to him. This is not a restriction on the algorithm, but rather a result of the gambler's valuation class.
\citet{CHMS} first produced a $\frac{1}{3}$-approximation to the prophet for graphic matroids using non-adaptive thresholds with a partition matroid subconstraint. In a very elegant approach, \citet{FeldmanSZ16} produce an Online Contention Resolution Scheme (OCRS) that yields a $\frac{1}{4}$-approximation for all matroids using non-adaptive thresholds and a subconstraint built cleverly from the structure of the given matroid.
\paragraph{Prophet Inequalities Beyond Matroids.} Prophet inequalities are well-studied and the literature is far too broad to cover; see \citep{Lucier17} for an excellent survey. Note, however, that dynamic algorithms yield good approximations to the prophet in settings reaching beyond matroids. In addition to matroids, the approach of \citep{FeldmanSZ16} also applies to matchings, knapsack constraints, and the intersections of each. Very recent work by \citet{DuttingKL20} gives an algorithm guaranteeing an $O(\log \log n)$-approximation for the very general setting of multiple buyers with subadditive valuations.
\paragraph{Direct Applications to Pricing.} The \citet{CHMS} reduction from order-oblivious posted pricings to prophet inequalities was only the first of many pricing applications of prophet inequalities.
\citet{FGL} consider the setting where buyers arrive online and face posted prices for items; non-adaptive anonymous prices are posted for each item equal to half its contribution to the optimal welfare. These prices guarantee a $\frac{1}{2}$-approximation to the optimal welfare for fractionally subadditive valuations. Note that this is a prophet inequality when there is only one item.
\citet{DuttingFKL17,DuttingFKL20} connect posted prices and prophet inequalities: they interpret the \citet{KleinbergW19} thresholds as ``balanced prices'' and derive an economic intuition for the proof. They extend these balanced prices to more complex settings, including a variety of feasibility constraints and valuation classes. The approach is to prove guarantees in the full-information setting, where the realized values are known in advance. Then, via an extension theorem, they prove that the results hold for Bayesian settings too, where distributions are known but values are unknown. Note that their balanced prices result in non-adaptive anonymous prices for all settings they consider \emph{except} for matroid feasibility constraints, where they remain adaptive and buyer-specific.
The recent work of \citet{DuttingKL20} also implements posted prices for buyers with subadditive valuations, but rather than balanced prices, provides a weaker sufficient condition to get a tighter approximation, and shows the existence of such prices through a primal-dual approach.
\paragraph{More Subtle Applications in Mechanism Design and Analysis.} Beyond direct applications to pricing, prophet inequalities have also been used to build more complex mechanisms and prove approximation guarantees. \citet{CM} design a two-part tariff mechanism to approximate optimal revenue for matroid-constrained buyers. Their benchmark is an ex-ante relaxation, and they use an OCRS \citep{FeldmanSZ16} to achieve a constant fraction of that revenue. \citet{CZ17} prove that the better of a sequential posted price mechanism (where each buyer can only buy one item) and an anonymous sequential posted price mechanism with an entry fee yields a constant-approximation to the optimal revenue for multiple fractionally subadditive buyers (and an $O(\log n)$-approximation for fully subadditive buyers). In a specific part of their analysis that examines the core of the core (a double core-tail analysis following the original one of \citep{LY13}), they use \citep{FGL}.
Work by \citet{CZ19} approximates the optimal profit---seller revenue minus cost---for constrained-additive buyers. Like \citep{CM}, they also construct their benchmark using the ex-ante relaxation and use OCRS to bound a term here as well.
Recent work by \citet{CaiGMZ20} studies gains-from-trade approximation in a two-sided market with a constrained-additive buyer and single-dimensional sellers---both the single-item prophet inequality of \citep{KleinbergW19} and an OCRS are used to inspire prices for \emph{both} the buyer and the sellers simultaneously, and to show that enough gains from trade are received to approximate one specific part of their benchmark.
\section{Where Straightforward Extensions Fail} \label{sec:motiv}
Both of the non-adaptive single-item approaches---the probabilistic approach of
\citet{Samuel-Cahn} and the $\frac{1}{2} {\textsc{opt}}\xspace$ approach of
\citet{KleinbergW19}---extend to the $k$-uniform matroid setting, in which any
set of size at most $k$ is feasible. We first see why these approaches work for
$k$-uniform matroids yet break down for graphic matroids. Then, we attempt to
use an idea for graphic matroids from \citet{CHMS} to develop a non-adaptive
algorithm, and again highlight where the approach breaks down.
We begin with the two single-item approaches generalized to $k$-uniform matroids. Note that we do not claim either as part of our contribution, although to the best of our knowledge, neither approach's generalized thresholds and proof is written down anywhere.
Formally, a $k$-uniform matroid is the matroid where, for any given ground set $N$, $\ensuremath{\mathcal{I}} = \{I \subseteq N : |I| \leq k\}$. Bear in mind that $k=1$ recovers the single-item case.
\paragraph{The Probabilistic Approach.} (Extension of the \citet{Samuel-Cahn} single-item algorithm to non-adaptive thresholds for the $k$-uniform matroid.) Determine the threshold $T$ by setting $\ensuremath{\mathrm{Pr}}[\text{$< k$ item values exceed $T$}] = \ensuremath{\mathrm{Pr}}[\text{$\geq 1$ slot empty}] = p = \frac{1}{2}$.
\begin{align*}
{\normalfont\textsc{Alg}}(\vec{X}, T) &\geq \sum _i \ensuremath{\mathrm{Pr}}[i \text{ not blocked}] \mathbb{E}[(X_i - T)^+] + \ensuremath{\mathrm{Pr}}[\text{$\geq k$ above T}] \cdot k T \\
&\geq \ensuremath{\mathrm{Pr}}[\text{$< k$ above T}] \sum _i \mathbb{E}[(X_i - T)^+] + \ensuremath{\mathrm{Pr}}[\text{$\geq k$ above T}] \cdot k T \\
&\geq p \mathbb{E} \left[ \max_{S: |S| \leq k} \sum_{i \in S} (X_i - T)^+ \right] + (1-p) k T \\
&\geq p \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} X_i - kT\right] + (1-p) kT \\
&= \frac{1}{2} \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} X_i \right] - \frac{1}{2} kT + \frac{1}{2} kT\\
&= \frac{1}{2} \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} X_i \right] = \frac{1}{2} {\textsc{opt}}\xspace(\vec{X}).
\end{align*}
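As a sanity check, the guarantee above is easy to verify numerically. The
following sketch (ours, for i.i.d.\ $\mathrm{Uniform}(0,1)$ values) computes
$T$ by bisection and estimates the ratio between the algorithm and the prophet
by simulation; the ratio should come out at roughly $\frac12$ or better.
\begin{verbatim}
import random
from math import comb

def prob_less_than_k_above(T, n, k):
    q = 1 - T   # Pr[X > T] for Uniform(0,1)
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k))

def find_threshold(n, k):
    lo, hi = 0.0, 1.0   # the probability above is increasing in T
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_less_than_k_above(mid, n, k) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n, k, trials = 10, 3, 50000
T = find_threshold(n, k)
alg = proph = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    alg += sum([x for x in xs if x > T][:k])   # first k items above T
    proph += sum(sorted(xs, reverse=True)[:k])
print(T, alg / proph)
\end{verbatim}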
For uniform matroids, a simple characterization based on size exists
for sets that do not span \emph{any} elements that have yet to arrive: they
need only be of size strictly less than $k$. This property does not hold for
more complex matroids.
\paragraph{The ``Thresholds as Constant-Fraction of Prophet'' Approach.} (Extension of \citet{KleinbergW19} single-item algorithm to non-adaptive thresholds for the $k$-uniform matroid; almost identical to those in \citet{CHMS}.) Set $T = \frac{1}{2k} \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} X_i \right] = \frac{1}{2k} {\textsc{opt}}\xspace(\vec{X})$.
\begin{align*}
{\normalfont\textsc{Alg}} &\geq \sum _i \ensuremath{\mathrm{Pr}}[i \text{ not blocked}] \mathbb{E}[(X_i - T)^+] + \ensuremath{\mathrm{Pr}}[\text{$\geq k$ above T}] kT \\
&\geq \ensuremath{\mathrm{Pr}}[\text{$< k$ above T}] \mathbb{E}[\sum _i (X_i - T)^+] + \ensuremath{\mathrm{Pr}}[\text{$\geq k$ above T}] kT \\
&\geq p \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} (X_i - T)^+\right] + (1-p) kT \\
&= p \left( \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} X_i \right] - kT\right) + (1-p) \frac{1}{2} \mathbb{E}\left[\max_{S: |S| \leq k}\sum_{i \in S} X_i \right] \\
&= \frac{1}{2} \mathbb{E}\left[\max_{S: |S| \leq k} \sum_{i \in S} X_i \right] = \frac{1}{2} {\textsc{opt}}\xspace(\vec{X}).
\end{align*}
In uniform matroids, any element is exchangeable for any other element. Then
so long as it contributes enough value, such as at least a constant fraction of
the average contribution to the optimal basis, there is no reason not to
accept an element. However, this does not hold for more complex matroids. A
particular element, even if extremely high value, may cause so many other
elements to be spanned that it is not worth taking.
One can imagine more nuanced extensions of either such approach---probabilistic thresholds for $i$ according to how many elements it might block, or value-based thresholds for $i$ based on the value of the sets it might block. However, any such extension would require a matroid-specific understanding of the relationship between elements, and element-specific thresholds.
Note that in addition to uniform matroids, both approaches easily extend to partition matroids by applying the approach to thresholds specific to the uniform matroid in each partition.
\paragraph{The Constrained Non-Adaptive Approach.} \citet{CHMS} construct non-adaptive thresholds for a graphic matroid that work \emph{so long as} the algorithm can enforce an additional subconstraint. Specifically, they cleverly partition the graph such that, so long as at most one edge is accepted from each partition, then an independent set is guaranteed. Then as items arrive, they are accepted if and only if they exceed their threshold \emph{and} are feasible with respect to the subconstraint---that is, no previous item from its partition has been accepted. This approach guarantees a $\frac{1}{3}$-approximation.
As discussed in the introduction, enforcing a subconstraint \emph{is} in fact adaptive. But, we \emph{could}, for example, randomly select one item from each partition in advance, defining our set for consideration $C$. Then, as items arrive, in each partition, we consider only the item in $C$, ignoring all other items from each partition. That is, we leave thresholds the same for all items in $C$ and \emph{a priori} set $T_i = \infty$ for all $i \not \in C$. This ensures that we only consider a set that complies with our feasibility constraint \emph{without} making any modifications online. Note that we can select items to be in the consideration set $C$ with whatever probabilities we choose, even in a correlated fashion---as long as we make them prior to items arriving---thus setting all thresholds to $T_i$ or $\infty$ in advance. Is there some clever way that we can implement our feasibility constraint, or any feasibility constraint, yet maintain a constant-factor approximation?
For the approach of CHMS, we might observe that a convenient property bounding the probability mass of each partition could allow us to form a probability distribution over the elements in each partition (i.e., place item $i$ in $C$ with probability $p_i/2$). However, this approach in fact reduces the probability too much, as it combines the probability that the element is active with the probability that it is considered, which is no longer constant. If we instead use a constant probability, it would sell to too low a quantile.
If such an approach \emph{were} to work, we could convert \emph{any} constrained non-adaptive matroid prophet inequality to a fully non-adaptive prophet inequality, as a greedy OCRS exists for all matroids and constructs constrained non-adaptive thresholds for all matroids \citep{FeldmanSZ16}. However, \citet{FeldmanSZ19} also prove a super-constant lower bound of $\Omega(\frac{\log n}{\log \log n} )$, so such guarantees cannot possibly go through for every matroid. Thus, an interesting direction for future work is to characterize \emph{when} converting constrained non-adaptive thresholds to a fully non-adaptive algorithm in this way maintains good guarantees.
\section{Preliminaries} \label{sec:prelim}
\begin{definition}
A \emph{matroid} $M = (N, \ensuremath{\mathcal{I}})$ is defined by a ground set of elements $N$ (with $|N| = n$) and a set of independent sets $\ensuremath{\mathcal{I}} \subseteq 2^N$. It is a matroid if and only if it satisfies the following two properties:
\begin{enumerate}
\item Downward-closed: If $I \subset J$ and $J \in \ensuremath{\mathcal{I}}$ then $I \in \ensuremath{\mathcal{I}}$.
\item Matroid-exchange: For $I, J \in \ensuremath{\mathcal{I}}$, if $|J| > |I|$ then there exists some $i \in J \setminus I$ such that $I \cup \{i\} \in \ensuremath{\mathcal{I}}$.
\end{enumerate} \end{definition}
\noindent We review several standard notions for matroids:
\begin{itemize}
\item The \emph{rank} of a set $\ensuremath{\mathrm{rank}}(S)$ is the size of the largest independent set in $S$: $\max \{|I| \mid I \in \ensuremath{\mathcal{I}}, I \subseteq S\}$.
\item The \emph{span} of a set $\ensuremath{\mathrm{span}}(S)$ is the largest set that contains $S$ and has the same rank as $S$: $\{i \in N \mid \ensuremath{\mathrm{rank}}(S \cup \{i\}) = \ensuremath{\mathrm{rank}}(S)\}$.
\item An element $i$ is \emph{spanned} by a set $S$ when $i \in \ensuremath{\mathrm{span}}(S)$.
\end{itemize}
We will informally use the language ``blocked'' (by a set $S$) to mean that an element is spanned (by the set $S$), and similarly ``unblocked'' to mean that an element is \emph{not} spanned (by the set $S$).
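For graphic matroids in particular, ``$i$ is spanned by $S$'' simply means
that the endpoints of edge $i$ are already connected by the edges of $S$,
which is easy to test with a union-find structure. The following Python helper
(ours, for illustration only) makes this concrete.
\begin{verbatim}
def is_spanned(edge, S, num_vertices):
    """True iff the endpoints of `edge` are connected by the edges in S."""
    parent = list(range(num_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for (u, v) in S:
        parent[find(u)] = find(v)
    u, v = edge
    return find(u) == find(v)

# edge (0, 2) is spanned by the path {(0, 1), (1, 2)}:
print(is_spanned((0, 2), [(0, 1), (1, 2)], 3))   # True
\end{verbatim}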
For any matroid $M$, we have the \emph{matroid polytope} $\mathcal{P}_M = $
$\{ \vec{p} \in \mathbb{R}_{\geq 0} ^N \mid \forall S \in 2^N, \sum _{i \in S} p_i \leq \ensuremath{\mathrm{rank}}(S) \}$. That is, $\mathcal{P}_M$ is the convex hull of the indicator vectors of the independent sets $\ensuremath{\mathcal{I}}$.
\begin{definition} A \emph{Matroid Prophet Inequality instance} $(\vec{X}, M)$ is given by a matroid $M =(N, \ensuremath{\mathcal{I}})$ and distribution of values $\vec{X}$ for the $n$ items that are the ground set $N$. $X_i$ denotes the random variable representing the value for item $i$. \end{definition}
For any given matroid prophet inequality instance, we let ${\textsc{opt}}\xspace(\vec{X}, M)$ denote the value of the prophet's set in expectation of the value of the items. Formally, ${\textsc{opt}}\xspace(\vec{X}, M) = \mathbb{E}\big[\max_{I\in \ensuremath{\mathcal{I}}} \sum_{i\in I}X_i\big]$. We omit the distributions $\vec{X}$ or matroid $M$ when it is obvious from context.
\begin{definition}A \emph{non-adaptive} threshold algorithm is given an instance $(\vec{X}, M)$ and determines thresholds $\vec{T}$. A threshold $T_i$ for each item $i$ is a function only of the random variables $\vec{X}$ (and, in particular, not of any realizations of $\vec{X}$ or of whether previous items have exceeded their thresholds). \end{definition}
For any non-adaptive thresholds $\vec{T}$, we let ${\normalfont\textsc{Alg}}(\vec{X}, M, \vec{T})$ denote the expected value obtained by the algorithm. Again, we omit the parameters when they are clear from context.
\section{The Ex-Ante Relaxation to the Matroid Polytope}
\label{sec:tipi}
Reducing a given matroid prophet inequality instance to one with Bernoulli distributions that sits within the matroid polytope is ``standard,'' and is used in \citep{FeldmanSZ16}. It's ``just'' an ex-ante relaxation to the matroid polytope, and expert readers can safely skip this section. However, we present the reduction in detail for comprehensiveness and ease of reading, as we did not find it elsewhere.
First, given arbitrary independent random variables $X_i$, we reduce the problem to
designing an algorithm for independent Bernoulli random variables
$X'_i$:
\begin{align*}
X_i' = \begin{cases}
t_i & \text{w.p. } p_i \\
0 & \text{w.p. } 1-p_i,
\end{cases}
\end{align*}
where $\vec{p} \in \mathcal{P}_M$.
Reducing to Bernoulli random variables gives two properties which greatly
simplify the design of an algorithm:
\begin{enumerate}
\item Each element of the ground set is either {\it active} or {\it
inactive}; and
\item There exists a worst-case total ordering of the elements.
\end{enumerate}
The worst-case ordering is the typical greedy ordering. Assume $t_i \le
t_{i+1}$; then greedily selecting elements in order (maintaining independence
and according to the rules of our algorithm) results in the lowest
weight outcome over all orderings. For the rest of the paper, we assume $t_i
\le t_{i+1}$ for $1 \le i < n$.
We now state our reduction formally.
\begin{lemma}
\label{lem:reduction}
Given a matroid $M = (N, \ensuremath{\mathcal{I}})$ and independent random weights $X_i$, $i \in
N$, there exist independent Bernoulli weights $X'_i$, where $X'_i = t_i$
w.p. $p_i$ and $\vec{p} \in \mathcal{P}_M$, such that
\[
{\textsc{opt}}\xspace(\vec{X}, M) \le \sum_ip_it_i.
\]
Furthermore, for any algorithm ${\normalfont\textsc{Alg}}$,
\[
{\normalfont\textsc{Alg}}(\vec{X}) \ge {\normalfont\textsc{Alg}}(\vec{X'}).
\]
\end{lemma}
\begin{proof}
First, rewrite the original optimal value as a sum over the ground set:
\begin{align*}
{\textsc{opt}}\xspace(\vec{X}, M) &= \mathbb{E}\big[\max_{I\in \ensuremath{\mathcal{I}}} \sum_{i\in I}X_i\big] \\
&= \sum_{i\in N} \ensuremath{\mathrm{Pr}}[i\in I^*] \cdot \mathbb{E}[X_i \,|\, i \in I^*],
\end{align*}
where $I^*$ is the maximum weight basis: $I^* = \ensuremath{\mathrm{argmax}}_{I \in \ensuremath{\mathcal{I}}}\sum_{i\in
I} X_i$. Now let $p_i = \ensuremath{\mathrm{Pr}}[i\in I^*]$---the ex-ante probability that $i$ is in the prophet's solution. Since $\vec{p}$ is a convex
combination of indicator vectors of bases, $\vec{p} \in \mathcal{P}_M$.
Now, observe that $\mathbb{E}[X_i\,|\,X_i\ge F_i^{-1}(1-p_i)] \ge \mathbb{E}[X_i | H]$ for
any event $H$ with $\ensuremath{\mathrm{Pr}}[H] = p_i$, where $F_i$ denotes the CDF of $X_i$. Let $t_i = \mathbb{E}[X_i\,|\,X_i\ge
F_i^{-1}(1-p_i)]$; then in particular $t_i \ge \mathbb{E}[X_i\,|\,i\in I^*]$. Hence
\[
{\textsc{opt}}\xspace(\vec{X}, M) \le \sum_ip_it_i.
\]
Finally, to see that ${\normalfont\textsc{Alg}}(\vec{X}) \ge {\normalfont\textsc{Alg}}(\vec{X'})$, we simply couple
$\vec{X}$ and $\vec{X'}$ so that $X_i \ge F_i^{-1}(1-p_i)$ if and only if $X'_i = t_i$; this is a valid coupling since both events have probability $p_i$. For any
ordering of the elements, the algorithm applied to the original instance
selects the same items as the algorithm applied to the Bernoulli instance, and each selected item is worth at least as much in expectation.
\end{proof}
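For intuition, when the maximum-weight basis is easy to compute (as it is for
graphic matroids, via Kruskal's algorithm), the quantities $\vec{p}$ and
$\vec{t}$ of Lemma~\ref{lem:reduction} can be estimated by straightforward
Monte Carlo sampling. The sketch below is ours and is not needed for the
proof; it simply makes the reduction tangible.
\begin{verbatim}
import random

def max_weight_forest(edges, weights, n):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    chosen = []
    for i in sorted(range(len(edges)), key=lambda j: -weights[j]):
        if weights[i] <= 0:
            break
        ru, rv = find(edges[i][0]), find(edges[i][1])
        if ru != rv:
            parent[ru] = rv; chosen.append(i)
    return chosen

def ex_ante(edges, n, samplers, trials=20000, rng=random):
    m = len(edges)
    counts = [0] * m
    draws = [[] for _ in range(m)]
    for _ in range(trials):
        w = [samplers[i](rng) for i in range(m)]
        for i in range(m):
            draws[i].append(w[i])
        for i in max_weight_forest(edges, w, n):
            counts[i] += 1
    p = [c / trials for c in counts]
    t = []
    for i in range(m):
        # mean of top p_i-quantile approximates E[X_i | X_i >= F^{-1}(1-p_i)]
        top = sorted(draws[i], reverse=True)[:max(1, int(p[i] * trials))]
        t.append(sum(top) / len(top))
    return p, t

# triangle with i.i.d. Uniform(0,1) weights: each p_i should be near 2/3
p, t = ex_ante([(0, 1), (1, 2), (0, 2)], 3, [lambda r: r.random()] * 3)
print([round(x, 2) for x in p], [round(x, 2) for x in t])
\end{verbatim}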
\section{Introduction} \label{sec:Introduction}
Among the main aims of the heavy-ion collision programs at present collider experiments such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) is to mimic the state that was created a few microseconds after the Big Bang. This state of matter, created at extremely high temperature and energy density, is called the Quark-Gluon Plasma (QGP) and is also believed to be present at the core of massive neutron stars. The interaction between quarks and gluons, which leads to the formation of the QGP, is governed by Quantum Chromodynamics (QCD).
Asymptotic freedom, which is an important pillar of QCD, suggests a confinement-deconfinement phase transition, during which the hadronic degrees of freedom change to partonic degrees of freedom. Whether the phase transition is first-order, second-order or a simple cross-over, and the search for the critical point, are some of the important questions that are of immediate interest in the particle physics community.
Since the formation of the QGP occurs on a very short time scale, it is not possible to probe it directly in experiment using current technologies. Therefore, we rely on the information carried by the final state particles to the detectors to gain insight into the medium created in the heavy-ion collision. Although we only measure kinematic quantities such as the pseudorapidity $\eta$, transverse momentum $p_{\rm{_T}}$~, energy $E$ etc.~of the final state particles in the experiment, a breadth of information about the medium can be extracted by studying these kinematic observables.
Another set of quantities that are not directly observable, but play an important role in understanding the nature of the medium and the equation of state, are the thermodynamical response functions. These quantities express how a system responds to a change in some external parameter such as pressure, temperature etc. Isothermal compressibility ($\kappa_T$), specific heat ($C_V$), and speed of sound ($c_s$) are some of the response functions that are of interest in high energy physics \cite{Sahu:2020swd, Basu:2016ibk, Khuntia:2016ikm}. The isothermal compressibility, $\kappa_{T}$, which reflects an important property of the medium, tells us how much the volume of the medium changes in response to a change in pressure at fixed temperature. This quantity can be used to study how close a medium is to being a perfect fluid. Perfect fluids are ideal fluids that possess no shear stress or viscosity and do not conduct heat. The $\kappa_{T}$ of a perfect fluid is zero, signifying that the fluid is incompressible. Although incompressible fluids do not exist in nature, recent estimates of $\kappa_{T}$, as in Ref.~\cite{Sahu:2020nbu}, are close to zero, which suggests that the medium created is almost a perfect fluid. A perfect fluid can also be characterized by the ratio of shear viscosity to entropy density $(\eta/s)$. Calculations based on the $AdS/CFT$ correspondence have put a universal lower bound of $1/4\pi$ on this ratio for strongly interacting quantum field theories \cite{Kovtun:2004de}. On the other hand, the value of $\eta/s$ has been found to be close to this lower bound in flow-harmonics calculations on the experimental data, indicating the near-perfect behaviour of the medium created in heavy-ion collisions \cite{ALICE:2011ab, Luzum:2008cw}.
As explained in Ref.~\cite{Bjorken:1982qr}, the speed of sound can quantify the nature of the medium, as it connects to and explains the hydrodynamical evolution of the matter produced in heavy-ion collisions. Fundamentally, the speed of sound also carries information about the equation of state, which relates pressure $(P)$ and energy density $(\epsilon)$. For a non-interacting massless ideal gas, the squared speed of sound $c_s^2$ is expected to be $1/3$ times the speed of light squared \cite{Hallman:2002qi}. Hence, a comparison with the massless ideal gas gives crucial information about the system dynamics and reveals the nature of the medium \cite{Deb:2019yjo}.
As already discussed, these quantities are not directly observable in experiment, and we extract them by utilizing the distributions of kinematic observables such as the transverse momentum ($p_{\rm{_T}}$) spectra, rapidity, angle of emission etc. The $p_{\rm{_T}}$-spectra carry sufficient information to study such quantities, as they are directly related to the energy of the system. Understanding the distribution of $p_{\rm{_T}}$~ is in itself a challenging task because in the low-$p_{\rm{_T}}$~ region the QCD coupling strength is very high, and hence we cannot apply perturbative QCD to explain the spectra. Several phenomenological models have been developed to tackle this issue, and the most widely accepted are the statistical thermal models. Further, statistical thermal models are well suited to analyze thermodynamical quantities because of the high multiplicities produced in high-energy collisions. We can utilize the statistical thermal models to extract thermodynamical quantities such as temperature, number density, energy density etc.
If we assume a purely thermal origin of the final state particles, the most natural choice to explain the energy distribution of particles is Boltzmann-Gibbs (BG) statistics \cite{Schnedermann:1993ws, Stodolsky:1995ds, Sharma:2018jqf}. However, it has been discussed in many works \cite{Jena:2020wno, Gupta:2020naz} that the BG distribution function deviates significantly from the experimental data because the spectra follow a power law rather than a simple exponential. Also, BG statistics fails to explain strongly correlated systems \cite{TSALLIS1995539} in which long-range correlations are present and entropy becomes non-additive and non-extensive \cite{lemanska2012nonadditive}. The existence of long-range interactions in high-energy heavy-ion collisions is discussed in Ref.~\cite{Alberico:1999nh}, motivating an exploration beyond the extensive BG regime to study the spectra. In 1988, C. Tsallis proposed a statistics \cite{Tsallis:1987eu, Biro:2020kve, Parvan:2015asa} introducing an additional parameter $q$, which takes care of the non-extensivity in the system. It is a thermodynamically consistent \cite{Cleymans:2011in, Conroy:2010wt}, generalized version of the Boltzmann distribution \cite{Tsallis:1998ws}. The power-law behavior of the Tsallis distribution makes it a good choice to study the $p_{\rm{_T}}$-spectra, and it has been shown to fit the spectra nicely, particularly in the low-$p_{\rm{_T}}$~ region. Although Tsallis statistics explains the data well in the low-$p_{\rm{_T}}$~ region, it starts to deviate from the experimental data as we move toward the high-$p_{\rm{_T}}$~ part of the spectra.
Particle spectra in heavy-ion collisions can be divided into two distinct regions: the low-$p_{\rm{_T}}$~ regime corresponds to particles produced in soft processes, whereas hard processes dominate particle production in the high-$p_{\rm{_T}}$~ region. The limitation of Tsallis statistics in explaining the particles produced in hard processes demands a framework that can consider the effect of both soft and hard processes on the particle spectra. Some modifications of Tsallis statistics \cite{Azmi:2015xqa, Cirto:2014sra, Wong:2013sca, Wong:2014uda} have been proposed to explain the high-$p_{\rm{_T}}$~ part of the spectra in heavy-ion collisions; however, more work is required in this direction to get the full benefit from the spectra. To explain both the hard and soft parts of the particle spectra in a consistent manner, a unified framework using the Pearson distribution was introduced in Ref.~\cite{Jena:2020wno}. It is a generalized form of the Tsallis distribution and has been shown to be thermodynamically consistent and backward compatible with Tsallis statistics within certain limits on its parameters \cite{Gupta:2020naz}.
In this work, we have calculated the isothermal compressibility and speed of sound for charged hadrons produced in heavy-ion collisions using the Pearson statistical framework. For this analysis, we have taken the experimental data of transverse momentum spectra for charged hadrons produced in $Pb-Pb$~ collisions at $\sqrt{s_{NN}}$ = $2.76$ TeV \cite{Abelev:2012hxa} and $5.02$ TeV \cite{Acharya:2019rys}, and $Xe-Xe$ collisions at $5.44$ TeV \cite{Acharya:2018hhy}.
\section{Methodology} \label{sec:methodology}
The basic thermodynamic quantities that are of interest in formulating the isothermal compressibility and the speed of sound include the energy density $\epsilon$, the number density $n$ and the pressure $P$. From standard thermodynamics, the number of particles $N$ in a system and its total energy $E$ can be calculated as:
\begin{equation} \label{eq1}
N = \sum_{i} f_{i}
\end{equation}
\begin{equation} \label{eq2}
E = \sum_{i} E_{i} f_{i}
\end{equation}
where $E_{i}$ is the energy of the $i^{th}$ state and $f_{i}$ is the corresponding distribution function. The standard replacement while going from summation to integration over small energy intervals is given as \cite{Cleymans:2011in}:
\begin{equation} \label{eq3}
\sum_{i} \to V \int \frac{d^{3}p}{(2 \pi)^{3}}
\end{equation}
So, the number density $n$ will be of the form:
\begin{equation} \label{eq4}
n = \int \frac{d^{3}p}{(2\pi)^{3}} \times f(E)
\end{equation}
and the corresponding energy density $\epsilon$ will be given as:
\begin{equation} \label{eq5}
\epsilon = \int \frac{d^{3}p}{(2\pi)^{3}} E \times f(E)
\end{equation}
Since the momentum distribution of the final state particles is fixed at kinetic freeze-out \cite{Deb:2020ezw}, the pressure of the system can be estimated from the moments of the energy distribution.
The pressure $P$ is given as:
\begin{equation} \label{eq6}
P = \int \frac{d^{3}p}{(2\pi)^{3}} \frac{p^{2}}{3E} \times f(E)
\end{equation}
Among all the quantities discussed above, one common factor is the energy distribution of the particles, $f(E)$. Energy is related to the transverse mass $m_T$ as $E = m_T \cosh(y)$, and the transverse mass is defined as $m_T = \sqrt{p_T^2 + m^2}$. So, the distribution of transverse momenta acts as a proxy for the energy distribution. Hence, a proper parameterization of the transverse momentum spectra is crucial to understand the thermodynamics of the system created in high-energy collisions. In the present work, we have used the Pearson statistical framework to explain the $p_{\rm{_T}}$-spectra and extract thermodynamical quantities such as the temperature $T$ and the non-extensive parameter $q$.
In the seminal work \cite{Pearson343}, Karl Pearson discussed a family of curves, based on the first four moments (mean, variance, skewness, and kurtosis), called the Pearson distribution. Before the introduction of the Pearson formalism in 1895, probability distributions were constructed based only on the mean and variance and did not take care of skewness and kurtosis. Pearson introduced a new probability distribution function in which skewness and kurtosis can also be adjusted along with the mean and variance. An important characteristic of this distribution is that, depending on the limits on its parameters, it reduces to different distribution functions such as the Gaussian, Student's t, Gamma distribution etc. The differential form of a Pearson distribution function, $p(x)$, \cite{pollard} is expressed as:
\begin{equation}\label{eq7}
\frac{1}{p(x)}\frac{dp(x)}{dx} + \frac{a + x}{b_0 + b_1 x + b_2 x^2} = 0
\end{equation}
where $a$, $b_{0}$, $b_{1}$, and $b_{2}$ are related to the first four moments of the distribution. By integrating this differential equation, one gets:
\begin{equation} \label{eq8}
p(x) = \exp \Bigg(- \int \frac{x+a}{b_{2}x^{2}+b_{1}x+b_{0}}dx\bigg)
\end{equation}
which, when the quadratic $b_{2}x^{2}+b_{1}x+b_{0}$ has real roots $-e$ and $-g$, can be written as
\begin{equation} \label{eq9}
p(x) = B\bigg(1+\frac{x}{e}\bigg)^{f}\bigg(1+\frac{x}{g}\bigg)^{h}
\end{equation}
where the exponents $f$ and $h$ are determined by $a$, $b_{0}$, $b_{1}$, and $b_{2}$, and $B$ is a normalization constant. The distribution function in the unified statistical framework, obtained from the above Pearson distribution, is given as \cite{Gupta:2020naz}:
\begin{equation} \label{eq10}
f_{i} = (Bf_{E})^{1/q} f_{Ta}
\end{equation}
where
\begin{equation} \label{eq11}
B = \frac{C}{(p_{0})^{n}} \bigg(\frac{T}{q-1}\bigg)^{\frac{-q}{q-1}}
\end{equation}
\begin{equation} \label{eq12}
f_{E} = \frac{1}{E} \bigg(1+\frac{E}{p_{0}}\bigg)^{-n}
\end{equation}
and
\begin{equation} \label{eq13}
f_{Ta} = \bigg[1+(q-1)\frac{p_{T}}{T}\bigg]^{\frac{-1}{q-1}}
\end{equation}
This formalism reduces to Tsallis statistics in the limit $n=-1$ and $p_{0}=0$.
Therefore, it can be considered a generalized version of the Tsallis function, describing both the soft- and hard-process contributions to the $p_{\rm{_T}}$-spectra. The equations for the average number of particles and the energy in the case of the unified formalism remain the same as in Tsallis statistics \cite{Gupta:2020naz}:
\begin{equation} \label{eq14}
N = \sum_{i} f_{i}^{q}
\end{equation}
and, the energy of the system will be:
\begin{equation} \label{eq15}
E = \sum_{i} E_{i} f_{i}^{q}
\end{equation}
Here, the additional power of $q$ comes from thermodynamic consistency. In the case of the unified formalism, the transverse momentum spectrum is given by:
\begin{equation} \label{eq16}
\frac{1}{2\pi p_{T}} \frac{d^{2}N}{dp_{T}dy} = B^\prime \bigg(1+\frac{p_{T}}{p_{0}}\bigg)^{-n}
\bigg[1+(q-1)\frac{p_{T}}{T}\bigg]^{\frac{-q}{q-1}}
\end{equation}
where $B^\prime = B \times \frac{V}{(2\pi)^{3}}$, $T$ is the temperature and $q$ is the non-extensive parameter. Here we consider the chemical potential to be zero, because at LHC energies the net-baryon number is extremely small in the central rapidity region. Thermodynamic parameters such as the temperature $T$ and $q$, together with the other quantities, can be obtained by fitting the transverse momentum spectra with the unified distribution, Eq.~(\ref{eq16}). These quantities extracted from the spectra can be used to calculate the response functions as discussed below.
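To make the extraction step concrete, the following minimal Python sketch fits Eq.~(\ref{eq16}) with \texttt{scipy.optimize.curve\_fit}. The spectrum here is synthetic and all numerical values are hypothetical placeholders, not experimental data; in an actual analysis one would load a measured spectrum together with its uncertainties.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def unified_spectrum(pt, Bprime, p0, n, T, q):
    """Unified form of Eq. (16): (1/2 pi pT) d^2N/(dpT dy)."""
    hard = (1.0 + pt / p0) ** (-n)                          # power-law (hard) part
    soft = (1.0 + (q - 1.0) * pt / T) ** (-q / (q - 1.0))   # Tsallis-like (soft) part
    return Bprime * hard * soft

# Hypothetical spectrum: pt in GeV/c, generated from the model itself.
pt = np.linspace(0.2, 5.0, 40)
data = unified_spectrum(pt, 1.0e3, 3.0, 8.0, 0.10, 1.05)
err = 0.05 * data

p_init = [1.0e3, 3.0, 8.0, 0.10, 1.05]                      # [B', p0, n, T, q]
popt, pcov = curve_fit(unified_spectrum, pt, data,
                       p0=p_init, sigma=err, absolute_sigma=True)
Bprime, p0, n, T, q = popt
print("T = %.3f GeV, q = %.3f, n = %.2f, p0 = %.2f GeV/c" % (T, q, n, p0))
\end{verbatim}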
\subsection{Isothermal compressibility}
Isothermal compressibility, in terms of the change in volume with pressure, is given as:
\begin{equation} \label{eq17}
\kappa_{T} = -\frac{1}{V} \diffp{V}P{T}
\end{equation}
Further, $\kappa_{T}$ is also related to the average number of particles and the multiplicity fluctuations; the relation is given as:
\begin{equation}
\big \langle(N - \langle N \rangle)^2 \big \rangle = var(N) = \frac{T \langle N \rangle ^2}{V}\kappa_T
\end{equation}
Also, the variance of the particle multiplicity $N$ is related to the derivative of the number density with respect to the chemical potential as:
\begin{equation}
\big \langle(N - \langle N \rangle)^2 \big \rangle = VT\frac{\partial n}{\partial \mu}
\end{equation}
From the above two equations, we can deduce the functional form of $\kappa_T$ \cite{Sahu:2020swd}:
\begin{equation} \label{eq20}
\kappa_{T} =\frac{\partial n/\partial \mu}{n^2}
\end{equation}
where the number density $n$, in the case of the unified formalism, is of the form:
\begin{equation} \label{eq21}
n = \int \frac{d^{3}p}{(2\pi)^{3}} \times \frac{B}{E}\bigg(1+\frac{E}{p_{0}}\bigg)^{-n}\bigg[1+(q-1)\frac{(E-\mu)}{T}\bigg]^{\frac{-q}{q-1}}
\end{equation}
and,
\begin{equation} \label{eq22}
\diffp{n}\mu = \int \frac{d^{3}p}{(2\pi)^{3}} \times \frac{q}{T} \times \frac{B}{E}\bigg(1+\frac{E}{p_{0}}\bigg)^{-n}\bigg[1+(q-1)\frac{(E-\mu)}{T}\bigg]^{\frac{1-2q}{q-1}}
\end{equation}
Using the above equations, we have estimated the values of $\kappa_{T}$ for heavy-ion collisions at different energies; an illustrative numerical sketch is given below.
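As an illustration of how Eqs.~(\ref{eq20})--(\ref{eq22}) are evaluated in practice, the following Python sketch performs the momentum integrals by quadrature at $\mu=0$. All parameter values are hypothetical placeholders standing in for the fitted spectral parameters, and the angular integration has already been carried out, $d^{3}p/(2\pi)^{3} \to p^{2}\,dp/(2\pi^{2})$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hypothetical fitted parameters: B, p0, power n, T, q, pion mass m, mu.
B, p0, npow, T, q, m, mu = 1.0, 3.0, 8.0, 0.10, 1.05, 0.139, 0.0

def dn(p):
    """Integrand of Eq. (21): number density."""
    E = np.sqrt(p * p + m * m)
    return (p * p / (2.0 * np.pi ** 2)) * (B / E) \
        * (1.0 + E / p0) ** (-npow) \
        * (1.0 + (q - 1.0) * (E - mu) / T) ** (-q / (q - 1.0))

def dn_dmu(p):
    """Integrand of Eq. (22): derivative of n w.r.t. chemical potential."""
    E = np.sqrt(p * p + m * m)
    return (p * p / (2.0 * np.pi ** 2)) * (q / T) * (B / E) \
        * (1.0 + E / p0) ** (-npow) \
        * (1.0 + (q - 1.0) * (E - mu) / T) ** ((1.0 - 2.0 * q) / (q - 1.0))

n_dens = quad(dn, 0.0, 50.0)[0]
kappa_T = quad(dn_dmu, 0.0, 50.0)[0] / n_dens ** 2   # Eq. (20)
print("kappa_T = %.4e (natural units)" % kappa_T)
\end{verbatim}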
\subsection{Speed of sound}
For a thermodynamic system at temperature $T$ and volume $V$, the squared speed of sound is given by,
\begin{equation} \label{eq23}
c_{s}^{2} = \diffp{P}\epsilon{s}
\end{equation}
where $P$ is the pressure and $\epsilon$ is the energy density of the system. Since at vanishing chemical potential both $P$ and $\epsilon$ are functions of the temperature alone, the chain rule reduces this to:
\begin{equation} \label{eq24}
c_{s}^{2} =\dfrac{\diffp{P}T}{\diffp{\epsilon}T}
\end{equation}
where
\begin{equation}
P = \int \frac{d^{3}p}{(2\pi)^{3}} \times B \times \frac{p^{2}}{3E^{2}}\bigg(1+\frac{E}{p_{0}}\bigg)^{-n}\bigg[1+(q-1)\frac{E}{T}\bigg]^{\frac{-q}{q-1}}
\end{equation}
and,
\begin{equation}
\epsilon = \int \frac{d^{3}p}{(2\pi)^{3}} \times B \bigg(1+\frac{E}{p_{0}}\bigg)^{-n}\bigg[1+(q-1)\frac{E}{T}\bigg]^{\frac{-q}{q-1}}
\end{equation}
By using the above equations, the squared speed of sound $c_{s}^{2}$ reduces to
\begin{equation} \label{eq27}
c_{s}^{2} = \frac{\int \frac{p^{2}d^{3}p}{3E^{2}} \bigg(1+\frac{E}{p_{0}}\bigg)^{-n} \bigg[\frac{T}{q-1}+E\bigg]^{\frac{1-2q}{q-1}}}{\int d^{3}p \bigg(1+\frac{E}{p_{0}}\bigg)^{-n} \bigg[\frac{T}{q-1}+E\bigg]^{\frac{1-2q}{q-1}}}
\end{equation}
We have used Eq.~(\ref{eq27}) to estimate the squared speed of sound in the medium created in heavy-ion collisions at three different energies; an illustrative numerical sketch follows.
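A corresponding Python sketch of the quadrature for Eq.~(\ref{eq27}) is given below, again with hypothetical placeholder parameters; the constant factors (the normalization $B$ and the angular $4\pi$) cancel between numerator and denominator.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T, q, p0, npow, m = 0.10, 1.05, 3.0, 8.0, 0.139   # hypothetical fit values

def weight(p):
    """Factor common to both integrands of Eq. (27), times p^2 from d^3p."""
    E = np.sqrt(p * p + m * m)
    return p * p * (1.0 + E / p0) ** (-npow) \
        * (T / (q - 1.0) + E) ** ((1.0 - 2.0 * q) / (q - 1.0))

num = quad(lambda p: weight(p) * p * p / (3.0 * (p * p + m * m)), 0.0, 50.0)[0]
den = quad(weight, 0.0, 50.0)[0]
print("c_s^2 = %.3f (massless ideal-gas limit: 1/3)" % (num / den))
\end{verbatim}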
\section{Results and Discussion} \label{sec:discussion}
\begin{figure}[h]
\includegraphics[width=8cm]{Kt_pearson.eps}
\caption{Isothermal compressibility over volume as a function of the average charged-particle multiplicity for Xe--Xe collisions at $\sqrt{s_{NN}} = 5.44$ TeV and Pb--Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV and $2.76$ TeV, using the unified formalism.}
\label{Fig1}
\end{figure}
\begin{figure}[h]
\includegraphics[width=8cm]{cs2_pearson.eps}
\caption{Squared speed of sound as a function of the average charged-particle multiplicity for Xe--Xe collisions at $\sqrt{s_{NN}} = 5.44$ TeV and Pb--Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV and $2.76$ TeV, using the unified formalism. The dotted line represents the theoretical value for an ideal gas system.}
\label{Fig2}
\end{figure}
For the purpose of this analysis, we have considered the transverse momentum spectra of charged hadrons produced in Pb--Pb collisions at $2.76$ and $5.02$ TeV and Xe--Xe collisions at $5.44$ TeV. The $p_{\rm{_T}}$~ range is restricted to $p_T<5~\mathrm{GeV}/c$, since we are studying bulk properties and the majority of high-$p_{\rm{_T}}$~ particles are produced in hard processes. This study presents a formalism to calculate $\kappa_{T}$ and $c_{s}^{2}$ using the non-extensive unified statistical framework discussed in Ref.~\cite{Gupta:2020naz}. We have estimated $\kappa_{T}/V$ and $c_{s}^{2}$ in the medium of charged hadrons as a function of the charged-particle multiplicity for different collision systems. The charged-particle multiplicity $(\big<\diff{N_{ch}}\eta\big>)$ corresponding to a particular centrality is taken from the experimental results of Refs.~\cite{Abelev:2013vea, Acharya:2019yoi, Acharya:2018hhy}. The temperature, the non-extensive parameter, and the other fitting parameters are obtained by fitting the transverse momentum spectra with the unified formalism, Eq.~(\ref{eq16}).
In Fig.~\ref{Fig1}, we have plotted the isothermal compressibility over volume calculated using Eqs.~(\ref{eq20}), (\ref{eq21}) and (\ref{eq22}). It is observed that the value of $\kappa_{T}/V$ decreases with increasing charged-particle multiplicity. At the highest charged-particle multiplicities, $\kappa_{T}/V$ reaches its lowest values, which suggests that the system moves toward near-ideal behaviour with increasing multiplicity. This trend is in line with expectations, as a higher multiplicity class contains a larger number of particles and hence a higher pressure is required to attain a small change in volume. Similar values of $\kappa_{T}/V$ for different collision systems are an indication of similar dynamics of the produced medium. It is worth mentioning here that an ideal fluid is incompressible, hence $\kappa_T = 0$, implying that the volume cannot be changed by applying pressure. For water, the corresponding value is several orders of magnitude higher than what is obtained in the case of heavy-ion collisions. The values of $\kappa_T/V$ obtained in the case of heavy-ion collisions using the unified formalism are in the range from $\sim10^{-3}$ to $10^{-5}~\mathrm{GeV}^{-1}$.
Since a proper estimation of the volume is required to extract the value of $\kappa_T~(\mathrm{fm}^3/\mathrm{GeV})$, different techniques have been developed and tested on diverse datasets to extract the volume parameter \cite{Braun-Munzinger:2014lba, Cleymans:2012ya, Abelev:2014pja, Tawfik:2019oct, Gardim:2020sma, Azmi:2015xqa, Chatterjee:2015fua, BraunMunzinger:2003zz}. Although the numerical values vary greatly between models, all of them are of the order of $10^3-10^4~\mathrm{fm}^3$, and hence utilizing the volume from these models gives a value of $\kappa_T$ of the order of $1-10~\mathrm{fm}^3/\mathrm{GeV}$. Therefore, the obtained value of $\kappa_T$ is very low compared to that of water and other materials, indicating that the compressibility of the system created in the heavy-ion collision is very close to that of an ideal fluid. Since a proper estimation of the volume is still an ongoing field of research, we did not select a particular model and instead present the values in terms of $\kappa_T/V$.\\
In order to develop a deeper understanding and to explore the possibility of a near-ideal behavior of the produced medium, we have also calculated the speed of sound for different collision systems. The speed of sound in a medium reveals the properties of the medium via the equation of state. In Fig.~\ref{Fig2}, we have plotted the squared speed of sound versus the charged-particle multiplicity for the three different energies, estimated using Eq.~(\ref{eq27}). It is observed that the value of the squared speed of sound is very close to $1/3$ (in units of the speed of light squared), and the value increases with increasing $\big<\diff{N_{ch}}\eta\big>$, suggesting that the system becomes more ideal at larger multiplicity. This observation complements the near-ideal behaviour already indicated by the measurement of the isothermal compressibility.
\section{Conclusion} \label{sec:Summary}
With the aim of understanding the system produced in heavy-ion collisions, we have made an attempt to study thermodynamic response functions such as the isothermal compressibility and the speed of sound. Since the transverse momentum spectra carry information about the system, we have analyzed the spectra of charged hadrons at three different LHC energies using the unified formalism and used the extracted thermodynamical parameters to study the isothermal compressibility and the speed of sound. The $p_{\rm{_T}}$-spectra of charged hadrons produced in Pb--Pb collisions at $2.76$ TeV and $5.02$ TeV and Xe--Xe collisions at $5.44$ TeV are taken with the $p_{\rm{_T}}$~ range up to $5~\mathrm{GeV}/c$. We have estimated the values of $\kappa_T/V$ and $c_s^2$ and studied their variation as a function of the charged-particle multiplicity. We observed that while the value of $\kappa_T/V$ decreases with increasing multiplicity, the value of $c_s^2$ approaches $1/3$.\\
These estimates of $\kappa_T/V$ and $c_s^2$ using the unified formalism indicate that the medium tends toward near-ideal behavior with an increase in the charged-particle multiplicity. In conclusion, the extracted values point toward the creation of a near-ideal medium in high-energy collisions, with the system approaching ideal behavior as we move from peripheral to central collisions.
\section{Acknowledgement}
R. Gupta would like to acknowledge the financial support provided by CSIR through fellowship number 09/947 (0067) 2015-EMR-1. We would also like to acknowledge that this work has been carried out using the computing facility of the EHEP lab in the Department of Physics at IISER Mohali.
\section*{Introduction}
Phosphine gas is not very pleasant. Pure phosphine is odourless, but technical-grade samples smell like rotting fish; the gas, with the chemical formula $\rm{PH_3}$, is toxic and spontaneously flammable.
On 14th September 2020, the Royal Astronomical Society made an official statement, coupled with a webinar, on the discovery of phosphine on Venus. At the heart of this announcement was the paper by \cite{greaves2020} titled `Phosphine gas in the cloud decks of Venus'. It reported the detection of phosphine in the temperate but hyperacidic clouds of Venus with an abundance of 20 ppb. The authors also made a very detailed study of various chemical pathways for the generation of phosphine in the given abundance by lightning, volcanoes or meteorites. They concluded that the observed abundance of phosphine could possibly be a result of biological processes. They also stated that a detection of this statistical significance needed to be backed up by further observations to validate the spectral features of phosphine and the conditions in the Venusian atmosphere.
Significant work in this context was done by \cite{bains2020}, who made a detailed study of all the possible chemical processes that can generate these amounts of phosphine. As a possible hypothesis, \cite{seager2020} proposed a Venusian aerial biosphere to explain the presence of phosphine.
\section{Biosignatures: Why Phosphine?}
Wikipedia defines biosignatures as:{ \it {A biosignature is any substance -- such as an element, isotope, or molecule -- or phenomenon that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes}}. The obvious way forward is to identify molecules present on the Earth that could work as biosignatures. An ideal biosignature should have living organisms as its sole source, and should be intrinsically strong and easily identifiable spectroscopically.
The element that comes first to mind is oxygen. However, as shown by \cite{mead}, oxygen is not the most suitable, as several false-positive mechanisms are possible in a variety of planet scenarios (see Fig.~\ref{o2}). This implies that the detection as well as the non-detection of certain molecules could lead to false positive biosignatures\footnote{A false positive is a set of non-biological processes that can mimic the detectable feature of a biosignature. False negative biosignatures occur where life may be present on another planet, but potential biosignatures are undetectable.}.
\begin{figure}[ht]
\caption{Oxygen as a biosignature: false-positive mechanisms for oxygen in a variety of planet scenarios. The main contributors to a spectrum of the planet's atmosphere are shown in large rectangles. The yellow and red circles represent molecules whose detection would confirm a false positive biosignature, while crossed-out molecules confirm a false positive biosignature if not detected. Cartoon adapted from \cite{mead} (Image Credit: Wikipedia).}
\label{o2}
\centering
\includegraphics[width=1.0\textwidth]{oxygen.png}
\end{figure}
In the Earth's atmosphere, methane reacts with oxygen very efficiently and is converted to carbon dioxide and water. However, the atmosphere is found to have a methane abundance of about 1 molecule per million, which can only be explained by a constant source of methane production (life) whose rate the oxidation to carbon dioxide and water cannot keep pace with. Hence, due to an imbalance between the production and destruction of methane, we still find some of it in our atmosphere. This is an indicator of life in an oxygen-rich environment.
Biogenic methane production is the main contributor to the methane flux coming from the surface of the Earth. Methane has a photochemical sink in the atmosphere, and hence can be detected only in the presence of biogenic methane production at a rate larger than its oxidation (Fig.~\ref{methe}). \cite{arney} stated that the detection of methane in the atmosphere of another planet can be a viable biosignature, especially if the host star is of $G$ or $K$ type.
\begin{figure}[h]
\caption{Methane as a viable biosignature on the Earth. (Image Credit: Wikipedia).}
\label{methe}
\centering
\includegraphics[width=0.7\textwidth]{methane_earth.png}
\end{figure}
A similar argument was used to study methane as a biomarker on Mars. However, the processes of generation and destruction of methane on Mars are not fully understood (see Fig.~\ref{meth}). This is still controversial, and a mass spectrometer measuring the isotopic proportions of carbon-12 to carbon-13 in methane is used to validate the bio-origin of methane \cite{zah}.
\begin{figure}[h]
\caption{Methane as a biosignature on Mars: Generation and Destruction (Image Credit: Wikipedia).}
\label{meth}
\centering
\includegraphics[width=0.7\textwidth]{methane.png}
\end{figure}
Phosphine is similar to ammonia, being made of a single atom of phosphorus and three atoms of hydrogen. On the Earth, phosphine has an abundance of about 1 part per trillion. In an oxidizing environment, phosphine will immediately react with oxygen to form phosphorous acid or phosphoric acid.
Phosphine is exclusively associated with life and is not generated by other natural processes involving the geology or atmosphere of our planet. Hence its presence in an oxidizing atmosphere indicates constant generation by biological processes at a rate exceeding that of oxidation, making it a reliable biomarker.
On the Earth, phosphine is associated with anaerobic ecosystems, and as such it is a potential biosignature gas on anoxic exoplanets. It has also been found on the gas giants Jupiter and Saturn. These planets have hydrogen-rich atmospheres, and the phosphine is generated in the high-temperature, high-pressure environment of the interior, mixing well with the gas layers due to convection. In the case of rocky planets, the surface provides a natural barrier stopping any phosphine produced at depth from mixing with the atmosphere. Hydrogen is more likely to combine with oxygen to form water, or with carbon to form methane. Hence it is very improbable for phosphorus to combine with hydrogen, given its very low abundance. The detection of phosphine on a temperate rocky planet is therefore a robust indicator of life\footnote{A high temperature could destroy the phosphine. Hence a process of regeneration of phosphine would be necessary for detection.}. On the Earth, phosphine could be produced directly by microbial reduction of more oxidized phosphorus species, or indirectly by microbial production of reduced phosphorus compounds, such as hypophosphite leading to $\rm{PH_3}$, implying a bio-origin.
An excess of phosphine would be due to an imbalance between the production and oxidation processes, which on rocky planets such as the Earth can only be sustained by life forms. If phosphine is produced through biotic, as opposed to abiotic, pathways, the discovery could imply a significant biomass in the Venusian atmosphere \cite{lingloeb}.
There are a few other mechanisms that can produce phosphine. Even if it were made in the lower layers, it would be destroyed before reaching the layers where it was detected. Volcanoes, lightning, etc.\ can produce it, but an appropriate hydrogen pressure would be required to preserve the phosphine. A document of more than 100 pages was produced to study all such mechanisms \cite{bains2020}. There is also strong ultraviolet (UV) absorption in the Venusian atmosphere. Venus was probably habitable in the past, until the greenhouse effect took over and made the planet uninhabitable.
Also, we should always keep in mind that life elsewhere could be very different from life on Earth from a biochemical perspective, especially given the variety of biodiversity on the Earth.
\section{Venus: The Evil Twin}
Venus has been called the evil twin of the Earth, since its size and mass are comparable to the Earth's. `Heaven and Hell' were the words used by Carl Sagan to describe the twins Earth and Venus. The atmosphere of Venus is composed of 96.5\% $\rm{CO_2}$ and 3.5\% $\rm{N_2}$, and its present surface conditions are very different from those on Earth.
Life on Venus has been discussed in the past. \cite{msagan,grin,cock} proposed that the conditions between the lower and middle atmosphere were conducive to (terrestrial) biology, while higher altitudes could possibly host microorganisms. Life in the Venusian cloud layers was also considered by \cite{schulz} due to the presence of conducive chemical and physical conditions, like the presence of sulfur compounds, carbon dioxide ($\rm{CO_2}$), and water, and moderate temperatures ($0-60^{\circ}$C) and pressures ($\approx 0.4 - 2$ atm).
In a hypothesis article, \cite{limaye} proposed that the lower cloud layer of Venus ($47.5 - 50.5$ km) is a region favorable to microbial life, as it has moderate temperatures and pressures ($\approx 60^{\circ}$C and 1 atm) and contains micron-sized sulfuric acid aerosols (Fig.~\ref{lim}).
\begin{figure}[h]
\caption{The potential for microorganisms to survive in Venus' lower clouds and contribute to the observed bulk spectra in a hypothesis paper \cite{limaye} (Image credit: Limaye et al, doi: 10.1089/ast.2017.1783.)}
\label{lim}
\centering
\includegraphics[width=10cm, height=7.0cm]{limaye.jpeg}
\end{figure}
The formidable Venusian surface has a blistering temperature of 470$^{\circ}$C and a pressure 92 times higher than that at the surface of the Earth. This is similar to the pressure found at a depth of 900 m in the Earth's oceans. This is an extremely hostile environment for life as we know it. The Russian Venera program was a series of space probes between 1961 and 1984 to study Venus. Ten probes successfully landed on the surface of the planet, including the two Vega program and Venera-Halley probes, while thirteen probes successfully entered the Venusian atmosphere. However, due to the hostile surface conditions, the probes could only survive for short periods, ranging from 23 minutes to two hours.
The surface of Venus is very hot ($\approx 400^{\circ}$C), but at a height of 45--60 km the temperature is about $30^{\circ}$C and the pressure is 1 atm. This is a possible region where the physical conditions are suitable for life as we know it. It is believed that there could have been similar (cooler and wetter) conditions on the Earth and Venus at an earlier time, and as conditions on Venus turned hostile, life could have migrated to this more temperate zone in the Venusian atmosphere. However, the chemical composition of the atmosphere is very different from that on the Earth, with clouds of sulphuric acid droplets. Hence life, if it exists in these conditions, would be very different from what we know it to be.
\section{Data: JCMT and ALMA}
Phosphine as a probable biomarker hence seemed to be a good candidate for detection on Venus.
The atoms in the phosphine molecule can make transitions between different rotational energy levels, corresponding to certain characteristic frequencies or wavelengths, by absorbing solar radiation. One such transition, between the $J=1 - 0$ levels, is at a frequency of 266944.51 MHz, i.e.\ a wavelength of 1.123 mm. Observations of this spectral line can confirm the presence as well as the abundance of phosphine, and probe the temperature and pressure of the region.
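As a quick consistency check on the quoted numbers (a check added here for the reader), the wavelength follows from $\lambda = c/\nu$:
\[
\lambda = \frac{c}{\nu} = \frac{2.998\times10^{8}~\mathrm{m\,s^{-1}}}{266944.51\times10^{6}~\mathrm{Hz}} \approx 1.123~\mathrm{mm}.
\]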
The James Clerk Maxwell Telescope (JCMT) is a submillimetre-wavelength radio telescope at the Mauna Kea Observatory in Hawaii, USA. The telescope sits near the summit of Mauna Kea at 4,092 m. Its primary mirror has a diameter of 15 m. It is the largest single-dish telescope operating at submillimetre wavelengths of the electromagnetic spectrum (far-infrared to microwave) (Fig.~\ref{jcmt}).
\begin{figure}[!t]
\caption{Model of James Clerk Maxwell Telescope (JCMT). Photographed at the Royal Astronomical Society's National Annual Meeting 2009. (Image Credit: Wikipedia).}
\label{jcmt}
\centering
\includegraphics[width=7cm, height=5cm]{jcmt.jpeg}
\end{figure}
Fig.~\ref{jcmts} shows the spectrum obtained from the JCMT. Spectral ripples\footnote{Spectral ripples are instrumental artefacts in the data that become apparent when observing an object as bright as Venus.} had to be fitted and removed to obtain the spectrum shown in the left panel, with the residual line present inside velocity ranges of $v = 8$ km/s (solid, black) and $v = 2$ km/s (dashed, orange). The data were binned for clarity into histograms on the x-axis; representative $1\sigma$ error bars are also shown. The right panel shows the adopted mid-range solution with $v = 5$ km/s (histogram), overlaid with the model of \cite{greaves2020} for 20 ppb abundance by volume. The solid red curve shows this model after processing with the same spectral fitting as used for the data.
\begin{figure}[!t]
\caption{Spectrum of PH$_3$(1-0) with JCMT \cite{greaves2020}}
\label{jcmts}
\centering
\includegraphics[width=9cm, height=5cm]{jcmts.jpeg}
\end{figure}
The Atacama Large Millimeter/submillimeter Array (ALMA) is an astronomical interferometer of 66 radio telescopes in the Atacama Desert of northern Chile, which observe electromagnetic radiation at millimeter and submillimeter wavelengths (Fig.~\ref{alma}). The array has been constructed at an altitude of 5,000 m on the Chajnantor plateau -- near the Llano de Chajnantor Observatory and the Atacama Pathfinder Experiment. The site is well suited due to its high elevation and low humidity. The array has much higher sensitivity and resolution than earlier submillimeter telescopes such as the single-dish JCMT, or existing interferometer networks such as the Submillimeter Array or the Institut de Radio Astronomie Millimétrique (IRAM) Plateau de Bure facility.
\begin{figure}[!t]
\caption{Atacama Large Millimeter/submillimeter Array (ALMA)(Image Credit: P. Horálek/ESO)}
\label{alma}
\centering
\includegraphics[width=7cm, height=5cm]{alma.jpg}
\end{figure}
Figure~\ref{almas} shows the PH$_3$ ($1 - 0$) spectrum of the whole planet, with $1\sigma$ errors of $0.11 \times 10^{-4}$ per 1.1 km/s spectral bin in the left panel. The right panel shows the spectra of the polar (histogram in black), mid-latitude (in blue) and equatorial (in red) zones on the planet.
\begin{figure}[!t]
\caption{Spectrum of PH$_3$ $(1-0)$ with ALMA \cite{greaves2020}}
\label{almas}
\centering
\includegraphics[width=9cm, height=5cm]{almas.jpeg}
\end{figure}
A comparison of the production and destruction of phosphine, and a better understanding of the temperature and pressure on Venus, are an integral part of this study.
The lifetime of phosphine in the layers above 80 km is about $10^3$ s, as it is destroyed there by UV radiation.
Near the base of the atmosphere, the molecules are destroyed by collisions in about $10^8$ s.
For the mid-regions there are no data on the lifetime; it is expected to be about $10^3$ years.
\cite{seager2020} proposed a hypothetical life cycle in the Venusian atmosphere, the aerial biosphere. Life could reside in the droplets; desiccated spores stay at higher levels of the atmosphere, and when they reach the temperate zone they become metabolically active and reproduce. They drift upwards and downwards as they evolve.
Life as we know it cannot survive in the presence of sulphuric acid and the conditions in the Venusian atmosphere. However, possible life-forms could have an unknown photochemistry, and hence the question is still open.
\section{The Controversy}
There are two big questions in this study. Firstly, does the signal detected correspond to phosphine and no other molecule? Secondly, if it is phosphine, is it caused by life?
To answer the first question, the data have been analysed multiple times by various research groups to verify the presence of phosphine. The next step would be to reobserve the signal repeatedly to see how it is distributed on Venus: does it change between day and night, with the seasons, or between different regions? {\em If} it is truly related to life, then we would expect some kind of variation. And {\em if} it truly is life, it won't produce only one such molecule; there would be many more such molecules being produced by the complete ecosystem.
The data obtained were very noisy, and various algorithms were used to reduce the data and extract the signal without creating an artificial one. Several observations, taken 18 months apart, were used to ensure that the signal was not spurious, applying different kinds of algorithms to identify the spectral line in multiple ways, in repeated efforts by teams taking in inputs from experts and referees. It took almost 3 years from the first detection at the JCMT to publication.
The authors of the discovery paper stated clearly that `Even if confirmed, we emphasize that the detection of PH$_3$ is not robust evidence for life, only for anomalous and unexplained chemistry.' \cite{greaves2020}.
Shortly after the announcement, the organizing committee of the International Astronomical Union (IAU) Commission F3 on Astrobiology released a statement criticising Greaves' team for the press coverage of their discovery. `It is an ethical duty for any scientist to communicate with the media and the public with great scientific rigour and to be careful not to overstate any interpretation which will be irretrievably picked up by the press,' adding that the commission `would like to remind the relevant researchers that we need to understand how the press and the media behave before communicating with them'.
The IAU statement was condemned by many, including the commission’s own members, and was subsequently retracted by the IAU Executive. The IAU Executive stated that the organizing committee of Commission F3 had `been contacted to retract their statement and to contact the scientific team with an apology'.
Shortly after that, \cite{vill2020} contested the paper by \cite{greaves2020}, stating that the observed PH$_3$ feature with the JCMT can be fully explained by plausible mesospheric SO$_2$ abundances ($\sim$100 ppbv as per the SO$_2$ profile), and that the identification of PH$_3$ in the ALMA data should be considered invalid due to severe baseline calibration issues. The team ended its abstract with the `suggestion' that Greaves' team retract its original paper -- seen by some as unduly aggressive -- and hence an apology was later made by Villanueva's team: `We agree that the sentence calling for retraction was inappropriate and we apologise for harm caused to the Greaves et al. team.'
There were a few further criticisms of the paper from other groups \cite{snellen2020, encre2020, thomp2020}.
\cite{regreaves2020,souza2020} responded to these papers and reanalysed the data to recover PH$_3$ in Venus' atmosphere with ALMA ($\approx 5\sigma$ confidence). They stated that the ALMA data are reconcilable with the JCMT detection ($\approx$20 ppb) if there is order-of-magnitude temporal variation, and that more advanced processing of the JCMT data is underway to check the methods. They concluded that both ALMA and the JCMT were working at the limit of observatory capabilities and hence new spectra should be obtained, since spectral ripples could potentially reduce the significance of real narrow spectral features.
In addition, a recent paper by \cite{mogul2020} re-examined data obtained by the Pioneer-Venus Large Probe Neutral Mass Spectrometer (LNMS) on NASA’s Pioneer Venus spacecraft to search for evidence of phosphorus compounds, and confirmed the detection of phosphine on Venus.
In the words of Lyman Beecher, `No great advance has ever been made in science, politics, or religion, without controversy'. We shall know.
\section{The Future}
As of now, the phosphine paper is the only set of results that has undergone peer review. Several papers have been submitted with reanalyses of the data, and since the findings are still under dispute, new data at different frequencies are required.
Phosphine is difficult to detect from the ground, but NASA’s airborne Stratospheric Observatory For Infrared Astronomy (SOFIA) telescope, which flies at an altitude of over 13.7 km on board a modified Boeing 747SP, could confirm or deny the finding.
Observations are also being planned in the infrared using the NASA Infrared Telescope Facility (NASA IRTF), a 3-meter telescope in Hawaii. Most telescopes are designed to look at faint sources; Venus is very bright, and hence methods need to be devised to adapt to this bright source, which would otherwise saturate the detectors. JWST can look for such signals on faraway planets; however, Venus is too bright for its detectors.
There will also be new missions making in-situ measurements of the Venusian atmosphere to help build more realistic models. The orbiter and atmospheric balloon mission Shukrayaan-1 by ISRO is under development and planned for 2023. The orbiter and lander Venera-D by Roscosmos is under development and planned for 2026. NASA has proposed a secondary payload, VAMP, for the Venera-D lander.
Small-scale Venus missions could be launched in the near future to confirm the presence of phosphine and measure its vertical distribution in the atmosphere. A whole series of missions could look for signs of life, and even life itself. A golden era of Venus exploration lies ahead.
The hypothesis of life in the clouds of Venus will gain scientific validity only after we have systematically ruled out all other chemical and geological processes that can explain the presence of phosphine on Venus, and have widened our explorations of this hostile, yet inviting, twin.
\section{Conclusion}
The search for extraterrestrial life has always been at the heart of the human quest concerning Life, the Universe and Everything. We have reached a stage where we are exploring planets of other solar systems and looking for habitability and signatures of life on them. Our Solar System is the immediate neighbourhood where we have started these explorations, looking for indicators of water, habitability and life. The quest is not an easy one; however, it is of great meaning and importance.
The exploration of Venus is one such attempt. Many more such attempts will be planned in the future, and this can be a small step in the right direction in us, the universe, trying to find answers about itself.
After all, as Carl Sagan once said, `extraordinary claims require extraordinary evidence' (ECREE). That's what we are looking for.
\section{\footnotesize{{\bf{Introduction}}}}
\hspace*{5mm} In \cite{S-K}, we have introduced and characterized for $\alpha>0$ and $1\leq p,q\leq\infty$ the generalized Dunkl-Lipschitz spaces $\wedge^k_{\alpha,p,q}
(\R)$ associated with the Dunkl operator with parameter $k\geq0$
$${\cal D}_kf(x)=f'(x)+k\dfrac{f(x)-f(-x)}{x},\,\,\,f\in C^1(\R).$$ We were interested in characterizing the functions $f\in\wedge^k_{\alpha,p,q}(\R)$ for $\alpha>0$ in terms of their $k$-Poisson transform and the second order $L^p_k$-modulus of continuity. It is natural to extend the theory of the spaces $\wedge^k_{\alpha,p,q}(\R)$ to all real $\alpha$. To get this extension we use the $k$-heat transforms, since they are better suited to the treatment of tempered distributions than the $k$-Poisson transforms. More precisely, we define the spaces $\wedge^k_{\alpha,p,q}(\R)$ for $\alpha\leq0$ as spaces of tempered
distributions $T$, belonging to an appropriate Lebesgue space, for which the $k$-heat
transform $G^k_t(T)$ of $T$ satisfies the condition
$$
\left\{\dint_0^1t^{q(n-\frac{1}{2}\alpha)}\|\partial^n_tG^k_t(T)\|_{k,p}^q\dfrac{dt}{t}\right\}^{\frac{1}{q}}<\infty,\;\;\;\;\;\;\;\mbox{if}\;\;\;1\leq
q<\infty$$ and $$\;\;\;\;\;\;\;\;\; \displaystyle\sup_{0<t\leq1}t^{n-\frac{1}{2}\alpha}\|\partial^n_tG^k_t(T)\|_{k,p}<\infty,\;\;\;\;\;\;\mbox{if}\;\;\;q=\infty,$$
where $n=\overline{(\frac{\alpha}{2})}$ and $\overline{\alpha}$ is the smallest non-negative integer larger than $\alpha$. The first goal of this paper is to study these spaces. As is well known, fractional integral operators play an important role in this theory. Here we use the Dunkl-Bessel potential ${\cal J}^k_{\beta}$, which we show to be a topological isomorphism from $\wedge^k_{\alpha,p,q}(\R)$ onto $\wedge^k_{\alpha+\beta,p,q}(\R)$, with $1\leq p,q\leq\infty$ and $\alpha$, $\beta\in\R$. Next, certain properties and continuous embeddings for $\wedge^k_{\alpha,p,q}(\R)$ are given.\\ Our second objective is to study the generalized Dunkl-Lipschitz spaces of $k$-temperatures (i.e., solutions of the Dunkl-type heat equation
$({\cal D}^2_k-\partial_t){\cal U}=0$) on the whole half-plane $\R^2_+=\left\{(x,t):x\in\R,t>0\right\}$, which we denote by ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$, $1\leq p,q\leq\infty$. In Theorem \ref{Th.VI'.1}, we prove some basic properties of the space ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$, the most important of which is the fact that the topological structure of the space ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ does not depend on the (Lipschitz) index $\alpha$. Thus, we may ask what relations there are between the generalized Dunkl-Lipschitz spaces $\wedge^k_{\alpha,p,q}(\R)$ and the generalized Dunkl-Lipschitz spaces of $k$-temperatures ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$. To answer this question we use the $k$-heat transforms. In Theorem \ref{Th.VII.1}, we establish that a $k$-temperature ${\cal U}$ belongs to ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ if and only if it is the $k$-heat transform of an element of $\wedge^k_{\alpha,p,q}(\R)$. Thus the spaces $\wedge^k_{\alpha,p,q}(\R)$ for $\alpha\leq0$, which consist of tempered distributions, can be realized as spaces of functions.\par Similar results have been obtained by T. M. Flett and M. H. Taibleson \cite{Flett1,Taibleson} in the framework of the classical case $k=0$. Later, R. Johnson \cite{Johnson}, adopting Flett's idea, defined a space of temperatures which is isomorphic to the Lipschitz space of Herz. His method leaned on a theory of Riesz potentials for temperatures. Additionally, for $\alpha>0$, the generalized Dunkl-Lipschitz spaces or Besov-Dunkl spaces have been studied extensively and characterized in different ways by many authors (see \cite{Ch-An-Sa-Si, Ch-Sa, Ch-Si, BLA, S-K, Lot1,Lot2}).
\par In this work, it is important to mention that the restriction to dimension one is due to the fact that the Dunkl translation operators in higher dimensions are not yet known to be bounded on $L^p_k$ except for $p=2$.
\par The organization of this paper is as follows. In Section 2, we recall some basic harmonic analysis results related to the Dunkl operator. In Section 3, we recall some properties of the $k$-heat transform of a measurable function. In Section 4, a semi-group formula for $k$-temperatures, which will be used frequently, is proved. In Section 5, the Dunkl-Bessel potential is defined and related properties are investigated. In Section 6, $\wedge^k_{\alpha,p,q}(\R)$ for real $\alpha$ is defined and its properties are obtained. In this section we also prove that ${\cal J}^k_{\beta}$ is a topological isomorphism from $\wedge^k_{\alpha,p,q}(\R)$ onto $\wedge^k_{\alpha+\beta,p,q}(\R)$, $\alpha,\beta\in\R$, and a variety of equivalent norms for $\wedge^k_{\alpha,p,q}(\R)$ are given. The remainder of this section is devoted to some properties and continuous embeddings for $\wedge^k_{\alpha,p,q}(\R)$. In Section 7, we define the space ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$, prove the equivalence of several norms on ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ and study some properties of this space. At the end, the isomorphism between ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ and $\wedge^k_{\alpha,p,q}(\R)$ is established.
\par In what follows, $B$ represents a suitable positive
constant which is not necessarily the same in each occurrence.
\section{\footnotesize{{\bf{Preliminaries in the Dunkl Setting on $\R$}}}}\par
\hspace*{5mm} In this section we state some definitions and
results which are useful in the sequel and we refer for more details to the articles
\cite{N-S,Dunkl3,Dunkl1,Dunkl2}, \cite{Jeu1}, \cite{Xu1} and \cite{Trim2}. We first begin by some notations.\\[4mm]
{\bf{Notations}} \begin{itemize} \item $C_0(\R)$ is the space of
continuous functions vanishing at infinity, equipped with the
usual topology of uniform convergence on $\R$. \item ${\cal
E}(\R)$ is the space of $C^{\infty}$-functions on $\R$, endowed with
the usual topology of uniform convergence of any derivative on compact subsets of $\R$.
\item $S(\R)$ is the space of $C^{\infty}$-functions on $\R$ which
are rapidly decreasing as well as their derivatives, endowed with the
topology defined by the semi-norms
$$\rho_{s,l}(\varphi):=\displaystyle\sup_{x\in\R,j\leq s}(1+x^2)^l|{\cal D}_k^j
\varphi(x)|,\,\,s,l\in\N.$$ \item $S'(\R)$ is the space of tempered
distributions on $\R$ which is the topological dual of $S(\R)$.
\end{itemize}
\par The Dunkl operator ${\cal D}_k$ with parameter $k\geq0$ is given by
$${\cal D}_kf(x) :=f'(x)+k\dfrac{f(x)-f(-x)}{x}\, ,\quad f\in
C^1(\R).$$ For $k=0$, ${\cal D}_0$ reduces to the usual derivative
which will be denoted by ${\cal D}$. The Dunkl intertwining
operator $V_k$ is defined in \cite{Dunkl1} on polynomials $f$ by
$$
{\cal D}_kV_kf=V_k{\cal D}f\,\,\mbox{and}\,\,V_k1=1.
$$
For $k>0,\,V_k$ has the following representation (see
\cite{Dunkl1}, Theorem 5.1)
\begin{equation}\label{e:I.1}
V_kf(x):=\dfrac{2^{-2k}\Gamma(2k+1)}{\Gamma(k)\Gamma(k+1)}\dint_{-1}^1
f(xt)(1-t^2)^{k-1}(1+t)dt.
\end{equation}
This integral transform extends to a topological automorphism of the space
${\cal E}(\R)$ (see \cite{Trim2} and \cite{N-S}). For $k\geq 0$ and
$\lambda\in\C$, the initial problem $$\left\{
\begin{array}{rcl}
{\cal D}_ku(x)&=&\lambda u(x),\,x\in\R,\\ u(0)&=&1,
\end{array}
\right.$$ has a unique analytic solution $u(x)=E_k(\lambda, x)$, called
Dunkl kernel \cite{Dunkl1} and given by
$$E_k(\lambda, x):=j_{k-\frac{1}{2}}(i\lambda
x)+\dfrac{\lambda x}{2k+1}j_{k+\frac{1}{2}}(i\lambda x),$$ where
$j_{\alpha}$ is the normalized Bessel function, defined
for $\alpha\geq-\dfrac{1}{2}$ by
$$j_{\alpha}(z):=\Gamma(\alpha+1)\dsum_{n=0}^{+\infty}\dfrac{(-1)^n}{n!}
\dfrac{(\frac{z}{2})^{2n}}{\Gamma(n+\alpha+1)} ,\,z\in\C.$$ We
remark that $E_k(\lambda, x)=V_k(e^{\lambda .})(x)$. Formula
(\ref{e:I.1}) and the last result imply that
\begin{equation}\label{e:I.2}
\mid E_k(\lambda, x)\mid\leq e^{\mid\lambda\mid\mid x\mid},\,\mid
E_k(\lambda,x)\mid\leq e^{\mid x\mid\mid{\cal R
}e\lambda\mid},\,\mid E_k(-iy,x)\mid\leq 1,
\end{equation}
for all $x,y\in\R$ and $\lambda \in\C$.
\\ For all $f$ and $g$ in $C^1(\R)$ with at least one of them even, we have
$$
{\cal D}_k(fg)=({\cal D}_kf)g+f({\cal D}_kg).
$$
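As an elementary check of this product rule (an illustration added here; it follows directly from the definition of ${\cal D}_k$), note that ${\cal D}_kx=1+2k$ and ${\cal D}_kx^{2n}=2nx^{2n-1}$; taking $f(x)=x$ and the even function $g(x)=x^{2n}$ then gives
$${\cal D}_k\,x^{2n+1}=({\cal D}_kx)\,x^{2n}+x\,{\cal D}_k x^{2n}=(2n+1+2k)\,x^{2n},$$
in agreement with a direct computation from the definition.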
For $f\in C^1_b(\R)$ and $g$ in $S(\R)$, we have
$$
\dint_{\R}{\cal D}_kf(x)g(x)|x|^{2k}dx=-\dint_{\R}f(x){\cal
D}_kg(x)|x|^{2k}dx.
$$
Hereafter, we denote by $L^p(\R,|x|^{2k}dx)$, $p\in[1,\infty]$, the space of measurable functions on $\R$ such that
$$\|f\|_{k,p}:=\left(\dint_{\R}|f(x)|^p|x|^{2k}dx\right)^{\frac{1}{p}}<+\infty,\,\,\,\mbox{if}\,\,\,
1\leq p<\infty,$$ and
$$\|f\|_{k,\infty}:=ess\!\displaystyle\sup_{x\in\R}|f(x)|<+\infty.$$
\par The Dunkl kernel gives rise to an integral transform, called
Dunkl transform on $\R$, which was introduced by Dunkl in
\cite{Dunkl2}, where already many basic properties were
established. Dunkl's results were completed and extended later on
by de Jeu in \cite{Jeu1}. \par The Dunkl transform of a function
$f\in L^1(\R, |x|^{2k}dx)$ is given by
$$\forall y\in\R,\,\,\,{\cal F}_k(f)(y):=c_k\dint_{\R}f(x)E_k(x,-iy)|x|^{2k}dx,$$
where $c_k:=\frac{1}{2^{k+\frac{1}{2}}\Gamma(k+\frac{1}{2})}$.
\par We summarize the properties of ${\cal F}_k(f)$ in the following proposition :
\begin{Prop} \cite{Jeu1}\label{Prop.I.1}
\par (i) For all $f\in S(\R)$, we have
$${\cal F}_k({\cal D}_kf)(x)=ix{\cal F}_k(f)(x),\,\,\,x\in\R.$$
\quad (ii) {\footnotesize\bf{Inversion formula :}} For all
$f\in L^1(\R,|x|^{2k}dx)$ such that ${\cal F}_k(f)$ belongs to
$L^1(\R,|x|^{2k}dx)$, we have
$$f(x)=\dint_{\R}E_k(x,iy){\cal F}_k(f)(y)|y|^{2k}dy\,\,\,\,a.e.$$
\quad (iii) {\footnotesize\bf{Plancherel's Theorem :}} The
Dunkl transform extends to an isometry of $L^2(\R,|x|^{2k}dx)$. In
particular, we have the following Plancherel's formula
$$\|f\|_{k,2}=\|{\cal F}_k(f)\|_{k,2},\,\,\,f\in L^2(\R,|x|^{2k}dx).$$
\end{Prop}
\begin{Def}
Let $f\in C(\R)$ (the space of continuous functions on $\R$) and
$y\in\R$. Then ${\cal T}^k_yf(x)=u(x,y)$ is
the unique solution of the following Cauchy problem
$$\left\{
\begin{array}{rcl}
{\cal D}_{k,x}u(x,y)&=&{\cal D}_{k,y}u(x,y),\\ u(x,0)&=&f(x).
\end{array}
\right.$$ ${\cal T}^k_y$ is called the Dunkl translation
operator.
\end{Def}
\begin{Rem}\label{Rem.I.1} In what follows we point out some remarks.\\
\begin{itemize}
\item The operator ${\cal T}^k_x$ admits the following integral
representation
\begin{equation}\label{e.I.3}
{\cal T}^k_yf(x):=d_k\left(\dint_0^{\pi}f_e(G(x,y,\theta))h^e(x,y,\theta)\sin^{2k-1}\theta
d\theta+\dint_0^{\pi}f_o(G(x,y,\theta))h^o(x,y,\theta)\sin^{2k-1}\theta
d\theta\right),
\end{equation}
where
$$d_k:=\dfrac{\Gamma(k+\frac{1}{2})}{\Gamma(k)\Gamma(\frac{1}{2})},\,\,\,G(x,y,\theta)=\sqrt{x^2+y^2-2|xy|\cos\theta},\,\,\,h^e(x,y,\theta)=
1-sgn(xy)\cos\theta,$$
$$h^o(x,y,\theta)=\left\{
\begin{array}{rcl}
\dfrac{(x+y)h^e(x,y,\theta)}{G(x,y,\theta)}&,&\mbox{if}\,\,\,(x,y)\neq(0,0),\\
0\hspace*{1cm}&,&\mbox{otherwise},
\end{array}
\right.$$
$$f_e(x)=\dfrac{1}{2}(f(x)+f(-x))\,\,\,\mbox{and}\,\,\,f_o(x)=\dfrac{1}{2}(f(x)-f(-x)).$$
\item There is an abstract formula for ${\cal T}^k_y$, $y\in\R$, given in terms of the intertwining operator $V_k$ and its inverse (see \cite{Trim2,N-S}).
It takes the form of
$${\cal T}^k_yf(x):=(V_k)_x\otimes(V_k)_y\left[(V_k)^{-1}(f)(x+y)\right],\,\,\,\,x\in\R,\,\,f\in{\cal E}(\R).$$
\item The Dunkl translation operators satisfy for $x,y\in\R$ the
following relations
$$
\begin{array}{rcl}
{\cal T}_x^kf(y)={\cal T}_y^kf(x)\quad ,\quad {\cal T}_0^kf(y)=f(y),\\
{\cal T}_x^k{\cal T}_y^k={\cal T}_y^k{\cal T}_x^k\quad ,\quad
{\cal T}_x^k{\cal D}_k={\cal D}_k{\cal T}_x^k.
\end{array}$$
\item For each $y\in\R$, the Dunkl translation operator ${\cal T}^k_y$ extends to a bounded operator on
$L^p(\R,|x|^{2k}dx)$. More precisely
\begin{equation}\label{e.I.4}
\|{\cal T}^k_yf\|_{k,p}\leq 3\|f\|_{k,p},\,\, 1\leq p\leq\infty.
\end{equation}
\item Unlike the classical case, ${\cal T}^k_y$ is not a positive operator in general (see \cite{Ros}); however, if $f$ is even, then ${\cal T}^k_yf(x)=d_k\int_0^\pi f(G(x,y,\theta))h^e(x,y,\theta)\sin^{2k-1}\theta d\theta$, which shows that ${\cal T}^k_yf(x)\geq0$ whenever $f$ is non-negative.
\item From the generalized Taylor formula with integral remainder (see \cite{Mourou}, Theorem 2 p. 349), we have for $f\in {\cal E}(\R)$ and $x,y\in\R$
\begin{equation}\label{e.II.Taylor}
\left({\cal T}^k_xf-f\right)(y)=\dint_{-|x|}^{|x|}\left(\dfrac{sgn(x)}{2|x|^{2k}}-\dfrac{sgn(z)}{2|z|^{2k}}\right){\cal T}^k_z({\cal D}_kf)(y)|z|^{2k}dz.
\end{equation}
\end{itemize}
\end{Rem}
\par Associated with the Dunkl translation operator ${\cal T}^k_y$ is the Dunkl convolution $f\ast_k g$ of two appropriate functions $f$ and $g$ on
$\R$, defined by
$$f\ast_k g(x):=\dint_{\R}{\cal T}^k_xf(-y)g(y)|y|^{2k}dy,\,\,\,x\in\R.$$ The Dunkl convolution preserves the main properties of the classical convolution which corresponds to $k=0$.\par For $S\in S'(\R)$ and $f\in S(\R)$, we
define the Dunkl convolution product $S\ast_kf$ by
$$S\ast_kf(x):=<S_y,{\cal T}^k_xf(-y)>.$$
\section{\footnotesize{{\bf{The $k$-Heat
Transforms of a Function}}}} \par We recall
some properties of the $k$-heat transforms of a measurable
function $f$ and we refer for more details to the survey \cite{N-A-S} and the references therein.
\par - For $t>0$, let $F^k_t$ be the function defined by
$$F^k_t(x):=(2t)^{-(k+\frac{1}{2})}e^{-\frac{x^2}{4t}}$$
which is a solution of the Dunkl-type heat equation $({\cal D}^2_k-\partial_t){\cal U}=0$ on the half-plane $\R^2_+$\footnote{$\R^2_+=\left\{(x,t):x\in\R,t>0\right\}$}. The function $F^k_t$ may be
called the heat kernel associated with the Dunkl operator, or the $k$-heat kernel; it has the following basic properties :
\begin{Prop}\label{Prop.II.1}
For all $t>0$ and $n,\,m\,\in\N$, we have
\\ (i) ${\cal F}_k(F^k_t)(x)=e^{-tx^2}$ and
$\int_{\R}F^k_t(x)|x|^{2k}dx=c_k^{-1}$.
\\ (ii) $\int_{\R}|{\cal D}_k^nF^k_t(x)||x|^{2k}dx\leq
B(k,n)t^{-\frac{n}{2}}$.
\\ (iii)
$\partial^m_tF^k_t(x)=t^{-m}R(\frac{x^2}{4t})F^k_t(x)$, where $R$
is a polynomial of degree $m$ with coefficients depending only on
$m$ and $k$.
\\ (iv) $\int_{\R}|\partial^m_tF^k_t(x)||x|^{2k}dx\leq
B(k,m)t^{-m}$ and $\int_{\R}\partial^m_tF^k_t(x)|x|^{2k}dx=0$.
\end{Prop}
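\begin{Rem}
A quick way to see the second identity in (i), given the first (a remark added here as a sketch): evaluating the Dunkl transform at the origin and using $E_k(x,0)=1$ gives
$$1=e^{-t\cdot 0}={\cal F}_k(F^k_t)(0)=c_k\dint_{\R}F^k_t(x)|x|^{2k}dx,$$
whence $\dint_{\R}F^k_t(x)|x|^{2k}dx=c_k^{-1}$.
\end{Rem}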
\begin{Def}
The $k$-heat transform of a smooth measurable function $f$ on $\R$ is given by
$$G^k_t(f)(x):=F^k_t\ast_kf(x),\,\,t>0.$$
\end{Def}
\begin{Th}\label{Th.II.1}
Let $f$ be a measurable bounded function on $\R$. Then,
\\ (i) $(x,t)\longmapsto G^k_t(f)(x)$ is infinitely differentiable on
$\R^2_+$ and it is a
solution of the Dunkl-type heat equation. Further, if
$n,m\in\N$, then for all $t>0$
$${\cal D}^n_kG^k_t(f)={\cal D}^n_kF^k_t\ast_kf\,\,\,\mbox{and}\,\,\,\partial^n_tG^k_t(f)=\partial^n_tF^k_t\ast_kf.$$
(ii) For all $s,t>0$ and $x\in\R$, we have
$G^k_{t+s}(f)(x)=\int_{\R}{\cal
T}^k_{-y}F^k_t(x)G^k_s(f)(y)|y|^{2k}dy$.
\\ (iii) If $f\in C_b(\R)$, then
$G^k_t(f)(x)\longrightarrow f(\xi)$ as
$(x,t)\longrightarrow(\xi,0)$.
\end{Th}
\begin{Th}\label{Th.II.2}
Let $p\in[1,\infty]$ and let $f\in L^p(\R,|x|^{2k}dx)$. Then the
$k$-heat transform $G^k_t(f)$ of $f$ has the following properties
:
\\ (i) For all $t>0$ and $m\in\N$, we have
$$\|G^k_t(f)\|_{k,p}\leq c_k^{-1}\|f\|_{k,p}\,\,\,\mbox{and}\,\,\,\|\partial^m_tG^k_t(f)\|_{k,p}\leq B(k,m)t^{-m}\|f\|_{k,p}.$$
(ii) If $1\leq p<r<\infty$ and
$\delta=\frac{1}{p}-\frac{1}{r}$, then for all $t>0$
$$\|G^k_t(f)\|_{k,r}\leq
t^{-(k+\frac{1}{2})\delta}c_k^{\delta-2}\|f\|_{k,p}$$ and $\|G^k_t(f)\|_{k,r}=o(t^{-(k+\frac{1}{2})\delta})$,\footnote{
$f(x)=o(g(x))$, $x\longrightarrow a$, means $f(x)/g(x)\longrightarrow0$ as $x\longrightarrow a$.} as $t\longrightarrow0^+$.
\end{Th}
\begin{Def}
For any $T\in S'(\R)$, the $k$-heat transform of $T$ is given by
$$G^k_t(T)(x):=T\ast_kF^k_t(x),\,\,x\in\R.$$
\end{Def}
\section{\footnotesize{{\bf{A Semi-group Formula for
$k$-Temperatures}}}} \hspace*{5mm} Hereafter we shall be concerned
mostly with temperatures associated with the Dunkl setting on $\R$, which we call $k$-temperatures, and we establish a property of them which we call the
"semi-group formula".
\begin{Def}
A function ${\cal U}$ on $\R^2_+$ is said to be a $k$-temperature if it
is indefinitely differentiable on $\R^2_+$ and satisfies at each
point of $\R^2_+$ the Dunkl-type heat equation i.e.,
$${\cal D}^2_k{\cal U}(x,t)=\partial_t{\cal U}(x,t).$$
\end{Def}
\par - We consider the following initial-value problem for the $k$-heat equation :
$$(IVP)\left\{
\begin{array}{rcl}
({\cal D}_k^2-\partial_t){\cal U}=&0&\mbox{on}\,\,\,\R^2_+\\
{\cal U}(.,0)=&f&
\end{array}
\right.$$ with initial data $f\in C_b(\R)$ ( that is, the space of bounded continuous functions on $\R$). For $f\in C_0(\R)$, the function
$$H_tf(x)=\dint_{\R}{\cal T}^k_{-y}F^k_t(x)f(y)|y|^{2k}dy,\,\,t>0,$$
solves the initial-value problem (IVP) (see \cite{Ros-Voit}).
\begin{Lem}\label{Lem.III.1}
Let $f$ be in ${\cal E}(\R)$, let $c>0$, $a>0$ and let
$S=\R\times]0,c[$. Then there exists at most one $k$-temperature
${\cal U}$ on $S$ which is continuous on $\overline{S}$ and satisfies the
conditions that ${\cal U}(x,0)=f(x)$, $x\in\R$ and
$$\dint_0^c\left[\dint_{\R}|{\cal U}(x,t)|e^{-ax^2}|x|^{2k}dx\right]dt<\infty.$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad Since $V_k$ is a topological automorphism of the space ${\cal E}(\R)$, it follows from Theorem 16 of Friedman \cite{Friedman}
(see also Lemma 5 of Flett \cite{Flett1}) that there exists at most one classical temperature $\tilde{{\cal U}}$ on $S$ which is continuous on $\overline{S}$
and satisfies the conditions that
$$\tilde{{\cal U}}(x,0)=V_k^{-1}(f)(x),\,\,\,x\in\R\,\,\,\mbox{and}\,\,\,\dint_0^c\left[\dint_{\R}|\tilde{{\cal U}}(x,t)|e^{-ax^2}dx\right]dt<\infty.$$
Thus, $(x,t)\longmapsto {\cal U}(x,t)=V_k(\tilde{{\cal U}}(.,t))(x)$ is a $k$-temperature on $S$ which is continuous on $\overline{S}$ and ${\cal U}(x,0)=f(x)$, $x\in\R$.
From the formula (\ref{e:I.1}) we deduce that for $x\neq0$
\begin{equation}\label{e.VI.150}
V_k(\tilde{{\cal U}}(.,t))(x)=B(k)|x|^{-2k}sgn(x)\dint_{-|x|}^{|x|}\tilde{{\cal U}}(y,t)(x^2-y^2)^{k-1}(x+y)dy.
\end{equation}
Then according to Fubini-Tonelli's theorem, formula (\ref{e.VI.150}), change of variables $\xi=x^2$ and formula (11) given in \cite{Bateman} p. 202, we have
$$\dint_0^c\left[\dint_{\R}|{\cal U}(x,t)|e^{-ax^2}|x|^{2k}dx\right]dt\leq\dint_0^c\left[\dint_{\R}V_k(|\tilde{{\cal U}}(.,t)|)(x)e^{-ax^2}|x|^{2k}dx\right]dt$$
$$\leq B(k)\dint_0^c\left[\dint_{\R}|\tilde{{\cal U}}(y,t)|\left(\dint_{y^2}^{+\infty}e^{-a\xi}(\xi-y^2)^{k-1}d\xi\right)dy\right]dt$$
$$=B(k,a)\dint_0^c\left[\dint_{\R}|\tilde{{\cal U}}(y,t)|e^{-ay^2}dy\right]dt<\infty.$$ This achieves the proof.
\begin{Th}\label{Th.III.2}
Let $p\in[1,\infty]$ and let ${\cal U}$ be a $k$-temperature on $\R^2_+$
such that the function $t\longmapsto\|{\cal U}(.,t)\|_{k,p}$ is locally
integrable on $]0,\infty[$. Hence
\\ (i) for all $s>0$ and $(x,t)\in\R^2_+$,
\begin{equation}\label{semigroup}
{\cal U}(x,s+t)=\dint_{\R}{\cal T}^k_{-y}F^k_t(x){\cal U}(y,s)|y|^{2k}dy.
\end{equation}
(ii) $t\longmapsto\|{\cal U}(.,t)\|_{k,p}$ is decreasing and
continuous on $]0,\infty[$. Further, for each $(n,m)\in\N\times\N$
the function $t\longmapsto\|{\cal
D}_k^n\partial^m_t{\cal U}(.,t)\|_{k,p}$ is decreasing and continuous on $]0,\infty[$.
\end{Th}
{\footnotesize\bf{Proof}}\quad It is obtained in the same way as for Theorem 4 of Flett \cite{Flett1} by
using Lemma \ref{Lem.III.1}.
\begin{Rem}
The equation (\ref{semigroup}) is called the "semi-group formula"
hereafter.
\end{Rem}
\section{\footnotesize{{\bf{Dunkl-Bessel Potentials}}}}\par The
aim of this section is to define the Bessel potential of certain
classes of $k$-temperatures associated with the Dunkl setting on $\R$ and to prove related properties needed
later. We adopt the method used by Flett \cite{Flett1} and Johnson \cite{Johnson} in treating classical temperatures.
\begin{Def}
For any $f\in L^p(\R,|x|^{2k}dx)$, where $1\leq p\leq\infty$
and for any $\alpha>0$, the Dunkl-Bessel potential ${\cal
J}^k_{\alpha}f$ of order $\alpha$ of $f$ is given by
$${\cal J}^k_{\alpha}f:={\cal B}^k_{\alpha}\ast_kf,$$ with the kernel
function
\begin{equation}\label{e.IV.1}
\begin{array}{rcl} {\cal B}^k_{\alpha}(x) &:=& \dfrac{1}
{2^{k+\frac{1}{2}}\Gamma(\frac{\alpha}{2})}\dint_0^{+\infty}
e^{-t}e^{-\frac{x^2}{4t}}t^{-k+\frac{(\alpha-1)}{2}-1}dt\\
&=& \dfrac{1}
{2^{\frac{\alpha}{2}-1}\Gamma(\frac{\alpha}{2})}|x|^{
\frac{1}{2}(\alpha-1)-k} K_{\frac{\alpha}{2}-\frac{1}{2}-k}(|x|).
\end{array}
\end{equation}
\end{Def}
Here $$K_{\beta}(z):=\dfrac{\pi}{2}\left\{\dfrac{J_{-\beta}(z)-J_{\beta}(z)}{\sin\beta\pi}
\right\},$$ where $J_{\beta}$ is the modified Bessel function
of the first kind with series expansion
$$J_{\beta}(z):=\dsum_{n=0}^{+\infty}\dfrac{(\frac{1}{2}z)^{\beta+2n}}{n!\Gamma(\beta+n+1)}.$$
The Bessel potentials associated with the Dunkl setting on $\R$, which we call the $k$-Bessel potentials, are bounded operators from
$L^p(\R,|x|^{2k}dx)$ to itself for $1\leq p\leq\infty$ (see
\cite{Xu2}), i.e., if $f\in L^p(\R,|x|^{2k}dx)$ and $\alpha>0$, then ${\cal J}^k_{\alpha}f\in
L^p(\R,|x|^{2k}dx)$ and $\|{\cal
J}^k_{\alpha}f\|_{k,p}\leq\|f\|_{k,p}$. Further, for
$\alpha,\beta>0$ $${\cal J}^k_{\alpha}({\cal J}^k_{\beta}f)={\cal
J}^k_{\alpha+\beta}f.$$
By using the well-known asymptotic behavior of the function $K_{\nu}$, $\nu\in\R$ ( see \cite{Aron} page 415 ), we deduce that
\\ (a)
${\cal B}^k_{\alpha}(x)\sim\dfrac{\Gamma(\frac{1-\alpha}{2}+k)}{2^{\alpha-\frac{1}{2}-k}\Gamma(\frac{\alpha}{2})}|x|^{\alpha-1-2k}$,
\,\,\footnote{As usual, we write $f(x)\sim g(x)$ as $x\longrightarrow a$ if $\displaystyle\lim_{x\longrightarrow
a}\frac{f(x)}{g(x)}=1$.}
as $|x|\longrightarrow0$, for $0<\alpha<2k+1$.
\\ (b) ${\cal B}^k_{1+2k}(x)\sim\dfrac{1}{2^{k-\frac{1}{2}}\Gamma(k+\frac{1}{2})}\log(\dfrac{1}{|x|})$
as $|x|\longrightarrow0$.
\\ (c)
${\cal B}^k_{\alpha}(x)\sim\frac{\Gamma(\frac{\alpha-1}{2}-k)}{2^{\frac{1}{2}+k}\Gamma(\frac{\alpha}{2})}$
as $|x|\longrightarrow0$, for $\alpha>2k+1$.
\\ (d)
${\cal B}^k_{\alpha}(x)\sim\dfrac{\sqrt{\pi}}{2^{\frac{\alpha-1}{2}}\Gamma(\frac{\alpha}{2})}|x|^{\frac{\alpha}{2}-1-k}e^{-|x|}$
as $|x|\longrightarrow\infty$, for $\alpha>0$.
\\ As a consequence, we obtain
\begin{equation}\label{e.IV.2}
{\cal B}^k_{\alpha}(x)\leq
B(k,\alpha)|x|^{\alpha-1-2k},\,\,\,\mbox{if}\,\,\,0<\alpha<1+2k.
\end{equation}
By differentiation under the integration sign of formula (\ref{e.IV.1}), and using the identity
$$t^{-a}=\dfrac{1}{\Gamma(a)}\dint_0^{+\infty}e^{-t\delta}\delta^a\dfrac{d\delta}{\delta},\,\,\,\mbox{with}\,\,\,a>0,$$
we show that
\begin{equation}\label{e.IV.3}
|{\cal D}_k{\cal B}^k_{\alpha}(x)|<B(k,\alpha)|x|^{\alpha-2-2k},\,\,\mbox{if}\,\,\,0<\alpha<2k+3.
\end{equation}
In addition, one can see that the kernel ${\cal B}_{\alpha}^k$, $\alpha>0$, satisfies
\\ (i) ${\cal B}_{\alpha}^k(x)\geq0$, for all $x\in\R$.
\\ (ii) $\|{\cal B}^k_{\alpha}\|_{k,1}=1$.
\\ (iii) ${\cal
F}_k({\cal B}^k_{\alpha})(x)=(1+x^2)^{-\frac{\alpha}{2}}$, $x\in\R$.
\\ (iv)
${\cal B}^k_{\alpha_1+\alpha_2}={\cal B}^k_{\alpha_1}\ast_k{\cal B}^k_{\alpha_2}$,
if $\alpha_1$, $\alpha_2>0$.
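As a consistency check connecting (iii) and (iv) (a sketch added here, assuming the Dunkl convolution theorem in the multiplicative form ${\cal F}_k(f\ast_kg)={\cal F}_k(f){\cal F}_k(g)$ implicit in the normalization of (ii) and (iii)), we have
$${\cal F}_k({\cal B}^k_{\alpha_1}\ast_k{\cal B}^k_{\alpha_2})(x)=(1+x^2)^{-\frac{\alpha_1}{2}}(1+x^2)^{-\frac{\alpha_2}{2}}=(1+x^2)^{-\frac{\alpha_1+\alpha_2}{2}}={\cal F}_k({\cal B}^k_{\alpha_1+\alpha_2})(x),$$
so that (iv) follows from the injectivity of ${\cal F}_k$ on $L^1(\R,|x|^{2k}dx)$.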
\par The next theorem is the basis of our definition of the Dunkl-Bessel potential for $k$-temperatures.
\begin{Th}\cite{N-A-S}\label{Th.IV.1}
Let $\alpha>0$, $1\leq p\leq\infty$ and let $f\in
L^p(\R,|x|^{2k}dx)$, then \\ (i) The $k$-Bessel
potential ${\cal J}^k_{\alpha}f$ of order $\alpha$ of $f$ is
given for almost all $x$ by
\begin{equation}\label{e.IV.4}
{\cal
J}^k_{\alpha}f(x)=\dfrac{1}
{\Gamma(\frac{\alpha}{2})}\dint_0^{+\infty}t^{\frac{\alpha}{2}-1}
e^{-t}G_t^k(f)(x)dt,
\end{equation}
where $G_t^k(f)$, $t>0$, is the $k$-heat transform of $f$ on $\R$.
\\ (ii) The $k$-heat transform of ${\cal
J}^k_{\alpha}f$, $\alpha>0$,
on $\R$ is the function $G_s^k({\cal
J}^k_{\alpha}f)$ given by
\begin{equation}\label{e.IV.5}
G_s^k({\cal
J}^k_{\alpha}f)(x)=\dfrac{1}
{\Gamma(\frac{\alpha}{2})}\dint_0^{+\infty}t^{\frac{\alpha}{2}-1}
e^{-t}G^k_{s+t}(f)(x)dt.
\end{equation}
Moreover, for each $s>0$, the function $x\mapsto G_s^k({\cal
J}^k_{\alpha}f)(x)$ is the $k$-Bessel potential of $x\mapsto G_s^k(f)(x)$.
\end{Th}
\begin{Def}
Let ${\cal T}^k(\R^2_+)$ denote the linear space of $k$-temperatures
${\cal U}$ on $\R^2_+$ with the properties that if
$(n,m)\in\N\times\N$, $b>0$, $c>0$, and $S$ is a compact
subset of $\R$, then there is a positive constant $C$ such that
$$
|{\cal D}_k^{n}\partial^m_t{\cal U}(x,t)|\leq
Ct^{-b}e^t,\,\,\,\mbox{for all}\,\,(x,t)\in
S\times[c,\infty[.
$$
\end{Def}
\begin{Def}\label{Def.IV.2}
For any ${\cal U}$ in ${\cal T}^k(\R^2_+)$ and any real number
$\alpha$, ${\cal J}^k_{\alpha}{\cal U}$ is the function defined on
$\R^2_+$ by
\\ (i) ${\cal
J}^k_0({\cal U})={\cal U}$;\\ (ii) if $\alpha>0$,
$$
{\cal
J}^k_{\alpha}({\cal U})(x,s)=\dfrac{1}
{\Gamma(\frac{\alpha}{2})}\dint_0^{+\infty}t^{\frac{\alpha}{2}
-1}e^{-t}{\cal U}(x,s+t)dt;
$$
\\ (iii) if $\alpha$ is a negative even integer, say
$\alpha=-2m$, then $${\cal J}^k_{\alpha}({\cal U})(x,s)={\cal J}^k_{-2m}({\cal U})(x,s)=(-1)^me^s\partial
^m_s\{e^{-s}{\cal U}(x,s)\};$$
(iv) if $\alpha=-\beta<0$ and $\beta$ is not an even
integer, then $${\cal J}^k_{\alpha}({\cal U})={\cal J}^k_{-\beta}({\cal U})={\cal J}^k_{2m-\beta}
\left({\cal J}^k_{-2m}({\cal U})\right);$$ where
$m=[\frac{1}{2}\beta]+1$, \footnote{Here $[x]$ stands for the greatest integer not
exceeding $x$, $x\in\R$. } and where ${\cal J}^k_{2m-\beta}$
and ${\cal J}^k_{-2m}$ are defined as in (ii) and
(iii).
\end{Def}
\begin{Th}\label{Th.IV.2}\cite{N-A-S}
Let ${\cal U}\in{\cal T}^k(\R^2_+)$ and $\alpha$, $\beta$ be real numbers. \\ (i)
${\cal J}^k_{\alpha}({\cal U})$ is well-defined and ${\cal J}^k_{\alpha}({\cal U})\in{\cal
T}^k(\R^2_+)$, \\ (ii) ${\cal J}^k_{\alpha}
\left({\cal J}^k_{\beta}({\cal U})\right)={\cal
J}^k_{\alpha+\beta}({\cal U})={\cal J}^k_{\beta}
\left({\cal J}^k_{\alpha}({\cal U})\right)$.
\end{Th}
\begin{Cor}\label{Cor.IV.01}
For each real number $\alpha$, ${\cal J}^k_{\alpha}$ is a linear isomorphism of ${\cal T}^k(\R^2_+)$ onto itself, with inverse ${\cal J}^k_{-\alpha}$.
\end{Cor}
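\par As a simple illustration of Corollary \ref{Cor.IV.01}, take $\alpha=2$: for ${\cal U}\in{\cal T}^k(\R^2_+)$, Definition \ref{Def.IV.2}(ii) gives
$$e^{-s}{\cal J}^k_2({\cal U})(x,s)=\dint_0^{+\infty}e^{-(s+t)}{\cal U}(x,s+t)dt=\dint_s^{+\infty}e^{-\tau}{\cal U}(x,\tau)d\tau,$$
so that, by Definition \ref{Def.IV.2}(iii) with $m=1$,
$${\cal J}^k_{-2}\left({\cal J}^k_2({\cal U})\right)(x,s)=-e^s\partial_s\left\{e^{-s}{\cal J}^k_2({\cal U})(x,s)\right\}=-e^s\left(-e^{-s}{\cal U}(x,s)\right)={\cal U}(x,s).$$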
\begin{Th}\label{Th.IV.3}
Let $f$ be in $L^p(\R,|x|^{2k}dx)$, $1\leq p\leq\infty$,
$\alpha>0$, and let $G^k_t(f)$ be the $k$-heat transform of
$f$ on $\R^2_+$. Then for $t>0$
\\ (i) $\|{\cal J}^k_{\alpha}G^k_t(f)\|_{k,p}\leq
c_k^{-1}\|f\|_{k,p}$;
\\ (ii) $\|{\cal J}^k_{-\alpha}G^k_t(f)\|_{k,p}\leq
B(k,\alpha)(t^{-\frac{1}{2}\alpha}+1)\|f\|_{k,p}$;
\\ (iii) furthermore, if $1\leq p<\infty$ then
$$\|{\cal J}^k_{-\alpha}G^k_t(f)\|_{k,p}=o(t^{-\frac{1}{2}\alpha}),\,\,\,\mbox{as}\,\,\,t\longrightarrow0^+.$$
\end{Th}
{\footnotesize\bf{Proof}}\quad Part (i) follows from relation
(\ref{e.IV.5}), Minkowski's integral inequality and Theorem
\ref{Th.II.2}(i). Part (ii) for $\alpha=2m$, $m\in\N$, follows from the identity
$${\cal J}^k_{-2m}G^k_t(f)=\dsum_{i=0}^m(-1)^i\binom{m}{i}\partial^i_tG^k_t(f),$$
Minkowski's inequality, Theorem \ref{Th.II.2}(i) and the inequality
\begin{equation}\label{Relation}
(a+b)^s\leq2^{s-1}(a^s+b^s),\,\,s\in[1,+\infty[,\,\,a,b\geq0.
\end{equation}
Suppose now that
$\alpha$ is not an even integer and let
$m=[\frac{1}{2}\alpha]+1$. Then for $(x,s)\in\R^2_+$
$${\cal J}^k_{-\alpha}G^k_s(f)(x)=\dfrac{1}{\Gamma(m-\frac{1}{2}\alpha)}\dint_0^{+\infty}t^{m-\frac{1}{2}\alpha-1}e^{-t}
{\cal J}^k_{-2m}G^k_{s+t}(f)(x)dt.$$ Hence, Minkowski's integral
inequality and the previous case when $\alpha=2m$ yield that $\|{\cal J}^k_{-\alpha}G^k_s(f)\|_{k,p}\leq
B(k,\alpha)(s^{-\frac{1}{2}\alpha}+1)\|f\|_{k,p}$. We shall prove (iii)
only when $\alpha=2m$, because the general case can be treated
in the same manner. Let $(x,t)$ be in $\R^2_+$. Thus by
Proposition \ref{Prop.II.1}(iv)
$${\cal J}^k_{-2m}G^k_t(f)(x)=\dsum_{i=0}^m(-1)^i\binom{m}{i}\dint_{\R}\partial^i_tF^k_t(y)({\cal
T}^k_{-y}f(x)-f(x))|y|^{2k}dy,$$ which, together with Minkowski's integral
inequality, implies that
$$t^m\|{\cal J}^k_{-2m}G^k_t(f)\|_{k,p}\leq t^m\dsum_{i=0}^m\binom{m}{i}
\dint_{|y|<\delta}|\partial^i_tF^k_t(y)|\|{\cal T}^k_{-y}f-f\|_{k,p}|y|^{2k}dy+$$
$$t^m\dsum_{i=0}^m\binom{m}{i}\dint_{|y|\geq\delta}|\partial^i_tF^k_t(y)|\|{\cal
T}^k_{-y}f-f\|_{k,p}|y|^{2k}dy=I_1(t)+I_2(t)\,\,\,(\delta>0).$$
Since $\lim_{y\longrightarrow0}\|{\cal T}^k_{-y}f-f\|_{k,p}=0$,
for an arbitrary positive number $\epsilon$, there exists a
$\delta>0$ such that $\|{\cal T}^k_{-y}f-f\|_{k,p}<\epsilon$ if
$|y|<\delta$. Therefore, from Proposition \ref{Prop.II.1}(iv) and inequality (\ref{Relation}), we obtain $I_1(t)\leq B(k,m)(1+t^m)\epsilon$. By
relation (\ref{e.I.4}), Proposition \ref{Prop.II.1}(iii) and the change of variables, we have
$$I_2(t)\leq
B(k)\|f\|_{k,p}\dsum_{i=0}^m\binom{m}{i}t^{m-i}\dint_{\frac{\delta^2}{4t}}^{+\infty}|R_i(\sigma)|
e^{-\sigma}\sigma^{k-1/2}d\sigma.$$
Letting $t\rightarrow0^+$, the last integral tends to $0$. This proves part (iii).
\begin{Cor}\label{Cor.IV.1}
Let $\alpha>0$, $1\leq p\leq\infty$, and ${\cal U}$ be in ${\cal
T}^k(\R^2_+)$. If ${\cal U}$ satisfies the semi-group formula, then
for all $s,t>0$
\\ (i) $\|{\cal
J}^k_{\alpha}{\cal U}(.,s+t)\|_{k,p}\leq\|{\cal U}(.,s)\|_{k,p}$.
\\ (ii) $\|{\cal
J}^k_{-\alpha}{\cal U}(.,s+t)\|_{k,p}\leq B(k,\alpha)(t^{-\frac{1}{2}\alpha}+1)\|{\cal U}(.,s)\|_{k,p}$.
\end{Cor}
{\footnotesize\bf{Proof}}\quad Let $s$ be fixed. We may assume that
$\|{\cal U}(.,s)\|_{k,p}$ is finite (otherwise the conclusion is trivial). Then for all $t>0$, the semi-group formula for ${\cal U}$ yields
$${\cal U}(x,s+t)=\dint_{\R}{\cal T}^k_{-y}F^k_t(x){\cal U}(y,s)|y|^{2k}dy$$
which implies the corollary by reasoning analogous to that of Theorem \ref{Th.IV.3}.
\begin{Th}\label{Th.IV.4}
Let $1\leq p\leq\infty$, $1\leq q<\infty$, $\beta$ be a positive
number and ${\cal U}$ be a $k$-temperature on $\R^2_+$ such that
$$C=\left\{\dint_0^{+\infty}t^{\frac{1}{2}q\beta-1}e^{-t}\|{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}<\infty.$$
Then for $t>0$, $\|{\cal U}(.,t)\|_{k,p}\leq
B(q,\beta)(1+t^{-\frac{1}{2}\beta})C$ and
$\|{\cal U}(.,t)\|_{k,p}=o(t^{-\frac{1}{2}\beta})$ as $t\longrightarrow0^+$.
Moreover, if $q<r<\infty$, then
$$\left\{\dint_0^{+\infty}t^{\frac{1}{2}r\beta-1}e^{-t}\|{\cal U}(.,t)\|^r_{k,p}dt\right\}^{\frac{1}{r}}\leq B(q,r,\beta)C.$$
\end{Th}
{\footnotesize\bf{Proof}}\quad The proof is similar to the classical case (see Theorem 11 p. 405 in \cite{Flett1}).
\begin{Th}\label{Th.IV.5}
Let $1\leq p\leq\infty$, $1\leq q\leq\infty$, $\alpha$ be a real
number, $\beta>0$, $\beta>\alpha$ and ${\cal U}$ be a
$k$-temperature on $\R^2_+$ such that
$$C:=\left\{
\begin{array}{ll}
\left\{\dint_0^{+\infty}t^{\frac{1}{2}q\beta-1}e^{-t}\|{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}=C_1<\infty,&(1\leq q<\infty),\\[1mm]
\displaystyle\sup_{t>0}\left\{t^{\frac{1}{2}\beta}e^{-t}\|{\cal U}(.,t)\|_{k,p}\right\}=C_2<\infty,&(q=\infty).
\end{array}
\right.
$$
Then
\\ (i) ${\cal U}\in{\cal T}^k(\R^2_+)$ and
$$\left\{
\begin{array}{ll}
\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-\alpha)-1}e^{-t}\|{\cal J}^k_{\alpha}{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}\leq
B(k,\alpha,\beta,q)C_1,&(1\leq q<\infty),\\[1mm]
\displaystyle\sup_{t>0}\left\{t^{\frac{1}{2}(\beta-\alpha)}e^{-t}\|{\cal J}^k_{\alpha}{\cal U}(.,t)\|_{k,p}\right\}
\leq B(k,\alpha,\beta)C_2,&(q=\infty).
\end{array}
\right.
$$
(ii) If $1\leq q<\infty$, then $\|{\cal
J}^k_{\alpha}{\cal U}(.,t)\|_{k,p}=o(t^{-\frac{1}{2}(\beta-\alpha)})$
as $t\longrightarrow0^+$.
\\ (iii) If $q=\infty$ and
$\|{\cal U}(.,t)\|_{k,p}=o(t^{-\frac{1}{2}\beta})$ as $t\longrightarrow0^+$,
then $\|{\cal
J}^k_{\alpha}{\cal U}(.,t)\|_{k,p}=o(t^{-\frac{1}{2}(\beta-\alpha)})$
as $t\longrightarrow0^+$.
\end{Th}
{\footnotesize\bf{Proof}}\quad Clearly $t\longmapsto\|{\cal U}(.,t)\|_{k,p}$ is locally
integrable on $]0,\infty[$, so that ${\cal U}\in{\cal T}^k(\R^2_+)$ and
$\|{\cal U}(.,t)\|_{k,p}$ is decreasing. Therefore ${\cal
J}^k_{\alpha}{\cal U}$ is well defined. First, suppose that
$\gamma=-\alpha>0$. Then by Corollary \ref{Cor.IV.1} we see that
\begin{equation}\label{e.IV.6}
\|{\cal J}^k_{\alpha}{\cal U}(.,2t)\|_{k,p}\leq
B(k,\alpha)(t^{\frac{1}{2}\alpha}+1)\|{\cal U}(.,t)\|_{k,p}
\end{equation}
which implies that
$$\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-\alpha)-1}e^{-t}\|{\cal J}^k_{\alpha}{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}\leq B(k,\alpha,\beta,q)C_1.$$
Next, we shall prove the result for the special case when $\alpha=2$ and
$\beta>2$. Since
\begin{equation}\label{e.IV.7}
{\cal J}^k_2{\cal U}(x,t)=\dint_0^{+\infty}e^{-\xi}{\cal U}(x,t+\xi)d\xi,
\end{equation}
it follows from Minkowski's integral inequality and Hardy's inequality that
$$\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-2)-1}e^{-qt}\|{\cal J}^k_2{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}\leq
B(k,\beta,q)\left\{\dint_0^{+\infty}t^{\frac{1}{2}q\beta-1}e^{-qt}\|{\cal U}(.,t)\|^q_{k,p}dt\right\}^{1/q}.$$
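Here Hardy's inequality is used in the following form (cf. \cite{Flett1}): if $\gamma>0$, $1\leq q<\infty$ and $\phi$ is a non-negative measurable function on $]0,+\infty[$, then
$$\left\{\dint_0^{+\infty}t^{q\gamma-1}\left(\dint_t^{+\infty}\phi(s)ds\right)^qdt\right\}^{\frac{1}{q}}\leq\dfrac{1}{\gamma}\left\{\dint_0^{+\infty}t^{q\gamma-1}\left(t\phi(t)\right)^qdt\right\}^{\frac{1}{q}};$$
it is applied with $\gamma=\frac{1}{2}(\beta-2)$ and $\phi(s)=e^{-s}\|{\cal U}(.,s)\|_{k,p}$, since relation (\ref{e.IV.7}) gives $e^{-t}\|{\cal J}^k_2{\cal U}(.,t)\|_{k,p}\leq\dint_t^{+\infty}\phi(s)ds$.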
To prove the result for $\alpha=\delta>0$, let
$\gamma$ be the least positive number such that $\gamma+\delta$
is an even positive integer. Then, applying part (i) in the case $\alpha<0$, we
have
$$\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta+\gamma)-1}e^{-t}\|{\cal J}^k_{-\gamma}{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}\leq B(k,\gamma,\beta,q)C_1$$
and hence, after repeated applications of part (i) in the case $\alpha=2$,
we obtain
$$\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-\delta)-1}e^{-t}\|{\cal J}^k_{\delta} {\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}\leq
B(k,\alpha,\beta,q)C_1.$$ The estimate
$$\displaystyle\sup_{t>0}\left\{t^{\frac{1}{2}(\beta-\alpha)}e^{-t}\|{\cal J}^k_{\alpha}{\cal U}(.,t)\|_{k,p}\right\}\leq B(k,\alpha,\beta)C_2$$ follows directly from Corollary \ref{Cor.IV.1}.
Assertion (ii) then follows from part (i) and Theorem
\ref{Th.IV.4}.\\ We now prove assertion (iii). First, when
$\alpha<0$, the result follows easily from the estimate
(\ref{e.IV.6}). Next, we prove the result for the case when
$\alpha=2$ and $\beta>2$. It follows from relation (\ref{e.IV.7}) and Minkowski's integral inequality
that
$$s^{\frac{\beta}{2}-1}\|{\cal J}^k_2{\cal U}(.,s)\|_{k,p}\leq
s^{\frac{\beta}{2}-1}e^s\dint_s^{+\infty}e^{-t}\|{\cal U}(.,t)\|_{k,p}dt,$$
consequently the assertion is proved for the special case. In the case
$\alpha=\delta>0$, choose $\gamma>0$ such that $\gamma+\delta$ is
an even positive integer. Applying the above result for $\alpha<0$, we see
that $\|{\cal
J}^k_{-\gamma}{\cal U}(.,t)\|_{k,p}=o(t^{-\frac{1}{2}(\beta+\gamma)})$.
Repeated use of the result for $\alpha=2$ yields
$\|{\cal J}^k_{\delta}{\cal U}(.,t)\|_{k,p}=\|{\cal J}^k_{\gamma+\delta}
({\cal J}^k_{-\gamma}{\cal U}(.,t))\|_{k,p}=o(t^{-\frac{1}{2}(\beta+\gamma)+\frac{1}{2}(\gamma+\delta)})
=o(t^{-\frac{1}{2}(\beta-\delta)})$. Thus part (iii) is proved.
\begin{Def}
For any real number $\alpha$ and for any $T\in S'(\R)$, the
$k$-Bessel potential of order $\alpha$ of $T$ is the element
${\cal J}^k_{\alpha}(T)$ of $S'(\R)$ given by the relation
$${\cal F}_k({\cal J}^k_{\alpha}(T)):=(1+(.)^2)^{-\frac{\alpha}{2}}{\cal F}_k(T),$$
where the identity is to be understood in the sense of
distributions.
\end{Def}
\begin{Rems} We have
\begin{itemize}
\item For all real $\alpha$, $\beta$ and all $T\in S'(\R)$
$${\cal J}^k_{\alpha}({\cal J}^k_{\beta}(T))={\cal J}^k_{\alpha+\beta}(T).$$
\item By definition
$${\cal J}^k_{\alpha}(T)=T\ast_k{\cal B}^k_{\alpha},$$
where ${\cal B}^k_{\alpha}$ is a tempered distribution whose Dunkl
transform ${\cal
F}_k({\cal B}^k_{\alpha})=\left[(1+(.)^2)^{-\frac{\alpha}{2}}\right]$,
\footnote{$[f]$ is the distribution on $\R$ associated with the
function $f$. In addition $[f]$ belongs to $S'(\R)$, when $f\in
L^p(\R,|x|^{2k}dx)$ or $f$ is slowly increasing}.
\item If $f\in L^p(\R,|x|^{2k}dx)$, where $p\in [1,\infty]$ and
$\alpha>0$, then
$${\cal J}^k_{\alpha}([f])={\cal J}^k_{\alpha}(f)=f\ast_k{\cal B}^k_{\alpha}.$$
\end{itemize}
\end{Rems}
\section{\footnotesize{{\bf{Generalized Dunkl-Lipschitz Spaces, $\alpha$ Real}}}}
\par Our basic aim is to define Lipschitz spaces associated with the Dunkl
operators for all real $\alpha$. In the classical case, the heat (or
Poisson) semi-group provides an alternative characterization of
the Lipschitz spaces; we will follow this approach, using the
$k$-heat (or $k$-Poisson) semi-group, to define generalized Dunkl-Lipschitz
spaces. One of the main results of this section is that ${\cal J}^k_{\beta}$ is an isomorphism of $\wedge^k_{\alpha,p,q}(\R)$ onto $\wedge^k_{\alpha+\beta,p,q}(\R)$ for real $\alpha$ and $\beta$. The section closes with some properties of, and continuous embeddings between, the spaces $\wedge^k_{\alpha,p,q}(\R)$.
\par For $t>0$, we define the function $P^k_t$ on $\R$ by
$$P^k_t(x):=\tilde{c}_k\,\dfrac{t}{(t^2+x^2)^{k+1}},\,\,\,\mbox{where}\,\,\,\,\tilde{c}_k :=\dfrac{2^{k+\frac{1}{2}}}{\Gamma(\frac{1}{2})}\Gamma(k+1).$$
The function $P^k_t$ is called the $k$-Poisson kernel. We summarize the properties of $P^k_t$ in the following proposition:
\begin{Prop}\label{Prop.V.1}
For all $t>0$, $n\in\N$ and $x\in\R$, we have
\\ (i) ${\cal F}_k(P^k_t)(x)=e^{-t|x|}$.
\\ (ii) $\int_{\R}P^k_t(y)|y|^{2k}dy=1$.
\\ (iii) $P^k_t\in L^p(\R,|x|^{2k}dx)$, $1\leq p\leq\infty$.
\\ (iv) $P^k_{t_1+t_2}=P^k_{t_1}\ast_kP^k_{t_2}$, if $t_1,t_2>0$.
\\ (v) $\|\partial^n_tP^k_t\|_{k,1}\leq B(k,n)t^{-n}$, $\|{\cal D}^n_kP^k_t\|_{k,1}\leq \tilde{B}(k,n)t^{-n}$ and $|\partial^n_tP^k_t(x)|\leq B(k,n)
t^{-2k-1-n}$.
\\ (vi) $\displaystyle\lim_{t\rightarrow0}P^k_tf(x)=f(x)$, where the limit is interpreted in the $L^p_k$-norm and pointwise a.e. For $f\in C_0(\R)$ the convergence is uniform on $\R$.
\end{Prop}
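\par Note that property (iv) is, at least formally, an immediate consequence of property (i): since the Dunkl transform ${\cal F}_k$ turns the convolution $\ast_k$ into a pointwise product, we have
$${\cal F}_k(P^k_{t_1}\ast_kP^k_{t_2})(x)={\cal F}_k(P^k_{t_1})(x)\,{\cal F}_k(P^k_{t_2})(x)=e^{-(t_1+t_2)|x|}={\cal F}_k(P^k_{t_1+t_2})(x),$$
and the injectivity of ${\cal F}_k$ on $L^1(\R,|x|^{2k}dx)$ then yields (iv).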
Now, for $t>0$ and for all $f\in L^p(\R,|x|^{2k}dx)$, $p\in[1,\infty]$, we put
$$P^k_tf(x):=P^k_t\ast_kf(x),\,\,x\in\R.$$
The function $P^k_tf$ is called the Poisson transform of $f$
associated with the Dunkl setting on $\R$; for brevity we call it the $k$-Poisson transform of $f$.
\\ A $C^2$ function ${\cal U}$ on $\R^2_+$ satisfying $({\cal D}^2_k+\partial^2_t){\cal U}(x,t)=0$ is said to be $k$-harmonic. For $p\in[1,\infty]$, we suppose that
\begin{equation}\label{e.VI.100}
A^p:=\displaystyle\sup_{t>0}B(k)\dint_{\R}|{\cal U}(x,t)|^p|x|^{2k}dx<\infty.
\end{equation}
Now, we need the following key results.
\begin{Lem} (Semi-group property)\label{Lem.VI.Semi-group}
If ${\cal U}(x,t)$ is $k$-harmonic on $\R^2_+$ and bounded in each proper sub-half space of $\R^2_+$, then for $t_0>0$, ${\cal U}(x,t+t_0)$ is identical with the $k$-Poisson transform of ${\cal U}(.,t_0)$, that is,
$${\cal U}(x,t_0+t)=P^k_t({\cal U}(.,t_0))(x),\,\,\mbox{for}\,\,t>0.$$
Furthermore,
$$\partial_t{\cal U}(x,t_0+t)=\partial_tP^k_t({\cal U}(.,t_0))(x)=P^k_t(\partial_t{\cal U}(.,t_0))(x).$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad It is obtained in the same way as for property 12 p. 417 in \cite{Taibleson}.
\begin{Th} \label{Th.VI.characterization}(Characterization of $k$-Poisson transform)
Let $p\in[1,\infty]$ and let ${\cal U}(x,t)$ be $k$-harmonic on $\R^2_+$. Then
\\ (i) if $1<p<\infty$, ${\cal U}(x,t)$ is the $k$-Poisson transform of a function $f\in L^p(\R,|x|^{2k}dx)$ if and only if ${\cal U}(x,t)$ satisfies condition (\ref{e.VI.100}), moreover $\|f\|_{k,p}=A$.
\\ (ii) For $p=1$, ${\cal U}(x,t)$ is the $k$-Poisson transform of $f\in L^1(\R,|x|^{2k}dx)$ if and only if ${\cal U}(x,t)$ satisfies condition (\ref{e.VI.100}) and
$\|{\cal U}(.,t_1)-{\cal U}(.,t_2)\|_{k,1}\longrightarrow0$ as $t_1,t_2\rightarrow0$.
\\ (iii) For $p=\infty$, ${\cal U}(x,t)$ is the $k$-Poisson transform of a function $f\in L^{\infty}(\R,|x|^{2k}dx)$ if and only if there exists $C>0$ such that
$\|{\cal U}(.,t)\|_{k,\infty}\leq C$ for all $t>0$.
\end{Th}
{\footnotesize\bf{Proof}}\quad Parts (i) and (ii) are proved in \cite{Zh-Ji}, Theorem 4.16, p. 254. Part (iii) is proved in the usual way (see \cite{Taibleson}, p. 416).
\begin{Rem}\label{Rem.VI.Temperature}
Analogously to the $k$-harmonic case, Theorem \ref{Th.VI.characterization} and Lemma \ref{Lem.VI.Semi-group} remain true when ${\cal U}(x,t)$ is a
$k$-temperature on $\R^2_+$ and the $k$-Poisson transform is replaced by the $k$-heat transform.
\end{Rem}
\par Before giving a central result of this section, we need to
recall the definition of the spaces $\wedge^k_{\alpha,p,q}(\R)$ (see \cite{S-K}) and the following auxiliary lemmas.
\begin{Def}
The generalized Dunkl-Lipschitz space $\wedge^k_{\alpha,p,q}(\R)$,
$\alpha\in]0,1[$, $1\leq p,q\leq\infty$, is the set of functions $f\in
L^p(\R,|x|^{2k}dx)$ for which
$$\|f\|_{k,p}+\left\{\dint_{\R}\dfrac{\|\triangle_{y,k}f\|^q_{k,p}}{|y|^{1+\alpha
q}}dy\right\}^{\frac{1}{q}}<\infty,\footnote{$\triangle_{y,k}f={\cal T}^k_yf-f$}\,\,\mbox{if}\,\,q<\infty,$$
and $$\|f\|_{k,p}+\displaystyle\sup_{|y|>0}\dfrac{\|\triangle_{y,k}f\|_{k,p}}{|y|^{\alpha}}<\infty,\,\,\mbox{if}\,\,q=\infty.$$
\end{Def}
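\par When $k=0$, the Dunkl translation ${\cal T}^0_y$ reduces to the ordinary translation $f\mapsto f(\cdot+y)$ and the measure $|x|^{2k}dx$ to Lebesgue measure, so that $\wedge^0_{\alpha,p,q}(\R)$ is the classical Lipschitz space studied, for instance, in \cite{Taibleson}.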
{\footnotesize{\bf{Notations}}}
\begin{itemize}
\item For any $k$-harmonic function (or $k$-temperature) ${\cal U}$ on $\R^2_+$, we denote by
$${\cal A}^k_{p,q}({\cal U}):=\left\{
\begin{array}{ll}
\left\{\dint_0^{\infty}\left[\|{\cal
U}(.,t)\|_{k,p}\right]^q\dfrac{dt}{t}\right\}^{\frac{1}{q}}&(1\leq
q<\infty),\\[1mm]
\displaystyle\sup_{t>0}\|{\cal
U}(.,t)\|_{k,p}&(q=\infty),
\end{array}
\right.$$
and
$${\cal A}^{k,\ast}_{p,q}({\cal U}):=\left\{
\begin{array}{ll}
\left\{\dint_0^1\left[\|{\cal
U}(.,t)\|_{k,p}\right]^q\dfrac{dt}{t}\right\}^{\frac{1}{q}}&(1\leq
q<\infty),\\[1mm]
\displaystyle\sup_{0<t\leq1}\|{\cal
U}(.,t)\|_{k,p}&(q=\infty),
\end{array}
\right.$$ the value $\infty$ being allowed.
\item For $\alpha$ real,
$\overline{\alpha}$ will denote the smallest non-negative integer
larger than $\alpha$.
\end{itemize}
\begin{Rems}(\cite{S-K}) We have:
\begin{itemize}
\item For $\alpha\in]0,1[$ and $q=\infty$, $f\in\wedge^k_{\alpha,p,\infty}(\R)$ if and only if $\|\partial_tP^k_tf\|_{k,p}
\leq B(k,\alpha)t^{-1+\alpha}.$
\item For $\alpha>0$, $p,q\in[1,\infty]$, we set
$$\wedge^k_{\alpha,p,q}(\R):=\left\{f\in L^p(\R,|x|^{2k}dx):\,\,
{\cal A}^k_{p,q}(t^{\overline{\alpha}-\alpha}\partial^{\overline{\alpha}}_tP^k_t(f))<\infty\right\}.$$
The $\wedge^k_{\alpha,p,q}$-norms are defined by
$$\|f\|_{\wedge^k_{\alpha,p,q}}:=\|f\|_{k,p}+{\cal A}^k_{p,q}(t^{\overline{\alpha}-\alpha}\partial^{\overline{\alpha}}_tP^k_t(f))
.$$
\end{itemize}
\end{Rems}
\begin{Lem}\label{Lem.V.1}
We have
$${\cal B}^k_{\alpha}\in\wedge^k_{\alpha,1,\infty}(\R),\,\,\,\mbox{if}\,\,\,\alpha>0.$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad Let us first consider the case
$\alpha\in]0,1[$. Since ${\cal B}^k_{\alpha}\in L^1(\R,\,|x|^{2k}dx)$, we
can write
$$\|\triangle_{y,k}{\cal B}^k_{\alpha}\|_{k,1}=
\dint_{|x|\leq 2|y|}|{\cal T}^k_y{\cal B}^k_{\alpha}(x)-{\cal B}^k_{\alpha}(x)||x|^{2k}dx+ \dint_{|x|>
2|y|}|{\cal T}^k_y{\cal B}^k_{\alpha}(x)-{\cal B}^k_{\alpha}(x)||x|^{2k}dx=I_1(y)+I_2(y).$$
Since ${\cal B}^k_{\alpha}$ is an even function, formula (\ref{e.I.3})
yields
$${\cal T}^k_y{\cal B}^k_{\alpha}(x)=d_k\dint_0^{\pi}{\cal B}^k_{\alpha}(G(x,y,\theta))h^e(x,y,\theta)\sin^{2k-1}\theta d\theta$$
which shows that ${\cal T}^k_y{\cal B}^k_{\alpha}(x)\geq0$ since ${\cal B}^k_{\alpha}$ is non-negative.
Moreover, using the following inequalities $G(x,y,\theta)\geq||x|-|y||$, $0\leq h^e(x,y,\theta)\leq2$ and relation
(\ref{e.IV.1}), we have
\begin{equation}\label{e.V.2}
{\cal T}^k_{y}{\cal B}^k_{\alpha}(x)\leq 2{\cal B}^k_{\alpha}(|x|-|y|).
\end{equation}
Then, by inequalities (\ref{e.V.2}) and (\ref{e.IV.2}), we have
$$I_1(y)\leq B(k,\alpha)\left\{\dint_{|x|\leq2|y|}||x|-|y||^{\alpha-1-2k}|x|^{2k}dx+\dint_{|x|\leq2|y|}|x|^{\alpha-1}dx\right\}\leq B(k,\alpha)|y|^{\alpha}.$$
By the generalized Taylor formula with integral remainder
(\ref{e.II.Taylor}), we have
\begin{equation}\label{e.V.12'}
|{\cal
T}^k_y{\cal B}^k_{\alpha}(x)-{\cal B}^k_{\alpha}(x)|\leq\dint_{-|y|}^{|y|}|{\cal
T}^k_z({\cal D}_k{\cal B}^k_{\alpha})(x)|dz.
\end{equation}
Since ${\cal
D}_k{\cal B}^k_{\alpha}$ is an odd function, formula (\ref{e.I.3}) gives
$${\cal T}^k_z({\cal D}_k{\cal B}^k_{\alpha})(x)=d_k\dint_0^{\pi}{\cal D}_k{\cal B}^k_{\alpha}(G(x,z,\theta))h^o(x,z,\theta)\sin^{2k-1}\theta d\theta.$$
It is easy to see that $h^o(x,z,\theta)\leq2$ and $0\leq
G(x,z,\theta)\leq|x|+|z|$. Thus, formula (\ref{e.IV.3}) yields
$$|{\cal T}^k_{z}({\cal D}_k{\cal B}^k_{\alpha})(x)|\leq B(k,\alpha)(|x|+|z|)^{\alpha-2-2k}.$$
Hence, by relation (\ref{e.V.12'}) we obtain
$$|{\cal T}^k_y{\cal B}^k_{\alpha}(x)-{\cal B}^k_{\alpha}(x)|\leq B(k,\alpha)|y||x|^{\alpha-2-2k}$$
and so $I_2(y)\leq B(k,\alpha)|y|^{\alpha}$. This completes the proof when $\alpha\in]0,1[$. To pass to
the general case for $\alpha>0$, we write $t=t_1+t_2+\cdots +t_{\overline{\alpha}}$ and $t_i>0$. Then
$$P^k_t{\cal B}^k_{\alpha}=P^k_{t_1}{\cal B}^k_{\beta}\ast_kP^k_{t_2}{\cal B}^k_{\beta}\ast_k\cdots\ast_kP^k_{t_{\overline{\alpha}}}{\cal B}^k_{\beta},$$
where $\beta=\frac{\alpha}{\overline{\alpha}}\in]0,1[$. Therefore, taking
$t_1=t_2=\cdots=t_{\overline{\alpha}}=\frac{t}{\overline{\alpha}}$, we get $\|\partial^{\overline{\alpha}}_tP^k_t{\cal B}^k_{\alpha}\|_{k,1}\leq
B(k,\alpha)t^{\alpha-\overline{\alpha}}$. This finishes the proof.
\begin{Lem}\label{Lem.V.2}
Let $1\leq p,q\leq\infty$ and let ${\cal U}(x,t)$ be $k$-harmonic on $\R^2_+$ and bounded in each
proper sub-half space of $\R^2_+$. Suppose we are given $A>0$,
$\alpha>0$, $t_0>0$ and an integer $n>\alpha$ such that
$${\cal A}^k_{p,q}(t^{n-\alpha}\partial^n_t{\cal U})\leq A\,\,\,\mbox{and}\,\,\,\|{\cal U}(.,t)\|_{k,p}\leq A,\,\,\,t\geq t_0.$$
Then ${\cal U}(x,t)$ is the $k$-Poisson transform of a function
$f\in\wedge^k_{\alpha,p,q}(\R)$ and:
\\ (a) $\|\partial_t{\cal U}(.,t)\|_{k,p}=o(t^{-1})$,
as $t\longrightarrow0$,
\\ (b) $\|f\|_{\wedge^k_{\alpha,p,q}}\leq
B(\alpha,k,t_0,n)A$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad Consider
first the case $\alpha\in]0,1[$. Since ${\cal U}(.,t)=O(1)$, \footnote{$f(x)=O(g(x))$, $x\rightarrow a$, means $\frac{f(x)}{g(x)}$ is
bounded as $x\rightarrow a$.} as
$t\longrightarrow\infty$, Lemma \ref{Lem.VI.Semi-group}, H\"older's inequality and Proposition \ref{Prop.V.1}(v) give $\partial^{m-1}_t{\cal U}(.,t)=o(1)$, $t\longrightarrow\infty$, $m\in\N$.
Using the fact that
$$\partial^{m-1}_t{\cal U}(x,t)=-\dint_t^{\infty}\partial^m_s{\cal U}(x,s)ds,\,\,m\in\N,$$ and
Minkowski's integral inequality, we obtain
\begin{equation}\label{e.V.2'}
\|\partial^{m-1}_t{\cal U}(.,t)\|_{k,p}\leq\dint_t^{+\infty}\|\partial^m_s{\cal U}(.,s)\|_{k,p}ds.
\end{equation}
From Hardy's inequality and relation (\ref{e.V.2'}), we
deduce that
$${\cal A}^k_{p,q}(t^{1-\alpha}\partial_t{\cal U})\leq
B(n,\alpha)A.$$ But $t\longmapsto\|\partial_t{\cal U}(.,t)\|_{k,p}$ is a
non-increasing function, so that
$$((1-\alpha)q)^{-\frac{1}{q}}s^{1-\alpha}\|\partial_s{\cal U}(.,s)\|_{k,p}=\left[\dint_0^s(t^{1-\alpha}\|\partial_s{\cal U}(.,s)\|_{k,p})^q
\dfrac{dt}{t}\right]^{\frac{1}{q}}\leq
B(n,\alpha)A,\,\,\mbox{if}\,\,\alpha<1,$$ which proves
\begin{equation}\label{e.V.2"}
t\|\partial_t{\cal U}(.,t)\|_{k,p}\leq
B(n,\alpha,q)At^{\alpha}=o(1),\,\,\mbox{as}\,\,t\longrightarrow0.
\end{equation}
If $\alpha\geq1$, it is easy to verify that
\begin{equation}\label{e.V.100}
{\cal A}^k_{p,q}(t^{n-\frac{1}{2}}\partial^n_t{\cal U})\leq B(n,k,q,t_0)A.
\end{equation}
Then by relation (\ref{e.V.2'}), Hardy inequality and relation (\ref{e.V.100}), we have
$${\cal A}^k_{p,q}(t^{\frac{1}{2}}\partial_t{\cal U})\leq B(n,k,q,t_0)A.$$
By the same reasoning as for
$\alpha\in]0,1[$, we obtain $t\|\partial_t{\cal U}(.,t)\|_{k,p}=o(1)$ as
$t\longrightarrow0^+$, which proves part (a).
To complete the proof it suffices to find a function $f\in
L^p(\R,|x|^{2k}dx)$ so that ${\cal U}(.,t)=P^k_t(f)$ converges
in the $L^p_k$-norm to $f$ and $\|{\cal U}(.,t)\|_{k,p}\leq
B(\alpha,k,t_0,n)A$. Using inequality (\ref{e.V.2"}), we deduce that for
$t\leq t_0$
$$\|{\cal U}(.,t)\|_{k,p}\leq \|{\cal U}(.,t_0)\|_{k,p}+\dint_t^{t_0}\|\partial_s{\cal U}(.,s)\|_{k,p}ds\leq B(n,\alpha,q,t_0)A.$$
On the other hand, by relation (\ref{e.V.2"}), we have $$\|{\cal U}(.,t_1)-{\cal U}(.,t_2)\|_{k,1}\leq
\dint_{t_1}^{t_2}\|\partial_s{\cal U}(.,s)\|_{k,1}ds\leq
B(n,\alpha,q,t_0)A\dint_{t_1}^{t_2}s^{\alpha-1}ds\longrightarrow0,\,\,\mbox{as}\,\,t_1\leq
t_2\longrightarrow0.$$ According to Theorem
\ref{Th.VI.characterization}, there exists $f\in L^p_k$ (it is
uniformly continuous if $p=\infty$) such that ${\cal
U}(x,t)=P^k_tf$. This completes the proof of Lemma \ref{Lem.V.2}.
\begin{Rems}\label{Rem.VI.100} We have
\begin{itemize}
\item By proceeding in the same manner as before, Lemma \ref{Lem.V.2} remains true when ${\cal U}(x,t)$ is a $k$-temperature on $\R^2_+$
and the $k$-Poisson transform is replaced by the $k$-heat transform.
\item If $\beta>0$, we define $P^k_t({\cal B}^k_{-\beta})$ as follows
\begin{equation}\label{e.V.1}
P_t^k({\cal B}^k_{-\beta})(x)=P^k_t({\cal B}^k_{2-\beta})(x)+\partial^2_tP^k_t({\cal B}^k_{2-\beta})(x),\,\,\,\mbox{when}\,\,\,0<\beta<2,
\end{equation}
and for arbitrary $\beta>0$ by the rule
$$P_t^k({\cal B}^k_{-\beta})(x)=P^k_{\frac{t}{2}}({\cal B}^k_{-\gamma})\ast_kP^k_{\frac{t}{2}}({\cal B}^k_{-\delta})(x),\,\,\mbox{whenever}\,\,\,\gamma+\delta=\beta.$$
\item If $\beta>0$, we define the $k$-Bessel potential ${\cal J}^k_{-\beta}f(x)$ for a function $f\in L^p(\R,|x|^{2k}dx)$, $1\leq p\leq\infty$, by
$${\cal J}^k_{-\beta}f(x)=\displaystyle\lim_{t\longrightarrow0}P^k_t({\cal B}^k_{-\beta})\ast_kf(x),$$
where the limit is interpreted in the $L^p_k$-norm and pointwise a.e.
\end{itemize}
\end{Rems}
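\par Concerning formula (\ref{e.V.1}) above, the definition is consistent on the Dunkl transform side: since $\partial^2_te^{-t|x|}=x^2e^{-t|x|}$, the transform of the right-hand side of (\ref{e.V.1}) is
$$e^{-t|x|}(1+x^2)^{-\frac{2-\beta}{2}}+x^2e^{-t|x|}(1+x^2)^{-\frac{2-\beta}{2}}=e^{-t|x|}(1+x^2)^{\frac{\beta}{2}},$$
which is formally ${\cal F}_k(P^k_t)\,{\cal F}_k({\cal B}^k_{-\beta})$.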
\begin{Rem}\label{Rem.V.1}
For $f\in L^p(\R,|x|^{2k}dx)$, $1\leq p\leq\infty$ and $\beta>0$, the $k$-Poisson transform of ${\cal J}^k_{-\beta}f$,
$P^k_t({\cal J}^k_{-\beta}(f))$, is $k$-harmonic on $\R^2_+$ and
$\|P^k_t({\cal J}^k_{-\beta}(f))\|_{k,p}\leq\|{\cal J}^k_{-\beta}f\|_{k,p}$, for all $t>t_0$, with $t_0>0$.
\end{Rem}
\par We will study the action of the $k$-Bessel potential ${\cal
J}^k_{\beta}$ on the generalized Dunkl-Lipschitz spaces,
$\wedge^k_{\alpha,p,q}(\R)$.
\begin{Th}\label{Th.V.1}
Let $\alpha>0$, $\beta>0$ and $1\leq p,q\leq\infty$. Then ${\cal
J}^k_{\beta}$ is a topological isomorphism from $\wedge^k_{\alpha,p,q}(\R)$ onto
$\wedge^k_{\alpha+\beta,p,q}(\R)$.
\end{Th}
{\footnotesize\bf{Proof}}\quad If $f\in\wedge^k_{\alpha,p,q}(\R)$, by Lemma
\ref{Lem.V.1}, we have
$$\|{\cal J}^k_{\beta}(f)\|_{\wedge^k_{\alpha+\beta,p,q}}\leq B(k,\beta)\|f\|_{\wedge^k_{\alpha,p,q}}$$
which implies the continuity of ${\cal
J}^k_{\beta}$ from $\wedge^k_{\alpha,p,q}(\R)$ into
$\wedge^k_{\alpha+\beta,p,q}(\R)$. If $f\in\wedge^k_{\alpha+\beta,p,q}(\R)$, we may assume without loss of generality that $\beta\in]0,2[$. Applying the formula (\ref{e.V.1}) and Lemma \ref{Lem.V.1}, we obtain
\begin{equation}\label{e.V.6'}
\|P^k_t({\cal B}^k_{-\beta})\|_{k,1}\leq1+B(k,\beta)t^{-\beta}\leq B(k,\beta),\,\,t\geq1.
\end{equation}
Therefore,
$$\|{\cal J}^k_{-\beta}(P^k_t(f))\|_{k,p}\leq B(k,\beta)\|f\|_{\wedge^k_{\alpha+\beta,p,q}},\,\,t\geq1.$$
From formula (\ref{e.V.6'}) and Proposition \ref{Prop.V.1}(v), a direct verification yields that
$${\cal A}^k_{p,q}(t^{\overline{\alpha}+\overline{\beta}-\alpha}\partial^{\overline{\alpha}+\overline{\beta}}_tP^k_t({\cal J}^k_{-\beta}(f)))\leq
B(k,\alpha,\beta)\|f\|_{\wedge^k_{\alpha+\beta,p,q}}.$$ On the
other hand, by Remark \ref{Rem.V.1} and Lemma \ref{Lem.V.2}, there
exists a function $g\in\wedge^k_{\alpha,p,q}(\R)$ satisfying $P^k_t({\cal J}^k_{-\beta}(f))=P^k_t(g)$.
Consequently, we get
$${\cal J}^k_{-\beta}(f)=g\,\,\,\mbox{with}\,\,\,g\in\wedge^k_{\alpha,p,q}(\R)\,\,\,\mbox{and}\,\,\,\|{\cal J}^k_{-\beta}(f)\|
_{\wedge^k_{\alpha,p,q}}\leq
B(k,\alpha,\beta)\|f\|_{\wedge^k_{\alpha+\beta,p,q}}$$ which proves the continuity of ${\cal J}^k_{-\beta}$ from $\wedge^k_{\alpha+\beta,p,q}(\R)$
into $\wedge^k_{\alpha,p,q}(\R)$. We now show that
${\cal J}^k_{-\beta}({\cal
J}^k_{\beta}(f))(x)=f(x)$ a.e.\ if
$f\in\wedge^k_{\alpha,p,q}(\R)$, $\alpha>0$, which
follows from the fact that $P^k_t({\cal J}^k_{-\beta}({\cal J}^k_{\beta}(f)))(x)=P^k_t(f)(x)$; similarly, ${\cal J}^k_{\beta}({\cal
J}^k_{-\beta}(f))(x)=f(x)$ a.e.\ if $f\in\wedge^k_{\alpha+\beta,p,q}(\R)$, $\alpha>0$.
This concludes the proof of the theorem.
\\\par Before giving a formal definition of the generalized Dunkl-Lipschitz
spaces, we introduce the definition of the space ${\cal L}^p_{\alpha,k}(\R)$.
\begin{Def}
The Lebesgue space
$${\cal L}^p_{\alpha,k}(\R):=\left\{T\in S'(\R):\,\,T={\cal J}^k_{\alpha}(g),\,\,g\in L^p(\R,|x|^{2k}dx)\right\},$$
for $\alpha$ real, $1\leq p\leq\infty$, is called the Dunkl-Sobolev
space of fractional order $\alpha$. Define
$$\|T\|_{k,p,\alpha}:=\|g\|_{k,p}.$$
Thus ${\cal L}^p_{\alpha,k}(\R)$ is a Banach space that is an
isometric image of $L^p(\R,|x|^{2k}dx)$.
\end{Def}
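\par For example, ${\cal L}^p_{0,k}(\R)=L^p(\R,|x|^{2k}dx)$ with equality of norms: indeed, ${\cal F}_k({\cal J}^k_0(g))=(1+(.)^2)^{0}{\cal F}_k(g)={\cal F}_k(g)$, so ${\cal J}^k_0$ is the identity and $\|T\|_{k,p,0}=\|T\|_{k,p}$.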
\par Now, following the classical case, see for instance \cite{Taibleson,Flett1}, we are going to define the generalized Dunkl-Lipschitz spaces $\wedge^k_{\alpha,p,q}(\R)$, for all real $\alpha$.
\begin{Def} Let $p,q\in[1,\infty]$, $\alpha\in\R$ and $n=\overline{(\dfrac{\alpha}{2})}$.
\\ (i) If $\alpha>0$, $\wedge^k_{\alpha,p,q}(\R)$ is the space of functions $f\in
L^p(\R,|x|^{2k}dx)$ for which the $k$-heat transform
$G^k_t(f)$ of $f$ satisfies the condition that
$${\cal A}^k_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(f))<\infty.$$
The space is given the norm
$$\|f\|_{\wedge^k_{\alpha,p,q}}:=\|f\|_{k,p}+{\cal A}^k_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(f)).$$
(ii) If $\alpha\leq0$, $\wedge^k_{\alpha,p,q}(\R)$ is the space of tempered
distributions $T\in{\cal L}^p_{\alpha-\frac{1}{2},k}(\R)$ for which
the $k$-heat transform $G^k_t(T)$ of $T$ satisfies the
condition that
$${\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))<\infty.$$
The space is given the norm
$$\|T\|_{\wedge^k_{\alpha,p,q}}:=\|T\|_{k,p,\alpha-\frac{1}{2}}+{\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T)).$$
\end{Def}
\begin{Lem}\label{Lem.V.2'}
Let $\alpha<0$, $1\leq p\leq\infty$, $T\in {\cal L}^p_{\alpha,k}(\R)$ and let $G^k_t(T)$ be the $k$-heat transform of $T$ on $\R^2_+$.
Then $G^k_t(T)\in{\cal T}^k(\R^2_+)$ and
$$\|G^k_t(T)\|_{k,p}\leq B(k,\alpha)(t^{\frac{1}{2}\alpha}+1)\|T\|_{k,p,\alpha}.$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad From Theorem 3.12 of \cite{N-A-S} and Theorem \ref{Th.IV.3}, the result is proved.
\\\par Now, we want to extend the Theorem \ref{Th.V.1} for all real $\alpha$ and $\beta$. For this, we need the following auxiliary lemmas.
\begin{Lem}\label{Lem.V.3}
Let $H(x,t)$ be absolutely continuous as a
function of $t$ for $(x,t)\in\R^2_+$, $t\leq1$. Then for
$\alpha>0$, $p,q\in[1,\infty]$,
$${\cal A}^{k,\ast}_{p,q}(t^{\alpha}H)\leq B(\alpha,q)\left[{\cal A}^{k,\ast}_{p,q}(t^{\alpha+1}\partial_tH)+\|H(.,1)\|_{k,p}\right].$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad We shall prove the lemma only when $q\in[1,\infty[$; the case $q=\infty$ can be treated similarly. We can write
$$H(x,t)=H(x,1)-\dint_t^1\partial_sH(x,s)ds.$$
From Minkowski's integral inequality, we obtain
$${\cal A}^{k,\ast}_{p,q}(t^{\alpha}H)\leq B(\alpha,q)\|H(.,1)\|_{k,p}+\left\{\dint_0^1\left[t^{\alpha}\dint_t^1
\|\partial_sH(.,s)\|_{k,p}ds\right]^q\dfrac{dt}{t}\right\}^{\frac{1}{q}}.$$
The announced result follows from Hardy's inequality.
\begin{Rem}\label{Rem.V.1.}
Observe that, for $\alpha>0$, the tempered distribution ${\cal B}^k_{\alpha}$ is a
function in $L^1(\R,|x|^{2k}dx)$. For $\alpha=0$ it is the Dirac
delta $\delta_0$ and for $-\alpha\in]0,2[$
$$G^k_t({\cal B}^k_{\alpha})(x)=G^k_t({\cal B}^k_{\alpha+2})(x)-\partial_tG^k_t({\cal B}^k_{\alpha+2})(x)$$
which is easily verified by taking the Dunkl transform ${\cal F}_k$.
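Explicitly, ${\cal F}_k(G^k_t({\cal B}^k_{\alpha+2}))(x)=e^{-tx^2}(1+x^2)^{-\frac{\alpha+2}{2}}$ and $-\partial_te^{-tx^2}=x^2e^{-tx^2}$, so the transform of the right-hand side is
$$e^{-tx^2}(1+x^2)(1+x^2)^{-\frac{\alpha+2}{2}}=e^{-tx^2}(1+x^2)^{-\frac{\alpha}{2}},$$
as required.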
Similarly, we may construct $G^k_t({\cal B}^k_{\alpha})$ for all
$\alpha<0$ and find in particular that for each $t>0$,
$G^k_t({\cal B}^k_{\alpha})\in L^1(\R,|x|^{2k}dx)$ and is uniformly
bounded in $L^1(\R,|x|^{2k}dx)$ in each proper sub-half space of
$\R^2_+$.
\end{Rem}
\begin{Lem}\label{Lem.V.4}
Let $\alpha$ be a real number, $T\in{\cal
L}^p_{\alpha-\frac{1}{2},k}(\R)$ and $n\in\N$,
$n\geq\overline{(\frac{\alpha}{2})}$. Then the norm
$$\|T\|_{k,p,\alpha-\frac{1}{2}}+{\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))$$
is equivalent to the norm with $n=\overline{(\frac{\alpha}{2})}$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad If $T\in{\cal
L}^p_{\alpha-\frac{1}{2},k}(\R)$, from Proposition \ref{Prop.II.1}(iv), we have
$$\|\partial^n_tG^k_1(T)\|_{k,p}\leq
B(k,n,\alpha)\|T\|_{k,p,\alpha-\frac{1}{2}},\,\,\,n>l=\overline{(\frac{\alpha}{2})}.$$ Therefore by Lemma \ref{Lem.V.3}, we obtain $${\cal
A}^{k,\ast}_{p,q}(t^{l-\frac{\alpha}{2}}\partial^l_tG^k_t(T))\leq
B(k,\alpha,n)({\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+\|T\|_{k,p,\alpha-\frac{1}{2}}).$$
Conversely, a direct check shows that
$${\cal A}^{k,\ast}_{p,q}(t^{\beta+1}\partial_tG^k_t(T))\leq B(k,\beta){\cal A}^{k,\ast}_{p,q}(t^{\beta}G^k_t(T)),\,\,\,\beta>0.$$
Thus
$${\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))
\leq B(k,\alpha,n){\cal
A}^{k,\ast}_{p,q}(t^{l-\frac{\alpha}{2}}\partial^l_tG^k_t(T)),\,\,\,\mbox{where}\,\,\,n>l=\overline{(\frac{\alpha}{2})},$$
which proves the result.
\begin{Lem}\label{Lem.V.5}
Let $\alpha$ be real, $n=\overline{(\frac{\alpha}{2})}$ and $1\leq p,q\leq\infty$. Then the set of
tempered distributions $T\in{\cal L}^p_{\alpha-\frac{1}{2},k}(\R)$
for which $${\cal
A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))<\infty,$$
normed with
\begin{equation}\label{e.V.3}
{\cal
A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+\|T\|_{k,p,\alpha-\frac{1}{2}}
\end{equation} is topologically and algebraically equal to
$\wedge^k_{\alpha,p,q}(\R)$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad By definition of $\wedge^k_{\alpha,p,q}(\R)$, one only needs to consider the case $\alpha>0$.
Assume that $T\in{\cal L}^p_{\alpha-\frac{1}{2},k}(\R)$ and (\ref{e.V.3}) is finite. It is easily seen that
\begin{equation}\label{e.V.4}
{\cal A}^k_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))\leq
B(k,\alpha,q)\left({\cal
A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+
\|T\|_{k,p,\alpha-\frac{1}{2}}\right),\;\;\alpha>0.
\end{equation}
If $\alpha\geq\frac{1}{2}$, then it is obvious that
$T\in L^p(\R,|x|^{2k}dx)$. On the other hand, if $0<\alpha<\frac{1}{2}$, then for $t\geq1$,
$\|G^k_t(T)\|_{k,p}\leq B(k,\alpha)\|T\|_{k,p,\alpha-\frac{1}{2}}$. By relation (\ref{e.V.4}) and Lemma \ref{Lem.V.2},
there exists a function $\psi\in\wedge^k_{\alpha,p,q}(\R)$ such
that $G^k_t(T)=G^k_t(\psi)$ and
$$\|\psi\|_{\wedge^k_{\alpha,p,q}}\leq B(k,\alpha,q)\left\{{\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}
\partial^n_tG^k_t(T))+\|T\|_{k,p,\alpha-\frac{1}{2}}\right\}.$$
Now $T$ and $\psi$ have the same $k$-heat transform and thus are
equal as distributions. This implies that $T$ is a function and is in
$L^p(\R,|x|^{2k}dx)$, when $\alpha\in]0,\frac{1}{2}[$. Summarizing, the above two cases show that $T\in\wedge^k_{\alpha,p,q}(\R)$
and
$$\|T\|_{\wedge^k_{\alpha,p,q}}\leq B(k,\alpha,q)\left({\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+
\|T\|_{k,p,\alpha-\frac{1}{2}}\right),\;\;\alpha>0.$$
Conversely, let $T\in\wedge^k_{\alpha,p,q}(\R)$, so that $\|T\|_{\wedge^k_{\alpha,p,q}}$
is finite. Note that
$${\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))\leq{\cal
A}^k_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))<\infty,\,\,\,\alpha>0.$$
If $\alpha\in]0,\frac{1}{2}]$, then $T\in{\cal L}^p_{\alpha-\frac{1}{2},k}(\R)$ obviously. If $\alpha>\frac{1}{2}$, then from Theorem \ref{Th.V.1}, we obtain
$${\cal J}^k_{-(\alpha-\frac{1}{2})}(T)\in\wedge^k_{\frac{1}{2},p,q}(\R)\subset L^p(\R,|x|^{2k}dx)\,\,\,\mbox{and}\,\,\,\|{\cal J}^k_{-(\alpha-\frac{1}{2})}
(T)\|_{k,p}\leq B(k,\alpha)\|T\|_{\wedge^k_{\alpha,p,q}}.$$ Since
$\|T\|_{k,p,\alpha-\frac{1}{2}}=\|{\cal
J}^k_{-(\alpha-\frac{1}{2})}(T)\|_{k,p}$ and
$\|T\|_{\wedge^k_{\alpha,p,q}}$ is finite, the proof is finished.
\begin{Rem}\label{Rem.VI.101}
From Lemmas \ref{Lem.V.1} and \ref{Lem.V.5} for $\beta>0$, Remark
\ref{Rem.V.1.} for $\beta<0$ and Proposition \ref{Prop.II.1}(iv)
for $\beta=0$, we get
$$\|\partial^n_tG^k_t({\cal B}^k_{\beta})\|_{k,1}\leq B(k,\beta) t^{\frac{\beta}{2}-n},\;\;\mbox{where}\;\;n-\frac{\beta}{2}>0\;\;\mbox{and}\;\;t>0.$$
\end{Rem}
\par We can now state the main result of this section.
\begin{Th}\label{Th.V.2}
Let $\alpha$, $\beta$ be real and $1\leq p,q\leq\infty$. Then
${\cal J}^k_{\beta}$ is a topological isomorphism from $\wedge^k_{\alpha,p,q}(\R)$ onto $\wedge^k_{\alpha+\beta,p,q}(\R)$.
\end{Th}
{\footnotesize\bf{Proof}}\quad
Suppose $f\in\wedge^k_{\alpha,p,q}(\R)$. By Remark \ref{Rem.VI.101}, we obtain
$$\|\partial^l_tG^k_t({\cal J}^k_{\beta}(f))\|_{k,p}\leq B(k,\beta)t^{\frac{\beta}{2}-\overline{(\frac{\beta}{2})}}
\|\partial^s_tG^k_{\frac{t}{2}}(f)\|_{k,p},$$ where
$l=\overline{(\frac{\alpha}{2})}+\overline{(\frac{\beta}{2})}$ and
$s=\overline{(\frac{\alpha}{2})}$. As a consequence, we deduce
$${\cal A}^{k,\ast}_{p,q}(t^{l-\frac{\alpha+\beta}{2}}\partial^l_tG^k_t({\cal J}^k_{\beta}(f)))\leq B(k,\beta)
{\cal A}^{k,\ast}_{p,q}(t^{s-\frac{\alpha}{2}}\partial^s_tG^k_t(f)).$$
From Lemmas \ref{Lem.V.4} and \ref{Lem.V.5}, we conclude that
$${\cal J}^k_{\beta}f\in\wedge^k_{\alpha+\beta,p,q}(\R)\,\,\,\mbox{and}\,\,\,\|{\cal J}^k_{\beta}f\|_{\wedge^k_{\alpha+\beta,p,q}}\leq
B(k,\alpha,\beta)\|f\|_{\wedge^k_{\alpha,p,q}}.$$ Moreover, the
following relation
$$G^k_{t_1}({\cal B}^k_{\beta})\ast_kG^k_{t_2}({\cal B}^k_{-\beta})=F^k_{t_1+t_2},\,\,t_1,t_2>0,$$
provides that if $f\in\wedge^k_{\alpha,p,q}(\R)$ then ${\cal
J}^k_{-\beta}({\cal J}^k_{\beta}(f))=f$ as a distribution. A similar argument shows that if $f\in\wedge^k_{\alpha+\beta,p,q}(\R)$ then
${\cal J}^k_{\beta}({\cal J}^k_{-\beta}(f))=f$ as a distribution. The
announced statement follows.
\begin{Th}\label{Th.V.3}
Let $T\in S'(\R)$. Then for each integer
$n>\overline{(\frac{\alpha}{2})}$ and real number $\beta<\alpha$,
the norm
\begin{equation}\label{e.V.5}
{\cal
A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+\|T\|_{k,p,\beta}
\end{equation}
is equivalent to $\|T\|_{\wedge^k_{\alpha,p,q}}$, where $1\leq
p,q\leq\infty$.
\end{Th}
{\footnotesize\bf{Proof}}\quad Suppose
$T\in\wedge^k_{\alpha,p,q}(\R)$. Since $\alpha-\beta>0$ and by
Theorem \ref{Th.V.2}, we have
$$\|T\|_{k,p,\beta}=\|{\cal J}^k_{-\beta}T\|_{k,p}\leq\|{\cal J}^k_{-\beta}T\|_{\wedge^k_{\alpha-\beta,p,q}}\leq B(k,\alpha,\beta)
\|T\|_{\wedge^k_{\alpha,p,q}}.$$ Then, Lemmas \ref{Lem.V.5} and \ref{Lem.V.4} ensure that relation (\ref{e.V.5}) is finite.\\
Conversely, suppose that relation (\ref{e.V.5}) is finite and let
$l>\overline{(\frac{\alpha-\beta}{2})}$. By Lemmas \ref{Lem.V.3} and \ref{Lem.V.4}, Remark \ref{Rem.VI.101} and a change of variables, we have
$${\cal A}^{k,\ast}_{p,q}(t^{l-\frac{\alpha-\beta}{2}}\partial^l_tG^k_t({\cal J}^k_{-\beta}(T)))\leq B(k,n,\alpha,\beta)
\left\{{\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+\|T\|_{k,p,\beta}\right\}$$ and $\|{\cal J}^k_{-\beta}T\|_{k,p}=\|T\|_{k,p,\beta}$.
Note that
$$\|{\cal J}^k_{-\beta}T\|_{\wedge^k_{\alpha-\beta,p,q}}\leq B(k,\alpha,\beta)\left\{{\cal A}^{k,\ast}_{p,q}(t^{l-\frac{\alpha-\beta}{2}}
\partial^l_tG^k_t({\cal J}^k_{-\beta}(T)))+\|{\cal J}^k_{-\beta}T\|_{k,p}\right\},$$
hence from Theorem \ref{Th.V.2}, we obtain
$$\|T\|_{\wedge^k_{\alpha,p,q}}\leq B(k,\alpha,\beta)\|{\cal J}^k_{-\beta}T\|_{\wedge^k_{\alpha-\beta,p,q}}\leq B(k,n,\alpha,\beta)\left\{
{\cal A}^{k,\ast}_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(T))+\|T\|_{k,p,\beta}\right\}$$
which proves the theorem.\\\\
{\footnotesize{\bf{Note}}}
\par We are essentially defining $\wedge^k_{-\alpha,p,q}(\R)$ to
be ${\cal
J}^k_{-\alpha-\frac{1}{2}}(\wedge^k_{\frac{1}{2},p,q}(\R))$, $\alpha>0$. The
choice of $\frac{1}{2}$ is arbitrary. Any $\beta>0$ would work as
well.
\\\par The remainder of this section is devoted to some properties and embedding theorems for the spaces $\wedge^k_{\alpha,p,q}(\R)$.
\begin{Th}\label{Th.V.4}
Let $f$ be in
$\wedge^k_{\alpha_0,p_0,q_0}(\R)\cap\wedge^k_{\alpha_1,p_1,q_1}(\R)$.
Then $f$ belongs to $\wedge^k_{\alpha,p,q}(\R)$ and we have
$$\|f\|_{\wedge^k_{\alpha,p,q}}\leq
B(k,\alpha_0,\alpha_1)\|f\|^{1-\theta}_{\wedge^k_{\alpha_0,p_0,q_0}}\|f\|^{\theta}_{\wedge^k_{\alpha_1,p_1,q_1}},$$
where $\alpha=(1-\theta)\alpha_0+\theta\alpha_1$,
$\dfrac{1}{p}=\dfrac{1-\theta}{p_0}+\dfrac{\theta}{p_1}$,
$\dfrac{1}{q}=\dfrac{1-\theta}{q_0}+\dfrac{\theta}{q_1}$, and
$\theta\in[0,1]$. In particular
\\ (a)
$\|f\|_{k,p,\beta}\leq\|f\|^{1-\theta}_{k,p_0,\beta}\|f\|^{\theta}_{k,p_1,\beta}$,
$\beta<\min(\alpha_0,\alpha_1)$.
\\ (b) ${\cal A}^k_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_tG^k_t(f))\leq\left[{\cal A}^k_{p_0,q_0}(t^{n-\frac{\alpha_0}{2}}\partial^n_tG^k_t(f))
\right]^{1-\theta}\left[{\cal
A}^k_{p_1,q_1}(t^{n-\frac{\alpha_1}{2}}\partial^n_tG^k_t(f))
\right]^{\theta}$, where
$n>\max(\frac{\alpha_0}{2},\frac{\alpha_1}{2})$.
\end{Th}
{\footnotesize\bf{Proof}}\quad This can be proved from Theorem
\ref{Th.V.3} and the logarithmic convexity of the $L^p_k$-norms.
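Recall that the logarithmic convexity of the $L^p_k$-norms states that, with $\dfrac{1}{p}=\dfrac{1-\theta}{p_0}+\dfrac{\theta}{p_1}$ and $\theta\in[0,1]$,
$$\|f\|_{k,p}\leq\|f\|^{1-\theta}_{k,p_0}\|f\|^{\theta}_{k,p_1},$$
which follows from H\"older's inequality applied to the product $|f|^{p(1-\theta)}\cdot|f|^{p\theta}$ with the conjugate exponents $\frac{p_0}{p(1-\theta)}$ and $\frac{p_1}{p\theta}$.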
\\\par Let us study some inclusions among the generalized Dunkl-Lipschitz spaces:
\begin{Lem}\label{Lem.V.6}
The continuous embedding
$$\wedge^k_{\alpha_1,p,q_1}(\R)\hookrightarrow\wedge^k_{\alpha_2,p,q_2}(\R)$$
holds if either
\\ (i) $\alpha_1>\alpha_2$ (in which case $q_1$ and $q_2$ need not be
related), or
\\ (ii) $\alpha_1=\alpha_2$ and $q_1\leq q_2$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad We give the argument for
$q\neq\infty$. The case $q=\infty$ is done similarly. We may
suppose $0<\alpha_2<\alpha_1<1$. Let
$f\in\wedge^k_{\alpha_1,p,q_1}(\R)$ and consider first the case
$q_1=q_2$. On the one hand, it is easy to see that
$${\cal
A}^{k,\ast}_{p,q_1}(t^{1-\alpha_2}\partial_tP^k_t(f))\leq\|f\|_{\wedge^k_{\alpha_1,p,q_1}}.$$
On the other hand, using the fact that $\|\partial_tP^k_t(f)\|_{k,p}\leq B(k)t^{-1}\|f\|_{k,p}$, we get
$$\left\{\dint_1^{\infty}\left[t^{1-\alpha_2}\|\partial_tP^k_t(f)\|_{k,p}\right]^{q_1}\dfrac{dt}{t}\right\}^{\frac{1}{q_1}}
\leq B(k,\alpha_2,q_1)\|f\|_{\wedge^k_{\alpha_1,p,q_1}}$$ which
proves that $\wedge^k_{\alpha_1,p,q_2}(\R)\hookrightarrow\wedge^k_{\alpha_2,p,q_2}(\R)$. Moreover, if $q_1<q_2$, Lemma 5.2 of \cite{S-K} and
Lemma 1.2 of \cite{Johnson} show that
$\wedge^k_{\alpha_1,p,q_1}(\R)\hookrightarrow\wedge^k_{\alpha_1,p,q_2}(\R)$.
Hence
$\wedge^k_{\alpha_1,p,q_1}(\R)\hookrightarrow\wedge^k_{\alpha_1,p,q_2}(\R)\hookrightarrow\wedge^k_{\alpha_2,p,q_2}(\R)$.
If $q_1>q_2$, let $\frac{1}{s}=\frac{1}{q_2}-\frac{1}{q_1}$. Applying
H\"older's inequality and reasoning as before finishes the
proof of the lemma.
\begin{Lem}\label{Lem.V.7}
If $1\leq p_1\leq p_2$ and
$\alpha_1-\frac{2k+1}{p_1}=\alpha_2-\frac{2k+1}{p_2}$, we have the
continuous embedding
$$\wedge^k_{\alpha_1,p_1,q}(\R)\hookrightarrow\wedge^k_{\alpha_2,p_2,q}(\R).$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad We may assume that
$0<\alpha_1,\alpha_2<1$. If $f\in\wedge^k_{\alpha_1,p_1,q}(\R)$, Young's inequality yields that
$$\|\partial_tP^k_t(f)\|_{k,p_2}\leq\|\partial_tP^k_{\frac{t}{2}}(f)\|_{k,p_1}\|P^k_{\frac{t}{2}}\|_{k,s}\leq
B(k,p_1,p_2)
t^{(-\frac{1}{p_1}+\frac{1}{p_2})(2k+1)}\|\partial_tP^k_{\frac{t}{2}}(f)\|_{k,p_1},
$$ where $\frac{1}{s}=\frac{1}{p_2}-\frac{1}{p_1}+1$. Hence ${\cal A}^k_{p_2,q}(t^{1-\alpha_2}\partial_tP^k_t(f))\leq B(k,\alpha_1,p_1,p_2){\cal
A}^k_{p_1,q}(t^{1-\alpha_1}\partial_tP^k_t(f))$. On the other
hand, for $t\geq1$, $\|P^k_t(f)\|_{k,p_2}\leq
B(k,p_1,p_2)\|f\|_{k,p_1}$ and therefore by Lemma \ref{Lem.V.2},
we can deduce that $f\in\wedge^k_{\alpha_2,p_2,q}(\R)$ and
$\|f\|_{\wedge^k_{\alpha_2,p_2,q}}\leq
B(k,\alpha_1,p_1,p_2)\|f\|_{\wedge^k_{\alpha_1,p_1,q}}$, which ends the proof.
\\\par As a consequence of Lemmas \ref{Lem.V.6} and
\ref{Lem.V.7}, we deduce the following theorem:
\begin{Th}\label{Th.V.5}
Let $\alpha_1,\alpha_2\in\R$ and $1\leq p_1\leq p_2\leq\infty$, then we have the continuous
embedding
$$\wedge^k_{\alpha_1,p_1,q_1}(\R)\hookrightarrow\wedge^k_{\alpha_2,p_2,q_2}(\R)$$
if $\alpha_1-\frac{2k+1}{p_1}>\alpha_2-\frac{2k+1}{p_2}$ or if
$\alpha_1-\frac{2k+1}{p_1}=\alpha_2-\frac{2k+1}{p_2}$ and $1\leq
q_1\leq q_2\leq\infty$.
\end{Th}
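\par For instance, taking $p_1=1$, $p_2=2$ and $q_1=q_2=q$ in the limiting case, Theorem \ref{Th.V.5} gives the continuous embedding
$$\wedge^k_{\alpha,1,q}(\R)\hookrightarrow\wedge^k_{\alpha-\frac{2k+1}{2},2,q}(\R),\,\,\,\alpha\in\R,$$
a Sobolev-type trade of $\frac{2k+1}{2}$ orders of smoothness for integrability.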
\par The action of Dunkl derivatives on Dunkl-Lipschitz spaces is
as follows:
\begin{Prop}\label{Prop.V.2}
Let $\alpha>0$, $1\leq p,q\leq\infty$ and $0\leq n\leq\alpha$.
Then the norm $\|f\|_{k,p}+\|{\cal
D}^n_kf\|_{\wedge^k_{\alpha-n,p,q}}$ is equivalent to
$\|f\|_{\wedge^k_{\alpha,p,q}}$.
\end{Prop}
{\footnotesize\bf{Proof}}\quad If $\|f\|_{\wedge^k_{\alpha,p,q}}$ is
finite, then according to Proposition \ref{Prop.V.1}(v) and Remark (5.14) of \cite{S-K}, it is easy to see that
$${\cal A}^k_{p,q}(t^{\overline{\alpha}-(\alpha-n)}\partial_t^{\overline{\alpha}}{\cal D}^n_kP^k_t(f))\leq B(k,\alpha,n)\|f\|
_{\wedge^k_{\alpha,p,q}},$$ and $$\|{\cal
D}^n_kP^k_t(f)\|_{k,p}\leq B(k,n)\|f\|_{k,p},\,\,\,t\geq1.$$ Thus by
Lemma \ref{Lem.V.2}, we deduce
that there exists $g\in\wedge^k_{\alpha-n,p,q}(\R)$ such that
${\cal D}^n_kP^k_t(f)=P^k_t(g)$ and
$\|g\|_{\wedge^k_{\alpha-n,p,q}}\leq
B(k,\alpha,n)\|f\|_{\wedge^k_{\alpha,p,q}}$. On the other hand,
since ${\cal D}^n_kP^k_t(f)=P^k_t({\cal D}^n_kf)$ (in the distribution sense), we have $P^k_t(g)=P^k_t({\cal D}^n_kf)$. Letting $t\longrightarrow0$ yields that $g={\cal D}^n_kf$. An easy check shows the converse result.
\begin{Lem}\label{Lem.V.8}
If $f\in\wedge^k_{\alpha,\infty,q}(\R)$, $\alpha\in]0,1[$, then
$f$ is uniformly continuous.
\end{Lem}
{\footnotesize\bf{Proof}}\quad It suffices to show that
$\|\triangle_{y,k}f\|_{k,\infty}\rightarrow0$ as $y\rightarrow0$.
By Theorem \ref{Th.V.5},
$f\in\wedge^k_{\alpha,\infty,\infty}(\R)$, so
$\|\triangle_{y,k}f\|_{k,\infty}\leq A|y|^{\alpha}$ and thus tends
to zero as $y\rightarrow0$.
\begin{Th}\label{Th.V.6}
$\wedge^k_{\alpha,p,q}(\R)$ is complete if $1\leq p,q\leq\infty$ and $\alpha\in\R$.
\end{Th}
{\footnotesize\bf{Proof}}\quad By Theorem \ref{Th.V.2}, we may suppose $\alpha\in]0,1[$. If $(f_n)$ is a Cauchy sequence in
$\wedge^k_{\alpha,p,q}(\R)$, then $(f_n)$ is obviously a Cauchy sequence in
$L^p_k$, and therefore converges in $L^p_k$ to a function $f$. Hence $\|\partial_tP^k_t(f_s)\|_{k,p}\rightarrow\|\partial_tP^k_t(f)\|_{k,p}$ as
$s\rightarrow\infty$ and, for $m=1,2,\cdots$, $\|\partial_t(P^k_tf_m-P^k_tf_s)\|_{k,p}\rightarrow\|\partial_t(P^k_tf_m-P^k_tf)\|_{k,p}$ as $s\rightarrow\infty$.
Consequently, by Fatou's Lemma, we have
$${\cal A}^k_{p,q}(t^{1-\alpha}\partial_t(P^k_tf_m-P^k_tf))\leq\epsilon_m:=\displaystyle\liminf_{s\rightarrow\infty}{\cal A}^k_{p,q}(t^{1-\alpha}
\partial_t(P^k_tf_m-P^k_tf_s)),\;\;\mbox{where}\;\;\epsilon_m\longrightarrow0\;\;\mbox{as}\;\;m\rightarrow\infty,$$ and ${\cal A}^k_{p,q}(t^{1-\alpha}\partial_tP^k_t(f))
\leq\displaystyle\liminf_{s\rightarrow\infty}\|f_s\|_{\wedge^k_{\alpha,p,q}}<\infty$.
So $f\in\wedge^k_{\alpha,p,q}(\R)$ and $f_m \rightarrow f$ in
$\wedge^k_{\alpha,p,q}(\R)$ as $m\rightarrow\infty$, which concludes the proof.
\\\par The object of the next section will be to derive a similar result for $k$-temperatures on $\R^2_+$.
\section{\footnotesize{{\bf{Dunkl-Lipschitz Spaces of $k$-Temperatures}}}}
We shall define a generalized Dunkl-Lipschitz space of
$k$-temperatures on $\R^2_+$, which will be denoted by ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$, and prove that various norms on it are equivalent to
the one in our original definition. Finally, the isomorphism of ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ and $\wedge^k_{\alpha,p,q}(\R)$ is established.
\\ We begin this section with the following definitions and standard lemmas.
\begin{Def}
Let $\alpha$ be a real number. For any $k$-temperature ${\cal U}$ in
${\cal T}^k(\R^2_+)$, $1\leq p\leq\infty$ and $1\leq
q\leq\infty$, let
$${\cal E}^{k,\alpha}_{p,q}({\cal U}):=\left\{
\begin{array}{ll}
\left\{\dint_0^{+\infty}t^{q-1}e^{-t}\|{\cal
J}^k_{-\alpha-2}{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}&(1\leq
q<\infty),\\[1mm]
\displaystyle\sup_{t>0}\left\{te^{-t}\|{\cal
J}^k_{-\alpha-2}{\cal U}(.,t)\|_{k,p}\right\}&(q=\infty),
\end{array}
\right.
$$
with infinite values being allowed.
\end{Def}
\begin{Lem}\label{Lem.VI.1}
Let $\alpha$, ${\cal U}$, $p$, $q$ be as in the above definition and let
$\gamma$ be a real number. Then $${\cal
E}^{k,\alpha}_{p,q}({\cal U})={\cal E}^{k,\alpha+\gamma}_{p,q}({\cal J
}^k_{\gamma}{\cal U}).$$
\end{Lem}
{\footnotesize\bf{Proof}}\quad By Theorem \ref{Th.IV.2}, ${\cal
J}^k_{-\alpha-2}{\cal U}={\cal J}^k_{-\alpha-\gamma-2}({\cal
J}^k_{\gamma}{\cal U})$ which implies that ${\cal
E}^{k,\alpha}_{p,q}({\cal U})={\cal E}^{k,\alpha+\gamma}_{p,q}({\cal J
}^k_{\gamma}{\cal U})$.
\begin{Def}\label{Def.VII.102}
Let $1\leq p,q\leq\infty$, let $\alpha$, $\beta$ be real numbers such
that $\beta>\alpha$. For any $k$-temperature ${\cal U}$ in ${\cal
T}^k(\R^2_+)$, let
$${\cal E}^{k,\alpha,\beta}_{p,q}({\cal U}):=\left\{
\begin{array}{ll}
\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-\alpha)-1}e^{-t}\|{\cal
J}^k_{-\beta}{\cal U}(.,t)\|^q_{k,p}dt\right\}^{\frac{1}{q}}&(1\leq
q<\infty),\\[1mm]
\displaystyle\sup_{t>0}\left\{t^{\frac{1}{2}(\beta-\alpha)}e^{-t}\|{\cal
J}^k_{-\beta}{\cal U}(.,t)\|_{k,p}\right\}&(q=\infty),
\end{array}
\right.
$$
and $${\cal L}^k_p({\cal U}):=\displaystyle\sup_{t\geq\frac{1}{2}}\|{\cal U}(.,t)\|_{k,p}.$$
\end{Def}
\begin{Rem}\label{Rem.VII.1}
Let $1\leq p,q\leq\infty$ and let $\gamma$ be a real number. If ${\cal U}\in{\cal T}^k(\R^2_+)$ and ${\cal E}^{k,\alpha}_{p,q}({\cal U})<\infty$, where $\alpha$ is real, then Theorem \ref{Th.IV.4} and Corollary \ref{Cor.IV.1} yield that for each $a>0$ there exists a positive constant $B$ such that for all $t\geq a$
$$\|{\cal J}^k_{\gamma}{\cal U}(.,t)\|_{k,p}\leq B(k,\alpha,\gamma,q,a){\cal E}^{k,\alpha}_{p,q}({\cal U}).$$
\end{Rem}
\begin{Lem}\label{Lem.VI.2}
Let $\alpha$, $\beta$, ${\cal U}$, $p$, $q$ be as in
Definition \ref{Def.VII.102}. Then
\\ (i) ${\cal E}^{k,\alpha}_{p,q}({\cal U})$ is equivalent to
${\cal E}^{k,\alpha,\beta}_{p,q}({\cal U})$.
\\ (ii) ${\cal E}^{k,\alpha}_{p,q}({\cal U})$ is equivalent to ${\cal A}^{k,\ast}_{p,q}\left(t^{\frac{1}{2}(\beta-\alpha)}{\cal J}^k_{-\beta}{\cal U}\right)+{\cal L}^k_p({\cal U})$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad The proof is a simple consequence of Remark \ref{Rem.VII.1}, Theorem \ref{Th.IV.5} and Corollary \ref{Cor.IV.1}.
\begin{Lem}\label{Lem.VI.3}
Let $\alpha$ be real number, ${\cal U}\in{\cal T}^k(\R^2_+)$, $1\leq
p\leq\infty$, $1\leq q\leq\infty$, and $n$ be a non-negative
integer greater than $\frac{\alpha}{2}$. Then ${\cal
E}^{k,\alpha}_{p,q}({\cal U})$ is equivalent to ${\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}\partial^n_t{\cal U}\right)+{\cal L}^k_p({\cal U})$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad If $n=0$, the result follows from Lemma \ref{Lem.VI.2}(ii).
First suppose that ${\cal E}^{k,\alpha}_{p,q}({\cal U})<\infty$. For $i=0,1,\cdots,n-1$, we have
$$\|{\cal J}^k_{-2i}{\cal U}(.,t)\|_{k,p}\leq\|{\cal J}^k_{-2n}{\cal U}(.,t)\|_{k,p}$$ and since $\partial^n_t{\cal U}(.,t)$ is a linear combination of ${\cal U}(.,t),\;\;{\cal J}^k_{-2}{\cal U}(.,t),\;\;\cdots,{\cal J}^k_{-2n}{\cal U}(.,t)$, it follows that
\begin{equation}\label{e.VI.1}
\|\partial^n_t{\cal U}(.,t)\|_{k,p}\leq B(k,n)\|{\cal J}^k_{-2n}{\cal U}(.,t)\|_{k,p}
\end{equation}
and therefore, by Lemma \ref{Lem.VI.2}(ii), we obtain
$${\cal L}^k_p({\cal U})+{\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}\partial^n_t{\cal U}\right)\leq{\cal L}^k_p({\cal U})+{\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}{\cal J}^k_{-2n}{\cal U}\right)\leq B(k,n,\alpha,q){\cal E}^{k,\alpha}_{p,q}({\cal U}).
$$ Conversely, suppose that ${\cal L}^k_p({\cal U})+{\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}\partial^n_t{\cal U}\right)$ is finite. From Theorem \ref{Th.III.2}, Minkowski's integral inequality, relation (\ref{e.I.4}) and Proposition \ref{Prop.II.1}(iv),
we deduce that, for $i=1,2,\cdots,n$,
$$\displaystyle\sup_{t\geq1}\|\partial^i_t{\cal U}(.,t)\|_{k,p}\leq B(k,i){\cal L}^k_p({\cal U})$$ and
\begin{equation}\label{e.VI.2}
\|\partial^i_t{\cal U}(.,t)\|_{k,p}\leq B(k,n) {\cal L}^k_p({\cal U})+\|\partial^n_t{\cal U}(.,t)\|_{k,p}.
\end{equation}
Thus $${\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}{\cal J}^k_{-2n}{\cal U}\right)\leq B(k,n,\alpha,q)\left({\cal L}^k_p({\cal U})+{\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}\partial^n_t{\cal U}\right)\right).$$
Again Lemma \ref{Lem.VI.2}(ii) gives the desired result.
\\\par We now turn to the definition of the generalized Dunkl-Lipschitz space of $k$-temperatures on $\R^2_+$.
\begin{Def}
Let $\alpha$ be a real number, $1\leq p\leq\infty$, $1\leq
q\leq\infty$. We define
$${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+):=\left\{{\cal U}\in{\cal T}^k(\R^2_+)\,:{\cal E}^{k,\alpha}_{p,q}({\cal U})<\infty\right\};$$
$${\cal T}\lambda^k_{\alpha,p,\infty}(\R^2_+):=\left\{{\cal U}\in{\cal T}\wedge^k_{\alpha,p,\infty}(\R^2_+)\,:\|{\cal J}^k_{-\alpha-2}{\cal U}(.,t)\|_{k,p}=
o(t^{-1})\,\,\,\mbox{as}\,\,\,t\longrightarrow0^+\right\}.$$
Then, ${\cal E}^{k,\alpha}_{p,q}$ is a norm on ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$.
\end{Def}
\par First, we give the following lemma:
\begin{Lem}\label{Lem.VI'.2}
Let $1\leq p,q\leq\infty$, $\alpha$ and $\gamma$ be real
numbers. Then ${\cal J}^k_{\gamma}$ is an isometric isomorphism of
${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\alpha,p,\infty}(\R^2_+)$ resp.) onto ${\cal
T}\wedge^k_{\alpha+\gamma,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\alpha+\gamma,p,\infty}(\R^2_+)$ resp.) with inverse
${\cal J}^k_{-\gamma}$.
\end{Lem}
{\footnotesize\bf{Proof}}\quad Since ${\cal J}^k_{-\alpha-2}{\cal U}={\cal J}^k_{-\alpha-\gamma-2}\left({\cal J}^k_{\gamma}{\cal U}\right)$, then Corollary \ref{Cor.IV.01}
proves the result.
\\\\ The basic properties of the spaces ${\cal
T}\wedge^k_{\alpha,p,q}(\R^2_+)$ are collected in the following theorem:
\begin{Th}\label{Th.VI'.1}
Let $1\leq p,q\leq\infty$ and $\alpha$ be a real number.
\\ (i) If $1\leq q_1< q_2<\infty$, we have the continuous embedding
$${\cal T}\wedge^k_{\alpha,p,q_1}(\R^2_+)\hookrightarrow{\cal T}\wedge^k_{\alpha,p,q_2}(\R^2_+)\hookrightarrow{\cal
T}\lambda^k_{\alpha,p,\infty}(\R^2_+)\hookrightarrow{\cal
T}\wedge^k_{\alpha,p,\infty}(\R^2_+).$$
\\ (ii) If $\beta$ is a real number such that
$\beta>\alpha$, then ${\cal E}^{k,\alpha,\beta}_{p,q}$ is an
equivalent norm on ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$; moreover
${\cal U}\in{\cal T}\lambda^k_{\alpha,p,\infty}(\R^2_+)$ if and only if
${\cal U}\in{\cal T}\wedge^k_{\alpha,p,\infty}(\R^2_+)$ and $\|{\cal
J}^k_{-\beta}{\cal U}(.,t)\|_{k,p}=o(t^{-\frac{1}{2}(\beta-\alpha)})$
as $t\longrightarrow0^+$.
\\ (iii) If $n$ is a non-negative integer greater than
$\frac{1}{2}\alpha$, then ${\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}\partial^n_t{\cal U}\right)+{\cal L}^k_p({\cal U})$
is an equivalent norm on ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$.
\\ (iv) The spaces ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$,
where $p$, $q$ are fixed and $\alpha$ varies, are isomorphic to
one another. The same conclusion holds for the spaces ${\cal
T}\lambda^k_{\alpha,p,\infty}(\R^2_+)$.
\end{Th}
{\footnotesize\bf{Proof}}\quad (i) follows easily from Theorem \ref{Th.IV.4}.
(ii) is an easy consequence of Lemma \ref{Lem.VI.2} and Theorem
\ref{Th.IV.5}(iii). (iii) is derived from Lemma \ref{Lem.VI.3}. To prove (iv), let $\delta$ be another real number. It
then follows from Lemma \ref{Lem.VI'.2} that ${\cal J}^k_{-n}$ is
an isometric isomorphism of ${\cal T}\wedge^k_{\delta,p,q}(\R^2_+)$ (${\cal T}\lambda^k_{\delta,p,\infty}(\R^2_+)$ resp.) onto ${\cal
T}\wedge^k_{\delta-n,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\delta-n,p,\infty}(\R^2_+)$ resp.); denote its inverse by
$({\cal J}^k_{-n})^{-1}$. The same lemma again implies that ${\cal
J}^k_{\delta-\alpha-n}$ is an isometric isomorphism of ${\cal
T}\wedge^k_{\alpha,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\alpha,p,\infty}(\R^2_+)$ resp.) onto ${\cal
T}\wedge^k_{\delta-n,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\delta-n,p,\infty}(\R^2_+)$ resp.). Consequently, $({\cal
J}^k_{-n})^{-1}\circ{\cal J}^k_{\delta-\alpha-n}$ is an isometric
isomorphism of ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\alpha,p,\infty}(\R^2_+)$ resp.) onto ${\cal
T}\wedge^k_{\delta,p,q}(\R^2_+)$ (${\cal
T}\lambda^k_{\delta,p,\infty}(\R^2_+)$ resp.).
\\\par The following theorem establishes the relation between $\wedge^k_{\alpha,p,q}(\R)$ and ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$.
\begin{Th}\label{Th.VII.1} If $1\leq p,q\leq\infty$ and $\alpha$ is real, then the $k$-heat transform is a topological isomorphism from $\wedge^k_{\alpha,p,q}(\R)$ onto ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$. Moreover if $f\in\wedge^k_{\alpha,p,q}(\R)$, then $G^k_t(f)\in{\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ and ${\cal E}^{k,\alpha}_{p,q}(G^k_t(f))\leq B(k,\alpha)\|f\|_{\wedge^k_{\alpha,p,q}}$. Conversely, if ${\cal U}\in{\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$, then there exists $f\in\wedge^k_{\alpha,p,q}(\R)$ such that $${\cal U}(.,t)=G^k_t(f)(.),\;\;t>0,\;\;\mbox{and}\;\;\|f\|_{\wedge^k_{\alpha,p,q}}\leq B(k,\alpha){\cal E}^{k,\alpha}_{p,q}({\cal U}).$$
\end{Th}
{\footnotesize\bf{Proof}}\quad Let $f\in\wedge^k_{\alpha,p,q}(\R)$. By Theorem \ref{Th.II.2} and Lemmas \ref{Lem.V.2'} and \ref{Lem.VI.3}, we deduce that $$G^k_t(f)\in{\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)\;\;\mbox{and}\;\;{\cal E}^{k,\alpha}_{p,q}(G^k_t(f))\leq B(k,\alpha)\|f\|_{\wedge^k_{\alpha,p,q}}.$$ To prove the converse, we first treat the case $\alpha>0$. For ${\cal U}\in{\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$, let ${\cal V}(.,t)={\cal J}^k_{-\alpha-2}{\cal U}(.,t)$, $t>0$; then for $s>0$
$${\cal U}(x,s)=\dfrac{1}{\Gamma(\frac{\alpha}{2}+1)}\dint_0^{+\infty}\xi^{\frac{\alpha}{2}}e^{-\xi}{\cal V}(x,\xi+s)d\xi.$$
Moreover, Theorem \ref{Th.IV.4} yields
\begin{equation}\label{e.VII.1}
\|{\cal J}^k_{-\alpha-2}{\cal U}(.,t)\|_{k,p}\leq B(q)(t^{-1}+1){\cal E}^{k,\alpha}_{p,q}({\cal U})
\end{equation}
which, combined with Minkowski's integral inequality, gives
$$\|{\cal U}(.,s)\|_{k,p}\leq B(q,\alpha){\cal E}^{k,\alpha}_{p,q}({\cal U})\dint_0^{+\infty}\xi^{\frac{\alpha}{2}}e^{-\xi}(\xi^{-1}+1)d\xi=B(q,\alpha){\cal E}^{k,\alpha}_{p,q}({\cal U}),\,\,\,\mbox{if}\,\,\,1\leq p\leq\infty.$$
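Indeed, the last integral is finite precisely because $\alpha>0$, since
$$\dint_0^{+\infty}\xi^{\frac{\alpha}{2}}e^{-\xi}\left(\xi^{-1}+1\right)d\xi=\Gamma\left(\frac{\alpha}{2}\right)+\Gamma\left(\frac{\alpha}{2}+1\right)<\infty.$$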
On the one hand, for $p=1$ and $\epsilon>0$, inequality (\ref{e.VII.1}) allows us to find $\delta$ satisfying $0<\delta<1$ such that $\|{\cal V}(.,t)\|_{k,1}\leq\epsilon t^{-1-\frac{1}{4}\alpha}$ for $0< t\leq\delta$. On the other hand, a simple verification shows that $\|{\cal U}(.,s)-{\cal U}(.,s')\|_{k,1}\rightarrow0$ as $s,s'\rightarrow0$. Combining these two facts with Remark \ref{Rem.VI.Temperature}, there exists a function $f\in L^p(\R,|x|^{2k}dx)$, $1\leq p\leq\infty$, such that ${\cal U}(.,t)=G^k_t(f)(.)$. Next, in the case $\alpha\leq0$, Lemma \ref{Lem.VI'.2} gives
$${\cal J}^k_{-\alpha+\frac{1}{2}}{\cal U}\in{\cal T}\wedge^k_{\frac{1}{2},p,q}(\R^2_+)\;\;\mbox{and}\;\;{\cal E}^{k,\frac{1}{2}}_{p,q}({\cal J}^k_{-\alpha+\frac{1}{2}}{\cal U})\leq B{\cal E}^{k,\alpha}_{p,q}({\cal U}).$$ Applying the case $\alpha>0$ treated above, there exists $g\in L^p(\R,|x|^{2k}dx)$, $p\in[1,\infty]$, such that ${\cal J}^k_{-\alpha+\frac{1}{2}}{\cal U}(.,t)=G^k_t(g)(.)$ and $\|g\|_{k,p}\leq B{\cal E}^{k,\alpha}_{p,q}({\cal U})$. By Theorem 3.12 of \cite{N-A-S},
$${\cal U}(.,t)=G^k_t(f)(.),\;\;f={\cal J}^k_{\alpha-\frac{1}{2}}(g)\;\;\mbox{and}\;\;\|f\|_{k,p,\alpha-\frac{1}{2}}=\|g\|_{k,p}\leq B{\cal E}^{k,\alpha}_{p,q}({\cal U}).$$
By Proposition \ref{Prop.II.1}(iv), we obtain for $\alpha>0$ $${\cal A}^k_{p,q}(t^{n-\frac{\alpha}{2}}\partial^n_t{\cal U})\leq {\cal A}^{k,\ast}_{p,q}\left(t^{n-\frac{1}{2}\alpha}\partial^n_t{\cal U}\right)+B(k,\alpha){\cal L}^k_p({\cal U}),\,\,n=\overline{(\frac{\alpha}{2})}.$$ Therefore by Lemma \ref{Lem.VI.3}, we obtain $\|f\|_{\wedge^k_{\alpha,p,q}}\leq B{\cal E}^{k,\alpha}_{p,q}({\cal U})$, $\alpha\in\R$, and the theorem is proved.
\begin{Th}\label{Th.VI'.2}
Let $1\leq p<r\leq\infty$, $1\leq q\leq\infty$, $\alpha$ be a real
number and $\delta=\frac{1}{p}-\frac{1}{r}$. Then
$$(i)\,\,\,{\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)\hookrightarrow{\cal
T}\wedge^k_{\alpha-\delta(2k+1),r,q}(\R^2_+), \,\,\,(ii)\,\,\,{\cal
T}\lambda^k_{\alpha,p,\infty}(\R^2_+)\hookrightarrow{\cal
T}\lambda^k_{\alpha-\delta(2k+1),r,\infty}(\R^2_+).$$
\end{Th}
{\footnotesize\bf{Proof}}\quad Let $h$ be such that $\frac{1}{r}=\frac{1}{p}+\frac{1}{h}-1$ (so that $\frac{1}{h}=1-\delta$). We give the argument for $q\neq\infty$; the case $q=\infty$ is handled similarly. Let ${\cal U}$ be in ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ and $\beta$ be a real number greater than $\alpha$. Theorem \ref{Th.VI'.1}(ii) implies that ${\cal E}^{k,\alpha,\beta}_{p,q}({\cal U})$ is equivalent to ${\cal E}^{k,\alpha}_{p,q}({\cal U})$ for $\beta>\alpha$. Then $t\mapsto\|{\cal J}^k_{-\beta}{\cal U}(.,t)\|_{k,p}$ is locally integrable on $]0,\infty[$, so the semi-group formula holds for ${\cal J}^k_{-\beta}{\cal U}$. By Theorem \ref{Th.III.2} and Young's inequality (Proposition 7.2 of \cite{Xu1}), we have
$$\|{\cal J}^k_{-\beta}{\cal U}(.,t)\|_{k,r}\leq\|{\cal J}^k_{-\beta}{\cal U}(.,\frac{t}{2})\|_{k,p}\|F^k_{\frac{t}{2}}\|_{k,h}.$$
By a simple verification, we deduce that $\|F^k_{\frac{t}{2}}\|_{k,h}\leq B(k,p,r)t^{-(k+\frac{1}{2})\delta}$.
Hence, we obtain
$$\|{\cal J}^k_{-\beta}{\cal U}(.,t)\|_{k,r}\leq B(k,p,r)t^{-(k+\frac{1}{2})\delta}\|{\cal J}^k_{-\beta}{\cal U}(.,\frac{t}{2})\|_{k,p}.$$
Therefore
$${\cal E}^{k,\alpha-2k\delta-\delta,\beta}_{r,q}({\cal U})=\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-\alpha+2k\delta+\delta)-1}e^{-t}\|{\cal J}^k_{-\beta}{\cal U}(.,t)\|^q_{k,r}dt\right\}^{\frac{1}{q}}$$$$\;\;\;\leq B(k,p,r)\left\{\dint_0^{+\infty}t^{\frac{1}{2}q(\beta-\alpha)-1}e^{-t}\|{\cal J}^k_{-\beta}{\cal U}(.,\frac{t}{2})\|^q_{k,p}dt\right\}^{\frac{1}{q}}\leq B(k,p,\alpha,\beta,r){\cal E}^{k,\alpha,\beta}_{p,q}({\cal U}),$$
from which part (i) follows after making use of Theorem \ref{Th.VI'.1}(ii) again. Assertion (ii) is proved in the same way.
\begin{Rem}
In view of the isomorphism between $\wedge^k_{\alpha,p,q}(\R)$ and ${\cal T}\wedge^k_{\alpha,p,q}(\R^2_+)$ established in Theorem \ref{Th.VII.1}, the conclusion of Theorem \ref{Th.VI'.2} also holds for the spaces $\wedge^k_{\alpha,p,q}(\R)$.
\end{Rem}
\section{Introduction}
\label{introduction}
The radial velocity (RV) technique remains one of the most successful
methods for the discovery of exoplanetary systems. At the present
time, more than 500 exoplanets have been discovered using the RV
technique, including a vast range of multi-planet systems and orbital
configurations. The success of this method is greatly dependent upon
the ability to accurately characterize the properties of the host
star. In particular, the evolution of star spots, magnetic fields, and
pulsations has well-studied effects on stellar radial velocity
variations \citep{saa97,que01,des07,heb14}. There have been numerous
recent cases where stellar activity has posed a significant problem in
the correct interpretation of RV data \citep{hat13,hat15,rob14,rob15}.
One source of activity-induced RV variations is that due to stellar
activity cycles, analogous to the 11-year Solar cycle. \citet{dra85}
predicted such a correlation, and \citet{dem87} reported the detection
of such a correlation in the solar CO lines at 2.3$\mu$m, and inferred
an amplitude of 30~m\,s$^{-1}$ from the effect. \citet{wri08} argued
that experience with the hundreds of sun-like stars from the
California Planet Survey (CPS) showed that such effects are not so
strong, and that activity cycles were probably not to blame for a
$\sim$15~m\,s$^{-1}$ RV variation in phase with an activity cycle in
HD~154345. Similar high-amplitude RV-activity correlations in
individual targets have been reported by \citet{mou11}, \citet{car14},
and \citet{rob13}. Nonetheless, for most stars such correlations are
small or absent, as argued by \citet{wri08} and \citet{san10}.
The star HD~99492 is an early-K dwarf in a binary orbit with HD~99491
(also known as 83 Leonis B and A, respectively). HD~99492 has a
parallax of $55.7\pm1.46$~mas and a distance of $17.96\pm0.47$~pc
\citep{van07a, van07b}. The mean angular separation of the stellar
components is 40.76\arcsec, leading to an average projected separation
of $\sim$730~AU. HD~99492 was found to harbor a 0.1~$M_J$ planet in a
17 day orbit by \citet{mar05}. The best-fit Keplerian orbital solution
at that time included a linear trend to account for a possible second
companion in the system. The orbital elements were updated by
\citet{mes11}, who claimed to have resolved the separate orbit of an
outer planet with a period of $\sim$5000 days.
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[angle=270,width=8.2cm]{f01a.ps} &
\includegraphics[angle=270,width=8.2cm]{f01b.ps}
\end{tabular}
\end{center}
\caption{{\it Left}: HD~99492 S-values determined from the complete
time series of Keck/HIRES spectra. {\it Right:} The periodogram
resulting from a Fourier analysis of the HD~99492 S-values,
revealing a broad peak between 3000--5000~days.}
\label{actfig}
\end{figure*}
Here we present new results for the system that reveal an activity
cycle in the star and further show that stellar activity amply
explains the signature of the outer planet (c). Section \ref{stellar}
provides new fundamental stellar parameters, including spectral
analysis, discussion of element abundances, and activity indices from
the complete dataset of 130 Keck/HIRES spectra. Section \ref{update}
presents our revised Keplerian orbital solution, including the
correlation of the outer planet signature with the activity
indices. Section \ref{photometry} includes photometry from 5 observing
seasons acquired over a span of 11 years. The photometric data
confirm the absence of brightness variations in phase with the
orbital period of planet b, thus confirming that the 17-day radial
velocity variations in HD~99492 are due to planetary reflex
motion. Our limited number of brightness measurements near the
predicted phase of planetary transit show no evidence for a transit
but fall short of ruling them out. We provide concluding remarks in
Section~\ref{conclusions}.
\section{Stellar Properties}
\label{stellar}
\subsection{Fundamental Parameters}
\label{stellar:sme}
The fundamental properties of HD~99492 have been previously
determined, for example by \citet{val05,tak07}. We used an upgraded
version of the Spectroscopy Made Easy (SME) package to model a
Keck/HIRES spectrum of HD~99492. Details of the SME package may be
found in \citet{val96,val05}. Briefly, SME uses an iterative technique
that combines model atmosphere analysis with Yonsei-Yale model
isochrones \citep{dem04} that utilize {\it Hipparcos} photometry and
distances \citep{van07a,van07b}. This approach produces a
self-consistent convergence with the measured surface gravity
\citep{val09}.
The results of our analysis are shown in Table~\ref{system}, including
values for the surface gravity $\log g$, rotational velocity $v \sin
i$, atmospheric abundance [Fe/H], effective temperature $T_{\rm eff}$
and stellar isochrone solution (mass $M_\star$, radius $R_\star$, and
age). These parameters are consistent with previous estimates of the
stellar properties and demonstrate that HD~99492 is a late-G/early-K
dwarf with an age similar to the Sun.
\subsection{Stellar Abundances}
\label{sec:abund}
The element abundances of HD~99492 have been measured by only two
groups to date, namely \citet{val05} and \citet{pet11}. To correct for
varying solar abundance normalizations, per the analysis within the
Hypatia Catalog \citep{hin14}, each dataset was re-normalized to the
\citet{lod09} scale. The [Fe/H] measurement per both groups is 0.40
dex, since \citet{pet11} adopted the stellar parameters and iron
abundance from \citet{val05} in their analysis. From \citet{pet11},
[O/H] $=$ 0.25 dex while \citet{val05} determined [Na/H] $=$ 0.41,
[Si/H] $=$ 0.34 dex, [Ti/H] $=$ 0.28 dex, and [Ni/H] $=$ 0.38 dex.
These results reveal a star that is markedly super-solar in both the
volatile and refractory elements.
\subsection{Stellar Activity}
\label{activity}
HD~99492 has been spectroscopically monitored using the HIRES echelle
spectrometer \citep{vog94} on the 10.0m Keck I telescope as part of
the CPS. For Keck/HIRES instrument configuration details, see
\citet{wri04,how09}. Our complete HIRES dataset contains 130
measurements spanning over 18 years, extending the time baseline of
the data reported by \citet{mes11} by over 5 years. The pipeline that
extracts the RVs from the spectra (see Section~\ref{update}) also
extracts Ca II H\&K line-profile variations and provides an index of
stellar activity \citep{noy84}. These data are calibrated to the
Mt. Wilson S-values, defined as the ratio of the sum of the flux in
the H\&K line cores to the sum of the two continuum bands on either
side \citep{wil68}. We include data acquired both before and after the
upgrade of the HIRES CCD in 2004 August \citep{isa10}, taking into
account the offset between pre-2004 and post-2004 calibrated datasets.
The time series of S-values are shown in the left panel of
Figure~\ref{actfig}. The periodic variation in S indicates that we
have observed just over one complete cycle of stellar activity in the
host star. To quantify the variation, we performed a fourier analysis
of the time series, resulting in the periodogram shown in the right
panel of Figure~\ref{actfig}. This analysis reveals a broad peak in
the power spectrum that lies between 3000--5000 days, with maximum
power occurring at $\sim$3650~days. The S-value periodicity is thus
consistent with the HD~99492c orbital period of $4970\pm744$~days
determined by \citet{mes11}. We elaborate further on the correlation
between stellar activity and possible planetary signature in
Section~\ref{update}.
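As an illustration of this kind of search, the following Python sketch computes a generalized Lomb-Scargle periodogram of an S-value time series; the file name and frequency grid are placeholders and are not part of our reduction pipeline.
\begin{verbatim}
# Illustrative sketch only; "svalues.txt" is a placeholder file
# holding two columns: epochs and S-values.
import numpy as np
from astropy.timeseries import LombScargle

t, s = np.loadtxt("svalues.txt", unpack=True)

freq, power = LombScargle(t, s).autopower(
    minimum_frequency=1.0 / 6000.0,   # periods out to ~6000 d
    maximum_frequency=1.0 / 100.0)    # and down to 100 d
best_period = 1.0 / freq[np.argmax(power)]
print(f"peak period ~ {best_period:.0f} d")  # expect ~3650 d here
\end{verbatim}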
\section{An Update to the Planetary System}
\label{update}
The RV measurements were extracted from the Keck/HIRES data with the
use of an iodine cell mounted at the spectrometer entrance slit as a
robust source of wavelength calibration \citep{mar92,val95}. The
modeling procedure for the Doppler shift of each stellar spectrum with
respect to the iodine spectrum is described further in
\citet{how09}. The discovery orbital solution for the HD~99492 system
by \citet{mar05} included a linear trend component. The 93 RV
measurements utilized by \citet{mes11} used a two-planet orbital
solution to account for the previously-noted linear trend. A
two-planet fit to our expanded dataset is able to recover a similar
orbital solution to that previously found by
\citet{mes11}. Considering the periodic stellar activity described in
Section~\ref{activity} as a source for the previously observed linear
trend and purported second planet, we performed a single-planet fit to
our dataset of 130 RV measurements, both with and without a linear
trend included. These fits were carried out using RVLIN, a
partially-linearized, least-squares fitting procedure described in
\citet{wri09}. The uncertainties in the resulting orbital parameters
were estimated using the BOOTTRAN bootstrapping routines described in
\citet{wan12}. We included a stellar jitter noise component of
4~m\,s$^{-1}$ in quadrature with the measurement uncertainties
\citep{wri05,but06}. With our new dataset and its increased timespan,
we find no evidence of a significant difference between the orbital
fits that do and do not include a linear trend. We thus adopt the
solution without the linear trend for which the complete orbital
solution is shown in Table~\ref{system} and in the top panel of Figure
\ref{rvfig}. Note that the $\gamma$ parameter shown in
Table~\ref{system} is the systemic velocity of the system with respect
to the zero point of the extracted RVs and thus is the systemic
velocity relative to the template spectrum. The complete RV dataset of
130 measurements of HD~99492 are listed in Table~\ref{rvs}.
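For reference, the Keplerian model underlying such fits can be sketched in a few lines of Python; this is an illustration only (not the RVLIN code itself), with parameter values taken from Table~\ref{system} and the 4~m\,s$^{-1}$ stellar jitter added in quadrature to the formal errors.
\begin{verbatim}
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e sin(E) by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, tp, e, omega, K, gamma):
    """Single-planet Keplerian radial-velocity curve (m/s)."""
    M = np.mod(2.0 * np.pi * (t - tp) / P, 2.0 * np.pi)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + gamma

t = np.array([10462.113958, 10546.987859])     # BJD - 2,440,000
rv = rv_model(t, 17.054, 13776.317, 0.07,
              np.radians(240.7), 6.98, -1.49)  # Table 1 values
sigma_eff = np.sqrt(np.array([1.49, 1.62])**2 + 4.0**2)  # with jitter
\end{verbatim}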
\begin{deluxetable}{lc}
\tablecaption{\label{system} System Parameters}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter} &
\colhead{Value}
}
\startdata
\noalign{\vskip -3pt}
\sidehead{HD99492}
~~~~$V$ & 7.58 \\
~~~~$B-V$ & 1.0 \\
~~~~Distance (pc) & $17.96 \pm 0.47$ \\
~~~~$T_\mathrm{eff}$ (K) & $4929 \pm 44$ \\
~~~~$\log g$ & $4.57 \pm 0.06$ \\
~~~~$v \sin i$ (km\,s$^{-1}$) & $0.41 \pm 0.5$ \\
~~~~$[$Fe/H$]$ (dex) & $0.3 \pm 0.03$ \\
~~~~$M_\star$ ($M_\odot$) & $0.85 \pm 0.02$ \\
~~~~$R_\star$ ($R_\odot$) & $0.78 \pm 0.02$ \\
~~~~Age (Gyrs) & $4.8 \pm 4.1$ \\
\sidehead{HD 99492 b}
~~~~$P$ (days) & $17.054 \pm 0.003$ \\
~~~~$T_c\,^{a}$ (JD -- 2,440,000) & $17367.776 \pm 0.855$ \\
~~~~$T_p\,^{b}$ (JD -- 2,440,000) & $13776.317 \pm 3.392$ \\
~~~~$e$ & $0.07 \pm 0.06$ \\
~~~~$\omega$ (deg) & $240.7 \pm 75.4$ \\
~~~~$K$ (m\,s$^{-1}$) & $6.98 \pm 0.53$ \\
~~~~$M_p$\,sin\,$i$ ($M_J$) & $0.079 \pm 0.006$ \\
~~~~$a$ (AU) & $0.123 \pm 0.001$ \\
\sidehead{System Properties}
~~~~$\gamma$ (m\,s$^{-1}$) & $-1.49 \pm 0.37$ \\
\sidehead{Measurements and Model}
~~~~$N_{\mathrm{obs}}$ & 130 \\
~~~~rms (m\,s$^{-1}$) & 4.33 \\
~~~~$\chi^2_{\mathrm{red}}$ & 1.03
\enddata
\tablenotetext{a}{Time of mid-transit.}
\tablenotetext{b}{Time of periastron passage.}
\end{deluxetable}
\begin{figure}
\includegraphics[width=8.2cm]{f02a.ps} \\
\includegraphics[angle=270,width=8.2cm]{f02b.ps}
\caption{{\it Top}: The complete 130 RV measurement dataset phased
on the best-fit Keplerian orbital solution for a single-planet
system (see Table~\ref{system}). {\it Bottom:} The residuals from
the best-fit solution plotted against the activity indices
described in Section~\ref{activity}. Our analysis shows that the
probability of no correlation between the one-planet RV residuals and
the S-values is $1.2 \times 10^{-5}$.}
\label{rvfig}
\end{figure}
To investigate further the impact of stellar activity on a two-planet
solution (see Section~\ref{activity}), we compared the S-values with
the RV residuals of the single-planet solution shown in
Table~\ref{system}. The resulting correlation diagram is shown in the
bottom panel of Figure~\ref{rvfig}. We quantified the significance of
the correlation using the Spearman rank correlation coefficient. The
Spearman coefficient lies in the range $-1 \leq r_s \leq 1$, and its
associated probability gives the likelihood that the observed
correlation would arise if the two quantities being examined were in
fact uncorrelated. The Spearman coefficient for the data shown in the bottom
panel of Figure~\ref{rvfig} is $r_s = 0.39$, indicative of a positive
correlation. The corresponding probability that the residuals of the
single-planet solution and the S-values would produce the observed
correlation if those quantities were in fact uncorrelated is $1.2
\times 10^{-5}$. We conducted a further test via an extensive
Monte-Carlo simulation that performs a Fisher-Yates shuffle,
randomizing the order of the residual data values. For each
realization, the Spearman's rank correlation coefficient and
probability were recalculated. For the shuffled datasets, the mean
recalculated null-correlation probability was 0.5, indicating that the correlation found above is
robust. This implies, in turn, that the second planet claimed by
\citet{mes11} is instead the result of stellar activity.
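The correlation test described above can be summarized by the following Python sketch; the residual and S-value arrays shown are synthetic placeholders standing in for the measured quantities.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Placeholders: substitute the one-planet RV residuals and the
# matching S-values from the HIRES time series.
resid = rng.normal(size=130)
svals = 0.4 * resid + rng.normal(size=130)

r_obs, p_obs = spearmanr(svals, resid)  # observed rank correlation

# Fisher-Yates style test: shuffle the residuals and recompute the
# coefficient and its null probability for each realization.
p_shuffled = [spearmanr(svals, rng.permutation(resid))[1]
              for _ in range(10000)]
print(r_obs, p_obs, np.mean(p_shuffled))  # mean p ~ 0.5 for shuffles
\end{verbatim}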
\section{Photometric Observations}
\label{photometry}
We observed HD~99492 photometrically as part of the Transit Ephemeris
Refinement and Monitoring Survey (TERMS) \citep{kan09} with the T12
0.8m Automatic Photoelectric Telescope (APT), one of several automated
telescopes operated by Tennessee State University (TSU) at Fairborn
Observatory in southern Arizona. The T12 APT is equipped with a
precision, two-channel photometer that simultaneously measures the
Str\"omgren $b$ and $y$ passbands using two EMI 9924QB photomultiplier
tubes. This makes T12 ideal for achieving high photometric precision
on relatively bright stars. The TSU APTs and their precision
photometers, observing strategy, data reduction techniques, and
photometric precision are described in detail by \citet{hen99}.
\LongTables
\begin{deluxetable}{ccc}
\tablewidth{0pc}
\tablecaption{\label{rvs} HD~99492 Radial Velocities}
\tablehead{
\colhead{Date} &
\colhead{RV} &
\colhead{$\sigma$} \\
\colhead{(BJD -- 2,440,000)} &
\colhead{(m\,s$^{-1}$)} &
\colhead{(m\,s$^{-1}$)}
}
\startdata
10462.113958 & -3.09 & 1.49 \\
10546.987859 & -3.70 & 1.62 \\
10837.932535 & -3.30 & 1.56 \\
10862.898993 & -6.29 & 1.57 \\
10955.876644 & -8.27 & 1.21 \\
11172.101597 & -2.40 & 1.62 \\
11228.035903 & -8.13 & 1.51 \\
11311.816319 & 2.63 & 1.59 \\
11544.172650 & -8.38 & 1.39 \\
11582.974942 & -0.25 & 1.35 \\
11704.805914 & -2.63 & 1.62 \\
11898.154005 & -15.85 & 1.48 \\
11973.053090 & 4.30 & 1.39 \\
12095.752049 & -3.42 & 1.63 \\
12097.753715 & -8.23 & 1.56 \\
12333.139410 & 5.37 & 1.69 \\
12334.079884 & 7.42 & 1.73 \\
12334.968322 & 2.05 & 1.53 \\
12364.068125 & 2.33 & 1.46 \\
12445.768264 & -7.95 & 1.40 \\
12654.009595 & 5.07 & 1.70 \\
12681.123484 & -10.56 & 1.46 \\
12711.858843 & 0.40 & 1.28 \\
12804.765590 & -7.83 & 1.43 \\
12805.876296 & 4.73 & 1.68 \\
12806.763634 & -1.00 & 1.39 \\
12989.171424 & -16.78 & 1.60 \\
13015.119444 & 11.38 & 1.59 \\
13016.134363 & 6.20 & 1.59 \\
13017.121921 & 8.51 & 1.38 \\
13044.127569 & -3.96 & 1.58 \\
13045.999074 & 4.24 & 1.62 \\
13071.870764 & -4.13 & 1.51 \\
13073.940752 & -7.32 & 1.47 \\
13076.983611 & -5.70 & 1.42 \\
13153.801470 & -0.04 & 1.29 \\
13153.804144 & -0.48 & 1.28 \\
13179.820787 & -1.32 & 1.46 \\
13179.824352 & -1.33 & 1.57 \\
13180.782037 & 7.05 & 1.78 \\
13181.808171 & 2.02 & 1.35 \\
13195.775914 & -12.34 & 1.46 \\
13196.794780 & -5.61 & 1.47 \\
13339.157731 & 5.37 & 0.89 \\
13340.150718 & 3.85 & 0.95 \\
13369.115093 & 3.54 & 0.92 \\
13425.000741 & 1.95 & 0.97 \\
13425.003310 & 1.67 & 1.02 \\
13480.759734 & -5.61 & 0.92 \\
13480.761887 & -6.24 & 0.90 \\
13725.101748 & -9.87 & 0.93 \\
13725.104595 & -9.46 & 0.91 \\
13747.133588 & 7.52 & 1.77 \\
13747.138218 & 4.20 & 1.10 \\
13747.145174 & 5.65 & 0.98 \\
13748.096169 & 7.62 & 0.91 \\
13748.098819 & 8.29 & 0.92 \\
13753.040035 & -4.47 & 0.91 \\
13753.043380 & -3.48 & 0.92 \\
13754.021562 & -5.64 & 0.92 \\
13754.024097 & -5.80 & 0.96 \\
13775.980868 & -3.38 & 0.96 \\
13775.983125 & -2.44 & 0.93 \\
13776.976910 & 2.23 & 0.91 \\
13776.979213 & 3.52 & 0.90 \\
13777.950347 & 2.20 & 0.95 \\
13777.952720 & 0.03 & 0.94 \\
13779.971238 & 5.97 & 0.99 \\
13779.974155 & 3.50 & 1.00 \\
13806.916794 & -13.28 & 0.95 \\
13806.918981 & -14.02 & 0.95 \\
13926.762188 & -5.05 & 0.99 \\
13926.768762 & -5.12 & 0.98 \\
13927.761840 & -9.91 & 0.90 \\
13927.764213 & -10.64 & 0.82 \\
14084.153623 & 2.60 & 1.02 \\
14084.157870 & 3.07 & 0.94 \\
14139.063102 & 2.72 & 0.86 \\
14139.064722 & 1.83 & 0.92 \\
14216.896134 & -13.32 & 0.91 \\
14216.899722 & -11.04 & 0.99 \\
14246.798900 & 0.92 & 0.85 \\
14246.800718 & 4.41 & 0.75 \\
14248.811678 & 1.16 & 0.87 \\
14250.800613 & -2.91 & 0.84 \\
14251.804815 & -2.25 & 0.82 \\
14255.765556 & -1.65 & 0.89 \\
14255.766991 & -2.77 & 0.90 \\
14277.743067 & 5.72 & 0.85 \\
14278.749942 & 6.86 & 0.83 \\
14279.748507 & 3.12 & 0.82 \\
14285.751910 & -3.80 & 0.97 \\
14294.758669 & 5.47 & 0.97 \\
14300.738970 & -2.54 & 0.96 \\
14455.109028 & -5.20 & 1.04 \\
14455.110868 & -5.48 & 1.06 \\
14456.129444 & -4.92 & 0.96 \\
14456.131400 & -7.51 & 0.92 \\
14493.134583 & -8.34 & 1.17 \\
14544.982280 & 0.64 & 0.95 \\
14546.963137 & 0.81 & 1.03 \\
14547.871944 & 7.05 & 1.10 \\
14548.847407 & 9.68 & 1.10 \\
14635.754444 & 2.00 & 0.98 \\
14638.750949 & -5.25 & 0.86 \\
14807.164861 & 0.91 & 1.08 \\
14985.837363 & -14.40 & 0.93 \\
14986.825959 & -9.95 & 1.00 \\
14987.839171 & -5.24 & 1.08 \\
15016.744103 & -8.88 & 0.93 \\
15173.123927 & -18.81 & 0.94 \\
15190.171113 & -15.69 & 1.05 \\
15311.807895 & -3.49 & 1.09 \\
15313.781452 & -0.95 & 0.97 \\
15319.842967 & 0.77 & 1.23 \\
15319.850617 & 1.65 & 1.11 \\
15376.739902 & -17.07 & 0.89 \\
15400.735697 & 6.43 & 1.01 \\
15635.955861 & -11.51 & 0.94 \\
15707.736240 & 7.18 & 0.95 \\
15905.166211 & -6.50 & 1.01 \\
16111.736847 & -6.40 & 0.95 \\
16328.051766 & -0.37 & 1.17 \\
16614.127514 & 9.08 & 1.15 \\
16639.094368 & -4.03 & 0.91 \\
16675.173277 & -4.19 & 1.18 \\
16827.757817 & 3.70 & 1.03 \\
17065.116057 & -7.00 & 1.02 \\
17203.750309 & 4.08 & 1.01 \\
17217.748571 & -3.38 & 1.02
\enddata
\end{deluxetable}
The T12 telescope acquired 368 nightly observations of HD~99492 during
the 2004, 2009, 2010, 2013, and 2014 observing seasons. These data are
plotted against Heliocentric Julian Date in the top panel of
Figure~\ref{photfig}. The observations are insufficient to detect the
long-term activity cycle described in
Section~\ref{activity}. Therefore, we removed very small
season-to-season variability in HD~99492 and/or its comparison stars
by normalizing the final four observing seasons so their means match
the first season, indicated by the horizontal dotted line in the top
panel. This removal of seasonal variability allows a more sensitive
search for variability that might be due to rotational modulation of
star spots \citep[e.g.,][]{hen13}. \citet{mar05} estimated the
rotation period of HD~99492 to be around 45 days from the Ca II H and
K emission strength. Our nightly observations scatter about the mean
with a standard deviation of 0.00484~mag, somewhat larger than the typical
measurement precision. However, Fourier analyses of the complete
normalized data set and the individual observing seasons did not
reveal any significant periodicities between 1 and 100 days that might
correspond to the star's rotation period.
We further examined publicly available photometric data from the {\it
Hipparcos} satellite to search for evidence of periodicity in the
lightcurve of HD~99492 \citep{per97,van07a}. The data were extracted
from the NASA Exoplanet Archive \citep{ake13}, including 71
measurements spanning a period of 1,062 days and with a standard
deviation of 0.135~mag. Our Fourier analysis of the {\it Hipparcos}
data did not reveal strong periodicity, with the possible exception of
a minor Fourier peak at $\sim$15.8~days.
\begin{figure}
\includegraphics[width=8.2cm]{f03.ps}
\caption{{\it Top}: Nightly photometric observations of HD~99492
from the 2004, 2009, 2010, 2013, and 2014 observing seasons
acquired with the T12 0.8~m APT. The final four seasons have been
normalized so their seasonal means match the 2004 season. {\it
Middle}: The APT observations phased with the orbital period of
17.054~days. A sine fit to the phased observations yields a
semi-amplitude of $0.00041\pm0.00033$~mag. This is consistent with
the absence of light variability on the radial velocity period and
also consistent with planetary reflex motion of the star as the
cause of the RV variations. {\it Bottom}: The APT observations
within $\pm0.06$ phase units of the predicted transit time. The
solid curve for the predicted transit shows the predicted
mid-transit at phase 0.0, the transit depth (0.5\%), and the
duration ($\pm0.005$ phase units) for a central transit of planet
b. The vertical dashed lines represent the uncertainty in the time
of transit. Our photometry shows no evidence for transits but
cannot rule them out completely.}
\label{photfig}
\end{figure}
The Keplerian orbital solution in Section~\ref{update} includes an
estimate of $T_c$, the predicted time of mid-transit should the
planetary orbital inclination be suitably close to edge-on. To
determine the remainder of the predicted transit parameters, we
adopted the SME stellar radius from Table~\ref{system} and an
estimated planetary radius of $R_p = 0.52 \ R_J$ using the mass-radius
relationship described by \citet{kan12}. Taking into account the
orbital eccentricity from the Keplerian orbit \citep{kan08}, the
transit probability is 2.8\% and the predicted duration and depth for
a central transit are 0.181~days and 0.54\%, respectively.
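These values follow from standard transit geometry; the short Python sketch below (parameter values from Table~\ref{system}; provided for illustration only, not the exact code used) reproduces them approximately.
\begin{verbatim}
import numpy as np

AU, R_sun, R_jup = 1.496e11, 6.957e8, 7.149e7   # SI units (m)
P, e, omega = 17.054, 0.07, np.radians(240.7)   # Table 1
a, R_s, R_p = 0.123 * AU, 0.78 * R_sun, 0.52 * R_jup

f_ecc = (1 + e * np.sin(omega)) / (1 - e**2)    # eccentricity factor
prob = (R_s / a) * f_ecc                        # ~0.028
depth = (R_p / R_s)**2                          # ~0.005
dur = (P / np.pi) * np.arcsin((R_s + R_p) / a) \
      * np.sqrt(1 - e**2) / (1 + e * np.sin(omega))   # ~0.18 d
print(f"prob={prob:.3f} depth={depth:.4f} dur={dur:.3f} d")
\end{verbatim}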
The APT observations are replotted in the middle panel of Figure
\ref{photfig}. These data are phased with the orbital period and the
predicted transit time shown in Table \ref{system}. We use
least-squares to fit a sine curve to the data, also phased on the
17.054-day orbital period. This yields a formal semi-amplitude of the
sine curve of just $0.00041\pm0.00033$~mag. The relatively small
amplitude confirms that the observed RV variations are due to the
presence of a planet rather than intrinsic stellar brightness
variations.
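The fit itself amounts to linear least squares on a sine/cosine basis at the fixed orbital period; a minimal Python sketch (the time and magnitude arrays are placeholders for the normalized APT data) is:
\begin{verbatim}
import numpy as np

P = 17.054                           # fixed orbital period (days)
# Placeholders for the 368 normalized APT epochs and magnitudes.
t = np.linspace(0.0, 3000.0, 368)
mag = np.zeros_like(t)

phase = 2.0 * np.pi * t / P
A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
semi_amp = np.hypot(coef[0], coef[1])   # sine-curve semi-amplitude
\end{verbatim}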
The APT observations within $\pm0.06$ phase units of the predicted
transit time are shown in the bottom panel of Figure
\ref{photfig}. The solid curve for the predicted transit signature
includes the predicted mid-transit at phase 0.0, the transit depth
(0.5\%), and transit duration ($\pm0.005$ phase units). The vertical
dashed lines represent the uncertainty in our new time of transit. We
find no evidence for transits although our data do not rule them out
completely. Monitoring observations were made on the night of 31
January 2016 UT, during a predicted transit, with the T12 APT and with
the 0.6m telescope at Swarthmore's Peter van de Kamp Observatory. The
night was marginally photometric at both sites; again, no evidence for
transits was seen but we are still not able to completely rule them
out.
\section{Conclusions}
\label{conclusions}
The presence of stellar activity presents continuing challenges to
exoplanet detection and characterization. Radial velocity exoplanet
survey targets are usually chosen for their low chromospheric
activity, leading to a bias against activity in the sample of bright
planet-host stars. HD~99491 is an evolved star and has been known to
exhibit chromospheric activity for some time \citep{zar83,wri04}. It
is thus quite interesting to find that the companion star, HD~99492,
exhibits similar behavior over long timescales. It is hoped that
continued photometric monitoring will help to resolve the complete
magnetic cycle of the star, such as that found for HD~192263
\citep{dra12}, although the target is very difficult to observe due to
the small angular separation of the binary components.
The update to the parameters for the HD~99492 system presented here
refines the stellar and planetary orbital parameters for the
system. The update shows that the $\sim$5000~day RV signal is due to
stellar activity rather than a planet. However, there are likely other
planets of smaller mass and/or larger separation that lie beneath the
current noise floor. As the exploration of exoplanetary systems forges
onward to ever smaller planets, the careful examination of stellar
activity is becoming more relevant than ever before.
\section*{Acknowledgements}
GWH acknowledges long-term support from Tennessee State University and
the State of Tennessee through its Centers of Excellence program.
This research has made use of the NASA Exoplanet Archive, which is
operated by the California Institute of Technology, under contract
with the National Aeronautics and Space Administration under the
Exoplanet Exploration Program. The results reported herein benefited
from collaborations and/or information exchange within NASA's Nexus
for Exoplanet System Science (NExSS) research coordination network
sponsored by NASA's Science Mission Directorate. The data presented
herein were obtained at the W.M. Keck Observatory, which is operated
as a scientific partnership among the California Institute of
Technology, the University of California and the National Aeronautics
and Space Administration. The Observatory was made possible by the
generous financial support of the W.M. Keck Foundation. The authors
wish to recognize and acknowledge the very significant cultural role
and reverence that the summit of Mauna Kea has always had within the
indigenous Hawaiian community. We are most fortunate to have the
opportunity to conduct observations from this mountain.
\section{Introduction}
\label{sec:intro}
Everyone can focus their attention on an image, a sound, or a thought. But what is attention and how does it really work?
Besides William James's classical definition, other standard definitions of attention include: ``{\it the ability to focus selectively on a selected stimulus, sustaining that focus and shifting it at will}''; or, linking attention to awareness: ``{\it the concentration of awareness on some phenomenon to the exclusion of other stimuli}''.
All such definitions remain very coarse, based on introspective and phenomenological considerations, and define attention in terms of other functionally obscure terms, such as ``focus'' or ``awareness''. Here, in order to better understand attention at the computational level, we study it within the simplified framework of artificial neural networks and deep learning by first identifying the most fundamental building blocks or quarks, using a physics-inspired terminology, and then rigorously analyzing some of their computational properties.
The motivation for working with artificial neural networks is two-fold. The first motivation is to avoid getting bogged down by the complexity of biological systems. There is of course a substantial literature on the neurobiology and psychophysics of attention (e.g. \cite{itti2005neurobiology,
arnsten2010neurobiology,posner2011cognitive})
pointing to a variety of different phenomena and attention systems, leading some to conclude: ``{\it The word``attention'' is an inadequate, singular term for a multitude of inter-related
processes. We use a host of adjectives to describe attention---for example, we say that attention can be divided, oriented, sustained, or focused, and many of these descriptions likely reflect underlying, dissociable neural processes. Complicating matters, attentional resources can be allocated to either external stimuli, or to internal stimuli such as thoughts and memories. Furthermore, we often confuse the regulation of attention (a covert behavior) with the regulation of movement (an overt behavior) when discussing an ``attentional disorder''}'' \cite{arnsten2010neurobiology}. In spite of this complexity and diversity of processes, we believe that at the most fundamental level attention mechanisms are built out of a small number of fundamental
operations, which occur on time scales that are fast compared to the time scales for learning and long-term synaptic modifications. For instance, in order to exclude other stimuli, neuronal machinery must exist that is capable of dynamically suppressing the activity of subsets of neurons, or subsets of connections, or both, associated with the non-attended information.
These fundamental operations may be easier to identify and study using artificial neural networks. In particular, one of our goals here is to produce a systematic nomenclature of all such possible operations, within the standard deep learning formalism. While this is not the place to discuss the relationship between artificial and biological neural networks, there is a body of evidence showing that, atleast at some level, the former can provide useful information about the latter (e.g. \cite{zipser1988back,olshausen1996emergence,
yamins2016using}).
The second motivation, equally or even more important, is that attention plays an increasingly important role in deep learning systems.
In deep learning networks, various attention mechanisms
such as content-based attention
\cite{graves2014neural},
speech recognition attention
\cite{chorowski2015attention},
or
dot product attention
\cite{luong2015effective},
have been introduced and successfully deployed
in applications. Many of these mechanisms were initially developed
for speech and natural language applications (NLP) (e.g. \cite{attention_bahdanau,bert,gpt2}),
but they are now being adapted to other problems (e.g. \cite{set_transformer,
fenton2020permutationless}). The intuitive idea in NLP applications is that when, for instance,
translating a sentence from one language to another, the underlying neural algorithm should be able to dynamically shift its focus on the relevant words and context, while filtering out the less relevant ones.
For instance, when translating ``the red roof'' into the French ``le toit rouge'', the machinery that produces the {\it third} word of the output (``rouge'') should dynamically give more importance to the {\it second} word (``red'') of the input, relative to the other neighboring words.
The current pinnacle of attention-based architectures is the transformer architecture
\cite{vaswani2017attention,transformers}
which has led to state-of-the-art performance in NLP and is now widely used.
These advances have even led some experts to speculate that attention mechanisms may be key for achieving machine consciousness (!).
However, with rare exceptions
\cite{dong2021attention}, there is little theory to help us better understand the nature and computational capabilities of attention.
To address this gap, in Section 2 we first seek to identify and classify the most fundamental building blocks of all attention mechanisms within the deep learning framework. In particular, we identify three key attentional mechanisms we call activation attention, output gating, and synaptic gating. In Section 3, we show how output gating and synaptic gating are used in all the current attention-based architectures, including transformers. In Section 4, we explore the functional capacity of output gating and synaptic gating. In Section 5, we provide a brief overview of the notion of capacity and the technique of multiplexing, which is a form of activation attention, for proving capacity lower bounds. In Sections 6 and 7, we prove several theorems about the capacity of
activation, output, and synaptic gating, using multiplexing, first for single units and then for single layers
of linear and polynomial threshold functions.
\section{Systematic Identification of Attention Quarks: Within and Beyond the Standard Model}
\label{sec:gating}
We first introduce the formal neural network framework that we use in order to systematically organize and study the attention quarks, i.e. the most fundamental building blocks of attention. To borrow another term from physics, we call this framework the Standard Model.
\subsection{The Standard Model (SM)}
The Standard Model is the class of all neural networks made of what are generally called McCulloch and Pitts neurons. Neural networks in the SM consist of directed weighted graphs of interconnected processing units, or neurons. The synaptic strength of the connection from neuron $j$ to neuron $i$ is represented by a single real-valued number $w_{ij}$. Any non-input neuron $i$ produces an output $O_i$ by first computing an activation $S_i=\sum_j w_{ij}O_j$, i.e. the activation corresponds to the dot product of the incoming signal with the synaptic weights. In turn, the output of the neuron is produced in the form $O_i=f_i(S_i)$, where $f_i$ is the transfer or activation function of neuron $i$. Typical activation functions include the identity function in the case of linear neurons, sigmoidal activation functions such as the logistic and tanh activation functions, and piece-wise linear functions (\cite{tavakoli2021splash}), such as the Heaviside, sign, or ReLU functions. An encompassing, and more than sufficient, class of transfer functions for a formal definition of the SM is the class of functions that are differentiable everywhere except for a finite (and small) set of points. A fundamental, and easy to prove \cite{baldi2021deep}, property of the SM is that it has universal approximation properties: (1) any Boolean function can be implemented exactly by a feedforward network in the SM; and (2) for any small $\epsilon >0$, any continuous function from
$\mathbb{R}^n$ to $\mathbb{R}^m$ defined on a compact set $C$ can be
approximated within $\epsilon$ everywhere over $C$ by a feedforward network in the SM.
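As a concrete illustration (the weights and transfer functions below are arbitrary choices, not tied to any particular model), a single SM unit and a tiny feedforward SM network can be written as:
\begin{verbatim}
import numpy as np

def sm_unit(w, x, f=np.tanh):
    """Standard-Model unit: activation S = w.x, output O = f(S)."""
    return f(np.dot(w, x))

# A tiny feedforward SM network: two tanh hidden units feeding one
# Heaviside threshold output unit (illustrative weights).
x = np.array([0.5, -1.0])
h = np.array([sm_unit(np.array([1.0, -0.5]), x),
              sm_unit(np.array([-0.3, 0.8]), x)])
y = sm_unit(np.array([0.7, 0.2]), h,
            f=lambda s: np.heaviside(s, 0.0))
\end{verbatim}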
Several attention mechanisms described below can be viewed as extensions of the standard model, where new operations are added to the SM to obtain a richer model. Extending the SM is not a new procedure. For instance, using softmax layers is already an extension of the SM since
the softmax is not a proper, single-neuron, activation function. Another example is the use of polynomial activation functions
(e.g. \cite{baldi2019polynomial}).
Due to the universal approximation properties of the SM, these extensions are not meant to increase the approximating power of the SM.
Rather, their value must be established along other dimensions, such as circuit size or learning efficiency.
In the digital simulations of neural networks, these extensions correspond to new software primitives. In physical neural networks, these extensions must come with actual wires and physical mechanisms. For instance, a softmax operation is a new software primitive in a neural network software library but it requires a new physical mechanism for its physical implementation. It can be replaced by a network of depth 3 within the SM, with fixed weights set to $\pm 1$ (Section \ref{sec:functional}, Figure \ref{fig:SoftMax}), provided logarithm and exponential activation functions are available.
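For instance, one decomposition behind such a depth-3 construction is simply
$$\mathrm{softmax}(S)_i=\frac{e^{S_i}}{\sum_j e^{S_j}}=\exp\Big(S_i-\log\sum_j e^{S_j}\Big),$$
where a first layer of exponential units computes the terms $e^{S_j}$, a single logarithmic unit with all incoming weights set to $+1$ computes $\log\sum_j e^{S_j}$, and a final layer of exponential units receives $S_i$ with weight $+1$ and the log-sum term with weight $-1$.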
\subsection{Systematic Taxonomy}
In the SM, there are three types of variables: $S$ (activations), $O$ (outputs), and $w$ (synaptic weights).
At the most fundamental level, we can organize attention mechanisms (and more broadly new SM interactions) depending on:
the type of variable associated with the {\it source} of an attention signal (3 possibilities), the type of variable associated with the {\it target} of an attention signal (3 possibilities), and on the {\it mechanism} of the interaction, i.e. on the algebraic operation used to combine the attending signal and the attended target. While many algebraic operations can be considered, the two most basic ones are addition and multiplication (two possibilities)--resulting in a total of 18 different possibilities. These could be further subdivided depending on multiplicity issues, at both the source and the target, as well as time scales, as described below. We now discuss these possibilities, reducing them down to the 6 most important ones.
\begin{enumerate}
\item{\bf Source:} It is reasonable to assume that the source of the attending signal is a variable of type $O$ corresponding to the output of one attending neuron, or a group (layer) of attending neurons. While other possibilities can be explored, e.g. a synapse directly attending another synapse,
they would require new complex mechanisms in a physical implementation. Furthermore, they do not occur in current attention-based deep learning models. The same can be said for the activation being the direct source of the attending signal. Even more unlikely would be the case of mixed schemes where the attending signal would emanate, for instance, from both neuronal outputs and synapses. In short, the reasonable assumption that the attending signals emanate from neuronal outputs allows us to reduce the number of possibilities by a factor of three, leaving 6 basic possibilities (Table \ref{tab:taxonomy}).
\item {\bf Target:} For the target of an attention signal, we will study all three possibilities. Thus attention signals can target activations ($S$), outputs ($O$), or synapses ($w$). We will call these three forms of attention activation attention, output attention, and synaptic attention respectively.
\item {\bf Mechanism:} The most simple operations one can think of for combining the attending signal with its attended target are addition and multiplication.
Attention requires
excluding all other stimuli and possibly enhancing the attended stimulus (here we do not distinguish between external stimuli and internal representations). Intuitively, at the fundamental level, these exclusions and enhancements correspond to multiplicative operations where, for instance, the signals associated with non-attended stimuli are inhibited--i.e. multiplied by zero, and the attended stimuli are enhanced, i.e. multiplied by a factor greater than one.
We will reserve the term ``gating'' for multiplicative interactions.
Thus, for instance, multiplicative synaptic attention will also be called synaptic gating.
All multiplicative interactions, with the exceptions of terms of the form $w_{ij}O_j$, are not part of the SM and thus correspond to potential extensions of the SM.
However, for completeness, we will also consider the case of additive interactions. Furthermore, in the case of activation attention, for several common activation functions such as logistic or ReLU, inhibition (and thus suppression of stimuli) can be achieved additively by sending a large negative signal towards the
attended neuron. Unlike gating, additive activation attention is contained in the SM.
Note that both addition and multiplication are differentiable operations, and thus can easily be incorporated into the backpropagation learning framework.
\item {\bf Multiplicities:}
In each possible case, one must take into account multiplicity issues both at the level of the source and at the level of the target. For instance, in synaptic gating, can the attending output of a neuron gate more than one synapse? Can the attending output of several neurons gate the same synapse? And so forth. In the most simple cases, we will assume that the multiplicity is one both at the source and at the target, but greater multiplicities will also be considered, for instance in some of the theorems in Sections \ref{sec:capacitysingle} and \ref{sec:capacitylayers}.
\item {\bf Time Scales:} Finally, for simplicity, and in line with current deep learning attention models, we assume that the attention mechanisms operate on the time scale of individual inputs. Different inputs create different attention signals.
Alternative possibilities are briefly discussed in Section
\ref{sec:SVO}.
\end{enumerate}
In summary, we are left with six main cases, corresponding to two different mechanisms $(+,\times)$ and three different target types $(S,O,w)$. We now examine them one by one and show that they can be reduced to three most important cases, which are further studied in the following sections.
Finally, for each case, it is useful to keep in mind the difference between digital simulations and actual implementations in a physical neural network, i.e. machine learning versus learning in the machine \cite{baldi2021deep}.
For instance, different mechanisms may be equivalent at the level of the algebraic expressions they lead to, but very different in terms of their physical implementations.
\subsection{Identification: Additive Interactions}
In the case of additive interactions, the attention signal is added to three possible targets of type $S$, $O$, or $w$.
\subsubsection{Additive Activation Attention: Multiplexing}
In this case, consider an attended neuron $i$. Its activation
$S$ has the form $S=S_1+S_2$, where $S_1$ is the ``normal'' activation (without attention) and $S_2$ is the attending signal originating from one, or multiple, attending neurons. The term multiplexing simply refers to the combination or superposition of two signals over the same channel.
Depending on the transfer function $f_i$ of neuron $i$, the attending signal can be used to control the output $O_i=f_i(S_1+S_2)$. The typical case is when $f_i$ is the logistic or Heaviside function: then a large negative signal $S_2$ (much larger than $S_1$) will override any $S_1$ and
force the output of neuron $i$ to be zero. If, on the other hand, $S_2=0$ then $O_i=f_i(S_1)$ and the normal signal will be propagated. If attention must be able to both suppress and enhance signals, this mechanism allows the suppression, but it does not provide a direct way for the multiplicative enhancement of signals. Formally this mechanism is entirely within the SM and does not require extending it. If the attending signal must come from a single neuron (source multiplicity one), this can easily be achieved by connecting the output of the attending neurons to a single linear neuron
whose output is equal to $S_2$. Although not new, this attention mechanism is interesting because it will play a central role in the methods for proving various technical results about the new gating mechanisms presented below.
\subsubsection{Additive Output Attention}
In this case, using multiplicities of 1, we
consider a neuron $i$ connected to a neuron $k$ in the main network, and an attending neuron $j$. In this case, the output $O_j$ is simply added to the output $O_i$
(Figure \ref{fig:addition}, Left), producing the term
$O_i+O_j$ (or $O_i+w_{ij}O_j$). This term is nothing new in the SM and is equivalent to having an additional linear neuron with two incoming connections originating in neurons $i$ and $j$, both with synaptic weight 1, and the same outgoing connections as neuron $i$ in the original network. This mechanism alone does not provide much in terms of attentional functionality and therefore will not be considered here any further.
\subsubsection{Additive Synaptic Attention}
In this case, using multiplicities of 1, we
consider a neuron $i$ connected to a neuron $k$ in the main network, and an attending neuron $j$. In this case, the output $O_j$ is simply added to a synaptic weight, i.e. to $w_{ki}$
(Figure \ref{fig:addition}, Right), producing a new
synaptic weight
$w_{ki} +O_j$, which in turn creates a contribution equal to $(w_{ki}+O_j)O_i$ at neuron $k$. This contribution contains a new multiplicative term of the form $O_iO_j$, which is not part of the SM.
Since $O_iO_j$ falls under the multiplicative category, it is subsumed by the analyses below of multiplicative interactions; thus additive synaptic interactions will not be considered any further in the rest of this work.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\columnwidth]{Addition.eps}
\end{center}
\vspace{-0.2cm}
\caption{Additive Output (Left) or Synaptic (Right) Interactions. Left: the output $O_j$ of the attending neuron is added to the output $O_i$ of the attended neuron, producing a term of the form $O_i+O_j$. Right:
the output $O_j$ of the attending neuron is added to the attended synaptic weight $w_{ki}$, {\it de facto} producing a new multiplicative term of the form $O_iO_j$ as one of the input components to neuron $k$.}
\label{fig:addition}
\end{figure}
In summary, there are three kinds of additive interactions.
Only multiplexing (additive activation attention) will be used in the rest of this work, and primarily as a tool in the proofs of some theorems.
\subsection{Identification: Multiplicative Interactions or Gating}
In the case of multiplicative interactions, the attention signal is multiplied with three possible targets of type $S$, $O$, or $w$.
\subsubsection{Multiplicative Activation Attention: Activation Gating}
In this case, using a source multiplicity of 1, consider
an attended neuron $i$ with activation $S_i=\sum_l w_{il}O_l$ and transfer function $f_i$, and an attending neuron $j$. In this case, the attending signal $O_j$ multiplies the activation $S_i$ so that the final output of neuron $i$ becomes $O_i=f_i(S_iO_j)$. If $f_i$ is sigmoidal or a threshold function, a large positive value of the attention signal $O_j$ could be used to drive the response of neuron $i$ towards one of its extreme values (e.g. $0/1$ or $-/+$ \footnote{Everywhere we write -/+ to indicate
-1/+1.} ).
Note that $O_i=f_i(S_iO_j)=f_i(\sum_l w_{il}O_lO_j)$, so formally this mechanism is equivalent to having $O_j$ multiply the output $O_l$ of all the neurons connected to neuron $i$, although in a physical implementation these two things could be very different. Because of this equivalence, we will consider that output gating subsumes this mechanism and we will not discuss it much further. Furthermore, at least in the case of attended and attending neurons with threshold transfer functions equal to the sign function, multiplication of activation and multiplication of output are directly equivalent at the algebraic level because:
$\sign(S_iO_j)=\sign(S_i)\,\sign(O_j)=\sign(S_i)\,O_j=O_iO_j$.
\subsubsection{Multiplicative Output Attention: Output Gating}
In this case, using multiplicities of 1, we
consider a neuron $i$ connected to a neuron $k$ in the main network, and an attending neuron $j$. In this case, the output $O_i$ is multiplied by $O_j$ (or $w_{ij}O_j$), producing the quadratic term
$O_iO_j$, which is new in the SM, leading to an input component into neuron $k$ equal to $w_{ki}O_iO_j$
(Figure \ref{fig:gating}, Left). Note that while the multiplication is commutative, the attention mechanism is not in the sense that only the axon emanating from neuron $i$ carries the signal $O_iO_j$ to all the targets of neuron $i$.
\subsubsection{Multiplicative Synaptic Attention: Synaptic Gating}
In this case, using multiplicities of 1, we
consider a neuron $i$ connected to a neuron $k$ in the main network, and an attending neuron $j$. In this case,
the synaptic weight $w_{ki}$ is multiplied by $O_j$.
This produces a new synaptic weight
$w_{ki}O_j$, which in turn also creates a contribution equal to $w_{ki}O_iO_j$ at neuron $k$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\columnwidth]{MultiplicativeGating.eps}
\end{center}
\vspace{-0.2cm}
\caption{Multiplicative Interactions: Output and Synaptic Gating. Left: In output gating, neuron $j$ gates the output of neuron $i$ producing a new effective output $O_iO_j$. The signal $O_iO_j$ is broadcasted to all the neurons downstream of neuron $i$, including neuron $k$. Right: In synaptic gating, neuron $j$ gates the synapse between neuron $i$ and neuron $k$, producing a new effective synaptic weight equal to $w_{ki}O_j$. In both cases, the signal $O_j$ can be transmitted to other neurons and other synapses (higher multiplicity). In both cases, neuron $k$ receives the same signal $w_{ki}O_iO_j$. However the effects of output versus synaptic gating on the rest of the network are different (see text).}
\label{fig:gating}
\end{figure}
\subsection{Synaptic Gating versus Output Gating}
\label{sec:SVO}
When the gating signal $O_j$ is close to zero, it will tend to suppress the gated signal $O_i$ or the gated synaptic weight $w_{ki}$.
The ability to dynamically suppress a synaptic weight or the signal flowing through it embodies the idea of
``excluding other stimuli'' associated with attention. Likewise, when the gating signal $O_j$ is far from zero, it can dynamically enhance a synaptic weight or the signal flowing through it.
Although equivalent circuits for output and synaptic gating can be found
(see Figures \ref{fig:gating2} and \ref{fig:gating3}), conceptually they are different.
Synaptic gating is a mechanism by which the gating neuron or network can dynamically change the synaptic weights of the gated neuron or network, thus effectively changing the program being executed by the attended network.
This allows the same gated neuron or network to be modulated and to compute different functions, as a function of the gating neuron or network.
Thus synaptic gating can also be viewed as a form of fast synaptic weight mechanism
\cite{schmidhuber1992learning,ba2016using}, where synapses with different time scales coexist and fast synapses are used to dynamically store information and modulate the function being computed by a given network. However, even for the fast synapses there could be different time scales. While here we assume that synapses change on the time scales of the inputs, fast synapses could also change on a slower time scale in the sense that a gated synapse could be reused over several inputs.
Although we have seen that both synaptic and output gating produce the same term of
the form $w_{ki}O_iO_j$ at neuron $k$, this is true only for neuron $k$.
Unlike synaptic gating,
output gating affects all the neurons downstream of the gated neuron. In contrast, synaptic gating is more precise as it affects only the neuron downstream of the gated synapse, but it is more expensive, requiring one gating wire per gated synapse, rather than one gating wire per gated neuron.
Nevertheless, if neuron $i$ has only one outgoing connection, then gating of its output or its outgoing synapse are of course equivalent.
For this reason, in the formal analyses, we will focus on output gating, which also covers synaptic gating under the assumption of a single outgoing connection per gated neuron.
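The difference is easy to check numerically; the following sketch (Python, with arbitrary illustrative values) considers a neuron $i$ feeding two downstream targets $k$ and $m$:
\begin{verbatim}
import numpy as np

O_i, O_j = 0.8, -1.0        # gated and gating outputs (illustrative)
w_ki, w_mi = 0.5, -0.3      # neuron i feeds two targets, k and m

# Output gating: O_j multiplies O_i, so every target of i is affected.
k_in, m_in = w_ki * (O_i * O_j), w_mi * (O_i * O_j)

# Synaptic gating of w_ki only: target k sees the same signal as
# above, but target m still receives the ungated output of i.
k_in_syn, m_in_syn = (w_ki * O_j) * O_i, w_mi * O_i

assert np.isclose(k_in, k_in_syn) and not np.isclose(m_in, m_in_syn)
\end{verbatim}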
An observation that will become important in Sections \ref{sec:functional}--\ref{sec:capacitylayers} is that in the case of binary units and output gating, it does make a difference whether one uses $0/1$ or $-/+$ representations. In particular, although $0/1$ and $-/+$ linear (or polynomial) threshold functions are equivalent, different forms of output gating are obtained with different combinations of such units. This is because multiplication of $x \in \{-1,0,1\}$
by $0$ or by $-1$ leads to different results. In particular, multiplication of the outputs of two $0/1$ threshold gates is equivalent to applying a logical AND operation, whereas multiplication of the outputs of two $-/+$ linear threshold gates is equivalent to applying a logical NXOR (the negation of an XOR). Multiplication of a $0/1$ threshold gate by a $-/+$ threshold gate produces a non-Boolean function with outputs in $\{-1,0,1\}$.
Nevertheless in many cases equivalent circuits can be found (see Example in Figure \ref{fig:XOR}) using either multiplication between $0/1$ threshold gates or multiplication between $-/+$ threshold gates. This also suggests a more general question of studying all possible ways of combining two threshold functions using Boolean operators (see Section
\ref{sec:capacitysingle}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\columnwidth]{EquivalentSynapticGating.eps}
\end{center}
\vspace{-0.2cm}
\caption{Output gating equivalence. Left: Output gating of neuron $i$ by neuron $j$. All the connections emanating from neuron $i$ carry the signal $O_iO_j$. Right: Equivalent network obtained using synaptic gating only. The gating neuron $j$ must synaptically gate all the connection weights emanating from neuron $i$.}
\label{fig:gating2}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\columnwidth]{EquivalentOutputGating.eps}
\end{center}
\vspace{-0.2cm}
\caption{Synaptic Gating Equivalence. Right: Synaptic gating of synaptic weight $w_{ki}$ by neuron $j$.
Left: Equivalent network obtained using output gating only. Neuron $i$ has an identical twin neuron $i'$, i.e. neuron $i$ is connected to neuron $i'$ through the identity function, so that both neurons produce the same output $O_i$.
Neuron $j$ output gates neuron $i'$ producing a signal $O_iO_j$ which travels through the synapse $w_{ki}$.
All other connections emanating from neuron $i$ carry the signal $O_i$ and are unaffected by the gating neuron $j$.
}
\label{fig:gating3}
\end{figure}
\subsection{Relations to Polynomial Neural Networks}
There are at least two important relationships between gating and polynomial neural networks. First, we have seen that both synaptic and output gating mechanisms produce quadratic terms of the form $w_{ki}O_iO_j$ contributing to the activation of neuron $k$.
Thus gating can also be viewed as a special case of neurons with quadratic activations or, more generally, polynomial activations \cite{baldi2019polynomial}.
However, a full quadratic activation function over $n$ inputs may require $n(n-1)/2$ three-way synaptic weights (the quadratic component of the activation of neuron $i$ has the form
$S_i=\sum_{jk} w_{ijk} O_jO_k$), one for each possible pair of inputs.
Synaptic gating or output gating produces only one new quadratic term.
Thus, in short, gating creates quadratic terms in a sparse way that avoids the combinatorial explosion associated with all possible pairwise combinations.
The second connection is that the same gating concepts can be applied to more complex units, beyond the standard model, in particular to units where the activation is a polynomial function of degree $d$ of the inputs (the standard model corresponds to $d=1$).
Thus for instance a neuron $j$ with a quadratic activation function could gate the output of another neuron $i$ with quadratic activation functions, or gate a synapse $w_{ki}$ between neuron $i$ and neuron $k$. Gating by neurons with polynomial activations, in particular gating by polynomial threshold units, will be studied in Sections \ref{sec:capacitysingle} and \ref{sec:capacitylayers}.
\begin{table}
\caption{Organization of attention mechanisms. Assuming that the
origin of the attention signal is the output of one or several neurons, there are 6 classes depending on the target of the signal and the interaction mechanism. We consider 3 kinds of targets: activation ($S$), output ($O$), and synapses ($w$). We consider 2 kinds of interaction mechanisms: addition and multiplication.
Two of the classes (additive activation attention, or multiplexing, and additive output attention) are in the SM; the other 4 classes correspond to true extensions of the SM. The discussion in the text shows that further analyses can focus on three classes only: multiplexing, output gating, and synaptic gating (in bold).
}
\label{tab:taxonomy}
\centering
\begin{tabular}{|l|c|c| c|}
\hline
&$S$ &$O$& $w$ \\ \hline
Addition & {\bf multiplexing} (SM) & additive output att. (SM)& additive synaptic att. \\ \hline
Multiplication & activation gating & {\bf output gating} & {\bf synaptic gating}\\ \hline
\end{tabular}
\end{table}
\subsection{Summary}
In summary, the quarks of attention can be classified based on the origin, the target, and the interaction mechanism of the attention signal. Assuming that the origin is in the output of one neuron, or a group of neurons, and that the interactions are either additive or multiplicative, this leads to six classes (Table \ref{tab:taxonomy}). Within the additive group, two classes are already in the SM (additive activation and additive output attention) and only one class is of interest here for further studies (additive activation attention or multiplexing). Within the multiplicative group, all three classes correspond to true extensions of the SM and, at least formally, further analyses can be reduced to two main classes: output gating and synaptic gating. In all cases, the attending signal modulates the function
computed by the attended network.
\section{All you Need is Gating: Transformers}
\label{sec:transformers}
Although the descriptions of attention mechanisms in deep learning often seem complex and sometimes obscure the
underlying neural architecture
\cite{graves2014neural,
chorowski2015attention,
luong2015effective,
attention_bahdanau,bert,gpt2},
it can be checked that in all cases these are built out of the output and synaptic gating operations described in the previous section. For conciseness, here we demonstrate this in detail only for the transformer architectures
\cite{vaswani2017attention,transformers}
(see also \cite{liu2021pay} for an MLP alternative to transformers).
These architectures consist of stacks of similar encoder and decoder modules, with attention mechanisms in each module.
The details of an encoder module are shown in Figure
\ref{fig:transformer}. As the Figure shows, a shared and typically linear network is first applied to each of $n$ input vectors. At the bottom of the architecture, these input vectors
could represent for instance vectors encoding successive words from a sentence. At higher levels of the stack, these vectors could be associated with the outputs of the previous encoder or decoder module and correspond to more abstract representations.
For each input vector, the shared network typically produces a triplet of vectors of the same size $m$: $Q$ (query), $K$ (key), and $V$ (value), for a total of $n$ triplets ($3n$ vectors). The subsequent attention mechanism is drawn in a concise way in the Figure and is based on three operations: (1) taking all $n^2$ pairwise dot products of the $n$ query vectors with the $n$ key vectors; (2) applying a softmax to each row of dot products; and (3) using the output of the softmax operations as weights for linearly combining the value vectors to produce the corresponding output vector at each position.
The first operation can be built using output gating, each dot product involving $m$ gating operations, to multiply the proper $Q$ and $K$ components together. As a side note, these dot products can be viewed as similarity measures between the $Q$ and $K$ vectors, especially when these are normalized, and this suggests other kinds of transformer architectures where different similarity kernels are used.
The softmax operation is a standard extension of the SM (Figure \ref{fig:SoftMax}). The third operation corresponds to synaptic gating of the connections between the $V$ vectors and the outputs. The convex combination of the value vectors by the corresponding softmax weights determines how much each value vector influences each output vector, based on the corresponding similarities between $Q$ vectors and $K$ vectors. This is where the influence of some of the value vectors can be enhanced, while the influence of others can be suppressed.
Thus in total there are $mn^2$ output gating operations, and
$n^2$ synaptic gating operations (assuming $n$ output vectors).
Thus, in short, the entire encoder module is based on a large number ($O(mn^2)$) of gating operations, both of the output and synaptic type. Thus, in this form, it can only be applied when $n$ is not very large. The basic transformer decoder module (not shown) is very similar. One important property of the encoder module conferred by the attention mechanisms is that the output is invariant under permutation of the inputs. This is because any permutation of the inputs, results in a corresponding permutation of the Q,K, and V vectors due to the weight sharing. This in turn induces a corresponding permutations in the dot products and softmax outputs, so that in the end the weighted contribution of any V vector into any output vector remains the same. This may seem surprising for an architecture that was originally developed for NLP tasks, where the order of the words obviously matter. Indeed, very often in practice positional information is added to each input vector. The permutation invariance of transformers is particularly beneficial for applications of transformers outside of NLP, in particular applications where the input consists of {\it sets} of data vectors, where the order of the data vectors does not matter (e.g. \cite{set_transformer,
fenton2020permutationless}).
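As an illustrative sketch (ours, not a reference implementation; the projection matrices and dimensions are arbitrary choices, and the usual $1/\sqrt{m}$ scaling of the dot products is omitted), the following NumPy code implements a single-head encoder attention step and checks the permutation property. The $QK^T$ dot products correspond to output gating operations, and the softmax-weighted combination of the value vectors corresponds to synaptic gating.
\begin{verbatim}
import numpy as np

def encoder_attention(X, Wq, Wk, Wv):
    # Shared linear network applied at each of the n positions.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # n^2 pairwise dot products: m output-gating operations each.
    S = Q @ K.T
    # Row-wise softmax turns each row into a probability vector.
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    # Convex combinations of the value vectors: the softmax weights
    # synaptically gate the (unit) connections carrying the V vectors.
    return A @ V

rng = np.random.default_rng(0)
n, d, m = 5, 8, 4
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, m)) for _ in range(3))
out = encoder_attention(X, Wq, Wk, Wv)

# Permuting the input vectors permutes the outputs correspondingly.
perm = rng.permutation(n)
assert np.allclose(encoder_attention(X[perm], Wq, Wk, Wv), out[perm])
\end{verbatim}
Counting operations recovers the $mn^2$ output gating multiplications in \texttt{Q @ K.T} and the $n^2$ softmax weights acting as synaptic gates in \texttt{A @ V}.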
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.2\columnwidth]{TransformerEncoder.eps}
\end{center}
\vspace{-0.3cm}
\caption{Neural network representation of the basic encoder module of a transformer architecture. Each input vector is converted into three vectors Q (Query), K (key), and V (value) using weight sharing (blue weights). All $n^2$ pairwise dot products $K(k)Q(l)$ are computed in the attention layer, which corresponds to a set of output gating operations. This is followed by row-wise softmax operations on these dot products to produce the weights that are used to linearly combine the value vectors into each corresponding output. These linear combinations correspond to synaptic gating. }
\label{fig:transformer}
\end{figure}
\section{Functional Aspects of Attention}
\label{sec:functional}
Next we study through several examples how certain functionalities can be implemented using attention mechanisms, beginning with the effect of attention on single units.
\subsection{Single Unit Output Gating: Shaping the Activation Function}
First, for simplicity, we consider output gating of a unit by another unit with the same inputs and the same weights, hence the same activation $S$. The two units may have two different activation functions $f$ and $g$. Through output gating, the final output of the gated unit will be given by:
$f(S)g(S)=fg(S)$. Thus, in this case, output gating is equivalent to changing
the activation function of the gated unit
from $f$ to $fg$. Examples of this effect are shown in Figure \ref{fig:activation}, where both $f$ and $g$ are piecewise linear and centered at the origin. Note that in the case of a linear unit gated by another linear unit, the final output is a quadratic function of the $n$ inputs, but with only $O(n)$ parameters as opposed to $O(n^2)$. The ReLU activation function emerges naturally through the gating of a linear function by a $(0,1)$ threshold function, or vice versa. Finally, the symmetric wedge activation function \cite{tavakoli2021splash} also emerges naturally, through the gating of a linear function by a $(-1,1)$ threshold function.
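The following minimal sketch (our illustration; the grid of shared pre-activation values is arbitrary) verifies these constructions numerically:
\begin{verbatim}
import numpy as np

S = np.linspace(-2.0, 2.0, 9)            # shared activation values
linear    = S                             # f(S) = S
heaviside = (S > 0).astype(float)         # (0,1) threshold unit
signum    = np.where(S > 0, 1.0, -1.0)    # (-1,1) threshold unit

relu  = linear * heaviside                # linear gated by (0,1) threshold
wedge = linear * signum                   # linear gated by (-1,1) threshold

assert np.allclose(relu, np.maximum(S, 0.0))    # ReLU
assert np.allclose(wedge, np.abs(S))            # symmetric wedge
\end{verbatim}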
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\columnwidth]{SelfAttention.eps}
\end{center}
\vspace{-0.1cm}
\caption{Effect of gating on activation functions. For simplicity we consider four main activation functions: linear, threshold (0,1) [Heaviside function], threshold (-1,1) [sign function], and ReLU.}
\label{fig:activation}
\end{figure}
\subsection{Single Unit Attention: XOR}
Next, we look at the simple XOR function.
It is easy to show that the XOR function cannot be computed in a shallow way by a single linear threshold gate (or sigmoidal) neuron. Its computation requires at least one hidden layer.
However, as shown in Figure
\ref{fig:XOR} using 0/1 outputs, the XOR function can be computed by a shallow network with a single linear threshold unit output-gated by another linear threshold unit. To
see this, note that any corner of the hypercube can always be isolated by a hyperplane from the other corners of the hypercube, i.e. there is always a linear threshold gate that has value 1 (resp. 0) for one Boolean setting of its inputs, and 0 (resp. 1) for all the other possible inputs (see Lemma \ref{lm:multi}). In particular, both the OR and NAND
functions are of this kind and thus can be implemented by a linear threshold gate. Gating the OR by the NAND (or vice versa) produces the desired XOR function without using any hidden layers, assuming that the output gating operation is an integral part of the layer where it occurs.
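A minimal sketch of this construction (ours; the particular weights and biases are just one of many valid choices):
\begin{verbatim}
import itertools

def ltg(w, b):
    # 0/1 linear threshold gate: outputs 1 iff w.x + b > 0.
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

OR   = ltg((1, 1), 0)        # value 0 only on (0,0)
NAND = ltg((-1, -1), 1.5)    # value 0 only on (1,1)

for x in itertools.product((0, 1), repeat=2):
    assert OR(x) * NAND(x) == x[0] ^ x[1]   # output gating yields XOR
\end{verbatim}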
\begin{figure}[ht]
\begin{center}
\includegraphics[trim={0 0 0 1.2cm},width=1.0\columnwidth]{XOR.eps}
\end{center}
\vspace{-1.9cm}
\caption{Shallow computation of XOR by a single unit with attention. The left unit computes an OR, which can be implemented by a single linear threshold gate. The right unit computes a NAND, which can also be implemented by a single linear threshold gate. The gating of one unit by the other produces the XOR function. The XOR function cannot be implemented using a shallow (no hidden layer) network of
linear threshold gates. In this particular example, the gated and gating $0/1$ units can easily be replaced by $-/+$ units, since it is always possible to linearly separate any point of the hypercube from all the other points with a hyperplane, and the component-wise product of $(-1,1,1,1) \times (1,1,1,-1)$ gives
$(-1,1,1,-1)$, which corresponds to the $-/+$ version of XOR.}
\label{fig:XOR}
\end{figure}
\subsection{Attention Layers: Universal Approximation Properties}
Next, we look at how universal approximation proofs are affected if output gating is allowed, both in the Boolean and continuous cases.
\subsubsection{The Boolean Case.}
Every Boolean function of $n$ variables can be computed by a feedforward network of linear threshold gates, since AND, OR, and NOT can be implemented by linear threshold gates. By expressing the function in disjunctive or conjunctive normal form, the implementation can be achieved with a single hidden layer of exponential size. If we allow output gating, and its iterations, we have the following proposition.
\begin{proposition}
\label{prop:universal}
Every Boolean function of $n$ variables can be expressed as the product of at most $2^{n-2}$ linear threshold gates, both in the $0/1$ and $-/+$ representations.
\end{proposition}
\begin{proof}
Let $f$ be a Boolean function of $n$ variables, using $0/1$ to denote false and true respectively. If $f$ is 0 everywhere, it can immediately be expressed as a linear threshold gate. Likewise, if $f$ is 0 everywhere but one point of the $n$-dimensional cube, then it can be immediately expressed as a single linear threshold gate. Thus we can assume that $f$ is 0 on at most $2^{n-2}$ points. Let
$x_1,\ldots,x_L$ ($L \leq 2^{n-2}$) denote the inputs where $f$ is zero. For each index $i$, let $g_i$ denote the linear threshold gate which has value $0$ on $x_i$ and $1$ everywhere else. Then it is easy to check that $f(x)$ can be written as the product $f(x)=g_1(x)\cdots g_L(x)$ (alternatively, one can express $f$ in conjunctive normal form). The proof is the same in the $-/+$ case, letting $g_i(x)$ be the linear threshold gate with value $-1$ on $x_i$, and $+1$ everywhere else. Obviously the same result holds for polynomial threshold gates of degree $d$.
\end{proof}
In the $-/+$ case, the set $B_n$ of all Boolean functions with the multiplication operation forms a commutative group, and each Boolean function is its own inverse.
The subset of all linear threshold gates contains the identity, and each linear threshold gate is its own inverse.
However, it does not form a subgroup because it is not closed under multiplication. By the proposition above, the multiplicative closure of the set of all linear threshold gates is the set $B_n$ of all Boolean functions.
Since every Boolean function can be written as a product of an exponential number of linear (or polynomial) threshold gates, it is natural to ask whether a smaller number of factors may be used. Can every Boolean function be written as the product of a linear or polynomial number of linear threshold gates? We will answer this question negatively in Section \ref{seq:negative}.
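The construction in the proof is easy to verify by brute force. The sketch below (ours; it uses one zero-isolating gate per zero of $f$, without the counting refinement of the proposition) checks the product representation for a random Boolean function of $n=4$ variables:
\begin{verbatim}
import itertools, random

def zero_isolating_gate(c):
    # 0/1 linear threshold gate equal to 0 exactly at corner c:
    # w.x + b = (# coordinates where x differs from c) - 1/2.
    w = [1 - 2 * ci for ci in c]
    b = sum(c) - 0.5
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

n = 4
cube = list(itertools.product((0, 1), repeat=n))
random.seed(1)
f = {x: random.randint(0, 1) for x in cube}   # arbitrary Boolean function

gates = [zero_isolating_gate(c) for c in cube if f[c] == 0]
for x in cube:
    prod = 1
    for g in gates:
        prod *= g(x)            # iterated output gating
    assert prod == f[x]
\end{verbatim}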
\subsubsection{The Continuous Case.}
Next we look at the continuous case, using output gating to modify the basic universal approximation proof
\cite{baldi2021deep}.
\begin{theorem}
Let $f$ be a continuous function from $[0,1]$ to $\mathbb{R}$, and $\epsilon >0$. Then there exists an integer $n=n(\epsilon)$ such that $f$ can be approximated within $\epsilon$ everywhere over $[0,1]$ by a network of $n$ linear units, attended by $n$ corresponding linear threshold gates with output gating. The final approximation corresponds to the dot product between the vector of linear unit outputs and the vector of attending unit outputs.
\end{theorem}
\begin{proof}
Since $f$ is continuous over the closed interval, it is uniformly continuous so that there exists $\delta >0$ such that for any $x_1$ and $x_2$ in $[0,1]$:
\begin{equation}
\vert x_2-x_1\vert < \delta
\Rightarrow
\vert f(x_2)-f(x_1) \vert < \epsilon
\end{equation}
Let us choose an integer $n$ large enough so that $\delta> 1/n$, and slice the interval $[0,1]$ into $n$ slices of width
$1/n$. Next we construct a network with $n$ linear units and $n$ linear threshold gate attention units with outputs in $\{0,1\}$ (the proof can be adjusted to accommodate outputs in $\{-1,1\}$).
All the attention units are connected to the single input $x$ with a weight equal to 1. Their thresholds (biases), however, are $0, 1/n, 2/n, \ldots, (n-1)/n$, so that when $x \in [0,1/n)$ only the first attention unit is on, when $x \in [1/n,2/n)$ only the first two attention units are on, and so forth. In other words, the slice containing $x$ is encoded in the number of linear threshold gates that are on.
The linear units compute values $y_1(x), \ldots, y_n(x)$ as follows. Let $L_k(x)=f((k-1)/n)+n[f(k/n)-f((k-1)/n)]\,(x-(k-1)/n)$ denote the line that interpolates $f$ at the endpoints of the $k$-th slice. The first linear unit implements $y_1(x)=L_1(x)=f(0)+n[f(1/n)-f(0)]x$. More generally, the $k$-th linear unit implements the line interpolating $f$ over the $k$-th slice, minus the sum of the values produced by the previous units; by telescoping, this is simply $y_k(x)=L_k(x)-L_{k-1}(x)$, so that $y_1(x)+\cdots+y_k(x)=L_k(x)$ for every $k$. When $x$ lies in the $k$-th slice, exactly the first $k$ attention units are on, and the dot product between the linear units and the attention units equals $L_k(x)$. Since $f(x)$ and $L_k(x)$ both lie within $\epsilon$ of $f((k-1)/n)$ (by uniform continuity, and because $L_k(x)$ lies between $f((k-1)/n)$ and $f(k/n)$ on the slice), the approximation error is at most $2\epsilon$ everywhere; rescaling $\epsilon$ by a factor of 2 at the start completes the argument.
[Note: as an alternative, piecewise-constant construction, the linear units can be taken to be constant, with $y_1(x)=f(0)$ and $y_k(x)=f((k-1)/n)-f((k-2)/n)$ for $k\geq 2$.]
\end{proof}
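The construction is straightforward to implement. The following sketch (ours; the target function and the number of slices are arbitrary choices) approximates $\cos$ over $[0,1]$:
\begin{verbatim}
import numpy as np

f, N = np.cos, 50                          # target and number of slices
knots = np.arange(N + 1) / N

def L(k, x):                               # line through f((k-1)/N), f(k/N)
    slope = N * (f(knots[k]) - f(knots[k-1]))
    return f(knots[k-1]) + slope * (x - knots[k-1])

def network(x):
    out = 0.0
    for k in range(1, N + 1):
        a_k = 1.0 if x >= knots[k-1] else 0.0           # attention unit
        y_k = L(k, x) - (L(k-1, x) if k > 1 else 0.0)   # linear unit
        out += y_k * a_k                                # gating + dot product
    return out

xs = np.linspace(0.0, 1.0, 1001)
assert max(abs(network(x) - f(x)) for x in xs) < 1e-3
\end{verbatim}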
The same construction can be applied over any closed interval, as well as over any finite union of closed intervals. Furthermore, if the range is $\mathbb{R}^p$, the same construction can be applied to each component. And finally, the same construction can be generalized if the input domain is of the form $[0,1]^m$. Thus, in short:
\begin{theorem}
Every continuous function $f$ from a compact set $C \subset \mathbb{R}^m$ to $\mathbb{R}^p$ can be approximated to any degree of precision $\epsilon$ by a shallow attention network comprising linear units gated by corresponding linear threshold gate units, with a final dot product output.
\end{theorem}
\subsection{Attention Layers: Dot Products}
As we have seen in the section on transformers and in the universal approximation proof above, one place where attention mechanisms are particularly important is the computation of the dot product between two activity vectors $u=(u_1,\ldots,u_n)$ and $v=(v_1,\ldots, v_n)$, associated with two corresponding layers of $n$ neurons each. This can be achieved through output gating, by first computing all the pairwise products $u_iv_i$ and then combining these products through a single linear output unit, with all its incoming weights set to $1$, to compute the dot product $uv= \sum_i u_iv_i$. However, this dot product can equally be computed by synaptic gating, i.e. by using the vector $v$ to gate the incoming weights of the linear unit above and compute the dot product in the form $uv= \sum_i (1 \cdot v_i)\, u_i$.
This can be scaled up to tensors, where there are multiple output vectors $u(k)=(u^k_i)$ and multiple attention vectors $v(l)=(v^l_i)$ of the same length, and all pairwise dot products $u(k)v(l)$ are computed, for every $(k,l)$ pair, as in the transformer architectures. Of course the dot product can also be computed in the standard model (Figure \ref{fig:DotProduct}); however, this requires a deeper network with four layers of standard units with fixed connections all equal to $1$, and both logarithm and exponential transfer functions. Thus output or synaptic attention creates a new primitive, or compact circuit, for computing dot products. The same is true of other operators that are often introduced in neural networks without being part of the standard model, such as the already-mentioned softmax (Figure \ref{fig:SoftMax}) or the normalization of a vector (Figure \ref{fig:Normalization}).
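Both reductions are immediate to check numerically (our sketch):
\begin{verbatim}
import numpy as np

u = np.array([0.5, -1.0, 2.0])
v = np.array([1.0, 3.0, -0.5])

# Output gating: gate each u_i by v_i, then sum with fixed weights 1.
assert np.isclose((u * v).sum(), u @ v)

# Synaptic gating: v_i gates the i-th incoming weight (all set to 1).
effective_weights = np.ones(3) * v
assert np.isclose((effective_weights * u).sum(), u @ v)
\end{verbatim}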
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\columnwidth]{DotProduct.eps}
\end{center}
\vspace{-0.1cm}
\caption{Standard model neural network for computing the dot product of two vectors $(u_1,u_2,u_3)$ and $(v_1,v_2,v_3)$.}
\label{fig:DotProduct}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\columnwidth]{Normalization.eps}
\end{center}
\vspace{-0.1cm}
\caption{Standard model neural network for normalizing a vector $(u_1,u_2,u_3)$ (for clarity, only the first normalized component is fully shown).}
\label{fig:Normalization}
\end{figure}
\subsection{Attention Layers: Attention Weights}
Synaptic gating of a connection can suppress or enhance the corresponding incoming signal.
Synaptically gating all the incoming edges of a unit makes it possible to assign different degrees of importance to its different inputs. In addition, it is often desirable that these degrees of importance form a probability vector, as in the transformer architecture, and this can be achieved through a softmax operation.
It is possible to apply a normalizing softmax either to the vector of pairwise products $u_iv_i$, or to the rows or columns of the tensor of dot products $u(k)v(l)$, as in the transformer architectures.
The output of these softmax operations can then be used to gate other synaptic weights. These gated weights are often equal and set to one in order to compute convex combinations, as in the transformer architecture.
Thus, in short, in transformer and other architectures, attention mechanisms allow
dot products, softmax, and synaptic gating operations to be combined into one macro-operation, which would require a network of depth $\sim 10$ for its implementation inside the SM.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\columnwidth]{SoftMax.eps}
\end{center}
\vspace{-0.1cm}
\caption{Standard model neural network for computing the softmax function for a vector $(u_1,u_2,u_3)$ (for clarity, only the first component is fully shown).}
\label{fig:SoftMax}
\end{figure}
\section{Cardinal Capacity Review}
\label{sec:cardinalreview}
We have seen that attention mechanisms enable important functionalities with minimal depth compared to the equivalent SM circuits, at the cost of adding attention neurons and mechanisms. Here we want to better understand the trade-offs between the computations that are enabled and the corresponding costs. The key concept for doing so is the concept of cardinal capacity \cite{baldi2019capacity}, which we briefly review below.
\subsection{Definition of Capacity} Given a class of functions $\mathcal{A}$, we define its cardinal capacity $C(\mathcal{A})$, or just capacity, to be: $C(\mathcal{A})= \log_2 \vert \mathcal{A} \vert$, where $\vert \mathcal{A} \vert$ is the cardinality of $\mathcal{A}$ in the finite case.
In the continuous case, $\vert \mathcal{A} \vert$ can be defined as a volume, but here we will focus primarily on finite cases.
The class $\mathcal{B}_n$ of all Boolean functions of $n$ variables has capacity $C(\mathcal{B}_n)=2^n$.
Here we will consider subclasses of $\mathcal{B}_n$, in particular those implemented by feed-forward networks of linear or polynomial threshold gates, with attention mechanisms, and compute the corresponding capacity.
\subsection{Linear and Polynomial Threshold
Functions}
Linear or polynomial threshold functions are reasonably good approximations of linear- or polynomial-activation neurons with steep sigmoidal activation functions and, as such, are not particularly restrictive.
A polynomial threshold function of degree $d$ has the form $\sgn p(x)$,
where $p(x)$ is a polynomial of degree $d$, using a $-/+$ output representation. Alternatively, for a $0/1$ output representation, we can use the form $H(p(x))$ where $H$ is the Heaviside function
equal to 0 for $x\leq 0$ and to 1 otherwise.
Units with values in $0/1$ are similar to logistic sigmoidal units, and units with values in $-1/+1$ are similar to
$\tanh$ sigmoidal units.
We let $\mathcal{T}(n;d)$ denote the class of polynomial threshold functions of degree $d$. Thus $\mathcal{T}(n;1)$ denotes the class of linear threshold functions.
When the inputs to a threshold function are binary, we use the term threshold gate.
In the case of polynomial threshold gates, it does not matter whether the input is encoded using $0/1$ or $-/+$ (or, for that matter, any two distinct real numbers). This is because there is an affine transformation between any two such encodings, and the affine transformation can be absorbed in the synaptic weights, i.e. the coefficients of $p$. The same is generally true for the encoding of the output; however, when attention gating is considered, the $0/1$ and $-/+$ encodings behave differently. For instance, in the case of output gating, the product of two $0/1$ threshold gates behaves like an AND, whereas the product of two $-/+$ gates behaves like an NXOR.
Thus to derive more general results, we will consider the case where the gating mechanism is implemented by a Boolean function $B$, which could be an AND, an NXOR, or something else.
In the most general setting, we let $B(z_1,\ldots,z_k) : \{-1,1\}^k \to \{-1,1\}$ be a Boolean formula in $k$ variables.
We are interested in the class of functions of the form
$B(f_1,...,f_k): \{0,1\}^n \to \{-1,1\}$ where $f_j \in \mathcal{T}(n;d_j)$. We denote this class by $\mathcal{T}_B(n; d_1,\ldots,d_k)$.
\subsection{Why Capacity is Important}
The capacity $C(\mathcal{A})$ is a measure of what the class of functions $\mathcal{A}$ can do. As a single number, it is of course a very crude representation of the true functional capacity. However in the case of neural networks the capacity has a stronger significance. To see this, note first that the cardinal capacity is also the number of bits required to specify an element of $\mathcal{A}$. Thus in the case of neural networks, to a first order of approximation, the capacity is the number of bits that must be transferred from the training data to the synaptic weights during learning for the network to learn to implement a specific function in the class $\mathcal{A}$.
\subsection{Capacity of Single Units: Review}
Before we estimate the capacity of single units with attention mechanisms, we must review the known capacity results on single units without attention mechanisms.
For a single linear threshold gate of $n$ variables, we have \cite{zuev1989asymptotics,zuev1991combinatorial}:
\begin{equation}
\left ( 1- \frac{10}{\log n} \right) n^2 \leq C(\mathcal{T}(n;1)) \leq n^2
\label{eq:zuev}
\end{equation}
This result was refined to the form \cite{kahn1995probability}:
\begin{equation}
C(\mathcal{T}(n;1))= n^2 - n \log_2n \pm O(n)
\label{eq:komlos}
\end{equation}
Similar results have been obtained for polynomial threshold gates of degree $d$
\cite{baldi88a,baldi2019polynomial}. In particular, for any $n$ and $d$ satisfying
$1 \leq d \leq n^\alpha$ (where $\alpha$ is fixed and $0<\alpha <1$) there exists a constant $D=D(\alpha)$ such that:
\begin{equation}
\left(1-\frac{D}{\log n}\right)^d n {n \choose \leq d} \leq C(\mathcal{T}(n;d)) \leq n {n \choose \leq d}
\label{eq:poly100}
\end{equation}
where:
\begin{equation}
{n \choose \leq d} = \sum_{k=0}^d {n \choose k}
\end{equation}
For degree $d=o (\log n)$, including fixed degree $d$, Equation \ref{eq:poly100} yields:
\begin{equation}
C(\mathcal{T}(n;d))= \frac{n^{d+1}}{d!}(1-o(1))
\label{eq:bv}
\end{equation}
\subsection{Activation Attention and Multiplexing}
\label{sec:multiplexing}
We now describe one of the main techniques that will be used in the attention capacity proofs for both synaptic and output gating. Perhaps surprisingly, this technique can be viewed as a form of attention, specifically a form of activation attention or multiplexing.
It was developed and used in
\cite{baldi2018neuronal,baldi2019capacity}. First, we need the following lemma, stated for the 0/1 $n$-dimensional hypercube, but equally valid on the -/+ hypercube, or any other hypercube $[a,b]^n$. The lemma basically states that any vertex of the hypercube can be separated from the rest of the cube by a hyperplane with large margins.
\begin{lemma} \label{lm:multi}
Let $H$ be the $n$-dimensional hypercube, and $M>0$ and $K\geq 0$.
Fix any vertex $c=(c_1,\ldots, c_n)$ of the hypercube, and let
$D=H-\{ c \} $. Then there exists an affine function of the form
$f(x)=a_0+ \sum_{i=1}^n a_i x_i$ such that
$f(c)=K$ and $f(d)\leq -M$ for every $d \in D$.
\end{lemma}
\begin{proof}
First note that there are 1:1 affine maps between the different hypercubes, so it is enough to prove the result for the $0/1$ hypercube. Second, all the corners play a symmetric role, so it is enough to prove it for the corner $c=(1,1, \ldots,1)$.
It is easy to check that
$f(x)=\sum_{i=1}^n (M+K)x_i -(M+K)n +K$ satisfies the conditions of the lemma. Note that by using $-f(x)$ the signs of the regions and the corresponding margins can be exchanged
($-f(c)=-K$ and $-f(d)\geq M$ for all $d \in D$).
\end{proof}
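A quick numerical check of the lemma (our sketch; the corner, dimension, and margins are arbitrary):
\begin{verbatim}
import itertools

def separating_affine(c, M, K):
    # Affine f with f(c) = K and f(d) <= -M on every other 0/1 corner d,
    # obtained from the proof by the affine change of corner.
    w = [(M + K) if ci == 1 else -(M + K) for ci in c]
    b = -(M + K) * sum(c) + K
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b

c, M, K = (1, 0, 1, 1, 0), 10.0, 3.0
fcn = separating_affine(c, M, K)
assert fcn(c) == K
assert all(fcn(d) <= -M
           for d in itertools.product((0, 1), repeat=len(c)) if d != c)
\end{verbatim}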
\begin{figure}[ht]
\begin{center}
\includegraphics[trim={0 0 0 1.0cm},width=1.1\columnwidth]
{Lemma.eps}
\end{center}
\vspace{-1.3cm}
\caption{Any corner $c$ of the $n$-dimensional hypercube can be separated from all other corners $d$ of the hypercube by an affine hyperplane with large margins defined by the parameters
$K \geq 0$ and $M>0$.
}
\label{fig:Lemma}
\end{figure}
Now consider a neural network consisting of $n$ inputs fully connected to a hidden layer of $m$ linear or polynomial threshold functions (Figure \ref{fig:Multiplexing}) $f_0(x), \ldots , f_{m-1}(x)$. In the multiplexing approach, we add $m$, or even just $\lceil \log_2 m \rceil$, new binary inputs to the input layer. The $m$ different binary patterns over these inputs can be associated in a one-to-one fashion with the $m$ threshold functions in the hidden layer. Let $i$ be any integer with
$0 \leq i \leq m-1$ and let $p(i)$ denote the corresponding pattern of bits. For simplicity we can just use the binary representation of $i$, but any other representation works equally well.
This pattern $p(i)$ can be viewed as a corner of the corresponding hypercube of dimension $\lceil \log_2 m \rceil$, and thus we can apply Lemma \ref{lm:multi} above to choose the weights connecting the attention units and the bias to hidden unit $i$ accordingly. In particular, the weights can be chosen such that:
(1) the attending signal originating from the attention bit pattern $p(i)$
is equal to 0; and (2) for all other settings of the attention bits, the attending signal is arbitrarily large and negative (or, alternatively, arbitrarily large and positive). And similarly for all the other units and attention input patterns.
As a result, whenever $p(i)$ appears in the attention bits, the $i$-th output of the hidden layer is equal to $f_i(x)$, and for all the other settings of the attention bits, the $i$-th output is constantly equal to 1, or constantly equal to 0 (or $-1$ in the case of $-/+$
threshold hidden units). The pattern of constant bits is called the mask, and different masks can be used for different proofs. Thus, in short, the attending signal emanating from the attention units is multiplexed with the regular signal and used to focus the attention of the hidden layer on the hidden unit encoded by the bits appearing in the attention units. The output of the hidden layer is equal to the mask, except at the attended position, where it is equal to the corresponding function $f_i(x)$.
This form of activation attention is the key tool for proving capacity lower bounds. To see this, consider for instance the case where an OR operator is applied to the outputs of the hidden layer. With a mask consisting of 0s, when the attention bits are set to $p(i)$, the output of the OR applied to the hidden units is equal to $f_i(x)$. Thus the truth table of the overall input-output function of the original inputs plus the attention bits is uniquely equal to $f_i(x)$ when the attention bits are set to $p(i)$. Thus the capacity of the network with the expanded input of size $n+ \lceil \log_2 m \rceil$ is lower bounded by the sum of the capacities associated with the functions $f_i$ over the original input of size $n$.
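The following sketch (ours; the weights, sizes, and margin constant are arbitrary choices) implements the multiplexing construction with an OR output and a mask of 0s, and checks that setting the attention bits to $p(i)$ makes the network compute exactly $f_i(x)$:
\begin{verbatim}
import itertools, random

random.seed(0)
n, m, nbits, B = 6, 4, 2, 10**6    # B dominates any |w.x + b|

W = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
b = [random.uniform(-1, 1) for _ in range(m)]

def f(i, x):                       # the m hidden threshold gates
    return int(sum(wi * xi for wi, xi in zip(W[i], x)) + b[i] > 0)

def attention_term(i, p):          # 0 iff p encodes i, else <= -B (Lemma)
    pi = [(i >> k) & 1 for k in range(nbits)]
    s = sum((pj if pij else -pj) for pij, pj in zip(pi, p))
    return B * (s - sum(pi))

def network(x, p):                 # OR over the multiplexed hidden layer
    hidden = [int(sum(wi * xi for wi, xi in zip(W[i], x)) + b[i]
                  + attention_term(i, p) > 0) for i in range(m)]
    return max(hidden)

for i in range(m):
    p = tuple((i >> k) & 1 for k in range(nbits))
    for x in itertools.product((0, 1), repeat=n):
        assert network(x, p) == f(i, x)
\end{verbatim}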
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.1\columnwidth]{Multiplexing.eps}
\end{center}
\vspace{-1.3cm}
\caption{Left: A fully connected feedforward neural network with $n$ inputs and $m=4$ threshold gates computing the functions $f_0(x),f_1(x),f_2(x)$ and $f_3(x)$. The bias unit is constantly set to 1.
Right: The same network with $2=\lceil \log_2 m \rceil$ additional attention units in the input layer. The weights from the input units to the threshold gates are the same as in the left image.
The attention units can be in 4 different states $(0,0)$, $(0,1)$, $(1,0)$, and $(1,1)$; these states can be associated in a 1:1 fashion with the $m=4$ threshold units. Assume for instance that the state $(1,0)$ is associated with the hidden unit computing $f_1(x)$ (hidden unit 1). Then, by Lemma \ref{lm:multi} applied to the hypercube of dimension 2, it is possible to choose a set of weights from the attention units and the bias to hidden unit 1 providing: (1) an attention activation of 0 when the attention units are in the $(1,0)$ state; and (2) an arbitrarily large negative (or an arbitrarily large positive) attention activation for all the other 3 states.
As a result, when the attention units are in the $(1,0)$ state, the output of the attended hidden unit 1 is equal to $f_1(x)$. When the attention units are in any of the other three states, unit 1 is not attended and its output is constant and equal to 0 (or constant and equal to 1). And similarly, mutatis mutandis, for the other three hidden units. In other words, we can first choose a fixed pattern of 0s and 1s in the hidden layer, called a mask, and then connect the attention units to the hidden layer with such weights that the output of the hidden layer is equal to the mask, except for one position associated with the attended unit. If the attended unit is unit $i$ in the hidden layer, the corresponding output is equal to $f_i(x)$ (with the attention units set to the corresponding values).
}
\label{fig:Multiplexing}
\end{figure}
\section{Capacity of Single Unit Attention}
\label{sec:capacitysingle}
We can now begin to estimate the capacity of various attention circuits, when the attention signal originates in a single gating unit, as shown in Figure \ref{fig:AttentionUnit12}.
\begin{figure}[ht]
\begin{center}
\includegraphics[trim={0 0 0 3.5cm},width=1.1\columnwidth]{AttentionUnit12.eps}
\end{center}
\vspace{-1.3cm}
\caption{Left: Output gating with a single attention unit. Both the gated function $f$ and the gating function $g$ are linear (or polynomial) threshold gates of the same input vector $x=(x_1, \ldots, x_n)$. The overall network computes the function $fg(x)=f(x)g(x)$. Right: Synaptic gating with a single attention unit $g(x)$ gating all the incoming weights of the gated function $f(x)$. So the overall network computes the function $f_g(x)$. For instance, if $f$ is a linear threshold gate $f(x)=\sign( \sum_i w_ix_i)$, then $f_g(x)=\sign (
\sum_i g(x) w_ix_i)$.
}
\label{fig:AttentionUnit12}
\end{figure}
\subsection{Capacity of Single Attention Units: Output Gating}
We want to compute the capacity of the
class of all functions that can be computed by one neuron gated by another neuron, corresponding to the left hand side of Figure
\ref{fig:AttentionUnit12}. In the purely linear case, we have seen that this is the set of all quadratic functions of the form
$O=(\sum_i w_ix_i)(\sum_j v_jx_j)$.
To partially address this question in the non-linear case, we can consider first the case of a linear threshold gate gated by another linear threshold gate, and then similarly for polynomial threshold gates of degree $d$. Using $-/+$ linear threshold gates for the gated and the gating units, this is the class of Boolean functions of the form:
\begin{equation}
fg(x)=f(x)g(x) =\sign (\sum_i w_ix_i) \sign
(\sum_i v_i x_i)=\sign \left ( (\sum_i w_ix_i)(\sum_jv_jx_j) \right )
\end{equation}
This class contains the identity and all the linear threshold gates. Thus, by Zuev's result (Equation \ref{eq:zuev}), its capacity is at least $n^2(1+o(1))$. However, intuitively, it must contain many other functions, as suggested by Figure \ref{fig:capacity1}: in general, the product of two linearly separable functions is not linearly separable. On the other hand, the capacity is upper bounded by $2n^2(1+o(1))$, because the capacity of a composition is always bounded by the sum of the capacities of its individual components. Similar considerations can be made for the $0/1$ case, which leads to the more general problem of estimating the capacity of the class of functions of the form $B(f,g)$, where $B$ is any Boolean operator, and $f$ and $g$ are linear or polynomial threshold gates.
Even more generality can be obtained by considering
classes of Boolean functions of the form
$B(f_1,\ldots, f_k)$, where $B$ is a $k$-ary Boolean operator and $f_1,\ldots,f_k$ are polynomial threshold gates of respective degrees $d_1,\ldots,d_k$. We first address the case of $k=2$ and then the general case.
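These counting questions can be explored by brute force in small dimensions. The sketch below (ours; it assumes that integer weights of magnitude at most 3, with half-integer biases, suffice to enumerate all linear threshold gates for $n=3$) confirms that the set of products strictly contains the set of linear threshold gates:
\begin{verbatim}
import itertools
import numpy as np

n = 3
cube = np.array(list(itertools.product((-1, 1), repeat=n)))

ltgs = set()                       # enumerate -/+ LTGs via a weight grid
for w in itertools.product(range(-3, 4), repeat=n):
    for bias in np.arange(-3.5, 4.0, 1.0):     # half-integers avoid ties
        ltgs.add(tuple(np.sign(cube @ np.array(w) + bias).astype(int)))

products = {tuple(np.array(f) * np.array(g)) for f in ltgs for g in ltgs}
print(len(ltgs), len(products))    # e.g. 104 threshold gates for n = 3
assert ltgs <= products            # g = constant 1 recovers every f
assert len(products) > len(ltgs)   # some products not linearly separable
\end{verbatim}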
\begin{figure}[ht]
\begin{center}
\includegraphics[trim={0 0 0 1.0cm},width=0.99\columnwidth]{Proof1.eps}
\end{center}
\vspace{-1.3cm}
\caption{Two randomly selected $-/+$ linear threshold functions and their product.
We can randomly pick such functions by randomly picking normal vectors of weights $w=(w_i)$ on the unit sphere $S^{n-1}$ (or using i.i.d. coordinates that are normal or uniform).
When $n$ is large, the normal vectors $w$ and $v$ are approximately orthogonal, and the corresponding hyperplanes partition the space into four regions, each one containing approximately $2^{n-2}$ points of the hypercube. In general, the resulting function is not linearly separable.}
\label{fig:capacity1}
\end{figure}
\subsubsection{Pairwise Composition ($k=2$).}
\label{seq:negative}
\begin{table}
\caption{All possible Boolean combinations. There are 16 possible Boolean functions $B(p,q)$ of two variables $p$ and $q$. Each row corresponds to a function $B(p,q)$ and its negation $\lnot B(p,q)$.
Ten Boolean functions are irreducible, i.e. they cannot be expressed as functions of a smaller number of variables. Eight Boolean functions are symmetric ($B(p,q)=B(q,p)$). Fourteen Boolean functions can be implemented by a linear threshold gate (LTG). The functions are organized into four groups separated by horizontal lines. Within a group, all the functions in the same column are equivalent when their arguments are implemented by linear threshold gates. The last column corresponds to the cardinal capacity $C(\{B(f,g)\})$ when $f$ and $g$ vary among all possible linear threshold functions of the same $n$ variables.
}
\label{tab:boolean}
\centering
\begin{tabular}{llllll}
\toprule
$B(p,q)$ & $\lnot B(p,q)$ & Irred. ($k=2$) & Sym & LTG & $C(\{B(f,g)\})$ \\
\midrule
$T$ & $F$ &no& yes & yes & $1$ \\ \hline
$p$ & $\lnot p$ & no& no & yes & $n^2(1+o(1))$ \\
$q$ & $\lnot q$ & no &no & yes & \\ \hline
$p \, {\rm OR} \, q$ & $\lnot p \, {\rm AND} \, \lnot q$ &yes & yes & yes &$2n^2(1+o(1))$ \\
$p \, {\rm OR} \, \lnot q$ & $\lnot p \, {\rm AND} \, q$ &yes& no & yes & \\
$\lnot p \, {\rm OR} \, q$ & $\ p \, {\rm AND} \, \lnot q$ &yes& no & yes & \\
$\lnot p \, {\rm OR} \, \lnot q$ & $ p \, {\rm AND} \, q$ & yes & yes & yes & \\
\hline
$p\, {\rm XOR} \,q$ & $\lnot (p \, {\rm XOR} \, q)$ &yes & yes & no & $2n^2 (1+o(1))$\\\hline
\end{tabular}
\end{table}
For completeness, consider Table \ref{tab:boolean}, summarizing all 16 Boolean functions $B(p,q)$ of two variables. We can substitute $p$ and $q$ with arbitrrary linear (or polynomial) threshold functions $f$ and $g$ and compute the corresponding cardinal capacity.
The first group in the table corresponds to the always true (T) and always false (F) functions, and thus to a negligible total capacity of 1. The second group corresponds to a single linear threshold function, and thus its capacity is equal to
$n^2 (1+o(1))$.
All the elements in the second group of the table are also found in the third group, corresponding to the AND and OR operators, because $f \, {\rm AND} \, f =f \, {\rm OR} \, f=f$. Within the third group, all the OR expressions are equivalent to each other, and all the AND expressions are equivalent to each other, when $p$ and $q$ are substituted with linear (or polynomial) threshold gates. This is because whenever $f$ is a polynomial threshold gate of degree $d$, then $\lnot f$ is also a polynomial threshold gate of degree $d$.
The first three groups cover 14 Boolean functions $B(p,q)$ in total. These 14 Boolean functions can be implemented by a single linear threshold gate of
$p$ and $q$, and no other linear threshold gate of $p$ and $q$ exists. Thus the total aggregated capacity corresponding to all these cases is given by the cardinal capacity $C(n,2,1)$ of a network of linear threshold gates with $n$ inputs, 2 hidden units, and 1 output unit. This capacity is given by \cite{baldi2018capacity,baldi2019capacity}:
\begin{equation}
C(n,2,1) = 2n^2 (1+o(1))
\end{equation}
There is a one-to-one correspondence between the set $ \{f \, AND \, g\}$ and the set $\{f \, OR \, g\}$ through the negation operator. Therefore:
\begin{equation}
C(\{f \, {\rm AND} \, g\})=C(\{f \, {\rm OR} \, g\})
\end{equation}
[Note that any Boolean function that isolates one corner of the hypercube is irreducible. For such a function, knowing the values of the sequence $(B(f,g), B(f,\lnot g), B(\lnot f,g), B(\lnot f,\lnot g))$ uniquely determines the values of $f$ and $g$.]
The relevant result in \cite{baldi2018capacity,baldi2019capacity} is obtained using the attention multiplexing technique, applied in fact with the OR Boolean function and a mask of 0s, as described in Section
\ref{sec:multiplexing}. Thus, in short:
\begin{equation}
C(\{f \, {\rm AND} \, g\})=C(\{f \, {\rm OR} \, g\}) =2n^2(1+o(1))
\end{equation}
For the last row of the table,
the output gating (multiplication) of two $-/+$ linear threshold functions corresponds to applying the negation of the XOR Boolean operator. Note that
$f \, {\rm XOR} \, \lnot g \equiv \lnot f \, {\rm XOR} \, g \equiv
\lnot (f \, {\rm XOR} \, g)$ and $f\, {\rm XOR} \, g \equiv \lnot f\, {\rm XOR} \, \lnot g$. As a result we have:
\begin{equation}
\vert\{ f \, {\rm XOR} \, g \} \vert = \vert \{\lnot (f\, {\rm XOR} \, g) \} \vert
\end{equation}
when $f$ and $g$ vary over all possible linear threshold gates. Even more strongly, the corresponding sets of Boolean functions are identical:
\begin{equation}
\{ f\, {\rm XOR} \, g \}=\{\lnot (f \, {\rm XOR} \, g) \}
\end{equation}
Now it is easy to see that
\begin{equation}
n^2 (1 +o(1)) \leq C(\{ f \, {\rm XOR}\, g \})=C(\{\lnot (f\, {\rm XOR} \, g) \}) \leq 2n^2 (1+ o(1))
\end{equation}
The lower bound is obtained by noticing that for any Boolean function $f$, $f \, {\rm XOR} \, F=f$. The upper bound is obtained by noticing that
$f \, {\rm XOR}\, g$ can be implemented by
a network $A(n,2,2,1)$ of linear threshold gates (using the disjunctive normal form), and the capacity of such a network is always at most equal to the sum of the capacities of its individual gates. Finally, the attention multiplexing technique described in Section
\ref{sec:multiplexing}, applied with a mask of 1s (since $f \, {\rm NXOR} \, T= f$),
shows that:
\begin{equation}
C(\{ f \, {\rm XOR} \, g \})=C(\{\lnot (f \, {\rm XOR} \, g) \}) = 2n^2 (1+ o(1))
\end{equation}
Thus products of $0/1$ and products of $-/+$ linear threshold gates have the same capacity, and a similar argument holds for polynomial threshold gates. These results can be summarized in the following theorem, which holds for both $0/1$ and $-/+$ threshold gates:
\begin{theorem}
\label{thm:main10}
The capacity of a linear threshold gate output-gated by another linear threshold gate is given by:
\begin{equation}
2n^2 \left (1+o(1) \right )
\label{eq:main101}
\end{equation}
Likewise,
the capacity of a polynomial threshold gate of degree $d$ output-gated by another polynomial threshold gate of the same degree is given by:
\begin{equation}
2 \frac{n^{d+1}}{d!} \left ( 1+o(1) \right )
\end{equation}
\end{theorem}
\begin{remark}
Furthermore, we have seen that every Boolean function can be written as a product of linear threshold gates with an exponential number of terms (Proposition \ref{prop:universal}). Theorem \ref{thm:main10} shows that it is not possible to do so using only a polynomial number of terms, since this would result in an overall capacity that is only polynomial, whereas the capacity of $B_n$ is $2^n$.
\end{remark}
\begin{remark}
The estimate in Equation \ref{eq:main101}
can be slightly refined using Equation
\ref{eq:komlos} instead of \ref{eq:zuev}.
\end{remark}
\begin{remark}
These results can be extended to other interesting cases. For instance, if we assume that the weights of the gated and gating linear threshold neurons are binary with $-/+$ values, then the output gating capacity is equal to
$2n \left ( 1 + o(1) \right )$.
\end{remark}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{hierarchyF.eps}
\end{center}
\vspace{-0.2cm}
\caption{Hierarchy of classes of Boolean functions of $n$ variables according to their asymptotic capacity $C$ and number of parameters $W$. Linear threshold functions require $n$ parameters and achieve capacity $n^2$. Linear threshold functions gated by linear threshold functions require $2n$ parameters and achieve larger capacity (e.g. they contain XOR) equal to $2n^2$. Quadratic threshold functions require
$n^2/2$ parameters and achieve capacity $n^3/2$
\cite{baldi2019polynomial}. The set of all Boolean functions corresponds to an exponential capacity exactly equal to $2^n$. Note that in all these cases $C=nW$.}
\label{fig:hierarchy}
\end{figure}
\subsubsection{General Composition ($k\geq 2$).}
\label{seq:genral}
The results in the previous section can be generalized to the class of functions of the form $B(f_1,\ldots,f_k)$, where $B$ is a Boolean function of $k$ variables and, for each $j$, $f_j \in \mathcal{T}(n;d_j)$. We denote this class by: $\mathcal{T}_B(n; d_1,\ldots,d_k)$.
\begin{theorem}[Composition]
\label{thm:composition}
Let $B$ be an irreducible Boolean operator in $k$ variables.\footnote{Irreducibility means that $B$ cannot be expressed as a Boolean operator in fewer than $k$ variables.}
Then:
\begin{equation}
\prod_{j=1}^k \abs{\mathcal{T}(n-k+1;d_j)}
\le \abs{\mathcal{T}_B(n; d_1,\ldots,d_k)}
\le \prod_{j=1}^k \abs{\mathcal{T}(n;d_j)}
\end{equation}
Furthermore, when $B$ ranges over the set of all irreducible Boolean functions of two variables
(there are 10 of them), we have:
\begin{equation}
\abs[2]{\bigcap_B \mathcal{T}_B(n; d_0,d_1)}
\ge \abs{\mathcal{T}(n-1;d_0)} \, \abs{\mathcal{T}(n-1;d_1)}
\end{equation}
where the intersection is over the ten irreducible binary Boolean operators.
\label{eq:th1}
\end{theorem}
The complete proof of this theorem is given in the Appendix. The upper bound is easy, and the lower bound relies on the attention multiplexing approach (Section \ref{sec:multiplexing}). In the special case $k=2$, for two polynomial threshold gates of degrees $d_1$ and $d_2$, Theorem \ref{thm:composition} yields:
\begin{equation}
\abs{\mathcal{T}(n-1;d_1)} \, \abs{\mathcal{T}(n-1;d_2)}
\le \abs{\mathcal{T}_B(n; d_1,d_2)}
\le \abs{\mathcal{T}(n;d_1)} \, \abs{\mathcal{T}(n;d_2)}
\end{equation}
and when $d_1=d_2=d$:
\begin{equation}
\abs{\mathcal{T}(n-1;d)}^2
\le \abs{\mathcal{T}_B(n; d,d)}
\le \abs{\mathcal{T}(n;d)}^2
\end{equation}
Thus, in the case of output gating of two linear threshold gates, we have:
\begin{equation}
\abs{\mathcal{T}(n-1;1)}^2
\le \abs{\mathcal{T}_B(n; 1,1)}
\le \abs{\mathcal{T}(n;1)}^2
\end{equation}
Substituting the estimates in Equations \ref{eq:zuev}--\ref{eq:bv} in these inequalities gives immediately Theorem
\ref{thm:main10}. Note that the intersection across all 10 irreducible Boolean functions is large.
\subsection{Capacity of Single Attention Units: Synaptic Gating}
We are now ready to compute the capacity for the case corresponding to the right hand side of Figure \ref{fig:AttentionUnit12}, where one threshold unit synaptically gates the weights of another threshold unit.
To begin with, we look at the case where all the weights of the gated unit are gated simultaneously. The main result is as follows:
\begin{theorem}
\label{thm:fullsynapticgating}
Let $f(x)$ and $g(x)$ be two linear or polynomial threshold gates (not necessarily of the same degree), both with the same $-/+$ or $0/1$ output encoding and $n$ binary input variables. Then full synaptic gating of $f$ by $g$, where all the coefficients of $f$ are multiplied by $g$, is equivalent to output gating of $f$ by $g$. In particular, if both gates are linear threshold gates, then the corresponding capacity is given by:
\begin{equation}
2n^2 \left ( 1 +o(1) \right )
\end{equation}
and if both gates are polynomial threshold gates of degree $d$, then the corresponding capacity is given by:
\begin{equation}
2\frac {n^{d+1}}{d!} \left ( 1 +o(1) \right )
\end{equation}
\end{theorem}
\begin{proof}
We sketch the proof when $f$ and $g$ are linear threshold gates, but the argument extends immediately to polynomial threshold gates. Let us assume that $f(x) = \sign (\sum_i w_ix_i)$ and $g(x) = \sign (\sum_i v_ix_i)$. Then, with full synaptic gating, the gated function satisfies:
$f_g(x)=\sign (\sum_i g(x)w_i x_i)=
\sign (g(x) \sum_i w_i x_i)=\sign (g(x))
\sign (\sum_i w_i x_i) = g(x) f(x)$, since $g(x) \in \{-1,+1\}$.
In the case of $0/1$ units, if $H$ is the Heaviside function, then:
$f_g(x)=H (\sum_i g(x)w_i x_i)=H(g(x)
\sum_i w_i x_i)$. If $g(x)=1$, this is the same as $f(x)g(x)$. Likewise, if $g(x)=0$, as long as we define $H(0)=0$, then $f_g(x)$ is also equal to $f(x)g(x)$. [Note that the gating is applied to the bias too.]
\end{proof}
\begin{remark} In this particular case, we can also consider, to some extent, mixed encodings. If $f(x)$ is a $-/+$ gate and $g(x)$ is a $0/1$ gate, and we define $\sign 0 =0$, then we also have $f_g(x)=f(x)g(x)$ everywhere. If $f(x)$ is a $0/1$ gate and $g(x)$ is a $-/+$ gate, then when $g(x)=1$ we also have $f_g=fg$. However, when $g(x)=-1$, we get $f_g(x)=\lnot f(x)=1-f(x)$.
\end{remark}
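A quick numerical check of the equivalence in the $-/+$ case (our sketch; ties $w \cdot x=0$ occur with probability zero for generic continuous weights, and we use the convention $\sign 0 = 1$ consistently on both sides):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 6
w, v = rng.standard_normal(n), rng.standard_normal(n)
sgn = lambda s: 1 if s >= 0 else -1       # -/+ convention

for x in rng.choice((-1, 1), size=(100, n)):
    fx, gx = sgn(w @ x), sgn(v @ x)
    f_gated = sgn((gx * w) @ x)           # full synaptic gating by g
    assert f_gated == fx * gx             # equals output gating f * g
\end{verbatim}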
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.70\columnwidth]{SynapticGatingOneWeight.eps}
\end{center}
\vspace{-0.2cm}
\caption{Synaptic gating with a single attention unit. Both the gated function $f$ and the gating function $g$ are linear (or polynomial) threshold gates of the same input vector $x=(x_1, \ldots, x_n)$. Through the synaptic gating operation, a single synaptic weight of the function $f$ ($w_1$ in the figure) is multiplied by $g$.}
\label{fig:synapticgatingoneweight}
\end{figure}
\begin{remark}
In both Theorems \ref{thm:main10} and \ref{thm:fullsynapticgating}
there is approximately a doubling of the capacity at the cost of doubling the number of parameters.
\end{remark}
Finally, we consider the synaptic gating case where the gating unit gates only {\it one} of the weights of the gated unit
(Figure \ref{fig:synapticgatingoneweight}).
The following Proposition provides bounds on the corresponding capacity.
\begin{proposition}
\label{prop:singleweightgating}
Consider the case of a linear threshold gate $f$ with $n$ binary inputs, where one of the weights is synaptically gated by the output of a second linear threshold gate $g$ of the same $n$ inputs.
Then the capacity $C$ satisfies:
\begin{equation}
n^2 \left ( 1 + o(1) \right ) \leq C \leq 2n^2\left ( 1 + o(1) \right )
\end{equation}
If the linear threshold gates are replaced by polynomial threshold gates of degree $d$, the capacity $C$ satisfies:
\begin{equation}
\frac{n^{d+1}}{d!} \left ( 1 + o(1) \right ) \leq C \leq 2
\frac{n^{d+1}}{d!}
\left( 1 + o(1) \right )
\end{equation}
The same bounds hold for the case of additive activation attention between two linear threshold gates, or between two polynomial threshold gates of degree $d$, of the same $n$ inputs.
\end{proposition}
\begin{proof}
The result is true for both $0/1$ and $-/+$ encodings of the outputs. We provide the proof in the linear case, but the technique is the same for polynomial threshold gates of degree $d>1$. The lower bound results immediately from the fact that the gating unit could have a constant output equal to 1 ($g(x)=1$). In this case the gated function is equal to $f(x)$, and the lower bound is the corresponding capacity estimate.
The upper bound is simply the sum of the capacities. A similar argument applies to the case of additive activation attention.
\end{proof}
\section{Capacity of Attention Layers}\label{sec:capacitylayers}
The previous attention results are obtained using only two neurons, a gating neuron and a gated neuron, with either output gating or synaptic gating. We now extend the capacity analysis to cases where there is a layer of gating neurons, as shown in Figure \ref{fig:AttentionLayer12} for both output and synaptic gating.
\begin{figure}[ht]
\begin{center}
\includegraphics[trim={0 0 0 1.0cm},width=0.99\columnwidth]{AttentionLayer12.eps}
\end{center}
\vspace{-1.0cm}
\caption{Left: output gating by a gating layer. For the same $n$ dimensional input
vector $x$, there are $m$ hidden units computing functions $h_1(x),\ldots, h_m(x)$,
and $m$ corresponding gating units computing functions
$g_1(x),\ldots, g_m(x)$. With the gating, the effective output of the hidden units is given by $h_1(x)g_1(x), \ldots, h_m(x)g_m(x)$. The final output unit produces an output of the form $f(h_1(x)g_1(x), \ldots, h_m(x)g_m(x))$.
In the capacity analysis, we assume that the functions $h$, $g$, and $f$ are linear threshold gates.
Right: synaptic gating by a gating layer.
In this case, there is a unit computing
a function $f(x)$ with $n$ weights $w_1, \ldots, w_n$. There are $n$ gating functions
$g_1(x),\ldots, g_n(x)$, each one multiplicatively gating one of the weights $w_i$. If $f= \sign (\sum_i w_ix_i)$
then $f_g(x)=\sign (\sum _i g_i(x) w_i x_i)$.
}
\label{fig:AttentionLayer12}
\end{figure}
\subsection{Capacity of Attention Layers: Output Gating}
We now examine the capacity of a network with one attention layer with output gating, as depicted on the left hand side of Figure \ref{fig:AttentionLayer12}. Thus we consider an architecture with $n$ inputs, $m$ hidden linear threshold units gated by $m$ corresponding linear threshold units, and one final linear threshold output gate. All the linear threshold gates have $-/+$ outputs, although the following theorem is unchanged, and the method of proof is similar, if the gates have 0/1 outputs. We denote by
$\mathcal{T}({n,m,1};\times)$ the corresponding set of Boolean functions.
Note that this is the same architecture as the one used for computing the dot product of the gated and gating hidden layer outputs, except that the final unit is non-linear with variable weights, instead of being linear with fixed weights equal to one.
We will also let $\mathcal{T}({n,1};\times)$ denote the set of Boolean functions corresponding to one linear threshold gate of $n$ variables output-gated by another linear threshold gate of the same variables.
\begin{theorem}
\label{thm:outputlayergating}
The capacity $C(\mathcal{T}({n,m,1};\times))$ of the
set of Boolean functions corresponding to $n$ inputs, $m$ hidden linear threshold gates output-gated by $m$ hidden linear threshold gates of the same inputs, followed by one linear threshold gate output satisfies:
\begin{equation}
mn^2 \leq C(\mathcal{T}({n,m,1};\times)) \leq 2mn^2 \left ( 1+o(1) \right )
\end{equation}
for $n \to \infty$,
and for any choice of $m \in [1,2^{o(n)}]$.
Furthermore:
\begin{equation}
C(\mathcal{T}({n,m,1};\times))=m C(\mathcal{T}(n,1;\times)) \left (1+o(1) \right)
\end{equation}
Thus:
\begin{equation}
C(\mathcal{T}({n,m,1};\times)) = 2mn^2 \left (1+o(1) \right )
\end{equation}
\end{theorem}
\begin{proof}
Let us denote by $f$ the map between the input layer and the hidden layer with gating, and by $\phi$ the map from the hidden layer to the output layer. For the upper bound, we first note that the total number of possible maps $f$ is bounded by $2^{mC(\mathcal{T}(n,1;\times))}\leq
2^{2mn^2(1+o(1))}$, since $f$ consists of $m$ threshold gates gated by $m$ threshold gates, and thus each gated unit corresponds to at most $2^{C(\mathcal{T}(n,1;\times))}\leq 2^{2n^2(1+o(1))}$ possibilities by the Theorems in Section \ref{sec:capacitysingle}.
Any fixed map $f$ produces at most $2^n$ distinct vectors in the hidden layer. It is known \cite{anthony2001discrete} that the number of threshold functions $\phi$ of $m$ variables defined on at most $2^n$ points is bounded by:
\begin{equation}
2 {2^{n}-1 \choose \leq m} =2^{nm (1+o(1))}
\end{equation}
using the assumption $m \leq 2^{o(n)}$. Thus, under our assumptions, the total number of functions of the form $\phi \circ f$ is bounded by the product of the bounds above which yields immediately:
\begin{equation}
C(\mathcal{T}({n,m,1};\times) )\leq
mC(\mathcal{T}(n,1;\times)) \left (1+o(1) \right )
\leq 2mn^2 \left (1+o(1) \right )
\end{equation}
For the lower bound, we can force the gating units to act as the identity on the gated units (i.e. to have a constant output equal to 1). In this particular case, the gating units can be ignored and we need to count the number of Boolean functions that can be implemented in the remaining architecture. A theorem in \cite{baldi2019capacity} shows that the corresponding capacity is equal to $mn^2(1+o(1))$.
To prove the rest of the theorem, we use
attention multiplexing. As a reminder, the basic idea is to have a small set of the input units act as attention units that can be used to select a particular function in the hidden layer. The same setting of the attention units will be used to select the corresponding functions in both the gating and gated layers.
More formally, we decompose $n$ as: $n=n^- + n^+$ where $n^- = \lceil \log_2 m \rceil$ corresponds to the attention units. Likewise, we decompose each input vector $x=(x_1,\ldots,x_n )\in \{-1,+1\}^n$ as: $x=(x^-,x^+)$, where:
\begin{equation}
x^-=(x_1,\ldots,x_{n^-}) \in
\{-1,+1\}^{n^-} \quad {\rm and} \quad
x^+=(x_{n^-+1},\ldots,x_{n}) \in
\{-1,+1\}^{n^+}
\end{equation}
For any gated Boolean linear threshold map $f^+$ from
$\{ -1,+1\}^{n^+}$ to $\{-1,+1\}^m$, we can uniquely derive a map
$f=(f_1,\ldots, f_m)$ from $\{ -1,+1\}^{n}$ to $\{-1,+1\}^m$ defined by:
\begin{equation}
f_i(x^-,x^+)= [x^-=i] \;\; AND \;\; [f_i^+(x^+)]
\end{equation}
Here $x^-=i$ signifies that the binary vector $x^-$ represents the digit $i$. In other words $x^-=i$ is used to select the $i$-th unit in the gated layer as well as in the gating layer,
and filter $f^+$ by retaining only the value of $f_i^+$. By Lemma \ref{lm:multi},
this selection procedure can be expressed using a single linear threshold function of the input $x^-$ for the gated layer, and similarly for the gating layer.
We say that $f$ is obtained from $f^+$ by multiplexing and $f$ is a gated threshold map.
It is easy to see that the filtering of two distinct maps $f^+$ and $g^+$ results in two distinct maps $f$ and $g$.
Now let us use $\phi = OR$ in the top layer--note that OR can be expressed as a linear threshold function. Then it is also easy to see that $\phi \circ f \not = \phi \circ g$. Thus the total number of Boolean functions that can be implemented in this architecture is lower-bounded by the number of all gated Boolean maps $f^+$. This yields:
\begin{equation}
C(\mathcal{T}(n,m,1;\times)) \geq m
C(\mathcal{T}(n^+,1;\times)) \left (1 + o(1) \right )=
2m n^2 \left (1 + o(1) \right )
\end{equation}
using the fact that $n^+=n- \lceil \log_2 m \rceil$,
and $\lceil \log_2 m \rceil = o(n)$ by assumption.
Thus:
$C(\mathcal{T}(n,m,1;\times))=mC(\mathcal{T}(n,1;\times)) \left ( 1 + o(1) \right)=2mn^2 \left ( 1+o(1) \right )$.
\end{proof}
\begin{remark}
In Theorem \ref{thm:outputlayergating}, we see again that both the capacity and the number of parameters approximately double at the same time.
\end{remark}
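The multiplexing device at the heart of this proof is easy to illustrate numerically. The sketch below (Python/NumPy; our own illustration, with the single-gate implementation justified by Lemma~\ref{lm:multi}) realizes $f_i(x^-,x^+)= [x^-=i] \;\; AND \;\; [f_i^+(x^+)]$ with one linear threshold gate per hidden unit: the $n^-$ attention inputs receive weights large enough to clamp every unselected unit to $-1$, while the selected unit is left free to compute $f_i^+(x^+)$.
\begin{verbatim}
import numpy as np
from itertools import product

def multiplexed_unit(pattern, w_plus, b_plus, x_minus, x_plus):
    # One -/+ linear threshold gate computing
    # [x_minus == pattern] AND f_plus(x_plus), f_plus = sign(w.x + b).
    slack = w_plus @ x_plus + b_plus
    M = np.sum(np.abs(w_plus)) + abs(b_plus) + 1.0  # dominates |slack|
    att = pattern @ x_minus - len(pattern)   # 0 iff selected, <= -2 else
    return np.sign(M * att + slack)

# Tiny check: m = 4 hidden units selected by n_minus = 2 attention bits.
n_minus, n_plus, m = 2, 5, 4
rng = np.random.default_rng(0)
patterns = np.array(list(product([-1, 1], repeat=n_minus)))  # digit -> x^-
W, b = rng.normal(size=(m, n_plus)), rng.normal(size=m)
x_plus = rng.choice([-1, 1], size=n_plus)
for i in range(m):
    outs = [multiplexed_unit(patterns[j], W[j], b[j], patterns[i], x_plus)
            for j in range(m)]
    assert outs[i] == np.sign(W[i] @ x_plus + b[i])  # unit i computes f_i^+
    assert all(o == -1.0 for j, o in enumerate(outs) if j != i)  # clamped
\end{verbatim}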
\subsection{Capacity of Attention Layers: Synaptic Gating}
We now examine the capacity of a network with one attention layer with synaptic gating, as depicted on the right hand side of Figure \ref{fig:AttentionLayer12}, with each gating neuron gating a different weight of a gated neuron.
\begin{proposition}
Consider the case of a linear threshold gate with $n$ inputs and $n$ weights, where each weight is synaptically gated by an independent linear threshold gate of the same $n$ inputs.
Then the capacity $C$ satisfies:
\begin{equation}
n^2 \left ( 1 + o(1) \right ) \leq C \leq n^3\left ( 1 + o(1) \right )
\end{equation}
If the linear threshold gates are replaced by polynomial threshold gates of degree $d$, the capacity $C$ satisfies:
\begin{equation}
\frac{n^{d+1}}{d!} \left ( 1 + o(1) \right ) \leq C \leq
\frac{n^{d+2}}{d!}
\left( 1 + o(1) \right )
\end{equation}
\end{proposition}
\begin{proof}
The proof is similar to the proof of Proposition \ref{prop:singleweightgating}. The lower bound is obtained by constraining all the gating units to have a constant output equal to 1. The upperbound is simply the sum of all the capacities.
\end{proof}
Likewise, we can consider a layered variant of this architecture, with parallel layers of gating and gated units, described in the following proposition.
\begin{proposition}
Consider the case of
an architecture with $n$ inputs, one layer of $m$ gating units, and one parallel layer of $m$ gated units. Each gating unit is uniquely paired with one gated unit (one to one) and synaptically gates one of the weights of the gated unit.
Then the capacity $C$ satisfies:
\begin{equation}
m n^2 \left ( 1 + o(1) \right ) \leq C \leq 2m n^2\left ( 1 + o(1) \right )
\end{equation}
If the linear threshold gates are replaced by polynomial threshold gates of degree $d$, the capacity $C$ satisfies:
\begin{equation}
m \frac{n^{d+1}}{d!} \left ( 1 + o(1) \right ) \leq C \leq
2m\frac{n^{d+1}}{d!}
\left( 1 + o(1) \right )
\end{equation}
\end{proposition}
\begin{proof}
The proof is similar to the proof of Proposition \ref{prop:singleweightgating}. The lower bound is obtained by constraining all the gating units to have a constant output equal to 1. The upperbound is simply the sum of all the capacities.
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
In addition to the fundamental role attention plays in brain function, attention mechanisms have also become important for artificial neural networks and deep learning.
Here we have taken the first steps towards building a theory of attention mechanisms by first identifying the quarks of attention, i.e. its smallest building blocks.
Using the three variable types of the SM allows for the systematic identification and organization of possible attention building blocks based on their origin type, target type, and whether the mechanism of action is additive or multiplicative. Assuming that the attention signal originates from the output of some neurons, this yields six possibilities, which can then be reduced to three main cases: activation attention, output gating, and synaptic gating. Activation attention falls within the SM, whereas output gating and synaptic gating correspond to multiplicative extensions of the SM. Current attention-based architectures in deep learning, including transformers, are built out of attention modules which are themselves built out of output gating and synaptic gating operations. These operations and modules can be viewed as new primitives in the language of neural architectures in digital simulations and, because they are differentiable, the usual backpropagation learning framework can easily be extended to them.
However, in a physical neural machine, these operations require additional connections (wires) and physical mechanisms for implementing multiplicative interactions.
Output gating can be used dynamically to directly silence unattended neurons, and to magnify the output of attended neurons. It can also be used as the main building block of a shallow module that can compute the dot product of two vectors of neuronal activities. The latter is a key, massively used, component of transformer architectures.
Synaptic gating is a fast synaptic mechanism that can be used dynamically to silence or weigh the attended synapses. It is often used in combination with a softmax operator to enable dynamic convex combinations of vectors, as in the transformer architectures.
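To connect these two primitives to their use in transformers, the following minimal sketch (standard scaled dot-product attention, written by us for illustration; it is not code from this paper) shows output gating at work in the pairwise dot products, and softmax synaptic gating at work in the dynamic convex combination of the value vectors.
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_head(Q, K, V):
    # Output gating: the pairwise dot products Q K^T multiply neuronal
    # outputs together. Synaptic gating: the softmax scores act as
    # dynamic synaptic weights on the rows of V, producing convex
    # combinations of the value vectors.
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (L, L) weights
    return scores @ V
\end{verbatim}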
The concept of fast synapses that can vary their strengths on fast time scales is not new and has been associated with different roles in different contexts. For instance, one potential role is the storage of transient information, such as intermediary results during mental reasoning, or simply the memorization of the beginning of a paragraph as the reading of the paragraph proceeds. A second potential role stems from viewing synaptic weights as computer programs, and thus fast synapses as enabling dynamic changes in the programs that are being executed and the implementation of parameterized functions. And a third role, studied here, is the enabling of attention. These three roles are not independent, and they raise interesting architectural questions for deep learning and neuroscience, including the possible need for multiple synaptic time scales interacting in hierarchical ways.
To see this, as an example, consider the reading paradigm where information about the first sentence of a long paragraph is stored using a set of fast weights. If, as the reading proceeds, one must suddenly access
a specific subset of this transiently stored information,
attention must be directed towards certain particular words contained in the first sentence.
In a deep learning architecture, this can be thought of in terms of a softmax synaptic gating, as is done in transformer and other NLP architectures. Thus somehow this fast weight attention mechanism must operate upon, and be faster than, the fast weight synaptic mechanism used to store information about the first sentence.
Attention mechanisms allow the attending network to modulate the function computed by the attended network, thereby expanding the scope of useful functions that can be efficiently implemented and trained in deep learning.
Because the SM already has universal approximation properties, its extensions should not be evaluated in terms of which functions can be approximated, but rather in terms of other efficiencies.
While attention blocks act as new primitives in standard deep learning software libraries,
having access to output gating and synaptic gating mechanisms in a physical neural network can reduce its depth.
Using the notion of cardinal capacity, and working with the approximation provided by Boolean neurons (linear or polynomial threshold gates), enables systematic investigations of the capacity of attentional circuits that were previously not possible. In particular, we have been able to estimate the capacity of basic attentional circuits involving linear, or polynomial, threshold gates.
In many cases of interest, we found essentially a doubling of the capacity with a doubling of the number of parameters, which is a sign of efficiency.
Perhaps surprisingly, a key ingredient in the capacity proofs is the third form of attention, activation attention. Activation attention is used to prove capacity lower bounds by the multiplexing approach which selects a unit in a layer, as a function of the attending units, while driving the remaining units in the layer to low or high saturation.
There is work left for tightening some of the estimates and for extending them to other activation functions and other architectures.
\begin{figure}[ht]
\begin{center}
\includegraphics[trim={0 0 0 0.5cm},width=0.99\columnwidth]{SMF.eps}
\end{center}
\vspace{0cm}
\caption{Standard model and some of its extensions.}
\label{fig:SM}
\end{figure}
Overall, both output and synaptic gating are extensions of the SM which introduce quadratic terms
in the SM (Figure \ref{fig:SM}). Quadratic terms are powerful but expensive: a neuron with full quadratic activation over its $n$ inputs requires on the order of $n^2$ synaptic parameters. Using quadratic activations everywhere in a large deep architecture leads to implementations that
may not be efficient in terms of parameters and learning. Attention mechanisms are a way of introducing quadratic terms in a sparse way, in order to gain some of the benefits of quadratic activations, without paying the full price.
Finally, we can return to the quotes in the introduction linking attention to awareness and pointing to the inadequacy of having a single term.
While subjectively we feel that we can control and direct our attention and be aware of its shifts, it should be obvious that attention mechanisms, such as output or synaptic gating, are computational mechanisms that do not require awareness. They can operate at all levels of a cognitive architecture, for instance to help implement dynamically whole-part hierarchies and ultimately awareness itself. Thus, in short, awareness is not necessary for attention, but attention may be necessary for awareness. Having a single term is indeed inadequate and, in time, it may have to be replaced with multiple terms
to better capture the underlying complexities.
\section{Appendix: Detailed Proof of Theorem \ref{thm:composition}}
Here a polynomial threshold function is a function of the form
$f = \sign(p): \{0,1\}^n \to \{-1,1\}$
where $p$ is a polynomial in $n$ real variables of degree at most $d$.
The class of all such functions is denoted $\mathcal{T}(n;d)$.
Let $B(z_1,\ldots,z_k) : \{-1,1\}^k \to \{-1,1\}$ be a Boolean function in $k$ variables.
We are interested in the class of functions of the form
$B(f_1,...,f_k): \{0,1\}^n \to \{-1,1\}$ where $f_j \in \mathcal{T}(n;d_j)$. Denote this class by $\mathcal{T}_B(n; d_1,\ldots,d_k)$.
We want to prove the following theorem:
\par\null\par
\noindent
{\bf Theorem} (Composition). {\it
Let $B$ be an irreducible Boolean operator in $k$ variables.
\footnote{Irreducibility means that $B$ cannot be expressed as a Boolean operator in fewer than $k$ variables.}
Then:}
\begin{equation}
\prod_{j=1}^k \abs{\mathcal{T}(n-k+1;d_j)}
\le \abs{\mathcal{T}_B(n; d_1,\ldots,d_k)}
\le \prod_{j=1}^k \abs{\mathcal{T}(n;d_j)}
\label{eq:th1a}
\end{equation}
The upper bound is trivial: it follows from counting the total number of tuples $(f_1,\ldots,f_k)$ with $f_j \in \mathcal{T}(n;d_j)$. The lower bound is nontrivial, except for $k=1$ where both bounds coincide.
The key to the proof is the multiplexing (activation attention) procedure, where $k$ input components are viewed as attention units capable of producing a constant mask in the hidden layer, except for the attended function. Here for simplicity we use a sparse encoding in the $k$ components, although dense encoding is also possible, as in the proof of Theorem \ref{thm:outputlayergating}. Dense encoding would reduce the number of attending units from $k$ to $\lceil \log_2 k \rceil$, as in Section
\ref{sec:multiplexing}. As a side note, using more attention units than the minimal number required can be used to reduce the size of the attention weights, or to make the attention mechanism less sensitive to each individual attention bit.
To prove the lower bound in Composition Theorem~\ref{thm:composition}, let us restate it equivalently as:
\begin{equation}
\prod_{j=0}^k \abs{\mathcal{T}(n-k;d_j)}
\le \abs{\mathcal{T}_B(n; d_0,\ldots,d_k)}
\le \prod_{j=0}^k \abs{\mathcal{T}(n;d_j)}.
\label{eq:th1.0}
\end{equation}
Here $B$ is viewed as an irreducible operator in $k+1$ variables $z_0,\ldots,z_k$; this is just a re-indexing of \eqref{eq:th1a} with $k$ replaced by $k+1$.
\par\noindent
Irreducibility implies that if we select any input component $i$, the value of $B$ cannot be determined entirely from the value of the remaining components alone. More formally:
\begin{lemma} \label{lem: restriction}
Consider an irreducible Boolean operator $B = B(z_0,\ldots,z_k)$
and an index $i \in \{0,\ldots,k\}$.
There exist signs $\theta \in \{-1,1\}$ and $\theta_j \in \{-1,1\}$, $j \in \{0,\ldots,k\} \setminus \{i\}$, such that:
\begin{equation}
B(z_0,\ldots,z_k) = \theta z_i
\quad \text{whenever } z_j = \theta_j
\text{ for all } j \ne i.
\label{eq:lm1}
\end{equation}
\end{lemma}
\begin{proof}
Consider $B(z_0,\ldots,z_k)$ as a function of $z_i$. If this function is constant in the variable $z_i$ no matter how we fix the other variables, then the value of $B(z_0,\ldots,z_k)$ is entirely determined by the values of these other variables, which contradicts irreducibility. Therefore, there exists some assignment $z_j = \theta_j$, $j \ne i$, such that
the function $B(\theta_0,\theta_1,\ldots, z_i, \ldots, \theta_k)$
is not constant in $z_i$. But there exist only two non-constant Boolean functions $f$ of one variable, $f(x)=x$ and $f(x)=-x$, and this determines $\theta$.
\end{proof}
\par\noindent
The next lemma essentially states that we can fit an affine function of $k$ variables to $k+1$ points.
\par\noindent
\begin{lemma} \label{lem: fitting}
Let $e_0=0$ and $e_1,\ldots,e_k$ denote the canonical basis vectors in $\mathbb{R}^k$.
Then, for any choice of index $j \in \{0,\ldots,k\}$
and signs $\theta_i \in \{-1,1\}$, $i \in \{0,\ldots,k\} \setminus \{j\}$
there exists an affine function $q: \mathbb{R}^k \to \mathbb{R}$ such that:
\begin{equation}
q(e_i) =
\begin{cases}
0, & i=j \\
\theta_i, & i \ne j
\end{cases}
\label{eq:lm2}
\end{equation}
for all $i \in \{0,\ldots,k\}$.
\end{lemma}
\begin{proof}
It is straightforward to check that the affine function:
\begin{equation}
q(z) =
\begin{cases}
\displaystyle \sum_{i=1}^{k} \theta_i z_i, & j = 0 \\[2mm]
\displaystyle \theta_0 (1 - z_j) + \sum_{i \in \{1,\ldots,k\} \setminus \{j\}} (\theta_i-\theta_0)z_i, & j \ge 1
\end{cases}
\end{equation}
satisfies the required property: since $e_0=0$, $q(e_0)$ is the constant term of $q$, while $q(e_i)$, for $i\ge1$, adds to it the coefficient of $z_i$.
\end{proof}
\par\noindent
We can now use the previous lemma to derive a lemma for consistently extending a function of $n-k$ variables to a function of $n$ variables. Here $k$ components are used as selector (filter) variables, as in the proof of Theorem
\ref{thm:outputlayergating}.
\begin{lemma} \label{lem: F}
Consider a function $f \in \mathcal{T}(n-k;d)$, an index $j \in \{0,\ldots,k\}$,
and signs $\theta \in \{-1,1\}$ and $\theta_i \in \{-1,1\}$, $i \in \{0,\ldots,k\} \setminus \{j\}$.
There exists a function $F \in \mathcal{T}(n;d)$ such that:
\begin{equation}
F(e_i \oplus x) =
\begin{cases}
\theta f(x), & i=j \\
\theta_i, & i \ne j
\end{cases}
\end{equation}
for all $x \in \{0,1\}^{n-k}$. Here $\oplus$ denotes the concatenation operator.
\end{lemma}
\begin{proof}
Express the polynomial threshold function $f$ as:
\begin{equation}
f(x) = \sign(p(x))
\quad \text{for } x \in \{0,1\}^{n-k}
\end{equation}
where $p$ is a polynomial in $n-k$ variables and of degree at most $d$.
Let $q$ be a function that satisfies the conclusion of Lemma~\ref{lem: fitting}.
Fix a number $M$ large enough so that $M > \abs{p(x)}$ for all $x \in \{0,1\}^{n-k}$,
and define:
\begin{equation}
F(z \oplus x) = \sign \left( M q(z) + \theta p(x) \right)
\end{equation}
for all $z \in \mathbb{R}^k$ and $x \in \mathbb{R}^{n-k}$.
By construction, $F$ is a polynomial threshold function on $\{0,1\}^n$ of degree at most $d$ as required.
Let us check that $F$ satisfies the conclusion of the lemma.
If $z=e_j$, we have $q(z)=0$ due to our choice of $q$ (per the conclusion of Lemma~\ref{lem: fitting}), and we get
$F(z \oplus x) = \sign(\theta p(x)) = \theta f(x)$.
If $z=e_i$ with $i \ne j$, then our choice of $q$ implies
$F(z \oplus x) = \sign(M \theta_i + \theta p(x))$.
The choice of $M$ guarantees that the term $M \theta_i$ dominates the term $\theta p(x)$ in magnitude, so we have
$F(z \oplus x) = \sign(M \theta_i) = \theta_i$.
\end{proof}
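The masking construction of Lemmas~\ref{lem: fitting} and \ref{lem: F} can be checked numerically. The sketch below (Python/NumPy; our own illustration, shown in the linear case $d=1$ on a toy example) builds the affine selector $q$ and the extension $F(z \oplus x)=\sign(Mq(z)+\theta p(x))$, and verifies both cases of Lemma~\ref{lem: F}.
\begin{verbatim}
import numpy as np

def fit_q(k, j, theta):
    # Affine q with q(e_j) = 0 and q(e_i) = theta[i] for i != j,
    # where e_0 = 0 (Lemma "fitting"; theta[j] is not used).
    def q(z):
        if j == 0:
            return sum(theta[i] * z[i - 1] for i in range(1, k + 1))
        s = theta[0] * (1 - z[j - 1])
        return s + sum((theta[i] - theta[0]) * z[i - 1]
                       for i in range(1, k + 1) if i != j)
    return q

k, j, theta_sign = 2, 1, -1       # attend to index j = 1
theta = {0: 1, 2: -1}             # prescribed constant outputs off index j
p = lambda x: x[0] - 0.5          # degree-1 polynomial, f = sign(p)
q = fit_q(k, j, theta)
M = 2.0                           # any M > max |p(x)| over {0,1}^{n-k}
F = lambda z, x: np.sign(M * q(z) + theta_sign * p(x))

e = [np.zeros(k)] + [row for row in np.eye(k)]  # e_0 = 0, e_1, ..., e_k
for x in ([0.0], [1.0]):
    assert F(e[j], x) == theta_sign * np.sign(p(x))  # selected: theta*f(x)
    for i in range(k + 1):
        if i != j:
            assert F(e[i], x) == theta[i]            # masked: theta_i
\end{verbatim}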
\par\noindent
We can now use Lemma \ref{lem: F} for the simultaneous extension and filtering of several functions of $n-k$ variables relative to an irreducible Boolean function $B$.
\begin{lemma} \label{lem: embedding}
For any $(k+1)$-tuple of functions $(f_0,\ldots,f_k)$ where $f_j \in \mathcal{T}(n-k;d_j)$
there exists a $(k+1)$-tuple of functions $(F_0,\ldots,F_k)$ where $F_j \in \mathcal{T}(n;d_j)$
such that:
\begin{equation}
B(F_0,\ldots,F_k)(e_i \oplus x)
= f_i(x)
\end{equation}
for all $i \in \{0,\ldots,k\}$ and $x \in \{0,1\}^{n-k}$.
\end{lemma}
\begin{proof}
Lemma~\ref{lem: restriction} yields the existence of signs $\theta_i \in \{-1,1\}$ for $i \in \{0,\ldots,k\}$
and $\theta_{ij} \in \{-1,1\}$ for distinct $i,j \in \{0,\ldots,k\}$, such that:
\begin{equation} \label{eq: B restricted}
B(z_0,\ldots,z_k) = \theta_i z_i
\quad \text{whenever } z_j = \theta_{ij}
\text{ for all } j \ne i.
\end{equation}
Now consider the functions $f_j \in \mathcal{T}(n-k;d_j)$, $j \in \{0,\ldots,k\}$.
Lemma~\ref{lem: F} yields the existence of functions $F_j \in \mathcal{T}(n;d_j)$, $j \in \{0,\ldots,k\}$, such that:
\begin{equation} \label{eq: Fj constructed}
F_j(e_i \oplus x) =
\begin{cases}
\theta_i f_i(x), & i=j \\
\theta_{ij}, & i \ne j
\end{cases}
\end{equation}
for all $i,j \in \{0,\ldots,k\}$ and $x \in \{0,1\}^{n-k}$.
For any fixed $i \in \{0,\ldots,k\}$ and $x \in \{0,1\}^{n-k}$, by construction the variables $z_j \coloneqq F_j(e_i \oplus x)$ satisfy the condition in \eqref{eq: B restricted}.
Therefore, \eqref{eq: B restricted} and \eqref{eq: Fj constructed} yield:
\begin{equation}
B(F_0,\ldots,F_k)(e_i \oplus x) = B(z_0,\ldots,z_k) = \theta_i z_i
= \theta_i F_i(e_i \oplus x)
= \theta_i^2 f_i(x)
= f_i(x)
\end{equation}
as claimed.
\end{proof}
\medskip
\par\noindent
Armed with this lemma, we can now prove Theorem
\ref{thm:composition}.
\begin{proof}[Proof of Theorem~\ref{thm:composition}]
Lemma~\ref{lem: embedding} demonstrates that
for any tuple of functions $(f_0,\ldots,f_k) \in \prod_{j=0}^k \mathcal{T}(n-k;d_j)$
there exists a function $F \in \mathcal{T}_B(n;d_0,\ldots,d_k)$
such that $F(e_i \oplus x) = f_i(x)$
for all $i \in \{0,\ldots,k\}$ and $x \in \{0,1\}^{n-k}$.
Thus, each component $f_i$ of the original $(k+1)$-tuple can be uniquely recovered
from $F$. Therefore, a map $(f_0,\ldots,f_k) \mapsto F$ (if there are multiple $F$ corresponding to the same tuple, select one arbitrarily) defines an injection from the cartesian product $\prod_{j=0}^k \mathcal{T}(n-k;d_j)$ into $\mathcal{T}_B(n;d_0,\ldots,d_k)$, completing the proof.
\end{proof}
As shown in Table \ref{tab:boolean},
there are $16$ binary Boolean operators $B$. Ten of them are irreducible, including AND, OR and XOR and their negations. For each such operator, the Composition Theorem
\ref{thm:composition} gives:
\begin{equation}
\abs{\mathcal{T}(n-1;d_0)} \, \abs{\mathcal{T}(n-1;d_1)}
\le \abs{\mathcal{T}_B(n; d_0,d_1)}
\le \abs{\mathcal{T}(n;d_0)} \, \abs{\mathcal{T}(n;d_1)}
\end{equation}
Surprisingly, the intersection of all ten classes is still just as large.
\begin{proposition}
We have:
\begin{equation}
\abs[2]{\bigcap_B \mathcal{T}_B(n; d_0,d_1)}
\ge \abs{\mathcal{T}(n-1;d_0)} \, \abs{\mathcal{T}(n-1;d_1)}
\end{equation}
where the intersection is over the ten irreducible binary Boolean operators.
\end{proposition}
In particular, there are many functions $f$ (specifically, $2^{2n^2(1-o(1))}$)
that can be simultaneously expressed
as: $f = f_1 \, \AND \, f_2 = f_3 \, \OR \, f_4 = f_5 \, XOR \, f_6$, where all the $f_i$ are linear threshold gates.
\begin{proof}
In the proof of the Composition Theorem~\ref{thm:composition} above,
we showed that for each irreducible Boolean operator $B$
and pair of functions $(f_0,f_1) \in \mathcal{T}(n-1;d_0) \times \mathcal{T}(n-1;d_1)$,
there exists $F \in \mathcal{T}_B(n; d_0,d_1)$ such that:
\begin{equation}
F(0 \oplus x) = f_0(x), \quad
F(1 \oplus x) = f_1(x)
\end{equation}
for all $x \in \{0,1\}^{n-1}$.
Obviously, this pair of equations defines $F$ uniquely on $\{0,1\}^n$, so
$F$, viewed as a Boolean function, is independent of $B$. Thus, $F$ lies in the intersection of $\mathcal{T}_B(n; d_0,d_1)$
over all irreducible $B$.
\end{proof}
\section*{Acknowledgment}
Work in part supported by
ARO grant 76649-CS and NSF grant 1633631 to PB, and
AFOSR grant FA9550-18-1-0031 to RV.
\section{Introduction}
\label{sect:introduction}
Stochastic models are often used to model and analyse the performance
of computer (and many other) systems. A particularly rich and popular
class of models is given by stochastic population models. These have
been used, for instance, to model biological systems
\cite{wilkinson2006}, epidemic spreading \cite{andersson2000} or
queuing networks \cite{vvedenskaya1996queueing}. These systems are
composed of a set of homogeneous objects interacting with one
another. These models have a high expressive power, but an exact
analysis of any such model is often computationally prohibitive when
the number of objects of the system grows. This results in the need
for approximation techniques.
A popular technique is to use mean field approximation. The idea
behind mean field approximation is to replace the study of the
original stochastic system by the one of a, much simpler,
deterministic dynamical system. The success of mean field
approximation can be explained by multiple factors: (a) it is fast --
many models can be solved in closed form
\cite{vvedenskaya1996queueing,mitzenmacher2001power,tsitsiklis2011power,minnebo2}
or easily solved numerically
\cite{massoulie1,gast2010mean,van2013mean} -- (b) it is proven to be
asymptotically optimal as the number of objects in the system goes to
infinity
\cite{kurtz70,Le+07,benaim2008class,gast2012markov,BHLM13};
and (c) it is often very accurate also for systems of moderate size,
composed of $N\approx100$ objects.
The mean field approximation of a given model is constructed by
considering the limit of the original stochastic model as the number
of objects $N$ goes to infinity. There can be two types of limits. The
first type arises when the dynamics of the objects are
asynchronous. In this case the mean field approximation is given by a
continuous time dynamical system (often a system of ordinary
differential equations) -- this is the most studied case \emph{e.g.}
\cite{kurtz70,benaim2008class,BHLM13}. The second
type arises when the objects are synchronous. In this case
the mean field approximation is a discrete time dynamical system
\cite{Le+07,gastgaujalDEDS,tinnakornsrisuphap2003limit}. We focus on
the latter.
\paragraph*{Contributions}
Our main contribution is an extension to (synchronous) DTMC population models of the results
proposed in~\cite{gast2017refined} for (asynchronous) CTMC population models, thus providing a new approximation technique that is significantly more accurate than classical mean field approximation, especially for relatively small systems.
Our results apply to the classical model of
\cite{Le+07,gastgaujalDEDS,latella2013fly}. We prove our result for
the transient and the steady state dynamics. Moreover, the refined
approximation retains an interesting feature of mean field approximation by being
computationally non-intensive.
More precisely, if $\MN_i(t)$ denotes the proportion of objects in a
state $i$ at time $t$, then the classical result of \cite{Le+07}
states that, as $N$ grows large, if the vector $\MN(0)$ converges almost surely
to $m$, for some vector $m$, then the vector $\MN(t)$ converges almost
surely to a deterministic quantity $\mu(t)$ that satisfies a
recurrence equation of the form $\mu(t+1)=\mu(t)\vr{K}(\mu(t))$ with
$\mu(0)=m$. We show that, for any twice differentiable function $h$, there
exists a constant $V_{t,h}$ such that
\begin{align}
\label{eq:main_result}
\lim_{N\to\infty} N(\esp{h(\MN(t))} - h(\mu(t))) = V_{t,h}.
\end{align}
We provide an algorithm to compute the constant $V_{t,h}$ by a linear
dynamical system that involves the first and second derivative of the
functions $m\mapsto m\vr{K}(m)$ and $h$. We also show that if the function
$m\mapsto m\vr{K}(m)$ has a unique fixed point $\mu(\infty)$ that is
globally exponentially stable, then the same result holds for the
steady-state: in this case, $V_{\infty,h}=\lim_{t\to\infty}V_{t,h}$
exists and can be expressed as the solution of a discrete-time
Lyapunov equation that involves the first and second derivative of
$m\mapsto m\vr{K}(m)$ and $h$ evaluated at the point $\mu(\infty)$.
By using these results, we define a quantity $h(\mu(t))+V_{t,h}/N$
that we call the \emph{refined mean field} approximation. As opposed
to the classical mean field approximation, this approximation depends
on the system size $N$. We illustrate our theoretical results with
four different examples. While these examples all show that our
refined model is clearly more accurate than the classical
approximations, they illustrate different characteristics. The first
two examples are cases where the dynamical system has a unique
exponentially stable attractor. In these examples, refined mean field
provides performance estimates that are extremely accurate (the
typical error between $M^{(N)}$ and $\mu$ is less than $1\%$ for
$N=10$). The third example is different as it is a case when the
stochastic system has two absorbing states. In this case, the refined
mean field is still more accurate than the classical mean field
approximation but remains far from the exact values for $N=10$. It is
only for larger values of $N$ that the refined mean field provides a
very accurate estimate. Finally, the fourth example is a case where
the mean field approximation has a unique attractor that is not
exponentially stable. We observe that in this case the refined
approximation provides an accurate approximation of $\esp{\MN(t)}$ for
small values of $t$ but fails to predict correctly what happens when
$t$ is large compared to $N$. In fact in this case, one cannot refine
the steady-state expectation by a term in $O(1/N)$, because the
convergence is only in $O(1/\sqrt{N})$.
This suggests that, when using a mean field or refined mean field
approximation, one has to be careful: the approximations of a system
with more than one stable equilibrium, or with a unique but
non-exponentially stable equilibrium, are likely to be inaccurate for
small values of $N$, even when one focuses on the transient behaviour.
\paragraph*{Related work} Our results extend the recent results of
\cite{gast2017refined}. The authors of \cite{gast2017refined} study
the steady-state of stochastic models that have a continuous-time mean
field approximation. They show that Equation~\eqref{eq:main_result} is
true in this case and provide a numerical algorithm to compute the
constant. Our paper has two theoretical contributions with respect to
\cite{gast2017refined}: first, we show that the results also hold for
models that have a discrete-time mean field approximation, and second
we show how to derive these equations for the transient and steady
state regimes of such systems. This means that our results remain in
the realm of discrete-time models whereas in~\cite{gast2017refined} it
is shown how some discrete-time models can be transformed into
density-dependent (continuous time) population models by replacing
time steps with steps that last for a random time that is
exponentially distributed with mean $1/N$, where $N$ is the population
size. The resulting continuous time model can then be analysed using
the approximation techniques for CTMC population models discussed
in~\cite{gast2017refined}.
The results of \cite{gast2017refined} and those of the current paper
follow from a series of recent results concerning the rate of
convergence of stochastic models to their mean field approximation
\cite{gast2017expected,ying2016rate,ying2017stein,kolokoltsov2011mean}. The
key idea behind these works is to study the convergence of the
generator of the stochastic processes to the one of its mean field
approximation and to use this convergence rate to obtain a bound on
Equation~\eqref{eq:main_result}. For the steady-state regime, this is
made possible by using Stein's methods
\cite{stein1986approximate,braverman2017stein,braverman2017stein2}.
Note that the approach taken in the current paper is fundamentally
different from the one that is usually used to obtain convergence
rates, like \cite{gast2012markov,bortolussi2013bounds,gastgaujalDEDS}
in which the authors focus on sample path convergence and obtain
bounds on the convergence of the expected distance
$\esp{\norm{\MN-\mu}}$ between the stochastic system and its mean
field approximation. When focusing on sample path convergence, the
refinement of the mean field approximation would be to consider an
additive term of $1/\sqrt{N}$ times a Gaussian noise as for example in
\cite{gast2012markov} and not a $1/N$ term as in this paper.
\paragraph*{Outline} The rest of the paper is organised as follows. In
Section~\ref{sect:preliminaries}, we introduce the model that we
study. In Section~\ref{sect:Refined} we provide the main results and
in particular Theorem~\ref{theo:main}. In Sections~\ref{sect:RefSEIR},
\ref{sect:RefWSN}, \ref{sect:RefVoting} and
\ref{sect:non-exponentially-stable}, we provide a few numerical
examples that demonstrate the accuracy of the refined mean field
approximation and its limits. Finally, we conclude in
Section~\ref{sec:conclusion}.
\section{Preliminaries}
\label{sect:preliminaries}
In this section we introduce some terminology and notation as well as
some preliminary definitions, setting the context for the rest of the
paper.
\subsection{Notations}
We let $\mathrm{I\!N}$ denote the set of natural numbers and $\reals_{\geq 0}^n$ the
set of $n$-tuples of non-negative real numbers; we conventionally see
any such $n$-tuple $\vt{m}=(m_1,\ldots,m_n)$ as a row-vector,
i.e. a $1 \times n$ matrix. We let $\mathcal{U}^n \subset \reals_{\geq 0}^n$ be the
unit simplex of $\reals_{\geq 0}^n$, that is
$\mathcal{U}^n=\SET{\vt{m} \in [0,1]^n \;|\; m_1 + \ldots +m_n =1}$.
For a continuous and twice differentiable function
$f :\mathrm{I\!R}^n \rightarrow \mathrm{I\!R}^p$, with
$ f(\vt{m}) = (f_1(\vt{m}), \ldots, f_p(\vt{m})), $ we denote by $Df$
and $D^2f$ its first and second derivatives, respectively. $D f (\vt{m})$ is the
$p \times n$ (function) matrix such that
$(D f (\vt{m}))_{ij} = \frac{\partial f_i(\vt{m})}{\partial m_j}$.
$D^2 f (\vt{m})$ is the $p \times n \times n$ tensor such that
$(D^2 f (\vt{m}))_{ijk} = \frac{\partial^2 f_i(\vt{m})}{\partial
m_j\partial m_k}$.
Moreover, for a $p \times n \times n$ tensor $P$ and a $n \times n$
matrix $Q$, we let $P \cdot Q$ be the (row) vector in $\mathrm{I\!R}^p$ such
that $(P \cdot Q)_i = \sum_{j,k=1}^n (P)_{ijk}(Q)_{jk}$, for
$i=1,\ldots, p$. In addition, for $p \times n$ matrix $A$ and
$n \times m$ matrix $B$ we use the notation $AB$ for standard matrix
product: $(AB)_{ij} = \sum_{k=1}^n (A)_{ik}(B)_{kj}$; obviously this
includes the case of vector inner product $\vt{u}^T \vt{v}$ for
$\vt{u}$ and $\vt{v}$ being $n \times 1$ (column) vectors, where $A^T$
is the transpose of $A$, i.e. $(A^T)_{ij}=(A)_{ji}$; the vector outer
product $\vt{u} \otimes \vt{v}$, sometimes also denoted by
$\vt{u} \;\vt{v}^T$, is the matrix such that
$(\vt{u} \otimes \vt{v})_{i,j} = u_iv_j$.
Finally, for any vector $v$, we let $\norm{v}$ denote the norm of $v$
and $\norm{v}^2$ denote the square of $\norm{v}$. Note that the
specific norm is left unspecified, because the results presented
in the present paper hold for any norm\footnote{Of course, technical
details in the proofs may depend on the specific norm (see
e.g. footnote~\ref{footnote:ftntlemma:o(||E||^2)} in the proof of
Lemma~\ref{lemma:o(||E||^2)})}. Note that in the proofs, what we
denote by $E, o(\norm{E}^2), \esp{o(\norm{E}^2)}$ are $p$-dimensional
vectors, while their norms
$\norm{E}, \norm{E}^2, \norm{\esp{o(\norm{E}^2)}} \in \mathrm{I\!R}$ are real
non-negative numbers.
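For readers who wish to experiment with the quantities defined below, the tensor and outer-product conventions above translate directly into \texttt{numpy} operations; the following short sketch (our own translation, not part of the formal development) spells out the correspondence.
\begin{verbatim}
import numpy as np

p, n = 3, 4
P = np.random.rand(p, n, n)         # a p x n x n tensor
Q = np.random.rand(n, n)            # an n x n matrix
u, v = np.random.rand(n), np.random.rand(n)

PQ = np.einsum('ijk,jk->i', P, Q)   # (P . Q)_i = sum_{j,k} P_ijk Q_jk
uv = np.outer(u, v)                 # (u outer v)_ij = u_i v_j
AB = P[0] @ Q                       # standard matrix product
\end{verbatim}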
\subsection{Synchronous Mean Field Model}
We consider a system of $0<N \in \mathrm{I\!N}$ identical\footnote{It is worth
pointing out here that the requirement that the $N$
objects be identical
can be relaxed, since a system with {\em different} classes of {\em
identical} objects can easily be modelled by considering an
equivalent system with instances of an object whose set of states is
the union of those of the original objects and similarly for the set
of its transitions, as shown in the example in
Section~\ref{sect:RefWSN}.} interacting objects; $N$ is called the
{\em size} of the system. We assume that the number of local states
of each object is finite\footnote{In fact, the same theoretical
results could be derived for infinite dimensional models with two
additional assumptions to cope with the fact that the simplex
$\mathcal{U}^{\infty}$ is not compact : (i) imposing all functions to be
\emph{uniformly} continuous, and (ii) imposing tightness
assumptions.}, say $n$; for the sake of simplicity, in the sequel we
let the set of states of local objects be $\SET{1,\ldots, n}$, when
not specified otherwise. Time is {\em discrete} and the behaviour of
the system is characterised by a (time homogeneous) discrete time
Markov chain (DTMC) $\vt{X}^{(N)}(t)=(X_1^{(N)}(t), \ldots, X_N^{(N)}(t))$,
where $X_i^{(N)}(t)$ is the state of object $i$ at time $t$, for
$i=1,\ldots, N$.
We define the occupancy measure at time $t$ as the row-vector
$\vt{M}^{(N)}(t)=(M_1^{(N)}(t), \ldots, M_n^{(N)}(t))$ where, for
$j=1,\ldots, n$, $M_j^{(N)}(t)$ is the {\em fraction} of objects in state
$j$ at time $t$, over the total population of $N$ objects:
\begin{align*}
M_j^{(N)}(t) = \frac{1}{N}\sum_{i=1}^N 1_{\SET{X_i^{(N)}(t)=j}}
\end{align*}
where $1_{\SET{x=j}}$ is equal to $1$ if $x=j$ and $0$ otherwise.
At each time step $t \in \mathrm{I\!N}$ each object performs a local
transition, which may also be a self-loop to its current state. The
transition probabilities of an object depend on the current
local state of the object and may depend also on $\vt{M}^{(N)}(t)$.
We let $\vr{K}(\vt{m})$ denote the one-step transition probability
$n \times n$ matrix of an object in the system: $\vr{K}_{ij}(\vt{m})$
is the probability for the object to jump from state $i$ to state $j$
in the system when the occupancy measure vector is $\vt{m}$. We assume
that, given the occupancy measure, the transitions made by two objects
are independent. Our model is identical to the one of \cite{Le+07} up
to the fact that the authors of \cite{Le+07} add a continuous resource
to the model and allow the object transition matrix $\vr{K}$ to depend also on
the size $N$ of the system (in which case they assume
that the sequence of transition matrices $\vr{K}^N$ converges to a function
$\vr{K}$ as $N$ goes to infinity). To simplify the exposition, in this
paper we consider a case without resource and we assume
$\vr{K}(\vt{m})$ is a continuous function of $\vt{m}$ that does not
depend on $N$. The results presented in this paper could be extended
to the more general case where $\vr{K}(\vt{m})$ is also a function of
$N$ and could be modified to incorporate a resource, essentially by
replacing our equation for the variance $\Gamma$ by the one presented
in \cite[Equation~(7)]{gastgaujalDEDS}.
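Before recalling the mean field limit, it may help to see the model operationally. The following sketch (Python/NumPy; the code and its names are our own illustration) simulates one sample path of $\vt{M}^{(N)}(t)$: given the occupancy measure $\vt{m}$, the $Nm_i$ objects in state $i$ jump independently according to row $i$ of $\vr{K}(\vt{m})$, so the resulting counts are multinomial, which is exactly the structure exploited in the proofs of Section~\ref{sect:Refined}. Averaging many such trajectories gives exact estimates of $\esp{h(\vt{M}^{(N)}(t))}$ against which the approximations below can be compared.
\begin{verbatim}
import numpy as np

def step(counts, K, rng):
    # One synchronous step: the counts[i] objects in state i each jump
    # independently according to row i of K(m); the resulting counts
    # per row follow a multinomial distribution.
    N = counts.sum()
    Km = K(counts / N)
    new = np.zeros_like(counts)
    for i, c in enumerate(counts):
        new += rng.multinomial(c, Km[i])
    return new

def occupancy_trajectory(counts0, K, T, rng):
    # Returns one sample path M(0), ..., M(T).
    counts, N = counts0.copy(), counts0.sum()
    out = [counts / N]
    for _ in range(T):
        counts = step(counts, K, rng)
        out.append(counts / N)
    return out
\end{verbatim}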
Below we recall Theorem 4.1 of~\cite{Le+07} on classical mean field
approximation, under the simplifying assumptions mentioned above:
\begin{quotation}
\noindent
{\bf Theorem 4.1 of \cite{Le+07} (Convergence to Mean Field)}
{\em Assume that the initial occupancy measure $\vt{M}^{(N)}(0)$ converges almost surely to the deterministic limit $\vt{\mu}(0)$.
Define $\vt{\mu}(t)$ iteratively by (for $t \geq 0$):
\begin{align}
\label{eq:mu}
\vt{\mu}(t+1) = \vt{\mu}(t) \, \vr{K}(\vt{\mu}(t)).
\end{align}
Then for any fixed time $t$, almost surely:
$$
\lim_{N \to \infty} \vt{M}^{(N)}(t) = \vt{\mu}(t).
$$
}
\end{quotation}
In the sequel, we will write $\vt{M}(t)$ or simply $\vt{M}$ instead of
$\vt{M}^{(N)}(t)$, leaving $N$ and $t$ implicit, when this does not cause
confusion.
\section{Refined Deterministic Approximation Theorem}
\label{sect:Refined}
In this section, we present our main results (Theorem~\ref{theo:main}
and Theorem~\ref{theo:steady}) and provide their proofs.
\subsection{First Main Result: Transient Behaviour}
The iterative procedure of Theorem 4.1 of~\cite{Le+07} can be
formalised as a $t$-indexed family of functions $\Phi_t$ from
$\mathcal{U}^n$ to $\mathcal{U}^n$, where the functions
$\Phi_t: \mathcal{U}^n \rightarrow \mathcal{U}^n$ are defined as follows:
\begin{align*}
\Phi_{0}(\vt{m}) = \vt{m}; \qquad {\displaystyle (\Phi_1(\vt{m}))_j =
\sum_{i=1}^n m_i \vr{K}_{ij}(\vt{m})};
\qquad \Phi_{t+1}(\vt{m}) = \Phi_1(\Phi_t(\vt{m}))
\end{align*}
This implies that for all $\vt{m} \in \mathcal{U}^n$ and $t \in \mathrm{I\!N}$, we
have $\Phi_1(\Phi_t(\vt{m}))= \Phi_t(\Phi_1(\vt{m}))$.
In the following we assume that $\Phi_{1}$ is continuous and twice
differentiable with respect to $\vt{m}$ and that its second derivative
is continuous (note that as $\mathcal{U}^n$ is compact, this implies that
$\Phi_1$ and its first two derivatives are uniformly continuous).
Moreover, in what follows, we assume that $\vt{M}^{(N)}(0)$ converges to
$\mu(0)$ (a deterministic value) as $N$ goes to infinity and we let
$\mu(t)$ be defined as in Equation~\eqref{eq:mu}, or, equivalently,
$\mu(t+1)=\Phi_1(\mu(t))=\Phi_{t+1}(\mu(0))$.
Our main theorem can be stated as follows.
\begin{theorem}\label{theo:main}
Assume that the function $\Phi_1$ is twice differentiable with
continuous second derivative and that $M^{(N)}(0)$ converges weakly to
$\mu(0)$. Let $A_t$ and $B_t$ be respectively the $n \times n$
matrix $A_t = (D \Phi_1)(\mu(t))$ and the $n \times n \times n$
tensor $B_t = (D^2 \Phi_1)(\mu(t))$. Then, for any continuous and
twice differentiable function
$h:\mathcal{U}^n \rightarrow \reals_{\geq 0}^p$ with continuous second derivative, we have:
$$
\lim_{N\rightarrow \infty} N\esp{h(\vt{M}^{(N)}(t))- h(\Phi_t(\vt{M}^{(N)}(0)))} =
Dh(\mu(t)) V_t + \frac{1}{2}D^2h(\mu(t))\cdot W_t,
$$
where $V_t$ is an $n \times 1$ vector and $W_t$ is an $n \times n$ matrix, defined as follows:
$$
\begin{array}{lcl}
V_{t+1} & = & A_tV_t + \frac{1}{2}B_t \cdot W_t\\\\
W_{t+1} & = & \Gamma(\mu(t)) + A_t W_t A_t^T,
\end{array}
$$
with $V_0=0$, $W_0 = 0$ and $\Gamma(\vt{m})$ is the following
$n \times n$ matrix:
$$
\begin{array}{lcl}
\Gamma_{jj}(\vt{m}) & = & \sum_{i=1}^n m_i \vr{K}_{ij}(\vt{m})(1-\vr{K}_{ij}(\vt{m}))\\\\
\Gamma_{jk}(\vt{m}) & = & -\sum_{i=1}^n m_i \vr{K}_{ij}(\vt{m})\vr{K}_{ik}(\vt{m})
\end{array}
$$
\end{theorem}
The key idea of the proof is to use a Taylor expansion of
$h(\vt{M}(1))$ around $\Phi_1(m)$. We postpone the proof to
Section~\ref{ssec:proof_transient}.
One of the main consequences of Theorem~\ref{theo:main} is that it
allows us to compute precisely an expansion in $1/N$ of the mean
and the covariance of the vector $\MN(t)$. This first order
expansion is what we call the \emph{refined mean field
approximation}. In our numerical simulations, we will show that this
refined approximation can greatly improve the accuracy of the original
mean field approximation when the number of entities $N$ is relatively small.
\begin{coro}
\label{coro:main}
Let $t\in\mathrm{I\!N}$. Under the assumptions of Theorem~\ref{theo:main},
and denoting $\mu(t)=\Phi_t(m)$, it holds that
\begin{itemize}
\item[(i)] For any coordinate $i$ and any time step $t$:
\begin{align*}
\esp{\MN_i(t)} = \mu_i(t) + \frac{(V_{t})_i}{N} + o\p{\frac1N}.
\end{align*}
\item[(ii)] For any pair of coordinates $i,j$, the co-variance satisfies
\begin{align*}
\cov{\MN_i(t),\MN_j(t)} = \frac1N(W_{t})_{i,j} + o\p{\frac1N}.
\end{align*}
\end{itemize}
\end{coro}
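The recursion of Theorem~\ref{theo:main} is easy to implement. The sketch below (Python/NumPy; it is our own illustration and uses central finite differences, with a step size \texttt{eps} that may need tuning, in place of analytic derivatives of $\Phi_1$; it also assumes $\vr{K}$ is defined in a small neighbourhood of the simplex) computes $\mu(t)$, $V_t$ and $W_t$, from which Corollary~\ref{coro:main} gives the refined approximations $\esp{\MN_i(t)}\approx \mu_i(t)+(V_t)_i/N$ and $\cov{\MN_i(t),\MN_j(t)}\approx (W_t)_{ij}/N$. These can be checked against sample averages from the simulator sketched in Section~\ref{sect:preliminaries}.
\begin{verbatim}
import numpy as np

def gamma(m, K):
    # Gamma(m) of Theorem 1: covariance generated by one synchronous step.
    Km = K(m)
    G = -np.einsum('i,ij,ik->jk', m, Km, Km)        # j != k terms
    np.fill_diagonal(G, np.einsum('i,ij->j', m, Km * (1.0 - Km)))
    return G

def refined_mean_field(K, mu0, T, eps=1e-4):
    # Iterates mu(t), V_t, W_t for t = 0..T (Theorem 1), using central
    # finite differences for A_t = D Phi_1 and B_t = D^2 Phi_1.
    n = len(mu0)
    phi1 = lambda m: m @ K(m)

    def dphi1(m):                                   # n x n matrix
        A = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n); e[j] = eps
            A[:, j] = (phi1(m + e) - phi1(m - e)) / (2 * eps)
        return A

    def d2phi1(m):                                  # n x n x n tensor
        B = np.empty((n, n, n))
        for j in range(n):
            for k in range(n):
                ej = np.zeros(n); ej[j] = eps
                ek = np.zeros(n); ek[k] = eps
                B[:, j, k] = (phi1(m + ej + ek) - phi1(m + ej - ek)
                              - phi1(m - ej + ek)
                              + phi1(m - ej - ek)) / (4 * eps ** 2)
        return B

    mu, V, W = np.asarray(mu0, float), np.zeros(n), np.zeros((n, n))
    out = [(mu, V, W)]
    for _ in range(T):
        A, B = dphi1(mu), d2phi1(mu)
        V = A @ V + 0.5 * np.einsum('ijk,jk->i', B, W)   # uses W_t
        W = gamma(mu, K) + A @ W @ A.T
        mu = phi1(mu)
        out.append((mu, V, W))
    return out   # E[M(t)] ~ mu(t) + V_t/N,  Cov[M(t)] ~ W_t/N
\end{verbatim}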
\subsection{Second Main Result: Steady-State}
\label{ssec:steady}
Mean field approximation can also be used to characterise the
steady-state behaviour of a population model. It has been shown that,
in the case of continuous time or discrete-time mean field
approximation, if this approximation has a unique attractor, then the
stationary distribution of the system of size $N$ concentrates on this
attractor. In this section, we refine this result by computing the
rate of convergence and by defining the refined approximation in this
case.
We say that a point $\mu(\infty)$ is an exponentially stable attractor
if
\begin{itemize}
\item For any $m\in\mathcal{U}^n$ : $\lim_{t\to\infty}\Phi_t(m)=\mu(\infty)$
(\emph{i.e.} it is a global attractor).
\item There exists an open neighbourhood $V$ of $\mu(\infty)$ and two
constants $a,b>0$ such that for all $m\in V$:
$\norm{\Phi_t(m)-\mu(\infty)}\le a e^{-bt}\norm{m-\mu(\infty)}$
(\emph{i.e.} it is exponentially stable).
\end{itemize}
\begin{theorem}\label{theo:steady}
Assume that $\MN$ has a unique stationary distribution (for each
$N$), that the function $\Phi_1$ is twice differentiable and that
the flow has a unique exponentially stable attractor
$\mu(\infty)$. Then there exist an $n\times1$ vector $V_{\infty}$ and
an $n\times n$ matrix $W_{\infty}$ such that the constants $V_t$ and
$W_t$ defined in Theorem~\ref{theo:main} satisfy:
\begin{align*}
\lim_{t\to\infty}V_t=V_{\infty} \qquad \mathrm{and}\qquad
\lim_{t\to\infty}W_t=W_{\infty}
\end{align*}
Moreover
\begin{itemize}
\item[(i)] $W_\infty$ is the unique solution of the discrete-time
Lyapunov equation:
\begin{align*}
A_\infty WA_\infty^T - W + \Gamma(\mu(\infty)) = 0
\end{align*}
and $V_{\infty}$ is uniquely determined by
\begin{align*}
V_{\infty}&=\frac12(I-A_\infty)^{-1}\left(B_\infty \cdot W_\infty\right),
\end{align*}
where $A_\infty=D\Phi_1(\mu(\infty))$,
$B_\infty=D^2\Phi_1(\mu(\infty))$ and $I$ is the identity matrix.
\item[(ii)] for any twice differentiable function $h$, we can exchange the
limits :
\begin{align*}
\lim_{N\to\infty}\lim_{t\to\infty}
&N\big(\mathbb{E}[h(\vt{M}(t))]- h(\Phi_t(M^{(N)}(0)))\big)\\
&=\lim_{t\to\infty}\lim_{N\rightarrow \infty}
N\big(\mathbb{E}[h(\vt{M}(t))]- h(\Phi_t(\vt{M}^{(N)}(0)))\big)\\
&= Dh(\mu(\infty)) V_\infty + \frac{1}{2}D^2h(\mu(\infty))\cdot W_\infty,
\end{align*}
\end{itemize}
\end{theorem}
This result is interesting for at least two reasons. First, it
generalises Theorem~\ref{theo:main} to the case of stationary
distribution. Second, it is also the first result that provides a rate
of convergence for the steady-state distribution of a model that has a
discrete-time mean field approximation (to the best of our knowledge,
the rate of convergence had only been obtained for the finite-time horizon
in \cite{gastgaujalDEDS}). We postpone the proof to
Section~\ref{ssec:proof_steady}.
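Computationally, part (i) of Theorem~\ref{theo:steady} reduces to one discrete-time Lyapunov solve and one linear solve. A minimal sketch (Python/SciPy; ours, and assuming $I-A_\infty$ is invertible, which the expression for $V_\infty$ presupposes) is the following; \texttt{solve\_discrete\_lyapunov(a, q)} solves $aXa^T - X + q = 0$.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def refined_steady_state(A_inf, B_inf, Gamma_inf):
    # Inputs: A_inf = D Phi_1, B_inf = D^2 Phi_1, Gamma_inf = Gamma,
    # all evaluated at the attractor mu(infinity).
    n = A_inf.shape[0]
    # (i) W solves the Lyapunov equation  A W A^T - W + Gamma = 0
    W = solve_discrete_lyapunov(A_inf, Gamma_inf)
    # (i) V = (1/2)(I - A)^{-1}(B . W), (B . W)_i = sum_{jk} B_ijk W_jk
    V = 0.5 * np.linalg.solve(np.eye(n) - A_inf,
                              np.einsum('ijk,jk->i', B_inf, W))
    return V, W
\end{verbatim}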
\subsection{Proofs}
To ease notation, in all the proofs we denote $M^{(N)}(0)$ by $m$, unless
specified otherwise. In particular, when we write
$\mathbb{E}[h(\vt{M}(t))]$ we formally mean
$\mathbb{E}[h(\vt{M}^{(N)}(t))| \vt{M}^{(N)}(0) = \vt{m}]$.
\subsubsection{Proof of Theorem~\ref{theo:main}}
\label{ssec:proof_transient}
One of the key ingredients to prove our result is to study what happens
for $t=1$. This is what we do in Lemma~\ref{lemma:Lemma1}. Then
Theorem~\ref{theo:main} will follow by using an induction on $t$. Note
that one of the main technicalities in all these lemmas is to prove
that the convergence is uniform in the initial condition. This is why
we make use of various functions $\varepsilon(N)$ or
$\varepsilon_{t,g}(N)$ that control the \emph{uniform} convergence to
$0$.
\begin{lemma}\label{lemma:Lemma1}
Let $h:\mathcal{U}^n \rightarrow \reals_{\geq 0}^p$ (for $p \geq 1$), be a twice
differentiable function such that $D^2h$ is continuous. Then, there
exists a function $\varepsilon(N)$ such that
$\lim_{N\to\infty}\varepsilon(N)=0$ and for all
$M^{(N)}(0)=\vt{m}\in\mathcal{U}^n$, the following holds:
$$
\Big|\Big| N\big(\mathbb{E}[h(\vt{M}(1))] - h(\Phi_1(\vt{m}))\big) -
\frac{1}{2}(D^2h)(\Phi_1(\vt{m}))\cdot \Gamma(\vt{m})\Big|\Big| \le \varepsilon(N).
$$
where $\Gamma(\vt{m})$ is defined in Theorem~\ref{theo:main}.
\end{lemma}
\begin{proof}
The key idea of this proof is to consider the Taylor expansion of
$h$ for $\vt{M}(1)$ in the neighbourhood of
$\Phi_1(\vt{m})$. Let $\vt{E}=\vt{M}(1)-\Phi_1(\vt{m})$. We get the following:
\begin{align*}
&h(\vt{M}(1))- h(\Phi_1(\vt{m}))
= (Dh)(\Phi_1(\vt{m})) \vt{E} + \frac{1}{2}(D^2h)(\Phi_1(\vt{m})) \cdot (\vt{E} \otimes \vt{E}) + o(||\vt{E}||^2)
\end{align*}
Taking the expectation on both sides, we get:
\begin{align*}
&\mathbb{E}[h(\vt{M}(1))]- h(\Phi_1(\vt{m}))\\ & =
(Dh)(\Phi_1(\vt{m})) \mathbb{E}[\vt{E}] + \frac{1}{2}(D^2h)(\Phi_1(\vt{m})) \cdot \mathbb{E}[(\vt{E}\otimes \vt{E})]
+ \mathbb{E}[o(||\vt{E}||^2)]
\end{align*}
The result follows by using Lemma~\ref{lemma:expectE}, that
establishes that $\mathbb{E}[\vt{E}]=0$ and
$N\mathbb{E}[(\vt{E}\otimes \vt{E})] = \Gamma(\vt{m})$ and
Lemma~\ref{lemma:o(||E||^2)} that shows
$||N\mathbb{E}[o(||\vt{E}||^2)]|| \le\varepsilon(N)$.
\end{proof}
The following lemma is a direct generalisation of
Lemma~\ref{lemma:Lemma1} and it will be used in the proof of
Theorem~\ref{theo:main}. The proof is essentially the same as that of
Lemma~\ref{lemma:Lemma1} and exploits the time-homogeneity of the
Markov chain.
\begin{lemma}\label{lemma:GenLemma1}
Let $h:\mathcal{U}^n \rightarrow \reals_{\geq 0}^p$ be a twice differentiable
function whose second derivative is continuous. Then there
exists a function $\varepsilon(N)$ such that
$\lim_{N\to\infty}\varepsilon(N)=0$ and for all $t \in \mathrm{I\!N}$,
$N \in \nats_{> 0}$ and $M^{(N)}(t)=\vt{m}' \in \mathcal{U}^n$, the following
holds:
$$
\Big|\Big| N\big(\mathbb{E}[h(\vt{M}(t+1)) | \vt{M}(t)=\vt{m}'] -
h(\Phi_1(\vt{m}'))\big)- \frac{1}{2}(D^2h)(\Phi_1(\vt{m}'))\cdot \Gamma(\vt{m}')\Big|\Big|
\le\varepsilon(N)
$$
\end{lemma}
\begin{lemma}\label{lemma:expectE}
Under the assumptions of Lemma~\ref{lemma:Lemma1}, the random vector
$\vt{E}=\vt{M}^{(N)}(1)-\Phi_1(\vt{m})$, conditionally on $\vt{M}^{(N)}(0)=\vt{m}$, satisfies
$\mathbb{E}[\vt{E}]=0$ and
$\mathbb{E}[\vt{E} \otimes \vt{E}]=\Gamma(\vt{m})/N$.
\end{lemma}
\begin{proof}
We observe that by definition of our model, $M_j(1)$ is the following
random variable:
\begin{equation}\label{M1j}
M_j(1) = \frac{1}{N} \sum_{i=1}^n \widehat{B}_{ij}
\end{equation}
where $(\widehat{B}_{i,.})$ is a random vector with multinomial
distribution, with parameters $N m_i$ and $(\vr{K}_{i,\cdot}(\vt{m}))$. The
variables are independent for different values of $i$ (in particular,
if $i\ne i'$ we have
$\cov{\widehat{B}_{ij},\widehat{B}_{i'k}}=0$). Moreover, for all $i$
and all $j\ne k$ :
\begin{align*}
\esp{\widehat{B}_{ij}} &= Nm_i\vr{K}_{ij}(m)\\
\var{\widehat{B}_{ij}} &= Nm_{i}\vr{K}_{ij}(m)(1-\vr{K}_{ij}(m))\\
\cov{\widehat{B}_{ij},\widehat{B}_{ik}} &= -Nm_{i}\vr{K}_{ij}(m)\vr{K}_{ik}(m).
\end{align*}
This implies that
\begin{align}
\mathbb{E}[M_j(1)]=\esp{\frac{1}{N} \sum_{i=1}^n
\widehat{B}_{ij}}
=\frac{1}{N} \sum_{i=1}^n \esp{\widehat{B}_{ij}}
= \Phi_1(\vt{m})_j.
\label{eq:expectE}
\end{align}
The case of $\vt{E} \otimes \vt{E}$ again makes use of
Equation~\eqref{M1j}. Note that by \eqref{eq:expectE},
$E_j=M_j(1)-\Phi_1(\vt{m})_j = M_j(1) - \mathbb{E}[M_j(1)]$.
This shows that
$\esp{(\vt{E} \otimes \vt{E})_{jk}}= \cov{M_j(1),M_k(1)}$, \emph{i.e.} the
covariance of $M_j(1)$ and $M_k(1)$. We consider the case $k=j$ and
the case $k\not=j$ separately.
\textbf{Case} $k=j$.
\begin{align*}
N\esp{(\vt{E} \otimes \vt{E})_{jj}} &= N\var{M_j(1)}\\
&= N\var{\frac1N\sum_{i=1}^n
\widehat{B}_{ij}}\\
&= \frac1N \sum_{i=1}^n
\var{\widehat{B}_{ij}}\\
&=\sum_{i=1}^n
m_i\vr{K}_{ij}(\vt{m})(1-\vr{K}_{ij}(\vt{m})),
\end{align*}
where the second-to-last equality comes from the independence of
the variables $\widehat{B}_{ij}$ for $i\in\{1,\dots,n\}$.
\textbf{Case} $k\neq j$. This case is similar.
\begin{align*}
N\esp{(\vt{E} \otimes \vt{E})_{jk}} &= N\cov{M_j(1),M_k(1)} \\
&=N\sum_{i=1}^n\sum_{i'=1}^n\frac{1}{N^2}\cov{
\widehat{B}_{ij},\widehat{B}_{i'k}}\\
&=-\sum_{i=1}^nm_i\vr{K}_{ij}(m)\vr{K}_{ik}(m),
\end{align*}
where in the double sum, only the terms $i=i'$ are non-zero because
$\widehat{B}_{ij}$ and $\widehat{B}_{i'k}$ are independent when
$i\ne i'$.
\end{proof}
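As a sanity check of Lemma~\ref{lemma:expectE}, the identities $\esp{\vt{E}}=0$ and $N\,\esp{\vt{E}\otimes\vt{E}}=\Gamma(\vt{m})$ can also be verified numerically by sampling the multinomial transitions directly. The following Python/numpy sketch does so for an arbitrary (hypothetical) kernel $K$ and occupancy vector $\vt{m}$; it is an illustration of the lemma, not part of the proof.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gamma(m, K):
    # Gamma_jk = sum_i m_i K_ij (1{j=k} - K_ik): off-diagonal entries are
    # -sum_i m_i K_ij K_ik; the diagonal adds sum_i m_i K_ij.
    G = -np.einsum('i,ij,ik->jk', m, K, K)
    G[np.diag_indices(len(m))] += m @ K
    return G

def sample_E(m, K, N, reps=100000):
    # One synchronous step: the N*m_i objects in state i move according
    # to the multinomial kernel K_{i,.}; E = M(1) - m K.
    counts = np.rint(N * m).astype(int)
    phi1 = m @ K
    Es = np.empty((reps, len(m)))
    for r in range(reps):
        M1 = sum(rng.multinomial(counts[i], K[i])
                 for i in range(len(m))) / N
        Es[r] = M1 - phi1
    return Es

m = np.array([0.5, 0.3, 0.2])
K = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
E = sample_E(m, K, N=10)
print(np.abs(E.mean(axis=0)).max())                       # close to 0
print(np.abs(10 * E.T @ E / len(E) - gamma(m, K)).max())  # close to 0
\end{verbatim}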
\begin{lemma}\label{lemma:o(||E||^2)}
Under the assumptions of Lemma~\ref{lemma:Lemma1} and using the
notations of Lemma~\ref{lemma:Lemma1}, there exists a function
$\varepsilon(N)$ such that $\lim_{N\to\infty}\varepsilon(N)=0$ and that
$||N\mathbb{E}[o(||\vt{E}||^2)]||\le\varepsilon(N)$.
\end{lemma}
\begin{proof}
First of all we note that, as $D^2h$ is continuous, it is uniformly
continuous (because $\mathcal{U}$ is compact). Hence, the term
$o(||\vt{E}||^2) \in \mathrm{I\!R}^p $ is uniform in $\vt{m}$ and
$||\vt{E}||^2$, \emph{i.e.}, there exists a function
$\delta: \mathrm{I\!R}^n \rightarrow \mathrm{I\!R}$ and a constant $\gamma >0$
such that $\lim_{||\vt{e}||\to0}\delta(\vt{e})=0$,
$\delta(\vt{e}) \le \gamma$ for all $\vt{e} \in \mathrm{I\!R}^n$, and
$||o(||\vt{E}||^2)||\le ||\vt{E}||^2\delta(\vt{E})$. This implies
$ \mathbb{E}[||o(||\vt{E}||^2)||] \le
\mathbb{E}[||\vt{E}||^2\delta(\vt{E})] $. The proof proceeds with
the following derivation:
First, note that $\lim_{||\vt{e}||\to 0}\delta(\vt{e})=0$ implies that
for all $\epsilon>0$, there exists $a_\epsilon>0$ such that
$\delta{(\vt{e})}\le \epsilon$ for all $\vt{e}$ such that
$||\vt{e}|| \le a_\epsilon$. Therefore
\begin{align*}
\mathbb{E}[||\vt{E}||^2\delta(\vt{E})]
&\le \mathbb{E}[||\vt{E}||^2\delta(\vt{E})\mathbf{1}_{||\vt{E}||\ge a_\epsilon}] +
\mathbb{E}[||\vt{E}||^2\epsilon\mathbf{1}_{||\vt{E}|| <
a_\epsilon}]\\
&\le \eta\,\gamma\,\mathbb{E}[\mathbf{1}_{||\vt{E}||\ge a_\epsilon}] + \epsilon\mathbb{E}[||\vt{E}||^2],
\end{align*}
for some constant\footnote{\label{footnote:ftntlemma:o(||E||^2)} The specific value of $\eta$ depends on the norm
used; for instance $\eta =1$ for the infinity norm.}
$\eta<\infty$.
As indicated by Equation~\eqref{M1j},
$E_{j}=\sum_{i}(\widehat{B}_{ij}/N-m_iK_{ij}(m))$ is the sum of the
$n$ independent random variables $(\widehat{B}_{ij}/N-m_iK_{ij}(m))$
and $\widehat{B}_{ij}$ has a binomial distribution of parameters
$(Nm_i,K_{ij}(m))$. Hence, $E_j$ can be expressed as a sum of $N$
independent centred Bernoulli random variables, each scaled by $1/N$. By Hoeffding's inequality,
\begin{align*}
\Proba{\norm{E_j}\ge t} \le 2e^{-2Nt^2}.
\end{align*}
This implies that
$\Proba{\norm{E}\ge a_\epsilon}\le \sum_{j=1}^n\Proba{\norm{E_j}\ge
a_\epsilon/n} \le 2ne^{-2Na_\epsilon^2/n^2}$.
Moreover, by Lemma~\ref{lemma:expectE},
$\mathbb{E}[||\vt{E}||^2]\le 1/N$. This shows that
\begin{align*}
\mathbb{E}[||\vt{E}||^2\delta(\vt{E})]
&\le \eta\,\gamma\,\mathbb{P}(||\vt{E}||\ge a_\epsilon) +
\epsilon\mathbb{E}[||\vt{E}||^2]\\
&\le 2\,\eta\,\gamma\, n e^{-2Na_{\epsilon}^2/n^2} + \frac{\epsilon}{N}
\end{align*}
The assertion follows by taking
$\varepsilon(N)=\inf_{\epsilon>0}(2N\, \eta\,\gamma\, n
e^{-2Na_\epsilon^2/n^2} + \epsilon)$.
\end{proof}
\begin{lemma}\label{lemma:expectEt}
Under the assumptions of Lemma~\ref{lemma:Lemma1}, and assuming that $M^{(N)}(0)$
converges weakly to $\mu(0)$ as $N$ goes to infinity, for any
continuous function $g:\mathcal{U}^n\to\reals_{\geq 0}^p$ and all $t$,
there exists a function $\varepsilon_{t,g}(N)$ such that
$\lim_{N\to\infty}\varepsilon_{t,g}(N)=0$ and
\begin{align*}
\norm{\mathbb{E}[g(\vt{M}(t))]-g(\mu(t))}\le\varepsilon_{t,g}(N).
\end{align*}
\end{lemma}
\begin{proof}
We proceed by induction on $t$. The lemma holds for $t=0$ because
$M^{(N)}(0)$ converges weakly to $\mu(0)$. As $g$ is continuous on the
compact set $\mathcal{U}^n$, it is uniformly continuous: there
exists a function $\delta:\R^+\to\R^+$ such that
$\norm{g(m)-g(m')}\le\delta(\norm{m-m'})$ and
$\lim_{r \rightarrow 0} \delta(r)=0$. Moreover, as $g$ and $\Phi_1$
are continuous, $g\circ\Phi_1$ is also uniformly continuous. Hence
\begin{align*}
&\norm{\esp{g(M(t+1))}-g(\mu(t+1))}\\
&\le\norm{\esp{g(M(t+1))-g(\Phi_1(M(t)))}}+\norm{\esp{g\circ\Phi_1(M(t))}-g\circ\Phi_{1}(\mu(t))}\\
&\le \esp{\esp{\delta(\norm{E})\mid M(t)}} + \varepsilon_{t,g\circ\Phi_1}(N),
\end{align*}
where $E=M(t+1)-\Phi_1(M(t))$ converges to $0$ (uniformly in $M(t)$)
by Lemma~\ref{lemma:expectE}, and the last term is bounded using the
induction hypothesis applied to $g\circ\Phi_1$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:main}]
We proceed by induction on $t$. The theorem clearly holds for $t=0$
because $\Phi_0(\vt{M}^{(N)}(0))=\vt{M}^{(N)}(0)$ by definition of
$\Phi_0$. Assume now that the theorem holds for some $t\ge 0$. We
have:
\begin{align*}
N\left(\mathbb{E}[h(\vt{M}(t+1))] - h(\mu(t+1))\right)
=&
N\mathbb{E}[h(\vt{M}(t+1)) - h(\Phi_1(\vt{M}(t)))] \\
&+ N(\mathbb{E}[h(\Phi_1(\vt{M}(t)))] - h(\mu(t+1))).
\end{align*}
We will analyse the two lines separately. For the first line, the
idea is to use Lemma~\ref{lemma:Lemma1}. Indeed this line is equal
to
\begin{align*}
&\esp{N\esp{h(\vt{M}(t+1)) - h(\Phi_1(\vt{M}(t)))\mid M(t)}}\\ &
=\esp{\frac{1}{2}(D^2h)(\Phi_1(\vt{M}(t)))\cdot
\Gamma(\vt{M}(t))}+\Theta(N),
\end{align*}
where by Lemma~\ref{lemma:Lemma1}, $\Theta(N)$ is such that
$\norm{\Theta(N)}\le \varepsilon(N)$. By Lemma~\ref{lemma:expectEt}
with $g=(D^2h)(\Phi_1)\cdot \Gamma$, as $N$ goes to infinity, this
quantity converges to
\begin{align}
\frac{1}{2}(D^2h)(\Phi_1(\mu(t)))\cdot
\Gamma(\mu(t))=\frac{1}{2}(D^2h)(\mu(t+1))\cdot
\Gamma(\mu(t))\label{eq:piece1}.
\end{align}
For the second line, the idea is to apply the induction
hypothesis to $h\circ \Phi_1$, which is possible because
$h\circ\Phi_1$ is twice differentiable (both $h$ and
$\Phi_1$ are). This shows that
\begin{align*}
&N(\mathbb{E}[h(\Phi_1(\vt{M}(t)))] - h(\mu(t+1)))\\
&=N(\mathbb{E}[h(\Phi_1(\vt{M}(t)))] - h(\Phi_{1}(\mu(t))))\\
&=D(h\circ\Phi_1)(\mu(t))V_t+ \frac{1}{2}D^2(h\circ\Phi_1)(\mu(t)) \cdot W_t + \varepsilon_{t, h \circ \Phi_1}(N)
\end{align*}
The first term can be dealt with by applying the chain rule
$D(h \circ \Phi_1) = (Dh)(\Phi_1) (D \Phi_1)$ which shows that:
\begin{align}
D(h\circ\Phi_1)(\mu(t))V_t
&=(Dh)(\Phi_1(\mu(t))) (D \Phi_1)(\mu(t)) V_t\nonumber\\
&=(Dh)(\mu(t+1)) (D \Phi_1)(\mu(t)) V_t\nonumber\\
&=(Dh)(\mu(t+1)) A_t V_t.\label{eq:piece2}
\end{align}
For the second term, we apply the product rule and again the chain
rule:
\begin{align*}
\frac{1}{2}D^2(h\circ\Phi_1)\cdot W_t
&=\frac{1}{2}D\big((Dh)(\Phi_1) (D \Phi_1)\big)\cdot W_t\\
&=\Big(D\big((Dh)(\Phi_1)\big) \cdot (D \Phi_1) + (Dh)(\Phi_1) \cdot D(D \Phi_1)\Big) \cdot \frac{1}{2}W_t\\
&=\Big( (D^2h)(\Phi_1) \cdot (D \Phi_1) (D \Phi_1)^T +
(Dh)(\Phi_1) (D^2 \Phi_1)
\Big) \cdot \frac{1}{2}W_t
\end{align*}
By applying the last function at the point $\mu(t)$ we get
that:
\begin{align*}
\frac{1}{2}D^2(h\circ\Phi_1)(\mu(t)) \cdot W_t= &(D^2h)(\Phi_1(\mu(t)))\cdot (D \Phi_1)(\mu(t)) \frac{1}{2}W_t (D \Phi_1)^T(\mu(t)) \\
&+(Dh)(\Phi_1(\mu(t))) (D^2 \Phi_1 (\mu(t)))\cdot \frac{1}{2}W_t,
\end{align*}
which, using the definition of $\mu(t+1)=\Phi_{1}(\mu(t))$ and the assumptions
$A_t=(D \Phi_1)(\mu(t))$ and
$B_t=(D^2 \Phi_1 (\mu(t)))$, is the same as:
\begin{align}
\frac{1}{2}(D^2h)(\mu(t+1)) \cdot A_t W_t A_t^T +
\frac{1}{2}(Dh)(\mu(t+1)) (B_t \cdot W_t).
\label{eq:piece3}
\end{align}
The theorem holds by combining Equations~\eqref{eq:piece1},
\eqref{eq:piece2} and \eqref{eq:piece3}.
\end{proof}
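Matching the terms in Equations~\eqref{eq:piece1}, \eqref{eq:piece2} and \eqref{eq:piece3} with the induction hypothesis yields the recursion used in Theorem~\ref{theo:main}: $V_{t+1}=A_tV_t+\frac12 B_t\cdot W_t$ and $W_{t+1}=A_tW_tA_t^T+\Gamma(\mu(t))$, with $V_0=W_0=0$ for a deterministic initial condition. The following Python sketch implements this recursion for a generic model; the callables for $\Phi_1$, its Jacobian and Hessian, and $\Gamma$ are placeholders to be supplied per model.
\begin{verbatim}
import numpy as np

def refined_mean_field(m0, phi1, jac, hess, gamma, T):
    """Iterate mu(t), V_t and W_t for t = 0..T.

    phi1(m)  -> next occupancy measure, shape (n,)
    jac(m)   -> A = D(Phi_1)(m),   A[j, k] = d (Phi_1)_j / d m_k
    hess(m)  -> B = D^2(Phi_1)(m), B[j, k, l], shape (n, n, n)
    gamma(m) -> Gamma(m), shape (n, n)
    """
    mu = np.asarray(m0, dtype=float)
    V = np.zeros_like(mu)               # V_0 = 0 (deterministic start)
    W = np.zeros((len(mu), len(mu)))    # W_0 = 0
    out = [(mu, V, W)]
    for _ in range(T):
        A, B = jac(mu), hess(mu)
        V = A @ V + 0.5 * np.einsum('jkl,kl->j', B, W)  # uses old W
        W = A @ W @ A.T + gamma(mu)
        mu = phi1(mu)
        out.append((mu, V, W))
    return out
\end{verbatim}
The refined approximation of $\esp{h(\vt{M}(t))}$ is then obtained by evaluating $h(\mu(t))+\big((Dh)(\mu(t))\cdot V_t+\frac12(D^2h)(\mu(t))\cdot W_t\big)/N$.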
\subsubsection{Proof of Theorem~\ref{theo:steady}}
\label{ssec:proof_steady}
\begin{proof} The proof is inspired by the proof of
\cite[Theorem~3.1]{gast2017refined} and uses ideas of Stein's
method. Because many details are similar to the proof of
Theorem~\ref{theo:main}, we skip some details of computation in this
proof.
Let $h$ be a twice-differentiable function and let $G_h$ be the
function defined for all $m$ by:
\begin{align*}
G_h(m) = \sum_{t=0}^\infty [h(\Phi_t(m))-h(\mu(\infty))].
\end{align*}
$G_h(m)$ is well defined because $\mu(\infty)$ is an exponentially stable
attractor, so the terms of the sum vanish exponentially fast.
By construction, for any $m$ we have
\begin{align*}
G_h(m) &= h(m) - h(\mu(\infty)) + \sum_{t=1}^\infty
[h(\Phi_t(m))-h(\mu(\infty))]\nonumber\\
&= h(m) - h(\mu(\infty)) + G_h(\Phi_1(m))
\end{align*}
The above equation is a discrete time Poisson equation and implies
that for any $m$:
\begin{align}
h(m) - h(\mu(\infty)) = G_h(m) - G_h(\Phi_1(m))\label{eq:Poisson}
\end{align}
Assume that at time $0$, the initial state $M(0)$ is distributed
according to the stationary distribution of the system of size
$N$. By the definition of stationarity, at time $1$, $M(1)$ is also
distributed according to the same stationary distribution and we
have $\esp{G_h(M(0))}=\esp{G_h(M(1))}$.
By using \eqref{eq:Poisson} and then the above equation, we get:
\begin{align*}
\esp{h(M(0))} - h(\mu(\infty)) &= \esp{h(M(0)) - h(\mu(\infty))}\\
&=\esp{G_h(M(0))-G_h(\Phi_1(M(0)))}\\
&=\esp{G_h(M(1)) - G_h(\Phi_1(M(0)))}
\end{align*}
By Lemma~\ref{lemma:Lemma1}, this shows that:
\begin{align*}
N(\esp{h(M(\infty))} - h(\mu(\infty)))
&=N\,\esp{G_h(M(1)) - G_h(\Phi_1(M(0)))}\\
&= \esp{\frac12 D^2(G_h)(\Phi_1(M(0)))\cdot \Gamma(M(0))}+o(1)\\
&= \frac12 D^2(G_h)(\Phi_1(\mu(\infty)))\cdot
\Gamma(\mu(\infty))+o(1),
\end{align*}
where the last equality comes from the fact that the stationary
distribution of the system of size $N$ converges weakly to a Dirac
measure in $\mu(\infty)$ as $N$ goes to infinity (see
\cite[Corollary~14]{gastgaujalDEDS}).
To conclude the proof, the only remaining step is to compute the
second differential of $G_h$ which can be expressed as the infinite
sum:
\begin{align*}
D^2(G_h)(\mu(\infty)) = \sum_{t=0}^\infty D^2(h\circ\Phi_t)(\mu(\infty)).
\end{align*}
The expressions for $V_\infty$ and $W_{\infty}$ come from plugging
the above expression into Equations~\eqref{eq:piece1},
\eqref{eq:piece2} and \eqref{eq:piece3}. The uniqueness of the
solution of the Lyapunov equation is due to the fact that the fixed
point $\mu(\infty)$ is exponentially stable and therefore also
linearly stable.
\end{proof}
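In practice, the steady-state constants can be computed without iterating in $t$: the fixed point of the recursion above satisfies $W_\infty = A W_\infty A^T + \Gamma(\mu(\infty))$, a discrete Lyapunov equation, and $V_\infty$ solves $V_\infty = A V_\infty + \frac12 B\cdot W_\infty$, with $A=(D\Phi_1)(\mu(\infty))$ and $B=(D^2\Phi_1)(\mu(\infty))$. A word of caution for a direct linear solve: since $\Phi_1$ preserves the total mass, $A$ has an eigenvalue $1$ transverse to the simplex; however, one can check that the columns of $\Gamma$ and the entries of $B\cdot W$ sum to zero, so the fixed-point iteration below stays in the tangent space of the simplex, where exponential stability makes it a contraction. A sketch, assuming the callables of the previous listing:
\begin{verbatim}
import numpy as np

def steady_state_refinement(mu_inf, jac, hess, gamma, tol=1e-12):
    # Fixed points of the recursion: W = A W A^T + Gamma(mu_inf) and
    # V = A V + (1/2) B.W, iterated until convergence.
    A, B = jac(mu_inf), hess(mu_inf)
    G = gamma(mu_inf)
    W = np.zeros_like(G)
    while True:
        W_new = A @ W @ A.T + G
        if np.abs(W_new - W).max() < tol:
            break
        W = W_new
    c = 0.5 * np.einsum('jkl,kl->j', B, W)
    V = np.zeros(len(mu_inf))
    while True:
        V_new = A @ V + c
        if np.abs(V_new - V).max() < tol:
            break
        V = V_new
    return V, W
\end{verbatim}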
\section{Refined Mean Field Model for SEIR}
\label{sect:RefSEIR}
In this section we provide a simple example that illustrates our results: the refined mean field model of the computer epidemic SEIR example presented in~\cite{BHLM13}.
Each object in the model has four local states: Susceptible (S), Exposed (E), Infected (I) (and active) and Recovered (R). The four-state SEIR model of an individual object is shown in Figure~\ref{fig:seir_model}.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{popautomata.jpg}
\end{center}
\caption{\label{fig:seir_model} SEIR model of individual object}
\end{figure}
Its discrete time evolution is given by the following probability transition matrix $\vr{K}$ in which $\n{S}$, $\n{E}$, $\n{I}$ and $\n{R}$ denote the fraction of objects in the system that are in local state S, E, I and R, respectively:
\begin{align*}
\vr{K}(\n{S}, \n{E},\n{I},\n{R})
= \left(
\begin{array}{cccc}
1 - (\alpha_e + \alpha_i\n{I}) & \alpha_e + \alpha_i \n{I} & 0 & 0 \\
0 & 1- \alpha_a & \alpha_a & 0 \\
0 & 0 & 1 - \alpha_r & \alpha_r \\
\alpha_l & 0 & 0 & 1 - \alpha_l \\
\end{array}
\right)
\end{align*}
In other words, a susceptible node becomes exposed with probability
$\alpha_e+\alpha_i \n{I}$, where $\alpha_e$ denotes the
external and $\alpha_i$ the internal infection probability; an
exposed node activates its infection with probability $\alpha_a$; an
infected node recovers with probability $\alpha_r$; and $\alpha_l$ is the
probability of losing the protection against infection.
\subsection{Computation of $A$, $B$ and $\Gamma$}
We illustrate how to apply Theorem~\ref{theo:main} in
its simplified form, when $h$ is the identity function, as in
Corollary~\ref{coro:main}--\emph{(i)}. The first step is to compute
the Jacobian and the Hessian of the function $\Phi_1$ for a generic
occupancy measure vector $m$ at time step $t$. Written as a column
vector, the function $\Phi_1(m)=mK(m)$ is given by
\begin{align*}
\Phi_1(m) = \left(
\begin{array}{c}
m_S(1-\alpha_e-\alpha_im_I) + \alpha_lm_R\\
m_S(\alpha_e+\alpha_im_I) + (1-\alpha_a)m_E\\
m_E\alpha_a + (1-\alpha_r)m_I\\
\alpha_rm_I + (1-\alpha_l)m_R
\end{array}
\right)
\end{align*}
Hence, the Jacobian is the following $4 \times 4$ matrix:
$$D(\Phi_1)(\n{S}, \n{E},\n{I},\n{R})
= \left(
\begin{array}{cccc}
1 - (\alpha_e + \alpha_i\n{I}) & 0& -\alpha_i\n{S} & \alpha_l \\
\alpha_e + \alpha_i \n{I} & 1- \alpha_a & \alpha_i\n{S} & 0 \\
0 & \alpha_a & 1 - \alpha_r & 0 \\
0 & 0 & \alpha_r & 1 - \alpha_l \\
\end{array}
\right)$$
The Hessian is a $4 \times 4 \times 4$ tensor. We provide it as four $4 \times 4$ matrices, one for each component $(\Phi_1)_j$, where $j \in \{S,E,I,R\}$:
$$D^2((\Phi_1)_S)(\n{S}, \n{E},\n{I},\n{R})
= \left(
\begin{array}{cccc}
0 & 0& -\alpha_i & 0 \\
0 & 0 & 0 & 0 \\
-\alpha_i & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right)$$
$$D^2((\Phi_1)_E)(\n{S}, \n{E},\n{I},\n{R})
= \left(
\begin{array}{cccc}
0 & 0& \alpha_i & 0 \\
0 & 0 & 0 & 0 \\
\alpha_i & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right)$$
The matrices for $I$ and $R$ are two $4 \times 4$ zero-matrices.
The $4 \times 4$ matrix $\Gamma$ depends on $K$ and the occupancy measure $m$ as defined in Lemma~\ref{lemma:Lemma1}.
The refined mean field approximation of the occupancy measure vector is thus given by
$\mathbb{E}[\vt{M}^{(N)}(t)] \approx \mu(t) + V_t/N$, where $V_t$ is computed recursively, according to Theorem~\ref{theo:main}.
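For concreteness, the following Python/numpy sketch encodes the SEIR quantities derived above (the kernel, the Jacobian, the Hessian and $\Gamma$) and feeds them to the generic recursion routine sketched after the proof of Theorem~\ref{theo:main}; the parameter values are those used in the experiments below.
\begin{verbatim}
import numpy as np

ae, ai, ar, al, aa = 0.01, 0.08, 0.02, 0.01, 0.04  # alpha_e,...,alpha_a

def K(m):
    mS, mE, mI, mR = m
    return np.array([
        [1 - (ae + ai*mI), ae + ai*mI, 0,      0     ],
        [0,                1 - aa,     aa,     0     ],
        [0,                0,          1 - ar, ar    ],
        [al,               0,          0,      1 - al]])

phi1 = lambda m: m @ K(m)

def jac(m):  # D(Phi_1), rows = components S, E, I, R
    mS, mE, mI, mR = m
    return np.array([
        [1 - (ae + ai*mI), 0,      -ai*mS, al    ],
        [ae + ai*mI,       1 - aa,  ai*mS, 0     ],
        [0,                aa,      1 - ar, 0    ],
        [0,                0,       ar,    1 - al]])

def hess(m):  # only (Phi_1)_S and (Phi_1)_E have nonzero Hessians
    B = np.zeros((4, 4, 4))
    B[0, 0, 2] = B[0, 2, 0] = -ai
    B[1, 0, 2] = B[1, 2, 0] = ai
    return B

def gamma(m):
    Km = K(m)
    G = -np.einsum('i,ij,ik->jk', m, Km, Km)
    G[np.diag_indices(4)] += m @ Km
    return G

m0 = np.array([0.2, 0.2, 0.2, 0.4])
N = 10
traj = refined_mean_field(m0, phi1, jac, hess, gamma, T=500)
refined_S = [mu[0] + V[0] / N for mu, V, W in traj]  # refined E[M_S(t)]
\end{verbatim}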
\subsection{Dynamics of the SEIR Model and its Approximations}
\begin{figure}[ht]
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{SEIR_all_N10}
&\includegraphics[width=0.45\textwidth]{SEIR_all_N20}\\
$N=10$&$N=20$\\
\includegraphics[width=0.45\textwidth]{SEIR_all_N50}
&\includegraphics[width=0.45\textwidth]{SEIR_all_N100}\\
$N=50$&$N=100$
\end{tabular}
\caption{\label{fig:res} Evolution of the fraction of objects in
state $S$ for population sizes $N=10$, $N=20$, $N=50$ and
$N=100$. The figures compare the classical mean field
approximation (obtained with \texttt{python}--\texttt{numpy}) with
the refined one and with the average of 10,000 simulation runs of
the system. }
\end{figure}
We consider a model with the following parameter values for the local
transition probabilities:
$\alpha_e=0.01, \alpha_i=0.08,\alpha_r=0.02,\alpha_l=0.01$ and
$\alpha_a=0.04$. Initially,
$M(0)=(0.2,0.2,0.2,0.4)$. Figure~\ref{fig:res} shows the results for
the classical mean field approximation, the refined mean field
approximation and the average of 100,000 runs of a stochastic
simulation of the model. The results are given for
population size $N=10$, $N=20$, $N=50$ and $N=100$, respectively; time
$t$ ranges from $0$ to $500$ time units.
We observe that, as expected, the gap between the classical mean field
approximation and the simulation is relatively small and decreases
with $N$. Still, for $N=10$, we observe a clear difference between the
classical mean field approximation and the simulation, whereas the
refined mean field provides a much closer approximation (in this case,
the graphs overlap almost everywhere). As the population
size $N$ increases, both approximations and the simulation converge to the
same values: for $N\ge50$, the curves
are almost indistinguishable.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{SEIR_errorMF_N10_large}
&\includegraphics[width=0.45\textwidth]{SEIR_errorRMF_N10_large}\\
Error of the mean field approximation
&
Error of the refined mean field approx.
\end{tabular}
\end{center}
\caption{\label{fig:diff} SEIR model: Quantification of the
difference (error) between the simulation results and the classical
mean field approximation (left) and the refined mean field result
(right), respectively, for $N=10$. The results show simulation value
minus mean field value.}
\end{figure}
To highlight the differences, we plot in Figure~\ref{fig:diff} the
difference between the two approximations and the
simulation: on the left panel, we plot, as a function of time, the
quantity $\esp{M(t)}-\mu(t)$; on the right we plot
$\esp{M(t)}-\mu(t)-V_t/N$ (in both cases for $N=10$). We observe that
the refined mean field approximation (right panel) is an order of
magnitude closer to the value obtained by simulation: while the error
of the classical mean field approximation can be larger than $0.05$,
the error of the refined mean field approximation remains always
smaller than $0.01$.
These two figures illustrate that Theorem~\ref{theo:main} is not just
valid asymptotically, but actually refines the classical
mean field approximation for relatively small values of $N$. To go further, we
study the steady-state distribution in Table~\ref{tbl:steadySEIR} in
which we display the average proportion of objects in states $S$, $E$,
$I$ or $R$ estimated by simulation, refined mean field approximation
and classical mean field approximation. This illustrates the approximation accuracy for
the steady state of the SEIR example for each local state of an
object. As for the two previous figures, this table illustrates that
the refined mean field approximation provides very accurate estimates
of the true stationary distribution even for very small values of $N$,
which shows that the asymptotic results presented in
Theorem~\ref{theo:steady} are also useful for small values of
$N$. These results are in line with the results presented
in~\cite{gast2017refined} for continuous time mean field models.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
State & $S$ & $E$ & $I$ & $R$ \\\hline
Simulation ($N=10$) & 0.191 & 0.115 & 0.231 &0.462 \\ \hline
Refined mean field ($N=10$) & 0.189& 0.116 & 0.232& 0.464\\ \hline
Mean field ($N=10$) & 0.164 & 0.119 & 0.239 & 0.478\\ \hline
\end{tabular}
\end{center}
\caption{\label{tbl:steadySEIR} SEIR model: Comparison of the
accuracy of the mean field and refined mean field approximation. The
columns show the average proportion of objects in the states
susceptible (S), exposed (E), infected (I) and recovered (R),
respectively. Each item in the table was computed by measuring the
occupancy measure at time $t = 1000$, i.e. when the system's
occupancy measure has reached a sufficiently stable
value. Simulation values are averages over $100,000$ simulations. }
\end{table}
\section{Refined Mean Field Model for WSN}
\label{sect:RefWSN}
The next example concerns a simple model of a wireless sensor network~\cite{bortolussi2013bounds}. Such networks
are composed of wireless sensor nodes and gateways. This example
serves two purposes. First it shows that the assumption of homogeneous
objects is not restrictive: In this example, there are two classes of
objects, which is represented by having a block-diagonal matrix
$K$. Second, we use it to consider a function $h$ that is not just the
projection on one coordinate.
Wireless sensor nodes have three local states. In the initial state,
$e$, a sensor node waits for detecting an event of interest and
collects data for that event. After that, the node moves to state $c$
to communicate its data to an available gateway. The communication
attempt may timeout if no gateway is available. In that case the
sensor node moves to state $d$, introducing some delay before moving
back to state $c$ for a further communication attempt.
Gateway nodes have two states. Initially they are in state $a$ and available to receive data from a sensor node. Upon connection to a sensor node they move to state $b$ during which they are busy processing the data. When in state $b$ they are temporarily unavailable for communication with other sensor nodes. After processing the batch of data they move back to state $a$.
We consider a model
where objects have five local states $\{a,b,c,d,e\}$, where $a$ and
$b$ are states of a gateway node and $c,d$ and $e$ are states of a
sensor node, i.e. each object in the model can behave either as a
gateway or as a sensor, but it cannot change its behaviour from that
of a gateway to that of a sensor, or vice-versa. A system is then
composed of $N$ (syntactically) homogeneous objects with a fixed
fraction $\n{G}=\n{a} + \n{b}$
of gateway nodes and a fraction $\n{W}=\n{c} + \n{d}+ \n{e}$ of
wireless sensor nodes, such that $\n{W}=1-\n{G}$.
To keep the model simple for the purpose of illustrating the refined mean field
approach, we do not consider interference due to collision in the communication between nodes and gateways.
\begin{figure}
\begin{center}
\resizebox{0.7\textwidth}{!}{
\begin{tikzpicture}
\tikzstyle{place}=[circle,draw=blue!50,fill=blue!20,thick,inner sep=0pt,minimum size=8mm]
\node (WSNC) [place,draw=red!50,fill=red!20] at (1,2) {$\mathbf{c}$};
\node (WSND) [place,draw=red!50,fill=red!20] at (0,0) {$\mathbf{d}$};
\node (WSNE) [place,draw=red!50,fill=red!20] at (2,0) {$\mathbf{e}$};
\node (GWA) [place,draw=blue!50,fill=blue!20] at (5,2) {$\mathbf{a}$};
\node (GWB) [place,draw=blue!50,fill=blue!20] at (5,0) {$\mathbf{b}$};
\draw[->,thick] (WSNE.north west) .. controls +(up:0mm) and +(up:0mm) .. (WSNC.south east) node[pos=0.5, label=right:{{$\lambda$}}]{};
\draw[->,thick] (WSND.north east) .. controls +(up:0mm) and +(up:0mm) .. (WSNC.south west) node[pos=0.5, label=right:{{$\eta$}}]{};
\draw[->,thick] (WSNC.west) .. controls +(left:10mm) and +(left:5mm) .. (WSND.north west) node[pos=0.5, label=left:{{$\gamma$}}]{};
\draw[->,thick] (WSNC.east) .. controls +(right:10mm) and +(right:5mm) .. (WSNE.north east) node[pos=0.5, label=right:{{$\beta m_a$}}]{};
\draw[->,thick] (GWA.south east) .. controls +(right:5mm) and +(right:5mm) .. (GWB.north east) node[pos=0.5, label=right:{{$\beta m_c$}}]{};
\draw[->,thick] (GWB.north west) .. controls +(left:5mm) and +(left:5mm) .. (GWA.south west) node[pos=0.5, label=left:{{$\alpha$}}]{};
\end{tikzpicture}
}
\caption{\label{fig:wsn_model} WSN model of individual objects: Sensor Node (left) and Gateway (right)}
\end{center}
\end{figure}
The probability transition matrix is given below:
\begin{align*}
\vr{K}(\vt{m})
= \left(
\begin{array}{ccccc}
1 - \beta \n{c} & \beta \n{c} & 0 & 0 & 0 \\
\alpha & 1- \alpha & 0 & 0 & 0\\
0 & 0 & 1 - \gamma-\beta \n{a} & \gamma & \beta \n{a} \\
0 & 0 & \eta & 1-\eta & 0 \\
0 & 0 & \lambda & 0 & 1 - \lambda \\
\end{array}
\right),
\end{align*}
where $\alpha$ denotes the probability of the gateway to get again
available, $\beta$ the probability of data communication between the
gateway and a sensor node, $\lambda$ the probability that a sensor
node is ready to send data, $\gamma$ the probability that a sensor
node performs a time-out and $\eta$ the probability that a delayed
sensor node tries to communicate again.
In the example we will use the following values for the above parameters:
$
\alpha=0.09,
\beta= 0.9,
\lambda=0.09,
\gamma=0.01, $ and $
\eta=0.01
$,
and let $M^{(N)}(t)$ denote, as usual, the occupancy measure process of the WSN model (leaving $N$ and $t$ implicit for the sake of notation simplicity).
We are interested in the average response time of a sensor node, i.e. the time a sensor node needs to wait to be able to communicate its data to the gateway. This expected response time can be defined as the ratio between the sensor nodes that are already waiting to communicate their data, i.e. the sensor nodes in state $c$ and state $d$, and the new sensor nodes that became ready in the current time step, i.e. $\lambda$ times the nodes in local state $e$:
\begin{align*}
\mathbb{E}[R]= \mathbb{E}\left[\frac{(M_c + M_d)}{\lambda
M_e}\right].
\end{align*}
With reference to Theorem~\ref{theo:main}, we define
$h(x_1,x_2,x_3,x_4,x_5)=\frac{(x_3 + x_4)}{\lambda x_5}$.
\subsection{Computation of $A$, $B$ and $\Gamma$ for the WSN model}
In the sequel, we make reference to $\vt{m}=(m_a, m_b, m_c, m_d, m_e) \in \mathcal{U}^5$.
The Jacobian of the function $\Phi_1$ is:
$$D(\Phi_1)(\vt{m})
= \left(
\begin{array}{ccccc}
1 - \beta \n{c} & \alpha& -\beta \n{a} &0 & 0 \\
\beta \n{c} & 1- \alpha & \beta \n{a} & 0 &0\\
-\beta \n{c} & 0 &1-\gamma-\beta \n{a} & \eta & \lambda \\
0 & 0 &\gamma & 1-\eta &0 \\
\beta \n{c} & 0 & \beta \n{a} & 0 & 1 - \lambda \\
\end{array}
\right)$$
The Hessian of the function $\Phi_1$ satisfies
$D^2((\Phi_1)_d)(\vt{m})=0$:
\begin{align*}
D^2((\Phi_1)_a)(\vt{m})=D^2((\Phi_1)_c)(\vt{m})
&= \left(
\begin{array}{ccccc}
0 & 0& -\beta & 0 & 0 \\
0 & 0 & 0 & 0 & 0\\
-\beta & 0 & 0 & 0 &0\\
0 & 0 & 0 & 0 &0\\
0 & 0 & 0 & 0 &0\\
\end{array}
\right)\\
D^2((\Phi_1)_b)(\vt{m})=D^2((\Phi_1)_e)(\vt{m})
&= \left(
\begin{array}{ccccc}
0 & 0& \beta & 0 & 0 \\
0 & 0 & 0 & 0 & 0\\
\beta & 0 & 0 & 0 &0\\
0 & 0 & 0 & 0 &0\\
0 & 0 & 0 & 0 &0\\
\end{array}
\right)
\end{align*}
The Jacobian of function $h$ is
$
D(h)(\vt{m}) = (0,0, \frac{1}{\lambda \n{e}},\frac{1}{\lambda \n{e}}, -\frac{\n{c}+\n{d}}{\lambda \n{e}^2} )
$
and its
Hessian is
\begin{align*}
D^2(h)(\vt{m})
= \left(
\begin{array}{ccccc}
0 & 0& 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 &-\frac{1}{\lambda \n{e}^2}\\
0 & 0 & 0 & 0 &-\frac{1}{\lambda \n{e}^2}\\
0 & 0 & -\frac{1}{\lambda \n{e}^2} & -\frac{1}{\lambda \n{e}^2} &\frac{2(\n{c}+\n{d})}{\lambda \n{e}^3}\\
\end{array}
\right).
\end{align*}
The $5 \times 1$ vector $V_t$ and the $5 \times 5$ matrix $W_t$ are computed recursively according to Theorem~\ref{theo:main}, using the Jacobian and Hessian of $\Phi_1$ given above; the
refined mean field approximation of the measure of interest is then given by
$\mathbb{E}[h(\vt{M}^{(N)}(t))] \approx h(\Phi_t(\vt{m})) + \big((Dh)(\mu(t)) \cdot V_t + \frac{1}{2}(D^2h)(\mu(t)) \cdot W_t\big)/N$.
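A possible encoding of this estimator in Python/numpy, assuming $V_t$ and $W_t$ have been computed with the WSN Jacobian and Hessian above via the generic recursion sketched earlier (the function names are ours):
\begin{verbatim}
import numpy as np

lam = 0.09  # lambda; state order is (a, b, c, d, e)

def h(m):
    return (m[2] + m[3]) / (lam * m[4])

def Dh(m):
    c, d, e = m[2], m[3], m[4]
    return np.array([0, 0, 1/(lam*e), 1/(lam*e), -(c + d)/(lam*e**2)])

def D2h(m):
    c, d, e = m[2], m[3], m[4]
    H = np.zeros((5, 5))
    H[2, 4] = H[4, 2] = -1/(lam*e**2)
    H[3, 4] = H[4, 3] = -1/(lam*e**2)
    H[4, 4] = 2*(c + d)/(lam*e**3)
    return H

def refined_h(mu_t, V_t, W_t, N):
    # E[h(M(t))] ~ h(mu(t)) + (Dh(mu).V_t + (1/2) D2h(mu).W_t) / N
    return h(mu_t) + (Dh(mu_t) @ V_t
                      + 0.5 * np.sum(D2h(mu_t) * W_t)) / N
\end{verbatim}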
\subsection{Results}
In Figure~\ref{fig:wsn_res} various approximations of the expected
response time for the WSN model are shown, for time values $t$ ranging from $0$ to $400$ time units.
We consider a relatively small system with
15 nodes (10 sensor and 5 gateway nodes). Recall that the function $h$
is defined by
$h(x_1,x_2,x_3,x_4,x_5)=\frac{(x_3 + x_4)}{\lambda x_5}$. We compare
five curves:
\begin{enumerate}
\item The blue curve labelled Classic Mean Field (1) (obtained with
Octave) shows the expected response time when this is approximated
by defining $\mathbb{E}[R]$ as in~\cite{bortolussi2013bounds}:
$$
\mathbb{E}[R] = \frac{(\n{c} + \n{d})}{\lambda \n{e}} = h(m),
$$
where $\n{c}$, $\n{d}$ and $\n{e}$ denote the classical mean field
approximation values for the fractions of sensor nodes being in
state $c$, $d$ and $e$, respectively.
\item The red curve (2) is the expectation of $h(M)$, computed by
stochastic simulation:
\begin{align*}
\esp{h(M)} = \esp{\min\Big(\frac{M_{c} + M_{d}}{\lambda M_{e}},100\Big)}
\end{align*}
In the case of individual simulation runs it may of course happen that $M_{e}=0$ occasionally. This is why we cap the value of $h$ at 100 in each run. The latter is the value one obtains when all but one of the nodes are waiting and the last node is getting ready for communication too: $M_{c}+M_{d}=9$ and $M_e=1$, i.e. $9/(0.09\times 1) = 100$.
\item The orange curve (3) shows the expected response time approximated
using the refined mean field approximation of
Theorem~\ref{theo:main} with the function $h$.
\end{enumerate}
For comparison, we also compute two other quantities:
\begin{itemize}
\item[4.] The purple curve (4) shows the response time approximated as
follows, using the refined mean field approximation for the fraction
of sensor nodes in each state (i.e. we use Theorem~\ref{theo:main}
with the identity function and then apply $h$):
\begin{align*}
h(\mathit{rmf}) = \frac{(\mathit{rmf\!}_c + \mathit{rmf\!}_d)}{\lambda
\mathit{rmf\!}_e}
\end{align*}
where $\mathit{rmf\!}_c=\mu_c+V_c/N$ denotes the refined mean field
approximation of the fraction of sensor nodes in state $c$, and
similarly for $\mathit{rmf\!}_d$ and $\mathit{rmf\!}_e$ .
\item[5.] Finally, the green curve (5) shows the expected response time
defined as:
\begin{align*}
h(\esp{M}) = \frac{(\mathbb{E}[M_c] + \mathbb{E}[M_d])}{\lambda
\mathbb{E}[M_e]}
\end{align*}
where $\mathbb{E}[M_c]$ is the average fraction of sensor nodes in
state $c$ obtained via the average of 100,000 individual simulation
runs of the model. $\mathbb{E}[M_d]$ and $\mathbb{E}[M_e]$ are obtained in a
similar way.
\end{itemize}
In Figure~\ref{fig:wsn_res}(a), we plot these various curves for $15$
nodes in total. We make two observations. First, in this case the
value obtained by simulation (red curve (2)) is almost 50\% larger
than the classic mean field approximation (1) whereas the refined
approximation (3) is much closer. Second, the purple curve (4) is
close to the green curve (5) but quite far away from the red (2) and
orange (3) curves. This shows that when applying
Theorem~\ref{theo:main}, computing a refined model for $\esp{h(M)}$
and for $h(\esp{M})$ might lead to very different results.
Of course, the larger $N$ gets (in an otherwise equal model), the
closer the orange (3) and red (2) curves will get to the blue curve (1),
i.e. the classic mean field approximation, as illustrated in
Figure~\ref{fig:wsn_res}(b). In both cases, all curves collapse
into a single curve.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[width=0.5\textwidth]{WSNmodel_simu_N15}
&\includegraphics[width=0.5\textwidth]{WSNmodel_simu_N1500}\\%figs/WSN/MeanResponseTime_V4_N1500-crop.pdf}\\
(a) $N=15$: $10$ sensors; $5$ gateways
&(b) $N=1500$: $1000$ sensors; $500$ gateways
\end{tabular}
\end{center}
\caption{\label{fig:wsn_res} Expected response time $E[R]$ for a
sensor node to communicate its data to a gateway, for a WSN model with $N$
nodes of which $2N/3$ are sensor nodes and $N/3$ are gateway
nodes. The red line (2) is an average over 20,000 simulations for
$N=15$ and $1000$ for $N=1500$.}
\end{figure}
\section{Refined Mean Field Model for Majority Rule Decision-making}
\label{sect:RefVoting}
The example in this section concerns a model for collective decision-making. The model is inspired by the work of Montes de Oca et al.~(see \cite{Mo+11,Sch11} and references therein). Collective decision-making is a process whereby the members of a group decide on a course of action by consensus. Such collective decision-making processes have also been applied in swarm robotics. In particular, in that context the robots were asked to choose between two actions that have the same effect but differ in their execution times~\cite{Mo+11}.
One strategy of collective decision making is the use of the majority rule. In this strategy the agents in a population are initially divided into two groups. One in which all members have opinion A and one where all members have opinion B. In every step three agents are selected randomly from the total population to form a temporary team. The team applies the majority rule such that all its members adopt the opinion held by the majority (i.e. at least two) of the members, after which they return to the total population until the population has reached a consensus on one of the two opinions.
In the majority rule strategy extended with differential latency the agents in the population are not available for team formation at all times. Both types of agents are assumed to perform an action with a certain duration during which they cannot participate in team formation. For example, agents with opinion B perform actions taking (on average) relatively more time than those with opinion A. In~\cite{Mo+11} such latency periods for agents with opinion A and B are modelled by random variables with exponential distributions with rate $\lambda_A$ and $\lambda_B$ respectively. For simplicity, it can also be assumed that the A-type actions take 1 time unit on average (i.e. $\lambda_A=1$) and that B-type actions take $1/\lambda$ time units on average, where $\lambda$ takes a value in $(0,1]$. This variant of self-organised collective decision-making is known as majority rule with differential latency (MRDL).
In the following we develop a probabilistic, discrete time variant of the MRDL strategy which we call MRDL-DT. In this variant agents can have either opinion A or opinion B, and in both cases they can either be latent or not, leading to a partition of the population into exactly four classes: $\mathit{LA}$ (latent A), $\mathit{NA}$ (non-latent A), $\mathit{LB}$ (latent B), $\mathit{NB}$ (non-latent B). It is assumed that if an agent is latent it cannot be selected for team formation. The four state MRDL-DT model of an individual object of the population is shown in Figure~\ref{fig:MRDLagent}. The names of the states indicate in which class the object is.
\begin{figure}
\begin{center}
\resizebox{0.5\textwidth}{!}{
\begin{tikzpicture}
\tikzstyle{place}=[circle,draw=blue!50,fill=blue!20,thick,inner sep=0pt,minimum size=8mm]
\node (LB) [place,draw=red!50,fill=red!20] at (0,0) {$\mathbf{LB}$};
\node (NB) [place,draw=blue!50,fill=blue!20] at (4,0) {$\mathbf{NB}$};
\node (LA) [place,draw=red!50,fill=red!20] at (0,2) {$\mathbf{LA}$};
\node (NA) [place,draw=blue!50,fill=blue!20] at (4,2) {$\mathbf{NA}$};
\draw[->,thick] (LA.north west) .. controls +(left:6mm) and +(left:6mm) .. (LA.south west) node[pos=0.5, label=left:{{}}]{};
\draw[->,thick] (LB.north west) .. controls +(left:6mm) and +(left:6mm) .. (LB.south west) node[pos=0.5, label=left:{{}}]{};
\draw[->,thick] (NA.north east) .. controls +(right:6mm) and +(right:6mm) .. (NA.south east) node[pos=0.5, label=right:{{}}]{};
\draw[->,thick] (NB.north east) .. controls +(right:6mm) and +(right:6mm) .. (NB.south east) node[pos=0.5, label=right:{{}}]{};
\draw[->,thick] (NB.south west) .. controls +(down:5mm) and +(down:5mm) .. (LB.south east) node[pos=0.5, label=below:{{keepB}}]{};
\draw[->,thick] (LB.east) .. controls +(up:0mm) and +(up:0mm) .. (NB.west) node[pos=0.5, label=below:{{ actB}}]{};
\draw[->,thick] (NA.north west) .. controls +(up:5mm) and +(up:5mm) .. (LA.north east) node[pos=0.5, label={{keepA}}]{};
\draw[->,thick] (LA.east) .. controls +(up:0mm) and +(up:0mm) .. (NA.west) node[pos=0.5, label={{ actA}}]{};
\draw[->,thick] (NB.north west) .. controls +(up:0mm) and +(up:0mm) .. (LA.south east) node[pos=0.6, label={{changeBA}}]{};
\draw[->,thick] (NA.south west) .. controls +(up:0mm) and +(up:0mm) .. (LB.north east) node[pos=0.6, label=below:{{changeAB}}]{};
\end{tikzpicture}
}
\caption{\label{fig:MRDLagent} Majority rule differential latency model of an individual object. Latent states are red, non-latent ones blue.}
\end{center}
\end{figure}
The behaviour of an individual object is as follows. Initially the object is latent and, assuming it has opinion A (state $\mathit{LA}$), it finishes its job and becomes available for team formation (transition $\mathit{actA}$), moving to state $\mathit{NA}$ with probability $1/q$, for appropriate $q$. When in $\mathit{NA}$ it gets selected in a team with two other members. If the two other members have opinion B, it changes its opinion into B and moves to state $\mathit{LB}$. This happens with probability $\frac{3}{q}\n{NB}^2$, where the factor 3 models the fact that we abstract from the exact order in which the members of the team are selected, which can happen in 3 different ways. Alternatively, the two other members can both have opinion A, or one can have opinion A and the other opinion B. In that case the opinion of the object does not change and the object moves back to $\mathit{LA}$ with probability $\frac{3}{q}(\n{NA}^2 +\n{NA}\n{NB})$.
If the object is in state $\mathit{LB}$ it becomes available for team formation, moving to state $\mathit{NB}$ with a probability $\lambda/q$, where $\lambda$ is a value in $(0,1]$. The latter models the relative longer duration of activity B with respect to A. The behaviour in state $\mathit{NB}$ is similar to that in state $\mathit{NA}$, except that now the opinion may change from B to A.
The discrete time evolution of the model is given by probability transition matrix $\vr{K}$ in which $\n{LA}$, $\n{NA}$, $\n{LB}$ and $\n{NB}$ denote the fraction of objects in the system that are in local state $\mathit{LA}$, $\mathit{NA}$, $\mathit{LB}$ and $\mathit{NB}$, respectively and $\vt{m}=(\n{LA}, \n{NA},\n{LB},\n{NB})$:
$$
\vr{K}(\vt{m})=
\left(
\begin{array}{cccc}
1 - \frac{1}{q}& \frac{1}{q} & 0 & 0 \\
\frac{3}{q}(\n{NA}^2 +\n{NA}\n{NB}) & \mathit{NA}(\vt{m}) & \frac{3}{q}\n{NB}^2 & 0 \\
0 & 0 & 1 - \frac{\lambda}{q} & \frac{\lambda}{q} \\
\frac{3}{q}\n{NA}^2 & 0 & \frac{3}{q}(\n{NB}^2 +\n{NA}\n{NB})& \mathit{NB}(\vt{m}) \\
\end{array}
\right)$$
%
where
$$\mathit{NA}(\vt{m}) = 1- \frac{3}{q}(\n{NB}^2 + \n{NA}^2 +\n{NA}\n{NB})$$
and
$$\mathit{NB}(\vt{m})= 1 - \frac{3}{q}(\n{NA}^2 + \n{NB}^2 +\n{NA}\n{NB}).$$
Since we are dealing with clock-synchronous discrete systems, we also introduced the discretisation factor $q=10$ so that only a fraction of the population is moving from the latent to the non-latent state at any time.
In the example we use $q=10$ and let $\lambda$ take the values 1.0, 0.5 and 0.25 in the various analyses, modelling that task B takes, {\em on average}, the same time as task A, twice as much time, or four times as much time, respectively. We are interested in the evolution of the consensus, $C_A$, on opinion A as a function of the initial values and the differential latency $\lambda$. Let
%
$$
\mathbb{E}[C_A]=\mathbb{E}[M_{LA}+M_{NA}]=\mathbb{E}[h(M_{LA},M_{NA},M_{LB},M_{NB})]
$$
where, with reference to Theorem~\ref{theo:main},
$h(x_1,x_2,x_3,x_4)=x_1 + x_2$.
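As a sketch (Python/numpy, names are ours), the MRDL-DT kernel and the consensus function can be encoded as follows; together with the Jacobian and Hessian derived below, this is all that is needed to run the generic refined mean field recursion.
\begin{verbatim}
import numpy as np

q, lam = 10.0, 1.0  # discretisation factor and differential latency

def K(m):  # state order: (LA, NA, LB, NB)
    LA, NA, LB, NB = m
    keepA = (3/q) * (NA**2 + NA*NB)  # team majority keeps opinion A
    toB   = (3/q) * NB**2            # two B teammates flip an A agent
    keepB = (3/q) * (NB**2 + NA*NB)
    toA   = (3/q) * NA**2
    return np.array([
        [1 - 1/q, 1/q,               0,         0                ],
        [keepA,   1 - (keepA + toB), toB,       0                ],
        [0,       0,                 1 - lam/q, lam/q            ],
        [toA,     0,                 keepB,     1 - (toA + keepB)]])

h = lambda m: m[0] + m[1]  # consensus on A: fraction in LA plus NA
\end{verbatim}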
\subsection{Computation of $A$, $B$ and $\Gamma$ for the MRDL-DT model}
As before, we first need to compute the Jacobian and the Hessian of the function $\Phi_1$ for a generic occupancy measure vector $\vt{m}$ at time step $t$. The Jacobian of the function $\Phi_1$ is:
$$D(\Phi_1)(\vt{m})
= \left(
\begin{array}{cccc}
1 -1/q & \frac{9}{q}\n{NA}^2+\frac{12}{q}\n{NA}\n{NB}& 0 & \frac{6}{q}\n{NA}^2 \\
1/q & \mathit{JNA}(\vt{m}) & 0 & -\frac{3}{q}\n{NA}^2-\frac{6}{q}\n{NA}\n{NB} \\
0 & \frac{6}{q}\n{NB}^2 &1-\frac{\lambda}{q}& \frac{12}{q}\n{NB}\n{NA}+\frac{9}{q}\n{NB}^2 \\
0 & -\frac{3}{q}\n{NB}^2-\frac{6}{q}\n{NA}\n{NB}&\frac{\lambda}{q} & \mathit{JNB}(\vt{m}) \\
\end{array}
\right)$$
%
where
$$\mathit{JNA}(\vt{m})= 1-(\frac{9}{q}\n{NA}^2+\frac{6}{q}\n{NA}\n{NB}+\frac{3}{q}\n{NB}^2)$$ and
$$\mathit{JNB}(\vt{m}) = 1-(\frac{3}{q}\n{NA}^2+\frac{9}{q}\n{NB}^2+\frac{6}{q}\n{NA}\n{NB}).$$
\noindent The Hessian of function $\Phi_1$ is:
$$D^2((\Phi_1)_{LA})(\vt{m})
= \left(
\begin{array}{cccc}
0 & 0& 0 & 0 \\
0 & \frac{18}{q}\n{NA}+\frac{12}{q}\n{NB} & 0 & \frac{12}{q}\n{NA} \\
0 & 0 & 0 & 0 \\
0 & \frac{12}{q}\n{NA} & 0 & 0 \\
\end{array}
\right)$$
$$D^2((\Phi_1)_{NA})(\vt{m})
= \left(
\begin{array}{cccc}
0 & 0& 0 & 0 \\
0 & -\frac{18}{q}\n{NA}-\frac{6}{q}\n{NB} & 0 & -\frac{6}{q}\n{NA}-\frac{6}{q}\n{NB} \\
0 & 0 & 0 & 0 \\
0 & -\frac{6}{q}\n{NA}-\frac{6}{q}\n{NB} & 0 & -\frac{6}{q}\n{NA} \\
\end{array}
\right)$$
$$D^2((\Phi_1)_{LB})(\vt{m})
= \left(
\begin{array}{cccc}
0 & 0& 0 & 0 \\
0 & 0 & 0 & \frac{12}{q}\n{NB} \\
0 & 0 & 0 & 0 \\
0 & \frac{12}{q}\n{NB} & 0 & \frac{18}{q}\n{NB}+\frac{12}{q}\n{NA} \\
\end{array}
\right)$$
$$D^2((\Phi_1)_{NB})(\vt{m})
= \left(
\begin{array}{cccc}
0 & 0& 0 & 0 \\
0 & -\frac{6}{q}\n{NB} & 0 & -\frac{6}{q}\n{NA}-\frac{6}{q}\n{NB} \\
0 & 0 & 0 & 0 \\
0 & -\frac{6}{q}\n{NA}-\frac{6}{q}\n{NB} & 0 & -\frac{18}{q}\n{NB}-\frac{6}{q}\n{NA} \\
\end{array}
\right)$$
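Hand-computing these tensors is error-prone; as a cross-check (and as a first step towards the automatic computation mentioned in the conclusion), the Jacobian and Hessian can be approximated by central finite differences and compared entry-wise against the closed forms above. A sketch, reusing the kernel $K$ from the previous listing:
\begin{verbatim}
import numpy as np

def num_jac(f, m, eps=1e-6):
    # Central finite differences: A[j, k] = d f_j / d m_k.
    n = len(m)
    A = np.zeros((n, n))
    for k in range(n):
        dm = np.zeros(n); dm[k] = eps
        A[:, k] = (f(m + dm) - f(m - dm)) / (2 * eps)
    return A

def num_hess(f, m, eps=1e-4):
    # B[j, k, l] = d^2 f_j / (d m_k d m_l), differentiating the Jacobian.
    n = len(m)
    B = np.zeros((n, n, n))
    for k in range(n):
        dm = np.zeros(n); dm[k] = eps
        B[:, k, :] = (num_jac(f, m + dm) - num_jac(f, m - dm)) / (2 * eps)
    return B

phi1 = lambda m: m @ K(m)
m = np.array([0.25, 0.25, 0.25, 0.25])
# compare num_jac(phi1, m) and num_hess(phi1, m) with the closed forms
\end{verbatim}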
\subsection{Results for the MRDL-DT example}
We first show some results for a medium-size population of $N=160$. Figure~\ref{fig:mrdl_06_160} shows the dynamics of the fractions of the population having opinion A and B, for both the latent and non-latent objects,
for the first $200$ time units. Similarly to the previous examples, a good correspondence can be observed between the mean of 1000 simulation runs and the mean field approximations. Also in this case the refined mean field provides a better approximation than the classical one for the period ranging approximately from 50 to 150 time units.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.47\textwidth]{MajorityRule_LANA_N160}
&\includegraphics[width=0.47\textwidth]{MajorityRule_A_N160}\\
(a) Latent and non-latent & (b) Dynamics of opinion $A$
\end{tabular}
\end{center}
\caption{\label{fig:mrdl_06_160} Dynamics of opinion A, latent and
non-latent. Classical mean field (plain lines), refined mean field
(dashed lines) and simulation results (dotted lines) of MRDL model
with 160 objects, $\lambda=1.0$ and $q=10$; initially $0.6\times160=96$ objects
have opinion A and the whole population is latent. }
\end{figure}
A similar improved correspondence can be observed when considering the
aggregated populations with opinion A or opinion B, respectively, as
shown in Figure~\ref{fig:mrdl_06_160}(b).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{MajorityRule_all_N32}
\end{center}
\caption{\label{fig:mrdl_06_32} Classical mean field (plain lines),
refined mean field (dashed lines) and simulation results (dotted
lines) of MRDL model with 32 objects, $\lambda=1.0$ and $q=10$; initially
$\floor{0.6\times32}=19$ objects have opinion A and the whole population is
latent. }
\end{figure}
However, if a much smaller population is considered, e.g. $N=32$, the
mean field approximation differs considerably from the simulation
results and the refined approximation does not really improve the
accuracy of the approximation. This is what can be observed in
Figure~\ref{fig:mrdl_06_32}, where the results for $N=32$ are shown for a
model without differential latency (i.e. $\lambda=1.0$), for $t\in [0,500]$. This can be
explained as follows: for a large population, a system that is
initially biased towards one opinion will reach consensus on this
opinion. For a small population, however, a system that is initially
biased towards one opinion still can reach consensus on the other
opinion due to intrinsic stochastic fluctuations. This cannot be
taken into account in a mean field population model. However, both
analytical models, derived from a master equation approach, and
simulation show that the probability to reach consensus on a given
opinion rapidly converges to a step function for a growing population
size $N$~\cite{Sch11}, where the critical density is given by
$\n{A}=\frac{\lambda}{1+\lambda}$, with $\n{A}$ the initial
fraction of the population with opinion $A$. For large $N$, if
$\n{A}>\frac{\lambda}{(1+\lambda)}$ the system almost surely reaches
consensus on A, whereas for $\n{A}<\frac{\lambda}{(1+\lambda)}$ it
almost surely reaches consensus on B. This explains why for larger
populations the mean field approximations become increasingly accurate
as shown in Figure~\ref{fig:mrdl_06_160}.
This example shows a limit of the refined mean field approximation:
when a system has multiple equilibrium points (and in particular when
there are multiple absorbing states as in this example), the dynamics
of the mean field approximation depends on the initial state of the
system: For a given initial state the mean field will always follow
the same trajectory (it is a deterministic system). When the system is
large, the random fluctuations will remain small and the corresponding
stochastic system will stay in the same basin of attraction. In the
case of a small population, however, the dynamics will be greatly
affected by the random fluctuations. These fluctuations can lead the
system to another basin of attraction than the original one.
\section{Non-Exponentially Stable Equilibrium: Accuracy versus Time}
\label{sect:non-exponentially-stable}
In the previous sections, the dynamical system $m=\Phi_1(m)$ has
either one exponentially stable attractor (Section~\ref{sect:RefSEIR})
or multiple exponentially stable attractors
(Section~\ref{sect:RefVoting}). When the attractor is unique and
exponentially stable, the accuracy of the mean field or refined mean
field approximation is uniform in time
(Theorem~\ref{theo:steady}(ii)). In this section, we study the case
of a system that has a unique attractor but that is not exponentially
stable. We show that, in this case, the accuracy of the (refined)
mean field approximation is no longer uniform in time.
We consider a system with $N$ objects in which each object is in state
$0$ or $1$. An object in state $1$ goes to state $0$ with probability
$1$ and an object in state $0$ goes to $1$ with probability
$\alpha m_0$, where $\alpha\in(0,1)$ is a parameter. The transition
matrix $K$ is
\begin{align*}
K(m) = \left[
\begin{array}{cc}
1-\alpha m_0&\alpha m_0\\
1 & 0
\end{array}
\right]
\end{align*}
The function $\Phi_1:m\mapsto mK(m)$ has a unique fixed point whose
first component is
$\mu_0(\infty)=(\sqrt{1+4\alpha}-1)/(2\alpha)$. This fixed point is
exponentially stable if and only if $\alpha < 0.75$.
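Both claims follow from a short computation: since $m_1=1-m_0$, the first component of the map is $\Phi_1(m)_0 = m_0(1-\alpha m_0)+m_1 = 1-\alpha m_0^2$, so the fixed point solves $\alpha m_0^2+m_0-1=0$, and
\begin{align*}
\Big|\frac{d}{dm_0}\big(1-\alpha m_0^2\big)\Big|_{m_0=\mu_0(\infty)}
= 2\alpha\,\mu_0(\infty)=\sqrt{1+4\alpha}-1,
\end{align*}
which is strictly smaller than $1$ if and only if $\alpha<0.75$ (at $\alpha=0.75$ the derivative equals $1$).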
\subsection{Transient regime and accuracy for large $t$}
\begin{figure}[ht]
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[width=.5\linewidth]{unstable1D_a60_N10}
&\includegraphics[width=.5\linewidth]{unstable1D_a60_N30}\\
(a) $N=10$ & (b) $N=30$
\end{tabular}
\caption{Exponentially stable case ($\alpha=0.6$).}
\label{fig:stable}
\end{figure}
\begin{figure}[ht]
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[width=.5\linewidth]{unstable1D_a75_N10}
&\includegraphics[width=.5\linewidth]{unstable1D_a75_N30}\\[-5pt]
(a) $N=10$ & (b) $N=30$\vspace{-.3cm}
\end{tabular}
\caption{Non-exponentially stable case ($\alpha=0.75$). }
\label{fig:unstable}
\end{figure}
In Figure~\ref{fig:stable} and Figure~\ref{fig:unstable}, we plot the
first component of the mean field approximation $\mu(t)$ and of the refined mean field
approximation $\mu(t)+V(t)/N$, as well as the exact value of $\esp{M(t)}$,
for $N=10$ and $N=30$. The initial value is $m=(0.7,0.3)$. The exact
value of $\esp{M(t)}$ was computed by a numerical method that uses the
fact that the system with $N$ objects can be described by a Markov
chain with $N+1$ states.
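The following Python sketch (our own illustration) builds this chain, where the state $k\in\{0,\dots,N\}$ counts the objects in local state $0$: each of the $k$ objects in state $0$ independently moves to state $1$ with probability $\alpha k/N$, while all $N-k$ objects in state $1$ return to state $0$, hence the next state is $N-\mathrm{Binomial}(k,\alpha k/N)$.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def transition_matrix(N, alpha):
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        pmf = binom.pmf(np.arange(k + 1), k, alpha * k / N)
        for j in range(k + 1):
            P[k, N - j] += pmf[j]   # j objects flip: next state N - j
    return P

def exact_mean(N, alpha, m0_frac, T):
    P = transition_matrix(N, alpha)
    dist = np.zeros(N + 1)
    dist[int(round(m0_frac * N))] = 1.0   # deterministic start
    means = []
    for _ in range(T + 1):
        means.append(np.dot(dist, np.arange(N + 1)) / N)  # E[M_0(t)]
        dist = dist @ P
    return means

means = exact_mean(N=10, alpha=0.6, m0_frac=0.7, T=50)
\end{verbatim}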
These figures show that the refined approximation always improves the
accuracy compared to the classical mean field approximation for small
values of $t$, both for $\alpha=0.6$ and $\alpha=0.75$. The situation
for large values of $t$ is quite different. On the one hand, when the
fixed point is exponentially stable ($\alpha=0.6$,
Figure~\ref{fig:stable}), the refined approximation is very accurate
for all values of $t$. On the other hand, when the fixed point is not
exponentially stable ($\alpha=0.75$, Figure~\ref{fig:unstable}), the
refined approximation seems to be unstable and is not a good
approximation of $\esp{M(t)}$ for values of $t$ that are too large
compared to $N$ ($t>7$ for $N=10$ or $t>12$ for $N=30$).
\subsection{Steady-state convergence}
To explore how the non-exponentially stable case affects the accuracy
of mean field approximation, we now study in more details the
steady-state convergence when $\alpha=0.75$. It is known (see for
example \cite[Corollary~14]{gastgaujalDEDS}) that when the dynamical
system $m=\Phi_1(m)$ has a unique attractor $\mu(\infty)$, then the
steady-state expectation $\esp{\MN}$ converges to $\mu(\infty)$ as $N$
goes to infinity. Theorem~\ref{theo:steady} shows that if in addition
the attractor is exponentially stable then
$\esp{\MN}\approx \mu(\infty)+V/N$.
In Figure~\ref{fig:unstable_steadyState}, we show that the latter no longer
holds when the mean field system has a unique attractor that is {\em not}
exponentially stable. We consider the same model with $\alpha=0.75$
for which $\mu(\infty)=2/3$. We plot in
Figure~\ref{fig:unstable_steadyState}
$\sqrt{N}(\esp{\MN}-\mu(\infty))$, where we computed $\esp{\MN}$ by
inverting the transition matrix of the system of size $N$. This figure
shows that $\sqrt{N}(\esp{\MN}-\mu(\infty))$ does not converge to $0$
as $N$ goes to infinity but seems to converge to approximately
$-0.0975$ (as indicated by the fitted line in orange). This suggests
that for this model, one has in steady-state:
\begin{align}
\esp{\MN}\approx\mu(\infty)-\frac{0.0975}{\sqrt{N}}+\frac{0.14}{N}.
\label{eq:sqrt{N}-conv}
\end{align}
Note that the constants $-0.0975$ and $0.14$ were obtained by a purely
numerical method that consists in finding the best curve of the form
$a+b/\sqrt{N}$ that fits $\sqrt{N}(\esp{\MN}-\mu(\infty))$. For now,
we do not know if there exists a systematic way to obtain these
values for another model that also has a non-exponentially stable
equilibrium point. We leave this question for future work.
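For reproducibility, the fit itself is a small least-squares problem in $(a,b)$. A sketch, reusing the transition-matrix construction of the previous listing and obtaining the stationary mean from the left eigenvector of eigenvalue $1$ (the grid of values of $N$ is ours):
\begin{verbatim}
import numpy as np

alpha = 0.75
mu_inf = (np.sqrt(1 + 4 * alpha) - 1) / (2 * alpha)  # = 2/3

def steady_mean(N):
    P = transition_matrix(N, alpha)
    w, v = np.linalg.eig(P.T)           # stationary distribution pi P = pi
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    return np.dot(pi, np.arange(N + 1)) / N

Ns = np.array([10, 20, 50, 100, 200, 500])
y = np.array([np.sqrt(N) * (steady_mean(N) - mu_inf) for N in Ns])
X = np.column_stack([np.ones(len(Ns)), 1 / np.sqrt(Ns)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)  # a ~ -0.0975, b ~ 0.14
\end{verbatim}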
We remark that the convergence of $\esp{\MN}$ to $\mu(\infty)$
observed in Equation~\eqref{eq:sqrt{N}-conv} is in $O(1/\sqrt{N})$ and
not in $O(1/N)$. This model satisfies all the assumptions of
Theorem~\ref{theo:steady} but one: the attractor $\mu(\infty)$ is not
exponentially stable. This suggests that having an exponentially
stable attractor is needed to obtain a convergence in $1/N$.
\begin{figure}[ht]
\centering
\includegraphics[width=.8\linewidth]{unstable1D_steadyState}
\caption{Non-exponentially stable case ($\alpha=0.75$): convergence
of $\esp{\MN}$ to $\mu(\infty)$ as a function of $N$. }
\label{fig:unstable_steadyState}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper we studied population models composed of
(clock-)synchronous objects. A classical method to study such systems
is to consider the mean field approximation. By studying the accuracy
of this deterministic approximation, we developed a new approximation,
that we call the \emph{refined} mean field approximation. We
illustrated on a few examples that this approximation can greatly
improve the accuracy of the classical mean field limit, also for
systems of relatively small size ($10$--$20$ objects). Yet, this
refined approximation has some limitations when the deterministic
approximation has multiple basins of attraction or has a unique
attractor that is not exponentially stable.
The proposed refined approximation is given by a set of linear
equations that scales as the square of the dimension of the model (but
does not depend on the system size). For now, we limited our study to
relatively small models, for which the Jacobian and Hessian can be
computed in closed form. We are currently investigating means to make this
computation automatic, which will allow us to study large-scale
examples.
\bibliographystyle{plain}
\section{Introduction}
The average smartphone has more than 25 apps installed~\cite{nielsonmanyapps}, each with varying access to the device depending on the permissions it has been granted. These permissions are shared by third-party libraries (henceforth called \textit{libraries}, for brevity), which are embedded within apps and enjoy the privileges granted to their host apps. Libraries are used extensively across the app ecosystem. App developers use them to monetize apps, integrate with social media, or simply to provide complex app functionality with little programming effort required. While libraries provide unquestionable benefit to app developers and the app ecosystem as a whole, prior work has shown that they also harm user privacy~\cite{DBLP:journals/corr/abs-1303-0857, Grace:2012:UEA:2185448.2185464, stevens2012investigating, seo2016flexdroid}. For example, libraries may track users, abuse the permissions they have been granted, or leak sensitive personal data.
The Android security model does not support the separation of privileges between apps and their embedded libraries. As such, not only do libraries inherit the permissions granted to their host apps, the developers of the host apps themselves are sometimes forced to declare additional permissions to support embedded libraries~\cite{Shekhar:2012:ASS:2362793.2362821, Pearce:2012:APS:2414456.2414498}. Additional permissions may especially benefit advertising libraries (henceforth called \textit{ad libraries}) as they facilitate behavioural advertising through greater potential for user profiling. Ur et al.~\cite{Ur:2012:SUS:2335356.2335362} found that users were generally unaware of the inner workings of behavioural advertising, and described the practice as ``scary'' and ``creepy''. Along similar lines, Spensky et al.~\cite{spensky2016sok} argue that users cannot be expected to fully understand the implications of sharing their personal data.\\
\begin{figure*}[!t]
\centering
\includegraphics[width=6.7in,clip=true,trim=0.2in 8.2in 1.6in 0.4in]{ilc.pdf}
\caption{An example of how intra-library collusion (ILC) could happen in practice. Libraries are able to use permissions in \textcolor{blue}{blue} because they have been granted to the app, while permissions in \textcolor{red}{red} are unavailable for libraries within that app. Overall, \texttt{library-2} is able to access a total of four permissions on the device.}
\label{fig-ilc-explained}
\end{figure*}
The current Android security model facilitates the following threats from libraries:\\
\begin{itemize}
\item Libraries may abuse the privileges granted to host apps.
\item Libraries may track users without their consent.
\item Opportunistic libraries may aggregate multiple signals for detailed user profiling.
\end{itemize}
Recently, Bosu et al.~\cite{Bosu:2017:CDL:3052973.3053004} studied the possibility of more than one app colluding to leak sensitive data using inter-component communication (ICC). Prior studies have also examined privacy leaks using ICC~\cite{Lu:2012:CSV:2382196.2382223, Felt:2011:PRA:2028067.2028089, ndsssdcl, Chan:2012:DAA:2185448.2185466}, but real-world attacks are limited since they require one\footnote{One malicious app is sufficient for a confused deputy attack~\cite{bugiel2012towards} if a benign, but vulnerable, app is present on the device.} or more malicious apps to be installed on devices to facilitate the attack.
On the other hand, a novel privilege escalation attack we call \textit{intra-library collusion} (ILC) is much more likely. ILC is the phenomenon that happens when individual libraries obtain greater combined privileges on a device by virtue of being embedded within multiple apps, with each app having a distinct set of permissions granted. The ``collusion'' between instances of the same library need not happen on the device. Indeed, the library can transmit individual streams of sensitive user data to a remote server where it can be aggregated to better profile the unaware user. Prior work~\cite{stevens2012investigating} has only looked at the possibility of tracking users across apps, whereas ILC concerns aggregating sensitive user data across apps.
ILC is visually depicted in Fig.~\ref{fig-ilc-explained}. From the figure, \texttt{library-2} is shared across all apps, and has combined access to four permissions (\texttt{A,B,C,F}), although in any single app it has access to a maximum of two permissions (\texttt{A,B} or \texttt{A,C}). Thus, \mbox{\texttt{library-2}} may benefit from ILC. Note that libraries may also further their malfeasance by leveraging sources of data not guarded by permissions, such as the list of apps installed on a device. Such data has previously been shown to be useful for predicting user traits~\cite{Seneviratne:2014:PUT:2636242.2636244}.
Concretely, ILC is possible if the following conditions are true:\\
\begin{enumerate}
\item Two or more installed apps, $A_1, ..., A_n$, contain the same library $L$.
\item Apps $A_1, ..., A_n$ have sets of permissions granted to them such that the union of the sets of permissions $U$ contains more permissions than any of the individual sets of permissions.
\item $L$ contains code that allows it to make API calls that are guarded by two or more permissions in $U$.
\item $L$ has the ability to uniquely identify a device.
\item $L$ transmits sensitive data to its server.
\end{enumerate}
Privilege escalation using ILC is a greater problem than privilege escalation through ICC, since libraries are designed to be embedded within multiple apps by definition. This increases the likelihood that any given device will contain two or more apps with the same library embedded. Moreover, the existence of very popular libraries with total install-bases in the billions suggests that potential for ILC should be very common in practice. To make matters worse, libraries may have legitimate claims as to why personal data needs to be leaked from a device, but any aggregation of this data once it gets to library servers will be opaque to users and industry regulators alike.
Risks coming from traditional libraries are not the only worry. Chen et al.~\cite{7546512} argue that libraries are already being repackaged for propagating malicious code. Using their methodology, the authors found that 6.84\% of apps obtained from the Google Play Store were infected with potentially harmful libraries. Potentially harmful behaviour was also seen in the iOS versions of libraries. These observations demonstrate that attackers have turned their attention to libraries as a means of malware propagation, further motivating the need to study ILC and its effects.
Measuring the potential for ILC requires knowledge of the apps installed on real-world devices. To obtain such knowledge, we interfaced with the Device Analyzer project~\cite{Wagner:2015:DAP:2766498.2774992}, which collects usage data and other metadata from smartphones, including lists of installed apps. From the Device Analyzer dataset, we obtained the lists of apps installed on over 30,000 real-world devices to facilitate our study. In contrast to prior work, we measure real-world privacy risks by leveraging the additional insight obtained from access to full lists of apps that are installed on devices.
In this work, we focus on Android due to the availability of data on lists of apps installed on Android devices. However, due to similarities in access control and app deployment on iOS, we believe many of our insights may hold on that platform as well.\\
\noindent{\textbf{Contributions.} Our paper makes the following contributions to the state-of-the-art:\\}
\begin{itemize}
\item We describe a novel privilege escalation attack called intra-library collusion (ILC), which comes from libraries embedded within apps.
\item We perform the first study on the extent to which libraries may abuse ILC for better user profiling in the real world~(Section~\ref{section-ilc}).
\item We use a historical dataset of apps to demonstrate that the potential for ILC (and its consequences) have increased over the last two-and-a-half years~(Section~\ref{section-longitudinal}).
\item We make a first effort to systematically measure the frequency of transmission of personal data from real-world devices by advertisement libraries~(Section~\ref{section-frequency-leaks}).
\item To justify one of our experimental assumptions, we measure the adoption of run-time permissions across popular apps in the Google Play Store~(Section~\ref{section-runtime-permissions}).
\end{itemize}
\noindent{\textbf{Roadmap.} The rest of the paper is organised as follows: Section~2 gives background on important Android-related concepts; Section~3 describes the methodology that was used for data collection and analysis; Section~4 presents our measurements of the potential for ILC; Section~5 shows how the potential for ILC has increased over time; Section~6 studies how often and to how many different destinations sensitive data is leaked; Section~7 discusses the results of our study; Section~8 overviews related work, including mitigation strategies; and finally Section~9 concludes the paper.}
\section{Background}
We frame the problem of ILC by first describing the Android permission model, explaining how libraries fit into the ecosystem, and explaining our threat model.
\subsection{Android Permissions Model}
The Android operating system (OS) mediates access to sensitive device resources using a permission-based access control system. Permissions are divided into \textit{normal} and \textit{dangerous} permissions, with normal permissions protecting resources that pose very little risk to user privacy and dangerous permissions guarding access to private user information and data~\cite{androiddangerousperms}. Apps wanting access to sensitive resources guarded by dangerous permissions must request the relevant permission and have it granted by the user. Throughout this paper, we focus exclusively on dangerous permissions. For this reason, we hereafter refer to \textit{dangerous permissions} as \textit{permissions}.
Apps list the permissions they would like to use in a manifest file that is bundled within the app. Prior to Android~6.0, the permissions listed by an app had to be accepted in full at install-time in an all-or-nothing manner. As of Android~6.0, permissions must still be listed in their entirety in the app's manifest, but they may be accepted or rejected selectively by users at run-time. In order for run-time permissions to be triggered, however, the app itself must have a \texttt{targetSDK} of~23 or higher.
Android apps are packaged as \texttt{apk} files, which are compressed archives containing all the resources required by the app. This means that any libraries leveraged by an app are compiled into the same app binary for distribution. The Android OS does not provide a facility for privilege separation between apps and their embedded libraries, and thus libraries inherit all permissions granted to apps. Conversely, many libraries use permissions that the embedding app does not itself require; since the app must declare these permissions on the library's behalf, the app ends up holding privileges beyond its own functional needs. This is one explanation as to why many functionally-similar apps ask for greatly differing permissions~\cite{Taylor:2016:SSP:2994459.2994474}. The larger problem of privilege separation between apps and their libraries has been explored by several authors~\cite{Pearce:2012:APS:2414456.2414498, Shekhar:2012:ASS:2362793.2362821, seo2016flexdroid}.
\subsection{Embedded Libraries}
The lack of privilege separation in apps benefits libraries in general and ad libraries in particular. Ad libraries are provided by ad networks, which serve as the middle-man connecting advertisers to app developers. Ad libraries provide easy-to-use interfaces that allow app developers to quickly insert ads into their existing apps. The ad library does the heavy lifting by fetching, displaying, and tracking revenue from ads.
We suspect that ad libraries stand to benefit more from ILC than other libraries because additional ``signals'' obtained from greater access to permissions directly benefit their ability to perform behavioural and demographic targeting of ads. The end result is better-targeted ads, which increases click-through rates and consequently generates increased profits. This is one explanation for the observations in prior work~\cite{Grace:2012:UEA:2185448.2185464, stevens2012investigating}, where ad libraries check for and use undocumented permissions, i.e., permissions that have not been described in their SDK documentation. Other libraries may also benefit from added access to permissions, as any data they collect can be aggregated and sold to third-parties.
Since libraries may be installed in a variety of apps with varying permissions, they typically contain code that checks whether the relevant permissions have been properly granted before making a permission-protected API call~\cite{Grace:2012:UEA:2185448.2185464, stevens2012investigating}. Indeed, Fang et al.~\cite{Fang:2016:RCA:2897845.2897914} observed that libraries were better designed than apps on a whole as it relates to handling cases where permissions have been revoked or not granted.
In ad libraries, the use of any permissions should be carefully scrutinised, since the sole purpose of the ad library is to serve ads. That is, the ad library itself provides no other functionality on a device beyond serving ads, and thus any permissions used by an ad library should be assumed to facilitate better ad delivery. Better ad delivery may encompass better user profiling or the efficient display of ads. Some permissions, such as \texttt{READ\_EXTERNAL\_STORAGE} and \texttt{WRITE\_EXTERNAL\_STORAGE}, may be useful for the caching of ads, which improves user experience. Other permission usage (such as that observed in~\cite{Grace:2012:UEA:2185448.2185464}), including reading a user's calendar, contact lists and call logs suggests highly invasive data collection. Moreover, the guile of some companies has been demonstrated by their use of the \texttt{RECORD\_AUDIO} permission to enable cross-device tracking using inaudible ultrasound~\cite{mavroudis2017privacy}.
Some libraries employ code obfuscation to frustrate reverse-engineering. This may be to make it difficult to commit ad fraud or disable ad functionality, but may also be used to hide unsavoury data collection practices. Moreover, many libraries also employ dynamic code loading whereby executable code is retrieved from the Internet and loaded by the library dynamically, thus defeating scalable static analysis~\cite{seo2016flexdroid}. The existence of dynamic programming techniques and obfuscation allow library developers to execute questionable code on devices while reducing the likelihood of being discovered. Already, potentially harmful code has been observed in libraries, affecting hundreds of thousands of apps across the smartphone ecosystem~\cite{seo2016flexdroid, 7546512}.
\subsection{Threat Model}
Our main adversaries are the ad networks that provide ad libraries, since they stand to benefit the most from invasive data collection. Invasive data collection is beneficial to both the advertiser and app developer (in the form of revenue), but the end-user suffers from an erosion in their privacy, and the fact that one or more third-parties are able to construct substantial profiles of their habits and interests. Other third-parties distributing libraries (that do not deliver ads) also stand to benefit if they can capture user data, since this data can be later sold. These other third-parties include malicious actors that attempt to introduce harmful code in libraries through repackaging, as observed in~\cite{7546512}.
We assume that our adversaries try to collect personal data, whether overtly or covertly, in order to facilitate their business objective. For ad networks, this business objective would be better user profiling for the purposes of targeting ads. For other non-advertising adversaries, this would be for collecting user data to sell or trade. Thus, in examining ILC, it is important to not focus solely on ad libraries. Additionally, it is important to understand that all streams of personal data may be valuable, as even innocuous pieces of data when combined may lead to greater privacy erosion through inferencing.
\subsubsection{What Personal Data is of Interest}
There are several streams of data on a smartphone that are of interest to adversaries. In what follows, we highlight what we suspect are the most important ones.\\
\noindent{\textbf{Location.} Most modern smartphones contain GPS hardware, which can precisely pinpoint a user's location. A coarse estimate of a user's location can also be obtained from other sources such as nearby Wi-Fi networks and cell towers. Invasive libraries can track user movements to determine where they live, work and socialise based on their location at various times of the week. Given movement patterns in the evenings, an adversary could infer that a user is an alcoholic. This information may be valuable to the user's insurance company, which itself may be inferred if the user has the insurance company app installed. Worryingly, the user's insurance app can even do this profiling itself.}\\
\noindent{\textbf{App Usage.} The list of apps installed on a device may be useful for understanding the interests of a user. Moreover, information about app usage can reveal the level of importance of each of these interests to the user. If app usage data is combined with location data, a broader picture may be painted of a user. For example, a user currently running a stopwatch app does not say much by itself, but when combined with coarse location data, an advertiser can determine a victim is likely training in the gym, as opposed to attending a social event, at a sports facility. This user may then be profiled and targeted with advertisements for protein powder, or more maliciously, anabolic steroids.}\\
\noindent{\textbf{Device Information.} Smartphones reveal device information such as device type and model. An adversary can use this to determine the amount of disposable income that a user has. Devices also contain unique identifier information such as an IMEI or SIM card information. An adversary tracking a user across devices (using SIM card information and assuming the user keeps their phone number) can be more confident in the disposable income of a user, if they are seen to use only high-end devices over time. Additionally, the IMEI of a device is also useful for tracking a user across apps, even if their device is restored to factory state.}\\
\noindent{\textbf{Communication.} Various communication data can be used to profile a user. From call logs and messages, an adversary can determine a user's close friends. With sentiment analysis, they may also be able to determine which contact is the user's spouse. Analysing text messages can also help to uncover a user's interests. The volume, duration, and time of phone calls can paint a picture of whether a user is very social and/or outgoing. Furthermore, parsing a victim's address book can allow the adversary to target persons with similar interests to the victim. Contacts in a user's address book may also prove useful if the user is blackmailed. For example, an attacker may threaten to notify a spouse of the victim's use of a dating app.}\\
\noindent{\textbf{Storage.} The files stored on a device may reveal other interests of a user. A user with more documents than pictures may be targeted with productivity tools instead of a digital camera. Conversely, a user with many pictures whose device is almost out of space may be targeted with ads for a new smartphone or memory cards. Further still, recently taken pictures combined with a user's location may suggest that the user enjoys photographing nature.}\\
\noindent{\textbf{Microphone.} The guile of adversaries has been demonstrated especially with the abuse of access to the device microphone. It is no longer necessary for an adversary to eavesdrop on a victim's conversations (although they may still do). Indeed, apps have been known to track users across devices using ultrasound~\cite{mavroudis2017privacy}.}\\
\section{Methodology}
\label{section-methodology}
To measure the danger of ILC in the real-world, we took the following steps:\\
\begin{enumerate}
\item Obtain all apps in the Google Play Store with more than one million downloads~(Section~\ref{section-app-lists}).
\item Identify the permissions that libraries within these apps are able to use~(Section~\ref{section-library-permission-usage}).
\item Use app lists from real-world devices to understand the number of libraries used within apps on devices and the permissions that they have access to~(Section~\ref{section-realworld-app-lists}).
\end{enumerate}
\subsection{Obtaining Apps for Analysis}
\label{section-app-lists}
In this study, we only considered apps available in the official Android app marketplace: the Google Play Store. Performing our analysis on this universe of apps, i.e., the entire Google Play Store, would require substantial storage and processing resources. For this reason, we instead opted to analyse popular apps. An app was considered to be popular if it had more than one million downloads. By analysing popular apps, we capture a large cross-section of those apps that would likely be installed on the average smartphone. This allows us to get a good indication of the threats to a majority of users, while limiting the amount of computing resources required for analysis. While additional information is lost by not considering less popular apps, these apps have a much smaller install-base and thus their contribution to ILC, if present, is already more limited.
To this end, we used a database of Google Play Store app metadata provided by the authors of~\cite{Taylor:2017:UUI:3052973.3052990} to identify and download all apps with more than one million downloads. This was~15,052 apps in total.
\subsection{Library Permission Usage}
\label{section-library-permission-usage}
Merely looking at the permissions recommended by library SDK documentation does not provide a complete understanding of permission usage within libraries. Indeed, libraries may use more permissions internally, and the app itself also needs to declare particular permissions (and have them granted) in order for the library to be able to use them. Conversely, just because an app declares (and has been granted) particular permissions does not mean that the library contains code that is able to leverage these permissions. Thus, several steps need to be taken to understand what permissions the libraries within each app have access to. In what follows, we outline the steps taken in this study; a pipeline sketch in code follows the list.\\
\begin{enumerate}
\item Decompile the \texttt{apk} file for an app to \texttt{smali} code using \texttt{apktool}~\cite{apktool}.
\item Use a whitelist of library signatures provided by the authors of FlexDroid~\cite{seo2016flexdroid} to identify library code as distinct from app code. Libraries could also be detected using techniques described in~\cite{Backes:2016:RTL:2976749.2978333}.
\item Use API-to-permission mappings from PScout~\cite{Au:2012:PAA:2382196.2382222} (improvement over Stowaway~\cite{Felt:2011:APD:2046707.2046779}) to understand what permissions are required by each API call observed in the \texttt{smali} code. At this point, we know what set of permissions can be used by library code and what set can be used by app code.
\item Take the intersection of the permissions declared in the app's manifest and the permissions observed in its \texttt{smali} code to determine what permissions libraries are actually able to leverage.
\end{enumerate}
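The following is an illustrative Python sketch of this pipeline, not our exact implementation; it assumes \texttt{apktool} is installed on the system, and all argument names are hypothetical.
\begin{verbatim}
import subprocess
from pathlib import Path

def library_permissions(apk, lib_prefixes, api_to_perm,
                        manifest_perms):
    # apk            -- path to the .apk file
    # lib_prefixes   -- whitelisted library prefixes, e.g. 'com/facebook'
    # api_to_perm    -- PScout-style map: API signature -> permissions
    # manifest_perms -- permissions declared in the app's manifest
    out = Path("/tmp/decompiled")
    # Step 1: decompile the apk to smali code.
    subprocess.run(["apktool", "d", "-f", str(apk), "-o", str(out)],
                   check=True)
    lib_perms = set()
    smali_root = out / "smali"  # multi-dex smali_classesN dirs omitted
    for smali in smali_root.rglob("*.smali"):
        # Step 2: attribute code to a library via its package path.
        rel = str(smali.relative_to(smali_root))
        if not any(rel.startswith(p) for p in lib_prefixes):
            continue
        # Step 3: map observed API invocations to permissions.
        text = smali.read_text(errors="ignore")
        for api, perms in api_to_perm.items():
            if api in text:
                lib_perms.update(perms)
    # Step 4: a library can only use permissions the app declares.
    return lib_perms & manifest_perms
\end{verbatim}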
Note that this technique may fail to reveal all permissions used if programming features such as dynamic code loading or reflection are employed. Thus, the permissions that are observed are a lower bound on the actual permissions that may be used in each case. For this reason, the results we obtain can also be considered as a lower-bound of the actual privacy risks coming from apps.
Not all apps were successfully decompiled; a subset failed due to technical shortcomings of the decompiler that was used. In the end, we were able to obtain library permission usage information for~14,976 apps.
\subsection{Obtaining Real-World App Lists}
\label{section-realworld-app-lists}
Central to the overall goal of understanding ILC is obtaining lists of installed apps on devices, hereafter called \textit{app lists}. To obtain app lists, we leveraged the Device Analyzer dataset. Device Analyzer is a project concerned with collecting usage statistics on smartphones. These usage statistics are collected and uploaded in the background by the client app, which is voluntarily installed by contributors to the project. Device Analyzer has data from over 30,000 contributors.
The most important data that we leverage from Device Analyzer is the app lists from each of the contributing devices and usage information for each of the apps in the app lists. Usage information tells when and how often apps are run by users. Concretely, we leveraged app lists and app usage information for 30,444 devices.
\subsection{Assumptions}
During our study we made several assumptions regarding the effect of run-time permissions and the effect of old Device Analyzer data.
\subsubsection{Assumption for Run-time Permissions}
\label{section-runtime-permissions}
The advent of run-time permissions has allowed users to selectively accept (or reject) permissions individually. For our study, we assume that apps are granted all permissions listed in their manifest as a matter of practicality, since the Device Analyzer dataset that we leverage currently does not capture run-time permissions that have been granted to apps. This may in turn cause us to overstate the number of permissions that have been granted to apps, and consequently, their libraries.
\begin{figure}[!t]
\centering
\includegraphics[width=3.25in]{targetsdk.pdf}
\caption{API levels targeted by apps with more than one million downloads. Disproportionately more apps target API level~23, presumably to facilitate run-time permissions.}
\label{fig-targetsdk}
\end{figure}
However, for run-time permissions to be triggered, a device needs to be running at least Android~6.0 \textit{and} the app needs to have a \texttt{targetSDK} of~23 or higher. Fig.~\ref{fig-targetsdk} shows the \texttt{targetSDK} for the apps in our dataset. Approximately 60\% of apps have a \texttt{targetSDK} of~22 or lower, meaning that our assumption is valid, by default, in a majority of cases.
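The trigger condition is easy to state in code. The following is a minimal sketch, where the list of \texttt{targetSDK} values is a hypothetical stand-in for our app metadata:
\begin{verbatim}
def triggers_runtime_prompts(device_api_level, target_sdk):
    # Run-time permission dialogs appear only when both the OS
    # (Android 6.0 corresponds to API level 23) and the app opt in.
    return device_api_level >= 23 and target_sdk >= 23

# Fraction of apps that can never trigger run-time prompts,
# regardless of the OS version on the device.
target_sdks = [19, 21, 22, 23, 25]   # hypothetical, one entry per app
legacy = sum(t < 23 for t in target_sdks) / len(target_sdks)
print("{:.0%} of apps target API 22 or lower".format(legacy))  # 60%
\end{verbatim}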
Approximately~40\% of apps targeted API level~23 or higher. This means that 40\% of the apps in our dataset will trigger run-time permission requests provided that they are run on devices with Android 6.0 or higher. In reality, many devices are running versions older than Android~6.0, and thus run-time permissions will be triggered in fewer cases, though this number will increase as older devices get replaced with newer ones. Additionally, given user propensity to grant permissions~\cite{7427642, Felt:2012:APU:2335356.2335360}, we suspect that in other cases many run-time permissions will be granted. This leads to our assumption being valid in even more cases.
Disproportionately more apps target API level~23 as shown in Fig.~\ref{fig-targetsdk}. One explanation for this is that these app developers wanted to take advantage of run-time permissions and thus targeted this API level to implement the feature. It remains unclear, however, whether the developers of the approximately~60\% of apps targeting API level~22 or lower have incentive to switch to run-time permissions.
At this point, we remind the reader that these numbers describe the API levels targeted by popular apps. Popular apps have developers with financial incentives and resources to update their apps to target the newest API levels. Thus, we speculate that more than 60\% of unpopular apps will target API levels that pre-date run-time permissions, i.e., API level~22 or lower.
We used the Google Play Store metadata described in Section~\ref{section-app-lists} to make informed speculation on whether apps not supporting run-time permissions were likely to do so in the future. To infer this, we looked at the update history of apps not supporting run-time permissions. Approximately 64\% of apps not supporting run-time permissions have been updated since Android~6.0 was released (officially on October~5,~2015) and still fail to support run-time permissions. These apps, on average, received their latest update 353~days after the release of Android~6.0. The 36\% of apps that have not been updated since the release of Android~6.0 have an average update date of 378~days before the release of Android 6.0. This leads us to believe that many app developers simply have limited incentive to support run-time permissions at all. Thus we believe our assumption regarding run-time permissions is reasonable.
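For concreteness, the recency figures above reduce to simple date arithmetic against the Android~6.0 release date; the two update dates below are hypothetical examples chosen to reproduce the reported averages.
\begin{verbatim}
from datetime import date

M_RELEASE = date(2015, 10, 5)   # official release of Android 6.0

def days_relative_to_m(last_update):
    # Positive: app updated after Android 6.0 shipped; negative: before.
    return (last_update - M_RELEASE).days

print(days_relative_to_m(date(2016, 9, 22)))  # 353
print(days_relative_to_m(date(2014, 9, 22)))  # -378
\end{verbatim}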
\subsubsection{Assumption for Old Device Analyzer Data}
Some data in Device Analyzer is old, and the apps used on devices in those datasets will be older versions than the versions we have downloaded and statically analysed. The apps could have since been updated to use different permissions and libraries. In performing our analysis, we treat old devices as if they were running the versions of the apps that we analysed. Given that apps are set to auto-update by default in the Google Play Store, we consider it a reasonable assumption that if the devices still had these apps installed they would be running the latest versions of said apps. Of course this is not always the case, but in the worst case what our assumption does is give an idea of the scale of potential ILC on devices that have the latest versions of those apps installed.
\section{Intra-Library Collusion (ILC)}
\label{section-ilc}
To make measurements of ILC meaningful, it is first important to quantify the number of distinct libraries detected within apps on the devices in our dataset. Within our dataset, each device had an average of~23.6 detectable libraries within the popular apps that were installed on those devices.
\begin{table}[!t]
\centering
\caption{Most popular libraries detected within apps with more than one million downloads. Note that we omit libraries detected in less than 1\% of apps.}
\label{table-most-popular-libraries}
\begin{tabular}{p{1.75in}x{1.3in}}
\hline
Library & \% of apps \\ \hline
com/facebook & 11.9 \\ \hline
com/google/android/gms/analytics & 9.8 \\ \hline
com/flurry & 6.3 \\ \hline
com/chartboost/sdk & 5.9 \\ \hline
com/unity3d & 5.2 \\ \hline
com/applovin & 3.5 \\ \hline
com/mopub & 3.1 \\ \hline
com/inmobi & 3.0 \\ \hline
com/google/ads & 3.0 \\ \hline
com/google/android/gcm & 2.7 \\ \hline
com/tapjoy & 2.4 \\ \hline
org/cocos2d & 2.4 \\ \hline
com/amazon & 2.0 \\ \hline
com/millennialmedia & 1.6 \\ \hline
org/apache/commons & 1.4 \\ \hline
com/heyzap & 1.4 \\ \hline
com/nostra13/universalimageloader & 1.3 \\ \hline
com/adobe/air & 1.0 \\ \hline
\end{tabular}
\end{table}
The Top~18 most popular libraries (i.e., libraries observed in more than~1\% of apps that were studied) within apps in our dataset are shown in Table~\ref{table-most-popular-libraries}. The most popular libraries included Facebook, Google Analytics, Flurry, and Chartboost. Popular ad libraries such as InMobi, MoPub, Millennial Media, Heyzap and TapJoy were also seen in the Top~18. Several utility libraries were also popular, providing functionality such as loading/caching images (\texttt{com/nostra13/universalimageloader}) and rendering graphics (\texttt{com/unity3d} and \texttt{org/cocos2d}).
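The prevalence figures in Table~\ref{table-most-popular-libraries} amount to a simple tally over per-app library detections. A minimal sketch, assuming a hypothetical mapping from app ids to detected library prefixes:
\begin{verbatim}
from collections import Counter

def library_prevalence(app_libs, threshold=0.01):
    # app_libs: hypothetical dict, app id -> set of library prefixes.
    counts = Counter(lib for libs in app_libs.values() for lib in libs)
    n = len(app_libs)
    # Keep libraries detected in at least `threshold` of apps.
    return {lib: round(100.0 * c / n, 1)
            for lib, c in counts.items() if c / n >= threshold}
\end{verbatim}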
\subsection{Which Libraries may Benefit from ILC}
Fig.~\ref{fig-which-libraries-empowered} shows which libraries are potentially able to benefit from ILC. Each library's share was calculated as the number of instances in which that library could benefit from ILC, divided by the total number of such instances across all libraries. In most cases, it was \texttt{com/facebook} that was potentially able to benefit from ILC, with 31.3\%. Other libraries with the potential to benefit from ILC were \texttt{com/mopub} (21.8\%), \texttt{com/flurry} (14.0\%), \texttt{com/amazon} (10.8\%), and \texttt{com/inmobi} (8.4\%).
Worryingly, the Top~5 libraries that are potentially able to benefit from ILC include MoPub, Flurry Analytics and InMobi; known advertising/analytics providers. Note that this observation is not an indictment against any of the libraries mentioned. Rather, it shows the extent to which ILC could be leveraged in the real-world by these libraries if they had the desire to do so. Given the fierce competition between ad libraries, it is conceivable that wily ad networks would exploit ILC by aggregating user data on their servers, maximising profits while evading detection.
\begin{figure}[!t]
\centering
\includegraphics[width=3.25in]{empowered-libraries.pdf}
\caption{Libraries that are potentially able to benefit from ILC. For clarity, libraries appearing less than 0.5\% of the time are omitted.}
\label{fig-which-libraries-empowered}
\end{figure}
\subsection{How Libraries may Benefit from ILC}
To fully understand the potential risk to users, we measured the number of distinct libraries per device that had the potential to benefit from ILC. These results are given in Table~\ref{table-num-libraries-ilc}. Approximately two in five devices (42.4\%) are not susceptible to an ILC attack. We note, however, that a device can go from not being susceptible to being susceptible through the installation of a single app. On the other hand, 57.6\% of devices had one or more libraries that are potentially able to benefit from ILC. In fact, one in five (20.4\%) devices in our dataset had three or more libraries that would be able to benefit from ILC. This is equivalent to approximately~6,000 devices in our dataset, but translated to the real-world this would amount to hundreds of millions of devices. At this scale, even a slight improvement in ad targeting gained by leveraging ILC would reap substantial revenue for ad networks having the requisite guile.
\begin{table}[]
\centering
\caption{Number of libraries per device that had increased access to permissions.}
\label{table-num-libraries-ilc}
\begin{tabular}{x{1.55in}x{1.5in}}
\hline
Number of Libraries & \% of devices \\ \hline
0 & 42.4 \\ \hline
1 & 20.7 \\ \hline
2 & 16.5 \\ \hline
3 & 10.5 \\ \hline
4 & 5.8 \\ \hline
5+ & 4.1 \\ \hline
\end{tabular}
\end{table}
In addition to understanding the number of libraries able to benefit from ILC, it is also important to quantify what benefit they would receive by doing so. To this end, we examined the number of additional permissions that a library leveraging ILC would be able to obtain. This result is given in Table~\ref{table-num-permissions-ilc}. To measure the increase in permissions, we took the difference between the total number of permissions obtained using ILC and the maximum number of permissions the library could access from any one app on the device. In most cases (69.2\%), libraries would be able to access one additional permission if they leveraged ILC. The percentage of devices monotonically decreased as the number of permissions increased. Worryingly, libraries would obtain access to three or more permissions on 9.6\% of devices if they leveraged ILC. While 9.6\% may seem a small share of devices, we remind the reader that this would translate to hundreds of millions of devices in the real world.
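This gain is a small set computation: the union of the permissions the library can use across its host apps, minus the best single-app grant. A minimal sketch, with hypothetical per-app sets that have already been intersected with the permissions the library's code can exercise:
\begin{verbatim}
def ilc_permission_gain(usable_perm_sets):
    # usable_perm_sets: one set per host app on the device.
    union = set().union(*usable_perm_sets)
    best_single = max(len(s) for s in usable_perm_sets)
    return len(union) - best_single

# Per-app access {A,B}, {A,C}, {A,F}: a union of 4 permissions vs.
# a best single-app grant of 2, i.e. two permissions gained via ILC.
print(ilc_permission_gain([{"A", "B"}, {"A", "C"}, {"A", "F"}]))  # 2
\end{verbatim}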
\begin{table}[]
\centering
\caption{Number of additional permissions a library had access to beyond the single-app maximum.}
\label{table-num-permissions-ilc}
\begin{tabular}{x{1.55in}x{1.5in}}
\hline
Number of Permissions & \% of cases \\ \hline
1 & 69.2 \\ \hline
2 & 21.2 \\ \hline
3 & 5.9 \\ \hline
4 & 2.8 \\ \hline
5+ & 0.9 \\ \hline
\end{tabular}
\end{table}
\section{How the Potential for ILC has Evolved in Two Years}
\label{section-longitudinal}
Taylor and Martinovic~\cite{Taylor:2017:UUI:3052973.3052990} conducted the first systematic large-scale measurement of how permission usage in apps has increased over time. They also quantified the extent to which embedded libraries benefited from this permission increase in a phenomenon they call \textit{library empowerment}. Along similar lines, we examine how the potential for ILC has changed over time as a result of libraries and apps using more permissions as they are updated.
To perform this measurement, we used a freely available dataset~\cite{archiveplaydrone} of historical versions of apps compiled using the PlayDrone tool~\cite{Viennot:2014:MSG:2591971.2592003}. The historical versions of apps used were the versions as they were available in the Google Play Store in October~2014. By measuring the potential for ILC in these versions of apps as we did in Section~\ref{section-ilc}, we can see how the potential for ILC has changed over an approximately two-and-a-half year period.
Not all~14,976 apps in our current dataset were available in the historic dataset of apps, since some apps currently in our dataset did not exist in October~2014. Overall, we were able to obtain~11,836 historic apps. Due to technical shortcomings of the decompilers used, we were able to obtain library permission usage information for~11,821 of these apps. The difference in the number of apps in the old and new datasets may skew the results of our longitudinal study. To prevent such skew, we took the intersection of the sets of apps that were available in the old and new datasets. In the end, we were able to do a longitudinal study of how the potential for ILC attacks on devices changed for~11,774 apps (with more than one million downloads) over a two-and-a-half year period.
Note that increases in the potential for ILC are not only caused by the apps themselves using more permissions. Indeed, the libraries embedded within apps may have now been updated to introduce additional code that makes use of permission-protected APIs. Additionally, wider adoption of particular libraries by app developers may cause substantial changes in the potential for ILC, especially if these libraries use many permissions.
\subsection{Libraries that May Exploit ILC}
Fig.~\ref{fig-longitudinal-ilc} looks at the longitudinal changes in the libraries that are able to exploit ILC. We use \texttt{NEW} to refer to results generated from analysing the new versions of apps, and \texttt{OLD} refers to results obtained by analysing historical versions of apps. The Facebook library had the largest increase, going from 2.7\% to 31.5\%. The Flurry library had the largest decrease going from 37.9\% to 12.9\%.
We manually investigated the four libraries that had the most significant changes over the period to understand why this was the case. These libraries were \texttt{com/facebook}, \texttt{com/mopub}, \texttt{com/flurry} and \texttt{com/inmobi}. The increase in prevalence for \texttt{com/facebook} came about from increased numbers of apps using the library, as well as the fact that the permissions used by the library increased. For \texttt{com/mopub}, the reason was the same: more apps started using the library and its permission usage also increased. On the side of decreases over time, \texttt{com/flurry} and \texttt{com/inmobi} declined because fewer apps used them; for both libraries, no change in the number of permissions used was detected. These changes demonstrate the extent to which the possibility of ILC across devices can change simply because libraries start using one or more new permissions and/or because they become more popular.
\begin{figure}[!t]
\centering
\includegraphics[width=3.25in]{longitudinal-ilc.pdf}
\caption{Longitudinal look at changes in the libraries that are able to benefit from ILC. For clarity, libraries appearing less than 0.5\% of the time are omitted.}
\label{fig-longitudinal-ilc}
\end{figure}
\subsection{Changes in Potential Benefit from ILC}
Table~\ref{table-num-libraries-ilc-longitudinal} summarises our longitudinal analysis of the number of libraries per device that were able to exploit ILC. Worryingly, there was a 19.7\% relative decrease in the fraction of devices on which no library could benefit from ILC. This suggests that the evolution of library and permission usage over time is facilitating increases in the potential to exploit ILC. There was also a 13.8\% relative decrease in the fraction of devices where exactly one library was able to benefit from ILC. The fraction of devices where two or more libraries were able to benefit from ILC went from 21.5\% to 35.5\%. Thus, not only is the potential for ILC increasing, but the consequences of the attack, in terms of the number of libraries that can benefit, are increasing as well.
\begin{table}[]
\centering
\caption{Longitudinal look at the number of libraries per device that had increased access to permissions.}
\label{table-num-libraries-ilc-longitudinal}
\begin{tabular}{x{1.1in}x{0.55in}x{0.55in}r}
\hline
Number of Libraries & OLD \% & NEW \% & \% Change \\ \hline
0 & 53.8 & 43.2 & -19.7\% \\ \hline
1 & 24.7 & 21.3 & -13.8\% \\ \hline
2 & 12.5 & 16.4 & +31.2\% \\ \hline
3 & 5.6 & 10.1 & +80.4\% \\ \hline
4 & 2.1 & 5.3 & +152.4\% \\ \hline
5+ & 1.3 & 3.7 & +184.6\% \\ \hline
\end{tabular}
\end{table}
We further analysed the number of additional permissions that libraries would be able to exploit if they leveraged ILC. The results of this analysis are shown in Table~\ref{table-num-permissions-ilc-longitudinal}. The cases where libraries were able to leverage one additional permission fell from 86.5\% to 68.5\%. Worryingly, the cases where libraries could access two or more new permissions increased from 13.5\% to 31.5\%, a 133\% increase. Once again, it was the more dangerous case (an increase of~2+ permissions) that grew, while the less dangerous case (an increase of 1~permission) shrank.
\begin{table}[]
\centering
\caption{Longitudinal look at the number of additional permissions a library had access to beyond the single-app maximum.}
\label{table-num-permissions-ilc-longitudinal}
\begin{tabular}{x{1.2in}x{0.5in}x{0.5in}r}
\hline
Number of Permissions & OLD \% & NEW \% & \% Change \\ \hline
1 & 86.5\% & 68.5\% & -20.8\% \\ \hline
2+ & 13.5\% & 31.5\% & +133.3\% \\ \hline
\end{tabular}
\end{table}
\section{How Often is Sensitive Data Sent to Ad Library Servers}
\label{section-frequency-leaks}
We now turn our attention to a case study on ad libraries. This is because ad libraries have financial incentive to exploit ILC and from our measurements, several ad libraries were among the top libraries that were able to benefit from ILC. Manual decompilation and analysis of the binary \texttt{JAR} files of several ad libraries revealed that sensitive data such as location data and nearby Wi-Fi networks are routinely sent with each ad request.
For this reason, we wanted to approximate a lower bound on how often and to how many different ad networks such sensitive personal data is sent. This is useful for understanding the quantity of data that is exfiltrated from a device to ad networks per day. With this estimate, one can obtain a more comprehensive understanding of the extent to which ad libraries already collect information suitable for exploiting ILC.\\
To make the calculations, we needed to understand how many (and how often) apps were run on devices per day. This information was obtained by analysing data on app usage in the Device Analyzer dataset. In what follows, we discuss some practical assumptions that were made in performing our calculations.\\
\noindent{\textbf{Leakage of personal data.} We assume that each ad library sends personal data (if relevant permissions are available) with each ad request. From the sample of ad libraries that we manually analysed, this assumption is correct. The literature~\cite{stevens2012investigating, Grace:2012:UEA:2185448.2185464} also agrees that ad libraries typically send data useful for profiling with each ad request. Throughout our calculations, however, we were careful to deem an ad library as sending sensitive data with ad requests only if:\\}
\begin{enumerate}
\item The ad library contains permission-protected Android API calls.
\item The app embedding the library has declared the relevant permissions that allow the ad library to make these API calls.
\end{enumerate}
\noindent{\textbf{Frequency of leakage.} In reality, apps are used multiple times a day, and many of these apps contain ad libraries which fetch and continuously update ads. Unfortunately, Device Analyzer data is not granular enough to say exactly how many times per day an app is launched or which activity within an app is run. This fine-grained data (were it available) would enable us to precisely determine how often ads were shown, since it is possible to determine whether an ad is embedded within a particular activity of an app. Given the absence of the requisite fine-grained data, we assume that a single ad is loaded, i.e., sensitive data is sent once, per ad library per app per day \textit{if} the app is run by the user. Anecdotal data suggests that this is a gross underestimation of what actually happens. However, in giving this baseline, we obtain an estimate of the lower bound of sensitive data leakage by ad libraries. Future researchers with better estimates of how many ads are served per ad library per app per day will be able to scale our measurements accordingly.}\\
In making our calculations, we only considered ad libraries that had a prevalence of more than 1\% across apps in our dataset. Across the Device Analyzer dataset, devices used an average of 7.4~third-party apps (available in the Google Play Store) per day. Of these apps,~4.7 were in our app dataset (i.e., they had one million downloads or more). From these~4.7 apps, a mean of~2.4 sensitive data leakages were caused by ad libraries per device per day under our assumptions; the per-device calculation is sketched below. That is, approximately 50\% of apps with over one million downloads leak sensitive data per device per day on average. Further analysis of the data reveals that sensitive data was leaked to approximately~1.7 different ad servers per device per day.
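A minimal sketch of this per-device, per-day lower bound follows; the data structures are hypothetical stand-ins, since Device Analyzer only tells us whether an app was run on a given day.
\begin{verbatim}
def daily_leaks(runs_today, app_ad_libs):
    # runs_today: set of app ids launched at least once today.
    # app_ad_libs: hypothetical map, app id -> set of embedded ad
    # libraries that (i) contain permission-protected API calls and
    # (ii) are granted the matching permissions by the host app.
    # One leak is counted per (app, ad library) pair per day.
    leaks = sum(len(app_ad_libs.get(app, ())) for app in runs_today)
    servers = set().union(*(app_ad_libs.get(app, set())
                            for app in runs_today))
    return leaks, len(servers)  # leak count, distinct ad networks
\end{verbatim}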
At the upper end of the spectrum, one device had~132 instances of sensitive data leakages per day. Interestingly, at least one device sent data to all the ad servers that were considered in our measurement. We discuss these observations further and put them into context in Section~\ref{section-discussion-frequency}.
\section{Discussion}
We have measured the potential for ILC to take place and the benefits to be gained by libraries exploiting ILC. We have also shown how the potential for ILC has increased over a two-and-a-half year period. This increase is facilitated by apps using more permissions but also the libraries themselves being updated to use more permission-protected API calls internally.
Prior work, and our own observations, confirm that libraries already send enough data back to their servers to facilitate an ILC attack. This does not mean, however, that any of the libraries we identified actually engage in this practice. Given the nature of ILC and the fact that data aggregation happening on the server-side is opaque, it becomes very difficult to know whether ILC happens in practice. This problem is further complicated by the fact that libraries may have legitimate reasons to explain why particular pieces of personal data need to be transmitted off a device. Once off the device, however, any aggregation that happens would be invisible.
We suspect that the advertising and analytics industry has the most to gain from exploiting ILC. Given the guile shown by ad libraries and ad networks in general, we believe that this may be a very attractive attack, especially considering that it would be hard to prove that it was happening. Given the fierce competition in the advertising and analytics space, any additional signals about users that can be leveraged from data that is already being collected can improve an ad network's targeting potential. Even if this improvement is a small one, when translated to the app ecosystem of millions of apps and billions of devices, ILC has the potential to generate (or is already generating) a windfall for ad networks.
The main catalyst that allows ILC to happen is the failure of the Android permission system to separate the privileges of libraries and their host apps. However, even if this privilege separation were to be implemented in future Android versions (using strategies such as those highlighted in Section~\ref{section-privilege-separation}), it may be the case as with run-time permissions (as shown in Section~\ref{section-runtime-permissions}) that there is limited incentive for app developers to support it. On the contrary, there may be incentive for app developers to \textit{not} support library privilege separation, as it may impact their profits negatively.
\subsection{Addressing the Problem of ILC}
In this section, we discuss technical and non-technical approaches that can be used to mitigate the problem of ILC.
\subsubsection{Technical Approaches}
Prior work, such as those highlighted in Section~\ref{section-privilege-separation}, attempt to separate privileges and thus alleviate the problem of ILC somewhat. However, we now use the example of ad libraries to demonstrate that several concerns will remain if these strategies are used.
If privileges are removed from libraries altogether, then ad libraries will have more difficulty in targeting ads to users. This increases the likelihood that advertisers and ad networks will not be interested in such systems. Additionally, app developers stand to lose revenue as well and thus may be uninterested in implementing privacy-preserving features. Moreover, depending on the privilege separation approach used, user data may be explicitly passed from apps to libraries using data passing APIs first observed by Book and Wallach~\cite{Book:2013:CCS:2516760.2516762} (see Section~\ref{section-data-collection}) and confirmed by us.
While good for user privacy, less privileged ad libraries may actually harm the ecosystem. Much of the effort that goes into developing apps comes from the expectation of the app developers that they will receive a return on their investment when they monetise their apps. Poor app monetisation could serve as a deterrent to new app developers entering the market and thus the end users may ultimately suffer from reduced content.
\subsubsection{Non-technical Approaches}
Major app stores, such as Google Play, could attempt to limit ILC by means of modifying their developer policies. Nation states may also enact and enforce laws that prohibit the aggressive cross-aggregation of data that happens when ILC is exploited. These steps go in the right direction, but violations may be difficult to detect and enforce in practice.
The first problem is detecting that a library employs ILC. The only positive evidence that might be available is that libraries leak personal data off a device. While this is a necessary condition for ILC to happen, it is not sufficient. Moreover, detecting privacy leaks in the first place is challenging. Static and dynamic analysis tools are not perfect and may fail to detect all leaks in apps.
Even with perfect detection of leaks, libraries may have legitimate reasons for sending data off a device, and so merely observing leakage of sensitive user data does not imply guilt in exploiting ILC. Thus the major challenge is born out of the fact that the actual data aggregation during ILC happens on third-party servers. That is, the actual point of maliciousness is where it is not transparent from the outside. Thus, app stores and regulatory bodies have an uphill battle. One promising avenue to infer the exploitation of ILC by third-parties is the use of differential analysis techniques, such as those employed by L{\'e}cuyer et al.~\cite{Lecuyer:2014:XEW:2671225.2671229}.
The second problem is that of enforcement. While apps may be in violation of the terms and conditions of app stores (or indeed even local and national law), there is no unifying framework for enforcement across the app ecosystem. Indeed, there are a number of third-party app stores that provide apps to millions of users, which may not necessarily impose the required privacy policies. Moreover, apps may need to be penalised on a case by case basis, since they may only be in violation of a law if the app is downloaded by a user in a particular country, and there exists proof that specific types of data on the user have been aggregated. The aforementioned challenges make the large-scale mitigation of ILC difficult in practice.
\subsection{Frequency of Data Leakage}
\label{section-discussion-frequency}
The frequency of data leakage by libraries in apps is a source of added concern. In Section~\ref{section-frequency-leaks}, it was observed that the average device studied had~2.4 leakages of sensitive data by ad libraries per day. This quantity of leakages comes from the~7.4 third-party apps used in a day, of which~4.7 have more than one million downloads. Generalising, approximately 50\% of popular apps installed on devices leak sensitive data from these devices to ad networks.
We would like to remind the reader at this point that our assumptions with regard to the frequency of data leakage were very conservative. Indeed, we assumed one sensitive data leak per ad library per app per day, only if:\\
\begin{enumerate}
\item An app containing an ad library was seen to be running on a device in a day.
\item An ad library within the running app has the ability to use one or more permissions \textit{and} the permissions declared by the app allowed it to do so.
\end{enumerate}
Many apps are run by users more than once per day. Moreover, ad libraries also rotate the ads that are shown while an app is being run, adding to the number of ad requests leaking data. Our estimates therefore fail to account for increased data leakage along two dimensions.
Unfortunately, the data provided by Device Analyzer, while rich in many regards, does not allow us to take more accurate measurements. Moreover, we only considered apps with more than one million downloads, thereby ignoring the approximately 36\% of third-party apps run on devices per day that are ``unpopular'', which further reduces our estimate.
For these reasons, we consider our estimates of data leakage per device per day to be very conservative. However, even if these conservative numbers represent what happens in the real-world, it is not reassuring that ad libraries have the capability to send all the sensitive data that they have access to from a device more than twice a day and to almost two different ad networks. We leave more accurate measurements of the frequency of data leakages by ad libraries on real-world devices as an interesting area for future work.
\subsection{Limitations of this Study}
In this section, the limitations of our methodology are highlighted and discussed.
\subsubsection{Library Permission Usage}
Our static analysis approach to determine library permission usage, as detailed in Section~\ref{section-library-permission-usage}, is limited in its ability to handle dead code and dynamic code. This is an inherent limitation of static analysis approaches. If our system identifies permission usage in dead code, it will incorrectly attribute it to the library in question and overstate the library's permission usage. On the other hand, since our system fails to handle dynamic code, it may miss permission usage and thus understate the library's permission usage.
Our anecdotal observations, however, suggest that dead code and dynamic code are not a significant fraction of the code contained within libraries. Moreover, each type of code has the opposite effect on our estimation of library permission usage. For this reason, we believe that our estimates generally give a good representation of the scale of the problem. We leave handling dead code and dynamic code as an interesting area for future work.
\subsubsection{Device Analyzer Data}
The Device Analyzer client app is more likely to be installed by a technical set of users, since it is an academic endeavour and has mostly been promoted indirectly through publications that make use of its data. For this reason, app lists and app usage information in Device Analyzer may not fully reflect that of the average user in the real-world.
However, we suspect that since contributors to Device Analyzer are more technical, our results actually underestimate the results that would be seen if a more representative sample of users was used. This is because we imagine technical users are more savvy and more likely to take additional steps to preserve their privacy when choosing apps and using their devices.
\section{Related Work}
Related work in the area can be divided into several categories: permission usage increases in apps and libraries, data collection by libraries, approaches to mitigating privacy leaks, and sensitive data tracking. In what follows, we discuss the most closely related work in each category.
\subsection{Permission Usage Increases}
Taylor and Martinovic~\cite{Taylor:2017:UUI:3052973.3052990} first measured the extent to which apps were adding permissions over time across the Google Play Store. Before that, there was a wide belief that systematic permission increases were happening but without concrete data. The authors found that apps on a whole had a tendency to add permissions over time, and importantly, that embedded libraries were also benefiting from these permissions. They refer to this phenomenon as ``library empowerment''. Along similar lines, we look at the totality of personal data that libraries would be able to aggregate by leveraging ILC. Additionally, we use a similar longitudinal study to understand how the potential for libraries exploiting ILC has changed over time.
Book et al.~\cite{DBLP:journals/corr/abs-1303-0857} explore the extent to which Android ad libraries expanded their permission usage over time. By looking at app release dates and the ad library versions that the apps were using, they were able to build a chronological map of ad library permission usage. They found increases in permission usage over time as well as observed that several of the new permissions added posed risks to user privacy.
When taken together with the aforementioned work of Taylor and Martinovic, it paints a worrying picture in that apps are using more permissions, while at the same time libraries are becoming more able to exploit these additional permissions to their benefit. Complementary to this, we measure the extent of what invasive libraries would be able to gain, were they to exploit their combined privileges using ILC.
\subsection{Data Collection by Libraries}
\label{section-data-collection}
A number of authors looked at the collection and transmission of personal data by libraries. Grace et al.~\cite{Grace:2012:UEA:2185448.2185464} developed a system called AdRisk, and used it to examine potential privacy risks posed by ad libraries. They found that most of the ad libraries studied collected private information such as a user's location. Worryingly, they found evidence of invasive data collection taking place such as accessing a user's call logs. The authors also found that some ad libraries downloaded and executed code dynamically, leading to security risks on devices.
Stevens et al.~\cite{stevens2012investigating} also looked at user privacy in ad libraries. The authors discovered unsavoury practices being performed by ad libraries, such as the probing of permissions to see which ones were available. The permissions checked for were beyond the required and optional permissions specified by the library documentation. This demonstrates the guile of ad libraries in accessing and trying to access user data; presumably, the ad libraries would then access data guarded by the permissions available to them. The authors validated their suspicion using network traces from a major network provider, confirming that the ad libraries in question leaked private data. Given these observations that ad libraries will go to extreme (and overt) lengths for better user profiling, our work aims to understand what libraries may be doing covertly.
Along similar lines, Book and Wallach~\cite{Book:2013:CCS:2516760.2516762} consider the situation where apps themselves pass user data to ad libraries through internal ad library APIs\footnote{In our manual analysis, we also observed such API calls in ad libraries. Additionally, ad library SDKs also provided documentation on how app developers should use these APIs. We can thus confirm the observations of these authors.}. The authors did a study on 114,000 apps and found that app popularity is correlated with privacy leakage. The authors argue that the marginal increase in revenue gained, when taken across millions of users, seems to incentivise the violation of user privacy. With this motivation from ad networks in mind, we aim to measure the potentially more damaging threat of ILC. We also add our voice to the call for greater privilege separation between apps and their libraries.
\subsection{Privilege Separation}
\label{section-privilege-separation}
A number of authors propose privilege separation strategies for apps and their embedded libraries. Shekhar et al.~\cite{Shekhar:2012:ASS:2362793.2362821} propose AdSplit, an extension to Android that allows apps and ad libraries to run as separate processes with separate UIDs. AdSplit is able to automatically separate libraries from their host apps. It does this by decompiling an app and replacing ad library code with a ``stub library'' supporting the same API. This stub library can then call a separate AdDroid advertisement service. Similar to AdSplit, AFrame achieves process and permission isolation, but also provides display and input isolation~\cite{Zhang:2013:AIA:2523649.2523652}.
Pearce et al.~\cite{Pearce:2012:APS:2414456.2414498} take a different approach to solving the problem. They build support for advertising directly into the Android platform with a service called AdDroid. This eliminates the need for ad networks to provide ad libraries at all. Instead, AdDroid relays the transmission of data and handles displaying ads, obviating the need for an ad library to be resident in app code.
A major drawback of the privilege separation systems described above is the willingness of ad networks to adopt them. To counter this, Liu and Simpson~\cite{liu2016privacy} propose privacy-preserving targeted mobile advertising (PPTMA). Their goal is a system that provides privacy guarantees to users, while at the same time being palatable to ad networks. PPTMA acts as middleware that is positioned between untrusted ad libraries and sensitive user data. PPTMA hooks privacy-sensitive APIs and takes different actions depending on whether an ad network is ``cooperative'' or whether an app is whitelisted, and the like.
Seo et al.~\cite{seo2016flexdroid} took a more general approach and focused on privilege separation between apps and their libraries as a whole, i.e., their focus was not solely ad libraries. They devise a system called FlexDroid, which provides dynamic and fine-grained privilege separation for third-party libraries. FlexDroid is an extension to the Android permission system that allows app developers to specify different sets of permissions for different libraries. It, however, remains unclear whether app developers would be interested in limiting permissions to ad libraries, since it would directly affect their profit margins.
Similar to FlexDroid, Compac provides fine-grained access control at the component level~\cite{Wang:2014:CEC:2557547.2557560}. NativeGuard provides similar protection against libraries written in native code~\cite{Sun:2014:NPA:2627393.2627396}. Finally, Roesner et al.~\cite{180364} explore the secure embedding of interfaces, a technique that would be useful to support ad libraries.
\subsection{Flow and Taint Tracking}
Enck et al.~\cite{Enck:2010:TIT:1924943.1924971} took a very early look at how apps used and transmitted private data using a system they developed called TaintDroid. Their system is an extension to the Android platform that allows the tracking of sensitive data flows through apps. Using TaintDroid, the authors found 68 instances of data leaks across~20 of the~30 apps that they studied. In half of the studied apps, the authors found instances where location data was sent to several advertisement servers without user consent, even in cases where no ad was displayed.
Arzt et al.~\cite{Arzt:2014:FPC:2594291.2594299} propose a system called FlowDroid that does static taint analysis of Android apps. By accurately modelling the Android life cycle, the authors were able to obtain high precision in their tests. Along similar lines, Wei et al.~\cite{Wei:2014:APG:2660267.2660357} propose Amandroid, a static analysis tool for vetting Android apps. Both FlowDroid and Amandroid can be used to statically determine whether data leakage can happen in apps. These tools can assist in quickly identifying apps that may have concerning behaviour. Apps seen to be sending sensitive data off devices may then be further analysed and their libraries subjected to differential analysis~\cite{Lecuyer:2014:XEW:2671225.2671229} to attempt to identify the exploitation of ILC.
\section{Conclusion}
In this paper, we highlighted a novel and dangerous privilege escalation vulnerability called intra-library collusion. ILC is enabled by inherent shortcomings in the Android permission model whereby privileges between apps and their embedded libraries are not separated. If exploited, ILC allows libraries to secretly aggregate multiple sources of sensitive user data by leveraging the permissions that they have been granted within two or more apps. Using app lists from over~30,000 devices, we observed that several popular social, advertising, and analytics libraries are able to exploit ILC. Some~57.6\% of the devices studied had at least one library that could benefit from ILC. These libraries could access two or more additional permissions in~30.8\% of cases.
By doing a longitudinal study, we observed that several libraries increased their ability to exploit ILC. This happened because the libraries themselves started to use additional permissions or they became more popular. The capabilities gained by libraries exploiting ILC were also seen to increase over time. By doing a case study on ad libraries, we showed that ad libraries leak sensitive data from a device up to~2.4 times a day and that the average user has their personal data sent to~1.7 different ad servers per day.
As apps and smart devices become more popular, it is important to ensure that user privacy is protected from those who attempt to profit from it. By highlighting the novel capability of adversaries in the form of ILC, we highlight the potential danger, and add our voice to the call for privilege separation between apps and their libraries. This work takes us a step further in securing user privacy as smartphones and smart devices become ubiquitous and more ingrained in our lives.
\section*{Acknowledgement}
\bibliographystyle{abbrv}
Understanding the ecological implications of infectious disease is one of
a few long-standing problems that remain challenging due to their
inherent complexity$^{\cite{anderson:1991,dickmann:2002}}$.
Pathogen-mediated invasion is one such ecologically important
process, in which an invasive species carrying and transmitting a pathogen invades the
niche of a more vulnerable species.
Apparent competition$^{\cite{hudson:1998,thomas:2005,bonsall:1997,park:1948}}$,
the competitive advantage conferred by a pathogen to a less vulnerable species,
is generally accepted as a major force influencing biodiversity.
Due to the complexities originating from dynamical interactions among
multiple hosts and multiple pathogens, it
has been difficult to single out and to quantitatively measure the
effect of pathogen-mediated competition in nature.
For this reason, pathogen-mediated competition and infectious disease dynamics
in general have been actively studied with theoretical
models$^{\cite{anderson:1991,dickmann:2002,holt:1994,holt:1985,
begon:1992,bowers:1997,greenman:1997}}$.
Theoretical studies of ecological processes generally employ
deterministic or stochastic modeling approaches. In the former case, the
evolution of a population is described by (partial-) differential or difference
equations$^{\cite{murray:1980}}$. In the latter case, the population is modeled as
consisting of discrete entities, and its evolution is represented by transition
probabilities.
The deterministic modeling approach has been favored and widely applied to ecological
processes due to its simplicity and well-established analytic
tools$^{\cite{murray:1980}}$.
The applicability of the deterministic approach is limited in principle to a system with no
fluctuations and no (spatial) correlations, e.g., a system composed of a large number of
particles under rapid mixing.
The stochastic modeling approach is more broadly applicable and more comprehensive,
as the macroscopic equation naturally emerges from a stochastic description of the same
process$^{\cite{vankampen:2001}}$.
While being a more realistic representation of noisy ecology, the stochastic
approach has a downside: most stochastic models are analytically
intractable and stochastic simulation, a popular alternative, is
demanding in terms of computing time. Nonetheless, the stochastic approach
is indispensable when a more thorough understanding of an ecological
process is pursued.
The role of stochastic fluctuations has been increasingly appreciated
in various studies of the spatio-temporal patterns of infectious diseases such as
measles$^{\cite{bjornstad:2001}}$, pertussis$^{\cite{rohany:1999}}$ and
syphilis$^{\cite{grassy:2005}}$.
There has been an escalating interest in elucidating the role of stochastic noise not only
in the studies of infectious disease dynamics but also in other fields
such as stochastic interacting particle systems as model systems for population
biology$^{\cite{liggett:1999}}$, the stochastic Lotka-Volterra
equation$^{\cite{goel:1971}}$,
inherent noise-induced cyclic patterns in a predator-prey model$^{\cite{mckane:2005}}$ and
stochastic gene regulatory
networks$^{\cite{mcadams:1997,hasty:2000,ozbudak:2002,pedraza:2005,
barkai:2001,bialek:2005,vilar:2002}}$.
Here we investigate the effects of noise on pathogen-mediated competition, previously
only studied by deterministic approaches.
In our previous work$^{\cite{joo:2005}}$ we developed an experimental system and a
theoretical framework for studying bacteriophage-mediated competition in
bacterial populations.
The experimental system consisted of two genetically identical bacterial
strains; they differed in that one strain was a carrier of the bacteriophage and
resistant to it while the other strain was susceptible to phage infection. Based on the
{\it in vitro} experimental set-up, we constructed a differential equation model of
phage-mediated competition between the two bacterial strains.
Most model parameters were measured experimentally, and a few unknown parameters were
estimated by matching the time-series
data of the two competing populations to the experiments (See
Fig.~\ref{fig1}). The model predicted, and experimental evidence
confirmed, that the
competitive advantage conferred by the phage depends only on the relative phage pathology
and is independent of other phage and host parameters such as the infection-causing contact
rate, the spontaneous and infection-induced lysis rates, and the phage burst size.
Here we examine if intrinsic noise changes the dynamics of the bacterial populations
interacting through phage-mediated competition, and more specifically if it changes the
validity of the conclusions of the deterministic model.
The phage-bacteria infection system is modeled and analyzed with two probabilistic methods:
(i) a linear Fokker-Planck equation obtained by a systematic expansion of a full
probabilistic model (i.e., a master equation), and (ii) stochastic simulations. Both
probabilistic methods are used to identify the source of noise
and assess its magnitude, through determining the ratio of the standard deviation
to the average population size of each bacterial strain during the infection process.
Finally stochastic simulations show that the conclusions obtained from
the deterministic model are robust against stochastic fluctuations, yet
deviations become large when the phage are more pathological to the invading
bacterial strain.
\begin{figure}[hp]
\begin{center}
\includegraphics[width=10cm]{FigJSP_1.eps}
\caption{\label{fig1} Illustrations of phage-mediated competition
obtained from {\it in vitro} experiments (symbols) and a deterministic model (lines).
The phage infection system consists of two genetically identical
Bordetella bronchiseptica bacteria (Bb) and the bacteriophage BPP-1
($\Phi$)$^{\cite{joo:2005}}$.
A gentamicin marker (Gm) is used to distinguish the susceptible bacterial strain (BbGm)
from the phage-carrying bacterial strain (Bb::$\phi$).
As time elapses, a fraction of BbGm become lysogens (BbGm::$\Phi$) due to
the phage-infection process. Bb::$\Phi$ are represented by open squares and a thick solid line,
BbGm::$\phi$ by open circles and a thin solid line, and the total BbGm
(BbGm+BbGm::$\Phi$) by filled circles and a long-dashed line, respectively.
(a) Lysogens (Bb::$\Phi$) exogenously and endogenously carrying the prophage invade the BbGm
strain susceptible to phage, and
(b) lysogens (Bb::$\Phi$) are protected against the invading
susceptible bacterial strain (BbGm)$^{\cite{joo:2005}}$.
The differential equations were solved with biologically relevant parameter values.
(See section 3.1 and Table 1 for a detailed description.)}
\end{center}
\end{figure}
\section{A MODEL OF PHAGE-MEDIATED COMPETITION IN BACTERIA}
We consider a generalized phage infection system where two
bacterial strains are susceptible to phage
infection, yet with different degrees of susceptibility
and vulnerability to phage. The interactions involved in this
phage-mediated competition between two bacterial strains are
provided diagrammatically in Fig.~\ref{fig2}.
\begin{figure}[hp]
\begin{center}
\includegraphics[width=10cm]{FigJSP_2.eps}
\caption{\label{fig2} Diagrammatic representation of
phage-mediated competition between two bacterial strains
with differential susceptibilities $\kappa_j$ and phage pathogenicities
$P_j$.
The subscript $j \in \{1,2\}$ denotes the type of bacterial strain.
Phage ($\Phi$) are represented by hexagons carrying a thick
segment ($\Phi$ DNA). A susceptible bacterium ($S_j$) is represented
by a rectangle containing an inner circle (bacterial DNA) while a lysogen
($I_j$) is represented by a rectangle containing $\Phi$ DNA integrated
into its bacterial DNA. All bacterial populations grow with an identical
growth rate $r$ while a latent bacterium ($L_j$) is assumed not to divide.
$\delta$ and $\lambda$ represent spontaneous and infection-induced lysis rates,
respectively.
}
\end{center}
\end{figure}
We describe this dynamically interacting system with seven homogeneously
mixed subpopulations: Each bacterial strain can be in one of
susceptible ($S_j$), lysogenic ($I_j$), or latent ($L_j$) states,
and they are in direct contact with bacteriophage ($\Phi$).
All bacteria divide with a constant rate when they are in a log growth phase,
while their growth is limited when in stationary phase.
Thus we assume that the bacterial population grows with a
density-dependent rate $r(\Omega)=a(1-\Omega/\Omega_{max})$
where $\Omega$ is the total bacterial population and $\Omega_{max}$ is the
maximum number of bacteria supported by the nutrient broth environment.
Susceptible bacteria ($S_j$) become infected
through contact with phage at rate $\kappa_j$.
Upon infection the phage can either take a lysogenic pathway or a lytic pathway,
stochastically determining the fate of the infected bacterium$^{\cite{ptashne:1992}}$.
We assume that a fraction $P_j$ of infected bacteria enter a latent state
($L_j$). Thereafter the phage replicate and then lyse the host bacteria
after an incubation period $1/\lambda$, during which the bacteria
do not divide$^{\cite{ptashne:1992}}$. Alternatively the phage
lysogenize a fraction $1-P_j$ of their hosts, which enter
a lysogenic state ($I_j$), and incorporate their genome into the DNA of
the host.
Thus the parameter $P_j$ characterizes the pathogenicity
of the phage, incorporating multiple aspects of phage-host
interactions resulting in damage to host fitness.
The lysogens ($I_j$) carrying the prophage grow, replicating prophage
as part of the host chromosome, and are resistant to phage. Even though
these lysogens are very stable$^{\cite{ptashne:1992}}$ without
external perturbations, spontaneous induction can occur at a low
rate $\delta$, consequently replicating the phage and lysing the
host bacteria. In general, both the number of phage produced
(the phage burst size $\chi$ ) and the phage pathology $P_j$ depend on the
culture conditions$^{\cite{ptashne:1992}}$. The two bacterial strains
differ in susceptibility ($\kappa_j$) and vulnerability ($P_j$)
to phage infection.
When the initial population size of the invading bacterial strain is small,
the stochastic fluctuations of the bacterial population size are expected to be large
and likely to affect the outcome of the invasion process.
A probabilistic model of the phage infection system is able to capture the
effects of intrinsic noise on the population dynamics of bacteria.
Let us define the joint probability density
$P_t(\underline{\eta})$ denoting the probability of the system to
be in a state $\underline{\eta}(t)=(S_1,I_1,L_1,S_2,I_2,L_2,\Phi)$
at time $t$ where $S_j$, $I_j$ and $L_j$ denote the number of bacteria
in the susceptible, lysogenic, or latent states, respectively.
The time evolution of the joint probability is
determined by the transition probability per unit time
$T(\underline{\eta'}|\underline{\eta};t)$ of going from a state
$\underline{\eta}$ to a state $\underline{\eta'}$.
We assume that the transition probabilities do not depend on the history of the previous
states of the system but only on the immediately past state.
There are only a few transitions that are allowed to take place.
For instance, the number of susceptible bacteria increases
from $S_1$ to $S_1+1$ through the division of a single susceptible bacterium
and this process takes place with the transition rate
$T(S_1+1,I_1,L_1,S_2,I_2,L_2,\Phi|S_1,I_1,L_1,S_2,I_2,L_2,\Phi)$=$r(\Omega) S_1$.
The allowed transition rates are
\begin{eqnarray}
\label{transition}
&&T(S_j+1,...|S_j,...;t)= r(\Omega) S_j,
\\ \nonumber
&&T(...,I_j+1,...|...,I_j,...;t)= r(\Omega) I_j,
\\ \nonumber
&&T(S_j-1,I_j+1,...,\Phi-1|S_j,I_j,...,\Phi;t) =\kappa_j (1-P_j) \Phi S_j
\\ \nonumber
&&T(S_j-1,...,L_j+1,\Phi-1|S_j,...,L_j,\Phi;t) = \kappa_j P_j \Phi S_j
\\ \nonumber
&&T(...,I_j-1,...,\Phi+\chi|...,I_j,...,\Phi;t)= \delta I_j
\\ \nonumber
&&T(...,L_j-1,...,\Phi+\chi|...,L_j,...,\Phi;t)= \lambda L_j
\end{eqnarray}
where $\Omega(t)=\sum_j (S_j(t)+I_j(t)+L_j(t))$.
The second line represents the division of a lysogen;
the third line describes an infection process in which the phage takes the lysogenic pathway,
while the fourth line denotes an infection process in which the phage takes the lytic pathway.
The last two transitions are the spontaneous and infection-induced lysis
processes, respectively.
Bacterial subpopulations that are unchanged during a particular transition are denoted by
``$...$''.
The parameters $a$, $\kappa_j$, $\delta$ and $\lambda$ in the transition rates of
Eq.~(\ref{transition}) represent the inverse of the expected waiting
time between events in an exponential event distribution and they are equivalent to
the reaction rates given in Fig.~\ref{fig2}.
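For concreteness, the twelve allowed transitions can be encoded as a table of propensity functions together with the stoichiometric change each reaction produces. The following Python sketch is ours rather than code from the original study; the state ordering $(S_1,I_1,L_1,S_2,I_2,L_2,\Phi)$ and all parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np

# State vector ordering: [S1, I1, L1, S2, I2, L2, Phi]
# Illustrative parameter values only; see Table 1 for measured ranges.
a, delta, lam, chi = 0.54, 0.054, 0.81, 80
kappa = [0.0, 0.00054]       # infection-causing contact rates kappa_1, kappa_2
P     = [0.0, 0.98]          # phage pathologies P_1, P_2
omega_max = 1e9              # carrying capacity in r(Omega)

def growth_rate(state):
    """Density-dependent growth rate r(Omega) = a (1 - Omega/Omega_max)."""
    omega = state[:6].sum()  # total bacterial population
    return a * (1.0 - omega / omega_max)

def propensities(state):
    """Transition rates defined in the text, one entry per reaction."""
    S = [state[0], state[3]]; I = [state[1], state[4]]; L = [state[2], state[5]]
    Phi, r = state[6], growth_rate(state)
    rates = []
    for j in range(2):
        rates += [r * S[j],                            # division of a susceptible
                  r * I[j],                            # division of a lysogen
                  kappa[j]*(1 - P[j])*Phi*S[j],        # infection, lysogenic pathway
                  kappa[j]*P[j]*Phi*S[j],              # infection, lytic pathway
                  delta * I[j],                        # spontaneous lysis of a lysogen
                  lam * L[j]]                          # infection-induced lysis
    return np.array(rates)

def stoichiometry():
    """State change produced by each of the 12 reactions above."""
    V = np.zeros((12, 7), dtype=int)
    for j, (s, i, l) in enumerate([(0, 1, 2), (3, 4, 5)]):
        base = 6 * j
        V[base + 0, s] = +1                   # S_j -> S_j + 1
        V[base + 1, i] = +1                   # I_j -> I_j + 1
        V[base + 2, [s, i, 6]] = (-1, +1, -1) # S_j + Phi -> I_j
        V[base + 3, [s, l, 6]] = (-1, +1, -1) # S_j + Phi -> L_j
        V[base + 4, [i, 6]] = (-1, +chi)      # I_j -> chi Phi
        V[base + 5, [l, 6]] = (-1, +chi)      # L_j -> chi Phi
    return V
\end{verbatim}
The same table can be reused both to integrate the macroscopic equations derived below and to drive the stochastic simulations of section 4.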
The stochastic process specified by the
transition rates in Eq.~(\ref{transition}) is Markovian, thus we can immediately
write down a master equation governing the time evolution of the
joint probability $P(\underline{\eta})$.
The rate of change of the joint probability $P_t(\underline{\eta})$ is the
sum of transition rates from all other states $\underline{\eta'}$ to
the state $\underline{\eta}$, minus the sum of transition rates from
the state $\underline{\eta}$ to all other states
$\underline{\eta'}$:
\begin{eqnarray}
\label{master}
\frac{d P_t(\underline{\eta})}{d t} &=& \sum_j \Bigl \{
(E^{-1}_{S_j}-1)[T(S_j+1,...|S_j,...;t)P_t(\underline{\eta})]
\\ \nonumber
&+&
(E^{-1}_{I_j}-1)[T(...,I_j+1,...|...,I_j,...;t)P_t(\underline{\eta})]
\\ \nonumber
&+&
(E^{+1}_{\Phi}E^{+1}_{S_j}E^{-1}_{I_j}-1)[T(S_j-1,I_j+1,...,\Phi-1|S_j,I_j,...,\Phi;t)
P_t(\underline{\eta})]
\\ \nonumber
&+&(E^{+1}_{\Phi}E^{+1}_{S_j}E^{-1}_{L_j}-1)[T(S_j-1,...,L_j+1,\Phi-1|S_j,...,L_j,\Phi;t)
P_t(\underline{\eta})]
\\ \nonumber
&+& (E^{+1}_{I_j}E^{-\chi}_{\Phi}-1)[T(...,I_j-1,...,\Phi+\chi|...,I_j,...,\Phi;t)
P_t(\underline{\eta})]
\\ \nonumber
&+& (E^{+1}_{L_j}E^{-\chi}_{\Phi}-1)[T(...,L_j-1,...,\Phi+\chi|...,L_j,...,\Phi;t)
P_t(\underline{\eta})]
\Bigr \}
\nonumber
\end{eqnarray}
where $E^{\pm1}_{\alpha}$ is a step operator which acts on
any function of $\alpha$ according to
$E^{\pm1}_{\alpha}f(\alpha,...)=f(\alpha \pm 1,...)$.
The master equation in Eq.~(\ref{master}) is nonlinear and
analytically intractable. There are two alternative ways to seek a
partial understanding of this stochastic system: a stochastic simulation
and a linear Fokker-Plank equation obtained from a systematic
approximation of the master equation.
Stochastic simulation is an exact method for studying the
corresponding stochastic system. However, stochastic simulations of
an infection process in a large system are very demanding
in terms of computing time, even today.
Moreover, simulation studies can explore only a relatively small fraction of
a multi-dimensional parameter space, and thus provide neither
a complete picture nor intuitive insight into the infection
process.
The linear Fokker-Planck equation is only an approximation of the full
stochastic process; it describes the time-evolution of the probability density,
whose peak is moving according to macroscopic equations. In cases
where the macroscopic equations are nonlinear, one needs to go
beyond a Gaussian approximation of fluctuations, i.e., the higher moments
of the fluctuations should be considered. In cases when an analytic solution
is possible, the linear Fokker-Planck equation method can overcome most
disadvantages of the stochastic simulations. Unfortunately such
an analytic solution could not be obtained for the master equation in Eq.~(\ref{master}).
In the following sections we present a systematic expansion method for
the master equation, which yields both the macroscopic equations and the linear Fokker-Planck
equation, and then an algorithm for stochastic simulations.
\section{SYSTEMATIC EXPANSION OF THE MASTER EQUATION}
In this section we will apply van Kampen's elegant method$^{\cite{vankampen:2001}}$
to a nonlinear stochastic process, in a system whose size increases exponentially
in time.
This method not only allows us to obtain a deterministic version of the stochastic
model in Eq.~(\ref{master}) but also gives a method of finding stochastic corrections to the
deterministic result. We choose an initial system size
$\Omega_o=\sum_j (S_j(0)+I_j(0)+L_j(0))+\Phi(0)$
and expand the master equation in order of $\Omega^{-1/2}_o$.
We do not attempt to prove the validity of our application
of van Kampen's $\Omega_o$-expansion method to this nonlinear stochastic system; a
required condition for valid use of the $\Omega_o$-expansion scheme, namely the
stability of fixed points, is not satisfied because the system size increases indefinitely
and there is no stationary point.
However, as shown in later sections, the linear Fokker-Plank equation
obtained from this $\Omega_o$-expansion
method does provide very reliable results, comparable to the results of
stochastic simulations.
In the limit of infinitely large $\Omega_o$, the variables ($S_j$,
$I_j$, $L_j$, $\Phi$) become deterministic and equal to ($\Omega_o s_j,
\Omega_o i_j, \Omega_o l_j, \Omega_o \phi$), where ($s_j,i_j,l_j,\phi$)
are normalized quantities, e.g., $s_j=S_j/\Omega_o$. In this infinitely large size limit
the joint probability $P_t(\underline{\eta})$ will be a delta function
with a peak at ($\Omega_o s_j, \Omega_o i_j, \Omega_o l_j, \Omega_o \phi$).
For large but finite $\Omega_o$, we would expect $P(\underline{\eta})$ to have a
finite width of order $\Omega^{1/2}_o$.
The variables ($S_j$,
$I_j$, $L_j$, $\Phi$) are once again stochastic and we
introduce new stochastic variables
($\xi_{S_j},\xi_{I_j}, \xi_{L_j},\xi_{\Phi}$):
$S_j=\Omega_o s_j+\Omega^{\frac{1}{2}}_o \xi_{S_j}$,
$I_j=\Omega_o i_j+\Omega^{\frac{1}{2}}_o \xi_{I_j}$,
$L_j=\Omega_o l_j+\Omega^{\frac{1}{2}}_o \xi_{L_j}$,
$\Phi=\Omega_o \phi+\Omega^{\frac{1}{2}}_o \xi_{\Phi}$.
These new stochastic variables represent inherent noise and contribute to
deviation of the system from the macroscopic dynamical behavior.
The new joint probability density function $\Pi_t$ is defined by
$P_t(\underline{\eta})=\Pi_t(\underline{\xi})$ where
$\underline{\xi}=(\xi_{S_1},\xi_{I_1},\xi_{L_1},\xi_{S_2},\xi_{I_2},\xi_{L_2},\xi_{\Phi})$.
Let us define the step operators $E^{\pm1}_{\alpha}$, which change $\alpha$ into $\alpha \pm 1$
and therefore $\xi_\alpha$ into $\xi_\alpha \pm \Omega^{-1/2}_o$, so that in the new variables
\begin{equation}
E^{\pm 1}_{\alpha} =1\pm \Omega^{-\frac{1}{2}}_o\frac{\partial}{\partial \xi_{\alpha}}
+\frac{\Omega^{-1}_o}{2}\frac{\partial^{2}}{\partial \xi_{\alpha}^{2}} \pm ...
\end{equation}
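As a small symbolic sanity check on this expansion, the action of the step operator can be reproduced with a computer algebra system; in the sketch below (our own, with $\epsilon$ playing the role of $\Omega^{-1/2}_o$) the series expansion of $f(\xi+\epsilon)$ recovers the operator identity above.
\begin{verbatim}
import sympy as sp

xi, eps = sp.symbols('xi epsilon')   # eps stands in for Omega_o^(-1/2)
f = sp.Function('f')

# E^{+1} f(xi) = f(xi + eps); expanding around eps = 0 gives
# (1 + eps d/dxi + (eps^2/2) d^2/dxi^2 + ...) f(xi)
print(sp.series(f(xi + eps), eps, 0, 3))
\end{verbatim}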
The time derivative of the joint probability $P_t(\underline{\eta})$
in Eq.~(\ref{master}) is taken at a fixed state
$\underline{\eta}=(S_1,I_1,L_1,S_2,I_2,L_2,\Phi)$.
Taking the time derivative of both sides of
$A=\Omega_o \alpha + \Omega^{1/2}_o \xi_{\alpha}$ at fixed $A$ then yields
$d \xi_{\alpha}/dt=-\Omega^{1/2}_o\, d \alpha/dt$, where $A$ stands for any of
$S_1$, $I_1$, $L_1$, $S_2$, $I_2$, $L_2$, or $\Phi$, and $\alpha$ denotes the
corresponding macroscopic density. Hence,
\begin{equation}
\frac{d P(\underline{\eta};t)}{dt}= \frac{\partial
\Pi(\underline{\xi};t)}{\partial t}
-\sum_{\alpha=S_1,S_2,I_1,I_2,L_1,L_2,\Phi} \Bigl \{
\Omega^{\frac{1}{2}}_o \frac{d \alpha}{dt} \frac{\partial
\Pi(\underline{\xi};t)}{\partial \xi_{\alpha}} \Bigr \}.
\end{equation}
We shall assume that the joint probability density is a delta function at
the initial condition $\underline{\eta_o}$, i.e.,
$P_0(\underline{\eta})=\delta_{\underline{\eta},\underline{\eta_o}}$.
The full expression of the master equation in the new variables is
shown in appendix A. Here we collect several
powers of $\Omega_o$. In section 3.1 we show that macroscopic
equations emerge from the terms of order $\Omega^{1/2}_o$ and that
a so-called invasion criterion, defined as the condition
for which one bacterial population outcompetes the other, can
be obtained from these macroscopic equations.
In section 3.2 we show that the terms of order $\Omega^{0}_o$
give a linear Fokker-Planck equation whose time-dependent coefficients
are determined by the macroscopic equations.
\subsection{Emergence of the Macroscopic Equations}
There are a few terms of order $\Omega^{1/2}_o$ in the master
equation in the new variables as shown in appendix A,
which appear to make a large $\Omega_o$-expansion of the master
equation improper.
However, those terms of order $\Omega^{1/2}_o$ cancel
if the following equations are satisfied:
\begin{eqnarray}
\label{macroscopic} \frac{d s_j}{dt}&=& r(\Omega) s_{j}-\kappa_j
\Omega_o \phi s_j
\\ \nonumber
\frac{d i_j}{dt}&=& (1-P_j)\kappa_j \Omega_o \phi s_j +
(r(\Omega)-\delta) i_j
\\ \nonumber
\frac{d l_j}{dt}&=& P_j \kappa_j \Omega_o \phi s_j -\lambda l_j
\\ \nonumber
\frac{d \phi}{dt}&=& \chi \sum_j (\delta i_j + \lambda
l_j)-\sum_j \kappa_j \Omega_o \phi s_j \nonumber
\end{eqnarray}
Eqs.~(\ref{macroscopic}) are identical to the deterministic equations
of the corresponding stochastic model in
the limit of infinitely large $\Omega_o$, i.e., in the limit of
negligible fluctuations.
These equations allow for the derivation of the invasion criterion,
defined as the choice of the system parameters in Table 1 that makes one invading bacterial
strain dominant in number over the other strain.
Suppose that an initial condition of Eq.~(\ref{macroscopic})
is $s_1(0)>0$, $s_2(0)>0$, $i_1(0)>0$,
$\phi(0) \ge 0$, and $i_2(0)=l_1(0)=l_2(0)=0$.
(a) In the case of $\phi(0)=\delta=0$, there is no phage-mediated
interaction between bacteria and the ratio of
$s_1(t):s_2(t):i_1(t)$ remains unchanged for $t \ge 0$.
(b) However when either ($\phi(0)=0$ and $\delta>0$) or $\phi(0)>0$,
the above ratio changes in time due to phage-mediated interactions.
Although these nonlinear coupled equations cannot, in general, be solved in closed form,
we obtained an analytic solution of the macroscopic
Eq.~(\ref{macroscopic}) in the limit of a fast infection process,
i.e., $\kappa_j \Omega_o s_2(0)/a \gg 1$ and $\lambda/a \gg 1$,
by choosing appropriate time scales and using
regular perturbation theory$^{\cite{murray:1980}}$.
(See Ref.~\cite{joo:2005} for a detailed description in a simpler system.)
We found a simple relationship between the ratios of
the two total bacterial populations:
\begin{equation}
\label{invasion}
r_{12}(t)=r_{12}(0)(1-P_1)/(1-P_2)
\end{equation}
where $r_{12}(t) \equiv \frac{s_1(t)+i_1(t)+l_1(t)}{s_2(t)+i_2(t)+l_2(t)}$
for a sufficiently long time $t$.
Thus the final ratio $r_{12}(t)$ is determined solely by three quantities,
the initial ratio, $r_{12}(0)$, and the two phage pathologies, and
is independent of other kinetic parameters such as
the infection-causing contact rate, the
spontaneous and infection-induced lysis rates, and the phage burst size.
The invasion criterion, the condition for which bacterial strain 1 outnumbers
bacterial strain 2, is simply $r_{12}(t)>1$.
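As a quick numerical illustration of Eq.~(\ref{invasion}), the macroscopic equations can be integrated directly. The sketch below is ours, not the code behind Fig.~\ref{fig3}; it assumes $r=a$ (log-phase growth without resource competition) and illustrative parameter values chosen so that the infection process is fast.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values; fast infection: kappa_j*Omega0*s2(0)/a, lam/a >> 1
a, delta, lam, chi = 0.54, 0.054, 5.4, 80
kappa, P = [0.054, 0.054], [0.2, 0.98]
Omega0 = 112.0                     # initial system size S+I+L+Phi

def rhs(t, y):
    """Right-hand side of the macroscopic equations with r = a."""
    s1, i1, l1, s2, i2, l2, phi = y
    ds1 = a*s1 - kappa[0]*Omega0*phi*s1
    di1 = (1 - P[0])*kappa[0]*Omega0*phi*s1 + (a - delta)*i1
    dl1 = P[0]*kappa[0]*Omega0*phi*s1 - lam*l1
    ds2 = a*s2 - kappa[1]*Omega0*phi*s2
    di2 = (1 - P[1])*kappa[1]*Omega0*phi*s2 + (a - delta)*i2
    dl2 = P[1]*kappa[1]*Omega0*phi*s2 - lam*l2
    dphi = chi*(delta*(i1 + i2) + lam*(l1 + l2)) \
           - (kappa[0]*s1 + kappa[1]*s2)*Omega0*phi
    return [ds1, di1, dl1, ds2, di2, dl2, dphi]

y0 = np.array([10, 1, 0, 100, 0, 0, 1]) / Omega0   # normalized densities
sol = solve_ivp(rhs, (0.0, 24.0), y0, method="LSODA", rtol=1e-10, atol=1e-12)

def r12(y):
    """Ratio of the total populations of the two strains."""
    return (y[0] + y[1] + y[2]) / (y[3] + y[4] + y[5])

# Eq. (invasion) predicts r12(T) ~ r12(0)*(1 - P1)/(1 - P2)
print(r12(sol.y[:, -1]), r12(sol.y[:, 0]) * (1 - P[0]) / (1 - P[1]))
\end{verbatim}
The two printed numbers should agree approximately for these values; the agreement degrades as $\kappa_j \Omega_o s_2(0)/a$ or $\lambda/a$ is reduced, consistent with the deviations shown in Fig.~\ref{fig3}.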
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{FigJSP_3.eps}
\caption{\label{fig3} Numerical verification of the invasion criterion of
Eq.~(\ref{invasion}) for a generalized deterministic infection system
where both bacterial strains are susceptible to phage infection.
The ratio $r_{12}(0)/r_{12}(T)$ was numerically evaluated by
solving Eq.~(\ref{macroscopic}) with 2000 sets of parameters chosen
uniformly in the intervals $0<P_1,P_2<1$ for the phage pathologies,
$1/\min\{P_1,P_2\}<\chi<100$ for the phage burst size,
$0<\delta/a<0.5$ for the spontaneous induction rate,
$10^{-1}s_2(0)<s_1(0)<10 s_2(0)$ and $0<i_1(0),\phi(0)<10^{-2}
s_2(0)$ for the initial concentrations of bacterial strains and phage.
The time $T$ is chosen to be a sufficiently long time.
Filled circles represent the data from 1000
sets of parameters with relatively large $\Omega_o \kappa_j s_j/a$
and $\lambda/a$ (e.g., $0.1<\Omega_o \kappa_j s_j/a, \lambda/a <10$).
Open circles are from another 1000 sets of parameters with small
$\Omega_o \kappa_j s_j/a$ and $\lambda/a$ (e.g., $0<\Omega_o \kappa_j
s_j/a ,\lambda/a<0.1$). }
\end{center}
\end{figure}
To validate the invasion criterion of Eq.~(\ref{invasion})
in the range of relatively small values of $\kappa_{j} \Omega_o s_2(0)/a$ and $\lambda/a$,
we solved Eq.~(\ref{macroscopic}) numerically with 2000 sets of parameters selected randomly
from the biologically relevant intervals.
Fig.~\ref{fig3} shows that the simple relationship in Eq.~(\ref{invasion})
between $r_{12}(0)/r_{12}(t)$ and $(1-P_2)/(1-P_1)$ is robust against parameter
variations. The results deviate from the linear relationship as the phage
pathology on the invading bacterial strain 1 increases relative to that on bacterial strain 2,
i.e., as $(1-P_2)/(1-P_1)\gg 1$, or $P_1 \gg P_2$.
\subsection{Linear Noise Approximation: a Linear Fokker-Planck Equation}
For simplicity we will assume hereafter that all bacteria grow
with a growth rate $r=a$ in a log phase, i.e., there is no resource competition.
Identifying terms of $\Omega^{0}_o$ in the power expansion
of the master equation (see appendix A)
we obtain a linear Fokker-Planck equation (see appendix B).
This approximation is called the linear noise approximation~\cite{vankampen:2001},
and the solution of the linear Fokker-Planck equation in appendix B is a
Gaussian~\cite{vankampen:2001}, which
means that the probability distribution
$\Pi_t(\underline{\xi})$ is completely specified by the first two
moments, $\langle \xi_{\alpha}(t) \rangle$ and $\langle
\xi^{2}_{\alpha}(t) \rangle$, where $\alpha=S_j, I_j, L_j, \Phi$.
Multiplying the Fokker-Planck equation by $\xi_{\alpha}$ and
$\xi_{\alpha}\xi_{\alpha'}$ and integrating over all
$\underline{\xi}$, we find the time evolution of the first and
second moments of the noise, $\langle \xi_\alpha \rangle$ and
$\langle \xi_{\alpha} \xi_{\alpha'} \rangle$ (see appendix C).
The solutions of all first moments
are simple: $\langle \xi_{\alpha}(t) \rangle=0$ for
all $t$, provided that the initial condition is chosen such that
initial fluctuations vanish, i.e., $\langle \xi_{\alpha}(0)
\rangle=0$.
The differential equations governing the time evolution of
the second moments are coupled, and their solutions can only be
attained by means of numerical integrations.
We use the time evolution of the second moments of noise to study
the role of stochastic fluctuations on phage-mediated competition,
and especially to investigate the effects of noise on the invasion criterion.
Let $\delta N_j$ be the deviation of the total population size $N_j$ of the $j$th
bacterial strain from its average value,
i.e., $\delta N_j=N_j-\langle N_j \rangle=\Omega^{1/2}_o(\xi_{S_j}+\xi_{I_j}+\xi_{L_j})$
where $N_j=S_j+I_j+L_j$ and
$\langle N_j \rangle$=$\langle S_j \rangle+\langle I_j \rangle$+$\langle L_j \rangle$.
Let us define the normalized variance of the total population size
of the $j$th bacterial strain
\begin{equation}
Var(N_j) \equiv \frac{\langle \delta N_j^{2} \rangle }{\langle N_j \rangle^{2} }
=\frac{\Omega_o}{\langle N_j \rangle^{2}} \Bigl \{ \langle
\xi_{S_j}^{2} \rangle + \langle \xi_{I_j}^{2} \rangle + \langle
\xi_{L_j}^{2} \rangle +2( \langle \xi_{S_j} \xi_{I_j} \rangle
+\langle \xi_{S_j} \xi_{L_j} \rangle +\langle \xi_{I_j} \xi_{L_j}
\rangle ) \Bigr \}
\end{equation}
where $\langle . \rangle$ is a statistical ensemble average.
The square root of the normalized variance,
$\sqrt{\langle \delta N_j^{2}(t) \rangle}/\langle N_j(t) \rangle$, is the
magnitude of the noise of the $j$th bacterial strain at a given time $t$.
Another useful quantity is the normalized co-variance between
the $i$th bacterial strain in a state $\alpha$ and the $j$th bacterial strain
in a state $\beta$:
\begin{equation}
Cov(\alpha_i,\beta_j) \equiv \frac{\langle \delta \alpha_i \delta \beta_j \rangle}
{\langle N_i \rangle \langle N_j \rangle}
=\frac{\Omega_o \langle \xi_{\alpha_i} \xi_{\beta_j}\rangle}
{\langle N_i \rangle \langle N_j \rangle}
\end{equation}
We will present the results for these variances and co-variances in section 5.
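In practice, these quantities are estimated from an ensemble of stochastic realizations. A minimal sketch of the estimators (the array names are placeholders of our own, not variables from the original code):
\begin{verbatim}
import numpy as np

def normalized_variance(N):
    """Var(N_j) = <(N - <N>)^2> / <N>^2 over realizations.

    N: 1-d array holding the total population of strain j at one
    fixed time point, one entry per stochastic realization.
    """
    m = N.mean()
    return ((N - m)**2).mean() / m**2

def normalized_covariance(A, B, N_i, N_j):
    """Cov(alpha_i, beta_j) = <d alpha_i d beta_j>/(<N_i><N_j>)."""
    return ((A - A.mean())*(B - B.mean())).mean() / (N_i.mean()*N_j.mean())
\end{verbatim}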
\section{THE GILLESPIE ALGORITHM FOR STOCHASTIC SIMULATIONS}
In this section we briefly describe our application of the Gillespie
algorithm$^{\cite{gillespie:1977}}$ for simulation of
the stochastic process captured in the master equation of Eq.~(\ref{master}),
where in total 12 biochemical reactions take place stochastically.
The Gillespie algorithm consists of the iteration of the following steps:
(i) selection of a waiting time $\tau$ during which no reaction occurs,
\begin{equation}
\tau=-\frac{1}{\sum_{j} a_j} \ln \theta
\end{equation}
where $\theta$ is a random variable uniformly chosen from an
interval $(0,1)$ and $a_j$ is the reaction rate for the $j$th
biochemical reaction.
(ii) After such a waiting time, the
biochemical reaction that takes place is determined by the following algorithm.
The occurrence of each event has a weight $a_j/\sum_j a_j$.
Thus the $i$th biochemical reaction
is chosen if $\sum_{j=1}^{i-1} a_j< \theta' \sum^{N}_{j=1}
a_j \le \sum_{j=1}^{i} a_j$, where $\theta'$ is another random number
selected uniformly from the interval $(0,1)$ and $N$ is the total number of biochemical reactions.
(iii) After execution of the $j$th reaction,
all reaction rates that are affected by the $j$th reaction are updated.
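A minimal sketch of this loop in Python, reusing the propensity and stoichiometry table sketched in section 2 (the function names are our own assumptions, not those of the original code):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gillespie(state, propensities, V, t_max):
    """One realization of the Gillespie (direct) method.

    state: integer copy-number vector; propensities(state) returns
    the rates a_j; V[j] is the stoichiometric change of reaction j.
    """
    t, state = 0.0, state.copy()
    while True:
        a = propensities(state)
        a0 = a.sum()
        if a0 == 0.0:                    # absorbing state: nothing can fire
            break
        tau = rng.exponential(1.0 / a0)  # step (i): waiting time
        if t + tau > t_max:
            break
        t += tau
        j = rng.choice(len(a), p=a / a0) # step (ii): choose reaction j
        state += V[j]                    # step (iii): execute and update
    return state

# e.g., one realization of the complete infection system of section 5:
# gillespie(np.array([0, 10, 0, 100, 0, 0, 0]), propensities,
#           stoichiometry(), t_max=24.0)
\end{verbatim}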
We measure the averages, the normalized variances and co-variances of bacterial populations
at various time points,
by taking an average over $10^{4}$ realizations of the infection process, starting
with the same initial condition.
Because a normalized variance or covariance is a measure of deviations of a stochastic variable
from a macroscopic value (which is regarded as a true value),
it is not divided by the sampling size.
The computing time of Gillespie-algorithm-based simulations grows with the
number of reaction events, and hence with the system size.
In the absence of resource competition, the total bacterial population increases
exponentially in time, so the computing cost per unit of simulated time also grows
exponentially.
Because we need to know the stationary ratio of the two bacterial populations,
the computing time should be long enough compared to typical time scales of
the infection process.
This condition imposes a limit on the range of parameters that we can explore
to investigate the validity of the invasion criterion. We choose the values of parameters from
the biologically relevant range given in Table 1 and we, furthermore, set lower bounds on the rates
of infection causing contact $\kappa_j$ and infection-induced lysis $\lambda$,
namely $\kappa_j>\kappa_o$ and $\lambda>\lambda_o$.
\section{RESULTS}
While the methodologies described in sections 3 and 4 apply to the
general case of two susceptible bacterial strains, in this section
we limit our investigations to a particular infection
system, called a ``complete infection system'' hereafter,
in which bacterial strain 1 is completely lysogenic and only bacterial strain 2
is susceptible to phage infection.
There are two advantages to studying the complete infection system:
1) it is equivalent to the infection system that we studied
experimentally$^{\cite{joo:2005}}$, so the results are immediately
applicable to at least one real biological system;
2) the probabilistic description of bacterial strain 1 (lysogens)
is analytically solvable, as it corresponds to a stochastic birth-death
process$^{\cite{vankampen:2001}}$.
In section 5.1, studying a system consisting of only lysogens,
we elucidate the different dynamic patterns of the normalized variance
when the system size remains constant or when it increases.
This finding provides us with the asymptotic behavior of the
normalized variances of both bacterial strains
because both strains become lysogens eventually after all susceptible bacteria are depleted from the system.
In section 5.2, we investigate the role of stochastic noise on phage-mediated competition by identifying
the source of noise and assessing its magnitude in the complete infection system.
Finally in section 5.3, we investigate the effect of noise on the invasion criterion by means of stochastic simulations.
\subsection{Stochastic Birth-death Process: Growth and Spontaneous Lysis of Bacterial
Strain 1}
The dynamics of lysogens of bacterial strain 1 is completely decoupled
from that of the rest of the complete infection system and can be studied independently.
They grow at a rate $r$ and are lysed at a rate $\delta$.
There exists an exact solution for the master equation of this stochastic
birth-death process.
Thus we can gauge the accuracy of an approximate method for the corresponding
stochastic process by comparing it with the exact solution.
(See appendix C for description of the birth-death process
and its exact solution.)
The master equation of the birth-death process is
\begin{equation}
\label{master-BD} \frac{dP_t(I_1)}{dt}=(E^{-1}_{I_1}-1)r I_1
P_t(I_1)+(E^{+1}_{I_1}-1)\delta I_1 P_t(I_1)
\end{equation}
where $I_1(t)$ represents the number of lysogens at time $t$.
$I_1$ is transformed into a new variable $\xi_{I_1}$ as discussed
in section 3, which results in $I_1=\Omega_o i_1 +\Omega^{1/2}_o \xi_{I_1}$, $P_t(I_1)=\Pi_t(\xi_{I_1})$,
and $E^{\pm 1}_{I_1}=1 \pm \Omega^{-1/2}_o \frac{\partial}{\partial
\xi_{I_1}}+\frac{\Omega^{-1}_{o}}{2}\frac{\partial^{2}}{\partial \xi^{2}_{I_1}}$.
Then, keeping terms of order $\Omega^{0}_{o}$ in the $\Omega_o$-expansion of Eq.~(\ref{master-BD}),
we obtain the linear Fokker-Planck
equation,
\begin{equation}
\label{BD-FP}
\frac{\partial \Pi_t(\xi_{I_1})}{\partial t}=(r+\delta)\frac{i_{1}}{2}
\frac{\partial^{2} \Pi_t(\xi_{I_1})}{\partial \xi^{2}_{I_1}}
+(\delta-r)\frac{\partial \xi_{I_1} \Pi_t(\xi_{I_1})}{\partial \xi_{I_1}}
\end{equation}
where $i_1(t)=I_1(t)/\Omega_o$ is a normalized quantity that evolves according to
$\frac{d i_1(t)}{dt}=(r-\delta)i_1(t)$ and $\Omega_o=I_1(0)$.
Multiplying both sides of Eq.~(\ref{BD-FP}) by $\xi_{I_1}$ and $\xi^{2}_{I_1}$ and
integrating over $\xi_{I_1}$, we obtain the equations for the
first and second moments of the noise $\xi_{I_1}$:
\begin{eqnarray}
\label{BD-moments}
\frac{d \langle
\xi_{I_1}\rangle}{dt}&=&(r-\delta) \langle \xi_{I_1} \rangle
\\ \nonumber
\frac{d \langle \xi^{2}_{I_1}\rangle}{dt}&=&(r+\delta) i_1
+2(r-\delta)\langle \xi^{2}_{I_1}\rangle
\end{eqnarray}
{\bf CASE 1:} $r>\delta$. When the growth rate is greater than the lysis rate, the
system size is increasing in time and the second moment of
$\xi_{I_1}$ evolves in time according to the solution of
Eq.~(\ref{BD-moments}):
$\langle \xi^{2}_{I_1}(t) \rangle=
\frac{(r+\delta)}{(r-\delta)}i_1(0)
e^{2(r-\delta)t}(1-e^{-(r-\delta)t})$. Then the normalized
variance of lysogens reads
\begin{equation}
\label{BD-const}
\frac{\langle \delta I^{2}_{1}(t) \rangle}{\langle I_1(t)
\rangle^{2}} =\frac{\Omega_o \langle
\xi^{2}_{I_1}(t)\rangle}{\langle I_1(t)
\rangle^{2}}=\frac{(r+\delta)}{(r-\delta)I_1(0)}(1-e^{-(r-\delta)t})
\end{equation}
Asymptotically the normalized variance approaches a constant value
$(r+\delta)/((r-\delta)I_1(0))$, in good agreement with the results
of stochastic simulations (see Fig.~\ref{fig4}(b)).
{\bf CASE 2:} $r=\delta$. When the growth rate is the same as the lysis rate,
the system size remains constant and the
normalized variance increases linearly in time:
$\langle \delta I^{2}_{1}(t) \rangle / \langle I_1(t) \rangle^{2}
=(r+\delta)t/I_{1}(0)$, exactly reproduced by stochastic simulations
as shown in Fig.~\ref{fig4}(d).
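The CASE 1 prediction of Eq.~(\ref{BD-const}) is easy to test directly. The sketch below (ours; the parameter values mirror Fig.~\ref{fig4}(a,b) and the time horizon is illustrative) compares a direct simulation of the birth-death process against the analytic normalized variance.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r, delta = 0.54, 0.054          # CASE 1: r > delta
I0, t_max, n_runs = 10, 5.0, 10_000

def birth_death(I, t_max):
    """One realization of the lysogen birth-death process I -> I +/- 1."""
    t = 0.0
    while I > 0:
        tau = rng.exponential(1.0 / ((r + delta) * I))
        if t + tau > t_max:
            break
        t += tau
        I += 1 if rng.random() < r / (r + delta) else -1
    return I

samples = np.array([birth_death(I0, t_max) for _ in range(n_runs)], float)
var_sim = samples.var() / samples.mean()**2
var_th = (r + delta)/((r - delta)*I0) * (1 - np.exp(-(r - delta)*t_max))
print(var_sim, var_th)          # should agree within sampling error
\end{verbatim}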
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{FigJSP_4.eps}
\caption{\label{fig4} (Color online) Time-evolution of
the normalized variance of a birth-death process of lysogens
when the system size increases (a,b), or when it remains constant (c,d).
(a) and (b) are time-evolutions of the mean and the normalized variance of
lysogens when the system size increases exponentially in time,
$r=0.54$ and $\delta=0.054$.
(c) and (d) depict those of lysogens when their growth and lysis rates
are the same, $r=\delta=0.54$.
Solid lines represent the results of stochastic simulations
while dotted lines are the results of the macroscopic equation (a and c)
or of the linear Fokker-Planck equation (b and d).
}
\end{center}
\end{figure}
\subsection{Complete Infection System: the Dynamics of Covariances of
Stochastic Fluctuations}
In this subsection, we discuss the effects of noise on phage-mediated competition.
We explore the dynamical patterns of the normalized variances and covariances of the complete infection
system, from which we identify the major source of stochastic fluctuations and assess their
magnitude. In the complete infection system, all bacteria in strain 1 are lysogens and
all bacteria in strain 2 are susceptible to phage infection.
Bacterial strain 1 (lysogens), while decoupled from the rest of the system,
plays the role of the phage source, triggering a massive infection process in
the susceptible bacterial strain 2.
Throughout this subsection, we make pair-wise comparisons among the results of the deterministic
equations, of stochastic simulations, and of the linear Fokker-Planck equation.
Fig.~\ref{fig5}(a) shows the time evolution of bacterial populations
in the susceptible, lysogenic and latent states.
While bacteria of strain 1 (lysogens) grow exponentially unaffected by phage,
the susceptible bacteria of strain 2 undergo a rapid infection process,
being converted either into a latent state or into lysogens.
The number of bacteria in the latent state increases, reaches a peak at a later stage
of the infection process, and then decays exponentially at a rate $\lambda$.
As time elapses, eventually all susceptible bacteria are depleted
from the system and both bacterial strains become lysogens,
which grow at a net growth rate $a-\delta$.
The ratio of the two bacterial strains (lysogens) remains unchanged
asymptotically.
Note that although the initial population size of bacterial strain 1
is one-tenth of the initial population size of bacterial strain 2,
strain 1 will outnumber strain 2 at a later time due to phage-mediated competition.
Pair-wise comparisons between the results from stochastic simulations and
those from the deterministic equations are made in Fig.~\ref{fig5}(a).
They agree well with each other, except for a noticeable discrepancy in
the population size of susceptible bacteria.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{FigJSP_5.eps}
\caption{\label{fig5} (Color online) Time-evolution of the mean
values of bacterial subpopulations (a) and the
normalized variance of total population of bacterial strain 1 and
2 (b). (a) Each subpopulation is represented by two lines; thick lines
come from the macroscopic equations in Eq.~(\ref{macroscopic}) and thin lines
are obtained from stochastic simulations. The four bacterial subpopulations
are represented by different line patterns: bacterial strain 1 in lysogenic state
(solid lines), bacterial strain 2 in susceptible (dotted lines),
lysogenic (dashed lines) and latent (dot-dashed lines) states.
(b) Thick solid and dashed lines represent the normalized variances
of the bacterial strain 1 and 2 from stochastic simulations, respectively, while
thin solid and dashed lines denote those from the linear Fokker-Planck equation,
respectively. The initial condition is $I_1(0)=10$, $S_2(0)=100$ and the rest are zero.
The parameter values are $\delta=0.054$, $\lambda=0.81$, $\kappa_2=0.00054$, $P_2=0.98$.}
\end{center}
\end{figure}
The temporal patterns of the normalized variances of the two bacterial strains
are illustrated in Fig.~\ref{fig5}(b).
The normalized variance of bacterial strain 1 (lysogens)
increases logistically while that of bacterial strain 2 increases logistically
for the first few hours and then rapidly rises to its peak upon the onset of a massive
phage infection process taking place in the susceptible bacterial strain 2.
Asymptotically, susceptible bacteria are depleted from the system and all remaining
bacteria
are lysogens, and their normalized variances converge to a constant as given by Eq.~(\ref{BD-const}).
The results from stochastic simulations indicate that the magnitude of noise, defined
as the ratio of the standard deviation to the average value, of bacterial strain 2
reaches a maximum value of 80$\%$ during the time interval in which the number of susceptible
bacteria drops dramatically and the number of latent bacteria begins to decay from its peak value.
This suggests that the stochastic fluctuations in the phage-mediated competition mainly
come from the stochastic dynamics of the susceptible bacteria undergoing infection
process and death.
Note that the linear Fokker-Planck equation underestimates the peak value of the normalized
variance compared to the stochastic simulations,
while the stationary values of the normalized variances of both bacterial strains from the
two methods agree well.
Fig.~\ref{fig6} shows the dynamical patterns of the normalized covariances of
bacterial populations.
We utilize the normalized covariances to identify the main source of stochastic fluctuations
in phage-mediated competition.
The normalized variance of the total population of bacterial strain 2, $Var(N_2)$,
is composed of the six normalized (co)variances of the subpopulations of bacterial strain
2: $Cov(S_2,S_2)$, $Cov(I_2,I_2)$, $Cov(L_2,L_2)$, $Cov(S_2,I_2)$, $Cov(S_2,L_2)$ and
$Cov(I_2,L_2)$.
The peak values of $Cov(S_2,I_2)$, $Cov(S_2,L_2)$ and $Cov(I_2,L_2)$ are much smaller (ten
times smaller for this particular choice of parameters) than those of
$Cov(S_2,S_2)$, $Cov(I_2,I_2)$ and $Cov(L_2,L_2)$.
The normalized covariance of $Cov(L_2,L_2)$ reaches its peak value,
the largest value among all normalized covariances, at the exact moment when the normalized
variance of the total population of bacterial strain 2, $Var(N_2)$, hits its maximum value
as shown in Fig.~\ref{fig5}(b).
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{FigJSP_6.eps}
\caption{\label{fig6} (Color online) Time evolution of the normalized (co-)variances
of bacterial populations in different states.
See the main text for a formal definition of
the normalized covariance $Cov(\alpha_i,\beta_j)$.
The time-evolution of each co-variance is plotted with two lines: solid lines are
from stochastic simulations while dashed lines are from the linear
Fokker-Planck equation, Eq.~(\ref{second_moment_FP}).
``F'' stands for $\Phi$.
The same parameters and initial conditions are used as in
Fig.~\ref{fig5}. Only 12 out of 15 co-variances are plotted.}
\end{center}
\end{figure}
\noindent
This indicates that the stochastic fluctuations in the phage-mediated
competition mainly come from the fluctuations of the bacterial population in the latent state.
Those fluctuations originate from two events:
the incoming population flow of just-infected susceptible bacteria into the latent bacterial
population, and the outgoing population flow due to infection-induced lysis of
the bacteria in the latent state.
It follows that the magnitude of noise depends on the values of
the kinetic parameters of the complete infection system (the infection-causing contact rate
$\kappa_j$ and the infection-induced lysis rate $\lambda$), which
also suggests the possibility of large deviations from the deterministic invasion
criterion due to stochastic noise.
The time evolutions of the normalized co-variances obtained
from the linear Fokker-Planck equation and
from stochastic simulations agree well with each other.
This agreement supports the applicability of van Kampen's $\Omega_o$-expansion method
to a nonlinear stochastic system which grows indefinitely.
\subsection{The Effect of Stochastic Noise on the Invasion Criterion}
In this subsection we investigate the effects of noise on the validity of the invasion criterion
and measure the deviations of the stochastic results from the simple relationship
in Eq.~(\ref{invasion}) obtained from the deterministic model.
To this end, we perform stochastic simulations over a wide range of kinetic parameter values.
We consider both a complete infection system having
only lysogens in bacterial strain 1 ($P_1=0$) in Fig.\ref{fig7}(a) and a
generalized infection system in Fig.\ref{fig7}(b) where both strains are susceptible to phage infection,
yet with different degrees of susceptibility and vulnerability to phage.
The invasion criterion obtained from the deterministic equations is expressed
with a simple relationship between the initial and final ratios of population
sizes of two strains and phage pathologies:
$r_{12}(0)/r_{12}(T)=(1-P_2)/(1-P_1)$.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{FigJSP_7.eps}
\caption{\label{fig7} (Color online) Verification of the invasion criterion by means of
stochastic simulations: (a) a complete infection system case where
bacterial strain 1 is lysogen and only bacterial strain 2 is susceptible,
(b) a general infection system where both strains are susceptible to phage.
Thick red lines represent the invasion criterion obtained from deterministic
equations, i.e., $r_{12}(0)/r_{12}(T)=(1-P_2)/(1-P_1)$ where the time $T$
is chosen to be a sufficiently long time so that there are no more susceptible bacteria in the system.
Error bars are the standard deviations calculated from the stochastic simulation.
Filled circles are for a fast infection process ($\kappa > 10\kappa_o$,$\lambda > 10 \lambda_o$)
while open squares are for slow infection ($\kappa_o <\kappa <10
\kappa_o$,$\lambda_o < \lambda < 10 \lambda_o$).
Each one of about 500 data points in each figure represents the result of stochastic simulations,
averaged over $10^{4}$ realizations. Please see the main text for the choice of parameter values.
}
\end{center}
\end{figure}
Here $T$ is defined as a sufficiently long time such that there are no more
susceptible bacteria left to undergo the infection process and only
lysogens are in the system.
To amplify the effect of noise on phage-mediated competition,
we set the initial sizes of bacterial populations to be small;
they are randomly chosen from an interval $10< S_{j}(0), I_{j}(0)<110$.
To make sure that the complete infection system reaches a stationary state of having only lysogens within 24 hours,
we limit the values of the infection-causing contact rate $\kappa_j$
and of the infection-induced lysis rate $\lambda$:
$\kappa_j>\kappa_o$ and $\lambda>\lambda_o$ where $\kappa_o=0.000054$ and $\lambda_o=0.081$.
We distinguish infection processes based on their speed: a very fast
infection process ($\kappa > 10\kappa_o$, $\lambda > 10 \lambda_o$) and a slow infection process
($\kappa_o<\kappa<10 \kappa_o$, $\lambda_o < \lambda < 10 \lambda_o$).
The values of all other kinetic parameters in Fig.~\ref{fig2} are
randomly selected from the biologically relevant intervals (see Table 1):
$0< \delta < 0.108$, $1< \chi <100$ and $0< P_j<1$.
For about 500 sets of parameters for each figure in Fig.~\ref{fig7},
we measure the average and the standard deviation of the ratio
$r_{12}(0)/r_{12}(T)$ after taking ensemble average over $10^{4}$ realizations.
Note that the standard deviation is measured as a deviation from the macroscopic (true)
value and it is not normalized by the square root of the sampling size.
We find that the average values of the ratios $r_{12}(0)/r_{12}(T)$
still fall onto the linear relationship with phage pathologies,
independently of other kinetic parameters.
However, the ratios $r_{12}(0)/r_{12}(T)$ are broadly distributed
around the mean value with large deviations, especially when the phage is more
pathological on strain 2, i.e., as $P_2 \rightarrow 1$ with a fixed $P_1=0$
for Fig.~\ref{fig7}(a), and when the phage is more pathological on strain 1 than
on bacterial strain 2, i.e., $(1-P_2)/(1-P_1)\gg 1$ or $P_1\gg P_2$, for Fig.~\ref{fig7}(b)
\footnote[4]{The apparent contradiction between these two cases is a result of differences
in the initial
condition as well as the fundamental differences in partial and complete resistance.}.
Thus the probabilistic model of phage-mediated competition in bacteria confirms that the quantitative amount of
phage-mediated competition can be still predictable despite inherent stochastic
fluctuations, yet deviations can be also large, depending on the values of phage
pathologies.
\section{CONCLUSION}
We utilized a probabilistic model of a phage-mediated invasion process to investigate the
conjecture that (i) a bacterial community structure is shaped by phage-mediated
competition between bacteria, and to examine (ii) the effect of intrinsic noise
on the conclusions obtained from a deterministic model of the equivalent
system. The system under our consideration consists of two strains of bacteria: both
bacterial strains are susceptible to phage infection and one invasive bacterial strain
contains lysogens carrying the prophage.
The two bacterial strains are genetically identical except for their susceptibilities to phage
and the phage pathologies on them. We restricted the infection system such that
bacteria grow in a log phase, i.e., there is no resource competition between them.
Despite the historical success of deterministic models of ecological processes, they
produce, at best, only partially correct pictures of stochastic processes in ecological
systems. A good number of examples of the failures of deterministic models in
ecology are presented in Ref.~\cite{durrett:1994}. The principal flaw of deterministic
models is their reliance on many, sometimes unphysical, assumptions such as
continuous variables, complete mixing and no rare events.
Thus, we used both Fokker-Planck equations and stochastic
simulations in the study of stochastic phage-mediated invasion processes in bacteria.
Van Kampen's system-size expansion$^{\cite{vankampen:2001}}$ was used to obtain the
linear
Fokker-Planck equation, while the Gillespie algorithm was used for stochastic simulations.
We found that the linear Fokker-Planck equation is a good approximation to the nonlinear
dynamics of the stochastic phage-mediated invasion process; the time evolutions of the
co-variances of bacterial populations from the Fokker-Planck equation and from stochastic
simulations agree well with each other.
To investigate the role of noise during phage-mediated processes, we measured the magnitude
of noise, defined as the ratio of the standard deviation of bacterial population to its mean
as time elapses. After a sufficiently long time, compared to the typical time scale of
infection processes, all surviving bacteria are lysogens, which undergo the process of
growth and spontaneous lysis. As it is a simple birth-death process with a positive net
growth rate, the magnitude of noise asymptotically converges to a rather small constant
value. However, it was found from both the linear Fokker-Planck equation and stochastic simulations
that the magnitude of noise of the bacterial subpopulations both in the susceptible and
latent states rapidly increases and reaches a peak value in the middle of the massive phage-induced lysis event. Thus the
population size of the susceptible and latent bacteria is subject to large deviations from its mean.
We investigated the effect of noise on the invasion criterion, which is defined as the
condition of the system parameters for which the invading bacterial strain harboring and
transmitting the phage takes over the ecological niches occupied by bacterial strains
susceptible to the phage. In our previous work$^{\cite{joo:2005}}$, we showed, by using
{\it in vitro} experiments and deterministic models, that phage-conferred competitive
advantage could be quantitatively measured and predicted, and that the final ratio
$r_{12}(T)$ of population sizes of two competing bacteria is determined by only two
quantities, the initial ratio $r_{12}(0)$ and the phage pathology (phage-induced
mortality), independently of
other kinetic parameters such as the infection-causing contact rate, the spontaneous and
infection-induced lysis rates, and the phage burst size.
Here we found from stochastic simulations that the average values
of the ratios $r_{12}(0)/r_{12}(T)$ still fall onto the deterministic linear
relationship with phage pathologies, independently of other kinetic parameters.
However, the ratios $r_{12}(0)/r_{12}(T)$ are broadly
distributed around the mean value, with prominently large deviations when the phage is
more pathological on the invading bacterial strain 1 than on strain 2, i.e., $P_1\gg P_2$.
Thus the probabilistic model of
phage-mediated competition in bacteria confirms that the quantitative amount of
phage-mediated competition can still be predictable despite inherent stochastic
fluctuations, yet deviations can also be large, depending on the values of phage
pathologies.
Here we assumed that the bacterial growth rates and lysis rates are identical
in the two strains. Relaxing this assumption has a drastic simplifying effect as
the steady state is determined solely by the net growth rates of the two strains.
Regardless of initial conditions in the generalized infection system, all bacteria that
survive after a massive phage infection
process are lysogens, so long as the phage-infection is in action on both bacterial strains.
If the net growth rates of two
strains are such that $r_1-\delta_1>r_2-\delta_2>0$, asymptotically bacterial strain 1
will outnumber strain 2, regardless of phage pathologies and initial population sizes. If
the
net growth rate of any bacterial strain is negative, it will go extinct. Thus the
non-trivial case arises only when the net growth rates of the two bacterial strains are identical.
We significantly simplified many aspects of complex pathogen-mediated
dynamical systems to obtain this stochastic model. The two most
prominent yet neglected features are the spatial distribution and the
connectivity pattern of the host population. As demonstrated by stochastic
contact processes on complex networks (e.g., infinite scale-free networks)
or on d-dimensional hypercubic
lattices$^{\cite{newman:2002,liggett:1999,pastorsatorras:2001,andjel:1996,durrett:1991,kuulasmaa:1982}}$,
these two effects may dramatically change the dynamics and stationary
states of the pathogen-mediated dynamical systems. While our experimental
system does not necessitate incorporation of spatial effects, complete
models of real pathogen-modulated ecological processes, e.g.,
phage-mediated competition as a driving force of the oscillation of two {\it V.
cholerae} bacterial strains, one toxic (phage-sensitive) and the other
non-toxic (phage-carrying and resistant)$^{\cite{faruque:2005}}$,
may need to take these effects into account.
\newpage
\begin{appendix}
\noindent
{\bf APPENDIX A: SYSTEMATIC EXPANSION OF THE MASTER EQUATION}
\\
In this appendix we provide the result of the systematic expansion of the master equation in Eq.~(\ref{master}).
The master equation in the new variables reads
\begin{eqnarray}
\label{expansion}
&&\frac{\partial \Pi}{\partial t}
-\sum_{\alpha=S_1,S_2,I_1,I_2,L_1,L_2,\Phi} \Bigl \{
\Omega^{\frac{1}{2}}_o \frac{d \alpha}{dt} \frac{\partial \Pi}{\partial \xi_{\alpha}}
\Bigr \}
\\ \nonumber
&&= \sum_j \biggl \{
a \sum_{\alpha=S_j,I_j} \Bigl \{
(-\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_\alpha}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_\alpha^{2}} -...)
(\Omega_o \alpha +\Omega^{\frac{1}{2}}_o \xi_{\alpha}) \Pi
\Bigr \}
\\ \nonumber
&&+\kappa_j \Bigl \{
(1+\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_\Phi}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_\Phi^{2}}+...)
(1+\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{S_{j}}}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_{S_{j}}^{2}}+...)
\\ \nonumber
&&
\Bigl [
(1-P_j)(1-\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{I_j}}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_{I_j}^{2}} -...)+
P_j (1-\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{L_j}}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_{L_j}^{2}} -...)
\Bigr ]
-1
\Bigr \}
\\ \nonumber
&& (\Omega_o \phi +\Omega^{\frac{1}{2}}_o \xi_{\Phi})
(\Omega_o s_j +\Omega^{\frac{1}{2}}_o \xi_{S_j}) \Pi
+\delta \Bigl \{
(1+\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{I_j}}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_{I_j}^{2}}+...)
\\ \nonumber
&& (1-\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_\Phi}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_\Phi^{2}}+...)^{\chi}
-1 \Bigr \}
(\Omega_o i_j +\Omega^{\frac{1}{2}}_o \xi_{I_j}) \Pi
\\ \nonumber
&&+\lambda \Bigl \{
(1+\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_{L_j}}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_{L_j}^{2}}+...)
(1-\Omega^{-\frac{1}{2}}_o \frac{\partial}{\partial \xi_\Phi}
+\frac{\Omega^{-1}_o}{2} \frac{\partial^{2}} {\partial \xi_\Phi^{2}}+...)^{\chi}
-1 \Bigr \}
\\ \nonumber
&& (\Omega_o l_j +\Omega^{\frac{1}{2}}_o \xi_{L_j}) \Pi
\biggr \}
\nonumber
\end{eqnarray}
\bigskip
\noindent
{\bf APPENDIX B: LINEAR FOKKER-PLANCK EQUATION DERIVED FROM SYSTEMATIC EXPANSION OF THE MASTER EQUATION}
\\
From Eq.~(\ref{expansion}) we can collect terms of order $\Omega^{0}$ and obtain
the linear Fokker-Planck equation,
\begin{eqnarray}
\label{FP}
&&\frac{\partial \Pi}{\partial t}=\sum_j \biggl \{
a(-\frac{\partial \xi_{S_j}\Pi}{\partial \xi_{S_j}}
-\frac{\partial \xi_{I_j}\Pi}{\partial \xi_{I_j}}
+\frac{s_j}{2}\frac{\partial^{2} \Pi}{\partial \xi_{S_j}^{2}}
+\frac{i_j}{2}\frac{\partial^{2} \Pi}{\partial \xi_{I_j}^{2}})
\\ \nonumber
&&+\kappa_j \Omega_o \phi s_j \Bigl \{
\frac{\partial^{2}}{\partial \xi_\Phi \partial \xi_{S_j}}
+\frac{1}{2}(\frac{\partial^{2}}{\partial \xi_{\Phi}^{2}}
+\frac{\partial^{2}}{\partial \xi_{S_j}^{2}})
+(1-P_j)(\frac{1}{2}\frac{\partial^{2}}{\partial \xi_{I_j}^{2}}
-\frac{\partial^{2}}{\partial \xi_\Phi \partial \xi_{I_j}}
-\frac{\partial^{2}}{\partial \xi_{S_j} \partial \xi_{I_j}}
)
\\ \nonumber
&&+P_j(\frac{1}{2}\frac{\partial^{2}}{\partial \xi_{L_j}^{2}}
-\frac{\partial^{2}} {\partial \xi_\Phi \partial \xi_{L_j}}
-\frac{\partial^{2}} {\partial \xi_{S_j} \partial \xi_{L_j}}
)
\Bigr \} \Pi
+\kappa_j \Omega_o \Bigl \{
\frac{\partial}{\partial \xi_\Phi}+
\frac{\partial}{\partial \xi_{S_j}}
-(1-P_j)\frac{\partial}{\partial \xi_{I_j}}
-P_j\frac{\partial}{\partial \xi_{L_j}}
\Bigr \}
\\ \nonumber
&&(\phi \xi_{S_j} \Pi+ s_j \xi_\Phi \Pi)
+\delta \Bigl \{
\frac{\partial \xi_{I_j} \Pi}{\partial \xi_{I_j}}
-\chi \frac{\partial \xi_{I_j} \Pi}{\partial \xi_{\Phi}}
+\frac{i_j}{2}\frac{\partial^{2} \Pi}{\partial \xi_{I_j}^{2}}
+i_j(\frac{\chi}{2}+{}_{\chi}C_{2}) \frac{\partial^{2} \Pi}{\partial \xi_{\Phi}^{2}}
-\chi i_j \frac{\partial^{2} \Pi}{\partial \xi_{I_j} \partial \xi_{\Phi}}
\Bigr \}
\\ \nonumber
&&+\lambda \Bigl \{
\frac{\partial \xi_{L_j} \Pi}{\partial \xi_{L_j}}
-\chi \frac{\partial \xi_{L_j} \Pi}{\partial \xi_{\Phi}}
+\frac{l_j}{2}\frac{\partial^{2} \Pi}{\partial \xi_{L_j}^{2}}
+l_j(\frac{\chi}{2}+{}_{\chi}C_{2})\frac{\partial^{2} \Pi}{\partial \xi_{\Phi}^{2}}
-\chi l_j \frac{\partial^{2} \Pi}{\partial \xi_{L_j} \partial \xi_{\Phi}}
\Bigr \}
\biggr \}
\end{eqnarray}
We obtain the first moments of the Gaussian noise by multiplying Eq.~(\ref{FP}) by $\xi_{\alpha}$ and
integrating over all $\underline{\xi}$, i.e., $\langle \xi_{\alpha} \rangle=
\int \xi_{\alpha}\, \Pi \, d\underline{\xi}$.
\begin{eqnarray}
\label{first_moment_FP}
\frac{d \langle \xi_{S_j} \rangle }{dt}&=& a \langle \xi_{S_j} \rangle
-\kappa_j \Omega_o (\phi \langle \xi_{S_j} \rangle + s_j \langle \xi_{\Phi} \rangle)
\\ \nonumber
\frac{d \langle \xi_{I_j} \rangle }{dt}&=& (a-\delta) \langle \xi_{I_j} \rangle
+\kappa_j \Omega_o (1-P_j)(\phi \langle \xi_{S_j} \rangle +s_j \langle \xi_{\Phi} \rangle)
\\ \nonumber
\frac{d \langle \xi_{L_j} \rangle }{dt}&=&
\kappa_j \Omega_o P_j(\phi \langle \xi_{S_j} \rangle +s_j \langle \xi_{\Phi} \rangle)
-\lambda \langle \xi_{L_j} \rangle
\\ \nonumber
\frac{d \langle \xi_{\Phi} \rangle }{dt}&=&
\sum_j \Bigl \{
\delta \chi \langle \xi_{I_j} \rangle +\lambda \chi \langle \xi_{L_j} \rangle
-\kappa_j \Omega_o (\phi \langle \xi_{S_j} \rangle + s_j \langle \xi_{\Phi} \rangle)
\Bigr \}
\nonumber
\end{eqnarray}
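As a numerical cross-check, these first-moment equations can be integrated
with a standard ODE solver. The sketch below is an illustration under stated
assumptions: the macroscopic trajectories $\phi(t)$ and $s_j(t)$ are assumed
to have been precomputed from the deterministic rate equations and supplied
as callables, and all argument names are placeholders for the model constants.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def first_moments_rhs(t, xi, a, delta, lam, chi,
                      kappa, Omega_o, P, phi, s):
    # xi = [<xi_S1>,<xi_S2>,<xi_I1>,<xi_I2>,<xi_L1>,<xi_L2>,<xi_Phi>]
    xS, xI, xL, xPhi = xi[0:2], xi[2:4], xi[4:6], xi[6]
    # linearized infection term kappa_j*Omega_o*(phi*xi_Sj + s_j*xi_Phi)
    inf = kappa * Omega_o * (phi(t) * xS + s(t) * xPhi)
    dxS = a * xS - inf
    dxI = (a - delta) * xI + (1.0 - P) * inf
    dxL = P * inf - lam * xL
    dxPhi = np.sum(delta * chi * xI + lam * chi * xL - inf)
    return np.concatenate([dxS, dxI, dxL, [dxPhi]])

# sol = solve_ivp(first_moments_rhs, (0.0, T), np.zeros(7),
#                 args=(a, delta, lam, chi, kappa, Omega_o, P, phi, s))
\end{verbatim}
Here \texttt{kappa} and \texttt{P} are length-2 arrays indexed by the strain $j$.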
Similarly we obtain the second moments (covariances) of the Gaussian noise
by multiplying Eq.~(\ref{FP}) by $\xi_{\alpha} \xi_{\beta}$ and
integrating over all $\underline{\xi}$, i.e.,
$\langle \xi_{\alpha} \xi_{\beta} \rangle=\int \xi_{\alpha} \xi_{\beta}\, \Pi \, d\underline{\xi}$.
\begin{eqnarray}
\label{second_moment_FP}
\frac{d \langle \xi_{S_j} \xi_{S_{j'}} \rangle}{dt}&=&
2a \langle \xi_{S_j} \xi_{S_{j'}} \rangle
+(a s_j+ \kappa_j \Omega_o s_j \phi) \delta_{j j'}
-\Bigl \{
\kappa_j \Omega_o (\langle \xi_{S_j} \xi_{S_{j'}} \rangle \phi
\\ \nonumber
&&+\langle \xi_{S_{j'}} \xi_{\Phi} \rangle s_j)
+(j \longleftrightarrow j') \Bigr \}
\\ \nonumber
\frac{d \langle \xi_{I_j} \xi_{I_{j'}} \rangle}{dt}&=&
2a \langle \xi_{I_j} \xi_{I_{j'}} \rangle
+(a i_j+ \delta i_j+\kappa_j \Omega_o \phi s_j (1-P_j)) \delta_{j j'}
-2 \delta \langle \xi_{I_j} \xi_{I_{j'}} \rangle
\\ \nonumber
&&+\Bigl \{
\kappa_j \Omega_o (1-P_j)(\langle \xi_{S_j} \xi_{I_{j'}} \rangle \phi
+\langle \xi_{I_{j'}} \xi_{\Phi} \rangle s_j)
+(j \longleftrightarrow j') \Bigr \}
\\ \nonumber
\frac{d \langle \xi_{L_j} \xi_{L_{j'}} \rangle}{dt}&=&
(\lambda l_j +\kappa_j \Omega_o \phi s_j P_j)\delta_{j j'}
-2 \lambda \langle \xi_{L_j} \xi_{L_{j'}} \rangle
+\Bigl \{
\kappa_j \Omega_o P_j (\langle \xi_{S_j} \xi_{L_{j'}} \rangle \phi
+\langle \xi_{L_{j'}} \xi_{\Phi} \rangle s_j)
\\ \nonumber
&&+(j \longleftrightarrow j') \Bigr \}
\\ \nonumber
\frac{d \langle \xi_{\Phi}^{2} \rangle}{dt}&=&
\sum_j \Bigl \{
\kappa_j \Omega_o s_j \phi-2 \kappa_j \Omega_o (\langle \xi_{S_j} \xi_{\Phi} \rangle \phi
+\langle \xi_{\Phi}^{2} \rangle s_j)
+2 \chi (\delta \langle \xi_{I_j} \xi_{\Phi} \rangle
+\lambda \langle \xi_{L_j} \xi_{\Phi} \rangle )
\\ \nonumber
&&+(\chi+2 {}_{\chi}C_{2})(\delta i_j+\lambda l_j)
\Bigl \}
\\ \nonumber
\frac{d \langle \xi_{S_j} \xi_{I_{j'}} \rangle}{dt}&=&
2a \langle \xi_{S_j} \xi_{I_{j'}} \rangle
-(1-P_j) \kappa_j \Omega_o s_j \phi \delta_{j,j'}
-\kappa_j \Omega_o (\langle \xi_{S_j} \xi_{I_{j'}} \rangle \phi
+\langle \xi_{I_{j'}} \xi_{\Phi} \rangle s_j)
\\ \nonumber
&&+(1-P_{j'}) \kappa_{j'} \Omega_o (\langle \xi_{S_j} \xi_{S_{j'}} \rangle \phi
+\langle \xi_{S_j} \xi_{\Phi} \rangle s_{j'})
-\delta \langle \xi_{S_j} \xi_{I_{j'}} \rangle
\\ \nonumber
\frac{d \langle \xi_{S_j} \xi_{L_{j'}} \rangle}{dt}&=&
a \langle \xi_{S_j} \xi_{L_{j'}} \rangle
-P_j \kappa_j \Omega_o s_j \phi \delta_{j,j'}
-\lambda \langle \xi_{S_j} \xi_{L_{j'}} \rangle
-\kappa_j \Omega_o (\langle \xi_{S_j} \xi_{L_j'} \rangle \phi
+\langle \xi_{L_{j'}} \xi_{\Phi} \rangle s_j)
\\ \nonumber
&&+P_{j'} \kappa_{j'} \Omega_o (\langle \xi_{S_j} \xi_{S_{j'}} \rangle \phi
+\langle \xi_{S_j} \xi_{\Phi} \rangle s_{j'})
\\ \nonumber
\end{eqnarray}
\begin{eqnarray}
\frac{d \langle \xi_{S_j} \xi_{\Phi} \rangle}{dt}&=&
a \langle \xi_{S_j} \xi_{\Phi} \rangle
+\kappa_j \Omega_o s_j \phi
+\chi \sum_{j'} (\delta \langle \xi_{S_j} \xi_{I_{j'}} \rangle
+\lambda \langle \xi_{S_j} \xi_{L_{j'}} \rangle)
\\ \nonumber
&&-\sum_{j'} (\kappa_{j'} \Omega_o \langle \xi_{S_{j'}} \xi_{S_j} \rangle
\phi
+\kappa_{j'} \Omega_o \langle \xi_{S_j} \xi_{\Phi} \rangle s_{j'})
-\kappa_j \Omega_o (\langle \xi_{S_{j}} \xi_{\Phi} \rangle \phi
+\langle \xi_{\Phi}^{2} \rangle s_j)
\\ \nonumber
\frac{d \langle \xi_{I_j} \xi_{L_{j'}} \rangle}{dt}&=&
(a-\delta) \langle \xi_{I_j} \xi_{L_{j'}} \rangle
-\lambda \langle \xi_{I_j} \xi_{L_{j'}} \rangle
+\kappa_j \Omega_o (1-P_j)
( \langle \xi_{S_j} \xi_{L_{j'}} \rangle \phi
+\langle \xi_{L_{j'}} \xi_{\Phi} \rangle s_j )
\\ \nonumber
&&+\kappa_{j'} \Omega_o P_{j'}
( \langle \xi_{I_j} \xi_{S_{j'}} \rangle \phi
+\langle \xi_{I_j} \xi_{\Phi} \rangle s_{j'})
\\ \nonumber
\frac{d \langle \xi_{I_j} \xi_{\Phi} \rangle}{dt}&=&
(a-\delta) \langle \xi_{I_j} \xi_{\Phi} \rangle
-\delta \chi i_{j}
+\chi \sum_{j'} (\lambda \langle \xi_{I_j} \xi_{L_{j'}} \rangle
+\delta \langle \xi_{I_j} \xi_{I_{j'}} \rangle )
-\kappa_j \Omega_o s_j \phi (1-P_j)
\\ \nonumber
&&-\sum_{j'} \kappa_{j'} \Omega_o (\langle \xi_{S_{j'}} \xi_{I_{j}} \rangle \phi
+\langle \xi_{I_{j}} \xi_{\Phi} \rangle s_{j'})
+\kappa_j \Omega_o (1-P_j) (\langle \xi_{S_j} \xi_{\Phi} \rangle \phi
+\langle \xi_{\Phi}^{2} \rangle s_j)
\\ \nonumber
\frac{d \langle \xi_{L_j} \xi_{\Phi} \rangle}{dt}&=&
-P_j \kappa_j \Omega_o s_j \phi
-\lambda \chi l_j
-\lambda \langle \xi_{\Phi} \xi_{L_{j}} \rangle
+\chi \sum_{j'} (\delta \langle \xi_{L_{j}} \xi_{I_{j'}} \rangle
+\lambda \langle \xi_{L_j} \xi_{L_{j'}} \rangle )
\\ \nonumber
&&- \sum_{j'} \kappa_{j'} \Omega_o (\langle \xi_{S_{j'}} \xi_{L_{j}} \rangle \phi
+\langle \xi_{L_{j}} \xi_{\Phi} \rangle s_{j'})
+\kappa_j \Omega_o P_j (\langle \xi_{S_j} \xi_{\Phi} \rangle \phi
+\langle \xi_{\Phi}^{2} \rangle s_j)
\nonumber
\end{eqnarray}
\end{appendix}
\bigskip
\noindent
{\bf APPENDIX C: STOCHASTIC BIRTH-DEATH PROCESSES}
\\
In this section we present an exact solution of the master equation
of a stochastic birth-death process, i.e., a prototype of
all birth-death systems, consisting of a population of individuals $X$
whose size $x$ is a non-negative
integer~\cite{vankampen:2001,gardiner:2004}.
In a birth-death (one-step) process only one individual $X$ is
born or dies at a given time. The transition probabilities can be written
\begin{equation}
T(x'|x;t)=t^{+}(x)\delta_{x',x+1}+t^{-}(x)\delta_{x',x-1}.
\end{equation}
Thus there are two processes: birth, $x \rightarrow x+1$, with a
transition probability $t^{+}(x)$, and death, $x \rightarrow x-1$,
with a transition probability $t^{-}(x)$.
The master equation then takes the form,
\begin{equation}
\frac{d P_t(x)}{dt}=(E-1)t^{-}(x)P_t(x)+(E^{-1}-1)t^{+}(x)P_t(x)
\end{equation}
where $E$ is a step operator, i.e., $E^{\pm1}f(x)=f(x\pm1)$.
This expression remains valid at the boundary $x=0$ if we impose $t^{-}(0)=t^{+}(-1)=0$.
In the case of the growth and spontaneous lysis process of
bacteria carrying prophage, $\xrightarrow{r} X \xrightarrow{\delta}$ with
$t^{+}(x)=rx$ and $t^{-}(x)=\delta x$, the master equation
takes the simple form
\begin{equation}
\label{masterBD}
\frac{dP_t(x)}{dt}=(E^{-1}-1) r x P_t(x)+(E^{+1}-1) \delta x P_t(x)
\end{equation}
To solve Eq.~(\ref{masterBD}), we introduce the generating function
$G(s,t)=\sum^{\infty}_{x=0} s^{x} P_t(x)$ so that
\begin{equation}
\label{GFBD}
\partial_{t}G(s,t)=f(s) \partial_{s} G(s,t)
\end{equation}
where $f(s)=(rs-\delta)(s-1)$.
We find a substitution that provides the desirable transformation of
variable, $f(s) \partial_{s}=f(s)\frac{\partial z}{\partial s}
\frac{\partial}{\partial z}=-\partial_{z}$,
\begin{equation}
z=-\int\frac{ds}{f(s)}=\frac{1}{\delta-r}
\log \Bigl ( \frac{s-1}{rs-\delta} \Bigr )
\end{equation}
This substitution, $G(s,t)=\psi(z,t)$, gives
\begin{equation}
\partial_{t}\psi(z,t)+\partial_{z}\psi(z,t)=0
\end{equation}
whose solution is an arbitrary function of $t-z$.
We write the solution of the above equation as $\psi(z,t)=F[e^{-t+z}]$,
so
\begin{equation}
G(s,t)=F \Bigl [ e^{-t} \Bigl ( \frac{s-1}{r s-\delta}
\Bigr )^{\frac{1}{\delta-r}} \Bigr ]
\end{equation}
Normalization requires $G(1,t)=\sum_x P_t(x)=1$, and hence $F(0)=1$.
The initial condition $P_{t=0}(x)=\delta_{x,x_o}$ determines $F$,
which means
\begin{equation}
G(s,0)=s^{x_o}=F \Bigl [ \Bigl (\frac{s-1}{r s-\delta}
\Bigr )^{\frac{1}{\delta-r}} \Bigr ]
\end{equation}
so that
\begin{equation}
\label{GF}
G(s,t)=\Bigl ( \frac{rs-\delta+\delta (1-s) e^{-t(\delta-r)}}
{rs-\delta+r(1-s) e^{-t(\delta-r)}} \Bigr )^{x_o}
\end{equation}
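As a quick sanity check on Eq.~(\ref{GF}) (an illustrative snippet, not part
of the derivation), the normalization and initial conditions can be verified
with a computer algebra system:
\begin{verbatim}
import sympy as sp

s, t, r, delta, x0 = sp.symbols("s t r delta x_0", positive=True)
e = sp.exp(-t * (delta - r))
G = ((r*s - delta + delta*(1 - s)*e)
     / (r*s - delta + r*(1 - s)*e))**x0

assert G.subs(s, 1) == 1                      # normalization G(1,t) = 1
init = G.subs(t, 0).subs({r: 1, delta: sp.Rational(3, 5), x0: 20})
assert sp.simplify(init - s**20) == 0         # G(s,0) = s^{x_o}
\end{verbatim}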
Eq.~(\ref{GF}) can be expanded in a power series in $s$ to
produce the conditional probability density $P_t(x) \equiv P(x,t|x_o,0)$,
which is the complete solution of the master equation Eq.~(\ref{masterBD}).
Because it is complicated and of little practical use,
we do not present the conditional probability density here,
but instead compute the moments from the generating function in Eq.~(\ref{GF})
\begin{eqnarray}
\Bigl [ \frac{\partial \log G(s,t)}{\partial s} \Bigr ]_{s=1}&=&\langle x(t) \rangle
\\ \nonumber
\Bigl [\frac{\partial^{2} \log G(s,t)}{\partial s^{2}} \Bigr ]_{s=1}&=&
\langle x(t)^{2} \rangle-\langle x(t) \rangle^{2} -\langle x(t) \rangle,
\end{eqnarray}
obtaining
\begin{eqnarray}
\langle x(t) \rangle &=& x_o e^{t(r-\delta)}
\\ \nonumber
\langle \Delta x(t)^{2} \rangle &=&
\langle x(t)^{2} \rangle-\langle x(t)\rangle^{2}
=x_o \frac{r+\delta}{r-\delta}
\bigl ( e^{2t(r-\delta)}-e^{t(r-\delta)} \bigr ),
\end{eqnarray}
which exactly correspond to the mean and the variance obtained from the
linear Fokker-Planck equation.
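These closed forms are also easy to verify numerically by integrating the
master equation, Eq.~(\ref{masterBD}), on a truncated state space. The sketch
below uses illustrative parameter values, and the truncation $N$ is an
assumption chosen so that the probability mass stays well inside the grid.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def master_rhs(t, P, r, delta):
    # dP(x)/dt = r(x-1)P(x-1) + delta(x+1)P(x+1) - (r+delta)xP(x)
    x = np.arange(P.size)
    dP = -(r + delta) * x * P
    dP[1:] += r * x[:-1] * P[:-1]      # birth into x from x-1
    dP[:-1] += delta * x[1:] * P[1:]   # death into x from x+1
    return dP

x0, r, delta, T, N = 20, 1.0, 0.6, 2.0, 400
P0 = np.zeros(N); P0[x0] = 1.0
sol = solve_ivp(master_rhs, (0.0, T), P0, args=(r, delta),
                rtol=1e-8, atol=1e-10)
x = np.arange(N); P = sol.y[:, -1]
mean = (x * P).sum()
var = (x**2 * P).sum() - mean**2
# compare with x0*exp((r-delta)*T) and
# x0*(r+delta)/(r-delta)*(exp(2*(r-delta)*T) - exp((r-delta)*T))
\end{verbatim}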
\bigskip
\noindent
{\bf ACKNOWLEDGMENT}\\
This work was supported by a Sloan Research Fellowship to R.A.
and by NIH grant 5-R01-A1053075-02 to E.H.
\section{Introduction}
The evolution of Internet-based technologies in recent years has led to challenges in terms of traffic congestion, resource scarcity and secrecy that current wireless networks fail to support.
Therefore, optical wireless communication (OWC) has attracted massive interest from researchers aiming to provide unprecedented communication speeds. Basically, OWC sends information modulated onto the optical band, which offers huge license-free bandwidth and high spectral and energy efficiency.
In \cite{6011734}, light-emitting diodes (LEDs) were used as transmitters providing data rates at gigabit-per-second (Gbps) communication speeds. Despite the attractive characteristics of LEDs, their modulation speed is limited, and they are usually deployed for providing illumination; therefore, increasing the number of transmitters must comply with the recommended illumination levels in such indoor environments.
Alternatively, infrared lasers such as vertical-cavity surface-emitting lasers (VCSELs) were used in \cite{9803253} to serve users at terabit-per-second (Tbps) aggregate data rates, which makes OWC a strong candidate for the next generation of wireless communications. However, the transmit power of the VCSEL can be harmful to human eyes if it operates at high power levels without considering eye safety regulations.
Optimization problems for rate-maximization were formulated in \cite{9685357,JLT1111} to enhance the spectral efficiency of OWC networks. In particular, a resource allocation approach was designed in \cite{9685357} to guarantee high quality of service for users with different demands. In \cite{JLT1111}, centralized and decentralized algorithms were proposed to maximize the sum rate of the network under the capacity constraint of the optical AP. It is worth pointing out that optimization problems in the context of rate-maximization are usually complex and time-consuming to solve. Recently, machine learning (ML) techniques have been considered to provide practical solutions for such NP-hard optimization problems. In \cite{df}, a deep learning algorithm was used for power allocation in massive multiple-input multiple-output (MIMO) to achieve relatively high spectral efficiency at low loss. In \cite{9839259}, an artificial neural network (ANN) model was trained for resource allocation-based rate maximization in an OWC network. It is shown that a solution close to the optimum given by exhaustive search can be achieved at low complexity. However, the use of ML techniques in optical or RF wireless networks is still under investigation, especially in complex scenarios where decisions, for example in rate-maximization, must be made promptly.
In contrast to the work in the literature, in this paper, we design two ANN models working in cooperation to maximize the sum rate of a discrete-time OWC network in which the serving time is partitioned into consecutive periods of time. First, a multi-user OWC system model is defined where a transmission scheme referred to as blind interference alignment (BIA) is applied for multiple access. Then, an optimization problem is formulated to find the optimum user association and resource allocation during a certain period of time. The computational time of solving such complex optimization problems exceeds the time during which the optimum solution must be determined. Therefore, two ANN models are designed and trained to maximize the sum rate of the network during the intended period of time prior to its start by exploiting the records of the network in the previous period of time and performing prediction. The results show the ability of the trained ANN models to provide accurate solutions close to the optimum ones.
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.83\linewidth]{./plot/indoor5.pdf}
\end{center}
\vspace{-2mm}
\caption{ An OWC system with $ L $ optical APs serving $ K $ users. }\label{Figmodel}
\vspace{-2mm}
\end{figure}
\section{System Model}
\label{sec:system}
We consider a discrete-time downlink OWC network as shown in Fig. \ref{Figmodel}, where multiple optical APs given by $ L $, $ l=\{1, \dots, L\} $, are deployed on the ceiling to serve multiple users given by $ K $, $ k=\{1, \dots, K\} $, distributed on the communication floor. Note that, the VCSEL is used as a transmitter, and therefore, each optical AP consists of $ L_{v} $ VCSELs to extend its coverage area. On the user side, a reconfigurable optical detector with $ M $ photodiodes providing a wide field of view (FoV)~\cite{8636954} is used to ensure that each user has more than one optical link available at a given time. In this work, the serving time in the network is partitioned into a set of $ \mathcal{T} $ consecutive time periods, $ t \in \{ 1, \dots, \mathcal{T}\} $, each of duration $ \tau $. In this context, the signal received by a generic user $ k $, $ k \in K $, connected to AP $ l $ during the period of time $t+1$ can be expressed as
\begin{equation}
y^{[l,k]}(t+1)=\mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T} \mathbf{x}(t+1)+ z^{[l,k]}(t+1),
\end{equation}
where $ m \in M $ is a photodiode of user $ k $, $ \mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]}) \in \mathbb{R}_+^{L_{v}\times 1} $ is the channel vector, $ \mathbf{x}(t+1) $ is the transmitted signal, and $ z^{[l,k]}(t+1) $ is real-valued additive white Gaussian noise with zero mean and variance given by the sum of the shot noise,
the thermal noise and the intensity noise of the laser.
In this work, all the optical APs are connected through a central unit (CU) to exchange essential information for solving optimization problems. It is worth mentioning that the distribution of the users is known at the central unit, while the channel state information (CSI) at the transmitters is limited to the channel coherence time due to the fact that BIA is implemented for interference management \cite{8636954,Gou11}.
\subsection{Transmitter}
The VCSEL transmitter has a Gaussian beam profile with multiple modes. For lasers, the power distribution is determined by the beam waist $ W_{0} $, the wavelength $ \lambda $ and the distance $ d $ between the transmitter and the user. Basically, the beam radius of the VCSEL at photodiode $ m $ of user $ k $ located on the communication floor at distance $d$ is given by
\begin{equation}
W_{d}=W_{0} \left( 1+ \left(\frac{d}{d_{Ra}}\right)^{2}\right)^{1/2},
\end{equation}
where $ d_{Ra} $ is the Rayleigh range. Moreover, the spatial distribution of the intensity of VCSEL transmitter $ l $ over the transverse plane at distance $ d $ is given by
\begin{equation}
I_{l}(r,d) = \frac{2 P_{t,l}}{\pi W^{2}_{d}}~ \mathrm{exp}\left(-\frac{2 r^{2}}{W^{2}_{d}}\right).
\end{equation}
Finally, the power received by photodiode $ m $ of user $ k $ from transmitter $ l $ is given by
\begin{equation}
\label{power}
\begin{split}
&P_{m,l}=\\
&\int_{0}^{r_m} I(r,d) 2\pi r dr = P_{t,l}\left[1- \mathrm{exp} \left(\frac{ -2r^{2}_{m}}{W^{2}_{d}}\right)\right],
\end{split}
\end{equation}
where $ r_m $ is the radius of photodiode $ m $. Note that, $ A_m = \frac{A_{rec}}{M} $, $ m \in M $, is the detection area of photodiode $ m $, where $ A_{rec} $ is the whole detection area of the receiver. In \eqref{power}, the location of user $ k $ is considered right under transmitter $ l $; more details on the power calculations of the laser are given in \cite{9803253}.
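For illustration, the on-axis received power in \eqref{power} can be
evaluated with a few lines of Python. This is a sketch under stated
assumptions: the Rayleigh range is taken as the standard Gaussian-beam
expression $d_{Ra}=\pi W_0^2/\lambda$, and a circular photodiode of area
$A_{rec}/M$ (radius $r_m=\sqrt{A_{rec}/(M\pi)}$) is assumed.
\begin{verbatim}
import numpy as np

def received_power(P_t, d, W0=5e-6, lam=830e-9, A_rec=15e-6, M=4):
    # power collected by one of M photodiodes, user right under the VCSEL
    d_Ra = np.pi * W0**2 / lam               # Rayleigh range (assumption)
    W_d = W0 * np.sqrt(1.0 + (d / d_Ra)**2)  # beam radius at distance d
    r_m = np.sqrt(A_rec / (M * np.pi))       # photodiode radius, A_m = A_rec/M
    return P_t * (1.0 - np.exp(-2.0 * r_m**2 / W_d**2))

# e.g. received_power(10e-3, 2.0): a 10 mW beam observed at d = 2 m
\end{verbatim}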
\subsection{Blind interference alignment}
BIA is a transmission scheme proposed for RF and optical networks to manage multi-user interference with no CSI at the transmitters \cite{8636954,Gou11}, showing superiority over other transmit precoding schemes with CSI such as zero-forcing (ZF). Basically, the transmission block of BIA allocates multiple alignment blocks to each user following a unique methodology. For instance, for an AP with $ L_{v}=2 $ transmitters serving $ K=3 $ users, one alignment block is allocated to each user as shown in Fig. \ref{bia}. For the general case where an optical AP composed of $ L_v $ transmitters serves $ K $ users, BIA allocates $ \ell= \big\{1, \dots, (L_{v}-1)^{K-1}\big\} $ alignment blocks to each user over a transmission block consisting of $ (L_{v}-1)^{K}+K (L_{v}-1)^{K-1} $ time slots. In this context, user $ k$ receives the symbol $ \mathbf{u}_{\ell}^{[l,k]} $ from AP $ l $ during the $ \ell $-th alignment block as follows
\begin{equation}
\label{recibia}
\mathbf{y}^{[l,k]} = \mathbf{H}^{[l,k]}\mathbf{u}_{\ell}^{[l,k]} +\sum_{ \substack{l' = 1, l'\neq l}}^{L}
\sqrt{\alpha_{l'}^{[l,k]}}\mathbf{H}^{[l',k]}\mathbf{u}_{\ell}^{[l',k]}+ \mathbf{z}^{[l,k]},
\end{equation}
where $ \mathbf{H}^{[l,k]} $ is the channel matrix of user $k$. It is worth mentioning that user $ k $ is equipped with a reconfigurable detector that has the ability to provide $L_v$ linearly independent channel responses, i.e.,
\begin{equation}
\mathbf{H}^{[l,k]} = \begin{bmatrix} \mathbf{h}^{[l,k]}(1)& \mathbf{h}^{[l,k]}(2)& \dots & \mathbf{h}^{[l,k]}(L_v)
\end{bmatrix} \in \mathbb{R}_+^{L_{v} \times L_{v}}.
\end{equation}
In \eqref{recibia}, $ \alpha_{l'}^{[l,k]} $ is the signal-to-interference ratio (SIR) received at user $ k $ due to the other APs $ l' \neq l $, and $ \mathbf{u}_{\ell}^{[l',k]} $
represents the interfering symbols received from the adjacent APs during the alignment block $ \ell $ over which the desired symbol $ \mathbf{u}_{\ell}^{[l,k]} $ is received. It is worth pointing out that frequency reuse is usually applied to mitigate inter-cell interference so that the interfering symbol $ \mathbf{u}_{\ell}^{[l',k]} $ can be treated as noise.
Finally, $\mathbf{z}^{[l,k]}$ is defined as noise resulting from interference subtraction, and its covariance matrix is given by
\begin {equation}
\mathbf{R_{z_p}} =
\begin{bmatrix}
(K)\mathbf{I}_{L_{v}-1} & \mathbf{0}\\
\mathbf{0} & 1\\
\end{bmatrix}.
\end{equation}
According to \cite{8636954}, the BIA-based data rate received by user $k$ from its corresponding AP $l$ during the period of time $ (t+1) $ is expressed as
\begin{multline}
\label{rate}
r^{[l,k]} (t+1) =\\ B_{t+1}^{[l,k]} \mathbb{E}\left[\log\det\left(\mathbf{I}_{L_{v}} + P_{\rm{str}} \mathbf{H}_{t+1}^{[l,k]}{\mathbf{H}^{[l,k]}}^{H} \mathbf{R_{\tilde{z}}}^{-1}(t+1) \right)\right],
\end{multline}
where $B_{t+1}^{[l,k]}=\dfrac{(L_{v}-1)^{K-1}}{(L_{v}-1)^{K}+K(L_{v}-1)^{K-1}}= \dfrac{1}{K+L_{v}-1}$
is the ratio of the alignment blocks allocated to each user connected to AP $ l $ over the entire transmission block, $ P_{\rm{str}} $ is the power allocated to each stream and
\begin{equation}
\mathbf{R_{\tilde{z}}}(t+1)= \mathbf{R_{z_p}} + P_{\rm{str}}\sum_{l' = 1}^{L} \alpha_{l'}^{[l,k]}\mathbf{H}_{t+1}^{[l',k]}{\mathbf{H}^{[l',k]}}^{H},
\end{equation}
is the covariance matrix of the noise plus the interference received from the other APs $ l' \neq l$.
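The pre-log factor $B_{t+1}^{[l,k]}$ in the rate expression is easy to
sanity-check numerically; the helper below is an illustrative snippet (the
function name is ours) confirming that the alignment-block ratio reduces to
$1/(K+L_v-1)$.
\begin{verbatim}
def bia_block_fraction(L_v, K):
    # ratio of alignment blocks per user to the BIA block length
    align_per_user = (L_v - 1) ** (K - 1)
    block_len = (L_v - 1) ** K + K * (L_v - 1) ** (K - 1)
    assert abs(align_per_user / block_len - 1.0 / (K + L_v - 1)) < 1e-12
    return align_per_user / block_len

# bia_block_fraction(2, 3) -> 0.25, i.e. 1/(3 + 2 - 1)
\end{verbatim}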
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.65\linewidth]{./plot/BIA.pdf}
\end{center}
\vspace{-2mm}
\caption{ Transmission block of BIA for a use case.}\label{bia}
\vspace{-2mm}
\end{figure}
\section{Problem Formulation}
We formulate an optimization problem in a discrete time OWC
system aiming to maximize the sum rate of the users by determining the optimum user assignment and resource allocation simultaneously.
It is worth mentioning that data rate maximization in the network must be achieved during each period of time $ t$, otherwise, it cannot be considered as a valid solution due to the fact that user-conditions are subject to changes in the next period of time. Focussing on period of time $ t+1 $, the utility function of sum rate maximization is given by
\begin{equation}
U(x,e)= \sum_{k \in K} \varphi \left( \sum_{ l\in L} x^{[l,k]}_{t+1} R^{[l,k]}({t+1})\right),
\end{equation}
where $ x^{[l,k]}_{t+1} $ is an assignment variable that determines the connectivity of user $ k $ to optical AP $ l$, where $ x^{[l,k]}_{t+1}=1 $ if user $ k $ is assigned to AP $ l $ during the period of time $ t+1 $, otherwise, it equals 0. Moreover, the actual data rate of user $ k $ during ${t+1} $ is $ R^{[l,k]}({t+1})= e^{[l,k]}_{t+1} r^{[l,k]}({t+1}) $, where $ e^{[l,k]}_{t+1} $, $ 0\leq e^{[l,k]}_{t+1} \leq 1$, determines the resources devoted from AP $ l $ to serve user $ k $, and $ r^{[l,k]}({t+1}) $ is the user rate given by equation \eqref{rate}. The sum rate maximization during the period of time $ {t+1} $ can be obtained by solving the optimization problem as follows
\begin{equation}
\label{pro-c1}
\begin{aligned}
\mathbf{P1:} ~~\max_{x,e} \quad \sum_{k \in K} \varphi \left( \sum_{ l\in L} x^{[l,k]}_{t+1} R^{[l,k]}({t+1})\right)\\
\textrm{s.t.} \quad \sum\limits_{l\in L} x^{[l,k]}_{t+1}=1,~~~~~~~~~~~\forall k\in K \\
\quad \sum\limits_{k\in K} x^{[l,k]}_{t+1} R^{[l,k]}({t+1}) \leq \rho_{l},~~~~~\\~~~~~~~~\forall l\in L\\
\quad R_{min} \leq x^{[l,k]}_{t+1} R^{[l,k]}({t+1}) \leq R_{max}, \\~~~~~~~~\forall l\in L, k\in K \\
x^{[l,k]}_{t+1} \in \big\{0,1\big\}, l \in L, k \in K,~~~~~~~\\
\end{aligned}
\end{equation}
where $ \varphi (.) = \log(.) $ is a logarithmic function that achieves proportional fairness among users \cite{879343}, and $ \rho_{l} $ is the capacity limit of AP $ l$. The first constraint in \eqref{pro-c1} guarantees that each user is assigned to only one AP, while the second constraint ensures that each AP is not overloaded. Moreover, the achievable user rate must lie within a certain range, as in the third constraint, where $ R_{min} $ is the minimum data rate required by a given user and $ R_{max} $ is the maximum data rate that user $ k $ can receive. It is worth mentioning that imposing the third constraint helps in minimizing the waste of resources and guarantees high quality of service. Finally, the last constraint defines the feasible region of the optimization problem.
The optimization problem in \eqref{pro-c1} is defined as a mixed integer non-linear
programming (MINLP) problem in which two variables, $ x^{[l,k]}_{t+1} $ and $ e^{[l,k]}_{t+1} $, are coupled.
Interestingly, some deterministic algorithms can solve such complex MINLP problems, albeit with high computational time. Hence, the application of these algorithms in real scenarios is not practical for optimization problems like \eqref{pro-c1}, where the optimal solutions must be determined within a certain period of time.
One of the assumptions for relaxing the main optimization problem in \eqref{pro-c1} is to connect each user to all APs, which means that the association variable $ x^{[l,k]}_{t+1} $ equals 1 for all $l$ and $k$. In this context, the optimization problem can be rewritten as
\begin{equation}
\label{pro-c2}
\begin{aligned}
\mathbf{P2:} ~~\max_{e} \quad \sum_{k \in K} \varphi \left( \sum_{ l\in L} e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)\\
\textrm{s.t.} \quad \sum\limits_{k\in K} e^{[l,k]}_{t+1} r^{[l,k]}({t+1}) \leq \rho_{l},~~~~~\\~~~~~~~~\forall k\in K\\
\quad R_{min} \leq e^{[l,k]}_{t+1} r^{[l,k]}({t+1}) \leq R_{max}, \\~~~~~~~~\forall l\in L, k\in K \\
0\leq e^{[l,k]}_{t+1} \leq 1, l \in L, k \in K.~~~~~~~\\
\end{aligned}
\end{equation}
Note that, considering our assumption of full connectivity, the variable $ x^{[l,k]}_{t+1} $ is eliminated. Interestingly, the optimization problem in \eqref{pro-c2} can be solved in a distributed manner on the AP and user sides using the full decomposition method via Lagrangian multipliers \cite{FRLHANZO}. Thus, the Lagrangian function is
\begin{multline}
\label{eq:lag}
f\left(e,\mu, \xi_{\max},\lambda_{\min} \right) =\sum_{ k\in K} \sum_{ l\in L} \varphi \left( e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)+\\ \sum_{l\in L} \mu^{[l]}_{t+1}
\left(\rho_{l}- \sum_{k\in K} R^{[l,k]}({t+1})\right)+
\sum_{k\in K} \xi^{[k]}_{{t+1},\max}
\left(R_{max}- \sum_{l\in L} R^{[l,k]}({t+1})\right) +\sum_{k \in K} \lambda^{[k]}_{{t+1},\min} \\ \left( \sum_{l\in L} R^{[l,k]}({t+1})- R_{min}\right),
\end{multline}
where $ \mu$, $\xi_{\max} $ and $\lambda_{\min}$ are the Lagrange multipliers associated with the first and second constraints in \eqref{pro-c2}, respectively.
However, the assumption of users being assigned to more than one AP is unrealistic in real-time scenarios, where users might not see more than one AP at a given time due to blockage. Therefore, focusing on resource allocation rather than user association, as in \eqref{pro-c2}, can cause a relatively high waste of resources due to the fact that an AP might allocate resources to users blocked from receiving its LoS link. In the following, an alternative solution is proposed using ANN models.
\subsection{Artificial neural network }
Our aim in \eqref{pro-c1} is to provide optimal solutions during the period of time $ t+1 $. Therefore, our ANN model must have the ability to exploit the solutions of the optimization problem in the previous period of time $ t $.
Given that, the problem at hand can be defined as time series prediction.
Focusing on the optimization problem in \eqref{pro-c1}, calculating the network assignment vector $ \mathbf {X}_{t+1} $ involves high complexity. Therefore, having an ANN model that is able to perform prediction of the network assignment vector can considerably reduce the computational time, while valid sub-optimum solutions are obtained within a certain period of time. As in \cite{9839259}, we design a convolutional neural network (CNN) to estimate the network assignment vector denoted by $ \widehat{{\mathbf{X}}}_{t} $ during a given period of time $ t $ based on user requirements sent to the whole set of APs through uplink transmission. It is worth mentioning that the CNN model must be trained over a dataset generated from solving the original optimization problem, as in the following sub-section. For prediction, we consider the use of a long short-term memory (LSTM) model, classified as a recurrent neural network (RNN)~\cite{7960065}, which is known to solve complex sequence problems through time. Once the network assignment vector is estimated during the period of time $ t $, it is fed into the input layer of the LSTM model trained to predict the network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $ during the next period of time $ t+1 $ prior to its start. Note that, resource allocation can then be performed in accordance with the predicted network assignment vector to achieve data rate maximization during the intended period of time.
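A minimal sketch of this two-stage pipeline is given below using the Keras
API; the layer types and sizes are illustrative assumptions rather than the
exact architectures, which are design choices tuned to the network
dimensions ($K$ users, $L$ APs, window $\tau$).
\begin{verbatim}
import tensorflow as tf

K_users, L_aps, tau = 10, 8, 4    # hypothetical network dimensions

# CNN: user requirements at time t -> estimated assignment vector X_t
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu",
                           input_shape=(K_users, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(K_users * L_aps, activation="sigmoid"),
])

# LSTM: the last tau estimated vectors -> predicted vector X_{t+1}
lstm = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(tau, K_users * L_aps)),
    tf.keras.layers.Dense(K_users * L_aps, activation="sigmoid"),
])
\end{verbatim}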
\subsection{Offline phase}
We first train the CNN model over a dataset of size $ N $ within each period of time to determine an accurate set of weight terms that can perfectly map between the information sent to the input layer of the ANN model and its output layer. Note that, the CNN model aims to estimate the network assignment vector at a given time. For instance, during the period of time $ t $, the CNN model provides estimated network assignment vectors within the interval $ [\widehat{{\mathbf{X}}}_{t-\tau+1}, \widehat{{\mathbf{X}}}_{t-\tau+2}, \dots, \widehat{{\mathbf{X}}}_{t} ] $, which then can be fed into the input layer of the LSTM model to predict the network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $. In this context, the CNN model must be trained during the period of time $ t $ over data points generated from solving the following problem
\begin{equation}
\label{pro-3}
\begin{aligned}
\mathbf{P3:} ~~\max_{x} \quad \sum_{k \in K} \varphi \left( \sum_{ l\in L} x^{[l,k]}_{t} \frac{r^{[l,k]}({t})}{\sum_{k \in K} x^{[l,k]}_{t}}\right)\\
\textrm{s.t.} \quad \sum\limits_{l\in L} x^{[l,k]}_{t}=1,~~~~~~~~~~~\forall k\in K \\
x^{[l,k]}_{t} \in \big\{0,1\big\}, l \in L, k \in K.~~~~~~~\\
\end{aligned}
\end{equation}
This optimization problem is a rewritten form of the problem in \eqref {pro-c1} under the assumption of uniform resource allocation, i.e., $ e^{[l,k]}_{t}=\frac{1}{K_{l}} $, where $ K_{l}= \sum_{k \in K} x^{[l,k]}_{t} $. It is worth pointing out that this assumption is made because, once the estimation and prediction processes for the network assignment vector are done using the CNN and LSTM models, respectively, resource allocation is performed at each optical AP to satisfy the requirements of the users, as in sub-section \ref{sub}. The optimization problem in \eqref {pro-3} can be solved through brute force search with a complexity that increases exponentially with the size of the network.
Note that, since the dataset is generated in an offline phase, this complexity is not an issue, as illustrated by the sketch below.
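For small networks, the exhaustive search over \eqref{pro-3} can be written
compactly. The following sketch (the function name and data layout are our
own) enumerates all $L^K$ single-AP assignments, assuming
\texttt{rates[l, k]} holds $r^{[l,k]}(t)$:
\begin{verbatim}
import itertools
import numpy as np

def solve_p3(rates):
    # exhaustive search over all L**K single-AP assignments,
    # with uniform resource allocation e = 1/K_l at each AP
    L, K = rates.shape
    best_val, best_assign = -np.inf, None
    for assign in itertools.product(range(L), repeat=K):
        counts = np.bincount(np.array(assign), minlength=L)
        val = sum(np.log(rates[a, k] / counts[a])
                  for k, a in enumerate(assign))
        if val > best_val:
            best_val, best_assign = val, assign
    return best_assign, best_val
\end{verbatim}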
For the LSTM model, the dataset is generated over $ \mathcal{T} $ consecutive periods of time. Then, it is processed to train the LSTM model to determine a set of weight terms that can accurately predict the network assignment vector during a certain period of time. Interestingly, the training of the LSTM model for predicting $ \widetilde{{\mathbf{X}}}_{t+1} $ during $ t+1 $ is carried out over the data points included in the dataset during the previous time duration $ \tau $, i.e., $ [\widehat{{\mathbf{X}}}_{t-\tau+1}, \widehat{{\mathbf{X}}}_{t-\tau+2}, \dots, \widehat{{\mathbf{X}}}_{t}] $.
\subsection{Online application}
\label{sub}
After generating the dataset and training the ANN models in an offline phase, their application is considered at the optical APs to perform instantaneous data rate maximization during a certain period of time $ t+1 $ by finding the optimum user association and resource allocation. Basically, the users send their requirements to the optical APs at the beginning of the period of time $ t $ through uplink transmission. Subsequently, this information is injected into the trained CNN model to estimate the network assignment vector $ \widehat{{\mathbf{X}}}_{t} $ during the interval $ [t-\tau+1, t-\tau+2, \dots, t] $, which then can be used as input to the LSTM model trained to predict the network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $ during the next period of time prior to its actual start. Once the network assignment variable $ x^{[l,k]}_{t+1} $ is predicted for each user $ k $ during $ t+1 $, resource allocation is determined at each AP according to equation \eqref{eq:lag} as follows
\begin{multline}
\label{OPT4}
\mathcal{L}(e,\mu, \xi_{\max},\lambda_{\min})=\\ \sum_{k \in K_{l}} \varphi \left( e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)- \mu^{[l]}_{t+1} \sum_{k \in K_{l}}e^{[l,k]}_{t+1} r^{[l,k]}({t+1})
\\-\sum_{k \in K_{l}} ( \xi^{[k]}_{{t+1},\max}-\lambda^{[k]}_{{t+1},\min}) ~ e^{[l,k]}_{t+1} r^{[l,k]}({t+1}).
\end{multline}
The optimum amount of resources allocated to user $ k $ associated with AP $ l $ during ${t+1}$ is determined by setting the partial derivative of $ \mathcal{L}(e,\mu, \xi_{\max},\lambda_{\min}) $ with respect to $ e^{[l,k]}_{t+1} $ to zero, which yields the stationarity condition
\begin{multline}
\label{OPT5}
\left(\dfrac{ \partial \varphi \left( e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)} {\partial e_{t+1}}\right)= \\r^{[l,k]}({t+1}) \left(\mu^{[l]}_{t+1}+ \xi^{[k]}_{{t+1},\max}-\lambda^{[k]}_{{t+1},\min}\right),
\end{multline}
which holds whenever the optimum lies in the interior of $[0,1]$. Otherwise, since $ \dfrac{\partial \mathcal{L} \left(e\right)} {\partial e} $ is a monotonically decreasing function of $ e^{[l,k]}_{t+1} $,
$ \dfrac{\partial \mathcal{L} \left(e\right)} {\partial e} \big\vert _{e^{[l,k]}_{t+1}=0}\leq0 $ implies that the optimum value is $ e^{*[l,k]}_{t+1}=0 $, while $ \dfrac{\partial \mathcal{L}\left(e\right)} {\partial e} \big\vert _{e^{[l,k]}_{t+1}=1} \geq 0 $ implies that $ e^{*[l,k]}_{t+1}=1 $. At this point, the gradient projection method is applied to solve the dual problem, and the Lagrangian multipliers in \eqref{OPT4} are updated as follows
\begin{equation}
\label{var}
\mu^{[l]}_{t+1}(i)= \left[\mu^{[l]}_{t+1}(i-1)-\Omega_{\varepsilon} \left(\rho_{l}-\sum_{k\in K_{l}} R^{[l,k]}({t+1})\right) \right]^{+},
\end{equation}
\begin{equation}
\label{nu}
\xi^{[k]}_{{t+1},\max}(i)= \Bigg[\xi^{[k]}_{{t+1}}(i-1)-\Omega_{\nu}\Bigg(R_{\max}-R^{[l,k]}({t+1})\Bigg)\Bigg]^{+},
\end{equation}
\begin{equation}
\label{lam}
\lambda^{[k]}_{{t+1},\min} (i)=\Bigg[\lambda^{[k]}_{{t+1}}(i-1)-\Omega_{\lambda}\Bigg( R^{[l,k]}({t+1})-R_{\min}\Bigg)\Bigg]^{+},
\end{equation}
where $ i $ denotes the iteration of the gradient algorithm and $[\cdot]^{+}$ denotes projection onto the positive orthant. The Lagrangian variables act as indicators between the users and the APs to maximize the sum rate of the network, while ensuring that each AP is not overloaded and that the users receive their demands. Note that, the resources are determined based on the predicted network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $. Therefore, at the beginning of the period of time $ t+1 $, each AP sets its link price according to \eqref{var}, and the users update and broadcast their demands as in \eqref{nu} and \eqref{lam}. These values remain fixed during the whole time interval, after which the trained CNN estimates a new assignment vector to feed the LSTM model in order to predict $ \widetilde{{\mathbf{X}}}_{t+2} $ for the next period of time $ t+2 $.
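One iteration of these projected-subgradient updates can be sketched as
follows, with \texttt{R[l, k]} holding the current allocations
$e^{[l,k]}_{t+1} r^{[l,k]}(t+1)$ and a single common step size standing in
for $\Omega_{\varepsilon}$, $\Omega_{\nu}$ and $\Omega_{\lambda}$ (an
assumption made for brevity):
\begin{verbatim}
import numpy as np

def dual_update(mu, xi_max, lam_min, R, rho, R_min, R_max, step=1e-3):
    # mu[l]: AP load prices; xi_max[k], lam_min[k]: per-user rate prices
    mu = np.maximum(mu - step * (rho - R.sum(axis=1)), 0.0)
    xi_max = np.maximum(xi_max - step * (R_max - R.sum(axis=0)), 0.0)
    lam_min = np.maximum(lam_min - step * (R.sum(axis=0) - R_min), 0.0)
    return mu, xi_max, lam_min
\end{verbatim}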
\begin{table}
\centering
\caption{Simulation Parameters}
\begin{tabular}{|l|c|}
\hline
{\bf Laser-based OWC parameter} & {\bf Value} \\ \hline\hline
Laser Bandwidth & 5 GHz \\\hline
Laser Wavelength & 830 nm \\\hline
Laser beam waist & $ 5~ \mu $m \\\hline
Physical area of the photodiode &15 $\text{mm}^2$ \\\hline
Receiver FOV & 45 deg \\\hline
Detector responsivity & 0.9 A/W \\\hline
Gain of optical filter & 1.0 \\\hline
Laser noise & $-155$~dB/Hz \\\hline\hline
{\bf ANNs parameter} & {\bf Value} \\ \hline\hline
Model & CNN and LSTM \\\hline
Dataset size & $ 5000-10000 $\\\hline
Training & $ 90\% $ of dataset\\\hline
Validation & $ 10\% $ of dataset\\\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.91\linewidth]{./plot/re1b.pdf}
\end{center}
\vspace{-2mm}
\caption{ The performance of the ANN model trained for prediction. }\label{re1}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.91\linewidth]{./plot/re2.pdf}
\end{center}
\vspace{-2mm}
\caption{Sum rates of different rate-maximization techniques versus the beam waist of the laser. $\mathcal{T}=4$, $ K=10$. }\label{re2}
\vspace{-2mm}
\end{figure}
\section{PERFORMANCE EVALUATIONS}
\label{sec:Pcom}
We consider an indoor environment with 5m$ \times $5m$ \times $3m dimensions, where $ L=8 $ APs are deployed on the ceiling, each with $ L_v $ transmitters. On the communication floor, located 2 m from the ceiling, $ K $ users are distributed randomly with different activities. Note that, each user $ k $ is equipped with a reconfigurable detector that provides the opportunity to connect to most of the APs; more details are given in \cite{JLT1111}. All the other simulation parameters are listed in Table 1.
The accuracy of the trained ANN model, i.e., the LSTM, in performing prediction is depicted in Fig. \ref{re1} in terms of the mean square error (MSE) versus the number of epochs. It can be seen that the training and validation losses decrease with the number of epochs regardless of the dataset size considered, since the optimal weights needed to perform the required mappings are learned over time. However, increasing the size of the dataset from 5000 to $ 10^4 $ results in a decrease in the error of the validation and training processes. It is worth noticing that the MSE increases if more periods of time, $ \mathcal{T}=5 $, are assumed for the same dataset size, which is due to an increase in the prediction error. This issue can be avoided by training the ANN model over a larger dataset with more than $10^4$ data points. The figure also shows that our ANN model is not overfitting and can predict accurate solutions in the online application, where unexpected scenarios are more likely to occur.
In Fig. \ref{re2}, the sum rate of the network is shown against different values of the beam waist $ W_0 $, which is known as a vital parameter in laser-based OWC that influences the power received at the user end. It is shown that the sum rate of the users increases with the beam waist due to the fact that more transmit power is directed towards the users, and less interference is received from the neighboring APs. Interestingly, our cooperative ANNs provide accurate solutions close to the optimal ones, which involve high computational time. Note that, solving the optimization problem in \eqref{pro-c2} results in low sum rates compared to our ANN-based solutions, which is expected due to the assumption of full connectivity, i.e., $ x^{[l,k]}_{t+1}=1 $, which in turn leads to wasting the resources. Moreover, the proposed models show superiority over the conventional scheme proposed in \cite{8636954}, in which each AP serves users located within a distance determining whether the received signal is useful or noise, and therefore, users are served regardless of their demands, the available resources and the capacity limitations of the APs.
Fig. \ref{re3} shows the sum rate of the network versus a range of SNR values using the trained ANN models. It can be seen that determining the optimal user assignment and resource allocation using the ANN models results in higher sum rates compared to the scenarios of full connectivity and distance-based user association. This is because, in our model, each user is assigned to an AP that has enough resources to satisfy its demands, which positively impacts the sum rate of the network. Interestingly, as in \cite{8636954}, BIA achieves a higher sum rate than ZF due to its ability to serve multiple users simultaneously with no CSI, while the performance of ZF is dictated by the need for CSI.
\section{CONCLUSIONS}
\label{sec:CONCLUSION}
In this paper, sum rate maximization is addressed in a discrete-time laser-based OWC network. We first define the system model, consisting of multiple APs serving users distributed on the receiving plane. Then, the user rate is derived considering the application of BIA, which manages multi-user interference without the need for CSI at the transmitters. Moreover, an optimization problem is formulated to maximize the sum rate of the network during a certain period of time. Finally, CNN and LSTM models are designed and trained to provide instantaneous solutions during the validity of each period of time. The results show that solving the formulated model achieves higher sum rates compared to other benchmark models, and that the trained ANN models have the ability to obtain accurate and valid solutions close to the optimal ones.
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.91\linewidth]{./plot/re3.pdf}
\end{center}
\vspace{-2mm}
\caption{Sum rates of the network versus SNR. $\mathcal{T}=4$, $ K=10$.}\label{re3}
\vspace{-2mm}
\end{figure}
\bibliographystyle{IEEEtran}
\section{Introduction}
The evolution of Internet-based technologies in recent days has led to challenges in terms of traffic congestion and lack of resources and secrecy that current wireless networks have failed to support.
Therefore, optical wireless communication (OWC) has attracted massive interest from scientific researchers to provide unprecedented communication speeds. Basically, OWC sends information modulated on the optical band, which offers huge license free-bandwidth and high spectral and energy efficiency.
In \cite{6011734}, light-emitting diodes (LEDs) were used as transmitters providing data rates in gigabit-per-second (Gbps) communication speeds. Despite the characteristics of LEDs, the modulation speed is limited, and they are usually deployed for providing illumination, and therefore, increasing the number of transmitters must be in compliance with the recommended illumination levels in such indoor environments.
Alternatively, infrared lasers such as vertical-cavity surface-emitting lasers (VCSELs) were used in \cite{9803253} to serve users at Terabit-per-second (Tbps) aggregate data rates, which makes OWC as a strong candidate in the next generation of wireless communications. However, the transmit power of the VCSEL can be harmful to human eyes if it operates at high power levels without considering eye safety regulations.
Optimization problems for rate-maximization were formulated in \cite{9685357,JLT1111} to enhance the spectral efficiency of OWC networks. In particular, a resource allocation approach was designed in \cite{9685357} to guarantee high quality of service for users with different demands. In \cite{JLT1111}, centralized and decentralized algorithms were proposed to maximize the sum rate of the network under the capacity constraint of the optical AP. It is worth pointing out that optimization problems in the context of rate-maximization are usually defined as complex problems that are time consumers. Recently, machine learning (ML) techniques have been considered to provide practical solutions for NP-hard optimization problems. In \cite{df}, a deep learning algorithm was used for power allocation in massive multiple-input multiple-output (MIMO) to achieve relatively high spectral efficiency at low loss. In \cite{9839259}, an artificial neural network (ANN) model was trained for resource allocation-based rate maximization in OWC network. It is shown that a closed form solution to the optimum solution of exhaustive search can be achieved at low complexity. However, the use of ML techniques in optical or RF wireless networks is still under investigation especially in complex scenarios where decisions for example in rate-maximization must be made promptly.
In contrast to the work in the literature, in this paper, we design two ANN models working in cooperation to maximize the sum rate of a discrete-time OWC network in which the serving time is partitioned into consecutive periods of time. First, a multi user OWC system model is defined where a transmission scheme referred to as blind interference alignment (BIA) is applied for multiple access services. Then, an optimization problem is formulated to find the optimum user-association and resource allocation during a certain period of time. The computational time of solving such complex optimization problems exceeds the time during which the optimum solution must be determined. Therefore, two ANN models are designed and trained to maximize the sum rate of the network during the intended period of time prior to its staring by exploiting the records of the network in the previous period of time and performing prediction. The results show the ability of the trained ANN models in providing accurate solutions close to the optimum ones.
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.83\linewidth]{./plot/indoor5.pdf}
\end{center}
\vspace{-2mm}
\caption{ An OWC system with $ L $ optical APs serving $ K $ users. }\label{Figmodel}
\vspace{-2mm}
\end{figure}
\section{System Model}
\label{sec:system}
We consider a discrete-time downlink OWC network as shown in Fig. \ref{Figmodel}, where multiple optical APs given by $ L $, $ l=\{1, \dots, L\} $, are deployed on the ceiling to serve multiple users given by $ K $, $ k=\{1, \dots, K\} $, distributed on the communication floor. Note that, the VCSEL is used as a transmitter, and therefore, each optical AP consists of $ L_{v} $ VCSELs to extend its coverage area. On the user side, a reconfigurable optical detector with $ M $ photodiodes providing a wide field of view (FoV)\cite{8636954} is used to ensure that each user has more than one optical link available at a given time. In this work, the serving time in the network is partitioned into a set of time periods given by $\mathcal{T}$, where $ t=\{ 1, \dots, t, t+1, \dots \mathcal{T}\} $, and the duration of each time period is $ \tau $. In this context, the signal received by a generic user $ k $, $ k \in K $, connected to AP $ l $ during the period of time $t+1$ can be expressed as
\begin{equation}
y^{[l,k]}(t+1)=\mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T} \mathbf{x}(t+1)+ z^{[l,k]}(t+1),
\end{equation}
where $ m \in M $ is a photodiode of user $ k $, $ \mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T} \in \mathbb{R}_+^{l\times 1} $, is the channel matrix, $ \mathbf{x}(t+1) $ is the transmitted signal, and $ z^{[l,k]}(t+1) $ is real valued additive white Gaussian noise with zero mean and variance given by the sum of shot noise,
thermal noise and the intensity noise of the laser.
In this work, all the optical APs are connected through a central unit (CU) to exchange essential information for solving optimization problems. It is worth mentioning that the distribution of the users is known at the central unite, while the channel state information (CSI) at the transmitters is limited to the channel coherence time due to the fact that BIA is implemented for interference management \cite{8636954,Gou11}.
\subsection{Transmitter}
The VCSEL transmitter has Gaussian beam profile with multiple modes. For lasers, the power distribution is determined based on the beam waist $ W_{0} $, the wavelength $ \lambda $ and the distance $ d $ between the transmitter and user. Basically, the beam radius of the VCSEL at photodiode $ m $ of user $ k $ located on the communication floor at distance $d$ is given by
\begin{equation}
W_{d}=W_{0} \left( 1+ \left(\frac{d}{d_{Ra}}\right)^{2}\right)^{1/2},
\end{equation}
where $ d_{Ra} $ is the Rayleigh range. Moreover, the spatial distribution of the intensity of VCSEL transmitter $ l $ over the transverse plane at distance $ d $ is given by
\begin{equation}
I_{l}(r,d) = \frac{2 P_{t,l}}{\pi W^{2}_{d}}~ \mathrm{exp}\left(-\frac{2 r^{2}}{W^{2}_{d}}\right).
\end{equation}
Finally, the power received by photodiode $ m $ of user $ k $ from transmitter $ l $ is given by
\begin{equation}
\label{power}
\begin{split}
&P_{m,l}=\\
&\int_{0}^{r_m} I(r,d) 2\pi r dr = P_{t,l}\left[1- \mathrm{exp} \left(\frac{ -2r^{2}_{m}}{W^{2}_{d}}\right)\right],
\end{split}
\end{equation}
where $ r_m $ is the radius of photodiode $ m $. Note that, $ A_m = \frac{A_{rec}}{M} $, $ m \in M $, is the detection area of photodiode $ m $, assuming $ A_{rec} $ is the whole detection area of the receiver. In \eqref{power}, the location of user $ k $ is considered right under transmitter $ l $, more details on the power calculations of the laser are in \cite{9803253}.
\subsection{Blind interference alignment}
BIA is a transmission scheme proposed for RF and optical networks to manage multi-user interference with no CSI at the transmitters \cite{8636954,Gou11}, showing superiority over other transmit precoding schemes with CSI such as zero-forcing (ZF). Basically, the transmission block of BIA allocates multiple alignments block to each user following a unique methodology. For instance, an AP with $ L_{v}=2 $ transmitters serving $ K=3 $ users, one alignment block is allocated to each user as shown in Fig. \ref{bia}. For the general case where an optical AP composed of $ L_v $ transmitters serving $ K $ users, BIA allocates $ \ell= \big\{1, \dots, (L_{v}-1)^{K-1}\big\} $ alignment blocks to each user over a transmission block consisting of $ (L_{v}-1)^{K}+K (L_{v}-1)^{K-1} $ time slots. In this context, user $ k$ receives the symbol $ \mathbf{u}_{\ell}^{[l,k]} $ from AP $ l $ during the $ \ell $-th alignment block as follows
\begin{equation}
\label{recibia}
\mathbf{y}^{[l,k]} = \mathbf{H}^{[l,k]}\mathbf{u}_{\ell}^{[l,k]} +\sum_{ \substack{l' = 1, l'\neq l}}^{L}
\sqrt{\alpha_{l'}^{[l,k]}}\mathbf{H}^{[l',k]}\mathbf{u}_{\ell}^{[l',k]}+ \mathbf{z}^{[l,k]},
\end{equation}
where $ \mathbf{H}^{[l,c]} $ is the channel matrix of user $k$. It is worth mentioning that user $ k $ is equipped with a reconfigurable detector that has the ability to provide $L_v$ linearly independent channel responses, i.e.,
\begin{equation}
\mathbf{H}^{[l,k]} = \begin{bmatrix} \mathbf{h}^{[l,k]}(1)& \mathbf{h}^{[l,k]}(2)& \dots & \mathbf{h}^{[l,k]}(L_v)
\end{bmatrix} \in \mathbb{R}_+^{L_{v} \times 1}.
\end{equation}
In \eqref{recibia}, $ \alpha_{l'}^{[l,k]} $ is the signal-to-interference ratio (SIR) received at user $ k $ due to other APs $ l \neq l' $, and $ \mathbf{u}_{\ell}^{[l',k]} $
represents the interfering symbols received from the adjacent APs during the alignment block $ \ell $ over which the desired symbol $ \mathbf{u}_{\ell}^{[k,c]} $ is received. It is worth pointing out that frequency reuse is usually applied to avoid inter-cell interference so that the interfering symbol $ \mathbf{u}_{\ell}^{[l',k]} $ can be teared as noise.
Finally, $\mathbf{z}^{[k,c]}$ is defined as noise resulting from interference subtraction, and it is given by a covariance matrix, i.e.,
\begin {equation}
\mathbf{R_{z_p}} =
\begin{bmatrix}
(K)\mathbf{I}_{L_{v}-1} & \mathbf{0}\\
\mathbf{0} & 1\\
\end{bmatrix}.
\end{equation}
According to \cite{8636954}, the BIA-based data rate received by user $k$ from its corresponding APs during the period of time $ (t+1) $ is expressed as
\begin{multline}
\label{rate}
r^{[l,k]} (t+1) =\\ B_{t+1}^{[l,k]} \mathbb{E}\left[\log\det\left(\mathbf{I}_{L_{v}} + P_{\rm{str}} \mathbf{H}_{t+1}^{[l,k]}{\mathbf{H}^{[l,k]}}^{H} \mathbf{R_{\tilde{z}}}^{-1}(t+1) \right)\right],
\end{multline}
where $B_{t+1}^{[l,k]}=\dfrac{(L_{v}-1)^{K-1}}{(L_{v}-1)^{K}+K(L_{v}-1)^{K-1}}= \dfrac{1}{K+L_{v}-1}$
is the ratio of the alignment blocks allocated to each user connected to AP $ l $ over the entire transmission block, $ P_{\rm{str}} $ is the power allocated to each stream and
\begin{equation}
\mathbf{R_{\tilde{z}}}(t+1)= \mathbf{R_{z_p}} + P_{\rm{str}}\sum_{\substack{l' = 1, l'\neq l}}^{L} \alpha_{l'}^{[l,k]}\mathbf{H}_{t+1}^{[l',k]}{\mathbf{H}^{[l',k]}}^{H},
\end{equation}
is the covariance matrix of the noise plus the interference received from the other APs $ l' \neq l $.
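To make the block structure concrete, the following Python sketch (a numerical check of the expressions above, with variable names of our choosing) computes the number of alignment blocks per user, the transmission-block length and the ratio $ B $, confirming the simplification $ B=1/(K+L_{v}-1) $:
\begin{verbatim}
def bia_block_structure(L_v, K):
    # Alignment blocks allocated to each user.
    blocks_per_user = (L_v - 1) ** (K - 1)
    # Length of the whole transmission block in time slots.
    block_length = (L_v - 1) ** K + K * (L_v - 1) ** (K - 1)
    B = blocks_per_user / block_length
    assert abs(B - 1.0 / (K + L_v - 1)) < 1e-12
    return blocks_per_user, block_length, B

# Use case of Fig. (BIA): L_v = 2 transmitters, K = 3 users.
print(bia_block_structure(L_v=2, K=3))  # (1, 4, 0.25)
\end{verbatim}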
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.65\linewidth]{./plot/BIA.pdf}
\end{center}
\vspace{-2mm}
\caption{ Transmission block of BIA for a use case.}\label{bia}
\vspace{-2mm}
\end{figure}
\section{Problem Formulation}
We formulate an optimization problem in a discrete-time OWC
system aiming to maximize the sum rate of the users by determining the optimum user assignment and resource allocation simultaneously.
It is worth mentioning that data rate maximization in the network must be achieved within each period of time $ t $; otherwise, the solution cannot be considered valid, since user conditions are subject to change in the next period of time. Focusing on period of time $ t+1 $, the utility function of sum rate maximization is given by
\begin{equation}
U(x,e)= \sum_{k \in K} \varphi \left( \sum_{ l\in L} x^{[l,k]}_{t+1} R^{[l,k]}({t+1})\right),
\end{equation}
where $ x^{[l,k]}_{t+1} $ is an assignment variable that determines the connectivity of user $ k $ to optical AP $ l$, where $ x^{[l,k]}_{t+1}=1 $ if user $ k $ is assigned to AP $ l $ during the period of time $ t+1 $, otherwise, it equals 0. Moreover, the actual data rate of user $ k $ during ${t+1} $ is $ R^{[l,k]}({t+1})= e^{[l,k]}_{t+1} r^{[l,k]}({t+1}) $, where $ e^{[l,k]}_{t+1} $, $ 0\leq e^{[l,k]}_{t+1} \leq 1$, determines the resources devoted from AP $ l $ to serve user $ k $, and $ r^{[l,k]}({t+1}) $ is the user rate given by equation \eqref{rate}. The sum rate maximization during the period of time $ {t+1} $ can be obtained by solving the optimization problem as follows
\begin{equation}
\label{pro-c1}
\begin{aligned}
\mathbf{P1:} ~~\max_{x,e} \quad \sum_{k \in K} \varphi \left( \sum_{ l\in L} x^{[l,k]}_{t+1} R^{[l,k]}({t+1})\right)\\
\textrm{s.t.} \quad \sum\limits_{l\in L} x^{[l,k]}_{t+1}=1,~~~~~~~~~~~\forall k\in K \\
\quad \sum\limits_{k\in K} x^{[l,k]}_{t+1} R^{[l,k]}({t+1}) \leq \rho_{l},~~~~~~~~\forall l\in L\\
\quad R_{min} \leq x^{[l,k]}_{t+1} R^{[l,k]}({t+1}) \leq R_{max}, ~~~~~~~~\forall l\in L, k\in K \\
x^{[l,k]}_{t+1} \in \big\{0,1\big\},~~~~~~~\forall l \in L, k \in K,\\
\end{aligned}
\end{equation}
where $ \varphi (\cdot) = \log(\cdot) $ is a logarithmic function that achieves proportional fairness among users \cite{879343}, and $ \rho_{l} $ is the capacity limit of AP $ l $. The first constraint in \eqref{pro-c1} guarantees that each user is assigned to only one AP, while the second constraint ensures that each AP is not overloaded. Moreover, the achievable user rate must be within a certain range as in the third constraint, where $ R_{min} $ is the minimum data rate required by a given user and $ R_{max} $ is the maximum data rate that user $ k $ can receive. It is worth mentioning that imposing the third constraint helps in minimizing the waste of resources and guarantees a high quality of service. Finally, the last constraint defines the feasible region of the optimization problem.
The optimization problem in \eqref{pro-c1} is defined as a mixed-integer non-linear
programming (MINLP) problem in which two variables, $ x^{[l,k]}_{t+1} $ and $ e^{[l,k]}_{t+1} $, are coupled.
Deterministic algorithms can be used to solve such complex MINLP problems, but only at high computational cost. Hence, the application of these algorithms in real scenarios is impractical for optimization problems like \eqref{pro-c1}, where the optimal solutions must be determined within a certain period of time.
One of the assumptions for relaxing the main optimization problem in \eqref{pro-c1} is to connect each user to more than one AP, which means that the association variable $ x^{[l,k]}_{t+1} $ equals 1 for every AP-user pair. In this context, the optimization problem can be rewritten as
\begin{equation}
\label{pro-c2}
\begin{aligned}
\mathbf{P2:} ~~\max_{e} \quad \sum_{k \in K} \varphi \left( \sum_{ l\in L} e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)\\
\textrm{s.t.} \quad \sum\limits_{k\in K} e^{[l,k]}_{t+1} r^{[l,k]}({t+1}) \leq \rho_{l},~~~~~~~~\forall l\in L\\
\quad R_{min} \leq e^{[l,k]}_{t+1} r^{[l,k]}({t+1}) \leq R_{max}, ~~~~~~~~\forall l\in L, k\in K \\
0\leq e^{[l,k]}_{t+1} \leq 1, l \in L, k \in K.~~~~~~~\\
\end{aligned}
\end{equation}
Note that, considering our assumption of full connectivity, the variable $ x^{[l,k]}_{t+1} $ is eliminated. Interestingly, the optimization problem in \eqref{pro-c2} can be solved in a distributed manner on the AP and user sides using the full decomposition method via Lagrangian multipliers \cite{FRLHANZO}. Thus, the Lagrangian function is
\begin{multline}
\label{eq:lag}
f\left(e,\mu, \xi_{\max},\lambda_{\min} \right) =\sum_{ k\in K} \sum_{ l\in L} \varphi \left( e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)+\\ \sum_{l\in L} \mu^{[l]}_{t+1}
\left(\rho_{l}- \sum_{k\in K} R^{[l,k]}({t+1})\right)+
\sum_{k\in K} \xi^{[k]}_{{t+1},\max}
\\ \left(R_{max}- \sum_{l\in L} R^{[l,k]}({t+1})\right) +\sum_{k \in K} \lambda^{[k]}_{{t+1},\min} \\ \left( \sum_{l\in L} R^{[l,k]}({t+1})- R_{min}\right),
\end{multline}
where $ \mu$, $\xi_{\max} $ and $\lambda_{\min}$ are the Lagrange multipliers associated with the capacity constraint and the upper and lower rate bounds in \eqref{pro-c2}, respectively.
However, the assumption of users assigned to more than one AP is unrealistic in real time scenarios where users might not see more than one AP at a given time due to blockage. Therefore, focusing on resource allocation more than user association as in \eqref{pro-c2} can cause relatively high waste of resources due to the fact that an AP might allocate resources to users blocked from receiving its LoS link. In the following, an alternative solution is proposed using ANN models.
\subsection{Artificial neural network }
Our aim in \eqref{pro-c1} is to provide optimal solutions during the period of time $ t+1 $. Therefore, our ANN model must have the ability to exploit the solutions of the optimization problem in the previous period of time $ t $.
Given that, the problem at hand can be defined as a time-series prediction problem.
Focusing on the optimization problem in \eqref{pro-c1}, calculating the network assignment vector $ \mathbf {X}_{t+1} $ involves high complexity. Therefore, having an ANN model that is able to predict the network assignment vector can considerably reduce the computational time, while valid sub-optimum solutions are obtained within a certain period of time. As in \cite{9839259}, we design a convolutional neural network (CNN) to estimate the network assignment vector, denoted by $ \widehat{{\mathbf{X}}}_{t} $, during a given period of time $ t $ based on user requirements sent to the whole set of APs through uplink transmission. It is worth mentioning that the CNN model must be trained over a dataset generated from solving the original optimization problem, as in the following sub-section. For prediction, we consider the use of a long short-term memory (LSTM) model, classified as a recurrent neural network (RNN) \cite{7960065}, which is known to solve complex sequence problems through time. Once the network assignment vector is estimated during the period of time $ t $, it is fed into the input layer of the LSTM model trained to predict the network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $ during the next period of time $ t+1 $ prior to its start. Note that resource allocation can then be performed in accordance with the predicted network assignment vector to achieve data rate maximization during the intended period of time.
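As a rough architectural sketch (layer sizes, channel counts and module names are our assumptions, not the exact models used in this work), the CNN/LSTM cascade can be expressed in PyTorch as follows:
\begin{verbatim}
import torch
import torch.nn as nn

class AssignmentCNN(nn.Module):
    # Estimates the network assignment vector X_hat_t (L*K logits)
    # from the uplink user-requirement map of a given period t.
    def __init__(self, L, K, in_ch=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, L * K)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class AssignmentLSTM(nn.Module):
    # Predicts X_tilde_{t+1} from [X_hat_{t-tau+1}, ..., X_hat_t].
    def __init__(self, L, K, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(L * K, hidden, batch_first=True)
        self.head = nn.Linear(hidden, L * K)

    def forward(self, seq):            # seq: (batch, tau, L*K)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])   # prediction for period t+1
\end{verbatim}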
\subsection{Offline phase}
We first train the CNN model over a dataset of size $ N $ within each period of time to determine an accurate set of weight terms that can map between the information sent to the input layer of the ANN model and its output layer. Note that the CNN model aims to estimate the network assignment vector at a given time. For instance, during period of time $ t $, the CNN model provides estimated network assignment vectors within the interval $ [\widehat{{\mathbf{X}}}_{t-\tau+1}, \widehat{{\mathbf{X}}}_{t-\tau+2}, \dots, \widehat{{\mathbf{X}}}_{t} ] $, which can then be fed into the input layer of the LSTM model to predict the network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $. In this context, the CNN model must be trained during the period of time $ t $ over data points generated from solving the following problem
\begin{equation}
\label{pro-3}
\begin{aligned}
\mathbf{P3:} ~~\max_{x} \quad \sum_{k \in K} \varphi \left( \sum_{ l\in L} x^{[l,k]}_{t} \frac{r^{[l,k]}({t})}{\sum_{k' \in K} x^{[l,k']}_{t}}\right)\\
\textrm{s.t.} \quad \sum\limits_{l\in L} x^{[l,k]}_{t}=1,~~~~~~~~~~~\forall k\in K \\
x^{[l,k]}_{t} \in \big\{0,1\big\}, l \in L, k \in K.~~~~~~~\\
\end{aligned}
\end{equation}
This optimization problem is a rewritten form of the problem in \eqref{pro-c1} under the assumption of uniform resource allocation, i.e., $ e^{[l,k]}_{t}=\frac{1}{K_{l}} $, where $ K_{l}= \sum_{k \in K} x^{[l,k]}_{t} $. This assumption is justified because, once the estimation and prediction of the network assignment vector are performed using the CNN and LSTM models, respectively, resource allocation is carried out at each optical AP to satisfy the requirements of the users as in sub-section \ref{sub}. The optimization problem in \eqref{pro-3} can be solved through brute-force search with a complexity that increases exponentially with the size of the network.
Note that, since the dataset is generated in an offline phase, complexity is not an issue.
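For illustration, a brute-force solver for \eqref{pro-3} can be sketched as follows; it enumerates all $ L^{K} $ assignments, which is feasible only offline and for small networks (the rate matrix is a hypothetical input, and $ \varphi=\log $):
\begin{verbatim}
import itertools
import math

def solve_P3_brute_force(r):
    # r[l][k]: rate of user k from AP l. Returns the assignment
    # (AP index per user) maximizing the proportionally fair
    # utility of P3 under uniform resource allocation.
    L, K = len(r), len(r[0])
    best_util, best_assign = -math.inf, None
    for assign in itertools.product(range(L), repeat=K):
        load = [assign.count(l) for l in range(L)]  # K_l per AP
        util = sum(math.log(r[assign[k]][k] / load[assign[k]])
                   for k in range(K))
        if util > best_util:
            best_util, best_assign = util, assign
    return best_assign, best_util
\end{verbatim}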
For the LSTM model, the dataset is generated over $ \mathcal{T} $ consecutive periods of time. Then, it is processed to train the LSTM model to determine a set of weight terms that can accurately predict the network assignment vector during a certain period of time. Interestingly, the training of the LSTM model for predicting $ \widetilde{{\mathbf{X}}}_{t+1} $ during $ t+1 $ is performed over data points included in the dataset during the previous time duration $ \tau $, i.e., $ [\widehat{{\mathbf{X}}}_{t-\tau+1}, \widehat{{\mathbf{X}}}_{t-\tau+2}, \dots, \widehat{{\mathbf{X}}}_{t}] $.
\subsection{Online application}
\label{sub}
After generating the dataset and training the ANN models in an offline phase, their application is considered at the optical APs to perform instantaneous data rate maximization during a certain period of time $ t+1 $ by finding the optimum user association and resource allocation. Basically, the users send their requirements to the optical APs at the beginning of the period of time $ t $ through uplink transmission. Subsequently, this information is fed into the trained CNN model to estimate the network assignment vector $ \widehat{{\mathbf{X}}}_{t} $ during the interval $ [t-\tau+1, t-\tau+2, \dots, t] $, which is then used as input to the LSTM model trained to predict the network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $ for the next period of time prior to its actual start. Once the network assignment variable $ x^{[l,k]}_{t+1} $ is predicted for each user $ k $ during $ t+1 $, resource allocation is determined at each AP according to equation \eqref{eq:lag} as follows
\begin{multline}
\label{OPT4}
\mathcal{L}(e,\mu, \xi_{\max},\lambda_{\min})=\\ \sum_{k \in K_{l}} \varphi \left( e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)- \mu^{[l]}_{t+1} \sum_{k \in K_{l}}e^{[l,k]}_{t+1} r^{[l,k]}({t+1})
\\-\sum_{k \in K_{l}} ( \xi^{[k]}_{{t+1},\max}-\lambda^{[k]}_{{t+1},\min}) ~ e^{[l,k]}_{t+1} r^{[l,k]}({t+1}).
\end{multline}
The optimum resources allocated to user $ k $ associated with AP $ l $ during ${t+1}$ are determined by taking the partial derivative of $ \mathcal{L}(e,\mu, \xi_{\max},\lambda_{\min}) $ with respect to $ e^{[l,k]}_{t+1} $ and setting it to zero, which yields
\begin{multline}
\label{OPT5}
\left(\dfrac{ \partial \varphi \left( e^{[l,k]}_{t+1} r^{[l,k]}({t+1})\right)} {\partial e^{[l,k]}_{t+1}}\right)= \\r^{[l,k]}({t+1}) \left(\mu^{[l]}_{t+1}+ \xi^{[k]}_{{t+1},\max}-\lambda^{[k]}_{{t+1},\min}\right).
\end{multline}
Otherwise, since $ \dfrac{\partial \mathcal{L} \left(e\right)} {\partial e} $ is a monotonically decreasing function of $ e^{[l,k]}_{t+1} $,
the condition $ \dfrac{\partial \mathcal{L} \left(e\right)} {\partial e} \vert _{e^{[l,k]}_{t+1}=0}\leq0 $ implies that the optimum value $ e^{*[l,k]}_{t+1} $ equals zero, while $ \dfrac{\partial \mathcal{L}\left(e\right)} {\partial e} \vert _{e^{[l,k]}_{t+1}=1} \geq 0 $ implies that the optimum value $ e^{*[l,k]}_{t+1} $ equals one. At this point, the gradient projection method is applied to solve the dual problem, and the Lagrangian multipliers in \eqref{OPT4} are updated as follows
\begin{equation}
\label{var}
\mu^{[l]}_{t+1}(i)= \left[\mu^{[l]}_{t+1}(i-1)-\Omega_{\varepsilon} \left(\rho_l-\sum_{k\in K_{l}} R^{[l,k]}({t+1})\right) \right]^{+},
\end{equation}
\begin{equation}
\label{nu}
\xi^{[k]}_{{t+1},\max}(i)= \Bigg[\xi^{[k]}_{{t+1},\max}(i-1)-\Omega_{\nu}\Bigg(R_{\max}-R^{[l,k]}({t+1})\Bigg)\Bigg]^{+},
\end{equation}
\begin{equation}
\label{lam}
\lambda^{[k]}_{{t+1},\min} (i)=\Bigg[\lambda^{[k]}_{{t+1},\min}(i-1)-\Omega_{\lambda}\Bigg( R^{[l,k]}({t+1})-R_{\min}\Bigg)\Bigg]^{+},
\end{equation}
where $ i $ denotes the iteration of the gradient algorithm and $ [\,\cdot\,]^{+} $ denotes projection onto the positive orthant. The Lagrangian variables work as indicators between the users and APs to maximize the sum rate of the network, while ensuring that each AP is not overloaded and the users receive their demands. Note that the resources are determined based on the predicted network assignment vector $ \widetilde{{\mathbf{X}}}_{t+1} $. Therefore, at the beginning of the period of time $ t+1 $, each AP sets its link price according to \eqref{var}, and the users update and broadcast their demands as in \eqref{nu} and \eqref{lam}. These values remain fixed during the whole time interval, while the trained CNN estimates a new assignment vector to feed the LSTM model in order to predict $ \widetilde{{\mathbf{X}}}_{t+2} $ for the next period of time $ t+2 $.
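A compact sketch of this gradient-projection loop at one AP is given below (step sizes, initialization and iteration count are our choices; with $ \varphi=\log $, the stationarity condition \eqref{OPT5} gives the interior solution $ e=1/\big(r(\mu+\xi_{\max}-\lambda_{\min})\big) $, clipped to $ [0,1] $):
\begin{verbatim}
import numpy as np

def allocate_resources(r, rho, R_min, R_max, step=1e-3, iters=2000):
    # r: rates r^{[l,k]} of the K_l users assigned to this AP.
    r = np.asarray(r, dtype=float)
    K_l = len(r)
    mu = 1.0                 # multiplier for the capacity constraint
    xi = np.ones(K_l)        # multipliers for the R_max bounds
    lam = np.ones(K_l)       # multipliers for the R_min bounds
    for _ in range(iters):
        # Interior stationary point of the Lagrangian for phi = log.
        denom = np.maximum(r * (mu + xi - lam), 1e-9)
        e = np.clip(1.0 / denom, 0.0, 1.0)
        R = e * r
        # Projected sub-gradient updates, Eqs. (var), (nu), (lam).
        mu = max(0.0, mu - step * (rho - R.sum()))
        xi = np.maximum(0.0, xi - step * (R_max - R))
        lam = np.maximum(0.0, lam - step * (R - R_min))
    return e
\end{verbatim}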
\begin{table}
\centering
\caption{Simulation Parameters}
\begin{tabular}{|l|c|}
\hline
{\bf Laser-based OWC parameter} & {\bf Value} \\ \hline\hline
Laser Bandwidth & 5 GHz \\\hline
Laser Wavelength & 830 nm \\\hline
Laser beam waist & $ 5~ \mu $m \\\hline
Physical area of the photodiode &15 $\text{mm}^2$ \\\hline
Receiver FOV & 45 deg \\\hline
Detector responsivity & 0.9 A/W \\\hline
Gain of optical filter & 1.0 \\\hline
Laser noise & $-155$ dB/Hz \\\hline\hline
{\bf ANNs parameter} & {\bf Value} \\ \hline\hline
Model & CNN and LSTM \\\hline
Dataset size & $ 5000-10000 $\\\hline
Training & $ 90\% $ of dataset\\\hline
Validation & $ 10\% $ of dataset\\\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.91\linewidth]{./plot/re1b.pdf}
\end{center}
\vspace{-2mm}
\caption{ The performance of the ANN model trained for prediction. }\label{re1}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.91\linewidth]{./plot/re2.pdf}
\end{center}
\vspace{-2mm}
\caption{Sum rates of different rate-maximization techniques versus the beam waist of the laser. $\mathcal{T}=4$, $ K=10$. }\label{re2}
\vspace{-2mm}
\end{figure}
\section{PERFORMANCE EVALUATIONS}
\label{sec:Pcom}
We consider an indoor environment with 5 m $ \times $ 5 m $ \times $ 3 m dimensions, where $ L=8 $ APs are deployed on the ceiling, each equipped with $ L_v $ transmitters. On the communication floor, located 2 m below the ceiling, $ K $ users are distributed randomly with different activities. Note that each user $ k $ is equipped with a reconfigurable detector that provides the opportunity to connect to most of the APs; more details are in \cite{JLT1111}. All the other simulation parameters are listed in Table 1.
The accuracy of the trained ANN model, i.e., the LSTM, in performing prediction is depicted in Fig. \ref{re1} in terms of mean square error (MSE) versus a set of epochs. It can be seen that training and validation losses decrease with the number of epochs regardless of the dataset size considered, since the optimal weights needed to perform the underlying calculations are learned over time. However, increasing the size of the dataset from 5000 to $ 10^4 $ results in decreasing the error of the validation and training processes. It is worth noticing that the MSE increases if more periods of time, $ \mathcal{T}=5 $, are assumed for the same dataset size, which is due to an increase in the prediction error. This issue can be avoided by training the ANN model over a larger dataset with more than $10^4$ data points. The figure also shows that our ANN model is not overfitting and can predict accurate solutions in the online application where unexpected scenarios are more likely to occur.
In Fig. \ref{re2}, the sum rate of the network is shown against different values of the beam waist $ W_0 $, which is known as a vital parameter in laser-based OWC that influences the power received at the user end. It is shown that the sum rate of the users increases with the beam waist due to the fact that more transmit power is directed towards the users, and less interference is received from the neighboring APs. Interestingly, our cooperative ANNs provide accurate solutions close to the optimal ones, which otherwise involve high computational time. Note that solving the optimization problem in \eqref{pro-c2} results in low sum rates compared to our ANN-based solutions, which is expected due to the assumption of full connectivity, i.e., $ x^{[l,k]}_{t+1}=1 $, which in turn leads to wasting resources. Moreover, the proposed models show superiority over the conventional scheme proposed in \cite{8636954}, in which each AP serves users located within a distance determining whether the received signal is useful or noise; therefore, users are served regardless of their demands, the available resources and the capacity limitations of the APs.
Fig. \ref{re3} shows the sum rate of the network versus a range of SNR values using the trained ANN models. It can be seen that determining the optimal user assignment and resource allocation using ANN results in higher sum rates compared to the scenarios of full connectivity and distance-based user association. This is because, in our model, each user is assigned to an AP that has enough resources to satisfy its demands, which positively impacts the sum rate of the network. Interestingly, as in \cite{8636954}, BIA achieves a higher sum rate than ZF due to its ability to serve multiple users simultaneously with no CSI, while the performance of ZF is dictated by the need for CSI.
\section{CONCLUSIONS}
\label{sec:CONCLUSION}
In this paper, sum rate maximization is addressed in a discrete-time laser-based OWC network. We first define the system model consisting of multiple APs serving users distributed on the receiving plane. Then, the user rate is derived considering the application of BIA, which manages multi-user interference without the need for CSI at the transmitters. Moreover, an optimization problem is formulated to maximize the sum rate of the network during a certain period of time. Finally, CNN and LSTM models are designed and trained to provide instantaneous solutions during the validity of each period of time. The results show that solving the formulated model achieves higher sum rates compared to other benchmark models, and the trained ANN models have the ability to obtain accurate and valid solutions close to the optimal ones.
\begin{figure}[t]
\begin{center}\hspace*{0cm}
\includegraphics[width=0.91\linewidth]{./plot/re3.pdf}
\end{center}
\vspace{-2mm}
\caption{Sum rates of the network versus SNR. $\mathcal{T}=4$, $ K=10$.}\label{re3}
\vspace{-2mm}
\end{figure}
\bibliographystyle{IEEEtran}
|
1,108,101,563,893 | arxiv |
\section{Introduction%
\label{introduction}%
}
When summarizing a large text, only a subset of the available topics and stories can be taken into account. The decision which topics to cover is largely editorial. This paper introduces a tool that assists this editorial process using word vector representations and dimensionality reduction. It enables a user to visually identify agreement and disagreement between two text sources.
There are a variety of different ways to approach the problem of visualizing the topics present in a text. The simplest approach is to look at unique words and their occurrences and visualize the words in a list. Topics could also be visualized using word clouds, where the font size of a word is determined by the frequency of the word. Word clouds have a variety of shortcomings: they can only visualize small subsets, they focus on the most frequent words and they do not take synonyms and semantically similar words into account.
This paper describes a human-computer interaction-inspired approach of comparing two text sources. The approach yields a bird’s-eye view of different text sources, including text summaries and their source material, and enables users to explore a text source like a geographical map.
As similar words are close to each other, the user can visually identify clusters of topics that are present in the text.
This paper describes a tool, which can be used to visualize the topics in a single text source as well as to compare different text sources. To compare the topics in source A and source B, three different sets of words can be computed: a set of unique words in source A, a set of unique words in source B as well as the intersection set of words both in source A and B. These three sets are then plotted at the same time. For this, a colour is assigned to each set of words. This enables the user to visually compare the different text sources and makes it possible to see which topics are covered where. The user can explore the word map and zoom in and out. He or she can also toggle the visibility, i.e. show and hide, certain word sets.
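In terms of set operations, computing the three word sets is straightforward; a minimal Python sketch (with tokenization simplified to whitespace splitting) is:
\begin{verbatim}
def word_sets(text_a, text_b):
    # Unique words of A, unique words of B, and their intersection.
    words_a = set(text_a.lower().split())
    words_b = set(text_b.lower().split())
    return words_a - words_b, words_b - words_a, words_a & words_b

only_a, only_b, both = word_sets("winter is coming", "winter is here")
print(only_a, only_b, both)  # {'coming'} {'here'} {'is', 'winter'}
\end{verbatim}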
The comparison can be used to visualize the difference between a text summary and its source material. It can also help to compare Wikipedia revisions in regards to the topics they cover. Another possible application is the visualization of heterogeneous data sources like a list of search queries and keywords.
The Github repository of the tool includes an online demo {[}Heu15{]}. The tool can be used to explore the precomputed topic sets of the Game of Thrones Wikipedia article revisions from 2013 and 2015. The repository also includes the precomputed topic sets for the Wikipedia article revisions for the articles on World War 2, Facebook, and the United States of America.
\section{Distributional semantic models%
\label{distributional-semantic-models}%
}
The distributional hypothesis by Harris states that words with similar meaning occur in similar contexts {[}Sah05{]}. This implies that the meaning of a word can be inferred from its distribution across contexts. The goal of distributional semantics is to find a representation, e.g. a vector, that approximates the meaning of a word {[}Bru14{]}. The traditional approach to statistical modeling of language is based on counting frequencies of occurrences of short word sequences of length up to N and does not exploit distributed representations {[}Cun15{]}. Distributional semantics takes word co-occurrence in context windows into account.
The general idea behind word space models is to use distributional statistics to generate high-dimensional vector spaces, where a word is represented by a context vector that encodes semantic similarity {[}Sah05{]}. The representations are called distributed representations because the features are not mutually exclusive and because their configurations correspond to the variations seen in the observed data {[}Cun15{]}. LeCun et al. provide the example of a news story. When the task is to predict the next word in a news story, the learned word vectors for Tuesday and Wednesday will be very similar as they can be easily replaced by each other when used in a sentence {[}Cun15{]}.
There are a variety of computational models that implement the distributional hypothesis, including word2vec {[}Che13{]}, GloVe {[}Pen14{]}, dependency-based word embeddings {[}Lev14{]} and Random Indexing {[}Sah05{]}. There are a variety of Python implementations of these techniques. word2vec is available in gensim {[}Řeh10{]}. For GloVe, the C source code was ported to Python {[}Gau15, Kul15{]}. The dependency-based word embeddings by Levy and Goldberg are implemented in spaCy {[}Hon15{]}. Random Indexing is available in an implementation by Joseph Turian {[}Tur10{]}.
For this paper, word2vec was selected because Mikolov et al. provide 1.4 million pre-trained entity vectors trained on 100 billion words from various news articles in the Google News dataset {[}Che13{]}. However, other models might perform equally well for the purpose of text comparison. Moreover, custom word vectors trained on a large domain-specific dataset, e.g. the Wikipedia encyclopedia for the Wikipedia revision comparison, could potentially yield even better results.
\subsection{word2vec%
\label{word2vec}%
}
word2vec is a tool developed by Mikolov, Sutskever, Chen, Corrado, and Dean at Google. The two model architectures in the C tool were made available under an open-source license {[}Mik13{]}. Gensim provides a Python reimplementation of word2vec {[}Řeh10{]}.
Word vectors encode semantic meaning and capture many different degrees of similarity {[}Lev14{]}. word2vec word vectors can capture linguistic properties such as gender, tense, plurality, and even semantic concepts such as \textquotedbl{}is the capital city of\textquotedbl{}. word2vec captures domain similarity while other more dependency-based approaches capture functional similarity.
In the word2vec vector space, linear algebra can be used to exploit the encoded dimensions of similarity. Using this, a computer system can complete tasks like the Scholastic Assessment Test (SAT) analogy quizzes, which measure relational similarity.\begin{equation*}
king - man + woman \approx queen
\end{equation*}It works for the superlative:\begin{equation*}
fastest - fast + slow \approx slowest
\end{equation*}As well as the past participle:\begin{equation*}
woken - wake + be \approx been
\end{equation*}It can infer the Finnish national sport from the German national sport.\begin{equation*}
football - Germany + Finland \approx hockey
\end{equation*}Based on the last name of the current Prime Minister of the United Kingdom, it identifies the last name of the German Bundeskanzlerin:\begin{equation*}
Cameron - England + Germany \approx Merkel
\end{equation*}The analogies can also be applied to the national dish of a country:\begin{equation*}
haggis - Scotland + Germany \approx Currywurst
\end{equation*}Fig. 1 shows the clusters of semantically similar words and how they form semantic units, which can be easily interpreted by humans.\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{word_clusters.png}}
\caption{Clusters of semantically similar words emerge when the word2vec vectors are projected down to 2D using t-SNE. \DUrole{label}{egfig}}
\end{figure}
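Such analogies can be reproduced with gensim's \texttt{most\_similar} method, which adds and subtracts the corresponding word vectors (here using the pre-trained Google News model mentioned above):
\begin{verbatim}
from gensim.models import Word2Vec

model = Word2Vec.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

# king - man + woman ~ queen
print(model.most_similar(positive=['king', 'woman'],
                         negative=['man'], topn=1))
\end{verbatim}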
\section{Dimensionality reduction with t-SNE%
\label{dimensionality-reduction-with-t-sne}%
}
t-distributed Stochastic Neighbour Embedding (t-SNE) is a dimensionality reduction technique that retains the local structure of data and that helps to visualize large real-world datasets with limited computational demands {[}Maa08{]}. Vectors that are similar in a high-dimensional vector space get represented by two- or three-dimensional vectors that are close to each other in the two- or three-dimensional vector space. Dissimilar high-dimensional vectors are distant in the two- or three-dimensional vector space. Meanwhile, the global structure of the data and the presence of clusters at several scales is revealed. t-SNE is well-suited for high-dimensional data that lies on several different, but related, low-dimensional manifolds {[}Maa08{]}.
t-SNE achieves this by minimizing the Kullback-Leibler divergence between the joint probabilities of the high-dimensional data and the low-dimensional representation. The Kullback-Leibler divergence measures the faithfulness with which a probability distribution q represents a probability distribution p by a discrete scalar and equals zero if the distributions are the same {[}Maa08{]}. The Kullback-Leibler divergence is minimized using the gradient descent method. In contrast to other Stochastic Neighbor Embedding methods that use Gaussian distributions, it uses a Student t-distribution.
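For completeness, the objective minimized by t-SNE can be written as\begin{equation*}
KL(P \| Q) = \sum_{i} \sum_{j \neq i} p_{ij} \log \frac{p_{ij}}{q_{ij}},
\end{equation*}where the joint probabilities $p_{ij}$ describe pairwise similarities of the high-dimensional data points and $q_{ij}$ those of their low-dimensional counterparts {[}Maa08{]}.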
\section{Implementation%
\label{implementation}%
}
The text comparison tool implements a workflow that consists of a Python tool for the back-end and a Javascript tool for the front-end. With the Python tool, a text is converted into a collection of two-dimensional word vectors. These are visualized using the Javascript front-end. With the Javascript front-end, the user can explore the word map and zoom in and out to investigate both the local and the global structure of the text sources. The Javascript front-end can be published online.
The workflow of the tool includes the following four steps:
\subsection{Pre-processing%
\label{pre-processing}%
}
In the pre-processing step, all sentences are tokenized to extract single words. The tokenization is done using the Penn Treebank Tokenizer implemented in the Natural Language Processing Toolkit (NLTK) for Python {[}Bir09{]}. Alternatively, this could also be achieved with a regular expression.
Using a hash map, all words are counted. Only unique words, i.e. the keys of the hash map, are taken into account for the dimensionality reduction. The 3000 most frequent English words according to a frequency list collected from Wikipedia are ignored to reduce the amount of data.
\subsection{Word representations%
\label{word-representations}%
}
For all unique non-frequent words, the word representation vectors are collected from the word2vec model from the gensim Python library {[}Řeh10{]}. Each word is represented by an N-dimensional vector (N=300, informed by the best accuracy in {[}Mik13{]} and following the default in {[}Che13{]}).\begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
\PY{k+kn}{from} \PY{n+nn}{gensim.models} \PY{k+kn}{import} \PY{n}{Word2Vec}
\PY{n}{model} \PY{o}{=} \PY{n}{Word2Vec}\PY{o}{.}\PY{n}{load\char`\_{}word2vec\char`\_{}format}\PY{p}{(}
\PY{n}{word\char`\_{}vectors\char`\_{}filename}\PY{p}{,} \PY{n}{binary}\PY{o}{=}\PY{n+nb+bp}{True}
\PY{p}{)}
\PY{k}{for} \PY{n}{word} \PY{o+ow}{in} \PY{n}{words}\PY{p}{:}
\PY{k}{if} \PY{n}{word} \PY{o+ow}{in} \PY{n}{model}\PY{p}{:}
\PY{k}{print} \PY{n}{model}\PY{p}{[}\PY{n}{word}\PY{p}{]}
\end{Verbatim}
\subsection{Dimensionality Reduction%
\label{dimensionality-reduction}%
}
The resulting N-dimensional word2vec vectors are projected down to 2D using the t-SNE Python implementation in scikit-learn {[}Ped11{]}.
In the dimensionality reduction step, the N-dimensional word vectors are projected down to a two-dimensional space so that they can be easily visualized in a 2D coordinate system (see Fig. 2).\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{tsne_dimensionality_reduction.png}}
\caption{In the dimensionality reduction step, the word vectors are projected down to 2D. \DUrole{label}{egfig}}
\end{figure}
For the implementation, the t-SNE implementation in scikit-learn is used:\begin{Verbatim}[commandchars=\\\{\},fontsize=\footnotesize]
\PY{k+kn}{from} \PY{n+nn}{sklearn.manifold} \PY{k+kn}{import} \PY{n}{TSNE}
\PY{n}{tsne} \PY{o}{=} \PY{n}{TSNE}\PY{p}{(}\PY{n}{n\char`\_{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{tsne}\PY{o}{.}\PY{n}{fit\char`\_{}transform}\PY{p}{(}\PY{n}{word\char`\_{}vectors}\PY{p}{)}
\end{Verbatim}
\subsection{Visualization%
\label{visualization}%
}
After the dimensionality reduction, the vectors are exported to a JSON file. The vectors are visualized using the D3.js JavaScript data visualization library {[}Bos12{]}. Using D3.js, an interactive map was developed. With this map, the user can move around and zoom in and out. The colour coding helps to judge the ratio of dissimilar and similar words. At the global scale, the map can be used to assess how similar two text sources are to each other. At the local scale, clusters of similar words can be explored.
\section{Results%
\label{results}%
}
As with many unsupervised methods, the evaluation can be difficult and the quality of the visualization is hard to quantify. The goal of this section is, therefore, to introduce relevant use cases and illustrate how the technique can be applied.
The flow described in the previous section can be applied to different revisions of Wikipedia articles. For this, a convenience sample of the most popular articles in 2013 from the English Wikipedia was used. For each article, the last revision from the 31st of December 2013 and the most recent revision on the 26th of May 2015 were collected. The assumption was that popular articles will attract sufficient changes to be interesting to compare. The list of the most popular Wikipedia articles includes Facebook, Game of Thrones, the United States, and World War 2.
The article on Game of Thrones was deemed especially illustrative for the task of comparing the topics in a text, as the storyline of the TV show developed between the two different snapshot dates as new characters were introduced. Other characters became less relevant and were removed from the article. The article on World War 2 was especially interesting as one of the motivations for the topic tool is to find subtle changes in data.
Fig. 3 shows how different the global cluster, i.e. the full group of words on the minimum zoom setting, of the Wikipedia articles on the United States, Game of Thrones and World War 2 are.\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{global_clusters.png}}
\caption{Global clusters of the Wikipedia articles on the United States (left), Game of Thrones (middle), and World War 2 (right). \DUrole{label}{egfig}}
\end{figure}
Fig. 4 shows four screenshots of the visualization of the Wikipedia articles on the United States including an overview and detail views that only show the intersection set of words, words only present in the 2013 revision of the article, and words only present in the 2015 revision of the article.\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{topic_comparison_usa.png}}
\caption{Topic comparison of the Wikipedia article on the United States. In the top left, all words in both texts are plotted. On the top right, only the intersection set of words is shown. In the bottom left, only the words present in the 2013 revision are displayed. In the bottom right, only the words present in the 2015 revision are shown. \DUrole{label}{egfig}}
\end{figure}
When applied to Game of Thrones, it is easy, for example, to visually compare names that were removed since 2013 with names that were added in 2015 (Fig. 5). Using the online demo {[}Heu15{]}, this technique can be applied to the Wikipedia articles on the United States and World War 2.
The technique can also be applied to visualize the Google search history of an individual. Similar words are represented by similar vectors. Thus, terms related to different topics, e.g. technology, philosophy or music, will end up in separate clusters.\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{game_of_thrones.png}}
\caption{Names present in the Wikipedia article on Game of Thrones. Red names were added to the 2015 revision, orange names removed. White names are present in both revisions.}
\end{figure}
\section{Conclusion%
\label{conclusion}%
}
Word2vec word vector representations and t-SNE dimensionality reduction can be used to provide a bird’s-eye view of different text sources, including text summaries and their source material. This enables users to explore a text source like a geographical map.
The paper gives an overview of an ongoing investigation of the usefulness of word vector representations and dimensionality reduction in the text and topic comparison context. The major flaw of this paper is that the introduced text visualization and text comparison approach is not validated empirically.
As many researchers publish their source code under open source licenses and as the Python community embraces and supports these publications, it was possible to integrate the findings from the literature review of my Master's thesis into a useable tool. Distributed representations are an active field of research. New findings on word, sentence or paragraph vectors can be easily integrated into the workflow of the tool.
Both the front-end and the back-end of the implementation were made available on GitHub under GNU General Public License 3 {[}Heu15{]}. The repository includes the necessary Python code to collect the word2vec representations using Gensim, to project them down to 2D using t-SNE and to output them as JSON. The repository also includes the front-end code to explore the JSON file as a geographical map.
The tool can be used in addition to topic modeling techniques like LDA. It enables the comparison of large text sources at a glance and is aimed at similar text sources with subtle differences.
|
1,108,101,563,894 | arxiv |
\section{Introduction}
Convolutional neural networks (CNNs) have revolutionized medical image analysis~\cite{bekkers2018roto, ching2018opportunities, topol2019high, prevedello2019challenges, lutnick2019integrated, karimi2020deep, gaonkar2021eigenrank}. These networks achieve impressive results, but their outputs are often difficult to interpret, which is problematic for clinical decision making~\cite{yao2018weakly}. Designing explainable classification models is a challenge. In some applications, such as cancer diagnosis, interpretability can be achieved by localizing the regions of the input image that determine the output of the model~\cite{shen2020interpretable}.
Alternatively, detection and segmentation networks, such as U-Net~\cite{ronneberger2015u} and Faster R-CNN~\cite{ren2016faster} can be trained with annotations indicating regions relevant to diagnosis.
Unfortunately, acquiring such annotations is labor-intensive, and requires medical expertise. Moreover, learning under such supervision might bias the network to ignore lesions occult to radiologists, which a neural network can still identify.
Given the above obstacle, weakly-supervised localization (WSL) has recently become an area of active research~\cite{diba2017weakly, singh2017hide, zhang2018adversarial, zhang2018self, cui2019weakly}. These approaches aim to identify image regions relevant to classification utilizing only image-level labels during training, based upon the observation that feature maps in the final convolutional layers of CNNs reveal the most influential regions of the input image~\cite{oquab2015object, zhou2016learning}. These methods are usually designed for natural images and applying them to medical images is challenging due to their unique characteristics. For example, mammography images have a much higher resolution ($\sim10^7$ pixels) than natural images ($\sim10^5$ pixels) in most benchmark datasets, such as ImageNet~\cite{deng2009imagenet}. Because of this, when applied to medical images, CNNs often aggressively downsample the input image~\cite{shen2019globally,shen2020interpretable} to accommodate GPU memory constraints, making the resulting localization too coarse.
This is a crucial limitation for many medical diagnosis tasks, where regions of interest (ROIs) are often small (e.g. $\leq 1\%$ pixels).
\begin{figure*}[htbp]
\centering
\resizebox{0.7\linewidth}{!}{
\begin{tabular}{c c c c c c}
\LARGE Input & \LARGE Ground Truth & \LARGE CAM & \LARGE GMIC & \LARGE GLAM (proposed) \\
\centering
\includegraphics[ height=0.3\linewidth]{images/example_161_rcc/Input.png}
&
\includegraphics[height=0.3\linewidth]{images/example_161_rcc/groundtruth.pdf} & {
\includegraphics[height=0.3\linewidth]{images/example_161_rcc/CAM.pdf}
}
& {
\includegraphics[height=0.3\linewidth]{images/example_161_rcc/GMIC.pdf}
}
& {
\includegraphics[height=0.3\linewidth]{images/example_161_rcc/Ours.pdf}}
\\
\includegraphics[ height=0.3\linewidth]{images/example_78_rmlo/Input.pdf} &
\includegraphics[height=0.3\linewidth]{images/example_78_rmlo/groundtruth_v2.pdf} & {
\includegraphics[height=0.3\linewidth]{images/example_78_rmlo/CAM.pdf}
}
& {
\includegraphics[height=0.3\linewidth]{images/example_78_rmlo/GMIC.pdf}
}
& {
\includegraphics[height=0.3\linewidth]{images/example_78_rmlo/Ours.pdf}}
\end{tabular}
}
\centering
\caption{Comparison of saliency maps generated by CAM~\cite{zhou2016learning}, GMIC~\cite{shen2020interpretable}, and the proposed method on a mammography image containing a malignant lesion (first row, red) and a benign lesion (second row, green). Both CAM and GMIC produce coarse saliency maps that fail to localize the lesion accurately. The proposed method generates a high-resolution saliency map that precisely localizes the lesions.}
\label{fig:compare_fig2}
\end{figure*}
In this work, we propose GLAM (Global-Local Activation Maps), a novel framework to generate fine-grained segmentation using only image-level labels. The proposed model processes high resolution medical images in a memory-efficient way. The main idea behind GLAM is to select informative regions (patches) that may contain ROIs via coarse-level localization and then to perform segmentation on selected patches rather than the entire image in a weakly supervised manner. We train and evaluate GLAM on a dataset containing more than one million mammography images. We demonstrate that the model outperforms existing baselines in segmentation of both benign and malignant lesions, improving the Dice similarity score relatively by 39.6\% and 20\%, respectively, while preserving classification accuracy. To achieve that, GLAM produces fine-grained saliency maps with a 300 times higher resolution than previous works~\cite{shen2020interpretable} ($736\times 480$ pixels for $2944\times1920$ pixels input images). In Figure~\ref{fig:compare_fig2}, we illustrate how the saliency maps generated by GLAM enable high-resolution segmentation of lesions relevant to breast cancer diagnosis.
\section{Background}
\label{background}
WSL is the task of learning to locate ROIs (i.e. the \emph{objects}) in an image when only image-level labels are available during training. WSL methods are usually based on CNNs that produce saliency maps encoding the location of ROIs. To train the whole system using only image-level labels, the saliency maps are collapsed to predictions indicating the presence of each class using a pooling function. Once the CNN is trained, the saliency map can be used for localization~\cite{oquab2015object,zhou2016learning}. WSL
has been applied in a wide range of medical-imaging applications, including the detection of lung disease in chest X-ray images~\cite{wang2017chestx, yao2018weakly, tang2018attention, ma2019multi, liu2019align, guan2018diagnose}, diagnosis of injuries from pelvic X-ray images~\cite{wang2019weakly}, brain lesion segmentation~\cite{wu2019weakly}, breast MRI analysis~\cite{luo2019deep}, cancer detection in lung CT images~\cite{feng2017discriminative, schlemper2018attention}, and scan-plane detection in ultrasound~\cite{schlemper2018attention,baumgartner2017sononet}.
The majority of these works focus on images that have relatively low resolution, that is, $512\times512$ pixels or less. Only a few works have considered higher resolution images, which are standard in some imaging procedures such as screening mammography~\cite{shen2019globally, shen2020interpretable}.
In this work we focus on the diagnosis of breast cancer from screening mammography images.
Breast cancer is the second leading cause of cancer-related deaths among women~\cite{bray2018global} and screening mammography is the main tool for its early detection~\cite{marmot2013benefits}.
CNN classifiers have shown promise in diagnosis from mammograms~\cite{zhu2017deep, kim2018applying, ribli2018detecting, wu2019deep, geras2019artificial, mckinney2020international, shen2019globally, shen2020interpretable}. Accurate localization of suspicious lesions is crucial to aid clinicians in interpreting model outputs, and can provide guidance for future diagnostic procedures.
However, existing methods that explain their predictions, e.g. \citet{shen2019globally,shen2020interpretable},
offer only coarse localization.
GLAM is inspired by recent works~\cite{yao2018weakly, shen2019globally, shen2020interpretable, shamout2020artificial}, which improve classification accuracy by processing image patches selected from coarse saliency maps. The main innovation of GLAM with respect to these works is that it generates a high-resolution saliency map from the selected patches, which significantly improves lesion segmentation accuracy.
\section{Proposed Approach}
Our goal is to generate fine-grained saliency maps that localize objects of interest in high-resolution images using only image-level labels during training.
We start this section by describing the inference pipeline of our approach in Section~\ref{sec:inference}.
We then describe each module in detail in Section~\ref{sec:architecture}. Finally, we explain the training strategy in Section~\ref{sec:training}.
\subsection{Inference pipeline}
\label{sec:inference}
\begin{figure}[]
\centering
\includegraphics[width=0.7\textwidth,trim=0mm 10mm 0mm 8mm]{images/overviewinferencev5.pdf}
\caption{Inference pipeline of GLAM. 1) The global network $f_g$ is applied to the whole image $\mathbf{x}$ to obtain a coarse image-level segmentation map $\mathbf{S}_g$. 2) Based on this coarse-level segmentation, several patches are extracted from the input image. 3) The local network $f_l$ processes these patches to generate a high-resolution saliency map $\mathbf{S}_l$.
}
\label{fig:frameworkinference}
\end{figure}
As illustrated in Figure~\ref{fig:frameworkinference}, during inference, our system processes an input $\mathbf{x}\in \mathbb{R}^{H,W}$ as follows:
\begin{enumerate}[leftmargin=*]
\itemsep0em
\item The image $\mathbf{x}$ is fed into the \emph{global module}, a memory-efficient CNN denoted by $f_g$, to produce an image-level coarse saliency map $\mathbf{S}_{g}$ and an image-level class prediction $\hat{y}_{g}$.
\item We select $M$ patches from $\mathbf{x}$ based on $\mathbf{S}_{g}$. To do that, we greedily select the patches for which the sum of the entries in $\mathbf{S}_{g}$ is the largest (see Algorithm~\ref{alg:alg1} for a detailed description).
\item We feed the selected patches $\tilde{\mathbf{x}}_1, \ldots, \tilde{\mathbf{x}}_M$
to the \emph{local module} $f_l$, another CNN which produces a fine-grained saliency map associated with each patch. We then remap the patch-level saliency maps back to their location in the original input image. We denote the saliency map obtained through this procedure by $\mathbf{S}_{l}$.
\end{enumerate}
GLAM produces an image-level saliency map $\mathbf{S}_{g}$ and a fine-grained multi-patch saliency map $\mathbf{S}_{l}$. These maps are aggregated through averaging to produce the final saliency map $\mathbf{S}_{c} = (\mathbf{S}_{g} + \mathbf{S}_{l})/2$. In addition, GLAM generates a classification output, which is produced by the global module (as this yields the best classification accuracy).
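A minimal sketch of the greedy patch-selection step (our simplified rendering of Algorithm~\ref{alg:alg1}; the actual algorithm may differ in details such as how overlapping patches are suppressed) is:
\begin{verbatim}
import numpy as np

def select_patches(saliency, patch_h, patch_w, M):
    # Greedily pick M patch locations whose saliency sums are
    # largest, zeroing out each chosen region before the next pick.
    S = saliency.copy()
    coords = []
    for _ in range(M):
        sums = np.array(
            [[S[i:i + patch_h, j:j + patch_w].sum()
              for j in range(S.shape[1] - patch_w + 1)]
             for i in range(S.shape[0] - patch_h + 1)])
        i, j = np.unravel_index(sums.argmax(), sums.shape)
        coords.append((i, j))
        S[i:i + patch_h, j:j + patch_w] = 0.0
    return coords
\end{verbatim}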
\subsection{Module parameterization}
\label{sec:architecture}
\paragraph{Global module}
The architecture of $f_g$ is based on the design of \citet{shen2019globally,shen2020interpretable}, which is similar to ResNet~\cite{he2016deep} with a reduced number of channels for memory efficiency.
The main difference between $f_g$ and the global module of \citet{shen2019globally,shen2020interpretable} is that we combine saliency maps at different scales to generate the global saliency map, inspired by~\citet{sedai2018deep} who observed that using convolutional feature maps extracted only from the last layer of a CNN may be suboptimal in localization of small objects. We generate saliency maps at different scales ($\mathbf{S}_0$, $\mathbf{S}_1$ and $\mathbf{S}_2$) using a pyramidal hierarchy of feature maps (see Figure~\ref{fig:global} in Appendix~\ref{append:globalstructure}). Our image-level saliency map $\mathbf{S}_g$ is obtained by averaging them. For each $\mathbf{S}_n$ $ (n\in \{0,1,2\})$, we obtain a classification prediction $\tilde{y}_n$ associated with $\mathbf{S}_n$ using $\mathrm{top} \, t\%$ pooling (see Appendix~\ref{append:globalstructure}), where $t$ is a hyperparameter. The image-level classification prediction $\hat{y}_g$ is calculated by averaging $\tilde{y}_0$, $\tilde{y}_1$, and $\tilde{y}_2$. Additionally, we output a representation vector $z_g$ to feed it into a fusion module (described below) to enable joint training with the local module.
See Appendix \ref{append:globalstructure} for more details.
\paragraph{Local module}
Our local module is based on ResNet-34 with a reduced stride in the residual blocks to maintain a higher resolution. We replace the global average pooling and the last fully connected layer by a $1\times1$ convolution layer followed by a sigmoid non-linearity.
The local module is applied to each of the selected patches $\tilde{\mathbf{x}}_k$ ($k \in 1,\ldots,K$) to extract a patch-level saliency map $\mathbf{A}_{k}$. As we only have image-level labels, we need to train the patch-level network without patch-level labels.
To address this challenge, we combine insights from weakly-supervised localization and multiple-instance learning to train the patch-level saliency maps hierarchically.
In multiple instance learning~\cite{maron1998framework} the labels are associated with bags of instances (the label is negative if all instances in a bag are negative, and positive otherwise). In our case each instance is a patch, and the patches from an image form a bag.
We use a patch aggregation function $f_\text{p}$ to combine the information across all patches and form an image-level prediction $\hat{y}_{l}$.
Formally, we have $\hat{y}_{l} = f_\text{p}(\mathbf{A_1},\ldots,\mathbf{A_k})$. We propose two different patch aggregation functions.
\begin{itemize}[leftmargin=*]
\itemsep0em
\item {\textit{Concatenation-based aggregation}:} We concatenate the saliency maps spatially and apply a pooling function $f_{\text{agg}}$ (i.e., $\mathrm{top} \, t\%$ pooling).
The prediction is thus given by
$\hat{y}_{l} = f_{\text{agg}} (\mathrm{concat}(\mathbf{A}_1, \ldots, \mathbf{A}_K))$.
\item {\textit{Attention-based aggregation:}} $\mathrm{top} \, t\%$ pooling is applied to $\mathbf{A}_k$ to obtain a patch-level prediction $\hat{y}_k$. Additionally, we output a representation vector $z_k$ for each patch, which will be aggregated into $z_l$ and fed into the fusion module. We use the Gated Attention Mechanism~\cite{ilse2018attention} to
combine the prediction and the representation vectors using attention weights $\alpha_i \in [0,1]$. The prediction is given by $\hat{y}_{l} = \sum_{i=1}^K \alpha_i \hat{y}_i$, and the representation vector by ${z}_{l} = \sum_{i=1}^K \alpha_i {z}_i$.
\end{itemize}
Section~\ref{sec:training} explains when we use each aggregation method. Refer to Appendix~\ref{append:localstructure} for more details.
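A sketch of the attention-based aggregation, following the gated attention mechanism of \citet{ilse2018attention} (tensor shapes and the attention dimension are our assumptions), is:
\begin{verbatim}
import torch
import torch.nn as nn

class GatedAttentionAggregation(nn.Module):
    # Combines K patch-level predictions y_k and representation
    # vectors z_k using learned attention weights alpha_k.
    def __init__(self, dim, attn_dim=128):
        super().__init__()
        self.V = nn.Linear(dim, attn_dim)   # tanh branch
        self.U = nn.Linear(dim, attn_dim)   # sigmoid gate
        self.w = nn.Linear(attn_dim, 1)

    def forward(self, z, y):                # z: (K, dim), y: (K,)
        scores = self.w(torch.tanh(self.V(z)) *
                        torch.sigmoid(self.U(z)))        # (K, 1)
        alpha = torch.softmax(scores.squeeze(-1), dim=0) # (K,)
        y_l = (alpha * y).sum()                  # image-level pred.
        z_l = (alpha.unsqueeze(-1) * z).sum(0)   # fused representation
        return y_l, z_l, alpha
\end{verbatim}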
\paragraph{Fusion module}
In order to jointly optimize the parameters of the global and local modules, we incorporate a \emph{fusion module} consisting of one fully connected layer that combines the representation vectors from the global (${z}_{g}$) and local modules (${z}_{l}$) to produce a fusion prediction $\hat{y}_f$. Formally, $\hat{y}_f = \mathrm{sigmoid}(w_f[z_g,z_l]^T)$, where $w_f$ is a vector of learnable parameters.
\subsection{Training strategy}
\label{sec:training}
The training strategy that achieves the best experimental performance for GLAM is sequential. We first train the global module, then we train the local module and finally we train both together. This makes sense because in order to train the local module $f_l$ effectively, we need to select meaningful input patches. Since the selection relies on the global saliency map, this requires pretraining the global module. Our training strategy is as follows (see also Figure~\ref{fig:frameworktrain}).
\begin{figure}[]
\centering
\includegraphics[width=0.7\textwidth,trim=0mm 10mm 0mm 8mm]{images/overviewtrainv4.pdf}
\caption{Proposed training strategy. 1) Train the global module and select the best segmentation model. 2) Freeze the global module and use it to extract input patches for the local module. 3) Train the local module on the selected patches. 4) Joint training with the fusion module. We use BCE loss and sparsity loss to train the system.
}
\label{fig:frameworktrain}
\end{figure}
\begin{enumerate}[leftmargin=*]
\itemsep0em
\item Train the global module with the loss function {\small{$L_{g} = \sum_{n \in \{0,1,2\}} ( \operatorname{BCE}(y, \tilde{y}_{n}) + \lambda \sum_{(i,j)} |\mathbf{S}_{n}(i,j)| )$}} using the whole training set. Here, {\small{$\operatorname{BCE}(y, \tilde{y}_{n})$}} is the binary cross-entropy loss and {\small{$\sum_{(i,j)} |\mathbf{S}_{n}(i,j)|$}} is an $\ell_1$ regularization enforcing sparsity of the saliency map.
The hyperparameter $\lambda$ balances the two terms. The model is selected based on segmentation performance on the validation set.
\item Create training data for the local module. To produce negative examples, we select $K$ patches randomly from images with negative labels. The positive examples are generated by selecting $K$ patches from images with positive labels using Algorithm \ref{alg:alg1} applied to the saliency maps produced by the pretrained global module.
\item Train the local module minimizing the cost function $L_{l} = \operatorname{BCE}(y, \hat{y}_{l}) + \lambda \sum_{(i,j)} |\mathbf{S}_{l}(i,j)|$.
Here we apply the concatenation-based aggregation described in Section~\ref{sec:architecture} because it yields the best results experimentally and speeds up training.
\item Train the global and local modules jointly with the fusion module.
Here we use the attention-based aggregation in the local module. The fusion module outputs a fusion prediction $\hat{y}_{f}$. The loss function for joint training is $L_{t} = L_{g} + L_{l} + L_{f}$, where $L_{f} = \operatorname{BCE}(y, \hat{y}_{f})$.
\end{enumerate}
Note that the number of patches selected as input to the local module during training ($K$) and inference ($M$) do not need to be the same. In fact, we find empirically that it is beneficial to use $K$ as large as possible (within the constraints imposed by GPU memory), which is consistent with~\citet{shen2020interpretable}. However, choosing a large $M$ increases the false positive rate. We used $K=6$ and $M=1$, based on experiments reported in Appendix~\ref{append:numpatchtraining} and \ref{append:patchnuminference}.
\section{Experiments}
\subsection{Dataset and evaluation metrics}
We trained and evaluated the proposed model on the NYU Breast Cancer Screening Dataset v1.0~\cite{NYU_dataset} that includes 229,426 exams (1,001,093 images) from 141,472 patients. Each exam contains at least four images with a resolution of $2944 \times 1920$ pixels, corresponding to the four standard views used in screening mammography: R-CC (right craniocaudal), L-CC (left craniocaudal), R-MLO (right mediolateral oblique) and L-MLO (left mediolateral oblique). The dataset is divided into disjoint training (186,816), validation (28,462) and test (14,148) sets, ensuring that each patient only belongs to one of the sets.
Each breast has two binary labels indicating whether malignant or benign lesions are present. A subset of the images with lesions have pixel-level annotations provided by radiologists, which indicate the position of the lesions.
Note that the dataset (458,852 breasts) contains far more breasts without lesions (452,311, compared to 5,556 with benign lesions and 985 with malignant lesions). To account for this imbalance when training our models, at each epoch we use all exams with lesions and an equal number of randomly-sampled exams without lesions.
To measure classification performance, we report the area under the ROC curve (AUC) for identifying breasts with both malignant and benign lesions. To evaluate localization ability, we use the Dice similarity coefficient and pixel average precision (PxAP)~\cite{choe2020evaluating}. PxAP is the average of the area under the precision-recall curve for each pixel. The threshold to compute the precision and recall is either chosen for each image (image-level PxAP) or fixed for the whole test set (dataset-level PxAP). These metrics are described in more detail in Appendix~\ref{append:Evaluation metrics}.
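As a reference for how we read these metrics, here is a minimal sketch of image-level PxAP (the function name and the per-image averaging are our interpretation of~\cite{choe2020evaluating}; images with empty masks would need special handling):
\begin{verbatim}
import numpy as np
from sklearn.metrics import average_precision_score

def image_level_pxap(saliency_maps, gt_masks):
    # Each pixel is treated as a binary-classification example:
    # its saliency value is the score, the ground-truth mask its label.
    # Image-level PxAP averages the per-image average precision;
    # the dataset-level variant pools all pixels before computing AP.
    aps = [average_precision_score(m.ravel().astype(int), s.ravel())
           for s, m in zip(saliency_maps, gt_masks)]
    return float(np.mean(aps))
\end{verbatim}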
\subsection{Comparison to baselines}
We compare GLAM to two baselines: GMIC~\cite{shen2020interpretable}, a WSOL method specifically designed for high-resolution medical images, and CAM~\cite{zhou2016learning}, one of the most popular WSOL methods for natural images.
We use the same backbone architecture for the two baselines and the global module of GLAM.
The same hyperparameter tuning is applied to all models to ensure a fair comparison, as described in Appendix~\ref{append:Hyperparametertuning}. The models are selected based on segmentation performance, which is not necessarily equivalent to selection based on classification performance (see Appendix~\ref{Append:discussion} for further discussion). We also compare to a U-Net~\cite{ronneberger2015u} trained with strong supervision using the pixel-level annotations. This provides an upper bound for the segmentation performance of the WSL methods. In Table~\ref{t:comparision}, we report the performance of GLAM and the baselines. GLAM outperforms both GMIC and CAM in all segmentation evaluation metrics by a large margin, while achieving very similar classification accuracy. We observe a performance gap between GLAM and the strongly supervised U-Net model trained with ground-truth segmentation annotations, which indicates that there is room for further improvement.
\begin{table}[H]
\caption{Segmentation performance of our method (GLAM) and several baselines evaluated in terms of Dice (mean and standard deviation over the test set), image-level PxAP and dataset-level PxAP for malignant and benign lesions. We also report the classification AUC achieved by each model. GLAM outperforms the baselines, while achieving very similar classification accuracy. The performance of a model (U-Net) trained with segmentation annotations is also included for comparison.}
\label{t:comparision}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccccccc}
\hline
& \multicolumn{2}{l}{Dice} & \multicolumn{2}{l}{image-level PxAP} & \multicolumn{2}{l}{dataset-level PxAP} &
\multicolumn{2}{l}{classification AUC} \\ \hline
& \multicolumn{1}{l}{Malignant} &
\multicolumn{1}{l}{Benign} & \multicolumn{1}{l}{Malignant} &\multicolumn{1}{l}{Benign} & \multicolumn{1}{l}{Malignant} &
\multicolumn{1}{l}{Benign} &
\multicolumn{1}{l}{Malignant} &
\multicolumn{1}{l}{Benign} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9}
GLAM (proposed) & \textbf{0.390 $\pm$ 0.253} & \textbf{0.335 $\pm$ 0.259} & \textbf{0.461 $\pm$ 0.296} &\textbf{0.396 $\pm$ 0.315} & \textbf{0.341} & \textbf{0.215} & 0.882 & 0.770 \\
GMIC & 0.325 $\pm$ 0.231& 0.240 $\pm$ 0.175& 0.396 $\pm$ 0.275& 0.283 $\pm$ 0.244& 0.295 & 0.112 &0.886 & {0.780} \\
CAM & 0.250 $\pm$ 0.221 & 0.207 $\pm$ 0.180 & 0.279 $\pm$ 0.240 & 0.222 $\pm$ 0.210 & 0.226 & 0.084 & {0.894} & 0.770 \\
U-Net (fully supervised) & 0.504 $\pm$ 0.283& 0.412 $\pm$ 0.316& 0.589 $\pm$ 0.329& 0.498 $\pm$ 0.357 & 0.452& 0.265 & - &- \\
\hline
\end{tabular}
}
\end{table}
\subsection{Ablation study}
We analyze different design choices in GLAM through an extensive ablation study. Due to space constraints, here we only discuss the advantages of training the global and local modules jointly, and the segmentation properties of the global and local saliency maps. We defer results on the following choices to the appendices: design of the global module (Appendix~\ref{append:multi-level}), selection of training data for the local module (Appendix~\ref{append:random}), design of the local module (Appendix~\ref{append:architecture}), number of input patches used by the local module during training (Appendix~\ref{append:numpatchtraining}) and inference (Appendix~\ref{append:patchnuminference}), the fusion module (Appendix~\ref{append:fusionmodule}), and hyper-parameter selection (Appendix~\ref{append:tvalue}, \ref{append:lambda} and~\ref{append:combinehyper}).
\paragraph{Joint training of local and global modules.} Here we compare the GLAM saliency map $\mathbf{S}_{c-joint}$ obtained via joint training as described in Section~\ref{sec:training} with (1) the global saliency map $\mathbf{S}_{g}$ from the global module pretrained in isolation, (2) the local saliency map $\mathbf{S}_{l}$ from the local module trained on patches selected by a frozen global module, and (3) a saliency map $\mathbf{S}_{c-sep}$ obtained by averaging $\mathbf{S}_{g}$ and $\mathbf{S}_{l}$. The results reported in Table~\ref{t:ablation1} show that $\mathbf{S}_{c-sep}$ is superior to $\mathbf{S}_{g}$ and $\mathbf{S}_{l}$, but joint training achieves better performance across all metrics.
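As an illustration, a minimal sketch of how the averaged map $\mathbf{S}_{c-sep}$ can be assembled (pasting local patch maps back at their image coordinates, taking the maximum on overlaps, and equal weighting are assumptions made for this sketch):
\begin{verbatim}
import numpy as np

def fuse_saliency(S_g, local_patch_maps, patch_boxes, full_shape):
    # S_g is assumed already resized to full_shape.
    S_l = np.zeros(full_shape)
    for patch_map, (r, c, h, w) in zip(local_patch_maps, patch_boxes):
        S_l[r:r+h, c:c+w] = np.maximum(S_l[r:r+h, c:c+w], patch_map)
    return 0.5 * (S_g + S_l)
\end{verbatim}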
\begin{table}
\caption{Segmentation performance of the global ($\mathbf{S}_{g}$) and local ($\mathbf{S}_{l}$) saliency maps trained separately, and of their average with ($\mathbf{S}_{c-joint}$, the GLAM saliency map) and without ($\mathbf{S}_{c-sep}$) joint training. Averaging helps, and joint training improves results further.}
\label{t:ablation1}
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{lcccccc}
\hline
& \multicolumn{2}{l}{Dice} & \multicolumn{2}{l}{image-level PxAP} & \multicolumn{2}{l}{dataset-level PxAP} \\ \hline
& \multicolumn{1}{l}{Malignant} &
\multicolumn{1}{l}{Benign} & \multicolumn{1}{l}{Malignant} &\multicolumn{1}{l}{Benign} & \multicolumn{1}{l}{Malignant} &
\multicolumn{1}{l}{Benign} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}
$\mathbf{S}_{g}$ & {0.325} $\pm$ 0.239 & {0.261} $\pm$ 0.185 & {0.363} $\pm$ 0.276 & {0.302} $\pm$ 0.266 & {0.324} & {0.140 } \\
$\mathbf{S}_{l}$ & {0.343} $\pm$ 0.283 & {0.297} $\pm$ 0.287 &{0.444} $\pm$ 0.337 & {0.337} $\pm$ 0.310 &{0.207} &{0.126 } \\
$\mathbf{S}_{c-sep}$ & {0.375} $\pm$ 0.264 & {0.318} $\pm$ 0.243& {0.449} $\pm$ 0.319 & {0.382} $\pm$ 0.317 & {0.340} & {0.191} \\
$\mathbf{S}_{c-joint}$ & \textbf{0.390 $\pm$ 0.253} & \textbf{0.335 $\pm$ 0.259} & \textbf{0.461 $\pm$ 0.296} &\textbf{0.396 $\pm$ 0.315} & \textbf{0.341} & \textbf{0.215} \\
\hline
\end{tabular}
}
\end{table}
\paragraph{Segmentation properties of the global and local saliency maps.} In Figure~\ref{fig:comp_global_vs_local}, we plot the Dice scores of the global ($\mathbf{S}_{g}$) and local ($\mathbf{S}_{l}$) saliency maps generated by GLAM for 400 randomly selected examples from the validation set. Each saliency map has different strengths and weaknesses. $\mathbf{S}_{l}$ fails completely for a subset of examples because the patch-selection procedure did not select the correct patches (lower row in Figure~\ref{fig:failure_fig}). On the remaining examples, $\mathbf{S}_{l}$ tends to outperform $\mathbf{S}_{g}$ because it has a much higher resolution. However, it sometimes underperforms $\mathbf{S}_{g}$, typically when the ground-truth segmentation is larger than the size of the patch (upper row in Figure~\ref{fig:failure_fig}). Averaging $\mathbf{S}_{l}$ and $\mathbf{S}_{g}$ (our strategy of choice in GLAM) achieves high-resolution segmentation, while hedging against the failure of the local module.
\begin{figure}[h!]
\begin{minipage}{0.46\linewidth}
\centering
\includegraphics[width=0.49\textwidth]{images/Dice_malignant_global_vs_local.pdf}
\includegraphics[width=0.49\textwidth]{images/Dice_benign_global_vs_local.pdf}
\caption{Scatter plot of Dice score of the global $\mathbf{S}_{g}$ and local $\mathbf{S}_{l}$ modules for 400 validation examples. $\mathbf{S}_{l}$ outperforms $\mathbf{S}_{g}$ for most small lesions, but may miss some larger lesions and fails entirely if the wrong patches are selected as input.
}
\label{fig:comp_global_vs_local}
\end{minipage}\hfill
\begin{minipage}{0.51\linewidth}
\centering
\setlength{\tabcolsep}{1.2pt}
\begin{tabular}{c c c c }
\tiny{Ground Truth} & \tiny{$\mathbf{S}_{g}$} & \tiny{Patch Map} & \tiny{$\mathbf{S}_{l}$} \\
\includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/example_2614_lcc/groundtruth.png} &
\includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/example_2614_lcc/global.png} &
\includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/example_2614_lcc/patchmap.png}&
\includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/example_2614_lcc/localbranch.png} \\
\includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/failure_case/175_rcc/idx_175_rcc_gt_v2.png} & \includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/failure_case/175_rcc/idx_175_rcc_global.png} & \includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/failure_case/175_rcc/idx_175_rcc_patch.png} & \includegraphics[width=0.21\linewidth,height=0.31\linewidth]{images/failure_case/175_rcc/idx_175_rcc_local.png}
\end{tabular}
\caption{Two failure cases of the local module. (Top) The lesion is larger than the input patch, so $\mathbf{S}_{l}$ only captures it partially. (Bottom) The input patches to $\mathbf{S}_{l}$ (in blue) do not cover the lesion.
}
\label{fig:failure_fig}
\end{minipage}
\end{figure}
\section{Conclusion}
In this work, we propose a novel framework to perform weakly-supervised segmentation of images with very high resolution, which outperforms existing methods. Our results suggest that the general principle underlying GLAM (hierarchical selection of saliency maps) could be effective in other applications involving high-resolution images and videos. Another interesting question for future research is how to extend this principle to settings where high-resolution segmentation is desired, but (in contrast to breast cancer) the regions of interest do not tend to be small.
\newpage
\midlacknowledgments{
The authors would like to thank Mario Videna, Abdul Khaja and Michael Costantino for supporting our computing environment and Catriona C. Geras for proofreading the paper. We also gratefully acknowledge the support of Nvidia Corporation with the donation of some of the GPUs used in this research. C.F. was partially supported by NSF DMS grant 2009752. K.L. was partially supported by NIH grant R01 LM013316 and NSF NRT grant HDR-1922658. This work was supported in part by grants from the National Institutes of Health (P41EB017183, R21CA225175), the National Science Foundation (1922658) and the Gordon and Betty Moore Foundation (9683).
}
|
1,108,101,563,895 | arxiv | \section{Introduction}
Quantum computation is a fascinating subject which manifests the
peculiarities of quantum mechanics and uses them in order to
achieve an advantage in terms of computational power over
classical computers. Shor's algorithm\cite{shor1} is the
most astonishing known example for such an advantage in computational
power: It enables one to factor an integer in polynomial time
using a quantum computer, whereas the best classical algorithm
for this task is sub-exponential. The advantage of quantum
algorithms such as Shor's algorithm over classical algorithms
suggests, though not proves, that the computational complexity class
of quantum computation is not polynomially equivalent to that of
classical, even randomized, Turing machines.
In real life, quantum computers will be subjected to noise.
We will make here the assumption that the noise is probabilistic
and local, meaning that each particle, each time step,
suffers a certain faulty event with probability $\eta$, which
is referred to as the noise rate.
Quantum computing is now known to maintain its full computational
power even in the presence of such local noise, as long as the noise
rate is
weaker than a certain threshold $\eta_0$\cite{aharonov1,kitaev0,knill1}.
On
the other hand, it is known\cite{aharonov2} that when the noise
in the system is stronger than a much higher threshold, $\eta_1$,
the quantum computation can be simulated efficiently by a
classical Turing machine. Trying to put together the two results,
we learn that there are two regimes of noise in the quantum
computer, in which the computational power of the system is
qualitatively different. For weak noise, the computational power
is fully quantum, whereas for strong noise it becomes
classical. This raises the following question: What is the
physical difference between the two noise regimes, which reflects
itself in the difference in computational power, and how does the
transition between the two different physical behaviors occur?
It turns out that an answer to these questions
can be given in terms of the behavior of entanglement, or quantum
correlations in the system.
Perhaps the best way to explain the notion of entanglement
is by saying what
entanglement is not: We say a state in the Hilbert space of
a composite system $A\otimes B$ is non entangled,
if two persons, Alice and Bob, each having access to one side of
the system, could construct the overall state
by applying local quantum
operations on their side,
and exchanging classical information, by, say, talking on the
phone.
Any state in the composite system $A\otimes B$ that cannot
be constructed in this way is said to be {\it entangled}.
Here we will be interested not only in whether or not states
are entangled, but rather in the amount
of entanglement in quantum states. Several possible
definitions for the amount of entanglement have been suggested:
The entanglement of formation\cite{bennett16}, the asymptotic entanglement
of formation\cite{bennett16} and the asymptotic entanglement
of distillation\cite{bennett16, rains}.
All these definitions are equally suitable for the purposes
of this paper, and the results hold for any measure
of entanglement which satisfies certain continuity requirements.
To study the behavior of entanglement in noisy quantum computers,
we define the notion of {\it entanglement length}.
Roughly speaking,
the entanglement length is the rate of decay of the entanglement
between two disjoint sets of particles,
as a function of the distance between the two sets.
This is the analogous quantity to
correlation length in statistical physics, except that here
we will be interested in correlations between two subsets of the
system, rather than in two-point correlations.
We study the behavior of the entanglement length
in the noisy quantum computer as a function of the noise rate.
We find that there exists a noise rate, $\eta_1$, which depends on the
geometry of the system, such that
the entanglement length is finite for $\eta_1<\eta\le 1$. This means
that the entanglement between two sets of particles decays
exponentially with the distance between the
two sets for this range of noise rates.
On the other
hand, the entanglement length is shown to be infinite in the range
$0\le \eta< \eta_0$, where $\eta_0$ is the threshold for
fault tolerance.
This means that the entanglement between two
sets of particles is independent of the distance between the two
sets. These two facts show the existence of a phase transition in
entanglement at a non-trivial noise rate $\eta_0\le\eta_c\le\eta_1$.
The
system in the sub-critical regime behaves
quantumly even on the macroscopic scale: two sets of particles,
within macroscopic distance, can share a lot of entanglement, so
there is long range entanglement in the system. In the
super-critical phase, where entanglement decays exponentially with
the distance, the system behaves classically on the macroscopic scale,
because two macroscopic subsets within macroscopic distance
are practically non entangled.
The results here are by no means
specific to quantum computers.
In fact, we show that macroscopic classical behavior
is expected above the critical noise rate for any
macroscopic quantum system
with local interactions, and local noise,
where we make the additional assumption that
there is time separation between two interactions
in which one particle participates.
This shows that a phase transition in entanglement is expected in any
such quantum system which exhibits long range
entanglement in the absence of noise.
Moreover, our results can be verified
experimentally in any such system, as long as the density matrices
of subsystems can be measured accurately enough.
The entanglement length can then be numerically
computed (this is a difficult computational task,
but possible for small subsystems)
and compared to the finite entanglement length which is predicted
by our analysis.
The emergence of
classical macroscopic behavior in large quantum systems
has been an intriguing area of research for the last several decades.
Perhaps the most common and acceptable explanation so far
is by decoherence, i.e. interactions with the environment
which cause the system to lose its quantum features.
See, for example,
\cite{zurek1,decoherence} and references therein.
This explanation, however, predicts a gradual transition
from quantum to classical behavior.
The most interesting implication of the results presented
in this paper
is that they suggest an alternative way to explain the
transition from quantum macroscopic behavior
to classical macroscopic behavior
in certain physical systems, which is qualitatively different
from the standard gradual explanation.
The origin for the abrupt phase transition from quantum to
classical predicted
by our results is that in our model, we combine
the decoherence process with the assumption that noise, as well
as interactions, are local, where the behavior we are interested in
is global.
\ignore{ The importance of the results here is that
it seems that our assumptions are applicable in many general
cases. }
The first part of the proof involves showing that the entanglement
decays exponentially with the distance when $\eta$ is larger than
a certain threshold. To do this, we use a method due to Aharonov
and Ben-Or\cite{aharonov2} to present the density matrix by
mixtures of clustered states, and then we study the behavior of
the sizes of the clusters, evolving in time, using a mapping of
the problem to percolation. Known results from percolation
theory\cite{grimmett} imply
an
exponentially decaying bound on the probability for distant sets
of particles to be connected in the same cluster, and this implies
exponentially small entanglement between the two sets. For the
second part of the proof, i.e. in order to show that the
entanglement length is infinite for weak noise, we use fault
tolerant quantum computation, which enables one to create long
range entangled states in the noisy quantum computer.
We start by
defining the notion of entanglement, and entanglement length,
and then proceed to prove the strong noise case and the weak noise case.
We conclude with several open questions regarding possible
implications to the transition from quantum to classical.
\section{Entanglement}
The notion of {\it entanglement} is associated with
a state of a quantum system
composed of two systems, $A$ and $B$.
The term entanglement refers to the quantum correlations between
$A$ and $B$, in a state which lives in $A\otimes B$.
Remarkably, two parts of a composite quantum system
can exhibit very strong
correlations which cannot be explained classically,
unless one drops a very important axiom in physics, namely locality.
The remarkable phenomena which can be exhibited due to entanglement
between two quantum systems
were first discovered by Einstein, Podolsky and Rosen\cite{epr}, more
than $60$ years ago, and manifested in Bell's inequality more than
$30$ years ago\cite{bell,bell1}.
However, the elusive phenomenon of entanglement is still
far from being understood.
In this paper we will be interested in
the {\it amount} of entanglement
in quantum systems; we therefore need a good measure
of entanglement.
One very important requirement on such a measure is that
the entanglement in any state cannot increase by classical
communication and local quantum operations on the $A$ and $B$ sides
separately; this is the whole
essence of the term entanglement.
We will denote such a process involving
local operations
and classical communication by
LOCC. A good measure of entanglement should also be additive.
A natural way to construct a measure of entanglement is to ask
whether there is an elementary unit of
entanglement, so that any state can be constructed with LOCC
given sufficiently many such entanglement units.
It turns out that there exists exactly such a unit:
the Bell state,
$
\frac{1}{\sqrt{2}}(|0\ra\otimes|0\ra+|1\ra\otimes
|1\ra)$. It was shown\cite{bennett15,bennett16}
that any bipartite quantum state
can be generated by Alice and Bob using only LOCC operations
given that sufficiently many Bell states are a priori shared between
$A$ and $B$. One can try to use the number
of elementary units required to construct a state
as a good measure of the entanglement in this state.
It is reasonable to take the asymptotic limit of such a process,
and to define the entanglement in a state as the following limit.
Let $\phi$ be our quantum state, and
let $k_n$ be the number of Bell states required to
generate a state, $\phi_n$, and let $\phi_n$ approach the state
$\phi^{\otimes n}$ as $n$ tends to infinity.
Bennett {\it et al}\cite{bennett16}
defined the infimum over all such processes, of $k_n/n$, as $n$ tends to
infinity,
as the asymptotic entanglement of formation of the state $\phi$.
Let us denote this measure by $E_f^\infty$.
This measure is clearly additive, and can also be shown not to
increase by LOCC\cite{bennett16}.
An equally natural definition would be the converse one, called
the asymptotic entanglement of distillation, in which one is interested
in generating as many Bell states as possible by applying LOCC
on many copies of the original state $\phi$.
The asymptotic limit of the ratio between the number of Bell
states generated in this way, and $n$, was defined
in \cite{bennett16} to be the asymptotic entanglement of distillation.
A more rigorous definition was given by Rains\cite{rains}.
Let us denote this measure by $E_d^\infty$.
Fortunately, for pure states these two measures
coincide, and have a very beautiful form.
As was shown in \cite{bennett15},
they are exactly the von-Neumann entropy of
the reduced density matrix on one part
of the system.
\beq E(A:B,|\phi\ra\la\phi|)=S(|\phi\ra\la\phi||_A).\enq
The entropy of entanglement thus possesses both additivity
and monotonicity under LOCC, and also
behaves nicely in many other ways.
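As a hedged numerical illustration of this formula (the helper name and the tolerance are ours):
\begin{verbatim}
import numpy as np

def entropy_of_entanglement(psi, dim_a, dim_b):
    # Von Neumann entropy (in bits) of the reduced state on side A,
    # for a pure state psi on A (x) B given as a vector of length
    # dim_a * dim_b.
    m = psi.reshape(dim_a, dim_b)      # coefficient matrix
    rho_a = m @ m.conj().T             # reduced density matrix on A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]       # discard numerical zeros
    return float(-(evals * np.log2(evals)).sum())

# A Bell state carries exactly one unit of entanglement:
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert abs(entropy_of_entanglement(bell, 2, 2) - 1.0) < 1e-9
\end{verbatim}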
The situation for mixed states, however, is much more interesting.
It turns out that though the asymptotic distillable entanglement and the
asymptotic entanglement of formation coincide on pure states,
there are very interesting differences between them
when mixed states are considered. Clearly,
the asymptotic entanglement of distillation
is not larger than the asymptotic entanglement of formation\cite{bennett16}.
The question of whether there exist states in which
$E_f^\infty$ is strictly larger than $E_d^\infty$ is still open.
This irreversible process, in which not all
of the entanglement which was inserted into the state
can be distilled, is called
{\it bound entanglement}\cite{horodecki} and is now being extensively
studied.
The asymptotic entanglement of formation is believed to
be equal to the following quantity,
called the {\it entanglement
of formation}\cite{bennett16}, and denoted by $E_f$.
$E_f(\rho)$ is the least expected entropy
of entanglement of any ensemble of pure states
realizing $\rho$, or more formally:
\begin{equation}\label{ef} E_f(A:B,\rho)=\min_{\sum_i w_i|\alpha_i\ra\la
\alpha_i|=\rho} \sum_i w_i E(A:B,
|\alpha_i\ra\la\alpha_i|).\end{equation}
The question of whether $E_f$ is equal or not to $E_f^\infty$
depends on whether $E_f$ is additive. It is believed that
indeed it is the case that $E_f$ is additive, but this is not known.
Let us survey what is known about the above three entanglement
measures, in terms of convexity and continuity.
Entanglement of formation is trivially convex.
Asymptotic entanglement of formation can also
be shown to be convex, using the law of large numbers.
Currently it is not known whether asymptotic distillable
entanglement is convex or not.
As for continuity, the situation is even less clear.
It is known that the above three entanglement measures are
continuous, in the
sense that if a sequence of density matrices
$\sigma_n$ converges to a density matrix $\rho$ in the trace metric,
then the entanglement in $\sigma_n$ converges to the entanglement
in $\rho$:
\beq
\lim_{n \longrightarrow \infty} \sigma_n =\rho
~~~ \Longrightarrow ~~~
\lim_{n \longrightarrow \infty} E(\sigma_n) = E(\rho).
\enq
However, we will be interested in how different can the entanglement
of two very close density matrices be. Entanglement
of formation was recently shown\cite{neilsen}
to have very strong
continuity properties, in the following sense:
Given two density matrices of a bipartite Hilbert space
of dimension $d\times d'$,
which are within $\epsilon$ distance one from another,
in the trace metric, the entanglement of formation of the two matrices
is at most $\epsilon$ times some linear function in
$\log(d)$ and $\log(d')$,
plus a term independent of $d$ and $d'$ which goes to $0$
as $\epsilon$ goes to $0$:
\begin{eqnarray}
&&|E_f(A:B|\rho)-E_f(A:B|\sigma)|\le \\\nonumber
&& 9|\rho-\sigma|\log(\max\{d,d'\})-
|\rho-\sigma|\log(|\rho-\sigma|). \end{eqnarray}
This strong continuity implies that when two density matrices
of $n$ finite state particles are polynomially close one to another
(in the number of particles), the entanglement
of formation between them is also polynomially small.
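For a concrete sense of scale (our arithmetic, with logarithms base two): for two states of a $10+10$-qubit system, $d=d'=2^{10}$, at trace distance $|\rho-\sigma|=10^{-3}$, the bound gives $|E_f(\rho)-E_f(\sigma)|\le 9\cdot 10^{-3}\cdot 10+10^{-3}\log(10^{3})\approx 0.1$, small compared with the up to $10$ units of entanglement such states can carry.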
It is not yet known whether the asymptotic measures
of entanglement possess these nice continuity properties, or not.
In this paper we work with the entanglement of formation, $E_f$, which is
known to be both convex and strongly continuous.
However, it should be stressed that the phenomena
presented in this paper depend very weakly on the exact properties
of the measure of entanglement which is being
used.
The results in this paper hold,
with straightforward modifications, for any measure of entanglement $E$
which is continuous in a sufficiently strong sense, meaning that
two density matrices which are $\epsilon$ apart have entanglements
not different by more than $\epsilon$ times some polynomial
in the number of particles.
\section{The Model of the Quantum System}\label{model}
We are interested in quantum systems composed of
$n$ two-state particles, embedded on a $d$-dimensional
lattice.
Such quantum particles are usually called {\it qubits}
in the context of quantum computers.
The Hilbert space of $n$ such particles is the tensor product
of $n$ two dimensional complex vector space, ${\cal C}^2$, where
the basis of ${\cal C}^2$
is standardly taken to be $|0\ra$ and $|1\ra$.
The system is initialized with a certain state (usually a tensor
product state, but not necessarily) and evolves in time
via interactions between the particles.
Time is discretized into time steps
and all interactions are assumed to be instantaneous and occur
at integer times.
In this model, particles interact only with their nearest neighbors
on the lattice. An important assumption
is that one particle cannot interact with more than
one other particle at a time.
For simplicity, we will assume that the particles interact alternately with
particles to each of their sides. For one dimension, i.e. an
array of particles, this means that a particle interacts with a
particle to its left and to its right alternately.
The interaction
graph can be easily viewed in a $d+1$ dimensional scheme, which
for $d=1$ looks as follows:
{~}
\begin{picture}(150,80)(-30,20)
\put(10,20){\vector(0,1){95}} \put(9,15){\shortstack{$a$}}
\put(24,15){\shortstack{$b$}} \put(39,15){\shortstack{$c$}}
\put(25,20){\vector(0,1){95}}\put(40,20){\vector(0,1){95}}
\put(55,20){\vector(0,1){95}} \put(70,20){\vector(0,1){95}}
\put(85,20){\vector(0,1){95}} \put(100,20){\vector(0,1){95}}
\put(130,20){\vector(0,1){95}} \put(140,68){\shortstack{$Time$}}
\put(130,20){\circle*{2}} \put(134,20){\shortstack{$0$}}
\put(134,30){\shortstack{$1$}} \put(134,40){\shortstack{$2$}}
\put(134,50){\shortstack{$3$}} \put(134,60){\shortstack{$4$}}
\put(134,70){\shortstack{$5$}} \put(134,80){\shortstack{$6$}}
\put(134,90){\shortstack{$7$}}\put(134,100){\shortstack{$8$}}
\put(130,30){\circle*{2}} \put(130,40){\circle*{2}}
\put(130,50){\circle*{2}} \put(130,60){\circle*{2}}
\put(130,70){\circle*{2}} \put(130,80){\circle*{2}}
\put(130,90){\circle*{2}}\put(130,100){\circle*{2}}
\put(10,30){\line(1,0){15}} \put(40,30){\line(1,0){15}}
\put(70,30){\line(1,0){15}} \put(10,50){\line(1,0){15}}
\put(40,50){\line(1,0){15}} \put(70,50){\line(1,0){15}}
\put(10,70){\line(1,0){15}} \put(40,70){\line(1,0){15}}
\put(70,70){\line(1,0){15}} \put(10,90){\line(1,0){15}}
\put(40,90){\line(1,0){15}} \put(70,90){\line(1,0){15}}
\put(25,40){\line(1,0){15}} \put(55,40){\line(1,0){15}}
\put(85,40){\line(1,0){15}} \put(25,60){\line(1,0){15}}
\put(55,60){\line(1,0){15}} \put(85,60){\line(1,0){15}}
\put(25,80){\line(1,0){15}} \put(55,80){\line(1,0){15}}
\put(85,80){\line(1,0){15}} \put(25,100){\line(1,0){15}}
\put(55,100){\line(1,0){15}} \put(85,100){\line(1,0){15}}
\end{picture}
\begin{figure}[h!]
\begin{center}
\caption{The vertical axis corresponds to time, and the horizontal
axis corresponds to space. Horizontal edges connect two
interacting particles. Particles interact alternately with
particles to their left or to their right. }\label{lattice}
\end{center}
\end{figure}
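For concreteness, a minimal sketch of this alternating schedule for $d=1$ (the phase convention at $t=0$ is an arbitrary choice of ours):
\begin{verbatim}
def interaction_pairs(n, t):
    # At even t, pair (0,1), (2,3), ...; at odd t, shift by one site.
    start = 0 if t % 2 == 0 else 1
    return [(i, i + 1) for i in range(start, n - 1, 2)]

# n = 6: t even -> [(0,1), (2,3), (4,5)]; t odd -> [(1,2), (3,4)]
\end{verbatim}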
Two particles can interact via an arbitrary interaction, and we do
not assume anything about the nature or strength of each interaction.
After the
interactions are turned off, and before the next step of
interactions is turned on, we apply the noise step.
The noise is assumed here to be local and stochastic,
meaning that each particle with a certain probability $\eta$
undergoes an arbitrary fault process.
As was shown in \cite{aharonov1, gottesman} such $d$-dimensional
noisy quantum circuits
are capable of performing fault tolerant quantum computation,
as long as the noise rate is smaller than a certain threshold.
The threshold, however, is worse than the threshold without the nearest
neighbor restriction, by one or two orders of magnitude, depending
on the dimension.
We make here another assumption, and restrict the noise to be
one of the following two processes.
The first process, namely independent stochastic
collapses, is a process in which at each time step, each particle is
measured with independent probability $\eta$, in a fixed but arbitrary
basis. Alternatively, we can use the depolarization model, in
which at each time step, each particle,
with independent probability $\eta$, is replaced by a particle in a
completely mixed state.
In the rest of the paper, we will assume that the noise model
is independent stochastic collapses, but all results can
be easily stated using stochastic
depolarization.
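As a small illustration of the depolarization step (NumPy; two qubits, noise acting on the last qubit; the function name is ours):
\begin{verbatim}
import numpy as np

def depolarize_last_qubit(rho, eta):
    # With probability eta, replace the last qubit by the maximally
    # mixed state: rho -> (1-eta) rho + eta * Tr_last(rho) (x) I/2.
    t = rho.reshape(2, 2, 2, 2)          # indices (a, q, a', q')
    reduced = np.einsum('aqbq->ab', t)   # partial trace over last qubit
    return (1 - eta) * rho + eta * np.kron(reduced, np.eye(2) / 2)

# At eta = 1, a Bell pair loses all its entanglement:
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(np.round(depolarize_last_qubit(bell, 1.0), 3))  # -> I/4
\end{verbatim}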
It should be noted that the results of the paper hold
when relaxing several of the assumptions we have made.
The results apply to particles with any finite number of states,
not necessarily qubits,
and different particles need not have the same number of possible states.
The exact form of the alternating interactions is not important,
since any interaction graph of nearest neighbor interactions
is a subgraph of the alternating graph, if time is scaled by a
factor of $2d$.
The assumption of instantaneous interactions is also not essential,
as long as the interactions last for less than a time step,
so that there is a time interval in which decoherence takes place when
the particle is not participating in any interaction.
The results hold also in the case of noisy interactions,
with some noise rate $\delta$.
We will see that the proof of the upper bound
on the entanglement length, for large $\eta$,
holds regardless of the amount of
noise in the interactions, $\delta$, because
the proof only uses the noise occurring between interactions.
The proof of the lower bound on the entanglement length,
for small $\eta$, goes through as long as $\eta+\delta$ is smaller than
the threshold for fault tolerance. Hence,
it is straightforward to include in our model the above generalizations.
For simplicity, however, we will work with the model defined above,
of two-state particles with noiseless instantaneous
interactions.
\section{Entanglement Length}
For quantum systems which are embedded on a lattice,
the notion of distance between sets of particles is well defined.
In this case, one can define the {\it entanglement length}
in the system. We would like to define an analogous quantity to
the standard correlation length from statistical physics.
In this case, one says that the correlation
length in a physical system is $\xi$ if
the correlation between the outcomes of a certain
observable $O$ measured at sites $a$ and $b$, decays
exponentially with the distance between them, $d(a,b)$, where the distance
is scaled in the decay factor by $\xi$: \beq
<O_aO_b>\propto e^{-\frac{d(a,b)}{\xi}}.\enq
More precisely, $\xi$ is defined to be the following
quantity:
\beq\label{limit}
\xi^{-1} =\lim_{d(a,b)\longmapsto \infty} \left(\frac{-\log<O_aO_b>}{d(a,b)}\right).
\enq
In analogy with correlation length, we could have defined
the entanglement length in the quantum system to be
$\mu$ if
the entanglement between two particles, $a$ and $b$, decays
exponentially with the distance between them, $d(a,b)$, where the distance
is scaled in the decay factor by $\mu$: \beq\label{expdec0}
E(a:b)\propto e^{-\frac{d(a,b)}{\mu}}.\enq
However, there are a few problems with this definition, which
will force us to modify it slightly.
The first modification is necessary due to the fact that
entanglement is a non-local quantity.
It might well be that the system contains a lot of entanglement,
but small subsets of the system are completely unentangled.
For example, in fault tolerant quantum computers,
the entanglement is bound to be shared by
large sets of qubits, and in order to see entanglement it is
necessary to probe large subsets of the system.
We will therefore be interested not in two point correlations, but
in entanglement between two sets $A$ and $B$ of arbitrary sizes.
Another problem is the following. In systems
which are homogeneous in space and time, one can easily take the
limit of the size of the system to infinity, and therefore the
asymptotic behavior in equation (\ref{limit}) is well defined.
However, we are interested in
fault tolerant quantum
computers, which are not homogeneous in space, nor in time.
Roughly speaking, we will say that the entanglement length in
the system is $\mu$ if
the entanglement between any two sets $A$ and $B$
is {\it bounded} by a function which decays exponentially
with the distance between the sets, where the decay factor is scaled by
$\mu$. The fact that we are interested in a bound, and not in exact
behavior of any pair of sets, allows for non homogeneity in space.
To allow for non homogeneity in time,
we will consider the average
entanglement between $A$ and $B$ over the time
from $t=0$ to $t=\infty$.
This corresponds to the following behavior:
\beq\label{expdec3}
<E(A:B)>_{t=0}^{\infty}\propto poly(|A|,|B|)e^{-\frac{d(A,B)}{\mu}}.\enq
where $|A|$ is the number of particles in $A$ and similarly for $B$.
We allow the additional polynomial factor due to the fact that
for sets which are not too large compared to the distance
between them, the exponential decay dominates the
polynomial in the sizes of $A$ and $B$, and what we will see is merely
an exponential decaying behavior. We claim that it
is not reasonable to consider
two sets of particles which are very large, and to study the
behavior of the entanglement they share as a function of
the distance between them, in the range where
that distance is extremely small compared to the sizes of the sets.
The characterization of $\mu$ by equation (\ref{expdec3})
is very helpful to keep in mind.
We can also make the definition of entanglement length more rigorous,
by giving it a similar form to that of equation (\ref{limit}).
This would be useful when one actually wants to
calculate the entanglement length.
In order to do that, we first
need to make the notion of a quantum system more
precise.
In the non-homogeneous case,
it is not clear what the notion of an infinite system
means. We therefore define a quantum (infinite) system to be
a sequence of quantum systems, $Q_n$, where $Q_n$ consists of
$n$ particles. We think of $n$ as growing to $\infty$, but
for a given $n$ $Q_n$ is a finite system in space, which evolves
in time from
$t=0$ to $t=\infty$.
Since each $Q_n$ is finite in space, in order to take a limit
similar to that of equation (\ref{limit}),
we need to consider
a sequence of pairs of sets, $A$ and $B$
which belong to larger and larger systems.
We thus add a subscript $n$ to the subsets $A_n$ and $B_n$,
indicating that they belong to the quantum system $Q_n$.
We would now like to translate
the fact that we are interested in sets
which are not too large compared with their distance
to a precise restriction on the sequences of sets
$\{A_n\}$,$\{B_n\}$. The weakest condition
which we can impose, to avoid pathologic cases,
is that
$\lim_{n\longmapsto \infty} |A_n|\cdot |B_n|/exp(d(A_n,B_n)) =0$,
meaning that
the sizes of $A_n$ and $B_n$ are not growing
exponentially or faster than exponentially with the distance between them.
Finally, we want to take care of the fact that we are interested in
the largest entanglement length which can be observed
in the system. This corresponds to taking the
infimum over all such sequences of $A_n$ and $B_n$.
All this translates to the following definition:
\begin{deff}\label{expdec4}
The entanglement length $\mu$ of a quantum system $\{Q_n\}_{n=1}^\infty$ is defined by:
\[
\mu^{-1} =\inf_{\{A_n\}_{n=1}^{\infty},\{B_n\}_{n=1}^{\infty}}
\liminf_{n\longmapsto \infty} \left(\frac{-\log<E(A_n:B_n)>_t}{d(A_n,B_n)}\right)
\]
where for all $n$, $A_n$ and $B_n$ are disjoint sets
in $Q_n$, and the sequences $\{A_n\}_{n=1}^{\infty},\{B_n\}_{n=1}^{\infty}$
satisfy
$\lim_{n\longmapsto \infty} |A_n|\cdot |B_n|/exp(d(A_n,B_n)) =0$.
\end{deff}
Note that if one plugs into
definition (\ref{expdec4}) the exponential behavior
of equation (\ref{expdec3}),
the contribution
of the polynomial factor in equation (\ref{expdec3})
tends to zero due to the requirements on $A_n$ and $B_n$,
and the correct $\mu$ pops out.
Though definition (\ref{expdec4}) might seem complicated,
calculating the above infimum
turns out to be very simple in all our applications.
\section{Clustered Density Matrices}\label{sim}
We now proceed to study the entanglement length
in $d$-dimensional noisy quantum circuits, in the
strong noise regime. In this case,
we will try to bound the entanglement in the system from above.
Now a very useful
observation is in place. After a particle was hit by the noise
process, it is no longer entangled with the rest of the system. In
other words, in both noise processes which we consider,
the density matrix after
applying the noise process can be written in the form \beq {\cal
E}\rho= (1-\eta)\rho+\eta \sum_i p_i\rho^Q_i\otimes \rho_i^q \enq
where the index $q$ refers to the noisy particle, and $Q$ refers to
the rest of the system. For example, for the stochastic noise process,
in which the last qubit is measured in the basis $\{|0\ra,|1\ra\}$,
the resulting density matrix would be of the form:
\beq {\cal
E}\rho= (1-\eta)\rho+\eta \sum_{i=0}^{1} Pr(i)\rho^Q_i\otimes |i\ra\la i| \enq
where $\rho^Q_i$ is the density matrix of all but the last qubit,
under the condition
that the last qubit is measured to be in the state $|i\ra$.
We use this observation as follows. We
will aim to present the density matrix in such a way that lack of
entanglement translates to tensor product structure. In other
words, we will present a density matrix as a mixture of tensor
product states, as follows:
\begin{eqnarray}\label{clusterdef} \rho(t)&=&\sum_i w_i \rho_i(t),\\\nonumber
\rho_i(t)&=&\rho^1_i(t)\otimes\cdots\otimes \rho_i^{m_i}(t)\end{eqnarray}
where $\rho^j_i(t)$ is a density matrix which describes a set of
particles $A_i^j$, and for each $i$ the sets $A_i^j$ are a
partition of the system.
These sets of supposedly entangled
particles are called clusters.
It should be understood here that given
a density matrix, there is no single way to present it as a mixture
of clustered states. However, we will define the representation according
to the dynamics of the process which generated $\rho$, so that our
representation will be well defined.
Our goal would be to find a way to represent the matrix
as a mixture of clustered states with as small clusters as possible.
The intuition is that we want to give an upper bound on the amount of
entanglement in the system.
When all the clusters are of size one,
there is no entanglement in the system. We will see later, that
this can be generalized to say that small localized clusters imply no
entanglement between distant sets. We will thus try to keep
the clusters as small as possible.
The way we do this is as follows.
In a quantum computer, the initial state is a basic state, which
is a pure state in which all qubits are in tensor product with one
another: \beq\rho=\rho(1)\otimes \rho(2)\otimes \cdots \otimes
\rho(n).\enq Thus, for $t=0$, all clusters are of size $1$. Given
any clustered states description of the density matrix at time
$t$, we can obtain a clustered state description for the matrix at
time $t+1$ as follows.
From each participant in the
mixture, $\rho_i(t)$, we obtain $\rho_i(t+1)$ which will be a
mixture of clustered states. $\rho(t+1)$ will then be a mixture
of all $\rho_i(t+1)$. To obtain $\rho_i(t+1)$ from $\rho_i(t)$, we
first apply the interaction step, and then apply the noise step.
To apply the interaction step of time $t$, we apply for each
interaction at that time step the unitary transformation
corresponding to the interaction, on the appropriate pair of
particles. If the two particles are from one cluster, then we
simply apply the appropriate unitary matrix, corresponding to the
interaction, on the density matrix describing this cluster, and
there is no need to change the clusters. However, if the two
particles are from two different clusters, we can no longer keep
the two clusters in tensor product, because in general they will
be entangled. Therefore, we first join the two clusters together,
by taking the tensor product of the density matrices describing
the two clusters, and then apply the appropriate unitary matrix on
the new big cluster. The resulting state after all interactions of
time step $t$ were applied is therefore, in general, a clustered
state with larger clusters than the state $\rho_i(t)$.
We then apply the noise step on the
resulting clustered state. Recall that a
measurement detaches a particle from its cluster, and thus after a
measurement the particle is a cluster of its own. To apply the
noise process, we transform the state to a mixture of states,
which are the results of all possible combinations of which
particles where measured, with the appropriate probabilities.
Clusters in the state can only shrink due to this process.
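The dynamics just described is straightforward to simulate. The following Monte Carlo sketch tracks the cluster labels for the one-dimensional alternating circuit (the labelling scheme is an implementation choice of ours):
\begin{verbatim}
import random

def simulate_clusters(n, steps, eta, seed=0):
    # Returns the largest cluster size after each time step.
    random.seed(seed)
    cluster = list(range(n))              # cluster label per particle
    history = []
    for t in range(steps):
        for i in range(t % 2, n - 1, 2):  # interactions merge clusters
            a, b = cluster[i], cluster[i + 1]
            cluster = [a if c == b else c for c in cluster]
        fresh = n * (t + 1)               # labels not used before
        for i in range(n):                # collapses detach particles
            if random.random() < eta:
                cluster[i] = fresh + i
        sizes = {}
        for c in cluster:
            sizes[c] = sizes.get(c, 0) + 1
        history.append(max(sizes.values()))
    return history
\end{verbatim}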
We would now like to understand the typical size of clusters in
this representation of the density matrix. Before we do that in a
more formal way, let us gain some intuition. If the system were
noise free, very soon all the clusters would become one giant
cluster of $n$ particles. What makes the situation more
interesting are the stochastic collapses, which separate a
measured particle from its cluster, thereby decreasing the size of
the cluster by one, and creating another cluster of size one. One
can view the noisy quantum evolution in time as a struggle between
two forces: The interactions, which tend to join clusters and
entangle the different parts of the system, and the stochastic
collapses, which tend to detach particles from their clusters,
thereby destroying this entanglement constantly. A crucial point
here is that the two competing forces are matched in power, since
they both operate on a linear number of particles $\Theta(n)$ each
time step. We thus expect a
critical error rate, $\eta_c$,
at which the two forces are equal, and at which some transition
between the dominance of the entangling interaction process
transforms to the dominance of the disentangling noise process.
We now go on to see this phenomenon more rigorously,
using a map
to a percolation process.
\section{The Percolation Process}\label{perc}\label{percpt}
It turns out that the dynamics of the clusters in the above
description
are intimately connected with a percolation process
on the {\em quantum circuit} itself. The percolation process on
the graph is defined as follows: For each time step, each vertical
edge, along the $i'$th wire, between time $t$ and $t+1$, is
erased with independent probability $\eta$. In the cluster
picture, this corresponds to the collapse of the $i$'th particle
between time steps $t$ and $t+1$, which disentangles the
particle's past and future. Thus an event in the probability space
of the noise process, i.e. a specific choice of which particles
collapsed during the process, is mapped to an event in the
percolation process, in which the corresponding vertical edges are
erased. Since events in the stochastic noise process correspond to
members in the mixed density matrix,
we have a map between
clustered states arising from our cluster dynamics,
and realizations of the percolation process.
This map preserves the probability measure.
We now claim that clusters
in the clustered state correspond to connected components in the
percolation process:
\begin{lemm}\label{corres} {\bf Correspondence Lemma: }
Two particles $a$ and $b$ are in the same cluster at time $t$, in
one realization of the noise process in the cluster model, iff
$(a,t)$ and $(b,t)$ are connected
in the corresponding realization of the percolation model.
\end{lemm}
{\bf Proof:} To prove this combinatorial lemma
we use induction on $t$.
For the base of the induction, $t=0$, the correspondence
is true by definition. Let us now assume that the lemma is correct
for $t$, and prove for $t+1$.
To apply the induction step, the following
observation comes
in handy. Each path that connects $(a,t+1)$ and
$(b,t+1)$ in the percolation process,
is actually a concatenation
of alternating paths, occurring either after time $t$ or at
times up to $t$.
We denote the points at which the different concatenated paths
connect one to another
by
$(x_1,t_1),\ldots,(x_{2k},t_{2k})$. It is easy to see that there is always
an even number of such points, and that $t_1=t_2=\cdots=t_{2k}=t$.
Let us call the particles $x_1,...,x_{2k}$ the connection particles.
We shall also denote $a=x_0$, $b=x_{2k+1}$.
A schematic example for the one dimensional quantum circuit case
is shown in figure $2$.
\begin{figure}[h!]
\centerline{\vbox{\epsfxsize=2in\epsfbox{tetris.eps}}}
\caption{A path
connecting the two particles, $a$ and $b$, at time $t+1$
can be represented as a
concatenation of paths which are restricted alternately to
the time intervals $[0,t]$ and $[t,t+1]$.
The path $(a,t+1)\longmapsto (b,t+1)$ in the figure is a concatenation of the
paths $(a,t+1)\longmapsto (x_1,t)\longmapsto (x_2,t)
\longmapsto (x_3,t)\longmapsto (x_4,t) \longmapsto (b,t+1).$
}
\end{figure}
Let us now prove the first direction:
Let $a$ and $b$ be two
particles connected at time $t+1$ in the percolation model.
We want to show that $a$ and $b$ are in the same cluster at time
$t+1$.
To see this, we show that all particles $a=x_0,x_1,\ldots,x_{2k},x_{2k+1}=b$
are in the same cluster at time $t+1$.
For the pairs of particles $x_{2i+1}$ and
$x_{2i+2}$, i.e. pairs in which the first particle has odd
index, this is true since they
are connected by a path confined to time steps $t$
or earlier, so by the induction assumption, they
are in the same cluster at time $t$. Moreover,
none of the connection particles
collapsed between time steps $t$ and
$t+1$, due to the fact that they connect between
a path
before time $t$ and a path after time $t$ (or vice-versa).
Therefore $x_{2i+1}$ and $x_{2i+2}$
are in the same cluster also at time $t+1$.
(In the schematic example, this shows that particles
$x_1$
and $x_2$ are in the same cluster at time $t+1$,
and similarly $x_3$ and $x_4$.)
Now by definition of the connection particle,
there is a path after time $t$
connecting $x_{2i}$ and $x_{2i+1}$, which means that
$x_{2i}$ and $x_{2i+1}$ interact at time $t+1$. At the edges
of the chain, i.e. for $i=0$ or $i=k$,
it might be that $x_{2i}$ and $x_{2i+1}$ are
the same particle, and therefore they are trivially connected.
(In the example, this corresponds to the
interaction at time $t+1$
between $a$ and $x_1$, and between $x_2$ and $x_3$,
and to the fact that $b=x_4$. )
By the definition of the clusters' evolution in time,
the fact that particles interact imply that
their clusters are joined, and therefore
$x_{2i}$ and $x_{2i+1}$ are in the same
cluster at time $t+1$.
Combining this with the fact that $x_{2i+1}$ and $x_{2i+2}$ are connected at $t+1$, we conclude that all the particles $a=x_0,x_1,\ldots,x_{2k},x_{2k+1}=b$
are in the same cluster at time $t+1$, which
completes one direction of the induction step.
Let us now prove the other direction of the induction step.
We want to prove that there is a path connecting
$(a,t+1)$ and $(b,t+1)$ in the percolation process, assuming that
$a$ and $b$ are in the same cluster at time $t+1$.
This cluster of time step $t+1$, which contains $a$ and $b$,
was generated by joining together several smaller
clusters, which existed after the noise step of
time step $t$. It is easy to see that
there is a subset of those clusters,
$ C_{-1}, C_0,...,C_{k-1},C_{k}$, such that $a\in C_{-1}$ and $b\in C_{k}$,
and such that
each two subsequent clusters, $C_i$ and $C_{i+1}$,
were connected at time $t+1$ by a unitary gate.
Note that each cluster $C_i$, except maybe
$C_{-1}$ and $C_{k}$, consists of at least two particles: one particle,
denoted by $x_{2i+1}$,
participates in an interaction with a particle from the preceding
cluster in the chain, $C_{i-1}$,
and the other particle, $x_{2i+2}$, participated in an interaction
with a particle from the next cluster in the chain, $C_{i+1}$.
To construct a path from $a$ to $b$ at time $t+1$, we first
note that by the induction assumption, there is
a path connecting the two particles
in the same cluster, $x_{2i+1}$ and $x_{2i+2}$, at time $t$.
Moreover, these particles did not collapse between time step
$t$ and $t+1$, because if they did collapse, they would have
belonged to a cluster consisting of one particle exactly.
Hence there is a path connecting them at time $t+1$.
A path between the second particle of $C_i$ and the first particle
of $C_{i+1}$ exists because by definition they interact at time $t+1$.
Paths from $a$ to the first particle $x_1$, and from
the last particle $x_{2k}$ to $b$, exist either because
they interact at time $t$, or they are simply the same particle.
This enables us to construct a concatenated path from $a$ to $b$
at time $t+1$. $\Box$
We can now investigate the sizes of the connected components in the percolation model and then translate our findings to the cluster model. Such percolation processes are known to exhibit a phase transition at a critical noise rate, below which there is a connected component of linear size, while above it the typical connected component is of logarithmic size. Moreover, the connected components in the super-critical phase are localized, in the sense that the probability for two particles to be in the same connected component decays exponentially with the distance between these particles.
Let us first relax the restriction to a percolation on a square of size $n \times T$ and consider the infinite lattice. We show the existence of a phase transition in connectivity for percolation on the infinite lattice, and from this we will be able to extract information about the finite case. One more
simplification which is useful is to notice that by contracting
each edge which corresponds to an interaction to one point, we do
not change the connectivity properties of the process,
and the resulting percolation process is
the usual model of percolation.
For example, in the two dimensional lattice associated with the
one dimensional quantum circuit, the interaction edges
are exactly the horizontal edges in figure $1$, and so
after contracting each of these edges to one point,
the resulting percolation process
is the standard percolation on the square lattice, which is rotated
by $45$ degrees.
The contraction therefore transforms the problem to standard bond percolation on translational invariant lattices, which is well understood.
In bond percolation on translational invariant lattices, one usually uses $p$ as the probability for one edge to be present, so in our case $p=1-\eta$. One can define the critical probability, $p_c$, to be the smallest probability at which the point $0$ belongs to an infinite connected component with positive probability. In translational invariant lattices, this is also equal to the smallest $p$ at which the expected size of $0$'s connected component is infinite \cite{mms,ab,chayes}:
\begin{eqnarray}
p_c&=&\inf\{p~|~{\rm Pr}_p(|H(0)|=\infty)>0\}\nonumber\\
&=&\inf\{p~|~E_p(|H(0)|)=\infty\},
\end{eqnarray}
where $H(0)$ is the connected component of $0$.
A theorem by Hammersley \cite{chayes,grimmett} asserts that, for translational invariant lattices and $p<p_c$, the probability $\tau(x,y:p)$ for $x$ to be connected to $y$ decays exponentially with the distance:
\begin{equation}\label{decay}
\tau(x,y:p)\le \exp\left(-\frac{d(x,y)}{\xi(p)}\right)
\end{equation}
with $\xi(p)<\infty.$ Above $p_c$, the probability for $0$ to be
connected to infinity is larger than $0$ by definition of $p_c$, so
$\xi(p)=\infty$ for $p>p_c$.
Let us denote by $p_c(d+1)$ the critical $p_c$ for bond percolation
on the lattice corresponding to a quantum circuit of dimension $d$.
For one dimensional quantum systems, it is easy to calculate the
critical probability $p_c(1+1)$. After contracting the interaction edges, we
simply get percolation on $Z^2$, for which it is known\cite{grimmett} that
the critical probability is a half. Hence:
\beq p_c(1+1)=\frac{1}{2}.\enq
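This value is easy to check by a small computer experiment. The following Python sketch (the box size, the number of samples and the use of a left-to-right crossing as the percolation criterion are arbitrary illustrative choices) estimates the probability of an open crossing of an $L\times L$ box as a function of $p$; the crossing probability rises sharply around $p=1/2$.
\begin{verbatim}
import random

def crossing(L, p):
    """Bond percolation on an L x L grid: is there an open
    left-to-right crossing? (Union-find over the sites.)"""
    parent = list(range(L * L))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(L):
        for j in range(L):
            if j + 1 < L and random.random() < p:   # horizontal bond
                parent[find(i * L + j)] = find(i * L + j + 1)
            if i + 1 < L and random.random() < p:   # vertical bond
                parent[find(i * L + j)] = find((i + 1) * L + j)
    left = {find(i * L) for i in range(L)}
    right = {find(i * L + L - 1) for i in range(L)}
    return bool(left & right)

for p in (0.40, 0.45, 0.50, 0.55, 0.60):
    freq = sum(crossing(60, p) for _ in range(200)) / 200
    print("p = %.2f   crossing probability ~ %.2f" % (p, freq))
\end{verbatim}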
For higher dimensions, we can bound $p_c(d+1)$ away from $0$ and $1$,
but the bounds are not tight:
\begin{lemm}
\( \frac{1}{3}\le p_c(d+1)\le\frac{1}{2^{1/d}}.\)
\end{lemm}
{\bf Proof:}
The upper bound comes from the fact that projecting the $d+1$ percolation with parameter $p$ onto a $1+1$ percolation gives a percolation in $1+1$ with parameter $p^{d}$. This is true since, after a particle interacts with a particle along one axis, it waits $d$ time steps before it interacts again with a neighbor along the same axis. This gives the upper bound: if the original process had exponentially decaying correlations, the projected process could not be above the phase transition, where $0$ is connected to infinity with constant probability.
The lower bound is derived from a standard argument which reduces the problem to a branching process. A branching process starts with one node, and each node gives birth to $k$ nodes with some probability distribution $p(k)$, independently of the other nodes. It is a standard result (see \cite{feller}, for example) that when the expected number of descendants of each node is less than $1$, the dynasty dies out in finite time with probability $1$.
To construct the corresponding branching process,
observe that the degree of the interaction
graph, after contracting the horizontal edges,
is exactly $4$, regardless of $d$.
Starting with the point $0$, we regard each of its neighbors to which it is connected in the percolation as an ancestor of a dynasty. The descendants of each such node are all its neighbors in the percolation process, except for $0$; each descendant has its own descendants, and so on. When we encounter a node which is already in the dynasty, we do not count it again; this way we obtain a tree. Clearly, if the branching process is finite, then the connected component of $0$ in the percolation process is finite as well. However, each neighbor of $0$ is the ancestor of a dynasty in which the expected number of descendants of each node is exactly the number of its neighbors which are not yet in the dynasty, times $p$. Since the degree of the graph is $4$, and one of the neighbors is the node's ancestor, the expected number of descendants is at most $3p$, which is less than $1$ for $p$ less than $\frac{1}{3}$. This gives the desired result. $\Box$
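The extinction argument is easy to illustrate numerically. In the following sketch every node has ${\rm Binomial}(3,p)$ children, mirroring the degree-$4$ graph with one edge leading back to the ancestor (the population cap and the number of samples are arbitrary choices); dynasties die out quickly for $p<1/3$ and survive with constant probability well above it.
\begin{verbatim}
import random

def dynasty_size(p, cap=10**4):
    """Total progeny when every node has Binomial(3, p) children
    (three potential descendants, each edge open with prob. p);
    sizes are truncated at cap."""
    alive, total = 1, 1
    while alive and total < cap:
        born = sum(random.random() < p for _ in range(3 * alive))
        total += born
        alive = born
    return min(total, cap)

for p in (0.25, 1 / 3, 0.40):
    sizes = [dynasty_size(p) for _ in range(300)]
    frac = sum(s == 10**4 for s in sizes) / 300
    print("p = %.2f   mean size = %8.1f   reached cap: %.2f"
          % (p, sum(sizes) / 300, frac))
\end{verbatim}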
The analysis of the percolation process has taught us that
when $p$ is smaller than the critical point for percolation, $p_c$,
the connected components in the system are small.
Going back to the density matrices, using the correspondence lemma,
this implies
that for $\eta < 1-p_c$, the clusters are small and localized.
This will be used in the next section to prove an upper bound on the entanglement between distant sets in this noise regime.
\section{Finite Entanglement Length}
From the correspondence to percolation,
we have that for $\eta < 1-p_c$, the density
matrix of the quantum system can be approximated by
a mixture of clustered states with
localized clusters of logarithmic size.
Thus, distant subsets of
particles are with high probability contained in non-intersecting
clusters, i.e. most of the weight of the density matrix is
concentrated on states in which there is no entanglement between
the two subsets.
The weight of the states in which there
is entanglement between the two sets decays exponentially with the distance
between the sets. By continuity of entanglement, this implies that
the entanglement between the two sets decays exponentially with the distance.
We can now show that the entanglement between any two sets of particles decays exponentially with the distance between them, when the noise rate $\eta$ is such that $1-\eta$ is sub-critical in the percolation process.
The rate of the decay is the {\it entanglement length} of the
system.
The entanglement between the two sets becomes negligible already when the
distance is of the order of $\log(n)$ particles.
This translates to the following theorem:
\begin{theo}\label{theo1}
Consider a $d$ dimensional quantum circuit with
nearest neighbor interactions, subjected to local noise
of the type of stochastic depolarization
or stochastic collapses, with noise rate $\eta$.
If the circuit is initialized with an unentangled state, i.e.
a tensor product state, and if $\eta>1-p_c(d+1)$,
then the entanglement of formation
between any two sets of qubits $A$ and $B$ at any time $t\ge 0$ decays
exponentially with the distance between the two sets:
\[ E_f(A:B)\le \min\{|A|,|B|\}|A|\cdot|B|e^{-\frac{d(A,B)}{\xi(1-\eta)}}.\]
For a general initial state, a similar formula is true
except for a correction term which decays exponentially with time:
\begin{eqnarray}
E_f(A:B,\rho(t))&\le& \min\{|A|,|B|\}\Big(|A|\cdot|B|\,e^{-\frac{d(A,B)}{\xi(1-\eta)}}\nonumber\\
&&+\;n\min\{|A|,|B|\}\,e^{-\frac{t}{\xi(1-\eta)}}\Big).\nonumber
\end{eqnarray}
\end{theo}
\noindent{\bf Proof:} Let us start with the simple case, in which the initial state
is a complete tensor product, i.e. all clusters are of size $1$.
By equation \ref{decay}, the probability for two given particles to be connected decays exponentially in the distance between them.
The correspondence lemma (\ref{corres}) implies that the probability for two particles to be in the same cluster at time $t$ is equal to the probability that they are connected in the percolation model at time $t$; i.e., the probability for a given particle from $A$ and a given particle from $B$ to be in the same cluster is bounded above by $\exp(-d(A,B)/\xi(1-\eta))$. Thus, the probability for any pair of particles from $A$ and $B$ to be in the same cluster is bounded above by $|A|\cdot|B|\exp(-d(A,B)/\xi(1-\eta))$. The density matrix can thus be written as a mixture of one density matrix with weight smaller than $|A|\cdot|B|\exp(-d(A,B)/\xi(1-\eta))$, and another density matrix which is itself a mixture of density matrices in which all the particles in $A$ are in different clusters than all the particles in $B$. The density matrix of the second matrix reduced to $A,B$ is thus separable, and contains no entanglement between $A$ and $B$.
By convexity of the entanglement of formation, the entanglement in the entire density matrix is bounded above by the entanglement in the first density matrix, times the weight of this matrix. The entanglement of the first matrix is at most $\min\{|A|,|B|\}$, and this gives the desired result.
(For measures of entanglement which are not convex,
but strongly continuous, one should replace the term $\min\{|A|,|B|\}$
by the appropriate polynomial from the continuity bound.)
We now proceed to the general initial state.
We will give an upper bound for the case in which the initial
state is one big cluster, and any other case is trivially
implied by it.
To do this, we have to understand where we have
used the fact that the initial state is not entangled.
This was used
for the base of the
induction in the correspondence lemma,
where the fact that all clusters are of one qubit
corresponds to the fact that in the percolation graph,
the initial connected components at
time $0$ are all of size $1$.
To adapt the situation to the case in which
all particles are in one big cluster
at time $0$, we add a horizontal line of length $n$
connecting all particles to
one big connected component at time $t=0$. The correspondence lemma
then goes through. However, equation \ref{decay} no longer holds.
To correct it, we add to it a term which corresponds to
the probability for $A$ to be connected to $B$ by a path
that goes through time $t=0$, i.e. through the additional new line
we have added to the graph.
For such a path to exist, both $A$ and $B$ need
to be connected to time $0$.
The probability for any one of the qubits in $A$ to be connected to any one of the $n$ qubits at time $0$ is at most $n|A|$ times the probability for one qubit at time $t$ to be connected to one qubit at time $0$, which is at most $\exp(-\frac{t}{\xi(1-\eta)})$ by equation \ref{decay}. The same argument
applies for the connection from $B$ to time $0$, and this
gives the desired result.
$\Box$
This shows that the system cannot create entanglement between distant sets of particles: roughly speaking, the typical range of entanglement is microscopic.
This is true for any initial condition, where
the relaxation time to the typical unentangled state
is of the order of $\log(n)$ steps.
This result implies an upper bound on the
entanglement length in the quantum system above the critical noise rate,
and in particular shows that it is finite.
This is done by simply taking the limit in the definition of entanglement length (\ref{expdec4}), which gives:
\begin{coro}\label{coro1}
The entanglement length $\mu(\eta)$ of a $d$ dimensional quantum circuit with
nearest neighbor interactions, subjected to local noise
of the type of stochastic depolarization
or stochastic collapses, with noise rate $\eta$, satisfies:
\[\mu(\eta)\le \xi(1-\eta)\]
and in particular $\mu(\eta)$ is finite for
$\eta>1-p_c(d+1)$.
\end{coro}
This gives a bound on the entanglement length, in terms
of the correlation length in classical bond percolation.
The correlation length of a given lattice
can be easily estimated by computer experiments, and
analytical bounds are given
in \cite{chayes, grimmett}.
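As an example of such a computer experiment, the following sketch grows the open cluster of the origin, estimates the connectivity $\tau(0,x;p)$ at several distances, and reads off an effective $\xi(p)$ from the exponential decay; the lattice size, the value of $p$ and the number of samples are arbitrary illustrative choices.
\begin{verbatim}
import math, random

def tau(p, dist, L=41, trials=2000):
    """Estimate tau(0,x;p): grow the open cluster of the center of
    an L x L box and check whether it reaches distance dist."""
    c, hits = L // 2, 0
    for _ in range(trials):
        seen, stack = {(c, c)}, [(c, c)]
        while stack:
            x, y = stack.pop()
            for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if nxt not in seen and 0 <= nxt[0] < L \
                        and 0 <= nxt[1] < L and random.random() < p:
                    seen.add(nxt)
                    stack.append(nxt)
        hits += (c + dist, c) in seen
    return hits / trials

p = 0.35                                # sub-critical: p < 1/2
for d in (2, 4, 6, 8):
    t = tau(p, d)
    xi = -d / math.log(t) if t > 0 else float("nan")
    print("d = %d   tau = %.4f   effective xi = %.2f" % (d, t, xi))
\end{verbatim}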
\section{Infinite Entanglement Length}
We now want to concentrate on the other noise regime, and show
that below the critical noise the entanglement length is infinite.
One might naively think that this can be deduced from the fact that the density matrix is a mixture of clustered states with linear sized clusters. However, there is a difficulty in pursuing the connection between clusters and entanglement for this purpose, for the following reason. The density matrix is actually a mixture of many clustered states, and a mixture of two clustered states with very large clusters can be a density matrix in which the clusters are of size one. An example is the equal mixture of the two states $\frac{1}{\sqrt{2}}(|0^n\ra+|1^n\ra)$ and $\frac{1}{\sqrt{2}}(|0^n\ra-|1^n\ra)$, which is a non-entangled state. Thus, the sizes of
the clusters can be used for upper-bounds on entanglement, but it
is not clear how to use them in order to show a lower bound on the
entanglement in the system.
We therefore need to use different techniques for
lower bounds on entanglement. We will use techniques from
quantum computation.
A quantum computer embedded on a lattice is a special case of the
quantum systems we are discussing. The particles are quantum bits,
and the interactions are fixed according to the algorithm.
Therefore, corollary \ref{coro1} shows
that the entanglement length is finite above
the critical noise rate also in fault tolerant quantum computers.
For fault tolerant quantum
computers we can also analyze the other side of
the noise scale, and show that the entanglement length in the
system is infinite if the noise rate is below a certain threshold.
We will use the
threshold result\cite{aharonov1,kitaev0,knill1} for
fault tolerant quantum computation, which shows that
quantum computation can be made robust to noise, using quantum error
correcting codes, as long as the noise is smaller than a certain threshold.
In fact, here we need the slightly stronger version of the
threshold result\cite{aharonov1,gottesman}, which asserts that
this can be done even when the quantum system is embedded on a $d$
dimensional lattice. The threshold is then $\eta_0(d)$, which
for $d=1$ is estimated to be $10^{-7}$\cite{aharonov1}.
In the fault tolerant range, two distant sets
of qubits can be entangled, and remain entangled for a long time,
with the amount of entanglement independent of the distance.
We now give an example of a quantum computer which exhibits
entanglement among far parts of the system when $\eta<\eta_0(d)$,
but the entanglement length is finite for noise
$\eta>1-p_c(d+1)$. The idea is that
a fault tolerant computer can simulate any quantum state,
including states which contain entanglement between sets of qubits
which are far apart. Hence, we will
construct a quantum algorithm in which there is entanglement
between two far parts of the system, and make it fault tolerant.
This can be done in many ways, but here is a simple example,
for $d=1$.
Divide the set of qubits into three sets, $A,B,C$. We will create entanglement between $A$ and $B$, while leaving the qubits in the middle, $C$, in a fixed basis state. This will be done by constructing the state:
\beq\label{state}
\frac{1}{\sqrt{2}}\left(
|0^m\ra_A\otimes |0^n\ra_C\otimes |0^q\ra_B+
|1^m\ra_A\otimes |0^n\ra_C\otimes |1^q\ra_B\right)
\enq
on a fault tolerant quantum computer, and keeping this state for a
long time, by applying error corrections.
This state indeed contains entanglement between the two registers
$A$ and $B$, which are $n$ sites apart.
The algorithm which constructs such a state
is very simple, and uses only two basic quantum gates:
The Hadamard gate, which is a one qubit gate applying the following unitary
transformation \begin{eqnarray} |0\ra &\longmapsto&
\frac{1}{\sqrt{2}}( |0\ra+|1\ra)\\\nonumber
|1\ra &\longmapsto&
\frac{1}{\sqrt{2}}(|0\ra-|1\ra),\end{eqnarray}
and the controlled NOT gate
which is a two qubit gate applying the following
unitary transformation
(said to be applied {\it from} the first qubit to the second one):
\beq |a\ra \otimes |b\ra \longmapsto |a\ra\otimes
|a\oplus b\ra,\enq
where $\oplus$ means addition mod $2$.
Using these gates, it is easy to create the state \beq
\frac{1}{\sqrt{2}}(|0^{m+q}\ra+|1^{m+q}\ra) \enq on the first $m+q$ qubits, by
applying a Hadamard gate on the first qubit and then controlled NOT gates
from
the first qubit to the second, from the second to the third, and
so on. Then, we want to move the qubits $m+1,...,m+q$ across register $C$, to the sites of register $B$. To do this, we first swap the last of these qubits with the qubit to its right until it reaches the right-most site of $B$; in the same way we bring the next-to-last qubit to the next-to-last site of $B$, and so on, until all $q$ qubits occupy the sites of $B$, which achieves the desired state with only nearest neighbor interactions.
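For concreteness, the following sketch implements the circuit just described with plain statevector arithmetic, for the illustrative register sizes $m=2$, $n=3$, $q=2$ (any small values would do); the printout exhibits exactly the two computational basis states of the state (\ref{state}).
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]])

def apply(state, gate, targets):
    """Apply a k-qubit gate to the given (ascending) qubit axes."""
    k = len(targets)
    g = gate.reshape((2,) * (2 * k))
    state = np.tensordot(g, state,
                         axes=(list(range(k, 2 * k)), list(targets)))
    return np.moveaxis(state, list(range(k)), list(targets))

m, n, q = 2, 3, 2                    # registers A, C, B
N = m + n + q
psi = np.zeros((2,) * N)
psi[(0,) * N] = 1.0

psi = apply(psi, H, [0])             # Hadamard on the first qubit
for i in range(m + q - 1):           # chain of controlled NOTs
    psi = apply(psi, CNOT, [i, i + 1])
for j in reversed(range(q)):         # move qubits m..m+q-1 across C
    for pos in range(m + j, m + n + j):
        psi = apply(psi, SWAP, [pos, pos + 1])

for idx in np.argwhere(np.abs(psi) > 1e-12):
    print("".join(map(str, idx)), psi[tuple(idx)])
\end{verbatim}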
This algorithm by itself is not fault tolerant, and in the presence of any amount of noise, i.e. $\eta>0$, the entanglement in the system will be lost immediately. However, we can make this algorithm fault tolerant by the methods in \cite{aharonov1,kitaev0,knill1}, as long as $\eta$ is smaller than $\eta_0(d)$, the threshold for fault tolerance for $d$-dimensional quantum computers \cite{aharonov1,gottesman}. These results are too complicated to explain here in detail.
In a nutshell, fault tolerance is
achieved by encoding the qubits using
quantum error correcting
codes, and computing the algorithm on the encoded states, while
applying quantum error correction on the state frequently.
Each qubit is replaced by
polylog($n$) qubits, encoding its state.
The state $|0\ra$ is encoded by the state $|S_0\ra$ of polylog($n$)
qubits, and similarly $|1\ra$ is encoded by the state $|S_1\ra$.
Let us denote by
$A'$, $B'$ and $C'$ the qubits encoding
the original sets of qubits $A$, $B$, and $C$, respectively.
If no fault occurs, at the end of the
algorithm the state of the system will be in the
state (\ref{state}) encoded by the quantum error correcting code:
\beq\label{encstate}
|S_0^m\ra_{A'}\otimes |S_0^n\ra_{C'}\otimes |S_0^q\ra_{B'}+
|S_1^m\ra_{A'}\otimes |S_0^n\ra_{C'}\otimes |S_1^q\ra_{B'}
\enq
normalized by a factor of $\frac{1}{\sqrt{2}}$.
The entanglement in this state will remain there forever if errors do not occur. However, errors do occur. By fault tolerance,
this means that at the end of the computation
the density matrix is polynomially close to a density matrix $\rho$
which can be corrected to the
correct state (\ref{encstate}) by noiseless quantum error corrections.
Due to continuity of entanglement, it suffices to argue
that such $\rho$ contains a constant amount of entanglement.
But this is true since we know that $\rho$ can be corrected
to the state (\ref{encstate}) by local operations not involving interactions
between $A'$ and $B'$.
Since entanglement cannot increase under local operations, the entanglement between $A'$ and $B'$ in $\rho$ is at least that of the encoded state (\ref{encstate}), i.e. one entanglement unit.
The distance between the actual density matrix
and a correctable density matrix $\rho$ is,
by \cite{aharonov1}, at most the number of time steps $t$
divided by a polynomial in $n$. This distance
is smaller than some constant $\epsilon$
as long as the number of time steps is polynomial in the size
of the system $n$. Thus, by strong continuity
the entanglement between $A'$ and $B'$
will remain bounded from below by a constant for
polynomially many time steps.
After polynomially many steps, we can replace all qubits by
qubits in the state $|0\ra$, and run the whole algorithm again.
The average entanglement over time from $0$ to $\infty$
is very close to one,
since the time it takes to construct the state is much smaller than
the polynomial time for which the entanglement remains in the system.
This proves the existence of a non trivial sub-critical side of
the phase transition:
\begin{theo}
The entanglement length in the $d$ dimensional
fault tolerant quantum computer
defined above satisfies
\[ \mu(\eta)=\infty\]
for $\eta$ smaller than the threshold for fault tolerance in $d$
dimensional quantum computers, i.e. $\eta<\eta_0(d)$. $\Box$
\end{theo}
\section{Other Quantum Systems}\label{gen}
The model of a noisy quantum computer actually holds not only for
quantum systems designed to serve as computational devices,
but for a much broader class of
physical systems as well.
We first claim that putting aside the noise process,
any $d$ dimensional quantum system, in which the particles
are located in space with low enough density, and
in which interactions occur only between particles which are not
too far apart,
can be modeled by a quantum circuit.
This can be done by discretizing the medium into very small cells, such that each cell contains at most one particle. Time is discretized into sufficiently small intervals, such that a particle can only move to a neighboring cell in one time step. Then, the
movement of particles can be modeled
by an interaction between an occupied and an un-occupied cell, and
since the density of particles is low, one particle never interacts
with more than one other particle at the same time, so the notion of quantum
gate is appropriate.
The noise model used in this paper is quite general as well, when systems with low density or instantaneous interactions are considered.
During the time interval in which a particle does not participate
in any interaction, stochastic collapses are actually equivalent to
a process of local
decoherence.
Assume that each particle interacts with its own
independent thermal bath, and the
Markovian assumption is applied, so that the environment of each
particle is renewed each time step. This corresponds to the process
in which the off diagonal terms in the density matrix of
one particle decay by some factor between two interactions\cite{palma}.
If the decoherence process operates for a time $\Delta t$, the $(i,j)$ element of the density matrix transforms as \beq \rho_{i,j}
\longmapsto \rho_{i,j}\exp(-\gamma\Delta t(1-\delta_{i,j})).\enq
If we set $\exp(-\gamma\Delta t)=1-\eta$, we get
\beq \rho_{i,j}\longmapsto (1-\eta)\rho_{i,j}+
\eta\rho_{i,j}\delta_{i,j},\enq which is equivalent to a
measurement with probability $\eta$.
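This equivalence is easy to verify numerically: in the following sketch, dephasing for a time $\Delta t$ and a computational-basis measurement applied with probability $\eta=1-\exp(-\gamma\Delta t)$ act identically on a random single-qubit density matrix (the values of $\gamma$ and $\Delta t$ are arbitrary).
\begin{verbatim}
import numpy as np

gamma, dt = 0.8, 0.5                     # arbitrary illustrative values
eta = 1 - np.exp(-gamma * dt)

# a random single-qubit density matrix (Hermitian, psd, trace one)
A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

# (i) dephasing: off-diagonal entries decay by exp(-gamma dt)
dephased = rho * np.exp(-gamma * dt * (1 - np.eye(2)))

# (ii) computational-basis measurement applied with probability eta
measured = (1 - eta) * rho + eta * np.diag(np.diag(rho))

print(np.allclose(dephased, measured))   # True
\end{verbatim}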
Similarly, the depolarization process can be
presented as a gradual change of the density matrix of one particle.
The above arguments show that the model of noisy
quantum circuits which we are discussing
is interesting as a representative of
the class of quantum systems with
macroscopically many
finite state particles, local instantaneous
interactions and local decoherence noise.
The analysis done in this paper regarding
upper bounds on the entanglement length in the super-critical phase
goes through, and therefore theorem \ref{theo1} and corollary \ref{coro1}
can be generalized to this case.
In such quantum systems, our analysis provides an explanation for the emergence of macroscopic classical behavior above the critical noise rate, as will be discussed in the conclusions.
\section{Experimental Verification}
Unfortunately, we do not yet have a physical realization of a quantum computer of more than several qubits, on which the existence of a phase transition in the entanglement length of fault tolerant quantum computers could be verified. However, for the more general case of quantum systems with local interactions and local noise, satisfying the requirements of section \ref{gen}, the bound on the entanglement length in the super-critical regime can in principle be tested experimentally.
What is needed to perform such an experiment is to be able to measure
with high enough accuracy the joint density matrix of
two subsets of the system.
The entanglement between the two sub-systems can then be numerically
approximated, using a (doable, but extremely difficult)
minimization over equation \ref{ef}.
The entanglement can then be found as a function
of the distance between the sets, from which the entanglement length
can be deduced.
An extremely interesting open problem is to give a concrete design
for such an
experiment, for an existing quantum system,
and to compare the outcomes
with the entanglement length predicted by the percolation process.
\section{Quantum-Classical Transition}\label{quantclass}
The results presented here suggest that
the emergence of classical macroscopic phenomena
in large quantum systems can be attributed, in certain cases,
to the fact that the noise rate is larger than a certain critical
point, so that the entanglement length is finite.
However, in this paper we have merely introduced a new phenomenon. The list of questions which remain open is extremely large, and spans different physical fields.
The first and most basic question should be
how general these results are.
In this paper, we have been able to show the existence
of a non trivial sub-critical phase in fault tolerant
quantum computers. Are there more natural quantum systems, in particular systems which are homogeneous (or periodic) in space and in time, with local interactions and local noise, that are able to maintain long range entanglement in the presence of weak or zero local noise for a long time?
Does a random quantum system, i.e.
in which random interactions are applied, exhibit long range
quantum correlations?
Such systems will provide more examples for quantum systems
in which a phase transition in entanglement length occurs.
It should be noted here that the notion of zero noise rate does not
trivially coincide with that of zero temperature,
i.e. long range entanglement
in the ground state of the Hamiltonian of the system.
An important observation is that
the model discussed here
deals with non-equilibrium quantum systems.
The quantum systems we consider here can be in a
steady state, but the density matrix is not in the Gibbs
distribution of the eigenvectors of some Hamiltonian.
The reason for this is that we did not allow the
system to approach equilibrium. Our noise model, or the
interaction with the thermal bath, is limited to local
interactions.
This is a crucial ingredient that causes the phase transition. It is the fact that two forces of comparable strength compete, local interactions within the system against local interactions with the environment, that gives rise to the critical noise rate. The fact that the quantum computer does not
achieve equilibrium despite the noise is explained by the fact
that the system is cooled constantly by
quantum error corrections.
It is left as an open problem to further investigate possible
equilibrium phase transitions in entanglement, and the connection
to the non-equilibrium phase transition presented here.
The reader is referred to \cite{marro} and references therein
for an introduction to non-equilibrium phase transitions.
We view the fact that we did not allow the system to achieve
equilibrium as very important in the derivation here.
Indeed, it is worthwhile to ask which of our assumptions
regarding the properties of the physical system are
essential, and which are technical. Intuitively, the locality of both
noise and interactions seems crucial. This is true since
this locality is what makes the two competing
forces, the interactions which tend to
entangle the system and the noise which tends to disentangle it,
comparable in power, which gives rise to the phase transition.
It seems, however, that the assumption on
the exact form of the noise process might not be so important,
and neither is the discretization of the
interactions.
An important open problem is to relax the assumptions
used in this paper, and to generalize
the results presented here to other noise models, and
to the continuous case. In particular, it is not clear how to
generalize the results to the case in which
the particle interacts simultaneously with all its neighbors
and the environment. It seems that a considerably different
approach would be needed in this case.
If the phenomenon of a phase transition in entanglement is indeed general, its effect on our understanding of the transition from quantum to classical physics needs to be thoroughly explored.
One important question is whether there exists
some classical or quasi-classical description of the behavior
of a quantum system in its super-critical phase.
Another, related, question is whether the existence of a
phase transition in entanglement induces other
quantum phase transitions at the same critical point
in the same system.
A set of open questions regarding the phase transition
comes from statistical physics. For example,
what are the critical exponents related to this phase transition?
What is the universality class of this phase transition?
In fact, it is not clear that there is only one critical
point here. In the case of the quantum
computer, or other quantum systems, there might well be
an intermediate
regime of noise, in between the two thresholds, for which the
entanglement behaves in a different way, i.e. its dependence on the
distance is neither an exponential decay nor constant. The question of showing that there is only one critical point, at which a transition occurs from exponential decay to independence of the distance, remains open.
A very interesting problem is to come up with a better order
parameter related to entanglement, rather than the entanglement
length. There are many problems
with the entanglement length as an order parameter. The most
important one is that it might be that the system is very
entangled, but the entanglement between two distant subsets is
zero. Such is for example a system in the state $\frac{1}{\sqrt{2}}(|0^n\ra+|1^n\ra)$,
for which any subsystem is non-entangled. Entanglement in such
very entangled quantum systems will not be detected when looking
at sub-systems, and the entanglement length will therefore contain
no information about the actual behavior of the entanglement in
the system. Another motivation for this question is provided
by \cite{aharonov2}, where the sizes of the clusters
is analyzed relaxing the assumptions of nearest neighbor
interactions. The sizes of the clusters in this case indeed
transform from logarithmic to linear at a critical noise rate.
However, the notion of entanglement length cannot be defined in a system
without geometry, so it is not clear how to define an order parameter
which exhibits the phase transition in this case.
To summarize, we have discovered a phenomenon of a phase transition in entanglement in quantum computers, and in general in quantum systems with local decoherence and local interactions which are able to generate long range entanglement in the absence of noise.
The suggestion to explain the transition from
quantum to classical macroscopic behavior as a phase transition
in entanglement is fundamentally different from the
standard point of view of gradual transition, usually explained by
decoherence.
Our results have experimental implications, and raise
a long list of open problems related to the foundations of
quantum mechanics, as well as to quantum statistical physics.
\section{Acknowledgments}
I am most grateful to Michael Ben-Or and to Michael Nielsen.
Discussions with them inspired this work.
I am also indebted to Joseph Imri, David Ruelle, and Wojtek Zurek for interesting discussions.
I would like to thank Jennifer Chayes, Christian
Borgs, Jeong Han Kim, David Aldous and Oded Schramm
for useful comments about classical percolation.
Thanks to Julia Kempe, Daniel Lidar and Michael Nielsen for
useful comments and corrections on early drafts of this paper.
\bibliographystyle{plain}
\section{Introduction}
The stability of Dark Matter (DM) on cosmological time scales strongly suggests the existence of new accidental symmetries in Nature.
In a minimalistic approach where DM is a representation of the Standard Model (SM) gauge group, the possibilities that DM is accidentally stable are very limited and constrained \cite{Cirelli:2005uq}. A different class of models where DM is accidentally stable with a wide variety of SM quantum numbers is the one where DM is a baryon-like object of some strongly interacting dark sector \cite{Antipin:2015xia,Mitridate:2017oky,Kribs:2016cew}, in the framework of vector-like confinement \cite{Kilic:2009mi}. In these scenarios, the presence of elementary `dark' fermions with an accidental U(1) symmetry guarantees the stability of the lightest dark baryons, as the stability of protons and nuclei is related to baryon number conservation in the SM.
In such scenarios, dark nuclear forces are expected to give rise to dark nuclei. For example in \cite{mccullough1,mccullough2} it was shown that bound states with baryon number 2 exist and the absence of a Coulomb barrier implies that states with large baryon number very likely exist in the spectrum. The possibility that DM is confined into more complex structures is of obvious theoretical and experimental interest.
In this work we wish to quantitatively study the nucleosynthesis of the dark sector.
In the SM, the formation of light elements during Big Bang Nucleosynthesis (BBN) depends upon a few odd circumstances: for example on the fact that the deuterium binding energy is small and comparable to the proton-neutron mass difference. We find that even for baryonic DM, the success of nucleosynthesis depends on the formation of dark deuterium
but, contrary to the SM, the different densities and binding energies of DM require a precise knowledge of the dark deuterium production cross-section.
First, we establish the general features of dark nucleons and nuclei in models with a new confining gauge interaction and fermions in vector-like representations. Under broad assumptions the spectrum is dominated by the nuclear binding energies $E_B$, much smaller than the nucleon mass $M_N$, and electro-weak effects can be included perturbatively. Importantly, the synthesis of nuclei requires a release of energy, which is automatically possible in models with electro-weak constituents, where one can always radiate a photon,
\begin{equation}\label{BSF}
\begin{tikzpicture}[line width=1.5 pt, scale=1.7]
\node at (-1.2,0.25) {$N$};
\node at (-1.2,-0.25) {$N$};
\draw[] (-1,0.25)--(0,0);
\draw[] (-1,-0.25)--(0,0);
\draw[line width=4pt, color=red] (1,0)--(0,0);
\draw[vector,color= blue] (0,0)--(0.7,.7);
\draw[fill, color=gray] (0,0) circle (.3cm);
\node at (-1.8,0) {$(\mathbf{r},S)\, \LARGE \bigg\{$};
\node at (1.8,0) {$D\, (\mathbf{r}',S')$};
\node at (1.05,.7) {$\ \\ ~~ \gamma, W, Z$};
\end{tikzpicture}
\nonumber\,
\end{equation}
The key input to determine the abundance of nuclei is the deuterium cross-section. At first sight its calculation involves strongly coupled nuclear reactions that seem difficult to control. This is not the case, however, in light of the smallness of $E_B/M_N$ and a precise computation is possible for shallow bound states. We will get inspiration from effective field theory of nucleon interactions \cite{Kaplan:1998tg}, applying the same techniques to the case of dark nucleons with arbitrary quantum numbers $(\mathbf{r},S)$.
The cross-sections for bound state formation through electric (dipole) and magnetic interactions are found to be,
\begin{equation}
\sigma v_{\rm rel}\big|_{\rm electric} = K_E \frac {2\pi \alpha^3}{v_{\rm rel}^3}\times \frac {\pi \alpha}{M_N^2}\left( \frac {M_N}{E_B}\right)^{\frac 1 2} v_{\rm rel}^2\,,\quad
\sigma v_{\rm rel}\big|_{\rm magnetic}= K_M \frac {2\pi \alpha}{v_{\rm rel}}\times \frac {\pi \alpha}{M_N^2}\left( \frac {E_B}{M_N}\right)^{\frac 3 2}\,,
\nonumber
\end{equation}
where the first factor accounts for possible Sommerfeld enhancement (SE) and $K_E$ and $K_M$ are group theory factors.
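To get a feeling for the numbers, the following sketch evaluates the two cross-sections at a benchmark point; the values $\alpha\simeq 1/30$, $v=0.1$, $E_B/M_N=0.05$, $M_N=3$ TeV and $K_E=K_M=1$ are illustrative assumptions rather than model predictions.
\begin{verbatim}
import numpy as np

alpha = 1 / 30.0     # electro-weak-size coupling (benchmark choice)
MN = 3000.0          # nucleon mass in GeV (benchmark choice)
EB_over_M = 0.05     # binding energy over nucleon mass
v = 0.1              # typical relative velocity around freeze-out
KE = KM = 1.0        # group-theory factors set to one here

sv_E = KE * (2 * np.pi * alpha**3 / v**3) \
         * (np.pi * alpha / MN**2) * EB_over_M**-0.5 * v**2
sv_M = KM * (2 * np.pi * alpha / v) \
         * (np.pi * alpha / MN**2) * EB_over_M**1.5

print("electric: sigma*v*M_N^2 = %.1e" % (sv_E * MN**2))
print("magnetic: sigma*v*M_N^2 = %.1e" % (sv_M * MN**2))
\end{verbatim}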
With these results in hand we can easily compute the abundance of deuterium and heavier nuclei in a given model by solving the relevant Boltzmann equations. For models with electro-weak charges we find that a fraction of DM is bound in deuterium, while heavier elements are not significantly produced.
Formation of nuclei through photon emission can be tested in indirect detection experiments such as FERMI.
The present work clarifies how in general the first steps of dark nuclei formation can occur, based on first principle calculations. In this respect it provides a quantitative input for works that focus on the asymptotic fusion of large nuclei \cite{Hardy:2014mqa,Hardy:2015boa}, and also for models where the dark baryon couples to a dark photon as in \cite{1406.1171}, which we reconsider. On a more technical side, this paper extends the analysis of cosmological bound state formation of perturbative bound states \cite{vonHarling:2014kha,Mitridate:2017izz} to strongly coupled nuclei.
The paper is organised as follows. After an introduction to the properties of nuclei in vector-like confinement scenarios in section \ref{sec:darkforce}, we derive in section \ref{sec:estimates} the Boltzmann equations for dark deuterium in a few models. We compute the deuterium formation cross section with non-relativistic effective field theory techniques in section \ref{sec:calculation} and the appendix. Section \ref{sec:triplet} discusses the case of dark nuclei made of SU(2)$_L$ triplets, while in section \ref{sec:darkphoton} we consider dark nuclei charged under a dark photon. We conclude and outline future directions in section \ref{sec:conclusions}.
\section{Dark Force}
\label{sec:darkforce}
We frame our discussion of DM nuclei within the scenarios of vector-like confinement \cite{Kilic:2009mi}.
The SM is extended with a new non-abelian gauge force and fermions charged under the SM and dark group, described by the renormalisable lagrangian,
\begin{equation}
\mathscr{L}=\mathscr{L}_{\rm SM}- \frac { {\cal G}_{\mu\nu}^a{\cal G}^{a\,\mu\nu}} {4 g_{\rm DC}^2}+{\bar \Psi}(i\slashed{D}-m_\Psi)\Psi\,.
\end{equation}
Such framework has special interest for DM as it automatically generates accidentally stable DM candidates \cite{Antipin:2015xia}.
We will focus on SU(N)$_{\rm DC}$ gauge theories with fermions in the fundamental of dark colour\footnote{In models with real reps such as fundamental of SO($N$) or adjoint of SU($N$) \cite{Contino:2018crt} baryon number is only conserved modulo 2 so they do not support stable nuclei with $A>1$. Models with pseudo-real representations such as fundamental of Sp($N$) do not have stable baryons.} and masses $m_\Psi$ smaller than the confinement scale $\Lambda$.
Upon confinement the spectrum consists of dark pions and dark baryons with charges under the SM determined by their constituents.
The theory features an accidental U(1) symmetry, the dark baryon number, under which the fermions $\Psi$ transform with the same phase. This symmetry guarantees the stability of the lightest baryon, which for appropriate choices of SM representation can be a viable DM candidate. The very same dark baryon number also guarantees the exact stability of the lightest states in each charge sector.
This leads to the stability or metastability of nuclei and, more in general, of states of matter that carry baryon number. The quantum numbers under the SM are uniquely fixed, as gauge interactions naturally select the smallest representation as the most bound.
As an example we will consider the simplest models with SU(3)$_{\rm DC}$ gauge group:
\begin{itemize}
\item $\Psi$ is a triplet under SU(2)$_L$. The lightest baryon $V\equiv\Psi\Psi\Psi$ is also a triplet and has spin 1/2. The lightest states are pions
transforming as a triplet and quintuplet with mass splitting $\Delta M_\Pi^2\sim \alpha_2/(4\pi) \Lambda^2$. The triplet is accidentally stable due to G-parity \cite{hill},
but it can decay through dimension-5 operators.
\item $\Psi$ is a singlet of the SM. In this context some heavier, unspecified charged fermions are needed to guarantee thermal contact with the SM, but they play no other role in the dynamics. The lightest baryon has spin 3/2 for one flavour of singlets or 1/2 for more flavours.
\end{itemize}
Previous studies focussed on symmetric DM, where the relic abundance is generated through thermal decoupling, which leads to a baryon mass around 100 TeV to reproduce the known DM density. Here we will focus on asymmetric DM as this maximises the formation of nuclei. This requires a suppressed symmetric component, so naturally $M_N < 100$ TeV. Direct searches at the LHC place a bound of around 1 TeV on the mass \cite{Barducci:2018yer,Kribs:2018ilo}, while the singlets just need to be heavier than about 10 MeV to avoid bounds on the number of relativistic species.
\subsection{Properties of Dark Nuclei}\label{sec:properties}
At zero temperature dark nucleons are expected to bind into larger nuclei due to residual strong interactions.
Based on nuclear physics examples, as well as lattice results \cite{Beane:2012vq}, we consider as typical binding energies,
\begin{equation}
0.001< E_B/M_N < 0.1\,.
\end{equation}
In the SM due to the electro-static repulsion, only nuclei with atomic number $A\lesssim 100$ are long lived or cosmologically stable.
Moreover the presence of bottlenecks associated to the details of the nuclear spectrum implies that only the lightest nuclei are synthesised cosmologically.
For accidental DM models with electro-weak constituents the situation is likely different.
If the mass of the dark baryon is larger than the electro-weak splittings, it is possible to exploit the approximate SU(2)$_L$ symmetry to classify nuclei into SM representations at least for the nuclei with small atomic number $A$. Actually, in the limit where we neglect SM interactions the theory has a larger accidental global symmetry SU($N_F$) corresponding to the dimension of the lightest nucleon representation. All the states can thus be classified into multiplets of SU($N_F$). Electro-weak interactions break this symmetry splitting the lightest nucleon multiplet into SM multiplets by an amount,
\begin{equation}
\Delta M_N \sim I_N(I_N+1) \frac {\alpha_2}{4\pi} M_N
\label{nucleonsplitting}
\end{equation}
where $I_N$ is the isospin of the nucleon.
The electro-weak splitting of nucleons induces a splitting between the dark nuclei. Since the shift above can be larger than the nuclear binding energies, the splitting of nuclei made of different nucleon representations is dominated by the splitting of the constituents over a large range of parameters. One consequence is that, as in the SM, heavier baryons do not participate in nucleosynthesis.
Cosmologically the bound state formation begins at temperatures of order $E_B/20$, where $E_B$ is the binding energy.
Since the heavier nucleons decay into the lighter ones through strong interactions we can neglect the population of heavier nucleons as long as,
\begin{equation}
\frac {E_B} {20}< \Delta M_N
\longrightarrow \frac {E_B} {M_N}< \alpha_2
\end{equation}
This is the typical range expected for nuclear binding energies.
Thus we can focus on the nuclei made of the lightest SM rep. The nuclei made of the lightest nucleon multiplet can be decomposed into SU(2)$_L$ reps
split by nuclear binding energies $\Delta E_B^N$. The electro-weak correction to the binding energy is given by,
\begin{equation}
\Delta E_B^W \sim \lambda \frac{\alpha_2}{R_{\rm nucleus}}
\end{equation}
where $\lambda=I_N(I_N+1)-I_B(I_B+1)/2$ is the effective coupling in the isospin channel of the nucleus.
For shallow bound states we estimate the size of the nucleus with the scattering length $1/a=\sqrt{M_N E_B}$, see section \ref{sec:calculation}.
It follows that the nuclear binding energies dominate for $E_B/M_N> 10^{-3}$. Electro-weak corrections can be included perturbatively in the relevant region
of parameters, and for $R_{\rm nucleus}< M_Z^{-1}$ the SM gauge fields can be treated as massless.
For large $A$ a multitude of SM reps exist, with isospin up to $A I$ where $I$ is the nucleon isospin. Of these, the smaller representations are attractive, making the nuclei more bound, while the largest representations have a Coulombian energy that scales as $A I (A I+1)$ and unbinds nuclei of arbitrarily large charge. In light of this, the valley of stability of dark nuclei will likely extend to very large $A$, at least for small electro-weak charges.
Only the lightest baryon in each baryonic number sub-sector will be stable so that at late times all baryons produced cosmologically
decay to a neutral state with isospin 0 or 1/2. This process is controlled by the decay rates among and within isospin multiplets induced by
electro-weak interactions, analogously to the de-excitation of the hydrogen atom. In particular,
\begin{itemize}
\item Each isospin multiplet can decay to states with smaller isospin.
Since we focus on the $s$-wave bound states the rate is dominated by emission of a photon through magnetic dipoles with $\Delta S=1$.
As we show in appendix \ref{sec:appB} the rate can be computed model independently as,
\begin{equation}
\Gamma \approx \alpha \frac{(E_{B_1}-E_{B_2})^2}{M_N^2} \sqrt{E_{B_1} E_{B_2}}
\end{equation}
where $E_{B_{1,2}}$ are the binding energies of the two bound states with $\Delta I=\Delta S=1$.
\item The splitting within each baryonic electro-weak multiplet is given by \cite{Cirelli:2005uq},\footnote{The size of the nuclei scales as $A^{1/3}/M_N$,
therefore they can be treated as elementary as long as the size is smaller than $1/M_W$.}
\begin{equation}
\Delta M_N= Q^2 \alpha_2 M_W \sin^2 \frac{\theta_W}{2}
\end{equation}
which for an SU(2)$_L$ triplet gives $\Delta M_N=165$ MeV. Since charged and neutral components remain in equilibrium, the abundance of charged nuclei is then approximately $n_{V^\pm}\approx 2 n_{V^0}$ for $T> \Delta M_N$.
\end{itemize}
In what follows we will study the cosmological synthesis of dark nuclei. Cosmologically states with large $A$ could be produced through fusion processes via aggregation of heavy elements. In \cite{Hardy:2014mqa,1406.1171} it was argued that at least for light DM the dark synthesis is very efficient and one ends up with a distribution of large nuclear states. These studies overlook the first step of formation of nuclei with baryon number 2 (dark deuterium) that cannot take place through simple fusion processes but requires
some energy to be emitted. As we will show, the deuterium abundance is often suppressed, leading to a small abundance of larger nuclei.
\section{Deuterium Abundance}\label{sec:estimates}
As in the SM, we assume a separation of scales between the nucleon masses $M_N$ and the nuclear binding energies $E_B$.
In this case the treatment of dark nucleosynthesis becomes relatively simple.
At temperatures below $E_B$, when nuclear reactions can form nuclei, the nucleons are non-relativistic and already decoupled from the SM plasma. The yield of dark matter number is then set by
\begin{equation}\label{YDM}
Y_{\rm DM}=\frac{n_{\rm DM}}{s}=4.3\, \times \, 10^{-13} \ \bigg(\frac{\mathrm{TeV}}{M_N}\bigg)\,,
\end{equation}
where $n_{\rm DM}$ is the DM number density, $s=(2 \pi^2/45) g_\star T^3$ is the entropy density of the plasma, and $g_\star$ the number of relativistic degrees of freedom at temperature $T$.
We will assume for simplicity that DM is asymmetric, since this maximises the production of bound states.
Given that the thermal abundance indicates $M_N\sim 100$ TeV, we assume the DM mass to be below this value.
This is also necessary to produce a significant fraction of nuclei.
Dark BBN takes place during a period where the dark baryon number is conserved, that is
\begin{equation}
1 = \sum_{A, \{i\}} \frac{A Y_{A_i}}{Y_{\rm DM}}\equiv\sum_{A, \{i\}}X_{A_i}\,,
\end{equation}
\end{equation}
where we have introduced the mass fractions $X_{A_i}=A Y_{A_i}/Y_{\rm DM}$ of a nucleus with atomic number $A_i$.
Practically, we are only tracking how the total DM yield in eq.~(\ref{YDM}) gets redistributed among different nuclear species.
A successful nucleosynthesis of states with $A\gg 2$ depends on the efficiency of deuterium formation, similarly to the SM case.
In this section, we derive the parametric dependence of the dark deuterium abundance on the relevant parameters of the theory with a focus on two scenarios
\begin{itemize}
\item SM charged constituents, with production $N+N\to D +X$, where $X=W,Z,\gamma$.
\item SM neutral constituents, with production $N+N+N\to D+N$.
\end{itemize}
In general bound state formation requires some energy to be released.
We do not consider the possibility of emitting a pion of the strong sector. At first sight this process could be favoured
by the strong coupling to nucleons but it is likely forbidden kinematically. Indeed, based on nuclear physics examples \cite{Beane:2012vq}, we expect the following scaling
\begin{equation}
\sqrt{M_N E_B} \sim 0.3 m_\pi\,.
\end{equation}
This is particularly significant for models with singlets where
no other light states exist.
\subsection{SM charged constituents}\label{subsec:SMcostituents}
When the constituents of DM have electro-weak charges, direct searches at LHC imply that the scale of new states must be larger than about 1 TeV \cite{Barducci:2018yer,Kribs:2018ilo}.\footnote{We will focus mostly on electro-weak constituents that are more naturally compatible with direct detection bounds. See however \cite{DeLuca:2018mzn} for counter-examples.}
For this value of the mass the numerical density in eq.~(\ref{YDM}) is much smaller than the yield of baryonic matter, suggesting that the formation of bound states is less likely than in the SM.
We consider the first step of formation of dark deuteron $N+N \to D+ X$, where $X$ stands for an electro-weak gauge boson $\gamma/Z/W$ in equilibrium with the SM bath. For simplicity we assume that no other channels contribute, so that the Boltzmann equation for $D$ takes the form
\begin{equation}\label{eq:2to2}
\dot n_D+ 3H n_D= \langle \sigma_{D} v\rangle\left[ n_N^2- \frac {(n_N^{\rm eq})^2}{n_D^{\rm eq}}n_D\right]\,,
\end{equation}
where $n_N$ and $n_D$ are the numerical densities of dark baryons and dark deuterium. We assume radiation domination, during which the Hubble parameter is
\begin{equation}
H= \sqrt{g_\star (T)\frac{\pi^2}{90}} \frac{T^2}{M_{\rm Pl}}\,,\quad M_{\rm Pl}\equiv2.4 \times 10^{18}\, \mathrm{GeV}\,.
\end{equation}
The second term in the square brackets of eq.~\eqref{eq:2to2} becomes exponentially suppressed as $e^{-E_B/T}$, when the temperature drops below the binding energy.
At this stage, dark deuterium can be formed, so that it is convenient to introduce the time variable $z\equiv E_B/T$.
In terms of the mass fractions, the Boltzmann equation can be cast in the following form,
\begin{equation}\label{boltzmann22}
\frac {d X_{D}}{dz}= 2\frac{c \sqrt{g_\star} M_{\rm Pl} E_B \langle\sigma_{D} v\rangle}{z^2} Y_{\rm DM}\bigg[ (1-X_D)^2 - \beta \big(\frac{g_N^2}{g_D g_\star}\big) \big(\frac{z^{3/2} e^{-z}}{Y_{\rm DM}}\big)\big( \frac{M_N}{E_B}\big)^{3/2} \frac{X_D}{2} \bigg]\,
\end{equation}
where $c=1.32$, $\beta=45/(16 \pi^{7/2})$ and $g_\star$ is the effective number of relativistic degrees of freedom. Only at a temperature significantly lower than the binding energy can the bound state be produced without being immediately dissociated. Given the smallness of $Y_{\rm DM}$, this happens at values of $z_f\approx 20$, when the coefficient of the term linear in $X_D$ inside the square brackets becomes of O(1). At this time we have the most efficient stage of deuterium synthesis, starting with boundary condition $X_D(z_f)\approx 0$. Soon after, $X_D$ approaches a constant value that is linearly sensitive to the product of the binding energy and the cross-section, given by
\begin{equation}
\frac{X_D}{1-X_D}=2 c\sqrt{g_\star} M_{\rm Pl} E_B Y_{\rm DM} \int_{z_f}^\infty dz \frac{\langle \sigma_{D} v\rangle}{z^2}\,.
\end{equation}
Away from saturation, $X_D\ll 1$, we have the following for an $s$-wave cross section,
\begin{equation}
X_D = 5\%\, \bigg(\frac{3\,\rm TeV}{M_N}\bigg)^2\bigg(\frac{E_B/M_N}{0.05}\bigg)\bigg( \frac{\langle \sigma_D v\rangle}{ \alpha/M_N^2}\bigg) \bigg(\frac{g_\star}{106.75}\bigg)^{1/2} \bigg(\frac{25}{z_f}\bigg)\,.
\end{equation}
Contrary to the SM the production of deuterium is far from saturation for heavy dark matter. For this reason the actual abundance depends on the precise value of the cross-section that we will compute in section \ref{sec:calculation}.
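As a simple numerical illustration, the freeze-out result above can be evaluated for a constant $s$-wave cross-section; in the following sketch we take $\langle\sigma_D v\rangle=\alpha/M_N^2$ with $\alpha\simeq 1/137$ as a benchmark (this choice, like the values of $z_f$ and $g_\star$, is purely illustrative), and recover the few-percent deuterium fraction quoted above.
\begin{verbatim}
import numpy as np

MPl, alpha = 2.4e18, 1 / 137.0        # GeV; benchmark coupling
c, zf, gstar = 1.32, 25.0, 106.75

def X_D(MN_TeV, EB_over_M):
    MN = 1e3 * MN_TeV                 # GeV
    YDM = 4.3e-13 / MN_TeV            # DM yield
    EB = EB_over_M * MN
    sv = alpha / MN**2                # constant s-wave cross-section
    # for constant sigma v the integral of dz/z^2 from zf gives 1/zf
    rhs = 2 * c * np.sqrt(gstar) * MPl * EB * YDM * sv / zf
    return rhs / (1 + rhs)            # solves X/(1-X) = rhs

print("X_D = %.3f" % X_D(3.0, 0.05))  # ~0.04, the few-percent level
\end{verbatim}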
\subsection{Neutral Constituents}\label{utterly}
When the fundamental fermions are SM singlets, the hadrons and mesons of the dark sector can be much lighter than the electroweak or even the QCD scale, without running into constraints from LHC direct searches. As a consequence, DM numerical densities comparable to or even larger than those of visible matter (see eq.~\eqref{YDM}) become possible. Naively this is the most favourable situation for nucleosynthesis, and even very large nuclei could be formed, as advocated in \cite{Hardy:2014mqa}.
Nevertheless in this section we would like to argue that, in presence of SM singlets, nucleosynthesis is very unlikely to take place unless extra light degrees of freedom such as a dark photon \cite{1406.1171,McDermott:2017vyk} or a scalar \cite{Wise:2014jva,Wise:2014ola,Gresham:2017zqi,Gresham:2018anj} are included (see also realisations in scenarios of mirror world \cite{Chacko:2018vss}).
In the absence of light fields external to the strong sector, such as a dark photon or light pions (see the discussion in the previous section), the first step of nucleosynthesis cannot occur as a $2\to 2$ process.
The dark deuterium production must necessarily proceed through $3\to 2$ processes involving only baryon states, such as $N+N+N \leftrightarrow D + N$ reactions. Even at temperatures below $E_B$, when the production reaction is the only one that can occur, the fusion is suppressed as compared to the previous case by an additional power of $Y_{\rm DM}$.
The Boltzmann equation in this case takes the form
\begin{equation}\label{boltzmann32}
\dot{n}_{\rm D}+ 3 H n_{\rm D}= \langle \sigma_{3\to 2} v^2\rangle \left(n_N^3- \frac {(n_N^{\rm eq})^2}{n_D^{\rm eq}}n_D n_N\right)\,.
\end{equation}
where we have introduced a generalised $3\to 2 $ cross-section with mass dimension $-5$. The thermally averaged $3\to 2$ cross-section is defined as
\begin{equation}
\langle \sigma_{3 \to 2} v^2 \rangle = \frac{n_D^{\rm eq}}{(n_N^{\rm eq})^2} \langle\sigma_{2\to 3} v^2 \rangle =\int \frac {d^3 p_1}{(2\pi)^3 2E_1}\frac {d^3 p_2}{(2\pi)^3 2E_2}\frac {d^3 p_3}{(2\pi)^3 2E_3} e^{-\frac {E_1+E_2+E_3}T} \left| {\cal M}_{NNN\to DN} \right|^2\,.
\end{equation}
For the analog of $s$-wave processes at low energy $\sigma v^2$ goes to a constant.
Following \cite{sigurdson-glueball} we estimate
\begin{equation}\label{stima3in2}
\langle\sigma_{3\to 2} v^2 \rangle\sim \frac {(4\pi)^3}{N^6} \frac 1 {M_N^5}\,.
\end{equation}
Introducing the deuterium baryonic fraction, the Boltzmann equation \eqref{boltzmann32} can be written as
\begin{equation}
\begin{split}
\frac {d X_{D}}{dz}&= a\frac{g_\star^{3/2} M_{\rm Pl} E_{B}^4 \langle \sigma_{3\to 2} v^2\rangle}{z^5} Y_{\rm DM}^2 \left[(1-X_D)^3 - \beta \big(\frac{g_N^2}{g_D g_\star}\big) \big(\frac{z^{3/2} e^{-z}}{Y_{\rm DM}}\big)\big( \frac{M_N}{E_B}\big)^{3/2}\frac{X_D(1-X_D)}{2}\right]
\end{split}
\end{equation}
where $a=1.16$ is a numerical coefficient arising from the evaluation of $H$ and $s$. We can now compare the source term of the above equation with eq.~\eqref{boltzmann22}, which sets the abundance of dark deuterium from $2\to 2$ processes. Compared to section \ref{subsec:SMcostituents}, for $z\gtrsim z_f$ the source term decouples as fast as $z^{-4}$ and is suppressed by higher powers of the binding energy and, especially, by one more power of $Y_{\rm DM}$.
We then obtain the estimate
\begin{equation}
X_D\sim 10^{-13} \bigg(\frac{g_\star}{10}\bigg)^{3/2}\, \bigg(\frac{E_B/M_N}{0.01}\bigg)^4 \bigg(\frac{20}{z_f}\bigg)^4 \bigg(\frac{\langle \sigma_{3\to 2} v^2\rangle}{1/M_N^5}\bigg) \bigg(\frac{ \mathrm{GeV}}{M_N} \bigg)^3\,,
\end{equation}
which is utterly negligible in the relevant range of parameters.
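The same exercise can be repeated for the $3\to 2$ estimate; a minimal Python transcription makes the suppression explicit (again with purely illustrative inputs).
\begin{verbatim}
def X_D_3to2(g_star=10.0, EB_over_MN=0.01, z_f=20.0,
             sigma32_ratio=1.0, MN_GeV=1.0):
    """3->2 estimate; sigma32_ratio is <sigma v^2> in units 1/M_N^5."""
    return 1e-13 * (g_star / 10.0)**1.5 * (EB_over_MN / 0.01)**4 \
           * (20.0 / z_f)**4 * sigma32_ratio / MN_GeV**3

print(X_D_3to2())                              # ~1e-13
print(X_D_3to2(EB_over_MN=0.1, MN_GeV=0.1))    # ~1e-6, still negligible
\end{verbatim}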
\section{Production cross section of Dark Deuterium}\label{sec:calculation}
In this section we explain how the cross-section for the formation of nuclei can be computed from first principles, exploiting universal properties of short-range nuclear interactions.
The main point is that, for shallow bound states such as nuclei where $E_B\ll M_N$, it is possible to write a general effective theory of nucleons. This effective field theory (EFT) reproduces the
effective range expansion of quantum mechanics and allows us to compute the properties of nuclei such as their production cross-section, see \cite{Kaplan:2005es} for a review.
Here, we quickly outline the formalism and apply it to the formation cross-section of dark deuterium.
\subsection{Dark Nucleon effective field theory}
The production of nuclei with $A=2$ is a process entirely analogous to the deuteron formation in the SM, $p n \to d \gamma$. This can be calculated in quantum mechanics with appropriate potentials, but, as noticed long ago \cite{Bethe:1949yr,Bethe:1950jm}, it does not depend much on the details of the potentials used, as long as they are short range. As emphasised in \cite{scaldeferri,Rupak:1999rk}, the generality of this phenomenon is immediately captured by the $\pi$-less effective theory of non-relativistic nucleons \cite{Kaplan:1998tg}, which we briefly review. We refer the reader to the appendix and references therein for more details.\footnote{In \cite{Braaten:2017gpq,Braaten:2017kci,Braaten:2017dwq} this formalism was applied to Wino scattering and annihilation.}
Since the energy scale relevant for the formation of nuclei is much below the pion mass, it is useful to describe nucleons with a non-relativistic lagrangian where the pions are integrated out, the $\pi$-less EFT \cite{Kaplan:1998tg}. Such a theory is extremely simple because it only contains contact interactions among nucleons and couplings to SM gauge fields. Generalising the results of the Refs. above, the nucleons, in a generic isospin representation $\mathbf{r}$, are described by the effective lagrangian
\begin{equation}\label{eff}
\mathscr{L}= N^\dag \left(i D_t + \frac{\vec D^2}{2M_N}+ \frac{ D^2_t}{2M_N}\right) N + \mathscr{L}_{4} + \frac{\kappa}{ M_N} g_2 N^\dag J^a (\vec\sigma \cdot \vec{B}_a) N\,,
\end{equation}
where the covariant derivative $D_\mu= \partial_\mu - i g_2 A^a_\mu J^a$ contains the minimal coupling to SM gauge fields,
and we have included 4-nucleon interactions and a magnetic dipole\footnote{In the case where the strong sector violates CP through a $\theta$ angle,
an electric dipole is also present, producing similar effects for deuterium formation. We will neglect this term.} interaction, where $J^a$ is the generator of SU(2)$_L$ in the nucleon representation.
The coefficient $\kappa$ is expected to be of order unity (in the SM the isovector nuclear magnetic moment is $\kappa_V=2.35$).
At sufficiently low energies the leading interactions in $\mathscr{L}_{4}$ include only operators without derivatives, that can be decomposed into spin and isospin channels as
\begin{equation}\label{L4}
\begin{split}
\mathscr{L}_{4}&= - \sum_{\mathbf{r},S}\frac{C_{\mathbf{r},S}}{4}\, (N [{\rm CG}^M_{\mathbf{r}}\otimes P^i_{S}] N)^\dag\, \, (N [{\rm CG}^M_{\mathbf{r}}\otimes P_{S}^i] N)\,,
\end{split}
\end{equation}
where the matrices ${\rm CG}_{\mathbf{r}}$ and $P_S$ act on the isospin and spin space respectively.\footnote{For SU(2) the matrices ${\rm CG}$ can be identified with the Clebsch-Gordan coefficients, while the explicit expression of the spin projectors onto the spin 0 and 1 states are
\begin{equation}
P_0=\frac{\sigma_2}{\sqrt{2}},\quad P^i_1=\frac{\sigma_2\sigma_i}{\sqrt{2}}\,.
\end{equation}
The labels $\mathbf{r}$ and $S$ identify the SU(2)$_L$ and spin representations, while $M$ and $i$ are the indices of such representations. }
Remarkably, the lagrangian above also describes the non-perturbative bound state, allowing us to compute, for example, the production cross-section. A quick way to derive the main result is the following.
The non-relativistic amplitude for the elastic scattering of two nucleons has the general form, in each isospin/spin channel $\mathbf{r},S$,
\begin{equation}\label{scattering-amplitude}
{\cal A}_{\mathbf{r},S} = \frac {4\pi } {M_N} \frac 1 {p \cot \delta_{\mathbf{r},S}- i p}\,,
\end{equation}
where $\delta_{\mathbf{r},S}$ is the phase shift and $p=\sqrt{E M_N}$ is the nucleon momentum in the center of mass frame.
For $s$-wave scattering in the low-velocity regime one can show that $p \cot \delta_{\mathbf{r},S} = -1/a_{\mathbf{r},S} +O(p^2)$, where $a_{\mathbf{r},S}$ is the scattering length.
This is known as the effective range expansion. When the scattering length is large and positive, it follows that the amplitude has a pole at negative energy.
From this we can recover the general relation between the scattering length and the binding energy of shallow bound states,
\begin{equation}
\frac{1}{a_{\mathbf{r},S}}\approx \sqrt{M_N E_B}\,,
\end{equation}
where $E_B$ is the binding energy of the lightest $s$-wave bound state.
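The pole structure behind this relation is easy to exhibit numerically. The following Python sketch continues the amplitude of eq.~\eqref{scattering-amplitude} below threshold for a large, positive scattering length; the units are arbitrary.
\begin{verbatim}
import numpy as np

M_N = 1.0                        # nucleon mass (arbitrary units)
E_B = 0.01 * M_N                 # shallow bound state, E_B << M_N
a = 1.0 / np.sqrt(M_N * E_B)     # scattering length from the relation above

def amplitude(E):
    p = np.sqrt(M_N * E + 0j)    # p = +i*sqrt(M_N|E|) below threshold
    return (4 * np.pi / M_N) / (-1.0 / a - 1j * p)

# the amplitude blows up as E -> -E_B, signalling the bound state
for E in (-0.5 * E_B, -0.9 * E_B, -0.99 * E_B):
    print(E / E_B, abs(amplitude(E)))
\end{verbatim}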
The coefficients of the 4-Fermi interactions in eq.~(\ref{L4}) must be fixed to reproduce the effective range expansion of nucleon-nucleon elastic scattering.
As shown in the appendix, to leading order in the derivative expansion but to all orders in the scattering length $a_{\mathbf{r},S}$, one finds
\begin{equation}
C_{\mathbf{r},S}= \frac {4\pi}{M_N} a_{\mathbf{r},S}\,.
\end{equation}
Once this matching has been performed, the above lagrangian can be used to compute other processes,
such as the production of deuterium (see \cite{scaldeferri} and references therein for the SM case).
Indeed, the amplitude above also determines the coupling of two nucleons to the deuterium as
\begin{equation}
g_{NND}^2 = {\rm Res}_{E=-E_B} \left[{\cal A}\right]= \frac {8 \pi \gamma}{M_N^2}\,,
\end{equation}
where $\gamma=1/a$. This effective coupling can then be used to compute the interaction between two nucleons and the deuterium.
Amazingly, we just need to study elastic nucleon-nucleon ($NN$) scattering to infer all the quantities needed to perform the leading-order calculation of the deuteron formation rate. Since this process occurs cosmologically at energies much below the mass of the pions (see also the discussion in section \ref{sec:properties}), it is very reasonable to use an effective field theory of non-relativistic nucleons (with SM quantum numbers) without the pions.
\subsection{Magnetic and Electric transitions}
The effective field theory in eq.~\eqref{eff} allows us to compute the short distance cross section for the process $N+N\to D + V^a$ in terms of the binding energies and scattering lengths alone.
The nucleus can be formed through emission of a SM gauge boson either through the electric coupling (minimal interaction) or through the magnetic dipole interaction in eq.~\eqref{eff}. For $\kappa\sim 1$, as expected for strongly coupled baryonic states, the two processes can have similar size. Importantly, different selection rules apply to electric and magnetic transitions: $\Delta L=1$ for the first and $\Delta S=1$ for the second. This implies a different velocity scaling of the cross-sections.
The amplitude for the formation of the bound state can be simply computed with Feynman diagrams of the non-relativistic effective theory \eqref{eff}, using the effective coupling $g_{NND}$ for the overlap of the final state with the deuteron.\footnote{For Majorana bound states a factor $\sqrt{2}$ must be included in the amplitude to account for the normalisation of the wave-function of bound states made of identical particles.}
\begin{equation}
\begin{tikzpicture}[line width=1.5 pt, scale=1.7]
\draw[] (-1,0.5)--(0,0);
\draw[] (-1,-0.5)--(0,0);
\draw[vector,color= blue] (-0.5,0.25)--(0,.7);
\node at (0.1,0) {$\boldsymbol \textcolor{red}{\otimes}$};
\node at (-2,0) {$\mathcal{A}=$};
\node at (-1,0) {$(\mathbf{r},s)$};
\node at (.7,0) {$(\mathbf{r}',s')$};
\node at (0.2,0.67) {$\varepsilon_{\lambda_a}$};
\end{tikzpicture}
\end{equation}
The only subtlety arises for a large scattering length of the initial state, where an extra long-distance contribution must be taken into account, enhancing the tree-level magnetic cross-section. This effect is discussed in detail in appendix \ref{sec:appA}.
The amplitudes for bound-state formation can be conveniently decomposed in the basis of total spin and isospin of initial and final states $(NN, D)$ using the projectors of eq.~\eqref{L4}.
In the limit $v_{\rm rel}\ll 1$ one finds~\footnote{We neglect here the effect of non-abelian interactions \cite{Mitridate:2017izz}. This is justified if the nuclei are dominated by the strong interactions so that their size is smaller than the Coulombian Bohr radius $a_0^{-1}=\lambda \alpha_2 M_N/2$.}
\begin{eqnarray}
\mathcal{A}_{\rm mag}((NN)_{M,i}\to D_{M',i'} +V^a)&=& \frac{2 g_2\kappa}{M_N |\vec k|} g_{NND_{\mathbf{r}'}}(1- a_{\mathbf{r}} \gamma_{\mathbf{r}'})(\vec k \times \vec \varepsilon_{(\lambda_a)})^{i+i'}\,\, C_{\cal J}^{a M M'}\,,\\
\mathcal{A}_{\rm ele}((NN)_{M,i}\to D_{M',i'} +V^a)&=&\frac{2 g_2}{M_N |\vec k|}g_{NND_{\mathbf{r}'}} \vec p \cdot \vec\varepsilon_{(\lambda_a)}\,\, \delta_{i i'} C_{\cal J}^{a M M'}\,.
\label{amplitudes}
\end{eqnarray}
$M,M'$ are the indices of the isospin representations, while $i,i'$ are the indices of the total spin representations of the initial and final states; these expressions should therefore be read taking into account the selection rules of the magnetic and electric transitions.
The group theory factor is \cite{Mitridate:2017izz}
\begin{equation}
C_{\cal J}^{a M M'}= \frac 1 2 {\rm Tr}[{\rm CG}_{\mathbf{r}'}^{M'} \{ {\rm CG}_{\mathbf{r}}^{M},J^a \}].
\end{equation}
The formulae above are general, and can even be applied to other gauge groups and different representations for the constituents of the bound state with minor modifications. For more details we refer the reader to appendix \ref{sec:appA}.
With our normalisations, the cross-section for the bound state formation can be computed with
\begin{equation}\label{cross}
\sigma v_{\rm rel} = \frac{|\vec{k}|}{8\pi^2}\int d\Omega_k |\mathcal{A}|^2\,,\quad \quad E_{\vec k}= E_B+ \frac{M_N}4 v_{\rm rel}^2\,.
\end{equation}
This gives the following magnetic and electric cross-sections:
\paragraph{Magnetic cross-section}~\\
The averaged cross-section for the production of an $s$-wave bound state with energy $E_B$ and isospin quantum number $(I',M')$ through the emission of a (massive) vector boson $V^a$ from an initial state with $(I,M)$ at low velocity is given by,
\begin{equation}
(\sigma v_{\rm rel})_{aMM'}^{\rm mag}=\kappa^2\frac {2^8}{g_N^2} \sigma_0 \sqrt{1- \frac{M_a^2}{E_B^2}}\left(\frac {E_B} {M_N}\right)^{\frac 3 2}(1-a_{\mathbf{r}}\gamma_{\mathbf{r}'})^2 |C_{\cal J}^{a M M'}|^2\,,\quad\quad \sigma_0\equiv\frac{\pi \alpha}{M_N^2}\,.
\label{xsec:magnetic}
\end{equation}
Here $g_N=2(4) d_R$ is the number of degrees of freedom of the nucleon initial state for Majorana (Dirac) particles.
If the initial state supports a weakly bound state, its scattering length is fixed by $1/a_i=\sqrt{E_{B_i} M_N}$; otherwise $a_i$ is negative and
can be large. For example, in the SM the second term is $a_i \gamma_f \approx 5$, so it dominates \cite{scaldeferri,Rupak:1999rk}.
At low velocities $\sigma v_{\rm rel}$ is constant, as this corresponds to an $s$-wave capture with $\Delta L=0$ and $\Delta S=1$. The presence of a coherent contribution from the initial-state scattering length can be understood by noticing that, the process being $s$-wave, the initial state can in principle have an unnaturally large coefficient in eq.~\eqref{eff} that needs to be taken into account to all orders.
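A direct transcription of eq.~\eqref{xsec:magnetic} reads as follows (Python, units $\hbar=c=1$, cross-sections in $1/\mathrm{mass}^2$; the benchmark in the example is SM-deuteron-like and purely illustrative).
\begin{verbatim}
import math

def sigma_v_magnetic(alpha, M_N, E_B, M_a, a_i_gamma_f, C2, g_N,
                     kappa=1.0):
    """s-wave magnetic capture N N -> D + V^a, eq. (xsec:magnetic).
    C2 = |C_J|^2, a_i_gamma_f = a_i * gamma_f for the channel."""
    if M_a >= E_B:               # emission kinematically closed
        return 0.0
    sigma0 = math.pi * alpha / M_N**2
    return (kappa**2 * 2**8 / g_N**2 * sigma0
            * math.sqrt(1.0 - (M_a / E_B)**2)
            * (E_B / M_N)**1.5 * (1.0 - a_i_gamma_f)**2 * C2)

# SM-like benchmark: a_i*gamma_f ~ 5 dominates the rate
print(sigma_v_magnetic(alpha=1/137.0, M_N=1.0, E_B=0.0022, M_a=0.0,
                       a_i_gamma_f=5.0, C2=1.0, g_N=4))
\end{verbatim}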
\paragraph{Electric cross-section (in dipole approximation)}~\\
The averaged cross-section for the formation of an $s$-wave shallow bound state through dipole emission of a photon is found to be,
\begin{equation}
(\sigma v_{\rm rel})_{aMM'}^{\rm ele}=\frac {2 S+1}{g_N^2} \frac {2^6}3 \sigma_0 v_{\rm rel}^2 \sqrt{1- \frac{M_a^2}{E_B^2}} \sqrt{\frac {M_N} {E_B}} \left(1+\frac{M_a^2}{2E_B^2}\right)|C_{\cal J}^{a M M'}|^2\,.
\label{xsec:electric}
\end{equation}
The velocity suppression follows from the fact that in dipole approximation $\Delta L=1$, so that an $s$-wave bound state is produced from a $p$-wave initial state.
Note that the formula above differs by a numerical factor from the cross-section for the formation of Coulombian bound states \cite{Mitridate:2017izz}.
This is because the energy levels of $1/r$ potentials cannot be treated as shallow bound states.
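For completeness, eq.~\eqref{xsec:electric} can be transcribed in the same way; comparing it with the magnetic routine above at $v_{\rm rel}\sim\sqrt{E_B/M_N}/5$ exhibits the velocity suppression discussed in the text (inputs again illustrative).
\begin{verbatim}
import math

def sigma_v_electric(alpha, M_N, E_B, M_a, v_rel, C2, g_N, S=0):
    """p-wave electric (dipole) capture, eq. (xsec:electric)."""
    if M_a >= E_B:
        return 0.0
    sigma0 = math.pi * alpha / M_N**2
    return ((2 * S + 1) / g_N**2 * 2**6 / 3.0 * sigma0 * v_rel**2
            * math.sqrt(1.0 - (M_a / E_B)**2) * math.sqrt(M_N / E_B)
            * (1.0 + 0.5 * (M_a / E_B)**2) * C2)

v_rel = math.sqrt(0.05) / 5.0    # typical velocity at z_f ~ 20
print(sigma_v_electric(alpha=1/137.0, M_N=1.0, E_B=0.05, M_a=0.0,
                       v_rel=v_rel, C2=1.0, g_N=4))
\end{verbatim}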
From the formulae in each isospin channel for electric and magnetic transitions we can recover the component cross-sections, relevant for example for indirect detection in section \ref{sec:ID},
using the appropriate Clebsch-Gordan coefficients. Let us note that the formulae above are actually general and also apply to bound states made of different representations
and of a different global symmetry group.
\subsection{Sommerfeld enhancement}
\label{sec:SE}
The discussion so far has neglected the long-distance Sommerfeld effect (SE) due to electro-weak interactions on the initial state.
In a given SU(2)$_L$ channel, the cross section is multiplied by the corresponding factor, depending on whether we deal with $s$- or $p$-wave initial states, see for example \cite{Iengo:2009ni}.
For massless mediators one finds,
\begin{eqnarray}\label{sommerfeld}
\mathrm{SE}_{s-\mathrm{wave}}&=&\frac {2\pi \alpha_{\rm eff}/v_{\rm rel}}{1-e^{-2\pi \alpha_{\rm eff}/v_{\rm rel}}}\approx \frac {2\pi\alpha_{\rm eff}}{v_{\rm rel}}\,,\\
\mathrm{SE}_{p-\mathrm{wave}}&=&\left[1+\left(\frac {\alpha_{\rm eff}}{v_{\rm rel}}\right)^2\right]\frac {2\pi \alpha_{\rm eff}/v_{\rm rel}}{1-e^{-2\pi \alpha_{\rm eff}/v_{\rm rel}}}\approx 2\pi \left(\frac {\alpha_{\rm eff}}{v_{\rm rel}}\right)^3\,.
\end{eqnarray}
Here $\alpha_{\rm eff}$ is the effective strength of the electro-weak forces in the given channel. Importantly, taking this effect into account, both the magnetic and electric transition rates have the same scaling with velocity, $\sigma v_{\rm rel}\propto 1/v_{\rm rel}$. Note that such an enhancement of electric transitions is not present for the deuteron in the SM, so that the magnetic transition dominates at very low velocities. This can be different for dark nuclei.
The approximate scalings above are accurate for $v_{\rm rel} \lesssim \alpha$. The effectiveness of the SE
then crucially depends on the typical velocity during nucleosynthesis. Dark deuterium forms at $T\sim E_B/z_f$ where $z_f\sim 20$, which implies $v_{\rm rel}\sim \sqrt{E_B/M_N}/5$ at that time.
Numerically the enhancement is more pronounced when the interactions are SU(2)$_L$ symmetric, as in this case the relevant coupling is $\alpha_2$ instead of $\alpha_{\rm em}$.
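The factors of eq.~\eqref{sommerfeld}, together with the saturation discussed below, are easily coded; the sketch uses the estimate $v_{\rm rel}\sim\sqrt{E_B/M_N}/5$ quoted above with an illustrative $E_B/M_N$.
\begin{verbatim}
import math

def SE_s(alpha_eff, v):
    x = 2 * math.pi * alpha_eff / v
    return x / (1.0 - math.exp(-x))

def SE_p(alpha_eff, v):
    return (1.0 + (alpha_eff / v)**2) * SE_s(alpha_eff, v)

def SE_s_saturated(alpha_eff, v, MW_over_MN):
    return SE_s(alpha_eff, max(v, MW_over_MN))  # mass freezes the growth

alpha2 = 0.033
v_nuc = math.sqrt(0.05) / 5.0     # E_B/M_N = 0.05, z_f ~ 20
print(SE_s(alpha2, v_nuc), SE_p(alpha2, v_nuc))
\end{verbatim}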
Let us now discuss the domain of validity of the massless approximation.
From the point of view of the initial-state particles, the mediator masses can be neglected as long
as the de Broglie wave-length is smaller than the range of the electro-weak interactions, $M_W^{-1}$.
Therefore Sommerfeld effects are maximal when the following two conditions are met
\begin{equation}
\frac{M_W}{M_N}\lesssim v_{\rm rel}\lesssim \alpha\,.
\end{equation}
For $v_{\rm rel} \lesssim M_W/M_N$ the vector boson masses must be taken into account.
These have two effects: the first is to freeze the enhancement to a constant value corresponding to the critical velocity,
\begin{equation}
\mathrm{SE}\rightarrow {\rm max}\bigg[2\pi \alpha_{\rm eff}\frac {M_N}{M_W},\,1 \bigg]\,.
\end{equation}
This approximation gives results comparable to the analytic formulas derived using the Hulthen potential \cite{Mitridate:2017izz}.
In addition, the mediator mass produces peaks in the cross-section. As is well known, the resonant behaviour originates from zero-energy bound states
supported by the potential at the critical mass. Around the peak the SE has the model-independent form \cite{Blum:2016nrz},
\begin{equation}
\mathrm{SE} \sim \frac{V_0}{M_N v_{\rm rel}^2/4+ |E_B|}
\label{SEpeak}
\end{equation}
where $V_0$ is the typical energy scale of the potential and $E_B$ the energy of the bound state close to threshold in the initial state.
We also note that, in the limit of a large scattering length of the initial state, the second term in (\ref{amplitudes}) can be interpreted as an SE due to the strong interactions.
Indeed, this is large when the initial state supports a bound state with energy $E_B\ll M_N$.
Using $1/a= \sqrt{E_B M_N}$ and extending the computation of bound-state formation to finite velocity of the nucleons,
one can check that the formula for bound-state production through the magnetic interaction ($s$-wave)
reduces to eq. (\ref{SEpeak}), where $E_B$ is the energy of the bound state in the initial state while $V_0$ is the binding energy of the produced bound state.
The subleading terms in eq.~(\ref{xsec:magnetic}) are associated with the failure of factorisation between short- and long-distance effects.
\section{A case study: $\mathrm{SU(2)}_L$ triplet}\label{sec:triplet}
In this section we compute explicitly the production of deuterium in the $V$ model, where the constituents are triplets of SU(2)$_L$.
In this scenario, in the limit of vanishing SM couplings, the model enjoys an SU(3)$_F$ flavour symmetry and the lightest baryon multiplet is an octet.
This decomposes under SU(2)$_L$ as
\begin{equation}
\mathbf{8}= \mathbf{3}_{=V} +\mathbf{5}\,,
\end{equation}
where with abuse of notation we named the triplet nucleon $V$. SM gauge interactions split the two multiplets as in eq. (\ref{nucleonsplitting}).
The triplet is expected to be the lightest state in light of its smaller weak charge. Since at the temperatures relevant for bound-state formation the
abundance of the quintuplet is exponentially suppressed, we can focus on the nucleon $V$.
\begin{table}
\begin{center}
\begin{tabular}{cccc|c}
\hbox{Name}& $\mathbf{r}$ & $S$ &$\lambda$ & \hbox{Constituents}\\ \hline
$D_1$ &$1$ & 0&2 & $V V$ \\
$D_3$ & 3 & 1 &1& $V V$ \\
$D_5$ & 5 & 0 & -1 & $V V$ \\ \hline
$T_1$ &1 & 3/2 & 2 & $V D_3$ \\
$T_3^a$ & 3 & 1/2 & 0 & $V D_1$ \\
$T_3^b$ & 3 & 1/2 & 1 & $V D_3$\\
$T_3^c$ & 3 & 1/2 & 3 & $V D_5$\\
$T_5^a$ & 5 & 1/2 & -1 & $V D_3$ \\
$T_5^b$ & 5 & 1/2 & 1 & $V D_5$ \\
$T_7$ & 7 & 1/2 & -2 & $V D_5$
\end{tabular}
\quad \quad \quad
\begin{tabular}{c|c|c|c}
$\mathbf{r}\leftrightarrow \mathbf{r'}$ & $\sum\limits_{aMM'}|C_{\mathcal{J}}^{aMM'}|^2$ & $C_{\mathcal{J}}^{300}$ & $C_{\mathcal{J}}^{+01}$\\ [1.6ex]
\hline
$1\leftrightarrow 3$ & $2$ & $\sqrt{2/3}$ & $\sqrt{2/3}$ \\
$3\leftrightarrow 5$ & 5/2 & $\sqrt{1/3}$ & $-\sqrt{1/12}$
\end{tabular}
\caption{On the left, quantum numbers of the nuclear bound states (deuterium and tritium) for nucleons in SU(2)$_L$ triplets. On the right, group theory factors for the transition from an initial state with isospin $\mathbf{r}$ to a bound state with isospin $\mathbf{r'}$.}
\label{table:boundstates}
\end{center}
\end{table}
In the absence of electro-weak interactions the nuclei with baryon number 2 belong to the product of two octets, $\mathbf{8}\times \mathbf{8}=\mathbf{1}+ \mathbf{8}_S+ \mathbf{8}_A+ \mathbf{10} + \overline{\mathbf{10}}+\mathbf{27}_S$, where $S(A)$ refers to the symmetry of the isospin wave-function.
Lattice studies indicate that all these representations actually form bound states \cite{Beane:2012vq} and also provide the binding energies in the different channels.
As expected, in the flavour-symmetric limit the singlet (corresponding to the $H$-baryon in QCD)
is the most bound. Following the discussion of section \ref{sec:properties}, we can work in a limit where we can classify the bound states according to SU(2)$_L$ representations, while neglecting the baryon in the $\mathbf{5}$, whose abundance is Boltzmann suppressed.
Therefore the dark deuterium of this model belongs just to the product
\begin{equation}
V \times V = \mathbf{1}_S + \mathbf{3}_A + \mathbf{5}_S\,.
\end{equation}
Anti-symmetry of the full wave-function implies that the singlet and quintuplet of SU(2)$_L$ ($D_\mathbf{1}$ and $D_\mathbf{5}$) are spin-0, while the isospin triplet ($D_\mathbf{3}$) is spin-1 (for $s$-wave bound states).
The triplet and quintuplet deuterons are components of $\mathbf{8}_A$ and $\mathbf{8}_S$ respectively, so they are heavier than the singlet, with splittings dominated by the strong interactions.
The classification can be generalised to larger nuclei.
The dark tritium made of 3 triplets has quantum numbers,
\begin{equation}
(D_\mathbf{1} + D_\mathbf{3} + D_\mathbf{5}) \times V = \mathbf{1}_S + 3 \times \mathbf{3}_A + 2 \times \mathbf{5}_S + \mathbf{7}
\end{equation}
where the symmetry of the wave-function determines the spin. We can estimate the electro-weak binding energy
by adding a nucleon to the deuterium and taking into account the reduced mass.
We summarise the bound states up to dark tritium in table \ref{table:boundstates}.
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{figs/figuraA.pdf}~~
\includegraphics[width=.45\textwidth]{figs/figuraB.pdf}~~
\caption{\label{fig:xsec} Thermally averaged cross sections as a function of the temperature $z=E_B/T$. The rates are decomposed per channel and per type (magnetic $M$, and electric $E$).}
\end{figure}
\subsection{Relic abundances of deuterium made of $\mathrm{SU(2)}_L$ triplets}
To compute the abundance of deuterium we assume that the initial state is SU(2)$_L$ symmetric, i.e. we neglect the mass difference between $V_\pm$ and $V_0$.
Using \eqref{xsec:magnetic} and \eqref{xsec:electric} the cross-section for deuterium formation through emission of $W, Z, \gamma$, averaged over initial states, is approximately given by,
\begin{equation}
\begin{split}
(\sigma v_{\rm rel})_{3\to1}^a&= \sigma_a \times \frac{2^7}{27}\sqrt{1- \frac{M_a^2}{E_{B_\mathbf{1}}^2}}\left[ \mathrm{SE}_3^{s}\,\kappa^2\left(\frac{E_{B_\mathbf{1}}}{M_N}\right)^{\frac 3 2}(1-a_{\mathbf{3}} \gamma_\mathbf{1})^2+ \mathrm{SE}_3^{p}\, \frac 1 {12} \sqrt{\frac {M_N} {E_{B_\mathbf{1}}}}\left(1+ \frac {M_a^2}{2 E_{B_\mathbf{1}}^2}\right) v_{\rm rel}^2 \right]\\
(\sigma v_{\rm rel})_{1\to 3}^a&=\sigma_a \times \frac{2^7}{27}\sqrt{1- \frac{M_a^2}{E_{B_\mathbf{3}}^2}}\left[ \mathrm{SE}_1^{s}\,\kappa^2\left(\frac{E_{B_\mathbf{3}}}{M_N}\right)^{\frac 3 2}(1-a_{\mathbf{1}} \gamma_\mathbf{3})^2+\frac 1 4 \mathrm{SE}_1^{p}\, \sqrt{\frac {M_N} {E_{B_\mathbf{3}}}} \left(1+ \frac {M_a^2}{2 E_{B_\mathbf{3}}^2}\right) v_{\rm rel}^2 \right]\\
(\sigma v_{\rm rel})_{5\to 3}^a&=\sigma_a \times \frac{2^5 \cdot 5}{27}\sqrt{1- \frac{M_a^2}{E_{B_\mathbf{3}}^2}}\left[ \mathrm{SE}_5^{s}\,\kappa^2\left(\frac{E_{B_\mathbf{3}}}{M_N}\right)^{\frac 3 2}(1-a_{\mathbf{5}} \gamma_\mathbf{3})^2+ \mathrm{SE}_5^{p}\, \frac 1 4\sqrt{\frac {M_N} {E_{B_\mathbf{3}}}} \left(1+ \frac {M_a^2}{2 E_{B_\mathbf{3}}^2}\right)v_{\rm rel}^2 \right]\\
(\sigma v_{\rm rel})_{3\to 5}^a&=\sigma_a \times \frac{2^5\cdot 5}{27}\sqrt{1- \frac{M_a^2}{E_{B_\mathbf{5}}^2}}\left[ \mathrm{SE}_3^{s}\,\kappa^2\left(\frac{E_{B_\mathbf{5}}}{M_N}\right)^{\frac 3 2}(1-a_{\mathbf{3}} \gamma_\mathbf{5})^2+\mathrm{SE}_3^{p}\, \frac 1 {12} \sqrt{\frac {M_N} {E_{B_\mathbf{5}}}} \left(1+ \frac {M_a^2}{2 E_{B_\mathbf{5}}^2}\right)v_{\rm rel}^2 \right]
\end{split}
\label{xsecSU2}
\end{equation}
where
\begin{equation}
\sigma_a=\frac {\pi \alpha_a}{M_N^2}\,,\quad \quad \alpha_{W,Z,\gamma}= \alpha_2\times [2,\,c_W^2,\,s_W^2]\,.
\end{equation}
The first term in brackets corresponds to the magnetic $\Delta S=1$ transition and the second to the electric $\Delta L=1$ one.
The triplet can be produced either from the singlet or the quintuplet channel. The scattering lengths are given by $1/a_i\approx \sqrt{M_N E_{B_i}}$.
If a state is unbound the formulae above still apply with a negative scattering length. This is the case of deuterium in the SM, where $nn$ is weakly unbound,
producing a large scattering length.
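As an illustration, the $\mathbf{3}\to\mathbf{1}$ channel of eq.~\eqref{xsecSU2} can be transcribed directly; in the sketch below the SE factors default to one, i.e. the routine returns the short-distance rate, and both channels are assumed to be bound (all numbers illustrative).
\begin{verbatim}
import math

def sigma_v_3to1(alpha_a, M_N, EB1, EB3, M_a, v_rel,
                 kappa=1.0, SE_s=1.0, SE_p=1.0):
    """3 -> 1 channel of eq. (xsecSU2); EB1, EB3 are the singlet and
    triplet binding energies (both assumed bound here)."""
    if M_a >= EB1:
        return 0.0
    sigma_a = math.pi * alpha_a / M_N**2
    a3_g1 = math.sqrt(EB1 / EB3)   # a_3*gamma_1, 1/a_i = sqrt(M_N E_Bi)
    mag = SE_s * kappa**2 * (EB1 / M_N)**1.5 * (1.0 - a3_g1)**2
    ele = SE_p / 12.0 * math.sqrt(M_N / EB1) \
          * (1.0 + 0.5 * (M_a / EB1)**2) * v_rel**2
    return sigma_a * 2**7 / 27.0 \
           * math.sqrt(1.0 - (M_a / EB1)**2) * (mag + ele)

# W emission (alpha_a = 2*alpha_2), GeV units, toy binding energies
print(sigma_v_3to1(alpha_a=0.066, M_N=3000.0, EB1=150.0, EB3=50.0,
                   M_a=80.0, v_rel=0.04))
\end{verbatim}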
In Fig. \ref{fig:xsec} we show the numerical values of the electric and magnetic cross-sections $1\leftrightarrow 3$, normalised to $\sigma_0=\pi\alpha_2/M_N^2$, for various choices of the binding energies.
Due to the SE described in section \ref{sec:SE}, the $p$-wave electric cross-section can be larger than the magnetic one.
To compute the abundance of deuterium one should in principle write a different Boltzmann equation for each bound state.
We can however simplify the problem by noting that transitions between the different
bound states are fast, so that they are in equilibrium with each other (see section \ref{sec:darkforce} and appendix \ref{sec:appB}). This implies that $n_{D_i}/n_{D_j}= n_{D_i}^{\rm eq}/n_{D_j}^{\rm eq}$.
The abundance of deuterium is then determined by eq. (\ref{boltzmann22}) with the effective cross-section and degrees of freedom,
\begin{equation}\label{effective-xsec}
(\sigma v_{\rm rel})^{\rm eff} = \sum_i (\sigma v_{\rm rel})_{i}\,,~~~~~~~~~~~g_D^{\rm eff}(T)= \sum_i g_{D^i} \exp{\left[- \frac{E_{B_1}-E_{B_i}}{T}\right]}\,.
\end{equation}
In figure \ref{fig:dark-deuterium} we present the total mass fraction of deuterium $X_D$ as a function of $M_N$ for different choices of the binding energies of the singlet ($D_\mathbf{1}$) and triplet ($D_\mathbf{3}$) deuterium. We assume for simplicity that $D_\mathbf{5}$ does not play a role, even though, being the least bound isotope with baryon number 2, it could enhance the production of $D_\mathbf{3}$ through a large scattering length. Solid lines correspond to the abundance including electro-weak SE effects, while dashed lines are obtained with the short-distance cross-sections. The SE has a significant impact, especially because it eliminates the velocity suppression of electric transitions. For $E_B< M_W$ only the photon can be emitted, with a smaller cross-section in light of the electric coupling and multiplicity. The transition between SU(2)$_L$-symmetric emission and photon emission gives rise to the features in the plot.
Differently from the SM, most of the baryons can form deuterium only for large binding energies. An order-one fraction of deuterium can only be obtained for TeV masses, around the experimental collider bound.
The large mass scale associated with DM with electroweak charges is the principal obstruction to converting an O(1) fraction of DM into deuterium and then heavier nuclei. Notice that there is no reduction in the mass fraction of heavy nuclei caused by the depletion of $V^\pm$ through the decay $V^\pm \to \pi^\pm V^0$, which happens only at $T<100$ MeV. By contrast, in the SM the main limitation to forming deuterium is the neutron decay.
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{figs/figureXD2}~
\includegraphics[width=.45\textwidth]{figs/figureXD}
\caption{\label{fig:dark-deuterium} Dark deuterium mass fraction $X_D$ as a function of the nucleon mass $M_N$ for several choices of the binding energies (with and without the Sommerfeld effect).}
\end{figure}
\subsection{Production of nuclei with $A\geq 3$}
The formation of dark tritium $T$ is determined schematically by the following reactions
\begin{eqnarray}\label{chain}
D&:&\quad V+V \rightleftarrows D+W\,\\
T&:&\quad D+V \rightleftarrows T+W\,,\ D+D\rightleftarrows T+V\,,
\end{eqnarray}
where we allow for weak and strong $T$ formation processes, the latter without the emission of SM radiation. As for the deuterium, dark tritium is produced via exothermic reactions when $E_T-E_D>0$ (weak process) and/or $E_T - 2E_D>0$ (strong process). If these conditions hold, when the temperature drops below $E_T-E_D$ and $E_T-2E_D$ the dissociation of dark tritium is exponentially suppressed. The most favourable situation arises, as in the SM, for $E_T-2E_D > E_D$, such that at the time of deuterium formation the dissociation of tritium is already ineffective.
These assumptions greatly simplify the discussion and allow us to estimate an upper bound on the dark tritium abundance. The Boltzmann equations including deuterium and tritium are given by
\begin{eqnarray}
\label{uno}\small
X_D'(z)&=& 2\frac{z_0}{z^2}\bigg[ (1-X_D-X_T)^2 - \beta \frac{(\frac{M_N}{E_D})^{3/2}\, z^{3/2}e^{-z}}{g_D^{\rm eff} Y_{\rm DM}} \frac{X_D}{2}- \frac{b_1}{2} (1-X_D-X_T) X_D - \frac{b_2}{2} X_D^2\bigg]\,,\\
\label{due}
X_T'(z)&=& 3\frac{z_0}{z^2}\bigg[ \frac{b_1}{2} (1-X_D-X_T) X_D + \frac{b_2}{4} X_D^2\bigg]\,.
\end{eqnarray}
where we have introduced the following notation
\begin{equation}
z_0\equiv c\sqrt{g_*} M_{\rm Pl} E_{D} Y_{\rm DM} \langle \sigma_{D} v\rangle_{\rm eff}\,, \quad b_1\equiv \frac{ \langle \sigma_{T} v\rangle_{\rm eff}}{\ \langle \sigma_{D} v\rangle_{\rm eff}}\,,\quad b_2\equiv \frac{ \langle \sigma_{T} v\rangle^{\rm strong}_{\rm eff}}{\ \langle \sigma_{D} v\rangle_{\rm eff}}\,.
\end{equation}
In deriving the above Boltzmann equations we have written effective production rates including the effect of nearby bound states with the same baryon number. In particular, $\langle \sigma_{D} v\rangle_{\rm eff}$ is defined in eq.~\eqref{effective-xsec} and the reactions for tritium are defined similarly, although we distinguish the weak $\langle \sigma_{T} v\rangle_{\rm eff}$ from the strong $\langle \sigma_{T} v\rangle_{\rm eff}^{\rm strong}$ process. This inclusive approach allows us in principle to take into account all the bound states of Table \ref{table:boundstates}.
When the dissociation rate of deuterium, $D+W \to V+V$, becomes exponentially suppressed, the terms proportional to $b_1$ and $b_2$ tend to transfer a fraction of $X_D$ to $X_T$, with an overall decoupling as fast as $1/z^2$ (neglecting possible enhancements from the Sommerfeld effect). As expected, the strong fusion reactions ($b_2$-term) have smaller rates per deuteron, since they are proportional to $X_D^2$, partially compensating the larger hard rate, $b_2/b_1\approx 1/\alpha$.
Knowing the reaction rates one can simply solve the above set of equations numerically; it is however interesting to analyse it in the limit of small nuclei mass fractions (i.e. $X_D\ll 1$). For the most realistic cases we expect $z_0\approx O(1)$, therefore the non-trivial evolution happens when the overall rate is very small, $z_0/z\ll 1$. By contrast, in SM BBN one has $z_0\approx 10^4$, which would allow for a fast fusion of heavier nuclei if bottlenecks from the binding energies were absent \cite{ABG}. The formation of dark tritium is then further suppressed and we have
\begin{equation}\label{tritium}
X_D\approx 2\frac{z_0}{z_f} \,,\quad X_T\approx \frac{3}{8}X_D^2 b_1 +\frac{1}{8}X_D^3 b_2\,.
\end{equation}
These expressions are accurate as long as one can neglect the loss terms in the Boltzmann equation for $X_D$ and at leading order in $b_1$ and $b_2$. This is reliable as long as $X_D\ll \mathrm{min}[1/b_1,1/\sqrt{b_2}]$.
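The full system of eqs.~\eqref{uno} and \eqref{due} is also easy to integrate numerically. The sketch below bundles the dissociation prefactor into a single constant and uses toy values of $z_0$, $b_1$ and $b_2$; the final fractions can be compared with the estimate of eq.~\eqref{tritium}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

z0, b1, b2 = 1.0, 0.1, 1.0   # toy inputs, b2/b1 ~ 1/alpha
diss = 1e8                   # beta (M_N/E_D)^{3/2} / (g_D^eff Y_DM)

def rhs(z, y):
    XD, XT = y
    free = 1.0 - XD - XT
    dXD = 2 * z0 / z**2 * (free**2
                           - diss * z**1.5 * np.exp(-z) * XD / 2.0
                           - b1 / 2.0 * free * XD - b2 / 2.0 * XD**2)
    dXT = 3 * z0 / z**2 * (b1 / 2.0 * free * XD + b2 / 4.0 * XD**2)
    return [dXD, dXT]

sol = solve_ivp(rhs, (1.0, 1e3), [0.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-12)
XD, XT = sol.y[:, -1]
print(XD, XT, 3/8 * XD**2 * b1 + 1/8 * XD**3 * b2)
\end{verbatim}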
\subsection{Indirect Detection}\label{sec:ID}
The possibility of forming bound states is relevant for indirect detection, as it would lead to the emission of monochromatic photons
with energy equal to the binding energy of the nucleus. This possibility is of great interest also because asymmetric scenarios typically do not produce
indirect-detection signals, since the DM cannot annihilate into SM particles. It is also independent of whether dark nuclei are synthesised cosmologically.
We leave a detailed study to \cite{indirect} and here outline the main results.
In the $V$ model, at late times DM is made of the neutral component $V_0$ of the nucleons plus a model-dependent population of deuterium, also in its neutral component.
We will neglect the deuterium population for the purpose of this section. If the nuclear binding energy of deuterium is larger than $M_W$, one can form the triplet deuterium
via the tree-level process $V_0V_0\to D_3^\pm W^\mp$. The magnetic transition \eqref{xsec:magnetic}, including long-distance nuclear effects (see below), gives
\begin{equation}
\big[\sigma_{V_0V_0\to D_3^\pm W^\mp} v_{\rm rel}\big]_{\rm hard}=\frac {\pi \alpha_2}{M_N^2} \times \frac{2^8}{9}\sqrt{1- \frac{M_W^2}{E_{B_\mathbf{3}}^2}}\kappa^2\left(\frac{E_{B_\mathbf{3}}}{M_N}\right)^{\frac 3 2}\left[(1-a_{\mathbf{1}} \gamma_\mathbf{3})+\frac 1 2 (1-a_{\mathbf{5}} \gamma_\mathbf{3}) \right]^2\,,
\end{equation}
where we neglected the electric transition, which
is velocity suppressed in the absence of SE. For $E_{B_\mathbf{3}}<M_W$ the $W$ is off-shell, leading to a suppressed cross-section.
Another contribution arises from the SE due to SU(2)$_L$ gauge interactions and nuclear forces.
Thanks to this effect, two neutral nucleons can form deuterium (in its neutral component) through the emission of a photon. Physically this is possible because $| V_0 V_0 \rangle_S^\ell$ is not a mass eigenstate and can oscillate into $| V_+ V_- \rangle_S^\ell$. In the symmetric limit the mixing can be extracted from the Clebsch-Gordan coefficients,
\begin{equation}\label{CGcoeff}\small
| V_+ V_- \rangle=\frac {1}{\sqrt{3}} | 0 0 \rangle+\frac{1}{\sqrt{2}} | 1 0 \rangle +\frac {1}{\sqrt{6}} | 2 0 \rangle\,, \quad
| V_+ V_0 \rangle =\frac{1}{\sqrt{2}} | 1 1 \rangle+\frac{1}{\sqrt{2}} | 2 1 \rangle \,,\quad
| V_0 V_0 \rangle =-\frac{1}{\sqrt{3}} | 0 0 \rangle +\sqrt{\frac 2 3} | 2 0 \rangle\,.
\end{equation}
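These coefficients can be checked directly, e.g. with SymPy's Clebsch-Gordan routines (conventions agree up to overall phases):
\begin{verbatim}
from sympy.physics.quantum.cg import CG

# <J 0 | 1 m1 ; 1 m2> for the neutral two-nucleon states
V0V0 = {J: CG(1, 0, 1, 0, J, 0).doit() for J in (0, 1, 2)}
VpVm = {J: CG(1, 1, 1, -1, J, 0).doit() for J in (0, 1, 2)}

print(V0V0)   # {0: -sqrt(3)/3, 1: 0, 2: sqrt(6)/3}
print(VpVm)   # {0: sqrt(3)/3, 1: sqrt(2)/2, 2: sqrt(6)/6}
\end{verbatim}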
The following processes are then possible
\begin{eqnarray}
|V_0 V_0\rangle_{S=0}^{s} &\to& |V_- V_+\rangle_{S=0}^s \ \to\ D_3^0 + \gamma\,,\\
|V_0 V_0\rangle_{S=1}^{p} &\to& |V_- V_+\rangle_{S=1}^p \ \to\ D_3^0 + \gamma\,,
\end{eqnarray}
such that we have formation of neutral $D_3$ from an initial state $|V_0 V_0\rangle_0$ in the singlet or quintuplet of SU(2)$_L$, either from $p$-wave spin-1 through an electric transition or from $s$-wave spin-0 via a magnetic transition.
\paragraph{Indirect signals from Sommerfeld of $\mathrm{SU(2)}_L$ gauge interactions}
At the low velocities relevant for indirect detection the electro-weak symmetry-breaking effects can be important and a numerical solution is required, see \cite{Hisano:2006nn,Cirelli:2007xd}.
For $s$-wave the SE due to electro-weak interactions is identical to that of Wino DM $\chi$, widely studied in the literature, see \cite{Cohen:2013ama}. In particular we are interested in the long-distance effects that allow the neutral Wino state $\chi^0\chi^0$ to oscillate into $\chi^+\chi^-$. We define as $\mathrm{SE}_{00\to +-}$ the corresponding Sommerfeld factor of the Wino, which can be derived from the following potential
\begin{equation}
\label{eq:S0triplet}
V_{Q=0}^{S=0}=\bordermatrix{&+&0\cr -&2\Delta M-A&-\sqrt{2}B \cr 0& -\sqrt{2}B&0 }\,,
\end{equation}
where $A = \alpha_{\rm em}/r + \alpha_2 c_{\rm W}^2 e^{-M_Zr}/r$,
$B=\alpha_2e^{-M_Wr}/r$
and $\Delta M$ is the mass splitting produced by electroweak symmetry breaking, equal to $\Delta M=165$ MeV.
Since in our case we have the same SE as Wino DM, we can exploit the calculation performed for the Wino and apply it to the dark deuterium. Therefore, the indirect-detection signal can simply be calculated as
\begin{equation}\label{SEWino}
\sigma_{V_0V_0\to D^0_{\mathbf{3}}+\gamma} = \mathrm{SE}_{00\to + -}\, \big[\sigma_{V_+V_-\to D^0_{\mathbf{3}}+\gamma}\big]_{\rm hard}\,.
\end{equation}
The short-distance contribution to the cross-section of the charged components can be computed using \eqref{xsec:magnetic} and \eqref{CGcoeff},
\begin{equation}
\big[ \sigma_{V_+V_-\to D^0_{\mathbf{3}}+\gamma} v_{\rm rel}\big]_{\rm hard}= \kappa^2 \frac {2^7}{9}\frac{\pi \alpha_{\rm em}}{M_N^2}\left( \frac {E_{B_{\mathbf{3}}}}{M_N}\right)^{\frac 3 2}\left[(1-\sqrt{E_{B_{\mathbf{3}}}/E_{B_{\mathbf{1}}}}) +\frac 1 2(1-a_{\mathbf{5}} \gamma_\mathbf{3})\right]^2\,.
\end{equation}
The annihilation rate of Winos into photon pairs, $\sigma_{\chi^0\chi^0 \to \gamma\gamma +\frac12\gamma Z}$, has the property that both the $\gamma\gamma$ and $\gamma Z$ final states are reached from $\chi^0\chi^0$ with the same $\mathrm{SE}_{00\to +-}$. This allows us to provide an alternative form for eq.~\eqref{SEWino} in terms of Winos cross-sections \cite{Hisano:2006nn}
\begin{equation}\label{SE00}
\mathrm{SE}_{00\to +-} =\frac{[\sigma_{\chi^0\chi^0 \to \gamma\gamma +\frac12\gamma Z} v_{\rm rel}]_{\rm full}}{\big[ \sigma_{\chi_+ \chi_- \to \gamma\gamma +\frac12\gamma Z} v_{\rm rel}\big]_{\rm hard}}\,,\quad \quad [\sigma_{\chi_+ \chi_- \to \gamma\gamma +\frac12\gamma Z} v_{\rm rel}]_{\rm hard} =\frac {\pi \alpha_{\rm em}\alpha_2}{M_N^2}\,.
\end{equation}
Due to the SE the effective cross-section has peaks whose locations are identical to those of the Wino. The first peak appears for $M_N\approx 2.3$ TeV,
but the energy of the emitted photon is now equal to the binding energy of the bound state rather than to the DM mass.
The sensitivity to these photons can be studied according to the recipes described in \cite{Cirelli:2018iax}.
In Fig. \ref{fig:indirect} we recast the bounds from FERMI from the galactic center \cite{FERMI}.
Formation of deuterium through magnetic emission of photons is strongly constrained for $E_B>M_N/10$.
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{figs/figuraID1.pdf}~
\includegraphics[width=.45\textwidth]{figs/figuraID2.pdf}~
\caption{\label{fig:indirect} Indirect detection bound from emission of photon lines of energy $E_{D_3}$ through magnetic coupling ($\kappa=1$).
Exclusion limits from FERMI are obtained by recasting the results in \cite{FERMI} to account for the different ratio of photon energy and DM mass.
SE for the triplet has been extracted from \cite{Cohen:2013ama}.}
\end{figure}
\paragraph{Indirect signals from strong interactions}
Besides the standard SE due to electro-weak interactions, also the strong interactions can enhance the cross-section when the initial channel
supports a bound state at threshold. This effect is captured by the scattering length in eq. (\ref{xsec:magnetic}). We can compute the cross-section
relevant for indirect detection in the SU(2)$_L$-symmetric limit. Since the quintuplet channel is expected to be less attractive, or even weakly repulsive, the largest effect will be associated with the quintuplet initial state.
Using the Clebsch-Gordan coefficients of eq.~\eqref{CGcoeff} and neglecting further enhancements from electro-weak interactions, we find
\begin{equation}
\sigma_{V_0 V_0 \to D_3^0 + \gamma}v_{\rm rel} \approx \kappa^2 \frac {2^7 }{9} \frac{\pi \alpha_{\rm em}}{M_N^2} \left(\frac{E_{B_\mathbf{3}}}{M_N}\right)^{\frac 3 2} \times \frac {M_N E_{B_{\mathbf{3}}}}{1/a_5^2+M_N^2 v_{\rm rel}^2/4}\,.
\end{equation}
\section{Singlet Models}\label{sec:darkphoton}
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{figs/figureXDarkphoton.pdf}~
\includegraphics[width=.45\textwidth]{figs/figXDarkphoton2.pdf}~
\caption{\label{fig:dark-photon} Dark deuterium abundance in models with 2 degenerate flavours and a dark photon. }
\end{figure}
In this section we briefly study bound state formation in a model with singlets.
For simplicity we consider the minimal scenario with SU(3) gauge group and 2 degenerate flavours.
In this case the strong dynamics is as in the SM and the lightest baryon is an SU(2)$_F$ doublet, $Q$.
The dark deuterium is an $s$-wave bound state in the singlet or triplet representation of SU(2)$_F$, with spin 1 or 0 respectively,
\begin{equation}
Q \times Q = \mathbf{1}_1 + \mathbf{3}_0\,.
\end{equation}
Strong interactions favour the SU(2)$_F$ singlet as the lightest state (the analog of the SM deuterium), which will then be absolutely stable.
Differently from the SM, also the triplet could be bound. We will assume that the temperature of the dark sector is of the order
of that of the SM bath, which can be realised by introducing heavy fermions charged under the SM.
Singlets allow the mass scale to be much lower than a TeV.
As discussed in section \ref{utterly}, despite the larger number density of nucleons, deuterium cannot be formed because $3\to 2$ processes are very suppressed.
This conclusion can be avoided by introducing a dark photon that carries away the energy. Leaving the model-building aspects to future work, we assume the fermions to have opposite unit charges, $Q_D=2 J_3$, so that the nucleons have charges $\pm 1$. This could be modified with different charge assignments, with minor changes to the formation of nuclei.
Neglecting the model-dependent SE due to the dark photon, the cross-section for the formation of deuterium through the emission of a dark photon is,
\begin{equation}
(\sigma v_{\rm rel})_{NN\to D_{\mathbf{1}(\mathbf{3})}+\gamma'}= \frac {\pi \alpha_D}{M_N^2} \times 2^4\left[ \kappa^2\left(\frac{E_{B_{\mathbf{1}(\mathbf{3})}}}{M_N}\right)^{\frac 3 2}(1-a_{\mathbf{3}(\mathbf{1})} \gamma_{\mathbf{1}(\mathbf{3})})^2 + \frac 1 {4(12)} \sqrt{\frac {M_N} {E_{B_{\mathbf{1}(\mathbf{3})}}}} v_{\rm rel}^2
\right]
\label{xsecdarkphoton}
\end{equation}
where we used $\sum|C_{\mathcal{J}}^{3MM'}|^2=1/4$ for the doublet representation and assumed $m_{\gamma'}\ll E_B$.
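This cross-section is again a one-line transcription; the sketch below evaluates the singlet channel with the SM-deuteron-like binding $E_B/M_N=0.0022$ quoted below and a GeV-scale nucleon (all inputs illustrative).
\begin{verbatim}
import math

def sigma_v_dark_photon(alpha_D, M_N, EB1, EB3, v_rel, kappa=1.0):
    """Singlet channel of eq. (xsecdarkphoton), m_gamma' << E_B."""
    a3_g1 = math.sqrt(EB1 / EB3)      # 1/a_i = sqrt(M_N E_Bi)
    mag = kappa**2 * (EB1 / M_N)**1.5 * (1.0 - a3_g1)**2
    ele = 0.25 * math.sqrt(M_N / EB1) * v_rel**2
    return math.pi * alpha_D / M_N**2 * 2**4 * (mag + ele)

print(sigma_v_dark_photon(1e-4, 1.0, 0.0022, 0.001, v_rel=0.01))
\end{verbatim}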
In Fig. \ref{fig:dark-photon} we show the abundance of deuterium obtained by integrating the Boltzmann equation and assuming that no other nuclei are produced.
For masses in the GeV range, suggested by the coincidence of DM and visible-matter densities, the production of deuterium is very efficient even for dark photon couplings as small as $10^{-4}$.
Note that electric transitions, even though velocity suppressed, are relevant for small binding energies such as that of deuterium in the SM ($E_B/M_N=0.0022$). Differently from the SM, for degenerate masses we do not expect significant bottlenecks, so that the production of heavy elements can proceed unsuppressed. For 100 GeV masses the production of deuterium is again suppressed, due to the smaller number density of baryons.
\section{Conclusions}\label{sec:conclusions}
Big Bang Nucleosynthesis is a cornerstone of the cosmological history of the Universe and one might wonder if a similar process could take place in the dark sector.
In this work we have studied the synthesis of dark nuclei in theories where DM is a baryon of a new gauge interaction. Such models, motivated by the idea
of the accidental stability of DM, also predict the existence of stable and metastable nuclei with SM charges that could be formed during the evolution of the Universe.
The key step for Dark Nucleosynthesis is the formation of dark deuterium, a nucleus with baryon number 2. This process requires energy to be released
and it is automatically possible through emission of SM gauge bosons in models where the fundamental constituents have electro-weak charges.
Remarkably, the relevant cross-section can be determined from first principles in terms of the binding energies of nuclei.
To this aim, we have found it useful to employ pion-less EFT for the nucleons. This construction reproduces the effective range expansion of quantum mechanics with
short range potentials \cite{Kaplan:1998tg} and it allows us to compute in general the cross-sections for the production of shallow bound states expected for nuclear interactions.
We note that the cross-sections differ from the ones often used in the literature, depending in a non-trivial way on binding energies and velocity.
In particular, electric transitions grow for small binding energies while magnetic transitions decrease. Electric transitions, moreover, are velocity suppressed unless
they receive a Sommerfeld enhancement.
Having determined the relevant cross-sections one can solve the Boltzmann equations for the abundance of deuterium and heavier elements.
In the case of DM with electro-weak charges, for example a baryon triplet of SU(2)$_L$ here studied in detail,
one finds that only a fraction of DM binds into deuterium unless binding energies much larger than in nuclear physics are assumed.
The main reason for this is simply the number density of DM: direct searches imply that the DM mass is greater than 1 TeV for electro-weak constituents.
The small number density compared to the SM nucleons suppresses the production of nuclei. Therefore dark BBN ends after deuterium formation, leaving
a fraction of deuterium plus small traces of tritium. In minimal models with singlets, production of dark deuterium is kinematically forbidden and nucleosynthesis cannot start.
This conclusion changes completely upon including a dark photon: for DM masses in the GeV range, deuterium is formed very efficiently due to the large density, and heavier nuclei
can thus be formed through fusion reactions, populating nuclei up to large atomic numbers.
While in this work we have focused on asymmetric scenarios where the DM mass is a free parameter, our results can be extended to symmetric models,
producing extra annihilation channels and nuclei-anti-nuclei. For the simplest scenarios, however, the critical abundance is reproduced for masses around 100 TeV
and no significant fraction of nuclei is produced.
The formation of DM nuclei is interesting experimentally, as it can lead to novel signatures in DM indirect detection, even in asymmetric scenarios, and
change the predictions for direct-detection experiments due to the different composition of DM.
The emission of monochromatic photons with energy equal to the nucleus binding energy is a smoking gun of dark nuclei that could be searched experimentally \cite{indirect}.
This work extends the computation of perturbative bound-state formation in \cite{Mitridate:2017izz} to strongly coupled bound states.
We have provided general formulae, which could be used in other contexts, for electric and magnetic interactions, also taking into account important
long-distance effects associated with bound states close to zero energy. It would be interesting to generalise this formalism to the fusion of strongly coupled
bound states, as well as to study perturbative bound states within this framework.
{\small
\subsubsection*{Acknowledgements}
We wish to thank Hyung Do Kim, Gordan Krnjaic, Maria Paola Lombardo, Filippo Sala and Juri Smirnov for useful discussions.
AT is partially supported by the grant ``STRONG" from the INFN. We thank the Galileo Galilei Institute for Theoretical Physics for the hospitality during the completion of this work.
}
\section{Introduction}
The availability of accurate 3D building models is in high demand in various applications, such as the modeling of the global urbanization process, urban planning, disaster monitoring, \etc. As traditional methods performed by human operators for 3D building modeling are expensive, time-consuming and limited to small areas, modern automatic 3D building model reconstruction methods have drawn wide research interest.
Current automatic 3D building reconstruction methods can be generally categorized into data-driven, model-driven and hybrid approaches. While model-driven approaches extract the primitives of buildings and fit them to the most appropriate models~\cite{lafarge2008structural}, data-driven methods extract geometrical components of building roof planes from 3D point clouds or \glspl{gl:DSM} with point- or image-based segmentation techniques, and these components are merged into 3D models with respect to some geometrical topology~\cite{tarsha2007model}. With model-driven methods being unable to handle complex situations and data-driven methods being commonly noisy, hybrid approaches, including this work, tend to integrate the two types of approaches, where a data-driven approach extracts the building components, and a model-driven approach utilizes prior knowledge of the geometrical building models to help reconstruct 3D buildings~\cite{zheng2017hybrid}.
\begin{figure}[t]
\begin{center}
\subfigure[]{
\includegraphics[width=0.25\linewidth]{Figures/results/dsm_out_qt.png}
}
\subfigure[]{
\includegraphics[width=0.25\linewidth]{Figures/results/edge_out_w.png}
}
\subfigure[]{
\includegraphics[width=0.4\linewidth]{Figures/results/3Dmodel.png}
}
\end{center}
\caption{Sample results of the proposed 3D building vectorization method. $(a)$ refined DSM; $(b)$ edge and corner segmentation; $(c)$ vectorized 3D building model.}
\label{fig:small_overview}
\end{figure}
While \gls{gl:LiDAR} point clouds and aerial images have been the most common sources to extract 3D building information in the past years~\cite{brenner2005building,chen2005building,haala2010update}, satellite images become more and more important as they are convenient to acquire, cover wide areas and update frequently. Apart from optical images, modern satellites can also provide \glspl{gl:DSM} using photogrammetric stereo matching techniques, from which we can extract both building objects and their height information. However, satellite \glspl{gl:DSM} show a considerable amount of noise and outliers because of matching errors or the existence of non-building objects; thus refinement methods have been studied to improve their quality. With traditional methods using filter-based techniques like \gls{gl:PCA}~\cite{lopez2000improving}, the Kalman filter~\cite{wang1998applying} and the \gls{gl:FFT}~\cite{arrell2008spectral} to remove outliers, recent research has shown promising improvements by using deep learning based methods. Bittner \etal~\cite{bittner2018automatic} first proposed a \gls{gl:cGAN} based approach to filter out non-building objects and refine building shapes of photogrammetric \glspl{gl:DSM}, which was further developed by a set of works~\cite{bittner2019multi,bittner2020long,bittner2019late} to step-by-step improve the generation quality. Stucker and Schindler~\cite{stucker2020resdepth} proposed an improvement for traditional stereo image matching by regressing a residual correction with a convolutional neural network.
The revolutionary appearance of machine learning and deep learning techniques has also brought significant contributions to the whole process of 3D building reconstruction tasks. Not only can building footprints be extracted and regularized with neural networks~\cite{vakalopoulou2015building,zhao2018building,zorzi2020machine}, but also the heights and roof elements can be detected and predicted~\cite{alidoost2020shaped,alidoost20192d}, leading to constructed 3D building models. Recent examples can be found in~\cite{mahmud2020boundary}, where the authors combined building object detection, semantic segmentation and height prediction in a multi-task manner, and~\cite{wang2017single}, where the authors proposed a deep learning based model-driven approach to perform parametric building reconstruction. While most of these works focus on \gls{gl:LoD}-1, \gls{gl:LoD}-2 building modeling is relatively new. One example is presented in~\cite{partovi2019automatic}, where a hybrid 3D building reconstruction method is applied to detect and decompose building boundaries, classify roof types, and fit predefined building models.
Challenges for \gls{gl:LoD}-2 building reconstruction include the requirement for accurate building height prediction and roof element extraction, and the complexity of forming vectorized 3D roofs. Most existing methods utilize or predict coarse height maps for the detection tasks of neural networks and later perform optimization~\cite{alidoost20192d,partovi2019automatic}. Our work, by contrast, uses network-refined \glspl{gl:DSM} to extract roof elements and proposes a corresponding vectorization pipeline to form 3D models.
In this paper, we propose a machine learning based approach to reconstruct \gls{gl:LoD}-2 building models from photogrammetric \glspl{gl:DSM} and \gls{gl:PAN} images obtained from satellites. Our contributions can be described as follows:
\begin{itemize}
\item We improve the state-of-the-art \gls{gl:cGAN} based \gls{gl:DSM} refinement network proposed by Bittner \etal~\cite{bittner2020long} by adding a popular self-attention \gls{gl:CBAM}~\cite{woo2018cbam}.
\item We propose an edge and corner detection network sharing the architecture of the previous \gls{gl:DSM} refinement network.
\item We propose a novel vectorization pipeline to polygonize building roofs and reconstruct 3D building models.
\end{itemize}
\section{Methodology}
As is shown in \cref{fig:overview}, our multi-stage 3D building vectorization approach starts with a \gls{gl:cGAN} architecture for photogrammetric \gls{gl:DSM} building shape refinement. The refined \gls{gl:DSM}, together with the input PAN image, is then used to detect building edges and corners with a semantic segmentation network that shares the structure of the cGAN generator. The detected edges and corners are later vectorized to building roof polygons. In the final stage, the refined \gls{gl:DSM} and 2D polygons are combined to reconstruct 3D building models.
\subsection{DSM building shape refinement}
The proposed deep neural network for \gls{gl:DSM} refinement is an extension of the network presented by Bittner \etal~\cite{bittner2020long}, based on an image-to-image translation \gls{gl:cGAN} introduced by Isola \etal~\cite{isola2017image}. The network jointly learns a generator and a discriminator to do the domain transfer, i{.}e{.}, from a source domain, the photogrammetric \gls{gl:DSM}, to a target domain, the refined \gls{gl:DSM}. With the discriminator following the PatchGAN architecture proposed by Isola \etal~\cite{isola2017image}, the generator has a UNet-like structure with both long skip connections from the encoders to the decoder and short skip connections in-between the residual blocks inside the encoders. To enhance the features of building objects, we add a \gls{gl:CBAM} as presented by Woo \etal~\cite{woo2018cbam} before the decoder. The \gls{gl:CBAM} is a combination of 1D channel attention and 2D spatial attention, which are sequentially multiplied with the input feature maps. The overall generator architecture is shown in~\cref{fig:generator}.
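A minimal PyTorch sketch of such a module is given below; it follows the original design of Woo \etal (reduction ratio 16, $7\times 7$ spatial kernel), which does not necessarily match the exact configuration used in our network.
\begin{verbatim}
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # channel attention: shared MLP on avg- and max-pooled features
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        # spatial attention: conv over channel-wise avg and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True))
                           + self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca                                  # channel refinement
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                               # spatial refinement
\end{verbatim}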
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figures/methods/method_overview2.pdf}
\caption[overview]{Overview of the proposed method. Given a photogrammetric \gls{gl:DSM} and a \gls{gl:PAN} image as input, a \gls{gl:cGAN} based DSM refinement network and a semantic segmentation network are sequentially applied to refine building shapes and detect edges and corners. A set of vectorization algorithms are then applied to reconstruct a full 3D building model.}
\label{fig:overview}
\end{figure*}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\linewidth]{Figures/methods/net_generator_paper.pdf}
\end{center}
\caption{Generator architecture of the proposed \gls{gl:DSM} refinement network.}
\label{fig:generator}
\end{figure}
Following the idea presented by Bittner \etal~\cite{bittner2020long}, we combine several types of losses in a multi-task manner for optimizing the proposed \gls{gl:DSM} refinement network:
\begin{equation}
\label{total_loss}
\begin{array}{c}
\mathcal{L}_{\text {total}}\left(G\right) = \alpha \cdot \mathcal{L}_{\mathrm{cLSGAN}}(G,D)+\beta \cdot \mathcal{L}_{L_{1}}(G)\\
+ \gamma \cdot \mathcal{L}_{\text {normal }}\left(\mathcal{N}^{\mathrm{t}}, \mathcal{N}^{\mathrm{p}}\right)
\end{array}
\end{equation}
\noindent
where $\alpha$, $\beta$ and $\gamma$ represent the weighting parameters of different loss terms.
\vspace{1em}
\noindent \textbf{GAN loss.}
We combine a conditional GAN~\cite{mirza2014conditional} and a Least Squares GAN~\cite{mao2017least} for the \gls{gl:DSM} refinement network, thus a \gls{gl:cLSGAN} loss is utilized:
\begin{equation}
\begin{array}{c}
\underset{G}{\min} \underset{D}{\max} \mathcal{L}_{\text {cLSGAN }}(G, D)=\mathbb{E}_{x, y \sim p_{\text {real }}(y)}\left[(D(y,x)-1)^{2}\right] \\
+\mathbb{E}_{x, z \sim p_{z}(z)}\left[D(G(z,x),x)^{2}\right]
\end{array}
\end{equation}
\noindent
where $y \sim p_{\text {real }}(y)$ represents real samples, and $G(z,x)$ represents generated samples transferred from latent noise variables $z \sim p_{z}(z)$. Respectively, $x$ denotes the \gls{gl:GAN}'s condition (the input \gls{gl:DSM}), $D(y,x)$ represents the discriminator output for real samples, and $D(G(z,x),x)$ the discriminator output for generated samples.
\vspace{1em}
\noindent \textbf{L1 loss.}
It is common to blend the objective functions for \glspl{gl:GAN} with traditional regression losses like $L1$ or $L2$ distances to help the generator create images as close as possible to the given ground truth. Since the $L1$ loss encourages less blurring of the image boundaries, it is added to our generator losses:
\begin{equation}
\mathcal{L}_{L_{1}}(G)=\mathbb{E}_{x, y \sim p_{\text {real }}(y), z \sim p_{z}(z)}\left[\|\mathrm{y}-G(z,x)\|_{1}\right]
\end{equation}
\vspace{1em}
\noindent \textbf{Normal vector loss.}
To further refine the surface of building roof planes, a normal vector loss~\cite{hu2019revisiting}, which measures the angles between normal vectors of generated and target \glspl{gl:DSM}, is added to the generator losses:
\begin{equation}
\mathcal{L}_{\text {normal }}\left(\mathcal{N}^{\mathrm{t}}, \mathcal{N}^{\mathrm{p}}\right)=\frac{1}{m} \sum_{i=1}^{m}\left(1-\frac{\left\langle n_{i}^{\mathrm{t}}, n_{i}^{\mathrm{p}}\right\rangle}{\left\|n_{i}^{\mathrm{t}}\right\|\left\|n_{i}^{\mathrm{p}}\right\|}\right),
\end{equation}
\noindent
where $\mathcal{N}^{\mathrm{t}}=\left\{n_{1}^{\mathrm{t}}, \ldots, n_{m}^{\mathrm{t}}\right\}$ and $\mathcal{N}^{\mathrm{p}}=\left\{n_{1}^{\mathrm{p}}, \ldots, n_{m}^{\mathrm{p}}\right\}$ represent the normal vectors of the target and predicted \gls{gl:DSM}, and $\langle \cdot,\cdot \rangle$ denotes the scalar product of two vectors. This normal vector loss emphasizes the planarity and inclination of building roofs: the smaller the angle, the more planar the predicted surface and the more consistent it is with the target surface.
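One possible implementation, sketched below, computes per-pixel normals from the height maps by finite differences, treating a \gls{gl:DSM} as a surface $z=h(x,y)$ with unnormalized normal $(-\partial h/\partial x, -\partial h/\partial y, 1)$; the sketch assumes unit pixel spacing and is not the exact implementation used here.
\begin{verbatim}
import torch
import torch.nn.functional as F

def dsm_normals(h):
    # h: (B, 1, H, W) height map; returns unit normals (B, 3, H, W)
    dx = h[..., :, 1:] - h[..., :, :-1]   # horizontal gradient
    dy = h[..., 1:, :] - h[..., :-1, :]   # vertical gradient
    dx = F.pad(dx, (0, 1, 0, 0))          # pad back to (H, W)
    dy = F.pad(dy, (0, 0, 0, 1))
    n = torch.cat([-dx, -dy, torch.ones_like(h)], dim=1)
    return F.normalize(n, dim=1)

def normal_loss(pred, target):
    # mean (1 - cos angle) between predicted and target normals
    n_p, n_t = dsm_normals(pred), dsm_normals(target)
    return (1.0 - (n_p * n_t).sum(dim=1)).mean()
\end{verbatim}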
The combination of different losses forms a multi-task learning problem, thus an automatic weighting method first proposed by Kendall \etal~\cite{kendall2018multi} and investigated for remote sensing in~\cite{liebel2020generalized,liebel2018auxiliary} is applied to tune the loss weights according to the homoscedastic uncertainty of each task:
\begin{equation}
w_{l}=\left\{\begin{array}{ll}
0.5 \cdot \exp \left(-\log \left(\sigma_{l}^{2}\right)\right) & \text { for } \mathcal{L}_{L_{1}} \text { and } \mathcal{L}_{\text {normal }} \\
\exp \left(-\log \left(\sigma_{l}^{2}\right)\right) & \text { for } \mathcal{L}_{\text {cLSGAN }}
\end{array}\right.
\label{eq:weighting}
\end{equation}
\noindent
where $\sigma_{l}^{2}$ is a learnable parameter representing the variance, i{.}e{.}, the uncertainty of each task during training. To prevent the variance parameters from degenerating, a regularization term $\mathcal{R}_{l}=0.5 \cdot \log \left(\sigma_{l}^{2}\right)$ is added to each weighted loss. As a result, the final loss of the generator of this \gls{gl:DSM} refinement network can be formulated as:
\begin{equation}
\mathcal{L}_{\text {total}}\left(G\right)=\sum_{l}\left( w_{l} \cdot \mathcal{L}_{l}+\mathcal{R}_{l}\right)
\end{equation}
\noindent
while the discriminator loss remains the same as the \gls{gl:cLSGAN} loss:
\begin{equation}
\begin{array}{c}
\mathcal{L}_{\text {total }}(D)=\mathbb{E}_{x, y \sim p_{\text {real }}(y)}\left[(D(y,x)-1)^{2}\right] \\
+\mathbb{E}_{x, z \sim p_{z}(z)}\left[D(G(z,x),x)^{2}\right]
\end{array}
\end{equation}
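The weighting scheme of \cref{eq:weighting} can be realized with one learnable log-variance per task. The following sketch (with our own naming) initializes all weights to 1, matching the initialization used in our experiments:
\begin{verbatim}
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    # regression tasks (L1, normal) carry the 0.5 factor,
    # the cLSGAN term does not
    def __init__(self, task_factors=(0.5, 0.5, 1.0)):
        super().__init__()
        self.factors = task_factors
        # log(sigma_l^2) = 0 so that all initial weights are 1
        self.log_vars = nn.Parameter(torch.zeros(len(task_factors)))

    def forward(self, losses):
        total = 0.0
        for f, log_var, loss in zip(self.factors,
                                    self.log_vars, losses):
            w = f * torch.exp(-log_var)               # w_l
            total = total + w * loss + 0.5 * log_var  # + R_l
        return total
\end{verbatim}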
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/methods/vec_overview.pdf}
\caption[vectorization]{Overview of the proposed vectorization pipeline. (a) detected edges and corners. (b) edges and filtered corners. (c) vectorized edges and corners. (d) roof polygons. (e) 3D building model.}
\label{fig:vectorization}
\end{figure*}
\subsection{Building edge and corner detection}
Given the refined \gls{gl:DSM} and the \gls{gl:PAN} image, a semantic segmentation network is used to detect building edges and corners. The network architecture is identical to the generator of the \gls{gl:DSM} refinement network (see \cref{fig:generator}), except for the three-channel output layer. A simple multi-class cross-entropy loss is applied:
\begin{equation}
\mathcal{L}_{\mathrm{CE}}(x,t)=\mathbb{E}\left[-\sum_{i=1}^{3} t_{i} \log x_{i}\right]
\end{equation}
\noindent
where $x_{i}$ is the predicted probability for class $i$, and $t_{i}$ is either 0 or 1 depending on the label of class $i$ in the corresponding target. The output probabilities are kept for further processing.
\subsection{3D building model reconstruction}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/methods/vec_edge.pdf}
\caption[building edge connection]{Examples of building edge vectorization.}
\label{fig:edge_vectorization}
\end{figure}
In the final stage, a novel 3D building vectorization method is proposed using the refined \gls{gl:DSM} and the detected building edges and corners. Assuming building edges are straight lines, the core idea is to incrementally build a graph data structure that stores points, lines, faces and their relationships for every single building. Being a hybrid method, the proposed approach is not limited by the complexity of different building types and therefore performs well especially for large-area 3D building modeling. The general workflow is shown in \cref{fig:vectorization}.
\vspace{1em}
\noindent \textbf{Corner point selection.}
For each corner pixel in the ground truth, multiple surrounding pixels may be detected as corners, thus a \gls{gl:NMS} algorithm is implemented to filter out the best-fitting corner points. As shown in \cref{fig:vectorization} $(a) - (b)$, for each detected corner pixel (the candidate), a surrounding $n \times n$ window is used as the evaluation box. For each neighbor pixel in this window, if its value (corner probability) is not greater than that of the candidate, it is set to zero; otherwise the neighbor remains while the candidate is set to zero. This process is iterated over all corner candidates, and the surviving isolated candidates are taken as the final corner points.
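A minimal NumPy sketch of this filtering step is given below; the probability threshold is a hypothetical choice for illustration, and ties between equal neighbors are kept for simplicity.
\begin{verbatim}
import numpy as np

def corner_nms(prob, window=5, threshold=0.5):
    # prob: (H, W) corner probabilities; returns a boolean mask
    # of the surviving corner points
    r = window // 2
    p = np.where(prob >= threshold, prob, 0.0)
    padded = np.pad(p, r, mode="constant")
    keep = np.zeros(p.shape, dtype=bool)
    for y, x in zip(*np.nonzero(p)):
        box = padded[y:y + window, x:x + window]  # evaluation box
        if p[y, x] >= box.max():                  # no stronger neighbor
            keep[y, x] = True
    return keep
\end{verbatim}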
\vspace{1em}
\noindent \textbf{Roof edge vectorization.}
Before we start the vectorization process, a \gls{gl:CCL}~\cite{wu2005optimizing} algorithm is applied to group connected pixels into building instances. Two pixels are connected when they are neighbors and have a non-zero value. Here the neighborhood is defined with connectivity two, i{.}e{.}, every pixel has eight neighbors in eight directions. As shown in \cref{fig:edge_vectorization} $(a)$, different sets of connected pixels receive different IDs that separate individual buildings, which enables the following steps to be performed within the scope of every single building.
Then we connect the corners to form edges based on two conditions. The first condition is the average pixel value within a line buffer between a pair of corner points: if the average value is above a threshold, an edge is established between the corners. This condition may fail when the edge is actually curved, thus a second condition is applied in parallel: by running the \gls{gl:CCL} algorithm again inside a rectangular buffer between the pair of corners, an edge is established if the labels of the two corners are identical. Two examples are shown in \cref{fig:edge_vectorization} $(b)$, where both an edge with a hole and a slightly curved edge are successfully detected.
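The two acceptance tests can be sketched as follows; the function names and the mean-probability threshold are our own illustrative choices, a one-pixel line stands in for the line buffer, and in practice the rectangle is widened to the 7 px buffer width used in our implementation.
\begin{verbatim}
import numpy as np
from skimage.draw import line
from skimage.measure import label

def line_condition(edge_prob, p, q, min_mean=0.3):
    # mean edge probability along the straight line between
    # corners p and q, each given as (row, col)
    rr, cc = line(p[0], p[1], q[0], q[1])
    return edge_prob[rr, cc].mean() >= min_mean

def ccl_condition(edge_mask, p, q):
    # corners must fall into the same connected component of the
    # binary edge mask inside their bounding rectangle
    r0, r1 = sorted((p[0], q[0]))
    c0, c1 = sorted((p[1], q[1]))
    sub = label(edge_mask[r0:r1 + 1, c0:c1 + 1], connectivity=2)
    return sub[p[0] - r0, p[1] - c0] == sub[q[0] - r0, q[1] - c0] != 0
\end{verbatim}
An edge is accepted whenever either condition holds.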
With these two conditions we can efficiently and thoroughly detect building edges, yet one problem remains. As shown in \cref{fig:edge_vectorization} $(c)$, corners $A$ and $B$ as well as corners $B$ and $C$ form two edges, but corners $A$ and $C$ could also form an edge, which is redundant since it covers $AB$ and $BC$. To solve this issue, we again create a rectangular buffer for each potential corner pair; if other corner points exist inside this buffer, the pair cannot form an edge.
\vspace{1em}
\noindent \textbf{Roof polygon generation.}
The vectorized edges are then polygonized into roof faces (see \cref{fig:vectorization} $(d)$), which can be done with graph search algorithms. For each building, an undirected graph is first built from the obtained edges. A simple \gls{gl:DFS} is then applied to detect and mark a cycle (i{.}e{.}, a roof polygon) in this graph by tracing a back edge to vertices that have already been visited. This is run iteratively to extract all cycles with distinct marks. To avoid face overlaps, large cycles that cover smaller cycles are removed in a final step. In practice, the polygonization can also be performed directly with the $polygonize$ function from the open-source \emph{shapely} package, which is popular for manipulation and analysis of planar geometric objects~\cite{shapely}.
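A minimal sketch of this shortcut is shown below; noding the merged lines first makes shared corner points explicit graph vertices before polygonization.
\begin{verbatim}
from shapely.geometry import LineString
from shapely.ops import polygonize, unary_union

def roof_polygons(edges):
    # edges: list of ((x0, y0), (x1, y1)) vectorized corner pairs
    # of one building; returns the list of roof polygons
    merged = unary_union([LineString(e) for e in edges])
    return list(polygonize(merged))
\end{verbatim}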
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Figures/methods/vec_3D.pdf}
\caption[constructing 3D building model]{The construction of final 3D building model. Height information from refined \gls{gl:DSM} is processed and added to the polygons to build 3D roofs, walls and ground face, together forming the final 3D model.}
\label{fig:3D modeling}
\end{figure}
\vspace{1em}
\noindent \textbf{3D building modeling.}
In the final stage, walls and the ground face are constructed from the roof polygons and the refined \gls{gl:DSM} to produce a full 3D building model. Firstly, a \gls{gl:nDSM} is generated from the refined \gls{gl:DSM} with the method proposed by Qin \etal~\cite{qin2016spatiotemporal}. Then the adjacent roof faces are merged into a union, i{.}e{.}, a polygon whose edges are the building outlines. This gives us the footprint of the building, which is also the 2D shape of the ground face. In the next step, the height information from the \gls{gl:nDSM} is assigned to the corner points both inside roofs and on the building boundaries. To avoid apparent height differences between the endpoints of an edge due to corner mismatching (especially on outer boundaries, where corner heights are supposed to be much larger than those of neighboring ground pixels), a small window is applied again to adjust the height values, assigning each corner point the maximum height value within this window. Though slightly decreasing the overall accuracy, this largely improves the robustness and smoothness of the resulting 3D models.
The edges of the union polygon represent both the upper and lower boundaries of the building's surrounding walls. With the heights of the upper corners already set to the maximum height value in the window, the heights of the lower corners are set to the minimum height value in the window, i{.}e{.}, zero, hence forming the building walls in 3D. The lower edges also form the ground face of the building, completing the final 3D building model. The modeling process is shown in \cref{fig:3D modeling}.
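The robust height assignment described above can be sketched as follows, with the $5\times5$ window from our implementation details; the function name is ours.
\begin{verbatim}
import numpy as np

def corner_height(ndsm, corner, window=5, roof=True):
    # maximum (roof/eave) or minimum (ground) nDSM height
    # inside a small window around the corner (row, col)
    r = window // 2
    y, x = corner
    patch = ndsm[max(0, y - r):y + r + 1,
                 max(0, x - r):x + r + 1]
    return float(patch.max() if roof else patch.min())
\end{verbatim}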
\section{Experiments and results}
The proposed approach is evaluated on Worldview-1 data of Berlin, Germany. The input consists of a space-borne photogrammetric \gls{gl:DSM} and a panchromatic image with 0.5 $m$ spatial resolution covering a total area of 410 ${km}^2$. The ground truth is generated from the public \gls{gl:CityGML} dataset following the same procedure as described in~\cite{bittner2018dsm}. The \gls{gl:CityGML} data for Berlin is freely available at \url{https://www.businesslocationcenter.de/en/economic-atlas/download-portal/}. Open datasets for some other worldwide cities can be found at \url{https://3d.bk.tudelft.nl/opendata/opencities/}.
\subsection{Implementation details}
The \gls{gl:DSM} refinement network is based on the Coupled-UResNet \gls{gl:cGAN} architecture proposed by Bittner \etal \cite{bittner2020long}, with an additional \gls{gl:CBAM}~\cite{woo2018cbam} applied before the decoder. The edge and corner detection network shares the architecture of the generator of the \gls{gl:DSM} refinement network, while the last layer is changed to three-channel output with a \emph{$softmax$} activation function.
The networks are trained on a single NVIDIA TITAN X (PASCAL) GPU with 12 GB memory. To fit the training data into the GPU memory, the satellite images are tiled into 21480 samples of size 256$\times$256 px. A minibatch size of 4 is used for both networks. The samples are augmented not only by horizontal and vertical flipping but also by re-tiling the original image with a random overlap every epoch, so that the model sees building geometries that fell on patch borders in previous epochs. During the training of both networks, the ADAM optimizer is used with an initial learning rate of $\alpha=0.0002$ and momentum parameters ${\beta}_1=0.5, {\beta}_2=0.999$. For the \gls{gl:DSM} refinement network, the generator is pre-trained for 100 epochs as a warm-up and later interpolated with the cGAN's generator. This so-called network interpolation~\cite{wang2018esrgan} can balance the CNN's over-smoothing and the GAN's over-sharpening. The learnable weighting parameters described in \cref{eq:weighting} are all initialized to 1.
During the vectorization process, the window size for both corner point filtering and corner height assignment is set to $5 \times 5$ px, while the width of the rectangular buffers (edge connection and overlap elimination, shown in \cref{fig:edge_vectorization} $(c)$) is set to 7 px.
\subsection{Results and evaluation}
\begin{figure*}[htp]
\centering
\subfigure[PAN image]{\includegraphics[width=0.3\textwidth]{Figures/results/pan.png}}
\subfigure[Photogrammetric DSM]{\includegraphics[width=0.3\textwidth]{Figures/results/dsm_stereo_qt.png}}
\subfigure[Ground truth DSM]{\includegraphics[width=0.3\textwidth]{Figures/results/dsm_gt_qt.png}}
\subfigure[Refined DSM]{\includegraphics[width=0.3\textwidth]{Figures/results/dsm_out_qt.png}}
\subfigure[Detected edges and corners]{\includegraphics[width=0.3\textwidth]{Figures/results/edge_out_w.png}}
\subfigure[Vectorized edges and corners]{\includegraphics[width=0.3\textwidth]{Figures/results/edge_vec_w.png}}
\caption{Experimental results of a 500m $\times$ 500m testing area. Some buildings in $(c)$ are not shown in the other images because of the time difference. Some edges are missing in $(f)$ compared to $(e)$ because they do not meet the requirements of the vectorization process; this holds especially for boundary objects, as they are incomplete.}
\label{fig:experiment-results}
\end{figure*}
\Cref{fig:experiment-results} $(d)$ shows the \gls{gl:DSM} refinement result, from which it can be seen that the proposed network can both filter out and regularize building objects from the photogrammetric \gls{gl:DSM}. This also demonstrates the robustness and accuracy of our approach in detecting the correct buildings, as we can see from \cref{fig:experiment-results} $(c)$ that the ground truth contains several buildings that do not appear in the satellite images due to the time difference. \Gls{gl:MAE}, \gls{gl:RMSE} and \gls{gl:NMAD} are applied for quantitative evaluation of the \gls{gl:DSM} refinement result:
\begin{equation}
\label{MAE}
\varepsilon_{\mathrm{MAE}}(h, \hat{h})=\frac{1}{n} \sum_{j=1}^{n}\left|\hat{h}_{j}-h_{j}\right|
\end{equation}
\begin{equation}
\label{RMSE}
\varepsilon_{\mathrm{RMSE}}(h, \hat{h})=\sqrt{\frac{1}{n} \sum_{j=1}^{n}\left(\hat{h}_{j}-h_{j}\right)^{2}}
\end{equation}
\begin{equation}
\label{NMAD}
\varepsilon_{\mathrm{NMAD}}(h, \hat{h})=1.4826 \cdot \operatorname{median}_{j}\left(\left|\Delta h_{j}-m_{\Delta h}\right|\right)
\end{equation}
\noindent
where $\hat{h}$ denotes the predicted heights, $h$ the target heights, $\Delta h$ the height error, and $m_{\Delta h}$ the median height error. As shown in \cref{tab:DSM-metrics-1}, our network improves all three metrics evaluated over the testing area compared to Bittner \etal~\cite{bittner2020long}. The \glspl{gl:RMSE} of all \glspl{gl:DSM} are relatively large compared to the ground truth, which can be explained by the time difference between the reference data and the given satellite \gls{gl:DSM}: buildings may exist in one data source but not in the other (due to construction of new buildings or demolition of old ones), and vice versa.
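For reference, the three metrics can be computed directly from the height rasters, as in this short sketch:
\begin{verbatim}
import numpy as np

def dsm_metrics(pred, target):
    # MAE, RMSE and NMAD between predicted and target heights
    dh = pred - target
    mae = np.mean(np.abs(dh))
    rmse = np.sqrt(np.mean(dh ** 2))
    nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))
    return mae, rmse, nmad
\end{verbatim}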
\begin{table}
\caption{Quantitative metrics for refined \gls{gl:DSM} evaluated over the testing area.}
\label{tab:DSM-metrics-1}
\centering
\scalebox{0.8}{
\begin{tabular}{c c c c}
\toprule
& \textbf{MAE (m)} & \textbf{RMSE (m)} & \textbf{NMAD (m)} \\
\midrule
Photogrammetric DSM & 3.91 & 7.14 & 1.40\\
Bittner \etal~\cite{bittner2020long} & 1.73 & 4.02 & 0.93\\
Ours (with attention) & \textbf{1.42} & \textbf{3.65} & \textbf{0.60}\\
\bottomrule
\end{tabular}
}
\end{table}
\Cref{fig:experiment-results} $(e)$ and $(f)$ present the edge and corner detection and vectorization results. By combining building height and shape information from the refined \gls{gl:DSM} with intensity information from the \gls{gl:PAN} image, the results show well-formed building skeletons with accurate corners and complete outlines. Owing to the requirements of the vectorization process, edges for which only one corner or none is detected, or which are strongly curved, cannot be established. However, though some of the expected line segments are missing, most of the building outer boundaries and inner edges are successfully constructed. It is worth mentioning that during the experiments we also tried combining the two steps (\gls{gl:DSM} refinement and edge and corner detection) in a multi-task manner, but the results degraded, as the edge and corner detection network benefits more from an already refined \gls{gl:DSM} as input.
The final vectorized 3D building model is shown in \cref{fig:result-3Dmodel}, where most of the buildings are well reconstructed in 3D space. Even though some buildings are not fully visible in the \gls{gl:PAN} image and blurry in the photogrammetric \gls{gl:DSM}, we can still reconstruct them with a reasonable shape. It can also be seen that some buildings are missing or incomplete, which is due to missing vectorized edges and corners whose quality does not meet the requirements of the vectorization process.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/results/3Dmodel_2.png}
\caption{Reconstructed 3D building model of a $500m\times 500m$ testing area.}
\label{fig:result-3Dmodel}
\end{figure}
For quantitative evaluation of the heights of the reconstructed buildings, the generated \gls{gl:nDSM} is compared to the ground truth. \Gls{gl:MAE}, \gls{gl:RMSE} and \gls{gl:NMAD} are applied again to evaluate the quality of the generated \gls{gl:nDSM}. The evaluation result is shown in \cref{tab:nDSM-metrics}, from which we can see that both the photogrammetric \gls{gl:nDSM} and our generated \gls{gl:nDSM} achieve better metrics than the \glspl{gl:DSM} (\cref{tab:DSM-metrics-1}) after removing the height of the ground surface. Moreover, our result presents a large improvement over the photogrammetric \gls{gl:nDSM}.
\begin{table}
\caption{Quantitative metrics for building \gls{gl:nDSM} evaluated over the testing area.}
\label{tab:nDSM-metrics}
\centering
\scalebox{0.8}{
\begin{tabular}{c c c c}
\toprule
& \textbf{MAE (m)} & \textbf{RMSE (m)} & \textbf{NMAD (m)} \\
\midrule
Photogrammetric nDSM & 3.21 & 6.04 & 0.85 \\
Ours & \textbf{0.80} & \textbf{2.28} & \textbf{0.47} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Quantitative metrics for roof orientation error evaluated over the testing area.}
\label{tab:orientation-metrics}
\centering
\scalebox{0.8}{
\begin{tabular}{c c c c c}
\toprule
& \textbf{min ($^{\circ}$)} & \textbf{max ($^{\circ}$)} & \textbf{mean ($^{\circ}$)} & \textbf{$\sigma$ ($^{\circ}$)} \\
\midrule
Photogrammetric nDSM & \textbf{0.08} & 75.84 & 22.46 & 22.28 \\
Ours & 0.10 & \textbf{75.83} & \textbf{9.31} & \textbf{15.53} \\
\bottomrule
\end{tabular}
}
\end{table}
To evaluate the quality of the reconstructed 3D roofs, an orientation error is applied to examine the inclination of the constructed roof planes. As proposed by Koch \etal~\cite{koch2018evaluation}, the orientation error can be formulated as the angle difference between the normal vectors of 3D planes fitted to the predicted surface points and the given ground truth points:
\begin{equation}
\varepsilon_{\text {orie }}\left(G \odot \mathcal{P}\right)=\arccos \left(n_{i}^{\mathrm{t}} \cdot \check{n}_{i}^{\mathrm{p}}\right)
\end{equation}
\noindent
where $n_{i}^{\mathrm{t}}$ and ${n}_{i}^{\mathrm{p}}$ denote the normal vectors of a given plane in the target and predicted image, respectively. $G\odot \mathcal{P}$ represents the predicted depth image $G$ masked with a binary mask $\mathcal{P}$ containing a certain number of roof planes. \Cref{tab:orientation-metrics} shows the orientation error of the constructed 3D roof faces compared to the corresponding ground truth: the mean orientation error is within 10$^{\circ}$, which is much better than using only the photogrammetric \gls{gl:nDSM}.
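One way to evaluate this metric, sketched below under the assumption of least-squares plane fits, estimates each plane normal as the singular vector belonging to the smallest singular value of the centered surface points:
\begin{verbatim}
import numpy as np

def plane_normal(points):
    # points: (N, 3) surface points of one roof plane
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]                      # normal of the fitted plane

def orientation_error_deg(pts_target, pts_pred):
    n_t, n_p = plane_normal(pts_target), plane_normal(pts_pred)
    cos = np.clip(abs(np.dot(n_t, n_p)), 0.0, 1.0)  # sign-invariant
    return np.degrees(np.arccos(cos))
\end{verbatim}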
\begin{figure*}[ht]
\centering
\subfigure[PAN image]{
\includegraphics[width=0.35\textwidth]{Figures/results/pan_munich.png}
}
\subfigure[Photogrammetric DSM]{
\includegraphics[width=0.35\textwidth]{Figures/results/dsm_stereo_munich.png}
}
\subfigure[Refined DSM]{
\includegraphics[width=0.35\textwidth]{Figures/results/dsm_out_munich.png}
}
\subfigure[Reconstructed 3D model]{
\includegraphics[width=0.35\textwidth]{Figures/results/3Dmodel_munich_3.png}
}
\caption{Testing results of a sub-area of Munich.}
\label{fig:result-munich}
\end{figure*}
\begin{table}
\caption{Comparison of eave and ridge heights of the building model for selected buildings.}
\label{tab:ridgeandeave}
\centering
\scalebox{0.55}{
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
\multirow{2}{*}{Building No.} & \multicolumn{3}{c|}{Ridge (m)} & \multicolumn{3}{c|}{Eave (m)} \\
\cline{2-7}
& Reference & Partovi \etal \cite{partovi2019automatic} & Ours & Reference & Partovi \etal \cite{partovi2019automatic} & Ours \\
\hline
17 & 15 & 14.03 & 15.05 & 11 & 11.29 & 11.72 \\
\hline
18 & 19 & 17.46 & 18.21 & 15 & 13.38 & 15.50 \\
\hline
19 & 15 & 14.42 & 16.13 & 11 & 12.52 & 13.01 \\
\hline
20 & 15 & 14.22 & - & 11 & 10.86 & - \\
\hline
21 & 15.5 & 14.08 & 15.33 & 11.9 & 12.21 & 11.54 \\
\hline
22 & 15.6 & 15.28 & 14.87 & 11.5 & 11.87 & 11.94 \\
\hline
23 & 20.0 & 20.76 & 21.80 & 16.5 & 17.35 & 17.58 \\
\hline
24 & 16.2 & 15.87 & 17.03 & 12.3 & 11.03 & 13.66 \\
\hline
25 & 17.4 & 16.21 & 18.02 & 13.6 & 13.58 & 13.77 \\
\hline
26 & 16.8 & 16.40 & 17.19 & 12.5 & 10.54 & 11.36 \\
\hline
27 & 15 & 14.66 & 13.88 & 10.9 & 10.49 & 11.40 \\
\hline
28 & 16.8 & 16.27 & 17.11 & 12.5 & 10.41 & 12.96 \\
\hline
29 & 14.7 & 13.94 & 15.54 & 10.7 & 10.46 & 11.06 \\
\hline
30 & 16.8 & 15.66 & 16.00 & 12.5 & 10.51 & 13.79 \\
\hline \hline
$\mu_{|\Delta H|}$ & - & 0.79 & \textbf{0.74} & - & \textbf{0.93} & 0.99 \\
\hline
$\sigma_{|\Delta H|}$ & - & \textbf{0.41} & 0.44 & - & 0.77 & \textbf{0.74} \\
\hline
RMSE & - & 0.89 & \textbf{0.74} & - & \textbf{1.20} & 1.54 \\
\hline
NMAD & - & \textbf{0.55} & 0.59 & - & 0.82 & \textbf{0.68} \\
\hline
\hline
\end{tabular}
}
\end{table}
In addition, we compare our proposed 3D building vectorization method with the work presented by Partovi \etal~\cite{partovi2019automatic}, who developed a multi-stage hybrid method for 3D building reconstruction using \gls{gl:PAN} images, photogrammetric \glspl{gl:DSM} and multi-spectral images from satellite data. \Cref{fig:result-munich} presents the reconstruction results of a sub-area of Munich using Worldview-2 satellite data. The ridge and eave heights of 14 reconstructed buildings in this area are compared with reference data from the Department of Urban Planning and Building (DUPB) of Munich. In \cref{tab:ridgeandeave}, $|\Delta H|$ denotes the absolute height difference between the predicted model and the reference, and $\mu_{|\Delta H|}$ and $\sigma_{|\Delta H|}$ represent the mean and standard deviation of the height differences, respectively. The building numbers refer to \cref{fig:result-munich} $(c)$. Both methods achieve lower accuracy for eave heights than for ridge heights, because the surroundings of building boundaries are usually more complex than inner-roof ridges in both the \gls{gl:PAN} image and the photogrammetric \gls{gl:DSM}. Our method tends to produce larger values for eave heights, which can be explained by our height-assignment strategy for building corners: to avoid mismatches between \gls{gl:DSM} heights and corner positions, we give an eave corner the maximum height value in a surrounding window and the corresponding ground corner the minimum height value. This increases the relative height of the building eaves, yet this systematic error stays within a small range. Apart from that, the overall accuracy shows a promising superiority of our method, which achieves comparable metric performance with a simpler approach. Meanwhile, as the price of simplicity, the biggest remaining problem is the incompleteness of the constructed models. As can be seen from both \cref{fig:result-3Dmodel} and \cref{fig:result-munich}, some building components are lost after vectorization, which quantitatively reduces the recall score from $0.88$ to $0.81$ (Berlin testing area) compared to the refined \gls{gl:DSM} before vectorization.
\glsresetall
\section{Conclusion}
In this paper, we present a multi-stage large-scale 3D building vectorization approach. We extend recent deep-learning-based techniques for photogrammetric \gls{gl:DSM} refinement and bring them to the application of automatic 3D building model reconstruction. With the help of an attention module, we obtain promising results for both the regression of building heights and the semantic segmentation of edges and corners. Based on that, we propose a simple yet effective vectorization pipeline to reconstruct \gls{gl:LoD}-2 building models. We apply \gls{gl:NMS} to filter out the best-fitting corner points, use buffer connectivity and buffer thresholds to determine edges, and polygonize them into roof faces. By utilizing the height information from the refined \gls{gl:DSM} once more, we finally reconstruct fully vectorized 3D building models. Though limitations remain in the straight-edge assumption and the completeness of the reconstructed building models, the results demonstrate the overall robustness and accuracy of the proposed method.
{\small
\bibliographystyle{ieee_fullname}
}

\section{Introduction}
\label{sec:intro}
$N=1$ supergravity~\cite{sugra} is not only regarded as an effective
field theory of superstrings below the Planck scale, but also provides
a natural framework for the origin of the soft supersymmetry
(SUSY)-breaking terms. Most of the supergravity models, however,
contain a light massive boson $\phi$ (Polonyi field) with the mass
$m_\phi$ of order the gravitino mass
$m_{3/2}$~\cite{PLB131-59,PRD49-779,PLB318-447}, which is responsible
for the spontaneous SUSY breaking. The Polonyi field $\phi$ couples
only gravitationally to the light particles and hence the lifetime of
$\phi$ is very large as
\begin{equation}
\tau_\phi \simeq
\Gamma_\phi^{-1}
\sim \left( N\frac{m_\phi^3}{M_P^2}\right)^{-1},
\label{decay_rate}
\end{equation}
where $\Gamma_\phi$ is the decay rate of the Polonyi field,
$M_P=\sqrt{8\pi}M \simeq 1.2\times 10^{19}{\rm GeV}$ the Planck mass,
and $N$ the number of the decay modes. (In the following calculations,
we take $N=100$.) Then, the Polonyi field is expected to decay when
the temperature of the universe becomes very low. The reheating
temperature $T_R$ due to the decay of the Polonyi field is given by
\begin{equation}
T_R \sim 1 {\rm MeV} \left(\frac{m_{\phi}}{10{\rm TeV}}\right)^{3/2}.
\label{rtemp}
\end{equation}
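This estimate follows from the standard sudden-decay approximation,
$\Gamma_\phi \simeq H(T_R)$ with $H^2 = (\pi^2 g_*/90)\,T^4/M^2$ in the
radiation-dominated era, which gives
\begin{equation}
T_R \simeq \left(\frac{90}{\pi^2 g_*}\right)^{1/4}
\sqrt{\Gamma_\phi M} \sim 1 {\rm MeV} ,
\end{equation}
for $N=100$, $m_\phi = 10 {\rm TeV} $ and $g_* \simeq 10.75$.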
This fact leads to a serious cosmological difficulty (so-called
Polonyi problem)~\cite{PLB131-59,PRD49-779,PLB318-447}. Under quite
general assumptions, the Polonyi field $\phi$ takes an amplitude of
order $M$ at the end of inflation, and subsequently it starts
oscillation and dominates the energy density of the universe until it
decays. If the decay of the Polonyi field occurs after or during the
big-bang nucleosynthesis, it most likely destroys one of the
successful scenarios in the big-bang cosmology, that is the
nucleosynthesis. Furthermore, the decay of the Polonyi field releases
a tremendous amount of entropy and dilutes primordial baryon asymmetry
much below what is observed today. Especially, the important point is
that we cannot solve this problem even if we assume inflation,
which is the crucial difference between the Polonyi problem and
another serious cosmological difficulty in $N=1$ supergravity, {\it
i.e.} the gravitino
problem~\cite{PRL48-1303,PLB145-181,PLB158-463,PLB189-23,PLB303-289,PTP93-879}.
It has been pointed out~\cite{PLB174-176} that the first problem can
be solved by raising the Polonyi mass $m_\phi$ (or equivalently the
gravitino mass $m_{3/2}$) up to $O(10{\rm TeV})$ so that the reheating
temperature $T_R$ by the decay of the Polonyi field becomes larger
than $O(1{\rm MeV})$. Then, the nucleosynthesis may re-start after
the decay of the Polonyi field. This solution favors strongly
``no-scale type'' supergravity~\cite{PRD50-2356},\footnote
{In the original no-scale supergravity
model~\cite{PLB143-410,PLB147-99}, Polonyi field acquires a mass of
the order of $m_{3/2}^2/M$~\cite{NPB241-406} which is much smaller
than the gravitino mass $m_{3/2}$. However, in the ``no-scale type''
supergravity model studied in Ref.~\citen{PRD50-2356}, the mass of the
Polonyi field is at the order of the gravitino mass.}
since the gravitino mass can be taken $O(10 {\rm TeV})$ without
diminishing the original motivation of SUSY as a solution to the
hierarchy problem~\cite{Maiani,Veltman}. Namely, we can raise the
gravitino mass while keeping all masses of SUSY particles in
observable sector to be $O(100 {\rm GeV} )$.
Here, we stress that the second problem can be also solved if the
Affleck-Dine mechanism~\cite{NPB249-361} for baryogenesis works in the
early universe~\cite{PLB342-105}. However, we point out another
cosmological problem that the lightest superparticles (LSPs) produced
via the Polonyi decay are extremely
abundant~\cite{PLB342-105,lbl37715}. As a result, their energy
density, if stable, overcloses the universe unless the reheating
temperature due to the Polonyi decay is sufficiently high. This fact
gives us a lowerbound on the reheating temperature after the decay of
the Polonyi field.
The organization of this paper is as follows. In the next section, we show how the
baryon asymmetry of the universe can be explained if we assume the
Affleck-Dine mechanism for baryogenesis. In section~\ref{sec:lsp},
we calculate the mass density of LSP due to the decay of the Polonyi
field, and constrain the reheating temperature in the framework of the
minimal SUSY SU(5) model. Section~\ref{sec:discuss} is devoted to
discussion.
\section{Polonyi problem and the Affleck-Dine mechanism}
\label{sec:ad}
The Affleck-Dine mechanism~\cite{NPB249-361} for baryogenesis is based
on the fact that there are some combinations of squark $\tilde{q}$ and
slepton $\tilde{l}$ fields for which the scalar potential vanishes
identically when SUSY is unbroken. After SUSY breaking, these
flat-direction fields acquire masses $m_\chi$ of order $100 {\rm GeV} $. One
of these flat directions $\chi$ is assumed to have a large initial
value $\chi_0$ which is assumed to be about the grand unified theory
(GUT) scale $M_{GUT}\sim 10^{16} {\rm GeV} $ or the gravitational scale. It
has been shown~\cite{NPB249-361} that the decay of the coherent
oscillation mode of such a field $\chi$ can generate a large
baryon-to-entropy ratio $\sim O(1)$ under the presence of tiny
baryon-number violating operators such as
$(m_{S}/M_{GUT})\tilde{q}\tilde{q}\tilde{q}\tilde{l}$ (with $m_{S}$
being the scale of the SUSY breaking parameter in the observable
sector, which is assumed to be $m_{S}\sim O(100 {\rm GeV} )$).
We now compute how large baryon asymmetry can be obtained if we
combine the Affleck-Dine mechanism with the Polonyi problem. For this
purpose, it is convenient to use the fact that $n_B/\rho_\phi$ is
independent of time since the baryon number is approximately conserved
in the regime we consider. (Here, $n_B$ is the baryon number density
and $\rho_\phi$ the mass density of the Polonyi field.) Then,
\begin{eqnarray}
\frac{m_\chi n_B}{\rho_\phi} = {\rm const}.
\label{constant}
\end{eqnarray}
We evaluate this when the Affleck-Dine field $\chi$ starts its
oscillation. At this time,
\begin{eqnarray}
\frac{m_\chi n_B}{\rho_\phi} \simeq \frac{m_\chi n_B}{\rho_\chi}
\frac{\rho_\chi}{\rho_\phi}
\simeq
\eta_{B0} \frac{\rho_\chi}{\rho_\phi}
\simeq
\eta_{B0} \left( \frac{\chi_0}{\sqrt{3}M} \right)^2,
\label{at_oscillation}
\end{eqnarray}
where $\rho_\chi$ is the mass density of the Affleck-Dine field and
$\eta_{B0}\equiv (n_B/n_\chi )_{H\simeq m_\chi}$ with $n_\chi$ being
the number density of $\chi$. In deriving Eq.(\ref{at_oscillation}),
we have used $H\simeq\sqrt{\rho_\phi}/(\sqrt{3}M)$, and
$\rho_\chi=m_\chi^2\chi_0^2$. On the other hand, we evaluate the same
quantity given in Eq.~(\ref{constant}) at the decay time of the
Polonyi field $\phi$:
\begin{eqnarray}
\frac{m_\chi n_B}{\rho_\phi} \simeq \frac{4}{3}
\frac{m_\chi n_B(T_R)}{s(T_R) T_R}.
\label{at_decay}
\end{eqnarray}
Equating Eq.~(\ref{at_oscillation}) and Eq.~(\ref{at_decay}), we get
\begin{eqnarray}
\frac{n_B}{s} &\simeq&
\frac{1}{4} \eta_{B0} \frac{T_R}{m_\chi}
\left ( \frac{\chi_0}{M} \right ) ^2
\sim 10^{-5} \eta_{B0}
\left ( \frac{T_R}{1 {\rm MeV} } \right )
\left ( \frac{100 {\rm GeV} }{m_\chi} \right )
\left ( \frac{\chi_0}{M} \right ) ^2.
\label{eta_now}
\end{eqnarray}
With Eq.(\ref{eta_now}), one may explain the observed value $n_B/s
\sim (10^{-10} - 10^{-11})$ taking $\chi_0 \sim M_{GUT}$, $T_R\sim
1{\rm MeV}$, and $\eta_{B0}\sim O(1)$.\footnote
{It has been pointed out that the Affleck-Dine mechanism for
baryogenesis may result in too large baryon number fluctuation in the
case of chaotic inflation~\cite{APP2-291}. However, such a difficulty
can be solved if we adopt a larger value of the initial amplitude of
the Affleck-Dine field; $\chi_0\sim M$. In that case, we have to
choose $\eta_{B0}\sim 10^{-5}$.}
In our case, the dilution factor $D$ is given by
\begin{eqnarray}
D \sim \frac{T_R}{m_\chi}
\left ( \frac{\chi_0}{M} \right ) ^2
\sim
10^{-5}
\left ( \frac{T_R}{1 {\rm MeV} } \right )
\left ( \frac{100 {\rm GeV} }{m_\chi} \right )
\left ( \frac{\chi_0}{M} \right ) ^2,
\label{dilution}
\end{eqnarray}
which is much larger than that derived in the previous
work~\cite{PLB174-176}. For example, the dilution factor given in
Ref.~\citen{PLB174-176} is $O(10^{-14})$ for the case $T_R\sim 1{\rm
MeV}$, which is about nine orders of magnitude smaller than our result with
$m_\chi \sim 100{\rm GeV}$ and $\chi_0 \sim M$. This discrepancy
originates from the fact that the amplitude of the Polonyi field has
already decreased by a large amount by the decay time of the
Affleck-Dine field. In Ref.~\citen{PLB174-176}, this effect is not
taken into account, and hence the dilution factor given there is
underestimated.
\section{Mass density of LSP}
\label{sec:lsp}
Let us now turn to discuss a new cosmological difficulty in the
present solution to the Polonyi problem. The decay of the Polonyi
field produces a large number of superparticles, which promptly decay
into LSPs. The number density of LSP produced by the decay, $n_{{\rm
LSP},i}$, is of the same order of that of the Polonyi field
$n_\phi\equiv\rho_\phi /m_\phi$. Just after the decay of the Polonyi
field, the yield variable for LSP, $Y_{\rm LSP}$, which is defined by
the ratio of the number density of LSP to the entropy density $s$, is
given by
\begin{eqnarray}
m_{\rm LSP} Y_{\rm LSP} & \simeq & \frac{\rho_{{\rm LSP},i}}{s}
\simeq \frac{m_{\rm LSP}\,\rho_\phi}{m_\phi\, s}
\sim \frac{m_{\rm LSP}T_R}{m_\phi}
\nonumber \\
& \sim & 10^{-5} {\rm GeV} \left(\frac{m_{\rm LSP}}{100 {\rm GeV} }\right)
\left(\frac{T_R}{1 {\rm MeV} }\right)\left(\frac{10 {\rm TeV} }{m_\phi}\right),
\label{mY_LSP}
\end{eqnarray}
where $\rho_{{\rm LSP},i}$ is the mass density of LSP just after the
decay of the Polonyi field. If LSP is stable and the pair annihilation
of LSP is not effective, $Y_{\rm LSP}$ is conserved until today. On
the other hand, the ratio of the critical density $\rho_c$ to the
present entropy density $s_{0}$ is given by
\begin{equation}
\frac{\rho_c}{s_{0}} \simeq 3.6 \times 10^{-9}h^2~ {\rm GeV} ,
\label{critical}
\end{equation}
where $h$ is the Hubble constant in units of 100km/sec/Mpc. Comparing
Eq.(\ref{mY_LSP}) with Eq.(\ref{critical}), we see that LSP overcloses
the universe in the wide region of the parameter space of $m_{\rm LSP}$, $m_{\phi}$
and $T_R$ with which we are concerned.
If the pair annihilation of LSP takes place effectively, its abundance
is reduced to
\begin{equation}
\frac{n_{\rm LSP}}{s} \simeq
\left. \frac{H}{s\langle\sigma_{\rm ann}v_{\rm rel}\rangle}
\right | _{T=T_R},
\label{abundance_LSP}
\end{equation}
where $\sigma_{\rm ann}$ is the annihilation cross section, $v_{\rm
rel}$ is the relative velocity, and $\langle\cdots\rangle$ represents
the average over the phase space distribution of LSP. Comparing
Eq.(\ref{critical}) with Eq.(\ref{abundance_LSP}), we obtain a
lowerbound on the annihilation cross section,
\begin{equation}
\langle\sigma_{\rm ann}v_{\rm rel}\rangle \mathop{}_{\textstyle \sim}^{\textstyle >}
3\times 10^{-8}h^{-2} {\rm GeV} ^{-2}
\left( \frac{m_{\rm LSP}}{100 {\rm GeV} }\right)
\left( \frac{100 {\rm MeV} }{T_R}\right),
\label{sv_limit}
\end{equation}
in order that the mass density of LSP does not overclose the universe.
As we can see, constraint (\ref{sv_limit}) becomes more severe as the
reheating temperature $T_R$ decreases, and hence we obtain a
lowerbound on $T_R$. Here, we derive the constraint on $T_R$ in the
framework of minimal SUSY SU(5) model~\cite{NPB193-150,ZPC11-153},
which is shown in Appendix~\ref{ap:su5}. We first solve RGEs based on
the minimal SU(5) model with the no-scale boundary conditions, and
determine the mass spectrum and mixing matrices of the superparticles.
Notice that we only investigate the parameter space which is not
excluded by the experimental or theoretical constraints. The crucial
constraints are as follows:
\begin{itemize}
\item Higgs bosons $H_f$ and $\bar{H}_f$ have correct vacuum
expectation values; $\langle
H_f\rangle^2+\langle\bar{H}_f\rangle^2\simeq (174 {\rm GeV} )^2$ and
$\tan\beta=\langle{H_f}\rangle/\langle{\bar{H}_f}\rangle$.
\item Perturbative picture is valid below the gravitational scale.
\item LSP is neutral.
\item Sfermions (especially, charged sleptons) have masses larger than
the experimental lower limits~\cite{PDG}.
\item The branching ratio for $Z$-boson decaying into neutralinos is
not too large~\cite{PLB350-109}.
\end{itemize}
One remarkable thing is that {\it LSP almost consists of bino which is
the superpartner of the gauge field for $U(1)_Y$} if we require that
LSP is neutral. Therefore, in our model, the LSP mass $m_{\rm LSP}$ is
essentially equivalent to the bino mass. Then, we calculate the
annihilation cross section and determine the lowerbound on the
reheating temperature from the following equation:
\begin{equation}
\left. \frac{H}{s\langle\sigma_{\rm ann}v_{\rm rel}\rangle}
\right | _{T=T_R} \leq
\frac{\rho_c}{s_0} \simeq 3.6 \times 10^{-9}\,h^2~ {\rm GeV} .
\end{equation}
Since the LSP is dominated by its bino component, it annihilates into fermion
pairs. The annihilation cross section is given by~\cite{PLB230-78}
\begin{equation}
\langle\sigma_{\rm ann}v_{\rm rel}\rangle
= a + b\langle v^2\rangle,
\label{sigma*v}
\end{equation}
where $\langle v^2\rangle$ is the averaged squared velocity of the LSP,
and
\begin{eqnarray}
a & \simeq &
\frac{32\pi\alpha_1^2}{27}
\frac{m_t^2}{(m_{\tilde{t}_R}^2 + m_{\rm LSP}^2 - m_t^2)^2}
\left ( 1 - \frac{m_t^2}{m_{\rm LSP}^2} \right ) ^{1/2}
\theta (m_{\rm LSP}-m_t),
\label{s-wave} \\
b
&\simeq& \frac{8\pi\alpha_1^2}{3} \sum_{m_f\leq m_{LSP}} Y_f^4 \left \{
\frac{m_{\rm LSP}^2}{(m_{\rm LSP}^2+m_{\tilde{f}}^2)^2}
- \frac{2m_{\rm LSP}^4}{(m_{\rm LSP}^2+m_{\tilde{f}}^2)^3}
+ \frac{2m_{\rm LSP}^6}{(m_{\rm LSP}^2+m_{\tilde{f}}^2)^4} \right \} .
\label{p-wave}
\end{eqnarray}
Here, $\alpha_1\equiv g_1^2/4\pi\simeq 0.01$ represents the fine
structure constant for U(1)$_{\rm Y}$, $m_t$ the top-quark mass, $Y_f$
the hypercharge of the fermion $f$, and $m_{\tilde{f}}$ the mass of
the sfermion $\tilde{f}$. Notice that $a$- and $b$-terms correspond
to $s$- and $p$-wave contributions, respectively. Taking
$m_{\tilde{f}}\sim m_{\rm LSP}\sim 100 {\rm GeV} $, the annihilation cross
section given in Eq.(\ref{sigma*v}) is at most $3\times
10^{-8} {\rm GeV} ^{-2}$. Using this result in the inequality
(\ref{sv_limit}), we can see that the reheating temperature must be
higher than about 100MeV even if $\langle v^2\rangle\sim 1$. In fact,
LSP is in kinetic equilibrium in the thermal bath~\cite{lbl37715},
and hence $\langle v^2\rangle$ is given by $O(T_R/m_{\rm LSP})$, which is much
smaller than 1. Thus, we have a more severe constraint on $T_R$, as we will
see below.
In Fig.~1, we show the lowerbound on the reheating temperature in the
$\tan\beta$ vs. $m_{\rm LSP}$ plane. In the figures, large or small
$\tan\beta$'s are not allowed since the Yukawa coupling constant for
the top quark or bottom quark blows up below the gravitational scale
for such $\tan\beta$'s. Furthermore, there also exists a lowerbound on
the LSP mass. In the case where $\tan\beta \mathop{}_{\textstyle \sim}^{\textstyle <} 20$, charged sfermions
become lighter than the experimental limit if the LSP mass becomes
lighter than $\sim 50 {\rm GeV} $. On the other hand, for the large
$\tan\beta$ case, unless the bino mass is sufficiently large, the
lightest charged slepton becomes LSP. (Remember that the dominant
component of LSP is bino.) Thus, the lowerbound on $m_{\rm LSP}$ is
obtained. As we can see, the reheating temperature should be larger
than about 100MeV, even for the case where $m_{\rm LSP}\sim 50 {\rm GeV} $.
The constraint becomes more stringent as $m_{\rm LSP}$ increases,
since the masses of the superparticles which mediate the annihilation
of the LSP become larger as well. If we translate the
lowerbound on the reheating temperature into that of the Polonyi mass
$m_\phi$, we obtain $m_\phi \mathop{}_{\textstyle \sim}^{\textstyle >} 100 {\rm TeV} $ (see
Eq.(\ref{rtemp})).
Finally, we comment on the accidental case where the annihilation
process hits the Higgs pole in the $s$-channel. If the LSP mass is
just half of the lightest Higgs boson mass, the LSP annihilation cross
section is enhanced since LSP has small but nonvanishing fraction of
higgsino component. If the parameters are well tuned, such a situation
can be realized and the lowerbound of $T_R$ decreases to $O(10 {\rm MeV} )$.
However, we consider such a scenario very unnatural since a
precise adjustment of the parameters is required in order to hit the
Higgs pole. \footnote
{In the case where the annihilation process hits the pole of heavier
Higgs bosons, the cross section is not enhanced so much, since the
widths of the heavier Higgs bosons are quite large.}
\section{Discussion}
\label{sec:discuss}
Here, we proposed a solution to the Polonyi problem based on the
no-scale type supergravity and the Affleck-Dine mechanism for
baryogenesis. In our scenario, however, LSP may be overproduced due to
the decay of the Polonyi field. From this fact, we obtained the
lowerbound on the reheating temperature after the decay of the Polonyi
field, which is given by $O(100 {\rm MeV} )$. As a result, the mass of the
Polonyi field has to be larger than $O(100 {\rm TeV} )$, which may raise a
new fine-tuning problem~\cite{PLB173-303,PLB168-347}.
To cure this conflict in the case of $T_R \mathop{}_{\textstyle \sim}^{\textstyle <} 10$ MeV, let us
consider modifications of the minimal SUSY standard model (MSSM). One
way is to extend the particle contents and provide a new, very light
LSP. If the LSP is lighter than $O(10$ MeV), we can see from
Eq.~(\ref{mY_LSP}) that the relic abundance does not exceed the
critical density without invoking the annihilation. This is most
easily realized in the minimal extension of the MSSM, where the
superpartner of a singlet Higgs is contained in the neutralino sector.
Another extension which has a light LSP is to incorporate the
Peccei-Quinn symmetry. Then the superpartner of the axion, the axino,
can be the LSP~\cite{NPB358-447}. Indeed, it was shown in
Ref.~\citen{PLB276-103} that the axino becomes massless at the
tree-level in the no-scale supergravity. Radiative corrections may
give a small, model-dependent axino mass.\footnote
{A light axino can also be realized if one chooses a special form of
superpotential~\cite{PLB287-123}.}
In the case of the axino mass $\sim 10 {\rm MeV} $, the axino becomes a cold
dark matter of the universe.
$R$-parity breaking is another possibility to make our scenario
cosmologically viable. In this case, the LSP is no longer stable, but
decays to ordinary particles. If the lifetime $\tau_{LSP}$ of the LSP
is shorter than 1~sec,\footnote
{Such a small $R$-parity violation ($\tau_{LSP} \sim 1{\rm sec}$) is
consistent with other phenomenological constraints~\cite{PLB256-457}.}
its decay does not upset the standard big-bang nucleosynthesis.
\section*{Acknowledgement}
The author would like to thank M.~Kawasaki, M.~Yamaguchi and
T.~Yanagida for useful discussion, and J.~Yokoyama for a comment on
the Affleck-Dine mechanism for baryogenesis.
\section{\label{sec:level1}Introduction}
Observations of GW events by the advanced LIGO and VIRGO GW interferometers
are ushering in the era of GW astronomy \cite{aLigo,aVirgo}.
These GW events include merging black hole (BH) binaries and an inspiraling neutron star (NS) binary \cite{GW150914,GW151226,GW170104,GW170608,GW170814,GW170817,GWTC-1}.
Several scenarios
that include long-lived (galactic) field binaries, star clusters, galactic nuclei
and active galactic nuclei can produce these observed GW events \cite{Fieldbinaries,Globclusters,Galnuclei,AGN,LISAGCs}.
Fortunately, it may be possible to extract valuable
information about the astrophysical origins of GW events in the near future.
This requires accurate GW measurements of the spin-orbit misalignment or
the orbital eccentricities of these GW events \cite{RZV16,CA17,NSBK17}.
Using both frequency and time domain
inspiral-merger-ringdown (IMR) waveforms, residual orbital eccentricities of
the first two GW events were restricted to be below $0.15$ when these binaries entered the aLIGO frequency window~\cite{GW150914prop,Huerta}.
Strictly speaking, the GW events detected so far do not exhibit any observational signatures of residual orbital eccentricities
and are faithfully captured by IMR templates associated with compact binaries merging along quasi-circular orbits.
However, there exists a number of astrophysical scenarios that can
produce GW events with non-negligible eccentricities in the frequency windows
of ground-based GW detectors.
Dense star clusters like the ubiquitous globular clusters
are the most promising sites to form aLIGO relevant compact binaries with non-negligible
orbital eccentricities \cite{RCR16}.
A recent realistic modeling of globular clusters that involves
general relativistic few-body interactions provided a non-negligible
fraction of BH binaries with eccentricities $> 0.1$ as they enter the aLIGO frequency window \cite{SMR14,SR17,RACR18,Samsing18,RACK18,LISAGCs}.
Additionally, there exists a number of other astrophysical scenarios that can force
stellar mass compact binaries to merge with orbital eccentricities.
This include GW induced merger during hyperbolic encounters between
BHs in dense clusters \cite{RYB16}
and mergers influenced by Kozai effect in few body systems as explored in
many detailed investigations (see Ref.~\cite{Randall18} and references therein).
Further, a very recent investigation pointed out that less frequent
binary-binary encounters in dense star clusters can easily produce
eccentric compact binary coalescence \cite{ZSRHR18}.
These detailed investigations suggest that it may be reasonable to expect GW events with non-negligible orbital eccentricities in the coming years.
Non-negligible orbital eccentricities may be helpful to improve
the accuracy with a network of GW interferometers to
constrain parameters of compact binary mergers \cite{GKRF18,Gondan_Kocsis}.
Moreover, massive BH binaries in eccentric orbits are of definite
interest to maturing Pulsar Timing Arrays and the planned
Laser Interferometer Space Antenna (LISA) \cite{PTA_e,LISA_e}.
There are different on-going investigations to model eccentric compact
binary coalescence.
These efforts aim to provide template families that model
GWs from IMR phases of eccentric coalescence.
The initial effort, detailed in Ref.~\cite{Huerta},
provided a time-domain IMR family that requires orbital eccentricity to be negligible during
the merger phase.
The inspiral part of the above waveform family was based on certain $x$-model, introduced in Ref.~\cite{HHLSxmodel},
that adapted GW phasing formalism of Refs.~\cite{DGI,KG06}. Additionally,
a preliminary comparison with two numerical relativity (NR) waveforms was also pursued in Ref.~\cite{Huerta}.
An improved version of the above family was presented in Ref.~\cite{HKP18}
that employed certain quasi-circular merger waveform and which can reproduce
their NR simulations for any mass ratio below $4$.
These waveform families are expected to model GWs from eccentric coalescence
when initial eccentricities were usually below $0.2$.
Very recently, another time domain IMR family was introduced in Ref.~\cite{ENIGMA}.
This detailed effort
combined various
elements from post-Newtonian, self-force and black hole perturbation
approaches in tandem with NR simulations to model GWs from moderately eccentric non-spinning
BH binary coalescence.
The resulting IMR waveforms were validated with many NR
simulations for
eccentric binary BH mergers lasting around ten orbits with mass ratios below $5.5$
and initial eccentricities below $0.2$.
The eccentric binary BH coalescence is also explored in the framework of
the Effective-One-Body (EOB) approach \cite{TD_review}.
A formalism to incorporate orbital eccentricity in the existing EOB approach
to model quasi-circular compact binary coalescence is presented in Ref.~\cite{HB17}.
Additionally, Ref.~\cite{CHEOBNR} presented an EOB waveform family that incorporated elements of 2PN accurate
eccentric orbital description while comparing with few NR simulations for eccentric binary BH coalescence.
In contrast, the LIGO Scientific Collaboration (LSC) adapted
Ref.~\cite{EMLP13}
that provided a crude IMR prescription to model
GW signals from merging highly eccentric compact binaries.
This was employed to probe the ability of few LSC algorithms to
extract burst-like signals in the LIGO
data \cite{VTIWARI16}.
Further, some of us developed a
ready-to-use {`effective eccentric variant'} of \texttt{IMRPhenomD} waveform to
constrain
the initial orbital eccentricity of the GW150914 black hole binary.
This was pursued to justify the assumption of binary evolution
along circular orbits for the event \cite{GW150914prop}.
A crucial ingredient of the above IMR waveform family involved
an eccentric version of
\texttt{TaylorF2} approximant that incorporated in its
Fourier phase the
leading-order eccentricity corrections up to 3PN order.
The present paper provides a fully analytic frequency domain interferometric response function $\tilde{h}(f)$, relevant for GW data analysis of nonspinning compact binaries inspiraling along moderately eccentric PN-accurate orbits.
Our computation is aimed at extending the widely used \texttt{TaylorF2} approximant that provides analytic
frequency domain GW templates for compact binaries inspiraling along quasi-circular orbits \cite{TaylorF2}.
This waveform family employs the method of stationary phase approximation (SPA)
to compute analytically Fourier transform of temporally evolving GW polarization states, $h_{\times}$ and $h_{+}$, for quasi-circular inspirals.
The popular LSC approximant provides fully analytic Fourier domain
GW response function $\tilde{h}(f)$ that incorporates
3.5PN-accurate Fourier phase \cite{TaylorF2}. In other words, this approximant provides general relativistic corrections to the GW phase evolution that are accurate to $(v/c)^7$ order beyond the dominant quadrupolar order, where $v$ is the orbital velocity.
The present manuscript details our derivation of a fully analytic $\tilde{h}(f)$ with 3PN-accurate Fourier phase, including eccentricity contributions to sixth order in a certain initial eccentricity at each PN order. Additionally, we include 1PN-accurate amplitude corrections and the effect of 3PN-accurate periastron advance on the Fourier phases.
To derive our eccentric approximant, we extend the post-circular scheme of Ref.~\cite{YABW} to higher PN orders.
This scheme involves expanding the Newtonian accurate $h_{\times}$ and $h_{+}$ as a power series in
orbital eccentricity that requires analytic solution to the classic Kepler equation.
We extend such a Newtonian approach by invoking a recent effort
to solve analytically PN-accurate Kepler equation in the small eccentricity limit \cite{YS}. This detailed computation also provided analytic 1PN-accurate amplitude corrected expressions for $h_{\times}$ and $h_{+}$ as a sum over harmonics in certain mean anomaly $l$ of PN-accurate Keplerian type parametric
solution \cite{YS}. Additionally, the above PN-accurate decomposition explicitly
incorporated the effect of periastron advance on individual harmonics, numerically explored using PN description in Ref.~\cite{MG07}. We combine such 1PN-accurate amplitude corrected $h_{\times}$ and $h_{+}$
expressions that incorporated eccentricity contributions to sixth order
at each PN order with the two beam pattern functions, $F_{\times}$ and $F_{+}$, to obtain fully analytic time domain GW response function $h(t)$.
Our eccentric \texttt{TaylorF2} approximant is obtained by applying the method of stationary phase approximation to such an analytic
$h(t)=F_+ h_++F_\times h_\times$ expression.
To obtain analytic expressions for several Fourier phases at their associated stationary points of $h(t)$, we require
additional PN-accurate expressions.
This involves deriving 3PN-accurate expression for the time eccentricity $e_t$, present in the 3PN-accurate Kepler Equation \cite{MGS}, as a bivariate expansion in terms of orbital angular frequency $\omega$, its initial value $\omega_0$ and $ e_0$, the value of $e_t$ at $\omega_0$. This lengthy computation extends to
3PN order, the idea of certain
{\it asymptotic eccentricity invariant} at the quadrupolar order, introduced in Ref.~\cite{KKS95}, and extended to 2PN in Ref.~\cite{THG}.
In fact, we adapted the approach of Ref.~\cite{THG} by employing the appropriately modified 3PN-accurate $d\omega/dt$ and $de_t/dt$ expressions of Refs.~\cite{ABIS,KBGJ}
to obtain 3PN-accurate bivariate expression for $e_t$.
A careful synthesis of the above listed PN-accurate expressions lead to a fully analytic frequency domain \texttt{TaylorF2} approximant
that included 1PN-accurate amplitude corrections and 3PN-accurate Fourier phases. An additional feature of our approximant
is the inclusion of periastron advance effects to 3PN order.
To explore GW data analysis implications of these features, we perform preliminary {\it match} computations \cite{DIS98}.
We conclude that the influences of periastron advance are
non-negligible for moderately eccentric binaries, especially
in the aLIGO frequency window.
This observation should be relevant while constructing IMR waveform family for compact binaries merging along moderate eccentric orbits.
This paper is structured as follows. In Sec.~\ref{sec:PostCirc}, we summarize the efforts of Refs.~\cite{YABW,THG}
to obtain analytic $\tilde{h}(f)$ with PN-accurate Fourier phase. The crucial inputs to construct our eccentric \texttt{TaylorF2}
approximant is also listed in this section.
Our approach and crucial expressions to implement our eccentric approximant that incorporates eccentricity contributions up to $\mathcal{O}(e_t^6)$ to 3PN are presented in Sec.~\ref{sec:level2}.
A brief summary and possible extensions are listed in Sec.~\ref{conclusion} while
detailed expressions, accurate to ${\cal O}(e_0^4)$ are given in Appendix \ref{appendixA}
\section{ Post-circular extensions to circular inspiral templates } \label{sec:PostCirc}
We begin by reviewing two key efforts to include the effects of orbital eccentricity onto the circular inspiral templates \cite{KKS95,YABW}.
This involves listing in Sec.~\ref{sec:PostCirc_1} the steps that are crucial to compute
analytic frequency domain GW response function with quadrupolar amplitudes and PN-accurate Fourier phase
in some detail.
Various lengthy expressions, extracted from Refs.~\cite{YS,ABIS,KBGJ}, are listed in Sec.~\ref{sec:PostCirc_2}
that will be
crucial to compute the time domain response function for eccentric binaries
while incorporating effects of periastron advance, higher order radiation reaction and amplitude corrections.
\subsection{\label{sec:PostCirc_1} Quadrupolar order $\tilde{h}(f)$ with PN-accurate Fourier phase }
Following \cite{KT89_300}, we may express
the GW interferometric response function as
\begin{widetext}
\begin{equation}
h(t)= F_+\left(\theta_S,\phi_S,\psi_S\right) h_+(t) + F_\times\left(\theta_S,\phi_S,\psi_S\right) h_\times (t) \,, \label{ht_ant} \end{equation}
\end{widetext}
where $F_{\times,+}\left(\theta_S,\phi_S,\psi_S\right)$
are the two detector antenna patterns.
These quantities depend on $\phi_S,\theta_S$, the right ascension
and declination of the source, and a certain polarization angle
$\psi_S$ \cite{KT89_300}.
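For orientation, we also recall the quadrupolar antenna patterns of a single L-shaped interferometer; written in a frame attached to the detector, with $(\theta_S,\phi_S)$ specifying the source direction, they take the standard textbook form \cite{KT89_300}
\begin{align}
F_+ &= \frac{1}{2}\left(1+\cos^2\theta_S\right) \cos 2\phi_S\, \cos 2\psi_S - \cos\theta_S\, \sin 2\phi_S\, \sin 2\psi_S\,, \nonumber \\
F_\times &= \frac{1}{2}\left(1+\cos^2\theta_S\right) \cos 2\phi_S\, \sin 2\psi_S + \cos\theta_S\, \sin 2\phi_S\, \cos 2\psi_S\,. \nonumber
\end{align}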
For eccentric inspirals, the explicit expressions for the quadrupolar order GW polarization states, $h_{\times}$ and $h_+$, are
given by Eqs.~(3.1) of Ref.~\cite{YABW}.
It is rather straightforward to express these Newtonian accurate expressions as a sum over harmonics
in terms of the mean anomaly $l$.
The resulting expressions read
\begin{widetext}
\begin{equation}
h_{+,\times}(t)= - \frac{G m \eta}{c^2 D_L} \, x\,
\sum\limits_{j=1}^{10} \left[ C_{+,\times}^{(j)} \cos{ j l} + S_{+,\times}^{(j)} \sin{j l} \right]\,,
\label{Eq_1}
\end{equation}
\end{widetext}
where $D_L$ denotes the luminosity distance, and the symmetric mass ratio $\eta$ of a binary consisting of individual masses
$m_1$ and $m_2$ is defined to be $\eta= (m_1\,m_2)/ m^2$, where $m= m_1 + m_2$ is the total mass.
Further, we use the commonly used dimensionless PN expansion parameter
$x=\,\left(\frac{G\, m\, \omega}{c^3}\right)^{2/3}$ where $G$, $c$ and $\omega$ are the
gravitational constant, the speed of light in vacuum and the orbital angular frequency,
respectively.
The Newtonian accurate amplitudes,
$C^{(j)}_{+,\times}$ and $S^{(j)}_{+,\times}$, are written as
power series in orbital eccentricity $e_t$ whose coefficients involve
trigonometric functions of the two angles
$\iota, \beta$ that specify the line of sight vector in a certain
inertial frame.
The derivation of these expressions is detailed in Ref.~\cite{YABW} and the required
inputs are obtained by adapting a standard analytic approach to solve the classical Kepler equation
in terms of the Bessel functions~\cite{PC_93}.
With the help of Eqs.~(\ref{ht_ant}) and (\ref{Eq_1}), we obtain interferometric strain for GWs from eccentric binaries as
\begin{equation}
\label{Eq_hN}
h(t) = \, - \frac{G m \eta}{c^2 D_L} \left(\frac{G m \omega}{c^3} \right)^{2/3} \sum\limits_{j=1}^{10} \alpha_j~\cos\left(j l +\phi_j\right) ,
\end{equation}
where $\alpha_j = {\rm sign}(\Gamma_j) \sqrt{\Gamma_j^{2}+\Sigma_j^{2}} $ and $ \phi_j = \tan^{-1}{\left(-\frac{\Sigma_j}{\Gamma_j}\right)} $.
The two new functions,
$ \Gamma_j$ and $ \Sigma_j$, are defined as
$\Gamma_j=
F_+\, C^{(j)}_+ + F_{\times}\, C^{(j)}_{\times}$
and $\Sigma_j = F_+\, S^{(j)}_+ + F_{\times}\, S^{(j)}_{\times} $, respectively as in Ref.~\cite{YABW}.
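For illustration, the mapping from the antenna-weighted amplitudes $(\Gamma_j, \Sigma_j)$ to the amplitude-phase pair $(\alpha_j, \phi_j)$ may be coded in a few lines. The following minimal \texttt{Python} sketch mirrors the definitions above; all function and variable names are ours:
\begin{verbatim}
import numpy as np

def alpha_phi(F_plus, F_cross, C_p, C_x, S_p, S_x):
    # Fold the amplitudes C^(j), S^(j) with the antenna patterns
    # to form Gamma_j, Sigma_j and hence alpha_j and phi_j
    Gamma = F_plus * C_p + F_cross * C_x
    Sigma = F_plus * S_p + F_cross * S_x
    alpha = np.sign(Gamma) * np.sqrt(Gamma**2 + Sigma**2)
    phi = np.arctan(-Sigma / Gamma)
    return alpha, phi
\end{verbatim}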
We impose the effects of GW emission on the above strain by
specifying how $e_t$ and $\omega = 2\, \pi\, F$,
$F$ being the orbital frequency, vary in time.
In Ref.~\cite{YABW}, the temporal evolutions of $\omega$ and $e_t$ are
governed by the following Newtonian (or quadrupolar) equations,
adapted from Refs.~\cite{PM63,Peters64,JS92}:
\begin{widetext}
\begin{subequations} \label{4}
\begin{align}
\frac{d \omega}{dt} &=
\frac { (G\,m\ \omega)^{5/3}\,\omega^{2}\,\eta}
{5\,c^5\, (1 -e_t^2)^{7/2}}
\biggl \{ 96
+292\,{{e_t}}^{2}
+37\,{{e_t}}^{4} \biggr \} , \label{4.b}\\
\frac{d e_t}{dt} &=
-\frac { (G\,m\, \omega)^{5/3}\, \omega \,\eta \, e_t }
{ 15\,c^5\, (1 -e_t^2)^{5/2}}
\biggl \{ 304 + 121\,{{e_t}}^{2} \biggr \} \,. \label{4.c}
\end{align}
\end{subequations}
\end{widetext}
It is customary to solve these two coupled differential equations
numerically to obtain $\omega(t)$ and $e_t(t)$ and hence
temporally evolving $h(t)$.
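As an illustration of this customary numerical treatment, the following minimal \texttt{Python} sketch integrates Eqs.~(\ref{4}) with \texttt{scipy}; the masses, frequencies and the stopping criterion are sample values of our choosing:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

GMSUN_S = 4.925490947e-6          # G Msun / c^3 in seconds

def rhs(t, y, m, eta):
    # Quadrupolar evolution equations (4); m in solar masses
    omega, et = y
    pre = eta * (GMSUN_S * m * omega) ** (5.0 / 3.0) / 5.0
    dom = pre * omega**2 * (96 + 292 * et**2 + 37 * et**4) \
        / (1 - et**2) ** 3.5
    det = -pre / 3.0 * omega * et * (304 + 121 * et**2) \
        / (1 - et**2) ** 2.5
    return [dom, det]

stop = lambda t, y, *args: y[0] - 2.0 * np.pi * 200.0
stop.terminal = True              # halt once F reaches 200 Hz
y0 = [2.0 * np.pi * 10.0, 0.3]    # F_0 = 10 Hz, e_0 = 0.3
sol = solve_ivp(rhs, [0.0, 1.0e3], y0, args=(20.0, 0.25),
                events=stop, rtol=1e-10, dense_output=True)
\end{verbatim}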
Interestingly, earlier efforts provided an analytic route to the temporal evolution of $\omega(t)$ and $e_t(t)$ that mainly involves the use of hypergeometric functions \cite{MRLY18,MKFV,Pierro02,Pierro01}.
However, it is possible to obtain an analytic frequency domain counterpart of the above $h(t)$, as demonstrated in Refs.~\cite{KKS95,YABW}.
This traditional approach involves the method of
SPA, detailed in Ref.~\cite{BO_book}, to compute analytically
the Fourier Transform of $h(t)$.
This was essentially demonstrated at the leading order
in initial eccentricity $e_0$ in Ref.~\cite{KKS95} and later extended to
${\cal O}(e_0^8)$ in Ref.~\cite{YABW}.
Following Refs.~\cite{KKS95,YABW}, we write
\begin{widetext}
\begin{align}
\tilde{h}(f) = \mathcal{\tilde{A}} \, {\left(\frac{G m \pi f}{c^3}\right)}^{-7/6} \sum\limits_{j=1}^{10} \xi_{j}
{\left(\frac{j}{2}\right)}^{2/3} e^{-i(\pi/4 + \Psi_j)}\,, \label{5}
\end{align}
\end{widetext}
where the overall amplitude $\mathcal{\tilde{A}}$ and
the amplitudes of Fourier coefficients $ \xi_j $
are given by
\begin{subequations} \label{6}
\begin{align}
\mathcal{\tilde{A}} &= - {\left(\frac{5 \eta \pi}{384}\right)}^{1/2} \frac{G^2 m^2}{c^5 D_L} , \label{6.b}\\
\xi_j &= \frac{\left(1-e_t^2\right)^{7/4}}{{\left(1+\frac{73}{24}e_t^2+\frac{37}{96}e_t^4\right)}^{1/2}} \alpha_{j} e^{-i \phi_j(f/j) } . \label{6.c}
\end{align}
\end{subequations}
In the approach of stationary phase approximation, the crucial Fourier phase
is given by
\begin{align}
\Psi_j [F(t_0)] = 2 \pi \int_{}^{F(t_0)} \tau' \left( j - \frac{f}{F'} \right) \,d{F'}\,, \label{7}
\end{align}
where $\tau$ stands for $F / \dot {F}$.
Note that one needs to evaluate
the above integrals at appropriate
stationary points $t_0$, defined by $F(t_0) = f/j$.
To obtain a fully analytic ready-to-use expression for $\tilde{h}(f)$, we need to follow a few additional steps.
Clearly, we must specify the frequency evolution of $e_t$ with the help of the above Eqs.~(\ref{4}).
The structure of these equations for $\dot \omega$ and $\dot e_t$ allows us to write
$ d \omega/d e_t = \omega\, \kappa_N (e_t) $
and it turns out that $\kappa_N $ depends only on $e_t$.
This allows us to integrate analytically the resulting
$ d \omega / \omega = \kappa_N (e_t)\, d e_t$ equation.
The resulting expression can be written symbolically as
$ \omega / \omega_0 = \kappa' (e_t, e_0)$ where $e_0$ is the
value of $e_t$ at the initial $\omega$ value, namely $\omega_0$
(see Eq.~(62) in Ref.~\cite{DGI} for the explicit form for
$\kappa' (e_t, e_0)$).
Interestingly, one may invert such an expression in the limit
$e_t \ll 1$ to obtain
$e_t$ in terms of $e_0, \omega$ and $\omega_0$ and
it reads
\begin{align}
e_t &\sim e_0 \chi ^{-19/18} + \mathcal{O}(e_0^3) , \label{7.c}
\end{align}
where $\chi$ is defined as $ \omega/ \omega_0 = F/F_0$.
We note that the above result was first obtained
in Ref.~\cite{KKS95}, which led its authors to introduce the idea
of an {\it asymptotic eccentricity invariant}.
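This invariant is easy to probe numerically. The following self-contained \texttt{Python} sketch repeats the quadrupolar integration for a small initial eccentricity and monitors the residual of Eq.~(\ref{7.c}), which should remain of ${\cal O}(e_0^3)$; all numbers are illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

GMSUN_S = 4.925490947e-6

def rhs(t, y, m=20.0, eta=0.25):
    omega, et = y
    pre = eta * (GMSUN_S * m * omega) ** (5.0 / 3.0) / 5.0
    return [pre * omega**2 * (96 + 292*et**2 + 37*et**4)
            / (1 - et**2) ** 3.5,
            -pre / 3.0 * omega * et * (304 + 121*et**2)
            / (1 - et**2) ** 2.5]

stop = lambda t, y: y[0] - 2.0 * np.pi * 100.0
stop.terminal = True
sol = solve_ivp(rhs, [0.0, 1.0e3], [2.0 * np.pi * 10.0, 0.1],
                events=stop, rtol=1e-11)
omega, et = sol.y
chi = omega / omega[0]
print(np.max(np.abs(et - et[0] * chi ** (-19.0 / 18.0))))
# the residual is expected to be of order e_0**3
\end{verbatim}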
This relation allows us to write
$\tau $ in terms of
$\omega, \omega_0$ and $e_0$ as
\begin{widetext}
\begin{align}
\tau \sim \frac{5~}{96~\eta~x^4}\left(\frac{G~m}{c^3}\right) \left[ 1-\frac{157 e_0^2}{24 }\chi ^{-19/9} + \mathcal{O}(e_0^4)\right] . \label{7.d}
\end{align}
\end{widetext}
It is now straightforward to compute analytically the indefinite integral
for $\Psi_j$, namely
\begin{align}
2 \pi \int \tau'\left( j - \frac{f}{F'} \right) \,d{F'}
\end{align}
that appears
in Eq.~(\ref{7}) for $\tilde{h}(f)$.
This leads to the following
expression for $\Psi_j$, accurate to $ {\cal O}(e_0^2)$ corrections:
\begin{widetext}
\begin{align}
\Psi_j &\sim j \phi_c - 2\pi f t_c
- \frac{3}{128 \eta }{ \left( \frac{G m \pi f }{c^3} \right) }^{-5/3} \left(\frac{j}{2}\right)^{8/3}
\left[1-\frac{2355 e_0^2}{1462 }\chi ^{-19/9} + \mathcal{O}(e_0^4) \right ]\,, \label{8}
\end{align}
\end{widetext}
where $\phi_c$ and $t_c$ are the orbital phase at coalescence and the time of coalescence, respectively.
Note that $\chi$ now stands for $f/f_0$
due to the use of the stationary phase condition.
Additionally, we have re-scaled $F_0 \rightarrow f_0/j$
to ensure
that $e_t(f_0) = e_0$ while employing the above expression for $e_t$, given by Eq.~(\ref{7.c}).
Indeed, our expression is consistent with Eq.~(4.28) of Ref.~\cite{YABW} that employs the
chirp mass to characterize the binary.
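A direct transcription of the above $\Psi_j$ into \texttt{Python} reads as follows; the function name and the reference frequency argument \texttt{f0}, at which $e_t(f_0)=e_0$, are our own conventions:
\begin{verbatim}
import numpy as np

GMSUN_S = 4.925490947e-6          # G Msun / c^3 in seconds

def psi_j_newt(f, j, m, eta, e0, f0, tc=0.0, phic=0.0):
    # Leading-order Fourier phase of Eq. (8); m in solar masses
    chi = f / f0
    v5 = (GMSUN_S * m * np.pi * f) ** (5.0 / 3.0)
    return (j * phic - 2.0 * np.pi * f * tc
            - 3.0 / (128.0 * eta) / v5 * (j / 2.0) ** (8.0 / 3.0)
            * (1.0 - 2355.0 / 1462.0 * e0**2
               * chi ** (-19.0 / 9.0)))
\end{verbatim}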
A number of extensions to the above result are available in the literature.
In fact, Ref.~\cite{YABW} computed the
higher order corrections to $e_t$
in terms of $e_0$ up to
${\cal O}( {e_0^7})$
and extended $\Psi_j$ to ${\cal O}( {e_0^8})$.
Its PN extension, available in
Ref.~\cite{THG}, provided
2PN corrections for $\Psi_j$ that incorporated
eccentricity corrections, accurate to
${\cal O}( {e_0^6})$ at every PN
order while Ref.~\cite{MFAM16} computed 3PN-accurate $\Psi_j$
that included leading order $e_0$ contributions.
A crucial ingredient to such PN extensions is the derivation of PN-accurate
$e_t$ expression in terms of $e_0, \chi$ and $x$.
In what follows, we summarize the steps that are required to obtain
1PN-accurate expression for $e_t$ (see Ref.~\cite{THG} for details).
The starting point of such a derivation is the pair of 1PN-accurate differential equations
for $\omega$ and $e_t$, obtainable from Eqs.~(3.12) in Ref.~\cite{THG}.
With these inputs, it is fairly straightforward to obtain the following 1PN accurate expression
for $ d \omega/ \omega $
that includes only the leading order $e_t$ contributions as
\begin{widetext}
\begin{align}
d \omega/ \omega
= \biggl \{ -\frac{18}{19 e_t} -\frac{3}{10108 e_t} \left(-2833+5516 \eta\right) \left(\frac{G m \omega}{c^3}\right)^{2/3}
\biggr \}\, de_t
\,. \label{12}
\end{align}
\end{widetext}
The fact that the $\omega$ term appears only at the 1PN order
allows us to use the earlier derived Newtonian accurate
$\omega = \omega_0\, \left ( e_0/e_t \right )^{18/19}$ relation to
replace $\omega$ on the right hand side of the above equation.
This leads to
\begin{widetext}
\begin{align}
d \omega/ \omega \sim \left\lbrace-\frac{18}{19 e_t} - \frac{3}{10108} \left(\frac{e_0^{12/19}}{e_t^{31/19}}\right)
\left(-2833+5516\eta\right)\, x_0
\right\rbrace de_t\,, \label{13}
\end{align}
\end{widetext}
where $ x_0= \left ( G\, m\, \omega_0/c^3 \right )^{2/3}$. We can integrate this equation to obtain $ \ln \omega - \ln \omega_0$ in terms of $e_t,e_0$ and $\omega_0$.
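Explicitly, performing this integration between $(e_0,\,\omega_0)$ and $(e_t,\,\omega)$ yields
\begin{align}
\ln \left( \frac{\omega}{\omega_0} \right) \sim \frac{18}{19}\, \ln \left( \frac{e_0}{e_t} \right) - \left( \frac{2833-5516\,\eta}{2128} \right) x_0 \left[ \left( \frac{e_0}{e_t} \right)^{12/19} - 1 \right]\,. \nonumber
\end{align}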
The exponential of the resulting expression and its
bivariate expansion in terms of $x_0$ and $e_t$ result in
\begin{widetext}
\begin{align}
\omega & \sim \left\lbrace \left(\frac{e_0}{e_t}\right)^{18/19} + x_0\left(\frac{2833-5516\,\eta}{2128} \right) \left[ \left( \frac{e_0}{e_t}\right)^{18/19} - \left( \frac{e_0}{e_t}\right)^{30/19} \right] \right\rbrace \omega_0 \,.
\label{14}
\end{align}
\end{widetext}
We invert the above equation to obtain $e_t$ in terms of $e_0$ and $x_0$ after invoking the
Newtonian accurate relation $e_t = e_0\, \chi^{-19/18} $ to
replace the $e_t$ terms associated with the $x_0$ term.
This inversion and the associated bivariate
expansion in terms of $e_0$ and $x_0$ require that $e_0 \ll 1$ and $x_0 \ll 1$.
The resulting $e_t$ expression reads
\begin{widetext}
\begin{align}
&e_t \sim e_0
\left\lbrace \chi ^{-19/18}+x_0 \left(\frac{2833}{2016}-\frac{197 }{72}\eta\right) \left(- \chi ^{-7/18} + \chi
^{-19/18} \right)
\right\rbrace \,. \label{15}
\end{align}
\end{widetext}
To obtain $e_t$ as a bivariate expansion in terms of
the regular PN parameter $x$ and $e_0$, we employ the fact that $ x/x_0 = \chi^{2/3}$ and this results in
\begin{widetext}
\begin{align}
&e_t \sim e_0 \left\lbrace \chi^{-19/18} + x\left( \frac{2833}{2016}-\frac{197 }{72}\eta \right)\left( -\chi^{-19/18} +\chi^{-31/18} \right) \right\rbrace . \label{16}
\end{align}
\end{widetext}
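In practice, Eq.~(\ref{16}) amounts to a one-line function; the following \texttt{Python} sketch, with names of our choosing, evaluates it:
\begin{verbatim}
def et_1pn(e0, chi, x, eta):
    # 1PN-accurate e_t(e_0, chi, x) of Eq. (16); chi = omega/omega_0
    c1 = 2833.0 / 2016.0 - 197.0 * eta / 72.0
    return e0 * (chi ** (-19.0 / 18.0)
                 + x * c1 * (-chi ** (-19.0 / 18.0)
                             + chi ** (-31.0 / 18.0)))
\end{verbatim}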
We are now in a position to obtain
1PN-accurate $\Psi_j$ expression that includes
${\cal O}(e_0^2)$ contributions both at the Newtonian and 1PN orders
with the help of 1PN-accurate $\tau = \omega/\dot \omega$
expression that is accurate to ${\cal O}(e_t^2)$ terms.
A straightforward computation leads to the desired $\Psi_j$ expression which reads
\begin{widetext}
\begin{align}
&\Psi_j \sim j \phi_c- 2\pi f t_c - \left(\frac{3 j }{256 \eta } \right) x^{-5/2} \left\lbrace 1- \frac{2355 e_0^2 }{1462} \chi^{-19/9} + x \left[ \frac{3715}{756} +\frac{55
}{9}\eta + \left( \left[ -\frac{2045665 }{348096}- \frac{128365 }{12432}\eta\right] \chi^{-19/9} \right.\right.\right.\nonumber\\
&\qquad \left.\left.\left. {} + \left[ -\frac{2223905}{491232 }+\frac{154645 }{17544 }\eta\right]\chi^{-25/9} \right)e_0^2 \right]\right\rbrace \,, \label{19}
\end{align}
\end{widetext}
where the quantities $x$ and $\chi$ will have to be evaluated
at the stationary point (see Ref.~\cite{THG} for details).
With the above equation, we explicitly listed our approach to compute PN-accurate
$ \Psi_j$ that incorporates $e_0$ corrections at each PN order.
In the present paper, we extend these computations to 3PN order
while incorporating ${\cal O}(e_0^6)$ contributions at each PN order.
These higher order $e_0$ corrections are included as we desire to model
GWs from moderately eccentric compact binary inspirals.
In the next section, we provide crucial inputs that will be required to
compute analytic 1PN-accurate amplitude corrected $\tilde{h}(f)$ with 3PN-accurate Fourier phases.
\subsection{Analytic PN-accurate amplitude corrected time domain eccentric GW templates} \label{sec:PostCirc_2}
The previous section showed that we require analytic expressions for the two GW polarization states
as a sum over {\it harmonics} to construct ready-to-use analytic $\tilde{h}(f)$.
This led us to adapt Eqs.~(44) and (45) in Ref.~\cite{YS}, which
provide analytic 1PN-accurate amplitude corrected $h_{\times,+}(t)$
and additionally include the
effects of periastron advance on individual {\it harmonics}.
This may be seen by a close inspection of appropriate terms in Eqs.~(44),(45),(46) and (47) of Ref.~\cite{YS}.
To describe in detail how these improvements
in GW polarization states
change the harmonic structure of $h(t)$, we restrict
our attention to quadrupolar order contributions to $h_{\times,+}(t)$, given in Eqs.~(44) and (45) of Ref.~\cite{YS}.
The explicit expressions for such `Newtonian' contributions to $h_{\times,+}(t)$ that include $\mathcal{O}(e_t^4)$ corrections read
\begin{widetext}
\begin{align}
h_{\times}^{0}=& \,\frac{G\,m\,\eta}{c^2\, D_L} \,x\, \bigg\{\cos(\phi+\phi')\bigg[\bigg(-3 e_t+\frac{13 e_t^3}{8}\bigg) c_i s_{2 \beta }\bigg] +\sin(\phi+\phi')\bigg[\bigg(3 e_t-\frac{13 e_t^3}{8}\bigg) c_i c_{2 \beta }\bigg] +\cos(2\phi)\bigg[\bigg(4-10 e_t^2 \nonumber \\ & +\frac{23 e_t^4}{4}\bigg) c_i s_{2 \beta }\bigg]+ \sin(2\phi)\bigg[\bigg(-4+10 e_t^2-\frac{23 e_t^4}{4}\bigg) c_i c_{2 \beta }\bigg]+\cos(3\phi-\phi')\bigg[\bigg(9 e_t-\frac{171 e_t^3}{8}\bigg) c_i s_{2 \beta }\bigg] \nonumber \\ & +\sin(3\phi-\phi')\bigg[\bigg(-9 e_t+\frac{171 e_t^3}{8}\bigg) c_i c_{2 \beta }\bigg]+\cos(4\phi-2\phi')\bigg[\bigg(16 e_t^2-40 e_t^4\bigg) c_i s_{2 \beta }\bigg]+\sin(4\phi-2\phi')\bigg[\bigg(-16 e_t^2 \nonumber \\ & +40 e_t^4\bigg) c_i c_{2 \beta }\bigg]+\cos(5\phi-3\phi')\bigg[\frac{625}{24} e_t^3 c_i s_{2 \beta }\bigg]+\sin(5\phi-3\phi')\bigg[\frac{-625}{24} e_t^3 c_i c_{2 \beta }\bigg]+\cos(6\phi-4\phi')\bigg[\frac{81}{2} e_t^4 c_i s_{2 \beta }\bigg] \nonumber \\ & +\sin(6\phi-4\phi')\bigg[\frac{-81}{2} e_t^4 c_i c_{2 \beta }\bigg]+\cos(\phi-3\phi')\bigg[\frac{-7}{24} e_t^3 c_i s_{2 \beta }\bigg]+\sin(\phi-3\phi')\bigg[\frac{-7}{24} e_t^3 c_i c_{2 \beta }\bigg] \nonumber \\ & +\cos(2\phi-4\phi')\bigg[-\frac{1}{4} e_t^4 c_i s_{2 \beta }\bigg]+\sin(2\phi-4\phi')\bigg[-\frac{1}{4} e_t^4 c_i c_{2 \beta }\bigg] \bigg\} \label{hx0} \,, \\ \nonumber \\
h_+^{0}=& \, \frac{G\,m\,\eta}{c^2\, D_L} \,x\,\bigg\{\cos(\phi+\phi')\bigg[\bigg(\frac{3 e_t}{2}-\frac{13 e_t^3}{16}\bigg) \left(1+c_i^2\right) c_{2 \beta }\bigg]+\sin(\phi+\phi')\bigg[\bigg(\frac{3 e_t}{2}-\frac{13 e_t^3}{16}\bigg) \left(1+c_i^2\right) s_{2 \beta }\bigg] \nonumber \\ & +\cos(2\phi)\bigg[\bigg(-2+5 e_t^2-\frac{23 e_t^4}{8}\bigg) \left(1+c_i^2\right) c_{2 \beta }\bigg]+\sin(2\phi)\bigg[\bigg(-2+5 e_t^2-\frac{23 e_t^4}{8}\bigg) \left(1+c_i^2\right) s_{2 \beta }\bigg] \nonumber \\ & +\cos(3\phi-\phi')\bigg[\bigg(-\frac{9 e_t}{2}+\frac{171 e_t^3}{16}\bigg) \left(1+c_i^2\right) c_{2 \beta }\bigg]+\sin(3\phi-\phi')\bigg[\bigg(-\frac{9 e_t}{2}+\frac{171 e_t^3}{16}\bigg) \left(1+c_i^2\right) s_{2 \beta }\bigg] \nonumber \\ & +\cos(4\phi-2\phi')\bigg[\left(-8 e_t^2+20 e_t^4\right) \left(1+c_i^2\right) c_{2 \beta }\bigg]+\sin(4\phi-2\phi')\bigg[\left(-8 e_t^2+20 e_t^4\right) \left(1+c_i^2\right) s_{2 \beta }\bigg] \nonumber \\ & +\cos(5\phi-3\phi')\bigg[-\frac{625}{48} e_t^3 \left(1+c_i^2\right) c_{2 \beta }\bigg]+\sin(5\phi-3\phi')\bigg[-\frac{625}{48} e_t^3 \left(1+c_i^2\right) s_{2 \beta }\bigg]+\cos(6\phi-4\phi')\bigg[-\frac{81}{4} e_t^4 \left(1+c_i^2\right) c_{2 \beta }\bigg] \nonumber \\ & +\sin(6\phi-4\phi')\bigg[-\frac{81}{4} e_t^4 \left(1+c_i^2\right) s_{2 \beta }\bigg]+\cos(\phi-\phi')\bigg[\bigg(e_t-\frac{e_t^3}{8}\bigg) s_i^2\bigg]+\cos(2\phi-2\phi')\bigg[\bigg(e_t^2-\frac{e_t^4}{3}\bigg) s_i^2\bigg] \nonumber \\ & +\cos(3\phi-3\phi')\bigg[\frac{9}{8} e_t^3 s_i^2\bigg]+\cos(4\phi-4\phi')\bigg[\frac{4}{3} e_t^4 s_i^2\bigg]+\cos(\phi-3\phi')\bigg[\frac{7}{48} e_t^3 \left(1+c_i^2\right) c_{2 \beta }\bigg] \nonumber \\ & +\sin(\phi-3\phi')\bigg[-\frac{7}{48} e_t^3 \left(1+c_i^2\right) s_{2 \beta }\bigg]+\cos(2\phi-4\phi')\bigg[-\frac{1}{8} e_t^4 \left(1+c_i^2\right) c_{2 \beta }\bigg]+\sin(2\phi-4\phi')\bigg[-\frac{1}{8} e_t^4 \left(1+c_i^2\right) s_{2 \beta }\bigg]\bigg\}\,. \label{hp0}
\end{align}
\end{widetext}
where $\phi= ( 1+k)\,l $, $ \phi' = k\,l$ and
$k$ provides the rate of periastron advance per orbit \cite{DGI}.
Further, we let $c_i=\cos \iota$, $s_i=\sin \iota$, $c_{2\beta}=\cos 2\beta$ and $s_{2\beta}=\sin 2\beta$.
Note that the crucial ingredients to obtain the above analytic expressions
include developing approaches to solve the PN-accurate Kepler equation and
adapting them to derive PN-accurate relations connecting the true
and eccentric anomalies, detailed in Ref.~\cite{YS}.
A close comparison of the above two equations with Eqs.~(3.3) and (3.4) of Ref.~\cite{YABW} reveals that the arguments of the cosine and sine functions
in above expressions involve $\phi'= k\,l$ and its multiples
in addition to the usual orbital phase $\phi$ and its multiples.
These additional $\phi'$ contributions
are clearly due to the periastron advance.
It turns out that these additional angular contributions are
sufficient to reproduce the numerically inferred side bands that appear in the power spectrum of eccentric binaries due to the presence of $k$ \cite{MG07}.
This is why we explicitly included $e_t^4$ contributions to the above $h_{\times,+}$
expressions as these contributions are required to reveal
the underlying side band structure of waveforms due to the influence of periastron advance.
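The origin of these side bands is transparent at the level of instantaneous frequencies. With $\dot{\phi}=(1+k)\,N$ and $\dot{\phi}'=k\,N$, where $N=\dot{l}$ denotes the mean motion, the phase of a generic $(j,\pm n)$ term evolves as
\begin{align}
\frac{d}{dt}\left[\, j\,\phi-(j\pm n)\,\phi' \,\right] = j\,N\,(1+k)-(j\pm n)\,k\,N = N \left( j \mp n\,k \right)\,, \nonumber
\end{align}
so that each harmonic at $j\,N$ acquires companions displaced by $\mp\, n\,k\,N$.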
We re-write the above expressions for $h_{\times,+}^{0}$ in a more compact form to explicitly show how various harmonics are affected by the
advance of periastron. The resulting expressions read
\begin{widetext}
\begin{align}
h_{+,\times}^0(t)=& \,\bigg\{\sum_{j=1}^{6}\bigg[C_{+,\times}^{j,-2}(0) \,\cos(j\,\phi-(j-2)\phi') + S_{+,\times}^{j,-2}(0) \,\sin(j\,\phi-(j-2)\phi')\bigg] + \sum_{j=1}^4 \bigg[ C_{+,\times}^{j,0}(0) \,\cos(j\,\phi-j\phi')\nonumber \\ & + S_{+,\times}^{j,0}(0) \,\sin(j\,\phi-j\phi')\bigg]+ \sum_{j=1}^{2} \bigg[C_{+,\times}^{j,+2}(0) \,\cos(j\,\phi-(j+2)\phi') + S_{+,\times}^{j,+2}(0) \,\sin(j\,\phi-(j+2)\phi')\bigg]\bigg\} \,, \label{hpx0}
\end{align}
\end{widetext}
where we denoted the coefficient of $\cos (j\,\phi-(j\pm n)\phi')$ harmonic at the quadrupolar (Newtonian) order
for the $+$ polarization by $C_{+}^{j,\pm n}(0)$
while the coefficient of $\sin (j\,\phi-(j\pm n)\phi')$ is indicated by
$S_{+}^{j,\pm n}(0)$.
We adopt a rather heavy notation as it is amenable to higher PN order contributions which will be tackled below.
In this convention, we represent
the coefficient of $\cos(j\,\phi-(j\pm n)\phi')$ that appears in the
1PN contributions to $\times $ polarization state by
$C_{\times}^{j,\pm n}(1)$. It should be obvious that $j$ stands for the {\it harmonic} variable while $n$ provides a measure of the shift that each harmonic experiences due
to periastron advance.
A close comparison of Eqs.~(\ref{hx0}) and (\ref{hp0}) reveals that these coefficients are functions of $\iota, \beta$
and contain powers of $e_t$.
Moreover, the arguments of cosine and sine functions clearly show that the eccentricity induced higher {\it harmonics} are not mere multiples of $\omega=N(1+k)$, where $N$ is the PN-accurate mean motion. Clearly, this is due to the presence of non-vanishing $\phi'$ contributions due to periastron advance.
Interestingly, the plus polarization state does provide {\it harmonics} which are
integer multiples of $N$.
It is not difficult to show that these Newtonian like terms arise from
specific cosine functions with arguments $j\phi-j\phi' $, as evident
from Eqs.~(\ref{hp0}).
Further, it is possible to show that these contributions arise from $e_t\, \cos u\,s_i^2/( 1- e_t\,\cos u)$
contributions to $H_{+}^{0}$, given by Eq.~(F2a) in Ref.~\cite{YS} and therefore not influenced by the periastron advance.
Interestingly, similar
conclusions were obtained in Ref.~\cite{MG07}.
With the above inputs, we write the time-domain GW detector response function for eccentric inspirals as
\begin{widetext}
\begin{align}
h(t)=& \, \frac{G\,m\,\eta}{c^2\, D_L} \,x\, \bigg\{ \sum_{j=1}^6 \bigg[ \Gamma_{j,-2}^{(0)} \cos(j\phi-(j-2)\phi')+\Sigma_{j,-2}^{(0)} \sin(j\phi-(j-2)\phi') \bigg] + \sum_{j=1}^4 \bigg[ \Gamma_{j,0}^{(0)} \cos(j\phi-j\phi')\nonumber \\ & +\Sigma_{j,0}^{(0)} \sin(j\phi-j\phi') \bigg] + \sum_{j=1}^2 \bigg[ \Gamma_{j,+2}^{(0)} \cos(j\phi-(j+2)\phi')+\Sigma_{j,+2}^{(0)} \sin(j\phi-(j+2)\phi') \bigg] \bigg\}\,, \label{hpx0_1}
\end{align}
\end{widetext}
where the amplitudes of the cosine and sine functions are denoted by rather complicated symbols
$\Gamma_{j,\pm n}^{(0)} $ and $\Sigma_{j,\pm n}^{(0)}$. The definition of
$h(t)=\,F_+\,h_+(t)\,+\,F_\times\,h_\times(t)$ ensures that
$\Gamma_{j,\pm n}^{(0)}=F_+\,C_{+}^{j,\pm n}(0) + F_\times\,C_{\times}^{j,\pm n}(0)$
while $\Sigma_{j,\pm n}^{(0)}=F_+\,S_{+}^{j,\pm n}(0) + F_\times\,S_{\times}^{j,\pm n}(0)$. We list in Appendix~\ref{appendixB} the
lengthy expressions for these quantities in terms of $\iota, \beta$ and eccentricity contributions, accurate to ${\cal O}(e_t^4)$.
We display up to $\mathcal{O}(e_t^4)$ contributions to demonstrate the full harmonic structure of the quadrupolar order GW polarization states.
It turns out that the $\Sigma_{j,0}^{(0)}$ contributions
vanish by construction. This is mainly because
the un-shifted harmonics appear only with the cosine terms present in the $+$ polarization state.
Invoking familiar trigonometric identities, we simplify the above equation and obtain
\begin{widetext}
\begin{align}
\label{Eq_h_t_0f}
h(t)=& \,\frac{G\,m\,\eta}{c^2\, D_L} \,x\,\bigg\{ \sum_{j=1}^6 \alpha_{j,-2}^{(0)} \cos(j\phi-(j-2)\phi'+\bar{\phi}_{j,-2}^{(0)}) + \sum_{j=1}^4 \alpha_{j,0}^{(0)} \cos(j\phi-j\phi'+\bar{\phi}_{j,0}^{(0)}) \nonumber \\ & + \sum_{j=1}^2 \alpha_{j,+2}^{(0)} \cos(j\phi-(j+2)\phi'+\bar{\phi}_{j,+2}^{(0)})\bigg\} \,,
\end{align}
\end{widetext}
where we introduce two new multi-index symbols $\alpha_{j,\pm n}^{(0)}$ and $\bar{\phi}_{j,\pm n}^{(0)}$ to ensure that
detector strain can be written in terms of only cosine functions.
Influenced by Ref.~\cite{YABW}, these symbols are defined as
$\alpha_{j,\pm n}^{(0)} = {\rm sign}\left(\Gamma_{j,\pm n}^{(0)}\right)\sqrt[]{\left(\Gamma_{j,\pm n}^{(0)}\right)^2+\left(\Sigma_{j,\pm n}^{(0)}\right)^2}$
and $\bar{\phi}_{j,\pm n}^{(0)} = \tan^{-1}\left(- \frac{\Sigma_{j,\pm n}^{(0)}}{\Gamma_{j,\pm n}^{(0)}}\right)$.
We do not list explicit expressions for these quantities that are accurate to $\mathcal{O}(e_t^4)$ in eccentricity corrections
as they can be easily obtained from our Eqs.~(\ref{gamma_0}) and (\ref{sigma_0}).
A close inspection of the above equations reveals that they provide the
GW response function for compact binaries moving along precessing eccentric orbits.
To obtain temporally evolving $h(t)$ associated with
compact binaries inspiraling along precessing eccentric orbits,
we need to specify how $\phi, \phi', \omega $ and $e_t$ vary in time due to GW emission. We adapt the phasing formalism, detailed
in Refs.~\cite{DGI,THG}, to provide differential equations for these
variables. And, for the time being, we will concentrate on the secular evolution of these
variables. In other words, we will neglect GW induced quasi-periodic variations to the orbital elements and angles, detailed in Ref.~\cite{DGI}. The 3PN-accurate secular evolution equations for
$\phi$, $\phi'$, $\omega$ and $e_t$ in the modified harmonic gauge,
accurate to ${\cal O}(e_t^6)$, are given by
\begin{widetext}
\begin{align}
\frac{d \phi}{dt} =&\, \omega =\, x^{3/2}\frac{c^3}{G\,m}, \, \label{Tevolve_1} \\ \nonumber \\
\frac{d \phi'}{dt} =&\, \omega \frac{k}{1+k} =\, \omega \bigg\{3\,x \bigg[1+ e_t^2+ e_t^4+e_t^6\bigg]+x^2\bigg[\frac{9}{2}-7 \eta +\bigg(\frac{87}{4}-\frac{41}{2}\eta\bigg) e_t^2+(39-34 \eta ) e_t^4+\bigg(\frac{225}{4}-\frac{95}{2}\eta\bigg) e_t^6\bigg] \nonumber \\ & +x^3\bigg[\frac{27}{2}+\bigg(-\frac{481}{4}+\frac{123 \pi ^2}{32}\bigg) \eta +7 \eta ^2+\bigg(\frac{519}{4}+\bigg(-\frac{2037}{4}+\frac{1599 \pi ^2}{128}\bigg)\eta +61 \eta ^2\bigg) e_t^2+\bigg(\frac{2811}{8}\nonumber \\ & +\bigg(-1174+\frac{3321 \pi ^2}{128}\bigg) \eta +\frac{1361}{8}\eta^2\bigg)e_t^4+\bigg(\frac{10779}{16}+\bigg(-\frac{16901}{8}+\frac{2829 \pi ^2}{64}\bigg) \eta +\frac{2675}{8}\eta^2\bigg) e_t^6\bigg]\bigg\},\, \label{Tevolve_2} \\ \nonumber \\
\frac{d \omega}{dt} =&\, \frac{96\,c^6\,\eta}{5\,G^2\,m^2}x^{11/2}\bigg\{1+\frac{157}{24}e_t^2+\frac{605}{32}e_t^4+\frac{3815}{96}e_t^6+x\bigg[-\frac{743}{336}-\frac{11}{4}\eta+\bigg(\frac{713}{112}-\frac{673}{16}\eta\bigg) e_t^2+\bigg(\frac{52333}{672}-\frac{12415}{64}\eta\bigg)e_t^4 \nonumber \\ & +\bigg(\frac{13823}{48}-\frac{107765}{192}\eta\bigg) e_t^6\bigg]+ \dot{\omega}^{\tiny{1.5PN}} + \dot{\omega}^{\tiny{2PN}} + \dot{\omega}^{\tiny{2.5PN}} + \dot{\omega}^{\tiny{3PN}} \bigg\},
\, \label{Tevolve_3} \\ \nonumber \\
\frac{d e_t}{dt} =& \, -\frac{304\,c^3\,\eta\,e_t}{15\,G\,m}x^4\bigg\{1+\frac{881}{304}e_t^2+\frac{3265}{608}e_t^4+\frac{20195}{2432}e_t^6+x\bigg[-\frac{2817}{2128}-\frac{1021}{228}\eta+\bigg(\frac{40115}{4256}-\frac{51847}{1824}\eta\bigg) e_t^2 \nonumber \\ & +\bigg(\frac{87749}{2128} -\frac{298115}{3648}\eta\bigg) e_t^4+\bigg(\frac{121833}{1216}-\frac{2501905}{14592}\eta\bigg) e_t^6\bigg]+ \dot{e_t}^{\tiny{1.5PN}} + \dot{e_t}^{\tiny{2PN}} + \dot{e_t}^{\tiny{2.5PN}} + \dot{e_t}^{\tiny{3PN}}\bigg\}.
\label{Tevolve_4} \\ \nonumber
\end{align}
\end{widetext}
The explicit $1.5,\,2,\,2.5$ and $3$PN order contributions to $d \omega/dt$ and $ d e_t/dt$
that incorporate all the
$\mathcal{O}(e_t^6)$ corrections are provided in Appendix~\ref{appendixC}.
The differential equations for $d \omega/dt$ and $ d e_t/dt$ are extracted from expressions available in Refs.~\cite{ABIS,KBGJ} and
are in the modified harmonic gauge. These papers provided the above 3PN-accurate expressions as
a sum of certain `instantaneous' and `tail' contributions:
\begin{align*}
\frac{d \omega}{dt} &= \bigg( \frac{d \omega}{dt}\bigg)_{\rm inst} + \bigg( \frac{d \omega}{dt} \bigg)_{\rm tail}
\,,\\
\frac{d e_t}{dt} &= \bigg( \frac{d e_t}{dt} \bigg)_{\rm inst} + \bigg( \frac{d e_t}{dt}\bigg)_{\rm tail}\,.
\end{align*}
The 3PN-accurate instantaneous contributions depend only on the binary dynamics at the usual retarded time, while the hereditary contributions are sensitive to the binary dynamics at all epochs prior to the usual retarded time \cite{BDI}. The instantaneous contributions to $d \omega/dt$ are extracted from Eqs.~(6.14),(6.15a),(6.15),(C6) and (C7) of Ref.~\cite{ABIS} while for $d e_t/dt$ such contributions originate from Eqs.~(6.16),(6.19a),(6.19b),(C10) and (C11) in Ref.~\cite{ABIS}. It should be obvious that we have Taylor expanded these equations around $e_t=0$ to obtain eccentricity contributions
accurate to ${\cal O}(e_t^6)$. The hereditary contributions to $d \omega/dt$ and $d e_t/dt$ are adapted from Eqs.~(6.24c) and (6.26)
of Ref.~\cite{ABIS} and they depend on a number of eccentricity enhancement functions.
We employ such enhancement functions provided in Ref.~\cite{KBGJ} for our computations.
\newline
We now have all the inputs to
obtain the restricted time-domain $h(t)$
to model GWs from non-spinning compact binaries inspiraling along precessing
moderately eccentric orbits.
To obtain such time domain templates,
we numerically solve
the above listed differential equations for $\omega, e_t, \phi$ and $\phi'$
and impose their temporal evolution in the quadrupolar order GW response
function, given by Eq.~(\ref{Eq_h_t_0f}).
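The following minimal \texttt{Python} sketch illustrates this procedure with the displayed system truncated at 1PN order; in a complete implementation the appendix-level $1.5$PN to $3$PN pieces would be added to \texttt{dom}, \texttt{det} and to the $k/(1+k)$ factor, and all names and sample values below are ours:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

GMSUN_S = 4.925490947e-6            # G Msun / c^3 in seconds

def rhs(t, y, m, eta):
    # Secular system, Eqs. (Tevolve_1)-(Tevolve_4), at 1PN order
    phi, phip, omega, et = y
    Gm = GMSUN_S * m
    x = (Gm * omega) ** (2.0 / 3.0)
    e2, e4, e6 = et**2, et**4, et**6
    k_ratio = 3.0 * x * (1 + e2 + e4 + e6)   # leading k/(1+k)
    dom = (96.0 * eta / (5.0 * Gm**2)) * x**5.5 * (
        1 + 157.0/24.0*e2 + 605.0/32.0*e4 + 3815.0/96.0*e6
        + x * (-743.0/336.0 - 11.0/4.0*eta
               + (713.0/112.0 - 673.0/16.0*eta) * e2
               + (52333.0/672.0 - 12415.0/64.0*eta) * e4
               + (13823.0/48.0 - 107765.0/192.0*eta) * e6))
    det = (-304.0 * eta * et / (15.0 * Gm)) * x**4 * (
        1 + 881.0/304.0*e2 + 3265.0/608.0*e4 + 20195.0/2432.0*e6
        + x * (-2817.0/2128.0 - 1021.0/228.0*eta
               + (40115.0/4256.0 - 51847.0/1824.0*eta) * e2
               + (87749.0/2128.0 - 298115.0/3648.0*eta) * e4
               + (121833.0/1216.0 - 2501905.0/14592.0*eta) * e6))
    return [omega, omega * k_ratio, dom, det]

stop = lambda t, y, *args: y[2] - 2.0 * np.pi * 200.0
stop.terminal = True
y0 = [0.0, 0.0, 2.0 * np.pi * 10.0, 0.3]  # phi, phi', omega_0, e_0
sol = solve_ivp(rhs, [0.0, 1.0e3], y0, args=(20.0, 0.25),
                events=stop, rtol=1e-10)
\end{verbatim}
The resulting $\phi(t)$, $\phi'(t)$, $\omega(t)$ and $e_t(t)$ are then substituted into the amplitudes and phases of Eq.~(\ref{Eq_h_t_0f}).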
We now move on to describe how we extend the quadrupolar order GW response
function.
It should be obvious that we require a prescription to compute
analytically PN-accurate amplitude corrected GW polarization states
to improve the above listed quadrupolar order GW response
function.
Therefore, we adapt 1PN-accurate amplitude corrected and fully analytic expressions for $h_{\times,+}$, available in Ref.~\cite{YS},
to compute GW response function for eccentric inspirals that incorporates PN contributions even to its amplitudes.
We list below certain ingredients that will be crucial to write down analytic $h(t)$ that incorporates
1PN-accurate amplitude corrections to $h_{\times,+}$ while consistently keeping eccentricity contributions up to
${\cal O}(e_t^6)$.
We begin by displaying Eqs.~(44) and (45) of Ref.~\cite{YS} as a single sum which reads
\begin{widetext}
\begin{align} \label{hpc}
h_{+,\times}(t)=&\, \frac{G\,m\,\eta}{c^2\, D_L} \,x\, \bigg\{h^0_{+,\times}(t)+x^{0.5}\,h^{0.5}_{+,\times}(t)+x\,h^1_{+,\times}(t)\bigg\} \,.
\end{align}
Various PN order amplitude contributions take the following form
\begin{subequations}
\begin{align}
h_{+,\times}^0(t)=& \,\sum_{j=1}^{8}\bigg\{ C_{+,\times}^{j,-2}(0) \,\cos(j\,\phi-(j-2)\phi') + S_{+,\times}^{j,-2}(0) \,\sin(j\,\phi-(j-2)\phi')\bigg\}+ \sum_{j=1}^6 \bigg\{ C_{+,\times}^{j,0}(0) \,\cos(j\,\phi-j\phi')\nonumber \\ & + S_{+,\times}^{j,0}(0) \,\sin(j\,\phi-j\phi') \bigg\} + \sum_{j=1}^{4} \bigg\{C_{+,\times}^{j,+2}(0) \,\cos(j\,\phi-(j+2)\phi') + S_{+,\times}^{j,+2}(0) \,\sin(j\,\phi-(j+2)\phi')\bigg\}, \label{22}\\
h_{+,\times}^{0.5}(t)=& \, \delta\bigg\{\sum_{j=1}^{7}\bigg[ C_{+,\times}^{j,-1}(0.5) \,\cos(j\,\phi-(j-1)\phi') + S_{+,\times}^{j,-1}(0.5) \,\sin(j\,\phi-(j-1)\phi')\bigg] + \sum_{j=1}^{5} \bigg[ C_{+,\times}^{j,+1}(0.5) \,\cos(j\,\phi-(j+1)\phi') \nonumber \\ & + S_{+,\times}^{j,+1}(0.5) \,\sin(j\,\phi-(j+1)\phi')\bigg] + \sum_{j=1}^{9} \bigg[ C_{+,\times}^{j,-3}(0.5) \,\cos(j\,\phi-(j-3)\phi') + S_{+,\times}^{j,-3}(0.5) \,\sin(j\,\phi-(j-3)\phi') \bigg] \nonumber \\ & + \sum_{j=1}^{3} \bigg[ C_{+,\times}^{j,+3}(0.5) \,\cos(j\,\phi-(j+3)\phi') + S_{+,\times}^{j,+3}(0.5) \,\sin(j\,\phi-(j+3)\phi')\bigg]\bigg\}, \label{23} \\
h_{+,\times}^{1}(t)=& \, \sum_{j=1}^{8}\bigg\{C_{+,\times}^{j,-2}(1) \,\cos(j\,\phi-(j-2)\phi') + S_{+,\times}^{j,-2}(1) \,\sin(j\,\phi-(j-2)\phi')\bigg\}+ \sum_{j=1}^{4} \bigg\{C_{+,\times}^{j,+2}(1) \,\cos(j\,\phi-(j+2)\phi') \nonumber \\ & + S_{+,\times}^{j,+2}(1) \,\sin(j\,\phi-(j+2)\phi')\bigg\}+ \sum_{j=1}^{10} \bigg\{C_{+,\times}^{j,-4}(1) \,\cos(j\,\phi-(j-4)\phi') + S_{+,\times}^{j,-4}(1) \,\sin(j\,\phi-(j-4)\phi') \nonumber \bigg\} \\ & + \sum_{j=1}^{2} \bigg\{C_{+,\times}^{j,+4}(1) \,\cos(j\,\phi-(j+4)\phi') + S_{+,\times}^{j,+4}(1) \,\sin(j\,\phi-(j+4)\phi') \bigg\} + \sum_{j=1}^6 \bigg\{C_{+,\times}^{j,0}(1) \,\cos(j\,\phi-j\phi') \nonumber \\ & + S_{+,\times}^{j,0}(1) \,\sin(j\,\phi-j\phi')\bigg\}, \label{24}
\end{align}
\end{subequations}
\end{widetext}
where $\delta=(m_1-m_2)/(m_1+m_2)$ and we let
$m_1$ be the heavier of the two binary components. We do not list explicitly the very lengthy expressions for these amplitudes. However, they can be easily extracted from the attached
\texttt{Mathematica} notebook.
The derivation of the above lengthy expressions involves developing analytic approaches to solve the PN-accurate Kepler equation and PN-accurate relations connecting the true and eccentric anomalies, detailed in Ref.~\cite{YS}.
Indeed, we have verified that these expressions reduce to their circular
counterparts, provided in Ref.~\cite{BIWW_95}.
The associated GW detector strain for eccentric binaries is given by
\begin{widetext}
\begin{align}
h(t)=& \, \frac{G\,m\,\eta}{c^2\, D_L} \,x\, \bigg\{\bigg[ \sum_{j=1}^8 \bigg( \Gamma_{j,-2}^{(0)} \cos(j\phi-(j-2)\phi')+\Sigma_{j,-2}^{(0)} \sin(j\phi-(j-2)\phi') \bigg) + \sum_{j=1}^6 \bigg(\Gamma_{j,0}^{(0)} \cos(j\phi-j\phi')+\Sigma_{j,0}^{(0)} \sin(j\phi-j\phi') \bigg) \nonumber \\ & + \sum_{j=1}^4 \bigg(\Gamma_{j,+2}^{(0)} \cos(j\phi-(j+2)\phi')+\Sigma_{j,+2}^{(0)} \sin(j\phi-(j+2)\phi') \bigg)\bigg] + x^{0.5}\,\delta \bigg[ \sum_{j=1}^7 \bigg(\Gamma_{j,-1}^{(0.5)} \cos(j\phi-(j-1)\phi') \nonumber \\ & +\Sigma_{j,-1}^{(0.5)} \sin(j\phi-(j-1)\phi') \bigg) + \sum_{j=1}^5 \bigg(\Gamma_{j,+1}^{(0.5)} \cos(j\phi-(j+1)\phi')+\Sigma_{j,+1}^{(0.5)} \sin(j\phi-(j+1)\phi') \bigg)\nonumber \\ & + \sum_{j=1}^9 \bigg(\Gamma_{j,-3}^{(0.5)} \cos(j\phi-(j-3)\phi')+\Sigma_{j,-3}^{(0.5)} \sin(j\phi-(j-3)\phi') \bigg) + \sum_{j=1}^3 \bigg(\Gamma_{j,+3}^{(0.5)} \cos(j\phi-(j+3)\phi') \nonumber \\ & +\Sigma_{j,+3}^{(0.5)} \sin(j\phi-(j+3)\phi') \bigg)\bigg]+\,x \bigg[\sum_{j=1}^8 \bigg(\Gamma_{j,-2}^{(1)} \cos(j\phi-(j-2)\phi')+\Sigma_{j,-2}^{(1)} \sin(j\phi-(j-2)\phi') \bigg) \nonumber \\ & + \sum_{j=1}^4 \bigg(\Gamma_{j,+2}^{(1)} \cos(j\phi-(j+2)\phi')+\Sigma_{j,+2}^{(1)} \sin(j\phi-(j+2)\phi') \bigg) + \sum_{j=1}^6 \bigg(\Gamma_{j,0}^{(1)} \cos(j\phi-j\phi')+\Sigma_{j,0}^{(1)} \sin(j\phi-j\phi') \bigg) \nonumber \\ & + \sum_{j=1}^{10} \bigg(\Gamma_{j,-4}^{(1)} \cos(j\phi-(j-4)\phi')+\Sigma_{j,-4}^{(1)} \sin(j\phi-(j-4)\phi') \bigg) + \sum_{j=1}^2 \bigg(\Gamma_{j,+4}^{(1)} \cos(j\phi-(j+4)\phi')\nonumber \\ & +\Sigma_{j,+4}^{(1)} \sin(j\phi-(j+4)\phi') \bigg)\bigg]\bigg\},
\end{align}
\end{widetext}
where, as expected, we have defined
\begin{subequations}
\begin{align}
\Gamma_{j,\pm n}^{(p)}&=F_+\,C_{+}^{j,\pm n}(p) + F_\times\,C_{\times}^{j,\pm n}(p),\\
\Sigma_{j,\pm n}^{(p)}&=F_+\,S_{+}^{j,\pm n}(p) + F_\times\,S_{\times}^{j,\pm n}(p).
\end{align}
\end{subequations}
A further simplification is possible which requires, as expected, additional multi-index functions
\begin{subequations} \label{alpha_phibr}
\begin{align}
\alpha_{j,\pm n}^{(p)}&=\,{\rm sign}\left(\Gamma_{j,\pm n}^{(p)}\right)\sqrt[]{\left(\Gamma_{j,\pm n}^{(p)}\right)^2+\left(\Sigma_{j,\pm n}^{(p)}\right)^2},\\
\bar{\phi}_{j,\pm n}^{(p)}&=\tan^{-1}\left(- \frac{\Sigma_{j,\pm n}^{(p)}}{\Gamma_{j,\pm n}^{(p)}}\right),
\end{align}
\end{subequations}
such that
\begin{widetext}
\begin{align} \label{ht_1PN}
h(t)=& \, \frac{G\,m\,\eta}{c^2\, D_L} \,x\, \bigg\{\bigg[ \sum_{j=1}^8 \alpha_{j,-2}^{(0)} \cos(j\phi-(j-2)\phi'+\bar{\phi}_{j,-2}^{(0)}) + \sum_{j=1}^6 \alpha_{j,0}^{(0)} \cos(j\phi-j\phi'+\bar{\phi}_{j,0}^{(0)}) \nonumber \\ & + \sum_{j=1}^4 \alpha_{j,+2}^{(0)} \cos(j\phi-(j+2)\phi'+\bar{\phi}_{j,+2}^{(0)})\bigg] + x^{0.5}\,\delta \bigg[ \sum_{j=1}^7 \alpha_{j,-1}^{(0.5)}\cos(j\phi-(j-1)\phi'+\bar{\phi}_{j,-1}^{(0.5)}) \nonumber \\ & + \sum_{j=1}^5 \alpha_{j,+1}^{(0.5)} \cos(j\phi-(j+1)\phi'+\bar{\phi}_{j,+1}^{(0.5)}) + \sum_{j=1}^9 \alpha_{j,-3}^{(0.5)} \cos(j\phi-(j-3)\phi'+\bar{\phi}_{j,-3}^{(0.5)}) \nonumber \\ & + \sum_{j=1}^3 \alpha_{j,+3}^{(0.5)} \cos(j\phi-(j+3)\phi'+\bar{\phi}_{j,+3}^{(0.5)}) \bigg] +\,x \bigg[\sum_{j=1}^8 \alpha_{j,-2}^{(1)} \cos(j\phi-(j-2)\phi'+\bar{\phi}_{j,-2}^{(1)}) \nonumber \\ & + \sum_{j=1}^4 \alpha_{j,+2}^{(1)} \cos(j\phi-(j+2)\phi'+\bar{\phi}_{j,+2}^{(1)}) + \sum_{j=1}^6 \alpha_{j,0}^{(1)} \cos(j\phi-j\phi'+\bar{\phi}_{j,0}^{(1)}) \nonumber \\ & + \sum_{j=1}^{10} \alpha_{j,-4}^{(1)} \cos(j\phi-(j-4)\phi'+\bar{\phi}_{j,-4}^{(1)}) + \sum_{j=1}^2 \alpha_{j,+4}^{(1)} \cos(j\phi-(j+4)\phi'+\bar{\phi}_{j,+4}^{(1)}) \bigg] \bigg\}.
\end{align}
\end{widetext}
A cursory look at the above equation may give the impression that
the summation indices in the various sums are terminated in an arbitrary manner.
Interestingly, there is a simple way to predict the maximum value that the $j$ index can take in each of the above summations.
This is related to the coefficient of $\phi'$ in each of these cosine series.
We infer that the coefficient of $\phi'$ can take a maximum value of six,
as we are restricting eccentricity contributions to sixth order in $e_t$.
This ensures that the $j$ index can take maximum values of $8$, $6$ and $4$
at the Newtonian order in the above expression.
In other words, $j_{\rm max}$ in the above expression is such that
$j_{\rm max} \pm n = 6$, where the $\pm n$ value arises from the coefficient of the $\phi'$ variable in the various summations.
It is easy to see that the above relation holds true even at the 0.5PN and 1PN orders,
and it provides a natural check on the structure of these higher order PN contributions to $h(t)$.
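A tiny \texttt{Python} check of this bookkeeping rule reads:
\begin{verbatim}
# Signed shift s = +/- n multiplying phi' versus the maximum j
# read off Eq. (ht_1PN); the rule j_max + s = 6 holds for each sum
shifts = {-4: 10, -3: 9, -2: 8, -1: 7, 0: 6, 1: 5, 2: 4, 3: 3, 4: 2}
assert all(jmax + s == 6 for s, jmax in shifts.items())
\end{verbatim}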
To obtain GW response function for eccentric inspirals, we need to incorporate
temporal evolution in $\omega, e_t, \phi$ and $\phi'$, given by our earlier
listed 3PN-accurate differential equations.
The fact that we need to solve the above four coupled differential equations numerically ensures that
our approach to obtain a ready-to-use $h(t)$ will be computationally expensive. This is clearly one of the motivations to obtain a fully analytic $\tilde{h}(f)$ for compact binaries inspiraling along moderately eccentric orbits. Fortunately, we are in a position to compute an analytic amplitude corrected $\tilde{h}(f)$ that incorporates 3PN-accurate Fourier phases while keeping eccentricity contributions accurate to sixth order in $e_0$ at every PN order. \\
\section{\label{sec:level2} Analytic $\tilde{h}(f)$ for eccentric inspirals with 1PN amplitude corrections}
We first provide a detailed description of our approach to compute analytic Fourier transform of the restricted time domain inspiral family, given by Eq.~(\ref{Eq_h_t_0f}).
This will be followed by computing $\tilde h(f)$ associated
with Eq.~(\ref{ht_1PN}). Preliminary data analysis implications of our analytic $\tilde{h}(f)$ are probed in Sec.~\ref{sec:level2A}.
\subsection{Approach to compute Fourier transform of $h(t)$ for compact binaries inspiraling along precessing eccentric orbits } \label{sec:3A}
We begin by listing the expanded version of our quadrupolar order $h(t)$,
namely Eq.~(\ref{Eq_h_t_0f}) with $\mathcal{O}(e_t^4)$
eccentricity contributions as
\begin{widetext}
\begin{align}
\label{Eq_3.1}
h(t)=& \, \frac{G\,m\,\eta}{c^2\,D_L} x \bigg\{\bigg[\alpha_{1,-2}^{(0)}\cos\left(\phi+\phi'+\bar{\phi}_{1,-2}^{(0)}\right)+ \alpha_{2,-2}^{(0)}\cos\left(2\phi+\bar{\phi}_{2,-2}^{(0)}\right)+\alpha_{3,-2}^{(0)}\cos\left(3\phi-\phi'+\bar{\phi}_{3,-2}^{(0)}\right) \nonumber \\ & + \alpha_{4,-2}^{(0)}\cos\left(4\phi-2\phi'+\bar{\phi}_{4,-2}^{(0)}\right) + \alpha_{5,-2}^{(0)}\cos\left(5\phi-3\phi'+\bar{\phi}_{5,-2}^{(0)}\right) + \alpha_{6,-2}^{(0)}\cos\left(6\phi-4\phi'+\bar{\phi}_{6,-2}^{(0)}\right)\bigg] \nonumber \\ & + \bigg[\alpha_{1,0}^{(0)} \cos\left(\phi-\phi'+\bar{\phi}_{1,0}^{(0)}\right) + \alpha_{2,0}^{(0)} \cos\left(2\phi-2\phi'+\bar{\phi}_{2,0}^{(0)}\right)+\alpha_{3,0}^{(0)} \cos\left(3\phi-3\phi'+\bar{\phi}_{3,0}^{(0)}\right)+\alpha_{4,0}^{(0)} \cos\left(4\phi-4\phi'+\bar{\phi}_{4,0}^{(0)}\right)\bigg] \nonumber \\ & + \bigg[\alpha_{1,+2}^{(0)} \cos\left(\phi-3\phi'+\bar{\phi}_{1,+2}^{(0)}\right)+\alpha_{2,+2}^{(0)} \cos\left(2\phi-4\phi'+\bar{\phi}_{2,+2}^{(0)}\right)\bigg]\bigg\}.
\end{align}
\end{widetext}
Clearly, we see three distinct square brackets containing cosine functions with three types of explicitly time dependent arguments, namely $j\phi-(j-2)\phi'$, $j\phi-j\phi'$ and $j\phi-(j+2)\phi'$.
Note that $\alpha_{j,\pm n}^{(0)}$ and
$ \bar{\phi}_{j,\pm n}^{(0)}$ experience implicit temporal evolution due to the GW emission induced variations to $\omega$ and $e_t$.
The main reason for displaying the above equation is to show explicitly
how the periastron advance, defined by $\phi'$, influences the harmonic structure of $h(t)$ in comparison with Eq.~(4.21) of Ref.~\cite{YABW} or our Eq.~(\ref{Eq_hN}).
We obtain an analytic Fourier domain version of the above equation
with the help of the Stationary Phase Approximation, detailed in Ref.~\cite{BO_book}.
A description of how this approach can be employed to compute
$\tilde{h}(f)$ for compact binaries spiraling along Keplerian
eccentric orbits can be found in Sec.~IV of Ref.~\cite{YABW}.
This approximation is quite appropriate for us as it provides a prescription to compute the asymptotic behavior of
generalized cosine time series, as given by our Eq.~(\ref{Eq_3.1}).
Without loss of generality, we may write such a time series as
\begin{align}
S(t)= s(t)\cos(l\phi(t))\,,
\end{align}
where $l>0$ and as expected
$S(t)$ should be a product of slowly varying amplitude $s(t)$ and a rapidly varying cosine function
with argument $l\phi(t)$.
By virtue of the Riemann-Lebesgue lemma, as noted in Ref.~\cite{BO_book}, the
Fourier transform of $S(t)$ becomes
\begin{align}
S_f(f)=\frac{1}{2}\int_{-\infty}^{\infty}s(t)e^{if(2\pi t-l\phi(t)/f)}dt\,.
\end{align}
It is not difficult to gather that the phase of the exponential function becomes stationary at the point $t_0$ defined by
$l\dot{\phi}(t_0)=2\pi f$. This allows us to invoke the approach of the SPA to obtain the
asymptotic behaviour of $S_f(f)$ by the following expression:
\begin{align}
S_f(f)=&\, s(t_0)e^{-i\Psi(t_0)\pm i \pi/(2\times2)}\left[\frac{2!}{|\Psi^{(2)}(t_0)|}\right]^{\frac{1}{2}}\frac{\Gamma \left(1/2 \right)}{2} \nonumber \\
=& \, \frac{s(t_0)}{2\,\sqrt[]{l\dot{F}(t_0)}}e^{-i(\Psi(t_0)\mp\pi/4)}\,, \label{eq:SPA}
\end{align}
where the Fourier phase is defined as $\Psi(t)\coloneqq -2\pi f t\,+\,l\phi(t)$.
Note that $F(t)=\dot{\phi}(t)/2\pi$
and therefore its value at the stationary point should be $F(t_0)= f/l$.
Interestingly, an essentially identical computation shows that
the Fourier transform of a similar sinusoidal time series is
$i\,S_f(f)$. \\
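The accuracy of Eq.~(\ref{eq:SPA}) is easily probed on a toy chirp with constant $\dot F$. In the following \texttt{Python} sketch, where all numbers are illustrative, the in-band mean of the discrete Fourier magnitude settles near the SPA prediction $1/(2\sqrt{l\,\dot F})$:
\begin{verbatim}
import numpy as np

fs, T = 1024.0, 8.0
F0, Fdot, l = 20.0, 5.0, 2          # toy orbital chirp parameters
t = np.arange(0.0, T, 1.0 / fs)
phi = 2.0 * np.pi * (F0 * t + 0.5 * Fdot * t**2)
Sf = np.fft.rfft(np.cos(l * phi)) / fs   # discrete stand-in for FT
f = np.fft.rfftfreq(t.size, 1.0 / fs)
band = (f > l * F0 + 5.0) & (f < l * (F0 + Fdot * T) - 5.0)
print(np.mean(np.abs(Sf[band])),         # in-band mean ...
      1.0 / (2.0 * np.sqrt(l * Fdot)))   # ... versus SPA plateau
\end{verbatim}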
To make the above expression for $S_f(f)$ operational, we require an explicit expression for the above defined Fourier phase at the stationary point $t_0$, namely
\begin{align} \label{eq:Psi}
\Psi(t_0) \coloneqq -2\pi f t_0\,+\,l\phi(t_0)
\end{align}
This is done by defining $\tau=F/\dot{F}$ such that $\phi(F)$ and $t(F)$ become
\begin{align}
\label{eq:phi} \phi(F)=&\, \phi_c + 2\pi \int^{F} \tau'dF'\,,\\
\label{eq:t} t(F)=&\, t_c+\int^{F}\frac{\tau '}{F'}dF'\,,
\end{align}
where $\phi_c$ and $t_c$ are the orbital phase and time at coalescence. In the present context, $\tau$ is defined using our 3PN-accurate expression for $\dot{\omega}$ given by Eq.~(\ref{Tevolve_3}).
Additionally, we require 3PN-accurate $e_t (\omega, \omega_0, e_0)$
expression, namely 3PN extension of Eq.~(\ref{16}), for computing these integrals analytically.
The expression for $\Psi[F(t_0)]$, obtained using Eqs.~(\ref{eq:phi}) and (\ref{eq:t}) in Eq.~(\ref{eq:Psi}), may be written as
\begin{align} \label{Eq_3_2}
\Psi_l[F(t_0)]=\,l\phi_c-2\pi ft_c+2\pi\int^{F(t_0)} \tau' \left(l- \frac{f}{F'}\right)dF'\,,
\end{align}
where, in the present context, the above integral is to be evaluated
at the point of time $t_0$ at which the orbital frequency is related to the Fourier
frequency by $F(t_0) = f/l$.
A close inspection of Eq.~(\ref{Eq_3.1}) reveals that our expression for the quadrupolar order time domain response function
is structurally similar to the above displayed cosine time series and therefore we can easily adapt these
results to obtain the Fourier transform of our quadrupolar order
$h(t)$.
However, the SPA based $\tilde{h}(f)$ will have contributions from a number of distinct stationary points.
This is primarily due to the fact that Eq.~(\ref{Eq_3.1}) consists of cosine functions of three different arguments, namely
$ j\, \phi - (j+2)\, \phi', j\, \phi - (j -2)\, \phi' $ and $ j\,\phi - j\,\phi'$. Note that there are only
three distinct types of cosine arguments as we restricted our attention to the quadrupolar order GW response function for eccentric inspirals.
However, we infer from our 1PN-accurate $h(t)$, given by Eq.~(\ref{ht_1PN}), that there are {\it nine} distinct types of cosine functions
with arguments $j\phi-(j\pm n)\phi'$ where $n=0,1,2,3,4$.
The associated {\it nine} stationary points
$t^{\pm n}$ are computed by demanding that $\dot \Psi^{\pm n}(t^{\pm n})=0$, where $\Psi^{\pm n}(t) \coloneqq -2\pi f t\,+\,j\phi-(j \pm n)\phi'$.
\\
For computing the Fourier transform of
Eq.~(\ref{Eq_3.1}), we solve $\dot{\Psi}^{\pm n}(t^{\pm n})= 0$ to get the relevant stationary points, and this leads to
\begin{align}
-2\pi f +j\dot{\phi}-(j \pm n)\dot{\phi'} &=0 \,,
\end{align}
where $\dot \phi = N(1+k)$, which by definition is $\omega$.
The treatment of $\dot \phi'$ requires the PN approximation,
as $ \dot \phi'$ equals $k\,N$ (this is because
$\phi' = k\, l$).
We need to express $k\,N$
in terms of $\omega$ and this leads to $\dot \phi'= \omega\, k/( 1+k)$
as $\omega = N( 1+ k) $.
For computing Fourier phase analytically, we
express $\dot \phi'$ as $\omega\, k^{(6)}_{(3)} $, where
$k^{(6)}_{(3)} $ stands for
the 3PN-accurate
expression for $k/(1+k)$ that incorporates $e_t$ contributions accurate to ${\cal O}(e_t^6)$.
The resulting expression reads
\begin{widetext}
\begin{align} \label{k63}
k^{(6)}_{(3)} =&\, x\bigg\{3 \bigg[1+e_t^2+e_t^4+e_t^6\bigg]\bigg\}+x^2\bigg\{\frac{9}{2}-7 \eta +\bigg[\frac{87}{4}-\frac{41 \eta }{2}\bigg] e_t^2+\bigg[39-34 \eta\bigg] e_t^4+\bigg[\frac{225}{4}-\frac{95 \eta }{2}\bigg] e_t^6\bigg\}+x^3\bigg\{\frac{27}{2} \nonumber \\ &+\bigg(-\frac{481}{4}+\frac{123 \pi ^2}{32}\bigg) \eta +7 \eta ^2+\bigg[\frac{519}{4}+\bigg(-\frac{2037}{4}+\frac{1599 \pi ^2}{128}\bigg)\eta +61 \eta ^2\bigg] e_t^2+\bigg[\frac{2811}{8}+\bigg(-1174+\frac{3321 \pi ^2}{128}\bigg) \eta \nonumber \\ &+\frac{1361}{8}\eta ^2\bigg] e_t^4+\bigg[\frac{10779}{16}+\bigg(-\frac{16901}{8}+\frac{2829 \pi ^2}{64}\bigg) \eta +\frac{2675}{8} \eta ^2\bigg] e_t^6\bigg\}.
\end{align}
\end{widetext}
With the help of these inputs, the stationary points
$t^{\pm n}$, where $\dot \Psi^{\pm n}(t^{\pm n})$ vanish, are given by
\begin{align*}
\left(j-(j \pm n)\, k^{(6)}_{(3)}\right)\,\dot{\phi}(t^{\pm n})\, &=2\,\pi\,f\,.
\end{align*}
In other words, the stationary phase condition is given by
\begin{align}\label{SPA_1}
F(t^{\pm n})\,=\, \frac{f}{ \left(j-(j \pm n)\, k^{(6)}_{(3)}\right)}\,.
\end{align}
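Because $k^{(6)}_{(3)}$ itself depends on $x$, and hence on $F$, the shifted stationary points are conveniently obtained by a fixed-point iteration seeded with the unperturbed value $f/j$. A minimal \texttt{Python} sketch, keeping only the leading term of Eq.~(\ref{k63}) and using a signed shift \texttt{s} $= \pm n$, reads:
\begin{verbatim}
import numpy as np

GMSUN_S = 4.925490947e-6            # G Msun / c^3 in seconds

def F_stationary(f, j, s, m, et):
    # Solve Eq. (SPA_1); only the leading (1PN) term of the
    # k/(1+k) expansion, Eq. (k63), is retained in this sketch
    Gm = GMSUN_S * m
    F = f / j                        # unperturbed guess
    for _ in range(100):
        x = (Gm * 2.0 * np.pi * F) ** (2.0 / 3.0)
        k63 = 3.0 * x * (1 + et**2 + et**4 + et**6)
        F_new = f / (j - (j + s) * k63)
        if abs(F_new - F) < 1e-12 * F:
            break
        F = F_new
    return F
\end{verbatim}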
Rewriting $\Psi^{\pm n}(t) \coloneqq -2\pi f t+j\phi-(j \pm n)\phi'$ using the relation between $\phi'$ and $\phi$, namely $\phi'= k^{(6)}_{(3)}\, \phi$, gives $\Psi^{\pm n}(t) \coloneqq -2\pi f t+\left(j-(j \pm n)k^{(6)}_{(3)}\right)\phi$. We are now in a position to obtain analytic PN-accurate expressions for
Fourier phases, associated with these stationary points.
With Eqs.~(\ref{eq:phi}) and (\ref{eq:t}), our Eq.~(\ref{Eq_3_2}) becomes
\begin{widetext}
\begin{align}
\label{Eq_3_5}
\Psi^{\pm n}_j[F(t^{\pm n})]=\, \left(j-(j\pm n)k^{(6)}_{(3)}\right) \phi_c - 2\pi f t_c + 2\pi\int^{F(t^{\pm n})} \tau' \left( j-(j \pm n)\,k^{(6)}_{(3)} - \frac{f}{F'}\right)dF'\,.
\end{align}
\end{widetext}
Note that $n$ takes the values $0$ and $2$, as we are dealing with the quadrupolar order GW response function
given by
Eq.~(\ref{Eq_3.1}).
However, $n$ varies from $0$ to $4$ if the underlying GW response function contains 1PN-accurate amplitude corrections that include
at each PN order eccentricity corrections accurate to
{${\cal O}(e_t^6)$}. Further,
we do not display here the 3PN-accurate expression for $\tau$
that includes the leading order $e_t$ corrections,
listed as Eqs.~(6.7a) and (6.7b) in Ref.~\cite{MFAM16}.
However, we do list below the explicit 3PN-accurate $\Psi_j^{\pm n}[F(t^{\pm n})]$ that incorporates
leading order $e_0$ contributions at each PN order:
\begin{widetext}
\begin{align} \label{psi_e02}
\Psi_j^n =&\, \left(j-(j+n)k^{(6)}_{(3)}\right) \phi_c - 2\pi f t_c - \frac{3 \, j}{256\, \eta \, x^{5/2}} \bigg\{1-\frac{2355}{1462} e_0^2 \chi ^{-19/9}+ x\bigg[-\frac{2585}{756}-\frac{25 n}{3 j}+\frac{55}{9}\eta \nonumber \\ & +\bigg(\bigg(\frac{69114725}{14968128}+\frac{1805 n}{172 j}-\frac{128365}{12432} \eta\bigg) \chi^{-19/9}+\bigg(-\frac{2223905}{491232}+\frac{154645}{17544}\eta\bigg) \chi ^{-25/9}\bigg) e_0^2 \bigg]
+x^{3/2}\bigg[-16 \pi \nonumber \\ & +\bigg(\frac{65561 \pi}{4080}\chi ^{-19/9}-\frac{295945 \pi}{35088}\chi ^{-28/9}\bigg) e_0^2\bigg]+x^2\bigg[ -\frac{48825515}{508032}-\frac{31805 n}{252 j}+\bigg(\frac{22105}{504}-\frac{10 n}{j}\bigg) \eta +\frac{3085}{72}\eta^2 \nonumber \\ & +\bigg(\bigg(\frac{115250777195}{2045440512}+\frac{323580365 n}{5040288 j}+\bigg(-\frac{72324815665}{6562454976}+\frac{36539875 n}{1260072 j}\bigg)\eta -\frac{10688155}{294624}\eta^2\bigg) \chi^{-19/9} \nonumber \\ & +\bigg(\frac{195802015925}{15087873024}+\frac{5113565 n}{173376 j}+\bigg(-\frac{3656612095}{67356576}-\frac{355585 n}{6192 j}\bigg) \eta +\frac{25287905}{447552}\eta^2\bigg) \chi^{-25/9}+\bigg(\frac{936702035}{1485485568} \nonumber \\ & +\frac{3062285}{260064}\eta-\frac{14251675}{631584}\eta^2\bigg)\chi^{-31/9}\bigg) e_0^2\bigg]+x^{5/2}\bigg[\frac{14453 \pi }{756}-\frac{32 \pi n}{j}-\frac{65 \pi }{9} \eta -\bigg(\frac{1675}{756}+\frac{160 n}{3 j}+\frac{65}{9}\eta\bigg) \pi \log\left(\frac{f}{j}\right) \nonumber \\ & +\bigg(\bigg(-\frac{458370775 \pi }{6837264}-\frac{4909969 \pi n}{46512 j}+\frac{15803101 \pi \eta }{229824}\bigg) \chi ^{-19/9}+\bigg(\frac{185734313 \pi
}{4112640}-\frac{12915517 \pi \eta }{146880}\bigg) \chi ^{-25/9} \nonumber \\ & +\bigg(\frac{26056251325 \pi }{1077705216}+\frac{680485 \pi n}{12384 j}-\frac{48393605 \pi \eta }{895104}\bigg) \chi ^{-28/9}+\bigg(-\frac{7063901 \pi }{520128}+\frac{149064749 \pi \eta }{2210544}\bigg) \chi^{-34/9}\bigg)e_0^2\bigg]\nonumber \\ &+x^{3}\bigg[\frac{13966988843531}{4694215680}+\frac{257982425 n}{508032 j}-\frac{640 \pi ^2}{3}-\frac{6848 \gamma
}{21}+\bigg(-\frac{20562265315}{3048192}-\frac{2393105 n}{1512 j}+\frac{23575 \pi ^2}{96}\nonumber \\ &+\frac{1845 \pi ^2 n}{32 j}\bigg) \eta +\bigg(\frac{110255}{1728}+\frac{475 n}{24 j}\bigg) \eta ^2-\frac{127825 \eta ^3}{1296}-\frac{13696 \log (2)}{21}-\frac{3424 \log (x)}{21} \nonumber \\ & +\bigg(\bigg(\frac{4175723876720788380517}{5556561877278720000}+\frac{534109712725265 n}{2405438042112 j}-\frac{21508213 \pi ^2}{276480}-\frac{734341 \gamma}{16800} +\bigg(-\frac{37399145056383727}{28865256505344}\nonumber \\ &-\frac{1219797059185 n}{2045440512 j}+\frac{12111605 \pi ^2}{264192}+\frac{639805 n \pi^2}{22016 j}\bigg) \eta +\bigg(-\frac{159596464273381}{1718170030080} +\frac{43766986495 n}{1022720256 j}\bigg) \eta ^2-\frac{69237581}{746496}\eta^3 \nonumber \\ &-\frac{9663919 \log (2)}{50400}+\frac{4602177 \log (3)}{44800}-\frac{734341 \log (x)}{33600}\bigg)\chi ^{-19/9}+\bigg(\frac{326505451793435}{2061804036096}+\frac{916703174045 n}{5080610304 j}\nonumber \\ &-\bigg(\frac{13467050491570355}{39689727694848}+\frac{9519440485 n}{35282016 j}\bigg) \eta -\bigg(\frac{2186530635995}{52499639808} +\frac{7198355375 n}{45362592 j}\bigg) \eta ^2+\frac{2105566535 }{10606464}\eta^3\bigg) \chi ^{-25/9}\nonumber \\ &+\frac{24716497 \pi ^2 }{293760}\chi ^{-28/9}+\bigg(-\frac{82471214720975}{45625728024576}-\frac{2153818055 n}{524289024 j} +\bigg(-\frac{48415393035455}{1629490286592}-\frac{119702185 n}{1560384 j}\bigg) \eta \nonumber \\ &+\bigg(\frac{906325428545}{6466231296}+\frac{32769775 n}{222912 j}\bigg) \eta ^2-\frac{2330466575}{16111872} \eta ^3\bigg) \chi ^{-31/9} +\bigg(-\frac{4165508390854487}{16471063977984}-\frac{96423905 \pi ^2}{5052672}\nonumber \\ &\qquad+\frac{2603845 \gamma}{61404}+\bigg(-\frac{1437364085977}{53477480448}+\frac{3121945 \pi ^2}{561408}\bigg) \eta +\frac{4499991305 \eta ^2}{636636672}+\frac{2425890995 \eta^3}{68211072}+\frac{1898287 \log (2)}{184212}\nonumber \\ &\qquad +\frac{12246471 \log (3)}{163744}+\frac{2603845 \log (x)}{122808}-\frac{2603845 \log (\chi)}{184212}\bigg)\chi ^{-37/9}\bigg)e_0^2\bigg]\bigg\}\,.
\end{align}
\end{widetext}
A few comments are in order. To obtain the circular limit, we need to impose $n=j$ in $j\phi-(j-n)\phi'$ and let $e_0=0$.
This is indeed due to the fact that $k^{(6)}_{(3)}$ does not go to zero in the circular limit. Additionally,
we have verified that the resulting $\Psi^{-2}_{2} (f) $ expression in the $e_0 \rightarrow 0$ limit is identical to
3PN accurate version of Eq.~(6.26) in Ref.~\cite{MFAM16} while neglecting the spin contributions.
It is natural to expect that the $\Psi^{0}_{j} (f) $ version of our above equation should be identical to Eq.~(6.26) of Ref.~\cite{MFAM16}.
This is because
this equation indeed provided quadrupolar $\tilde{h}(f)$ with 3PN-accurate Fourier phase while incorporating leading order $e_0$ corrections at each PN order by extending the post-circular approach of Ref.~\cite{YABW}.
However, our expression for $\Psi^{0}_{j}(f)$ is not identical to Eq.~(6.26) of Ref.~\cite{MFAM16}.
This is because that effort did not incorporate the effect of periastron advance while obtaining an analytic expression for the Fourier phase.
A close inspection of the $n=0$ version of our Eq.~(\ref{Eq_3_5}) reveals that it is still influenced by our PN-accurate expression for $k^{(6)}_{(3)}$. This clearly shows that it is not possible to
remove the effect of periastron advance from our Eq.~(\ref{Eq_3_5}).
Therefore, our Eq.~(\ref{psi_e02}) will be different from Eq.~(6.26) of Ref.~\cite{MFAM16} which, as noted earlier, neglected the effect of periastron advance.
The differences may be attributed to the
physical fact that we are providing an analytic expression for $\tilde{h}(f)$ associated with compact binaries inspiraling along PN-accurate eccentric orbits. In contrast, Ref.~\cite{MFAM16} models inspiral GWs from compact binaries spiraling in along Newtonian orbits though frequency evolution in both cases are fully 3PN accurate.
Additionally, we are unable to match with the 2PN order results of
Ref.~\cite{THG} due to similar reasons.
We note in passing that the explicit 3PN-accurate ${\cal O}(e_0^4)$ contributions to $\Psi^{n}_{j}(f)$ and the associated
3PN-accurate $e_t$ expression are provided in Appendix~\ref{appendixA}.
We now fully employ the final result of the SPA, namely Eq.~(\ref{eq:SPA}),
to compute the Fourier transform of Eq.~(\ref{Eq_3.1}).
This gives us
\begin{widetext}
\begin{align}
\tilde{h}[F(t_0)]=& \bigg(\frac{5\,\pi\,\eta}{384}\bigg)^{1/2} \frac{G^2m^2}{c^5D_L}\,\bigg(\frac{G\,m\,\pi\,2\,F(t_0)}{c^3}\bigg)^{-7/6}\,\frac{\left(1-e_t^2\right)^{7/4}}{\left(1+\frac{73}{24}e_t^2+\frac{37}{96}e_t^4\right)^{1/2}}\bigg\{\sum_{j=1}^6\alpha_{j,-2}^{(0)}\sqrt{\frac{2}{j}}e^{-i\bar{\phi}^{(0)}_{j,-2}[F(t_0)]}e^{-i(\Psi_j^{-2}+\pi/4)} \nonumber \\ & +\sum_{j=1}^4\alpha_{j,0}^{(0)}\sqrt{\frac{2}{j}}e^{-i\bar{\phi}^{(0)}_{j,0}[F(t_0)]}e^{-i(\Psi_j^{0}+\pi/4)} + \sum_{j=1}^2\alpha_{j,+2}^{(0)}\sqrt{\frac{2}{j}}e^{-i\bar{\phi}^{(0)}_{j,+2}[F(t_0)]}e^{-i(\Psi_j^{+2}+\pi/4)}\bigg\} \,, \label{hf_spa}
\end{align}
\end{widetext}
where we have used the quadrupolar (Newtonian) order differential
equation for the orbital frequency, available in Refs.~\cite{PM63,DGI},
to compute the amplitudes of $\tilde h[F(t_0)]$.
Note that we need to employ the previously defined
stationary points
to replace $F(t_0)$. In practice, we employ the
unperturbed stationary points, namely $F(t_0)=f/j$, while
evaluating the amplitudes of $\tilde{h}(f)$.
In what follows, we collect the above pieces to display the quadrupolar-order $\tilde{h}(f)$ that incorporates
fourth-order orbital eccentricity contributions
while including the effects of
3PN-accurate frequency and eccentricity evolution and periastron
advance:
\begin{widetext}
\begin{align} \label{hf_newt}
\tilde{h}(f) &= \bigg(\frac{5 \pi \eta }{384}\bigg)^{1/2}\frac{G^2 m^2}{c^5 D_L}\bigg(\frac{G m \pi f}{c^3}\bigg)^{-7/6}\bigg\{\sum _{j=1}^6 \xi _{j,-2}^{(0)}\bigg(\frac{j}{2}\bigg)^{2/3} e^{-i \left(\Psi _j^{-2}+\frac{\pi }{4}\right)}+\sum _{j=1}^4 \xi _{j,0}^{(0)} \bigg(\frac{j}{2}\bigg)^{2/3} e^{-i \left(\Psi _j^0 + \frac{\pi }{4}\right)} \nonumber \\ & +\sum _{j=1}^2 \xi _{j,+2}^{(0)}\bigg(\frac{j}{2}\bigg)^{2/3} e^{-i \left(\Psi _j^{+2}+\frac{\pi }{4}\right)}\bigg\} ,
\end{align}
\end{widetext}
where the Fourier amplitudes $\xi_{j,\pm n}^{(0)}$ are now given by
\begin{align} \label{xi_newt}
\xi_{j,\pm n}^{(0)} &= \frac{\left( 1-e_t^2 \right)^{7/4}}{\left(1+\frac{73}{24}e_t^2+\frac{37}{96}e_t^4\right)^{1/2}} \alpha_{j,\pm n}^{(0)}\,e^{-\textit{i} \bar{\phi}_{j,\pm n}^{(0)}(f/j)}\,,
\end{align}
and $n$ takes values 0 and 2.
A crucial expression required to operationalize the above $\tilde{h}(f)$, namely the 3PN-accurate expression for $e_t$ in terms of $e_0$, $x$ and $\chi$,
is listed as Eq.~(\ref{appendixet}) in the Appendix.
Note that the approach for obtaining such an expression for $e_t$ is detailed in Ref.~\cite{THG} and briefly summarized in Sec.~\ref{sec:PostCirc_1}.
Finally, the fully 3PN-accurate expression for $\Psi^{n}_{j} (f) $
that incorporates fourth-order orbital eccentricity contributions
at each PN order
is displayed as Eq.~(\ref{appendixpsi})
in the Appendix.
It should be noted that the SPA
demands the evaluation of the Fourier amplitudes $\xi_{j,\pm n}$ and Fourier phases $\Psi_j^{\pm n}$ at $F(t^{\pm n})\,=\,f/ \left(j-(j \pm n)\, k^{(6)}_{(3)}\right)$.
We have extended these calculations by including 1PN-accurate amplitude corrections to $h_{\times}$ and $h_+$
with the help of Eqs.~(\ref{hpc}),(\ref{22}),(\ref{23}) and (\ref{24}).
Additionally, we have included
initial eccentricity corrections, accurate to ${\cal O}(e_0^6)$, in our 3PN-accurate $e_t$ and $\Psi^{n}_{j} (f)$ expressions.
We note in passing that these expressions are available in the
accompanying \texttt{Mathematica} file.
The resulting expression for $\tilde{h}(f)$ may be symbolically written as
\begin{widetext}
\begin{align} \label{hf_1PN}
\tilde{h}(f) =&\, {\left(\frac{5 \pi \eta }{384}\right)}^{1/2}\frac{G^2 m^2}{c^5 D_L}\left(\frac{G m \pi f}{c^3}\right)^{-7/6}\bigg\{\bigg[\sum _{j=1}^6 \xi _{j,0}^{(0)} \left(\frac{j}{2}\right)^{2/3} e^{-i \left(\Psi _j^0+\frac{\pi }{4} \right)}+\sum _{j=1}^4 \xi _{j,+2}^{(0)} \left(\frac{j}{2}\right)^{2/3} e^{-i \left(\Psi _j^{+2}+\frac{\pi }{4}\right)} \nonumber \\ & +\sum _{j=1}^8 \xi _{j,-2}^{(0)} \left(\frac{j}{2}\right)^{2/3} e^{-i \left(\Psi _j^{-2}+\frac{\pi }{4}\right)}\bigg]+\left(\frac{G m \pi f}{c^3}\right)^{1/3} \delta \bigg[\sum _{j=1}^5 \xi _{j,+1}^{(0.5)}\left(\frac{j}{2}\right)^{1/3} e^{-i \left(\Psi _j^{+1}+\frac{\pi }{4}\right)} \nonumber \\ & +\sum _{j=1}^7 \xi _{j,-1}^{(0.5)}\left(\frac{j}{2}\right)^{1/3} e^{-i \left(\Psi _j^{-1}+\frac{\pi }{4}\right)}+\sum _{j=1}^3 \xi _{j,+3}^{(0.5)}\left(\frac{j}{2}\right)^{1/3} e^{-i \left(\Psi _j^{+3}+\frac{\pi }{4}\right)}+\sum _{j=1}^9 \xi _{j,-3}^{(0.5)}\left(\frac{j}{2}\right)^{1/3} e^{-i \left(\Psi _j^{-3}+\frac{\pi }{4}\right)}\bigg] \nonumber \\ & +\left(\frac{G m \pi f}{c^3}\right)^{2/3}\bigg[\sum _{j=1}^6 \xi _{j,0}^{(1)} e^{-i \left(\Psi _j^0 +\frac{\pi }{4}\right)}+ \sum _{j=1}^4 \xi _{j,+2}^{(1)} e^{-i \left(\Psi _j^{+2}+\frac{\pi }{4}\right)}+ \sum _{j=1}^8 \xi _{j,-2}^{(1)} e^{-i \left(\Psi _j^{-2}+\frac{\pi }{4}\right)} \nonumber \\ & + \sum _{j=1}^2 \xi _{j,+4}^{(1)} e^{-i \left(\Psi _j^{+4}+\frac{\pi }{4}\right)}+ \sum _{j=1}^{10} \xi _{j,-4}^{(1)} e^{-i \left(\Psi _j^{-4}+\frac{\pi }{4}\right)}\bigg]\bigg\} \,.
\end{align}
\end{widetext}
In the above expression, the Fourier amplitudes are given by
\begin{widetext}
\begin{align} \label{xi_1PN}
\xi_{j,\pm n}^{(p)} &= \bigg\{\frac{\left( 1-e_t^2 \right)^{7/4}}{\left(1+\frac{73}{24}e_t^2+\frac{37}{96}e_t^4\right)^{1/2}}+\frac{\left( 1-e_t^2 \right)^{3/4}}{10752\left(1+\frac{73}{24}e_t^2+\frac{37}{96}e_t^4\right)^{3/2}}\bigg[11888 + 14784 \eta - e_t^2 (87720 - 159600 \eta) \nonumber \\ & \qquad - e_t^4 (171038 - 141708 \eta) - e_t^6 (11717 - 8288 \eta)\bigg]\bigg\} \alpha_{j,\pm n}^{(p)}\,e^{-\textit{i} \bar{\phi}_{j,\pm n}^{(p)}},
\end{align}
\end{widetext}
where the superscript $p$ takes values $0,0.5$ and $1$ in our amplitude corrected $\tilde{h}(f)$.
Further, we have used the 1PN-accurate differential equation for the orbital frequency while obtaining the Fourier amplitude expressions.
This expression, adaptable from Eqs.~(B8a) and (B9a) of Ref.~\cite{THG}, reads
\begin{widetext}
\begin{align}
\frac{dF}{dt}=&\,\frac{48\,c^6\,\eta}{5\,\pi\,G^2\,m^2}\left(\frac{G\,m\,2\,\pi\,F}{c^3}\right)^{11/3}\frac{\left(1+\frac{73}{24}e_t^2+\frac{37}{96}e_t^4\right)}{\left(1-e_t^2\right)^{7/2}} - \frac{743\,c^6\,\eta}{35\,\pi\,G^2\,m^2}\left(\frac{G\,m\,2\,\pi\,F}{c^3}\right)^{13/3}\frac{1}{\left(1-e_t^2\right)^{9/2}} \nonumber \\ &\bigg\{1 +\frac{924}{743}\eta+e_t^2\bigg(-\frac{10965}{1486}+\frac{9975}{743}\eta\bigg) +e_t^4\bigg(-\frac{85519}{5944}+\frac{35427}{2972}\eta\bigg)+e_t^6\bigg(-\frac{11717}{11888}+\frac{518}{743}\eta\bigg)\bigg\}\,.
\end{align}
\end{widetext}
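To illustrate how such evolution equations are used in practice, the following minimal sketch integrates only the leading (Newtonian) pieces: the first term of the above $dF/dt$ together with the standard quadrupolar-order $de_t/dt$ of Peters (1964). We stress that the $de_t/dt$ expression below is quoted from the standard literature as an assumption of this illustration (our 3PN-accurate versions are relegated to the Appendix and the accompanying \texttt{Mathematica} file), and the chosen masses, frequencies and eccentricity are merely fiducial.
\begin{verbatim}
# A minimal sketch (not the paper's code): Newtonian-order coupled evolution
# of the orbital frequency F and time eccentricity e_t. dF/dt is the leading
# term of the equation above; de_t/dt is the standard Peters (1964) result,
# quoted here as an assumption of this illustration.
import numpy as np
from scipy.integrate import solve_ivp

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def rhs(t, y, m, eta):
    F, et = y
    v = (G * m * 2.0 * np.pi * F / c**3) ** (1.0 / 3.0)  # PN parameter
    dFdt = (48.0 * c**6 * eta / (5.0 * np.pi * G**2 * m**2) * v**11
            * (1.0 + 73.0 / 24.0 * et**2 + 37.0 / 96.0 * et**4)
            / (1.0 - et**2) ** 3.5)
    detdt = (-(et / 15.0) * eta * c**3 / (G * m) * v**8
             * (304.0 + 121.0 * et**2) / (1.0 - et**2) ** 2.5)
    return [dFdt, detdt]

def reach_lso(t, y, m, eta):
    # stop at the orbital LSO frequency, half of f_LSO = c^3/(G m pi 6^{3/2})
    return y[0] - 0.5 * c**3 / (G * m * np.pi * 6.0 ** 1.5)
reach_lso.terminal = True

m, eta = 20.0 * Msun, 0.25   # fiducial equal-mass BH-BH binary
sol = solve_ivp(rhs, [0.0, 1.0e4], [10.0, 0.1], args=(m, eta),
                events=reach_lso, rtol=1.0e-8)
print(f"inspiral lasts {sol.t[-1]:.2f} s; e_t at the LSO: {sol.y[1, -1]:.4f}")
\end{verbatim}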
The explicit expressions for $e_t$ and $\Psi^{n}_{j}(f)$ that incorporate the next-to-leading-order $e_0$ corrections at each PN order,
as noted earlier, are listed in Appendix~\ref{appendixA}.
We move on to contrast our approach with other attempts in the literature.
Sec.~VI of Ref.~\cite{YABW} sketched a road map for including PN corrections in their Newtonian waveform family.
This road map included a suggestion, influenced by Ref.~\cite{DGI}, to incorporate the effect of periastron advance into their quadrupolar-order GW polarization states.
Their suggestion involves splitting the orbital phase evolution into two parts, where one part remains
linear in the mean anomaly $l$ while the other part is periodic in $l$. These considerations led them to re-write our Eq.~(\ref{Eq_hN}) essentially as
\begin{align}
h(t)&=\, -\frac{G\,m\,\eta}{c^2\,D_L} x \sum_{j=1}^{10} \alpha_j \cos\bigg\{j\,l\,\left(1+k^{(6)}_{(1)}\right)+\phi_j \bigg\} \,,
\end{align}
where $k^{(6)}_{(1)}$ stands for the 1PN-accurate expression for $k$, given by $3\,x/(1-e_t^2)$, expanded to sixth order in $e_t$ (see our Eq.~(\ref{k63})).
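For concreteness, since $1/(1-e_t^2)=\sum_{k \geq 0}e_t^{2k}$, this sixth-order truncation of the stated 1PN-accurate expression reads
\begin{align}
k^{(6)}_{(1)} &= 3\,x \left( 1 + e_t^2 + e_t^4 + e_t^6 \right)\,.
\end{align}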
It is not difficult to see that the associated SPA based
Fourier phase takes the following form:
\begin{align}
\Psi_j (F) &=\,\lambda\left[t\left(f/j\right)\right]-2\,\pi\,f\,t\left(f/j\right)
\,,
\end{align}
where
\begin{align}
\lambda\left[t\left(f/j\right)\right] &= \,j\,\phi_c+j\,\int^{f/j}\frac{\dot{\lambda}'}{\dot{F}'}dF' \,\\
t\left(f/j\right) &= t_c+\int^{f/j}\frac{dF'}{\dot{F}'}\,.
\end{align}
It turns out that $\dot{\lambda}' \equiv \omega$ by construction.
The use of $\omega$ in the above Fourier phase expression essentially ensures that the suggestion of Ref.~\cite{YABW} leads
to what is detailed in Ref.~\cite{THG}.
Note that Ref.~\cite{THG} provided $\tilde{h}(f)$ in terms of an infinite set of harmonics with quadrupolar-order amplitudes
and 2PN-accurate Fourier phase.
We observe that Ref.~\cite{YABW} commented on the absence of side bands in their prescription, in comparison
with what was reported in Refs.~\cite{Moreno94,Moreno95}, and suggested future investigations to clarify the issue.
In contrast, the present investigation
employs Eqs.~(\ref{ht_1PN}), which explicitly incorporate the effect of periastron advance
in both the amplitude and the phase of the GW polarization states, as detailed in Ref.~\cite{YS}.
The use of such an expression ensures that our analytic Fourier-domain expression does indeed contain periastron-advance-induced
frequency
side bands.
Additionally, Refs.~\cite{GKRF18,MKFV} employed
the dominant-order periastron-advance-induced decomposition
of Fourier phases, associated with the quadrupolar-order
gravitational waveform,
in parameter estimation studies relevant
to LISA and aLIGO.
A close comparison of Eqs.~(B10) and (B11) of Ref.~\cite{MKFV} and Eqs.~(35) and (36) of Ref.~\cite{GKRF18} with our Eq.~(3.10) reveals
essentially identical expressions for the Fourier phases.
These considerations allow us to state that our expression for $\tilde{h}(f)$, given by Eqs.~(\ref{hf_1PN}), (\ref{xi_1PN}), (\ref{appendixet}) and (\ref{appendixpsi}), provides analytic
PN-accurate Fourier-domain templates
for compact binaries inspiraling along PN-accurate precessing eccentric orbits.
We are now in a position to explore basic GW data analysis implications of our inspiral templates.
\subsection{Preliminary GW data analysis implications}
\label{sec:level2A}
We employ the familiar match computations to probe basic GW data analysis implications of our PN-accurate inspiral templates.
Following Ref.~\cite{DIS98}, the match ${\cal M}(h_s,h_t)$ between
members of two waveform classes, namely signal $h_s$ and template $h_t$,
is computed by maximizing a certain overlap integral
$\mathcal{O}(h_s, h_t) $ with respect to the kinematic variables of the template waveform.
In other words,
\begin{align}
\label{Eq:match}
{\cal M}(h_s, h_t) &= \max_{t_0, \phi_0}\, \mathcal{O}(h_s, h_t)\,,
\end{align}
where $t_0$ and $\phi_0$ are the detector
arrival time and the associated arrival phase of our template.
The overlap integral involves the interferometer-specific normalized inner product between members of
$h_s$ and $h_t$ families; it reads
\begin{align}
\label{Eq:innerproduct}
\langle h_s | h_t \rangle &= 4\, {\rm Re}\, \int_{f_{\rm low}}^{f_{\rm high}} \,
\frac{\tilde h_s^*(f)\, \tilde h_t(f)}{S_{\rm h}(f)} df \,,
\end{align}
where $\tilde h_s(f)$ and $\tilde h_t(f)$ are the Fourier transforms of the $h_s(t)$ and
$h_t(t)$ inspiral waveforms. Further, $S_{\rm h}(f)$ denotes
the one-sided power spectral density of the detector noise. In the following, we employ the
zero-detuned, high power (ZDHP) noise configuration of Advanced LIGO at design sensitivity \cite{LIGO_2010}.
In our ${\cal M}$ estimates, we let $ f_{\rm low}$ be $20\,$Hz, corresponding to the lower cut-off frequency of Advanced LIGO.
The upper frequency limit $ f_{\rm high}$ is chosen to be the usual $f_{\rm LSO}=c^3/(G\, m\, \pi\, 6^{3/2})$ of the last stable circular orbit.
We have verified that orbital eccentricities of compact binaries reduce to well below $10^{-2}$ at ${f_{\rm high}} = f_{\rm LSO}$, thereby justifying the use of the last stable circular orbit frequency for the upper frequency limit.
We require additional steps to operationalize our inspiral templates
while performing the ${\cal M}$ computations. Clearly, these waveform families should only be implemented
within the physically allowed frequency intervals. This is to ensure that the many higher harmonics present in these
waveform families do not cross the above listed upper frequency limit.
Influenced by Ref.~\cite{YABW},
we invoke the \textit{Unit Step function} ($\Theta$)
to operationalize our inspiral templates.
This step function allows us to terminate the waveform appropriately,
as $\Theta(y)=1$ for $y \geq 0$ and {\it zero} otherwise.
The structure of our quadrupolar-amplitude inspiral family, given by Eq.~(\ref{hf_newt}),
compels us to invoke $\Theta$ functions such that
\begin{widetext}
\begin{align} \label{eq:theta_hf}
\tilde{h}(f) = & \,\ {\bigg(\frac{5 \pi \eta }{384}\bigg)}^{1/2}\frac{G^2 m^2}{c^5 D_L}\bigg(\frac{G m \pi f}{c^3}\bigg)^{-7/6} \bigg \{ \sum _{j=1}^4 \xi _{j,0}^{(0)} \bigg(\frac{j}{2}\bigg)^{2/3} e^{-i \left(\frac{\pi }{4}+\Psi _j^0 \right)} \times \Theta\bigg[\bigg(j-j\,k^{(6)}_{(3)}\bigg)f_{LSO}-2f\bigg] \nonumber \\ & +\sum _{j=1}^2 \xi _{j,+2}^{(0)} \bigg(\frac{j}{2}\bigg)^{2/3} e^{-i \left(\frac{\pi }{4}+\Psi _j^{+2}\right)} \times \Theta\bigg[\bigg(j-(j+2)\,k^{(6)}_{(3)}\bigg)f_{LSO}-2f\bigg]+\sum _{j=1}^6 \xi _{j,-2}^{(0)} \bigg(\frac{j}{2}\bigg)^{2/3} e^{-i \left(\frac{\pi }{4}+\Psi _j^{-2}\right)} \nonumber \\ & \times \Theta\bigg[\bigg(j-(j-2)\,k^{(6)}_{(3)}\bigg)f_{LSO}-2f\bigg] \bigg\}\,.
\end{align}
\end{widetext}
Note that we have appropriately shifted the upper frequency limits to ensure that higher {\it harmonics} are suitably terminated.
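To make the action of these $\Theta$ terminations explicit, the short sketch below masks a single harmonic of a discretely sampled frequency series; the numerical values used for $k^{(6)}_{(3)}$ and the LSO frequency are illustrative placeholders, not our 3PN-accurate expressions.
\begin{verbatim}
# A minimal sketch of the Theta-function termination in the equation above:
# a (j, n) harmonic is zeroed wherever 2f exceeds (j - (j + n) k) f_LSO.
# The values of k and f_LSO below are illustrative placeholders.
import numpy as np

f = np.arange(20.0, 2000.0, 0.125)   # Fourier frequencies in Hz
f_lso, k = 1570.0, 0.02              # fiducial LSO frequency and k value

def terminated(amp, j, n):
    """Apply Theta[(j - (j + n) k) f_LSO - 2 f] to one harmonic."""
    cutoff = 0.5 * (j - (j + n) * k) * f_lso
    return np.where(f <= cutoff, amp, 0.0)

h = terminated(np.ones_like(f), j=2, n=-2)   # the (j = 2, n = -2) harmonic
print("last surviving frequency:", f[h > 0][-1], "Hz")   # -> 1570.0 Hz
\end{verbatim}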
While implementing our $\tilde{h}(f)$, we encountered violations of the {stationary phase condition}, namely Eq.~(\ref{SPA_1}), at a few Fourier frequencies corresponding to low harmonic indices ($j \sim 1,2$).
We infer that the periastron-advance-induced shift of these harmonics
can lead to negative GW frequencies, and we have therefore discarded such
Fourier components. Interestingly, Ref.~\cite{MG07} showed that these
harmonics provide negligible contributions to the GW power spectrum,
which justifies our neglect of such Fourier components in the implementation of our waveform families.
The above steps ensure smoothly varying templates, which we use in the following match computations.
We provide three frequency series of the same length (corresponding to the $h_s$ and $h_t$ inspiral families and the ZDHP noise power spectral density) and employ a routine from the free and open-source software package
{\ttfamily PyCBC} \cite{alex_nitz_2019_2556644} to compute the various ${\cal M}$ estimates.
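A minimal, hedged sketch of this pipeline is given below. Since the eccentric $\tilde{h}(f)$ families of this paper are not shipped with {\ttfamily PyCBC}, the stock {\ttfamily TaylorF2} approximant is used merely as a stand-in signal/template pair; the PSD, cutoffs, and the maximization over $t_0$ and $\phi_0$ mirror Eqs.~(\ref{Eq:match}) and (\ref{Eq:innerproduct}), and the masses and frequency step are illustrative choices.
\begin{verbatim}
# A minimal sketch of the match computation with PyCBC; TaylorF2 is only a
# stand-in for the eccentric h(f) families of this paper.
from pycbc.waveform import get_fd_waveform
from pycbc.psd import aLIGOZeroDetHighPower
from pycbc.filter import match

df, f_low = 0.125, 20.0                  # frequency step and lower cutoff
m1 = m2 = 1.4                            # fiducial NS-NS system (Msun)
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
f_lso = c**3 / (G * (m1 + m2) * Msun * 3.14159265 * 6.0**1.5)

hs, _ = get_fd_waveform(approximant="TaylorF2", mass1=m1, mass2=m2,
                        delta_f=df, f_lower=f_low)
ht, _ = get_fd_waveform(approximant="TaylorF2", mass1=m1, mass2=1.01 * m2,
                        delta_f=df, f_lower=f_low)
flen = max(len(hs), len(ht))
hs.resize(flen)
ht.resize(flen)

psd = aLIGOZeroDetHighPower(flen, df, f_low)   # ZDHP design sensitivity
M, _ = match(hs, ht, psd=psd, low_frequency_cutoff=f_low,
             high_frequency_cutoff=f_lso)
print(f"match = {M:.4f}")
\end{verbatim}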
We qualify the implications of our match estimates on GW data analysis by considering the threshold $\mathcal{M}(h_s,h_t) \geq 0.97$, denoted in the presentation of results in Figs.~\ref{fig:e02_e06},~\ref{fig:aop_waop} and~\ref{fig:Newt_1PN} by solid black lines. This limit corresponds to a loss of less than $10\%$ of all signals in the matched filter searches.
In regions of parameter space where the computed matches are high, i.e., $\mathcal{M} \geq 0.97$, waveform models are generally considered both {\it effectual} templates for the detection of fiducial GW signals and reasonably {\it faithful} in the estimation of GW source parameters \cite{DIS98}.
However, even if $\mathcal{M}$ is larger than $0.97$, certain errors in the model waveform (due to unmodeled effects of, e.g., eccentricity) may become {\it distinguishable} from noise at high signal-to-noise ratio (SNR) and can affect the accuracy of source parameter estimation. Negligible systematic errors in parameter estimation -- despite differences between the true signal waveform and the template model -- can be guaranteed only if $\langle h_s - h_t | h_s - h_t \rangle < 1$, the so-called {\it indistinguishability criterion} \cite{creightonanderson}. In other words, such systematic errors in the estimated source parameters may become significant when the mismatch $1 - \mathcal{M}_c \geq 1/{\rm SNR}^2$, and they clearly depend on the amplitude of the signal. In the following analysis, we let the signal-to-noise ratio of our fiducial GW signals be SNR$\,= 30$ (corresponding to the SNR of the binary neutron star inspiral GW170817) and probe the distinguishability of certain effects in our model waveforms for inspiraling eccentric binaries. In the inset plots of Figs.~\ref{fig:e02_e06} and~\ref{fig:aop_waop}, we zoom into those regions of parameter space where we can expect waveform uncertainties to become indistinguishable from noise for SNR$\,= 30$; the corresponding distinguishable limit $\mathcal{M}_c$ is represented by the dashed black lines.
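For reference, the dashed distinguishability thresholds in the insets follow directly from this criterion; a one-line check in plain Python:
\begin{verbatim}
# Distinguishable limit quoted above: mismatch 1 - M_c = 1/SNR^2.
snr = 30.0
print(f"M_c = {1.0 - 1.0 / snr**2:.5f}")   # M_c ~ 0.99889 for SNR = 30
\end{verbatim}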
We first probe the importance of higher-order eccentricity
corrections in the GW phasing. For this purpose, we let the signal family $h_s$ be our quadrupolar-order $\tilde{h}(f)$, with a 3PN-accurate Fourier phase that
includes next-to-next-to leading order, ${\cal O}(e_0^6)$ eccentricity corrections at each PN order. The template family is given by a quadrupolar-order $\tilde{h}(f)$ in the low-eccentricity limit,
incorporating only the leading-order, ${\cal O}(e_0^2)$ eccentricity contributions
in the 3PN-accurate Fourier phase.
We consider the traditional non-spinning compact binary sources relevant for Advanced LIGO: namely, binary neutron stars (NS-NS), NS-BH systems and binary black holes (BH-BH), with NS and BH components of $1.4\,M_{\odot}$ and $10\,M_{\odot}$, respectively.
For each of these three configurations, we compute the match between signal and template waveforms for different values of the initial orbital eccentricity $e_0$ between $0$ and $0.4$ (defined at the cut-off frequency $20\,$Hz).
Fig.~\ref{fig:e02_e06} suggests that the importance of higher-order eccentricity corrections for GW data analysis is strongly dependent on the total mass of an eccentric compact binary source.
Given the same $e_0$ but for configurations with increasing total mass, we find that templates restricted to leading-order eccentricity corrections become increasingly faithful representations of those inspiral waveforms that include higher-order eccentricity effects at each PN order.
This is expected, as compact binaries with higher total mass provide a smaller number of inspiral GW cycles in the frequency
window of Advanced LIGO. Therefore, these systems require larger initial eccentricities to bring on a substantial de-phasing and subsequent mismatch between our inspiral signal and template families.
Fig.~\ref{fig:e02_e06} indicates that a waveform model restricted to only leading-order eccentricity corrections would be an effectual template family for the detection of GWs from even moderately eccentric inspirals (with $e_0 \leq 0.15$ and $ \leq 0.3$ for our traditional NS-NS and BH-BH binaries, respectively).
However, the inset of Fig.~\ref{fig:e02_e06} suggests that waveform effects of higher-order eccentricity corrections become distinguishable from detector noise at significantly lower initial eccentricities ($e_0 \geq 0.07$ and $\geq 0.17$ for GWs from NS-NS and BH-BH systems with SNR$\,= 30$). In this region of parameter space, we should expect systematic errors in source parameter estimation with inspiral templates that are accurate only to leading order in eccentricity $e_0$. The inclusion of higher-order eccentricity corrections in waveform modeling is therefore desirable for an accurate follow-up of eccentric GW signals.
\begin{figure*}[htp]
\begin{center}
\includegraphics[width=0.5\textwidth, angle=0]{eccentricityorders.png}\\[0.1cm]
\end{center}
\caption{\label{fig:e02_e06}
Matches between eccentric waveform models with different orders of eccentricity corrections. We are comparing waveforms that take only leading-order $\mathcal{O}(e_0^2)$ eccentricity corrections into account to those that include eccentricity corrections up to next-to-next-to leading order $\mathcal{O}(e_0^6)$. We consider three configurations of a NS and a BH with masses of $1.4 M_\odot$ and $10 M_\odot$, respectively: i.e., NS-NS (blue curve), NS-BH (orange curve) and BH-BH (pink curve) systems. The initial orbital eccentricity $e_0$ refers to the eccentricity of the binary system at 20 Hz. Given the same $e_0$, the effect of higher-order eccentricity corrections on the agreement between signal and template is strongly dependent on the total mass of the compact binary source. The solid black line denotes the threshold $\mathcal{M}=0.97$, associated with the effectualness of a model for GW detection and its faithfulness for source parameter estimation. The inset plot zooms into the region of parameter space where we can expect the effect of higher-order eccentricity corrections to become distinguishable from noise for SNR$\,= 30$, leading to systematic errors in parameter estimation; the dashed black line represents the indistinguishability criterion.}
\end{figure*}
We move on to probe data analysis implications of including the effect of periastron advance in our eccentric inspiral waveforms $\tilde{h}(f)$.
In our match calculation $\mathcal{M}(h_s, h_t)$, the signal waveforms employ our quadrupolar-order $\tilde{h}(f)$ given by Eq.~(\ref{hf_newt}), including both $k$ and $e_t$ effects to the sixth order in $e_0$ at each PN order. We build a template family $h_t$ that neglects effects of periastron advance, by extending to 3PN order previously developed eccentric inspiral waveforms (provided with 2PN-accurate Fourier phase in Ref.~\cite{THG}).
In other words, we construct quadrupolar templates $\tilde{h}_t(f)$ with the help of Eq.~(\ref{5}) and
the 3PN extension of our Newtonian Eq.~(\ref{8}) for $\Psi_j$
while incorporating all
$\mathcal{O}(e_0^6)$ corrections at each PN order.
Additionally, we evaluate the
Fourier phase at the unperturbed stationary point $F=f/j$ \cite{YABW}.
It is important to note that such a template waveform family ignores the effect of periastron advance in its Fourier phase evolution.
We consider the same NS-NS, NS-BH and BH-BH systems as before and compute the match between signal and template waveforms for discrete values of initial orbital eccentricity at $20\,$Hz, $e_0 \in [0,0.4]$.
From our results, presented in Fig.~\ref{fig:aop_waop}, we learn that the significance of periastron advance effects for GW data analysis is rather independent of the total mass of the source, with similar match estimates for all three traditional compact binaries under consideration.
Periastron advance starts to influence the effectualness of GW templates for detection only for systems that have eccentricities $e_0 > 0.25$ at $20\,$Hz. This agrees with our observation that $k$-induced modulations in the inspiral waveforms presented in Fig.~5 and 6 of Ref.~\cite{DGI} become clearly visible only for moderate values of initial orbital eccentricity.
However, we can expect systematic biases in the source parameter estimation for much smaller values of orbital eccentricity.
The inset of Fig.~\ref{fig:aop_waop} suggests that periastron advance effects in an eccentric GW signal with SNR$\,=30$ would already become distinguishable from noise for eccentricities $e_0 > 0.03$ at $20\,$Hz, leading to systematic errors in the recovered source parameters when waveform models neglect periastron advance.
\begin{figure*}[htp]
\begin{center}
\includegraphics[width=0.5\textwidth, angle=0]{periastronadvance.png}\\[0.1cm]
\end{center}
\caption{\label{fig:aop_waop}
Matches between eccentric waveform models that include or neglect effects of periastron advance. We consider the same three configurations of binaries with NS and BH components as in Fig.~\ref{fig:e02_e06}: i.e., NS-NS (blue curve), NS-BH (orange curve) and BH-BH (pink curve) systems. The initial orbital eccentricity $e_0$ is again defined at the lower cut-off frequency $20\,$Hz. We infer that the significance of periastron advance effects for GW data analysis is rather independent of the total mass of the source. We interpret our results by considering the threshold $\mathcal{M}=0.97$ (represented by the solid black line) below which a waveform model should be considered ineffectual for detection and unfaithful for parameter estimation. In the inset plots, we highlight the parameter space of small eccentricities to probe the importance of systematic errors in parameter estimation due to waveform uncertainties. The dashed black line represents the distinguishable limit for a fiducial GW signal with SNR$\,=30$.
}
\end{figure*}
Lastly, we explore the relevance of PN-accurate amplitude corrections while constructing realistic analytic Fourier-domain waveforms for eccentric inspirals. For these $\mathcal{M}$ estimates, we invoke as the expected GW signal our 1PN-accurate amplitude corrected $\tilde{h}(f)$, given by Eq.~(\ref{hf_1PN}),
including the effects of 3PN-accurate periastron advance, frequency and eccentricity evolution accurate to sixth order
in orbital eccentricity. For the template family, we are utilizing a quadrupolar-order $\tilde{h}(f)$, given by Eq.~(\ref{hf_newt}), that includes the same order effects of 3PN-accurate periastron advance and 3PN-accurate frequency and eccentricity evolution as above.
We consider five compact binary configurations with a fixed total mass $m = m_1 + m_2 = 20 M_\odot$ and varying mass ratios $q = m_1/m_2 \in \{1,3,5,7,9\}$. For each of these configurations, we pursue match computations for different choices of initial orbital eccentricity $e_0 \in [0,0.4]$ at $20\,$Hz, resulting in Fig.~\ref{fig:Newt_1PN}. We observe that amplitude corrections are rather unimportant while constructing template waveforms for equal-mass binaries in eccentric orbits. This is expected, as the dominant amplitude corrections -- appearing at 0.5PN order in Eq.~(\ref{hf_1PN}) -- are proportional to $\sqrt{1-4\eta}$ and therefore vanish for equal-mass binaries. Our plots suggest that the effect of amplitude corrections on the faithfulness of eccentric inspiral waveforms crucially depends on the mass ratio of a binary system, with $\mathcal{M}$ rapidly dropping below the critical value of 0.97 for $q \geq 5$, even for systems with negligible initial eccentricities. This is a familiar result from the modeling of compact binary inspiral along circular orbits and points to the relevance of higher modes for GWs from binaries with asymmetric masses \cite{CVBASG}. In other words, our plots in Fig.~\ref{fig:Newt_1PN} essentially confirm previous literature that compared restricted and amplitude-corrected $\tilde{h}(f)$ for quasi-circular inspiral. Interestingly, we find that the $q$-dependent effect of amplitude corrections on the faithfulness of eccentric inspiral waveforms is largely unaffected by the value of initial eccentricity $e_0$.
\begin{figure*}[htp]
\begin{center}
\includegraphics[width=0.5\textwidth, angle=0]{ampcorrections.png}\\[0.1cm]
\end{center}
\caption{\label{fig:Newt_1PN}
Matches between eccentric waveform models with Newtonian and 1PN-accurate amplitudes. We consider compact binary systems with a total mass of $m=m_1+m_2$, with different choices for the mass ratio $q=m_1/m_2$. As expected, the effect of amplitude corrections on waveform faithfulness is largely independent of the orbital eccentricity $e_0$ at 20 Hz. Waveforms with Newtonian amplitudes are faithful representations of amplitude-corrected waveforms only if $q \leq 3$ (blue and orange curves); for higher mass ratios $q \geq 5$ (pink, green and purple curves) the match between waveforms with Newtonian and 1PN-accurate amplitudes falls below the threshold of $\mathcal{M}=0.97$ (denoted by the black line) even in the circular limit.
}
\end{figure*}
\section{Conclusions} \label{conclusion}
We have provided fully analytic PN-accurate Fourier domain gravitational waveforms
for compact binaries inspiraling along precessing moderately eccentric orbits.
Our inspiral approximant contains 1PN-accurate amplitude corrections and its Fourier phase
incorporates the effects of 3PN-accurate periastron advance and GW emission.
Additionally, the eccentricity effects are
accurate to sixth order in $e_0$ at each PN order.
We infer from our analytic waveform expression that the orbital-eccentricity-induced
higher harmonics are no longer integer multiples of the orbital frequency, due to
the influence of periastron advance. This substantiates and extends what is detailed in Ref.~\cite{YS}
for compact binaries inspiraling along PN-accurate precessing eccentric orbits.
Preliminary GW data analysis implications of our waveforms are probed with the help of the usual match computations.
In what follows, we provide a step-by-step summary of our effort.
\begin{enumerate}
\item
We start from our Eqs.~(\ref{hx0}) and (\ref{hp0}), which provide the quadrupolar-order GW polarization states of compact
binaries in PN-accurate eccentric orbits as a sum over various harmonics.
\item
With the above inputs, we compute the time-domain GW detector response function and express it
as a summation of several cosine functions whose arguments are sums of integer multiples of
$\phi $ and $\phi'$, associated with the orbital and periastron motions. The amplitudes of these
functions are expressed in terms of $\omega$, $e_t$ and the angles that specify
the antenna patterns $F_{\times}, F_+ $ and the direction of the orbital
angular momentum vector. The quadrupolar version of $h(t)$
that explicitly incorporates the next-to-leading-order $e_t$ corrections
is given by Eq.~(\ref{Eq_h_t_0f}) and associated expressions like
Eqs.~(\ref{gamma_0}) and (\ref{sigma_0}).
Its 1PN extension is symbolically provided by Eq.~(\ref{ht_1PN}),
and the accompanying \texttt{Mathematica} file provides
the explicit expressions for the various PN coefficients
while incorporating ${\cal O}(e_t^6)$ corrections.
\item
We also provide a prescription to obtain the temporally evolving $h(t)$ for
compact binaries inspiraling due to 3PN-accurate GW emission along precessing
3PN-accurate orbits of moderate eccentricity.
This involves imposing temporal evolution on $\omega, e_t, \phi'$ and $\phi$
with the help of PN-accurate differential equations.
The relative 3PN-accurate equations for $\omega$ and $e_t$ arise from the emission of GWs,
as evident from our Eqs.~(\ref{Tevolve_3}) and (\ref{Tevolve_4}).
The conservative 3PN-accurate differential equation for
$\phi'$ arises essentially from periastron advance, as evident from Eq.~(\ref{Tevolve_2}).
The differential equation for $\phi$ is kinematical in nature, as $d\phi/dt \equiv \omega$.
\item
The structure of the time-domain response function allows us to invoke the method of stationary phase
approximation to compute its Fourier transform.
The crucial Fourier phases and the associated stationary points may be concisely
written as $\Psi^{\pm n}(t) \coloneqq -2\pi f t\,+\,j\phi-(j \pm n)\phi'$,
where $n$ takes values $0,1,2,3,4$.
The nine stationary points, associated with the 1PN-accurate amplitude-corrected $h(t)$,
essentially provide relations between the orbital and Fourier frequencies,
$ F ( t^{\pm n}) = {f}/ (j-(j \pm n)\, k')$, where $k'$ is related to the rate of periastron
advance per orbit.
The explicit expressions for the resulting 3PN-accurate Fourier phases with leading-order initial eccentricity
corrections are provided by Eq.~(\ref{psi_e02}).
Gathering the various results, we obtain Eqs.~(\ref{hf_newt}) and (\ref{xi_newt}),
which provide the quadrupolar-order $\tilde{h}(f)$ while incorporating
fourth-order orbital eccentricity contributions along with the effects of
3PN-accurate frequency and eccentricity evolution and periastron advance.
Additionally, we have extended these results by including 1PN-accurate amplitude
corrections and sixth-order eccentricity contributions.
\item
A crucial ingredient for obtaining a fully analytic $\tilde{h}(f)$ is
a derivation, detailed in Sec.~\ref{sec:PostCirc}, that provides a PN-accurate analytic expression for
$e_t$ in terms of $e_0, \omega, \omega_0$. We have obtained the 3PN-accurate expression for $e_t(e_0,\omega,\omega_0)$ by extending the post-circular scheme of Refs.~\cite{YABW,THG}.
\end{enumerate}
A number of extensions are possible.
Influenced by Refs.~\cite{KG05,Kidder95}, we are incorporating the effects of leading order
aligned spin-orbit and spin-spin interactions into these waveforms.
It will be interesting to explore data analysis implications of our present waveforms.
A possible avenue is to explore the astrophysical implications of using PN-accurate periastron
advance contributions that depend both on $m$ and $\eta$, influenced by
Refs.~\cite{MKFV,NSeto}.
There are on-going efforts to construct analytic IMR templates to model eccentric
compact binary coalescences \cite{Huerta,ENIGMA}. The present waveform family will be relevant for constructing IMR templates for moderately eccentric compact binary mergers, which can be used to extract orbital eccentricity and periastron advance, as done in Ref.~\cite{Abdul10}. Efforts are also on-going to obtain various constructs, using elements of our post-circular Fourier-domain approximant,
that should allow comparisons with the new PN-accurate frequency-domain waveform family developed in Refs.~\cite{MRLY18,MY19} for moderate eccentricities.
\section{Acknowledgements}
We thank Yannick Boetzel for helpful discussions and suggestions, and for providing us
with the lengthy eccentricity enhancement functions.
We are grateful to Marc Favata and Blake Moore for their helpful comments.
M.~H. acknowledges support from Swiss National Science Foundation (SNSF) grant IZCOZ0\_177057.
We have used software packages from {\tt PyCBC} \cite{alex_nitz_2019_2556644} and {\tt Matplotlib} \cite{matplotlib} to compute and plot match estimates.
\onecolumngrid
\pagebreak
\twocolumngrid
Our knowledge of the formation and evolution of low-mass stars has made significant progress
over the past two decades (see, e.g., Reipurth et al. 2007 for several reviews). It is widely accepted
that low-mass stars form from the gravitational collapse of dense molecular cores (e.g., Shu et
al. 1987). Initially, these cores, generally referred to as prestellar cores, are cold dense condensations
with infall motions, where no central stellar object yet exists (Andr\'{e} et al. 2009). Resulting from the
collapse of prestellar cores, Class\,0 objects are the youngest accreting protostars observed right
after point mass formation, when most of the mass of the system is still in the surrounding dense
core/envelope (Andr\'{e} et al. 2000). Representing the earliest phase of low-mass star formation,
both prestellar cores and Class\,0 protostars have been extensively observed and studied using
(sub)\,millimeter and infrared telescopes (see, e.g., reviews by Di~Francesco et al. 2007;
Ward-Thompson et al. 2007). However, despite all of the observational advances in the past two
decades, we still do not have a good understanding of the evolutionary process that turns a
prestellar core into a protostar. This is illustrated by the fact that several ``prestellar" cores, like L1014
and L1521F, were found to harbor very low-luminosity protostars in sensitive infrared observations (see
Young et al. 2004; Bourke et al. 2006). A better understanding of the star formation process can only
be achieved by studying the detailed properties of cores and their surroundings at different evolutionary stages.
In this paper, we present high angular resolution observations of CB\,17, using the Submillimeter
Array\footnote{The Submillimeter Array is a joint project between the Smithsonian Astrophysical
Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the
Smithsonian Institution and the Academia Sinica.} (SMA; Ho et al. 2004) and the {\it Spitzer Space
Telescope} ($Spitzer$). CB\,17 (also known as L1389) is a small and slightly cometary-shaped dark
cloud, located near Perseus and associated with the Lindblad ring (Lindblad et al. 1973). It was
classified as a Bok globule by Clemens \& Barvainis (1988). The distance of CB\,17 is somewhat
uncertain, ranging from $\sim$\,210\,pc (van Leeuwen 2007) to $\sim$\,300\,pc (Dame et al. 1987).
Following Launhardt et al. (2010), we adopt a distance of 250\,$\pm$\,50\,pc in this work.
The roundish core of the CB\,17 globule is associated with a faint and cold IRAS point
source (IRAS\,04005+5647, detected only at 100\,$\mu$m and 60\,$\mu$m). CB\,17
has been studied by various groups using different molecular line transitions (e.g.,
Lemme et al. 1996; Kane \& Clemens 1997; Launhardt et al. 1998; Benson et al. 1998;
Turner et al. 1997, 2000; Caselli et al. 2002a). The core was found to have a mean kinetic
gas temperature of $T_{\rm kin}\approx 10$\,K, and the observed non-thermal widths of
optically thin lines are in the range 0.25--0.45\,km\,s$^{-1}$ (e.g., N$_2$H$^+$, Benson
et al. 1998; Caselli et al. 2002a). Numerical simulations based on multi-line observations
suggest that the kinematical structure of CB\,17 can be explained by a prestellar core
with combined subsonic infall, small rotational, and low-level internal turbulent motions
(Pavlyuchenkov et al. 2006).
Based on the infrared and single-dish (sub-)\,millimeter continuum observations, Launhardt et
al. (2010) found two sources within the CB\,17 globule. One source, referred to as CB\,17 IRS,
dominates the infrared emission in the $Spitzer$ images but has faint millimeter continuum emission.
The other source, referred to as CB\,17 SMM, has no compact infrared emission in the $Spitzer$
images but dominates the millimeter continuum emission. The results from fitting the spectral
energy distributions (SEDs) suggested that CB\,17 IRS may be a Class\,0/I transition object while
CB\,17 SMM may be a prestellar core. In addition, the IRAM-30m 1.3\,mm continuum images
suggest that there may be two sub-cores (SMM\,1 and SMM\,2) in CB\,17 SMM with a separation
of 14$''$, although this result was not confirmed by the less sensitive SCUBA 850\,$\mu$m images
(see Launhardt et al. 2010).
\section{OBSERVATIONS AND DATA REDUCTION}
\subsection{SMA Observations}
The SMA 230\,GHz observations of CB\,17 were carried out in the compact configuration
on 2008 November 30 (eight antennas, $\sim$\,6.5 hours integration time) and 2009
December 25 (seven antennas, $\sim$\,3.3 hours integration time). Zenith opacities during
the observations were typically in the range of 0.10$-$0.15. In the 2008 observations, the
digital correlator was set up to cover the frequency ranges 219.5$-$221.4 GHz and
229.5$-$231.4 GHz in the lower and upper sidebands (LSB and USB), respectively. This
setup includes the three isotopic CO lines of $^{12}$CO\,(2--1) (230.538\,GHz),
$^{13}$CO\,(2--1) (220.399\,GHz), and C$^{18}$O\,(2--1) (219.560\,GHz), as well as
N$_2$D$^+$\,(3--2) (231.322\,GHz). The channel widths were set up to be 0.406, 0.406,
0.203, and 0.203\,MHz for the four lines, corresponding to the velocity resolutions of
$\sim$\,0.50, $\sim$\,0.50, $\sim$\,0.25, and $\sim$\,0.25\,km\,s$^{-1}$, respectively.
The 1.3\,mm continuum emission was recorded with a total bandwidth of $\sim$\,3.5\,GHz,
combining the line-free portions of the two sidebands ($\sim$\,1.7\,GHz USB and $\sim$\,1.8\,GHz LSB).
In the 2009 observations, the SMA bandwidth was upgraded to 4\,GHz, and the correlator
was set up to cover the approximate frequency range of 216.8$-$220.8\,GHz in the LSB
and of 228.8$-$232.8\,GHz in the USB. The total continuum bandwidth is $\sim$\,7.5\,GHz
($\sim$\,3.8\,GHz LSB and $\sim$\,3.7\,GHz USB). System temperatures ranged from 100
to 200\,K (depending on elevation) in the 2008 observations (typical value $\sim$\,140\,K)
and from 100 to 150\,K in the 2009 observations (typical value $\sim$\,120\,K). The SMA
primary beam is about 55$''$ at 230\,GHz.
The visibility data were calibrated with the MIR package (Qi 2005). In the 2008 observations,
Saturn and quasar 3c454.3 were used for bandpass calibration, and quasars 0359+509 and
3c111 for gain calibration. In the 2009 observations, quasar 3c273 was used for bandpass
calibration, and quasars 3c84 and 0359+509 for gain calibration. Uranus and 3c273 were used
for absolute flux calibration in the 2008 and 2009 observations, respectively. We estimate a
flux accuracy of $\sim$\,20\%, by comparing the final quasar fluxes with the SMA calibration
database. The calibrated visibility data were further imaged using the Miriad toolbox (Sault et
al. 1995). We combined the 2008 and 2009 dust continuum data together to improve the
imaging sensitivity. However, the line results shown in this paper were taken only from the
2008 observations; the reason we did not include the 2009 line data is that the velocity
resolution in the 2009 observations was set uniformly to about 1.0\,km\,s$^{-1}$, i.e.,
two times (in $^{12}$CO and $^{13}$CO) or four times (in C$^{18}$O and N$_2$D$^+$) coarser
than that of the 2008 line data. It must be noted that the line results taken in the 2008 and 2009
observations are consistent with each other. Table~1 lists the SMA synthesized beam size and
theoretical noise levels at 1.3\,mm continuum and in the CO\,(2--1) line, with robust {\it uv} weighting 1.0.
\subsection{$Spitzer$ Observations}
Infrared data of CB\,17 were obtained from the $Spitzer$ Science Center (SSC). CB\,17 was
observed on 2004 February 12 with the Infrared Array Camera (IRAC; AOR key 4912384;
PI: C. Lawrence) and 2004 October 16 with the Multiband Imaging Photometer for $Spitzer$
(MIPS; AOR key 12025088; PI: G. Rieke). The IRAC observations were taken in the high
dynamic range mode, with an effective long-frame exposure time of $\sim$\,130\,s. The MIPS
observations covered a field of $\sim$\,15$'$\,$\times$\,55$'$, with an exposure time of
$\sim$\,190\,s at 24\,$\mu$m, $\sim$\,80\,s at 70\,$\mu$m, and $\sim$\,20\,s at 160\,$\mu$m,
respectively. The data were processed by the $Spitzer$ Science Center using their standard
pipeline (version~S18.7 for IRAC data and version~S16.1 for MIPS data) to produce Post-Basic
Calibration Data (P-BCD) images. The $Spitzer$ spatial resolution is about 2$''$ at the IRAC
bands, and about 6$''$, 18$''$, and 40$''$ at the MIPS 24, 70, and 160\,$\mu$m bands, respectively.
The overall flux calibration, expressed in physical units of MJy\,sr$^{-1}$, is estimated by the SSC
to be accurate to within 10\%. Table~2 lists the imaging sensitivity (1\,$\sigma$) of the IRAC and
MIPS bands for CB17. For comparison, the imaging sensitivity for dark cloud L1448 (c2d data;
Evans et al. 2003; 2009) is also listed. The $Spitzer$ data of CB\,17 have been already published
by Launhardt et al. (2010).
\section{RESULTS}
\subsection{Millimeter and Infrared Continuum Emission}
Figure~1 shows the SMA 1.3\,mm dust continuum image of CB\,17, in which a centrally-peaked
continuum source is clearly detected in the northwest of the SMA field of view. This continuum
source is spatially associated with the infrared source IRS found by Launhardt et al. (2010), which
dominates the infrared emission in the $Spitzer$ images (see Figure~2) and is referred to as
CB\,17\,IRS here. Located at the center of the SMA image, another faint continuum source
is marginally detected (peak value $\sim$\,6\,$\sigma$; see Figure~1), at an angular distance of
$\sim$\,21$''$ to the southeast of CB\,17\,IRS. This faint continuum source was independently
detected in both the 2008 LSB ($\sim$\,5--6\,$\sigma$) and 2009 LSB ($\sim$\,4--5\,$\sigma$;
depending on the cleaning-box and $uv$-weighting adopted in the data reduction) observations,
but is not seen in the 2008 and 2009 USB observations.
However, it must be noted that in our SMA observations the noise levels in the USB are
higher than those in the LSB. This is evidenced by the observations of source CB\,17\,IRS, which was
detected with signal-to-noise ratios of $\sim$\,8 and $\sim$\,5 in the 2008 LSB
(1\,$\sigma$\,$\sim$\,0.74\,mJy\,beam$^{-1}$) and 2009 LSB (1\,$\sigma$\,$\sim$\,0.65\,mJy\,beam$^{-1}$)
observations, respectively, but only $\sim$\,5 and $\leq$\,3 in the 2008 USB (1\,$\sigma$\,$\sim$\,0.85\,mJy\,beam$^{-1}$)
and 2009 USB (1\,$\sigma$\,$\sim$\,0.80\,mJy\,beam$^{-1}$) observations. Therefore, the faint continuum
source found in the LSB observations is likely missed in the USB observations due to the
lower signal-to-noise ratio.
In the $Spitzer$ images shown in Figure~2, this faint dust continuum source is located at the center of
a small-scale dark shadow seen in the IRAC 8.0\,$\mu$m image (see Figure~2a), and no compact infrared
emission is detected from this source at the $Spitzer$ bands from 3.6 to 70\,$\mu$m (see Figure~2),
although the MIPS 160\,$\mu$m map (spatial resolution $\sim$\,40$''$) shows a slight shift of the peak
position from the northwestern IRS source toward this faint source. Hereafter, we refer to the central faint
continuum source as CB\,17\,MMS. We also note that CB\,17\,MMS is spatially associated with a faint infrared
source detected in the {\it Herschel} PACS 100\,$\mu$m observations\footnote{There is an offset ($\sim$\,2$''$)
between the peaks of source CB\,17\,MMS and the faint infrared source seen at PACS 100\,$\mu$m (see
Figure~2b). This offset is smaller than the SMA synthesized beam size ($\sim$\,3$''$) and the PACS angular
resolution at 100\,$\mu$m ($\sim$\,7$''$), and is therefore not significant.} (M.~Schmalzl et al., in preparation;
see Figure~2b) and a core detected in various high-density molecular line tracers in CB\,17 (e.g., N$_{2}$H$^{+}$
and HCO$^{+}$; see Figure~2c).
On the other hand, the IRAM-30m 1.3\,mm dust continuum observations show two sub-cores with $\sim$\,14$''$
separation in CB\,17, named SMM1 and SMM2 (see Launhardt et al. 2010 and also Figure~2d). Interestingly,
in the high angular resolution SMA images, no continuum source is detected within the sub-core SMM2, while
the SMA 1.3\,mm continuum source CB\,17\,MMS is close to the sub-core SMM1, but $\sim$\,7$''$ to the west
of the peak position of the SMM1 core (see Figure~2d). This offset is larger than the pointing uncertainty in the
IRAM-30m observations ($\sim$\,3$''$--5$''$). It is possible that the IRAM-30m single-dish observations detect
the large-scale extended envelope, while the SMA observations reveal the relatively compact source which is
embedded in this envelope, but not at its center.
Figure~3 shows the plots of the SMA visibility amplitudes versus {\it uv} distances for the continuum
sources CB\,17\,IRS and MMS. As shown in the plots, CB\,17\,IRS shows a roughly flat distribution
of amplitudes from long to short baselines, suggesting a compact, point-like object. For CB\,17\,MMS,
the amplitudes are also roughly flat at baselines longer than $\sim$\,15\,k$\lambda$;
at shorter baselines, the extended emission is mostly resolved out (see below), and it is unclear whether there is
a Gaussian-like distribution, which is frequently seen in interferometric observations toward
protostellar envelopes (see, e.g., Looney et al. 2003).
The SMA 1.3\,mm continuum fluxes of sources CB\,17\,IRS and MMS were derived from Gaussian fitting
of the restored images using the Miriad command {\it imfit} (see Table~3). For comparison, their fluxes
integrated above the 3\,$\sigma$ level are also listed. The flux of source CB\,17\,IRS detected with the SMA is
$\sim$\,6.3\,mJy, roughly 20\% of the flux found in the IRAM-30m image, where the flux density at its position
is about 30\,mJy. For source CB\,17\,MMS, the flux derived from the SMA images is $\sim$\,3.8\,mJy,
while its flux detected in the IRAM-30m 1.3\,mm map is $\sim$\,50\,mJy (within the central 12$''$ or 3000\,AU
of CB\,17\,MMS), indicating that more than 90\% of the flux around CB\,17\,MMS was resolved out by the SMA.
Assuming that the 1.3\,mm dust continuum emission is optically thin, the total gas mass ($M_{\rm gas}$)
is derived from the flux densities using the formula:
\begin{equation}
M_{\rm gas} = \frac{F_{\nu} D^{2}}
{\kappa_{\rm m}(\nu)\, B_{\nu} (T_{\rm d})}\
\left(\frac{M_{\rm g}}{M_{\rm d}}\right) \quad
\end{equation}
\noindent where $D$\ is the distance to the source, $T_{\rm d}$ is the dust temperature,
$\kappa_{\rm m}(\nu)$\ is the dust opacity (mass absorption coefficient per gram of dust),
and $M_{\rm g}/M_{\rm d}$\ is the gas-to-dust mass ratio. Following Ossenkopf \& Henning
(1994), we adopt $\kappa_{\rm m} = 0.5\,{\rm cm}^2\,{\rm g}^{-1}$, which is a typical value
for dense cores with an average number density of $n(\rm H) = 10^5\,$cm$^{-3}$. A standard
gas-to-dust mass ratio of 150 is used, which is a combination of the ratio of Hydrogen mass to
dust mass of 110 (Draine \& Lee 1984) and the inclusion of Helium and heavier elements which
introduces an extra factor of 1.36. A dust temperature of $\sim$\,10\,K was adopted for CB\,17\,MMS,
which is derived from a fit to the SED (see below and also Launhardt et al. 2010) and is also
similar to the mean kinetic gas temperature found in CB\,17 (see $\S$\,1). For CB\,17\,IRS, a
dust temperature of $\sim$\,20\,K is adopted (see below). The relative uncertainties of the derived
masses due to the calibration errors of the fluxes are within $\pm$20\%. The total gas masses of
CB\,17\,MMS and IRS derived from the SMA observations are $\sim$\,0.035 and $\sim$\,0.023\,$M_\odot$,
respectively.
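As a numerical cross-check of the gas-mass formula above, the sketch below evaluates $M_{\rm gas}$ for CB\,17\,MMS from the quoted numbers ($F_\nu = 3.8$\,mJy, $D = 250$\,pc, $\kappa_{\rm m} = 0.5\,{\rm cm^2\,g^{-1}}$, $T_{\rm d} = 10$\,K, gas-to-dust ratio 150); the small difference with respect to the quoted $0.035\,M_\odot$ reflects rounding of the frequency and input fluxes.
\begin{verbatim}
# A minimal cross-check of the gas-mass formula above, in cgs units.
import numpy as np

h_p, k_B, c = 6.626e-27, 1.381e-16, 2.998e10   # Planck, Boltzmann, c (cgs)
pc, Msun, Jy = 3.086e18, 1.989e33, 1.0e-23

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h_p * nu**3 / c**2 / (np.exp(h_p * nu / (k_B * T)) - 1.0)

nu = c / 0.13                       # 1.3 mm -> ~230 GHz
F_nu, D = 3.8e-3 * Jy, 250.0 * pc   # SMA flux of CB 17 MMS and distance
kappa, T_d, g2d = 0.5, 10.0, 150.0

M_gas = F_nu * D**2 / (kappa * planck(nu, T_d)) * g2d / Msun
print(f"M_gas ~ {M_gas:.3f} Msun")  # ~0.04 Msun, consistent with ~0.035
\end{verbatim}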
\subsection{CO\,(2--1) Emission}
Figure~4 shows the velocity channel maps of the SMA $^{12}$CO\,(2--1) emission. The $^{12}$CO
emission is detected from $V_{\rm LSR}$\,=\,$-$7.2\,km\,s$^{-1}$ to $-$2.2\,km\,s$^{-1}$, with the
cloud systematic velocity being $\sim$\,$-$4.7\,km\,s$^{-1}$ (Pavlyuchenkov et al. 2006). In each panel,
the two crosses indicate the SMA positions of sources CB\,17\,IRS and MMS. In the CB\,17 core, the observed
FWHM of the optically thin N$_{2}$H$^+$ line is in the range of 0.25--0.45\,km\,s$^{-1}$ (e.g., Benson et al.
1998), and we can safely assume that the (turbulent) CO cloud emission is within a velocity width of
$\sim$\,2\,km\,s$^{-1}$ ($\sim$\,5 times larger than the FWHM of N$_{2}$H$^+$). The $^{12}$CO emission
at velocities more than 1\,km\,s$^{-1}$ away from the cloud systematic velocity suggests the existence of
molecular outflows in this region (see $\S$\,4.2 for more discussions, and Arce et al. 2010 for a discussion
on differentiating between outflows and turbulence-related features). For CB\,17\,IRS, blueshifted emission
(from $V_{\rm LSR}$\,=\,$\sim$\,$-$7.2\,km\,s$^{-1}$ to $\sim$\,$-$5.7\,km\,s$^{-1}$) extends to the
southeast, while redshifted emission (from $\sim$\,$-$3.7\,km\,s$^{-1}$ to $\sim$\,$-$2.7\,km\,s$^{-1}$)
mainly extends to the northwest of the source. Near CB\,17\,MMS, blueshifted emission
(from $V_{\rm LSR}$\,=\,$\sim$\,$-$7.2\,km\,s$^{-1}$ to $\sim$\,$-$5.7\,km\,s$^{-1}$) extends to the east of
the source, while redshifted emission (from $\sim$\,$-$3.7\,km\,s$^{-1}$ to $\sim$\,$-$2.2\,km\,s$^{-1}$)
extends to both the west and east directions.
Figure~5 shows the velocity channel maps of the $^{13}$CO\,(2--1) emission of CB\,17, which is detected at
velocities between $\sim$\,$-$5.2 and $\sim$\,$-$4.2\,km\,s$^{-1}$. The $^{13}$CO\,(2--1) line emission
associated with CB\,17\,IRS shows a circular centrally-peaked condensation coincident with the dust continuum
source, while the emission near CB\,17\,MMS extends in both the west and east directions, similar to the morphology
of the $^{12}$CO\,(2--1) line emission. We also note that faint C$^{18}$O\,(2--1) emission is seen at the position of
CB\,17\,IRS (see the spectrum in Figure~6), which shows a morphology similar to the $^{13}$CO\,(2--1) emission
(see Figure~7), while no C$^{18}$O\,(2--1) emission is detected from source CB\,17\,MMS in our SMA observations.
\subsection{N$_2$D$^+$\,(3--2) Emission}
The N$_2$D$^+$\,(3--2) line emission is also detected in the CB\,17 observations. Figure~7 shows the
velocity-integrated intensity map of the N$_2$D$^+$\,(3--2) emission, plotted on the IRAC 8.0\,$\mu$m
image. The N$_2$D$^+$\,(3--2) emission shows an elongated structure, which extends roughly in the
east-west direction, and is spatially coincident with the dark shadow seen in the IRAC 8.0\,$\mu$m image.
The FWHM diameter of the N$_2$D$^+$\,(3--2) condensation is measured to be 16.6$''$\,$\times$\,6.3$''$ or
4200\,$\times$\,1600\,AU (at a distance of $\sim$\,250\,pc). Therefore, the condensation is resolved,
as each axis is larger than the synthesized beam for N$_2$D$^+$ (3.0$''$\,$\times$\,2.8$''$). The HFS
(HyperFine Structure) fitting routine in CLASS\footnote{see http://www.iram.fr/IRAMFR/GILDAS}, with
the frequencies and weights adopted from Dore et al. (2004), was used to derive LSR velocity ($V_{\rm LSR}$),
line width ($\Delta V$), optical depth ($\tau$), and excitation temperature ($T_{\rm ex}$). The optical depth
($\tau$) is found to be small in most regions (ranging from 0.1 to 10, with typical values of 1--2). Hence, the
N$_2$D$^+$ emission can be considered approximately optically thin, and the estimated excitation temperature
is about 5--7\,K.
The measured integrated intensity of the N$_2$D$^+$\,(3--2) emission is about 5\,Jy\,beam$^{-1}$\,km\,s$^{-1}$
or $\sim$\,14\,K\,km\,s$^{-1}$. With the same method described in Caselli et al. (2002b), the column density of
N$_2$D$^+$ is estimated to be $\sim$\,2--6\,$\times$\,10$^{11}$\,cm$^{-2}$.
The integrated intensity map of the C$^{18}$O\,(2--1) emission is also shown in Figure~7. Interestingly, the
comparison between the N$_2$D$^+$\,(3--2), C$^{18}$O\,(2--1), and 8.0\,$\mu$m images suggests that
N$_2$D$^+$\,(3--2) traces only cold gas around source CB\,17\,MMS, while C$^{18}$O\,(2--1) traces relatively
warm gas around infrared source CB\,17\,IRS.
Furthermore, no strong N$_2$D$^+$\,(3--2) emission is detected at the position of CB\,17\,MMS, but instead the
N$_2$D$^+$\,(3--2) emission shows a small arc-like structure surrounding the MMS source. It could be that
N$_2$D$^+$ at the CB\,17\,MMS position has been destroyed by the gradually warming gas/dust heated by the
accretion luminosity of source CB\,17\,MMS.
Figure~8 shows the velocity field of CB\,17, using the SMA N$_2$D$^+$\,(3--2) data and derived with the same
method described in Chen et al. (2007). The mean velocity map shows a continuous velocity gradient across the
N$_2$D$^+$ condensation, increasing from southeast to northwest (see Figure~8a). A least-squares fit to
the velocity field indicates a velocity gradient of 18$\pm$1\,km\,s$^{-1}$\,pc$^{-1}$, at a position angle
of $\sim$\,$-$30$^\circ$ (measured east of north in the direction of increasing velocity), across the core traced by the
N$_2$D$^+$ emission.
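For illustration, the following minimal sketch performs such a plane fit, $v_{\rm lsr} = v_0 + a\,\Delta x + b\,\Delta y$ (the approach commonly attributed to Goodman et al. 1993); the offsets and velocities below are synthetic placeholders planted with the quoted gradient, so the fit simply recovers it.
\begin{verbatim}
# A minimal sketch of the least-squares velocity-gradient fit:
# v_lsr = v0 + a*dx + b*dy, with dx, dy offsets east and north (in pc).
# The data below are synthetic placeholders, not the CB 17 measurements.
import numpy as np

rng = np.random.default_rng(1)
dx, dy = rng.uniform(-0.01, 0.01, (2, 200))          # offsets in pc
pa_true = np.deg2rad(-30.0)                          # PA east of north
v = -4.7 + 18.0 * (dx * np.sin(pa_true) + dy * np.cos(pa_true))
v += rng.normal(0.0, 0.02, v.shape)                  # measurement noise

A = np.column_stack([np.ones_like(dx), dx, dy])
(v0, a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
grad = np.hypot(a, b)                                # km/s/pc
pa = np.rad2deg(np.arctan2(a, b))                    # toward increasing v
print(f"gradient = {grad:.1f} km/s/pc at PA = {pa:.0f} deg")
\end{verbatim}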
The line widths are roughly constant within the condensation (with a typical value of $\sim$\,0.5\,km\,s$^{-1}$) and
relatively large line widths (1.0--1.2\,km\,s$^{-1}$) are mainly seen at the southeastern edge (see Figure~8b).
We note that the velocity resolution in the N$_2$D$^+$ observations is $\sim$\,0.25\,km\,s$^{-1}$. Therefore, the
typical line width of 0.5\,km\,s$^{-1}$ derived in the observations can be considered as an upper limit (given higher
velocity resolution observations, we may expect to find narrower line widths between 0.25--0.50\,km\,s$^{-1}$).
Assuming a kinetic gas temperature of 10\,K (see $\S$\,1), the thermal contribution to the N$_2$D$^+$ line width is
$\sim$\,0.13\,km\,s$^{-1}$, and the typical non-thermal contribution to the line width is then $\sim$\,0.48\,km\,s$^{-1}$,
which is about 3.7 times larger than the thermal line width. Although the origin of the non-thermal line width is still a
subject of an ongoing debate, it is widely accepted that turbulence is the main contributor (see, e.g., Goodman et al.
1998). On the other hand, the thermal FWHM line width of an ``average" particle of mass 2.33\,m$_{\rm H}$ (assuming
gas with 90\% H$_2$ and 10\% He), which represents the local sound speed, would be $\sim$\,0.44\,km\,s$^{-1}$
at 10\,K. The observed non-thermal line width in N$_2$D$^+$ is comparable with this value, which suggests that turbulence
in the condensation is approximately sonic.
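These numbers are easy to verify; the sketch below evaluates the thermal FWHM, $\Delta v_{\rm th} = \sqrt{8\ln 2\,k_{\rm B}T/(\mu m_{\rm H})}$, for N$_2$D$^+$ ($\mu = 30$) and for the mean particle ($\mu = 2.33$), and obtains the non-thermal width by quadrature subtraction (it returns 0.12, 0.48 and 0.44\,km\,s$^{-1}$, matching the quoted values to rounding).
\begin{verbatim}
# A minimal check of the line-width decomposition quoted above (SI units).
import numpy as np

k_B, m_H, T = 1.381e-23, 1.673e-27, 10.0

def fwhm_thermal(mu):
    """Thermal FWHM in km/s for a species of mean molecular weight mu."""
    return np.sqrt(8.0 * np.log(2.0) * k_B * T / (mu * m_H)) / 1.0e3

dv_obs = 0.5                             # observed N2D+ FWHM in km/s
dv_th = fwhm_thermal(30.0)               # N2D+ (mu = 30)
dv_nt = np.sqrt(dv_obs**2 - dv_th**2)    # quadrature subtraction
print(f"thermal {dv_th:.2f}, non-thermal {dv_nt:.2f} km/s")
print(f"sound-speed FWHM {fwhm_thermal(2.33):.2f} km/s")
\end{verbatim}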
\section{DISCUSSION}
\subsection{Spectral Energy Distributions}
Figure~9 shows the spectral energy distribution of CB\,17\,IRS. The submillimeter and millimeter
fluxes of CB\,17\,IRS were estimated from the SCUBA and IRAM-30m dust continuum images
(see Launhardt et al. 2010). The fluxes in the $Spitzer$ images were measured with aperture
photometry in the IRAF APPHOT package, with the radii, background annuli, and aperture
corrections recommended by the $Spitzer$ Science Center. We note that CB\,17 is actually
associated with a faint and cold IRAS point source
(IRAS\,04005+5647, $F_{100\,\rm\mu m}=5.78$\,Jy, $F_{60\,\rm\mu m}=0.91$\,Jy, not detected
at shorter wavelengths). Given that no compact infrared emission was detected from CB\,17\,MMS
in the $Spitzer$ 3.6--70\,$\mu$m images but CB\,17\,IRS was detected at these bands, we assume
that all the IRAS flux comes from CB\,17\,IRS\footnote{This is also consistent with the infrared observations
taken with the {\it Herschel Space Observatory} (M.~Schmalzl et al. in preparation). In the $Herschel$
PACS observations at 100$\mu$m, source CB\,17\,IRS dominates the infrared emission (see Figure~2b),
with a flux ratio of more than 30 compared to source CB\,17\,MMS.}.
Table~4 lists all the flux points of CB\,17\,IRS plotted in Figure~9. To derive luminosities and
temperatures, we first interpolated and then integrated the SED, always assuming spherical
symmetry. Interpolation between the flux densities was done by a $\chi^2$ single-temperature
grey-body fit to all points (including upper limits) at $\lambda$\,$\geq$\,100\,$\mu$m, using
the same method as described in Chen et al. (2008). A simple logarithmic interpolation was
performed between all points at $\lambda$\,$<$\,100\,$\mu$m. The results from the SED
fitting of CB\,17\,IRS, such as $T_{\rm bol}$\,$\sim$\,50\,K, $T_{\rm dust}$\,$\sim$\,20\,K, and
$L_{\rm smm}$/$L_{\rm bol}$\,$\sim$\,2\%, in concert with the fact that it is directly observed
in the near-infrared wavelengths (see Launhardt et al. 2010), suggest that CB\,17\,IRS is a
Class\,0/I transition object with a luminosity of $L_{\rm bol}$\,$\sim$\,0.5\,$L_\odot$.
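To illustrate the long-wavelength fitting step (this is a sketch of the general technique, not the actual code of Chen et al. 2008), an optically thin single-temperature grey body with an assumed emissivity index $\beta$ can be fitted on a temperature grid, the amplitude entering linearly; the flux points below are placeholders rather than the Table~4 values.
\begin{verbatim}
import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def best_T(nu, S, beta=1.8):
    """Optically thin grey body S_nu ~ N nu^beta B_nu(T); chi^2 grid fit.
    beta is an ASSUMED emissivity index, not quoted in the text."""
    Ts, chi2 = np.linspace(5.0, 60.0, 500), []
    for T in Ts:
        f = nu**beta * 2*h*nu**3/c**2 / np.expm1(h*nu/(k_B*T))
        N = np.sum(S * f) / np.sum(f * f)    # best linear amplitude
        chi2.append(np.sum((S - N * f)**2))
    return Ts[np.argmin(chi2)]

# placeholder flux points at lambda >= 100 um (NOT the Table 4 values)
lam_um = np.array([1300.0, 850.0, 160.0, 100.0])
S_Jy   = np.array([0.05, 0.18, 2.0, 5.8])
print(best_T(c / (lam_um * 1e-6), S_Jy))     # dust temperature in K
\end{verbatim}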
For comparison, we also show in Figure~9 the SED of source CB\,17\,MMS. The mm and submm
fluxes of CB\,17\,MMS were estimated using the IRAM-30m and SCUBA dust continuum images
(see Launhardt et al. 2010). The flux within one beam of CB\,17\,MMS is $\sim$\,50\,mJy at 1.3\,mm,
$\sim$\,180\,mJy at 850\,$\mu$m, and $<$\,800\,mJy at 450\,$\mu$m (3\,$\sigma$ upper limit,
no detection). Although the MIPS\,3 image at 160\,$\mu$m does not resolve sources CB\,17\,IRS
and MMS, a slight shift of the peak position from IRS toward MMS suggests detectable emission
from MMS at this wavelength (see Figure~2d). For compiling the SED, we assigned 15\% of the
total 160\,$\mu$m flux to MMS (see also Launhardt et al. 2010), but our results depend only weakly
on the adopted flux splitting. Table~4 lists all the flux points and upper limits of CB\,17\,MMS. The
estimated bolometric luminosity of CB\,17\,MMS is less than 0.04\,$L_\odot$, and the dust temperature
derived from the SED fit is $\sim$\,10\,K.
The fact that no infrared emission was detected from CB\,17\,MMS in the $Spitzer$ 3.6--70\,$\mu$m
images suggests this source is extremely cold; the estimated bolometric temperature of the source is
about 16\,K. We note that the CB\,17 $Spitzer$ imaging is about 1.5 times deeper than that
obtained by the c2d observations of other cores (see Table~2 for a comparison). Given that
the c2d data are sensitive to embedded protostars with internal luminosity\footnote{The internal
luminosity is the luminosity of the central source, which excludes the luminosity arising from external
heating.} $L_{\rm int}$\,$\leq$\,4\,$\times$\,10$^{-3}$($d$/140\,pc)$^{2}$\,$L_\odot$ (see Dunham
et al. 2008), the internal luminosity of a potential protostar in CB\,17\,MMS should be less than 0.013\,$L_\odot$ at
a distance of 250\,pc, which is consistent with the upper limit of the bolometric luminosity obtained
from our SED fitting ($\sim$\,0.04\,$L_\odot$). Nevertheless, it must be noted that uncertainties
remain in our estimates, due to the limited observations available. More observations, such as high
angular resolution and high sensitivity continuum observations at wavelengths from far-infrared to
(sub-)\,millimeter, are needed to constrain the SED of CB\,17\,MMS in order to derive more precisely
its luminosity and temperature.
\subsection{Outflows in CB\,17}
Figure~10 shows the velocity-integrated intensity map of the SMA $^{12}$CO\,(2--1) emission of CB\,17,
plotted on the IRAC 8.0\,$\mu$m image. For source CB\,17\,IRS, its CO emission shows a typical bipolar
morphology seen in low-mass protostellar outflows (see, e.g., Arce \& Sargent 2006; J{\o}rgensen et al.
2007). From the SMA data, we estimate the opening angle of the blue lobe to be $\sim$\,85 degrees
and the position angle (measured east from north) to be $\sim$\,125 degrees. The outflow's inclination (with
respect to the plane of the sky) is estimated to be $\sim$\,50 degrees, using geometrical case~4 in Cabrit
\& Bertout (1986).
Near source CB\,17\,MMS, the blueshifted and redshifted emissions show long and narrow structures
($\sim$\,7500\,AU and $\sim$\,8500\,AU in length, respectively), extending in the east-west direction and
overlapping each other, in contrast to the morphology of the bipolar outflow associated with source
CB\,17\,IRS. Because CB\,17\,MMS is such an extremely low luminosity object, we discuss below five
possible mechanisms that might produce the observed CO emission around CB\,17\,MMS.
Firstly, the CO lobes around CB\,17\,MMS might be artifacts due to incomplete $uv$-coverage in the
interferometric observations. However, the CO emission around CB\,17\,MMS is strong, and shows
consistent velocity structure and morphology, which do not change significantly with the data reduction
method (e.g., the $uv$ weighting, velocity resolution, and clean method adopted). More importantly,
our observations taken in 2008 (eight antennas, 6.5 hours integration time) and 2009 (seven antennas,
3.3 hours integration time) produce the same velocity structure and morphology around CB\,17\,MMS,
indicating that these CO structures are independent of the $uv$ sampling. We therefore conclude that these
CO lobes around CB\,17\,MMS are not artifacts in the interferometric maps.
Secondly, the low velocity ($\sim$\,2.5\,km\,s$^{-1}$) CO line emission might suggest that the emission
could be due to bound motions in the cloud core. However, for gas at 2\,km\,s$^{-1}$ to be bound to the
core at a distance of 7500\,AU (measured using the blue lobe), it would require a core mass of
$\sim$\,17\,$M_\odot$, which is much larger than the core mass derived from the IRAM-30m continuum
observations ($\sim$\,4\,$M_\odot$, radius 8000\,AU; Launhardt et al. 2010) and the virial mass derived
from molecular line observations ($\sim$\,3\,$M_\odot$, e.g., Caselli et al. 2002a; scaled to the radius of
7500\,AU). This indicates that the CO emission does not arise from bound motions in the cloud core.
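The binding mass quoted above follows from the escape-speed criterion $v^2 = 2GM/R$; a one-line check using the numbers in this paragraph:
\begin{verbatim}
G, Msun, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units

v, R = 2.0e3, 7500 * AU        # m/s and m, as in the text
M_bind = v**2 * R / (2 * G) / Msun
print(M_bind)                  # ~17 Msun, cf. core mass of only ~3-4 Msun
\end{verbatim}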
The third possibility is that the two long and narrow CO lobes around CB\,17\,MMS are caused by the outflow
from CB\,17\,IRS, which impacts on and is deflected by the dense region near CB\,17\,MMS.
However, (1) the geometry of the system is not quite consistent with this picture. As seen in Figure~10,
the redshifted outflow of CB\,17\,IRS mainly extends to the northwest, but the red lobe around
CB\,17\,MMS mainly extends to the east.
(2) The outflow from CB\,17\,IRS is much weaker than the elongated lobes close to CB\,17\,MMS (see
Table~5). The velocity-integrated intensity of the CO emission from the two lobes around CB\,17\,MMS
($\sim$\,19.5 and $\sim$\,23.3\,Jy\,beam$^{-1}$\,km\,s$^{-1}$ for the blue and red lobes, respectively)
is larger than that of the two lobes in the CB\,17\,IRS outflow
($\sim$\,16.9 and $\sim$\,14.2\,Jy\,beam$^{-1}$\,km\,s$^{-1}$ for the blue and red lobes, respectively).
We believe that it is unlikely for a weak outflow, like that of CB\,17\,IRS, to produce a much stronger
outflow as a result of its deflection off a dense core.
And (3), if the CO emission around source CB\,17\,MMS were produced by the CB\,17\,IRS outflow impacting on
and deflecting off the dense region, we might expect to find intense turbulence around CB\,17\,MMS; in contrast, the dense core
around CB\,17\,MMS is very quiescent, as indicated by the narrow line widths in the optically thin N$_2$H$^+$
observations (see Benson et al. 1998; Caselli et al. 2002a) and N$_2$D$^+$ observations (this work).
The fourth possibility is that the CO lobes around CB\,17\,MMS are actually parts of the outflow lobes of
CB\,17\,IRS, and these features could result from missing flux in the interferometric observations. However,
the geometry of the system is not consistent with this picture. As we discussed above, the redshifted
outflow of CB\,17\,IRS mainly extends to the northwest, but the red lobe around CB\,17\,MMS mainly
extends to the east. Therefore, even if we assumed that the blue lobe around CB\,17\,MMS is part of the
blueshifted outflow driven by CB\,17\,IRS, it would be difficult to explain the existence of the red lobe to the
east of CB\,17\,MMS.
Finally, the fifth possibility is that the two CO lobes around source CB\,17\,MMS represent the molecular
outflow driven by CB\,17\,MMS. The peculiar morphology (i.e., blue and red outflow lobes are overlapping)
and the low radial velocity of the gas suggest that these blue- and redshifted lobes may be produced by a
collimated outflow with an axis close to the plane of the sky, similar to the RNO\,43 outflow (e.g., Arce \&
Sargent 2005; see also Figure~1 of Cabrit et al. 1988 for a diagram of the geometry of an outflow close
to the plane of the sky).
If this is the case, then it is probable that we do not detect higher-velocity CO emission due
to projection effects (we return to this point in $\S$\,4.3.2). We consider this the most likely scenario, and
hereafter we assume the observed narrow $^{12}$CO structures are associated with a molecular outflow
driven by CB\,17\,MMS. Nevertheless, we note that further observations (e.g., SMA subcompact data) are
needed to recover the missing flux of the extended structure, and to confirm the nature of these narrow
$^{12}$CO structures.
Since the $^{13}$CO\,(2--1) emission from CB\,17\,MMS is detected only at velocities between $\sim$\,$-$5
and $\sim$\,$-$4\,km\,s$^{-1}$ (see Figure~5), we may consider the $^{12}$CO\,(2--1) emission at
velocities beyond this range to be optically thin. The outflow masses of the two sources are derived with the
same method as described in Cabrit \& Bertout (1990; 1992). In the calculations, we assume LTE conditions
and an excitation temperature of 20\,K. The derived outflow mass, as well as other outflow properties (e.g.,
momentum $P$ and energy $E$), are listed in Table~5. The outflow mass-loss rate ($\dot{M}_{\rm out}$),
force ($F_{\rm m}$), and mechanical luminosity ($L_{\rm m}$) are estimated from the mass, momentum, and
energy, using the estimated dynamical age of the outflow. For the CB\,17\,MMS outflow, assuming that the outflow
velocity is 2.5\,km\,s$^{-1}$, the dynamical age of the outflow is estimated to be $\sim$\,1.4\,$\times$\,10$^4$ yr
(the inclination effect is not considered here). For the CB\,17\,IRS outflow, assuming the same outflow velocity
and an inclination angle of $\sim$\,50 degrees, the dynamical age of the outflow is estimated to be
$\sim$\,1.1\,$\times$\,10$^4$ yr. We note that these outflow parameters in Table~5 refer only to the
compact outflows detected in the SMA maps and thus represent lower limits only.
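For reference, these dynamical-age and rate estimates reduce to simple ratios, as in the sketch below, which reproduces the MMS value; the mass, momentum, and energy entries are placeholders, not the Table~5 values.
\begin{verbatim}
AU, yr, Msun = 1.496e11, 3.156e7, 1.989e30   # SI units

length = 7500 * AU      # m, blue lobe of the CB 17 MMS outflow
v_out  = 2.5e3          # m/s; inclination neglected, as in the text
t_dyn  = length / v_out # s
print(t_dyn / yr)       # ~1.4e4 yr

# rates = integrated quantities / dynamical age (placeholder values)
M_out, P_out, E_out = 1e-3 * Msun, 1e-3 * Msun * 2.5e3, 1e36
Mdot, F_m, L_m = M_out / t_dyn, P_out / t_dyn, E_out / t_dyn
\end{verbatim}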
\subsection{The Nature of CB\,17\,MMS}
\subsubsection{Comparisons to prestellar and Class\,0 objects}
Although the SMA and $Spitzer$ observations clearly indicate that source CB\,17\,IRS is a Class\,0/I
transition object, the evolutionary stage of source CB\,17\,MMS is unclear.
Previous single-dish molecular line observations have shown subsonic infall ($\sim$\,0.05\,km\,s$^{-1}$),
slow rotation ($\sim$\,2\,km\,s$^{-1}$\,pc$^{-1}$), and subsonic internal turbulent motions ($\sim$\,0.1\,km\,s$^{-1}$)
in the CB\,17 dense core (see Pavlyuchenkov et al. 2006), which are similar to the typical properties
found in prestellar cores (see, e.g., Andr\'{e} et al. 2009). These early results of CB\,17\footnote{The previous
single-dish line observations toward CB\,17 generally show a dense core centered at the position of source
CB\,17\,MMS. Therefore, we consider that the properties derived from these single-dish line observations are
related to source MMS rather than source IRS.}, in concert with the fact that no compact infrared emission is
detected from CB\,17\,MMS in the $Spitzer$ images, led to the idea that CB\,17\,MMS may be a prestellar core
(e.g., Launhardt et al. 2010).
However, the SMA 1.3\,mm dust continuum observations suggest that a compact object has formed in
CB\,17\,MMS. In contrast, prestellar cores generally show no compact dust continuum emission in high
angular resolution interferometric observations (e.g., Olmi et al. 2005; Schnee et al. 2010; but see also
Bourke et al. 2012). Furthermore, the SMA CO\,(2--1) observations suggest that CB\,17\,MMS drives a
molecular outflow, which implies active accretion/ejection motions in source CB\,17\,MMS --- unlikely to
take place in a prestellar core.
Moreover, Pavlyuchenkov et al. (2006) suggested that the CB\,17 core is chemically evolved, with an age
of $\sim$\,2\,Myr, which is somewhat larger than the typical lifetimes of prestellar cores
($\sim$\,1--1.5\,$\times$\,10$^6$\,yr; see Andr\'{e} et al. 2009).
On the other hand, compared to the typical properties of Class\,0 protostars, CB\,17\,MMS also shows
three major differences. Firstly, CB\,17\,MMS is not visible in the deep $Spitzer$ images at wavelengths
from 3.6 to 70\,$\mu$m, with an extremely low bolometric luminosity ($\leq$\,0.04\,$L_\odot$), which
implies no central protostellar object formed yet within the dense core. For comparison, most Class\,0
protostars, if not all, are detectable in the $Spitzer$ infrared images (at least in the MIPS bands; see,
e.g., Evans et al. 2009). Secondly, the SMA 1.3\,mm dust continuum observations suggest the existence
of a compact, but very faint, object in CB\,17\,MMS (see $\S$\,3.1). The mm continuum flux of CB\,17\,MMS
(only a few mJy) is about two orders of magnitude lower than the values of those Class\,0 protostars observed
with the same configuration at the SMA (generally in the order of $\sim$\,100\,mJy or $\sim$\,0.1\,$M_\odot$;
see, e.g., J{\o}rgensen et al. 2007), implying that no massive accretion disk has developed yet around
CB\,17\,MMS. Lastly, CB\,17\,MMS appears to drive a relatively slow molecular outflow
($\sim$\,2.5\,km\,s$^{-1}$), compared to those typically found in Class\,0 protostars, which have velocities of
$\sim$\,10 to 100\,km\,s$^{-1}$ and have strong (physical and chemical) impacts on their surrounding cores
and clouds (see, e.g., Arce et al. 2007).
From the comparisons discussed above, we suggest that source CB\,17\,MMS is more evolved than prestellar
cores but less evolved than Class\,0 protostars, as it preserves many typical properties of prestellar cores seen
in the early single-dish observations (i.e., in the outer core), but also shows properties (e.g., a compact object
and an outflow) that only protostars exhibit in the high angular resolution observations (i.e., in the inner core),
though neither a strong protostellar object nor a massive accretion disk appears to have developed in it yet.
\subsubsection{A Candidate First Hydrostatic Core?}
Interestingly, the observed properties of CB\,17\,MMS are consistent with the theoretical predictions of the radiative
hydrodynamical (RHD) simulations for the first hydrostatic core (or first core), a transient object intermediate between
the prestellar and Class\,0 phases (see, e.g., Larson 1969; Masunaga et al. 1998). Theoretical studies have investigated
the properties of the first core and made a series of observationally testable predictions, including a short lifetime of
only 10$^3$--10$^4$ years, an extremely low bolometric luminosity ($<$\,0.1\,$L_\odot$), very low mass ($<$\,0.1\,$M_\odot$),
and no detectable infrared emission at wavelengths shorter than 30\,$\mu$m (see, e.g., Boss \& Yorke 1995; Masunaga
et al. 1998; Machida et al. 2008).
In Figure~11, we compare the fluxes of source CB\,17\,MMS with those of a first hydrostatic core modeled by
Masunaga et al. (1998). Although the number of observational data points is still small, the comparisons show
that the fluxes of CB\,17\,MMS are in general agreement with those of a first hydrostatic core calculated with a
beam size of 1000\,AU (or 4$''$ at the distance of CB\,17), which would have a luminosity of $\sim$\,0.06\,$L_\odot$
when the core central density reaches $\rho_{\rm center}$\,$\sim$\,10$^{-9}$\,g\,cm$^{-3}$ or evolves at
$\sim$\,1.23\,$\times$\,free-fall time (model M1a; see Masunaga et al. 1998 for more details).
In addition, the radius of the first core is found to be small, typically $\sim$\,5\,AU (Masunaga et al. 1998).
When the effects of rotation are considered, this radius can be somewhat larger, in the range of 10--20\,AU
(Saigo \& Tomisaka 2006; Saigo et al. 2008). Generally, in the interferometric dust continuum observations,
prestellar cores show no compact dust continuum emission (i.e. no detection) with existing millimeter interferometers
(e.g., Olmi et al. 2005; Schnee et al. 2010; Offner et al. 2012; X.~Chen et al. in preparation),
because the extended structure of a prestellar core (with a typical density of $\rho$\,$\sim$\,10$^{-19}$\,g\,cm$^{-3}$,
about 10 orders of magnitude lower than that of the first core) is almost totally resolved out in millimeter
interferometric observations. As a core evolves from the prestellar stage to the onset of the first core, we expect to
find a relatively compact, point-like, but also faint ($\leq$\,50\,mJy, judged from the models in Masunaga et al.
1998) object in interferometric observations. As can be seen in Figure~3, the diagram of SMA visibility amplitude versus
$uv$ distance for CB\,17\,MMS shows a roughly flat distribution, which suggests a point-like object
in CB\,17\,MMS with a flux of $\sim$\,3\,mJy, consistent with the prediction of the small radius and small flux of the first
core. However, we note that the SMA compact configuration observations mainly sample the $uv$ range between
$\sim$\,10--60\,k$\lambda$. Further subcompact- and extended-configuration observations are definitely needed to study in
detail the density structure of CB\,17\,MMS.
Moreover, the SMA CO\,(2--1) observations suggest that CB\,17\,MMS drives a molecular outflow.
Interestingly, recent MHD simulations have shown that the first cores can drive low-velocity outflows (see, e.g.,
Machida et al. 2008; Tomida et al. 2010). In the simulations, the typical outflow driven by a first core has velocities
of $\sim$\,2--5\,km\,s$^{-1}$ (e.g., Machida et al. 2008). The observed velocity of the CB\,17\,MMS outflow is
about 2.5\,km\,s$^{-1}$, which is in good agreement with the result from the MHD simulations. However, in the MHD
simulations (e.g., Machida et al. 2008), the outflow driven by the first core also features wide opening-angle and
low extent-to-width ($E$/$W$) ratio (2.2--2.5)\footnote{More recently, smoothed particle magneto-hydrodynamics
simulations suggest that the first hydrostatic core can drive collimated jets (opening angles $\leq$\,10$^\circ$) with
speeds of $\sim$\,2--7\,km\,s$^{-1}$ (see Price et al. 2012).}.
The outflow driven by CB\,17\,MMS shows two narrow lobes, overlapping with each other, with an opening angle of
$\sim$\,35$^\circ$ and an $E$/$W$ ratio of $\sim$\,4 (measured using the blue lobe), which is more consistent with a
protostellar outflow than with a first core outflow. However, it must be noted that the opening-angle and $E$/$W$
measured in the CB\,17\,MMS outflow are best treated as upper limits, because the SMA compact configuration
observations recover only the fluxes at projected baselines $>$\,10\,k$\lambda$ (corresponding to angular scales
$<$\,20$''$). Further short-spacing observations are needed to recover the extended structure of the CB\,17\,MMS
outflow, in order to derive more precisely its opening-angle and $E$/$W$ ratio. In MHD simulations, both magnetic
field and rotation rate in the collapsing core shape the morphology of the first core outflow (see, e.g., Machida et al.
2008). Hence, there is also the possibility that the CB\,17\,MMS outflow represents a first core outflow that results from
a specific magnetic field and/or rotation rate.
Another significant unknown factor in our analysis of the CB\,17\,MMS outflow is the source inclination. With the SMA
CO images of CB\,17\,MMS, we are unable to set strong constraints on the inclination from the outflow morphology,
although the extreme face-on ($\leq$\,10$^\circ$) configuration can be ruled out here. If CB\,17\,MMS is viewed
along a relatively edge-on line-of-sight ($>$\,60$^\circ$), the true velocity of the outflow would increase beyond the
outflow velocity expected for the first core. For example, an inclination of 80$^\circ$ would increase the outflow velocity
from the observed value of 2.5 to 14\,km\,s$^{-1}$. On the other hand, if we assumed an inclination of 45$^\circ$, the
outflow velocity of CB\,17\,MMS would be about 3.5\,km\,s$^{-1}$, which is still comparable with the predicted values of
the first core outflows (but in this case, the extent-to-width ratio would increase to $\sim$\,6).
Furthermore, source inclination also has strong effects on the observed infrared properties.
For a protostar surrounded by a circumstellar disk and embedded in a dense envelope, the
infrared emission would be much stronger when viewed near face-on where the emission can
escape through outflow cavities, than when viewed near edge-on where the emission is
reprocessed by the disk and dense inner envelope. Clearly, better knowledge of the source
inclination would help in determining the true evolutionary status of CB\,17\,MMS.
At present, unfortunately, owing to the limited observations and the uncertain inclination angle, the nature of source
CB\,17\,MMS cannot be established definitively. With the combined results from early single-dish observations and our SMA and
$Spitzer$ observations, in concert with the comparisons to theoretical models, we consider that CB\,17\,MMS may
represent a candidate first hydrostatic core. Nevertheless, there is also the possibility that CB\,17\,MMS is an extremely
low luminosity protostar, which is deeply embedded in an {\it edge-on} circumstellar disk/inner envelope and thus shows
no detectable infrared emission in the deep $Spitzer$ observations.
\subsubsection{Comparisons with other first core candidates}
The detection of the first hydrostatic core phase is of prime importance in our understanding of the early evolution of
dense cores and the origin of outflows, as it would not only confirm the long-standing prediction of RHD models but also set
strong constraints on MHD models of protostellar outflows.
On the observational side, the search for the first core has been going on for a while and several first core candidates
have been proposed, although none of them have been verified.
Based on the SMA and $Spitzer$ observations, we reported a first core candidate, L1448~IRS\,2E, which is
invisible in the sensitive $Spitzer$ infrared images (from 3.6 to 70\,$\mu$m), has very weak (sub-)\,millimeter dust
continuum emission, and consequently has an extremely low luminosity ($L_{\rm bol}$ $<$\,0.1\,$L_\odot$), but
also drives a molecular outflow (Chen et al. 2010).
Enoch et al. (2010) reported another candidate first core, Per-Bolo~58, a very low luminosity (internal luminosity
$<$\,0.01\,$L_\odot$) dense core in Perseus. This core was originally thought to be starless, but Enoch et al. (2010)
detected an associated infrared source in very deep $Spitzer$ 24\,$\mu$m and 70\,$\mu$m observations and argued
this source could either be a first core or a very low luminosity Class\,0 protostar. A bipolar, jet-like molecular
outflow was also found in this interesting source by Dunham et al. (2011).
More recently, Pineda et al. (2011) reported another candidate, L1451-mm, which is not visible in the $Spitzer$
observations (with a bolometric luminosity of $<$\,0.05\,$L_\odot$) and drives a low-velocity, poorly-collimated,
bipolar outflow.
Together with CB\,17\,MMS, these sources are all suggested candidates for the long-predicted first hydrostatic
core, but none of them is in complete agreement with theoretical models. For source L1448 IRS\,2E, its outflow
velocity reaches $\sim$\,25\,km\,s$^{-1}$ (see Chen et al. 2010), much larger than the value predicted by the
MHD simulations. In the case of Per-Bolo~58, its detection at 24\,$\mu$m is inconsistent with current models.
For source L1451-mm, its observations are in better agreement with theoretical models, but its SED
and continuum interferometric visibilities can also be equally well fitted by a model of a protostar plus a circumstellar
disk (see Pineda et al. 2011). For source CB\,17\,MMS, another promising candidate, its outflow is much more
collimated than that expected for a first core, and there is a possibility that it is an extremely low luminosity protostar
deeply embedded in an edge-on circumstellar disk.
Clearly, more observations are needed to study these sources thoroughly: (1) high angular resolution and high
sensitivity continuum observations at wavelengths from far-infrared to (sub-)\,millimeter, are needed to constrain
the SEDs of these sources, in order to derive more precisely their luminosities and temperatures, and (2) high
angular and spectral resolution line observations are also needed to study in detail the physical properties, density
structure, kinematics, and chemistry of the surrounding dense gas around the sources in order to accurately
characterize their evolutionary status. It is also critical to search for more candidate first cores in nearby clouds,
in order to achieve a better understanding of the formation and evolution of dense cores, as well as the origin of
outflows. With the availability of recent sensitive (sub-)\,millimeter telescopes (e.g., {\it Herschel Space Observatory}
and the Atacama Large Millimeter/Submillimeter Array), we believe that more candidates will be found in the near future.
\section{SUMMARY}
We present SMA 230\,GHz and $Spitzer$ infrared observations toward the Bok globule CB\,17.
The SMA 1.3\,mm dust continuum images reveal within CB\,17 two sources, which are separated
by $\sim$\,21$''$ ($\sim$\,5250\,AU) in the northwest-southeast direction. The northwestern continuum
source, referred to as CB\,17\,IRS (gas mass $\sim$\,0.023\,$M_\odot$), dominates the infrared emission
in the $Spitzer$ images and drives a low-velocity bipolar outflow detected in the SMA CO\,(2--1) observations.
The SED fitting results suggest that CB\,17\,IRS is a low luminosity Class\,0/I transition object
($L_{\rm bol}$\,$\sim$\,0.5\,$L_\odot$).
The southeastern continuum source, referred to as CB\,17\,MMS, has faint dust continuum emission in
the SMA 1.3\,mm images ($\sim$\,6\,$\sigma$ detection; gas mass $\sim$\,0.035\,$M_\odot$), and is not
detected in the deep $Spitzer$ images at wavelengths from 3.6\,$\mu$m to 70\,$\mu$m. Its bolometric
luminosity and temperature, estimated from the SED fitting, are $\leq$\,0.04\,$L_\odot$ and $\leq$\,16\,K,
respectively. The SMA N$_2$D$^+$\,(3--2) observations show an elongated condensation associated
with source CB\,17\,MMS, which has a systematic velocity gradient of 18$\pm$1\,km\,s$^{-1}$\,pc$^{-1}$
and a typical line width of $\sim$\,0.48\,km\,s$^{-1}$.
Interestingly, the SMA CO\,(2--1) observations suggest that CB\,17\,MMS may drive a long
narrow low-velocity outflow ($\sim$\,2.5\,km\,s$^{-1}$), with blueshifted and redshifted lobes extending in
the east-west direction and overlapping each other. Comparisons with prestellar cores and Class\,0
protostars suggest that CB\,17\,MMS is likely at an evolutionary stage intermediate between these two
stages. The observed characteristics of CB\,17\,MMS are consistent with the properties expected for the
first hydrostatic core as predicted by radiative/magneto hydrodynamical simulations. We thus consider
CB\,17\,MMS to be a candidate first core. However, there is also the possibility that CB\,17\,MMS is an
extremely low luminosity protostar, which is deeply embedded in an edge-on circumstellar disk/inner
envelope. Further high angular resolution and high sensitivity observations are needed to confirm the
properties of CB\,17\,MMS and to address more precisely its evolutionary stage.
\acknowledgments
We thank the SMA staff for technical support during the observations and
the $Spitzer$ Science Center for their maintenance of the $Spitzer$ data.
This material is based on work supported by NSF grant AST-0845619 to
H.G.A. This research is supported in part by the National Science Foundation
under grant number 0708158 (T.L.B.).
\clearpage
\section{Introduction}
When making a comparison between the coupling constants related to different types of interaction, one finds $G_{grav}/G_{Fermi} \sim 10^{-34}$, so that $G_{grav} \ll G_{Fermi}$. In terms of the corresponding energy scales this means that the
Planck scale $\Lambda_{Planck} \sim 10^{16}$ TeV, at which gravity becomes strongly coupled and its effects cannot be neglected any more, is far larger than the electroweak (EW) symmetry breaking scale $\Lambda_{EW} \sim 0.25$ TeV. The absence of a clear explanation for a difference of so many orders of magnitude is a manifestation of the so-called hierarchy problem. A possible solution, dating back to 1998~\cite{arkani-hamed,antoniadis,randall}, proposes the existence of $n$ extra dimensions where gravitational interactions can extend (the ``bulk''), thus diluting their effect in our 4-dimensional world (the ``brane'') where the strong and electroweak interactions would be confined.
Thus, beside the $\Lambda_{Planck}$ scale related to the strength of gravitational interactions in our 4-dimensional world, i.e. $\Lambda_4 = \Lambda_{Planck}$, a fundamental gravity scale in $D$ dimensions ($D = 4 + n$) is introduced, which may be as low as $\Lambda_D \sim \Lambda_{EW}$ and may possibly also lead to the unification of the fundamental forces. If this were the case, then the effects of gravity would begin to manifest themselves already at this (low) unification energy scale and would thus be within reach of terrestrial accelerator experiments. In particular, one of the most intriguing phenomenological consequences of this scenario would be the possible formation of microscopic black holes (MBH's) in collisions between two hadrons with a Center-of-Mass (CM) energy as low as $E_{CM}\sim$ 5--20 TeV.
This contribution deals with hadronic collisions in a wide interval of energies,
ranging from LHC energies ($E_{CM} \sim 10$ TeV) up to the highest cosmic
ray energies ($E_{lab} \sim 10^{6} - 10^{8}$ TeV, equivalent to
$E_{CM} \sim 40 - 140$ TeV). In this energy range gravitational effects are
usually neglected. These energies are higher than those typically covered by
nuclear physics studies\footnote{as those discussed in many contributions to this Conference ($13^{th}$ International Conference on Nuclear Reaction Mechanisms, Varenna, Italy, June 11 - 15 2012)}, however it is possible to establish some parallelism between concepts familiar to nuclear physicists, which determine the evolution of an excited nuclear system formed in the collision between two nuclei at energies of the order of hundreds MeV/nucleon, and concepts that characterize the evolution of a MBH formed by the collision of two hadrons or nuclei at much higher energies as predicted by the Hoop Conjecture~\cite{thorne:1972}: {\it if in the collision of two hadronic objects a large amount of energy/mass is concentrated in a spatial region that can be surrounded by a hoop with a radius $R < R_{Schwarzschild}$ corresponding to a Schwarzschild black hole of that energy, then a MBH is formed.}
Although the hoop conjecture provides some basic necessary conditions for the formation of a MBH in collisions, the actual formation process and the initial phase of its evolution is a very complex non-linear phenomenon, subject to many uncertainties, evolving from an asymmetric configuration out of thermal equilibrium, to a highly symmetric static configuration with a well-defined Hawking temperature. Numerical Monte Carlo simulations may be the best way to model this type of process. The same happens when describing the formation of an excited nuclear system from the collision of two nuclei. In both cases the key parameters determining the evolution of the system are $E_{CM}$ and the total angular momentum $J$, related to the impact parameter $b$ between the initial colliding objects.
At the end of the dynamical phase, it is commonly believed that the MBH undergoes a Hawking evaporation phase during which its temperature evolves following the law $T \propto k/M_{MBH}$, and this can be described by statistical/thermodynamical models together with corrections to the trajectory of the emitted particles as a consequence of the curved geometry through which they subsequently propagate via the so-called grey-body factors. Analogously, the evaporation of nucleons by excited nuclei at the end of the pre-equilibrium phase is commonly described by statistical methods.
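To make the analogy quantitative, the following sketch evaluates the $D$-dimensional horizon radius and the corresponding Hawking temperature in one common convention (that of Dimopoulos and Landsberg; other definitions of the fundamental scale differ by factors of order unity); it is meant only to illustrate the $T \propto M_{MBH}^{-1/(n+1)}$ scaling, which reduces to the familiar $T \propto 1/M_{MBH}$ law for $n=0$.
\begin{verbatim}
from math import gamma, pi, sqrt

hbar_c = 1.973e-19   # m * TeV, i.e. (1/TeV) = 1.973e-19 m

def horizon_radius(M_bh, M_D, n):
    """(4+n)-dim. Schwarzschild radius in 1/TeV, in the convention of
    Dimopoulos & Landsberg; other M_D conventions differ by O(1)."""
    k = 8.0 * gamma((n + 3) / 2.0) / (n + 2.0)
    return (1.0 / (sqrt(pi) * M_D)) * (M_bh / M_D * k)**(1.0 / (n + 1))

def hawking_T(M_bh, M_D, n):
    """Hawking temperature T = (n+1)/(4 pi r_H) in TeV, so that
    T ~ M_bh**(-1/(n+1)), reducing to the 4-dim. T ~ 1/M law for n = 0."""
    return (n + 1.0) / (4.0 * pi * horizon_radius(M_bh, M_D, n))

r = horizon_radius(8.0, 4.0, 2)   # e.g. M_MBH = 8 TeV, M_D = 4 TeV, n = 2
print(r * hbar_c, "m;", hawking_T(8.0, 4.0, 2), "TeV")
\end{verbatim}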
In particular, the dynamical + statistical MBH evolution is commonly divided into four sequential phases (for a more detailed discussion see e.g. Ref.~\cite{park} and references therein):
\begin{itemize}
\item {\it Balding phase}: the MBH just formed, initially characterized by a deformed shape, ``loses its hair'' (i.e., the higher multipole moments) by emitting charge, energy, and angular momentum in the form of gravitational radiation and gauge fields, becoming more symmetric (elliptical) in the process.
\item {\it Spin-down phase}: the now stationary rotating MBH continues to gradually lose its energy/mass (60--80\%) and angular momentum, until it reaches a non-rotating static (spherical) configuration.
\item {\it Schwarzschild/Evaporation phase}: the MBH loses its mass by emitting all possible particle degrees of freedom (Standard Model particles: quarks, leptons, photons, $W$, $Z$, gluons + gravitons, and, if they exist, other heavy particles beyond the SM). At this stage, the emission is assumed to be democratic: each degree of freedom is equally weighted, i.e. has the same probability of being emitted (thus colored particles are favoured with respect to uncolored ones, high spin particles are favoured with respect to scalar ones, etc.). Furthermore, the emission is assumed to follow an adiabatic evolution: a homogeneous MBH temperature can be identified, which slowly increases during the major part of the process of isotropic radiation.
\item {\it Planck phase}: in the final stages of evaporation, when $ M_{MBH} \sim M_D$, the semi-classical and adiabatic approximation of General Relativity (which justifies a thermodynamical evolution of the evaporation process) breaks down and Quantum Gravity (QG) effects become much more important in defining the ultimate MBH fate. The possibilities include a final remnant, an explosive break-up, and complete evaporation, with each hypothesis still under discussion.
In particular, the role of discontinuous emissions with backreaction in this context must still be investigated.
\end{itemize}
Each of the phases described above is subject to uncertainties. In particular,
a better understanding of the balding phase requires dynamical simulations that should also take into account the possible formation of exotic shapes (``saturn''-like configurations) or multiple MBH's immediately following the collision. The democracy of the emissions that characterizes the Schwarzschild phase is not present in the earlier phases, where the MBH still retains a memory of the way in which it was created (it has hair). It has recently been pointed out that, in order to preserve unitarity, democracy should be reached gradually, and that two scales should be introduced, instead of just one, to fully characterize the MBH evolution: in addition to the already mentioned gravitational scale/radius (at which gravitation becomes strong), a second scale (of lower energy/larger distance, e.g. the compactification radius in extra-dimension models, at which deviations of gravity from the Einsteinian regime begin to manifest themselves) characterizes the transition from the non-democratic to the democratic emission regime \cite{dvali}.
Furthermore, the emission of particles during MBH evolution is modified by gravitational effects, related to the curved geometry near the MBH horizon.
These modifications are codified in grey-body factors, and many results concerning their precise determination have recently appeared in the literature, thanks to increasingly sophisticated computations. However, some of the factors are still unknown or very uncertain, like those for graviton emission in extra dimensions from a rotating MBH. Finally, still unknown QG effects are expected to determine the evolution of the final MBH remnant in the Planck phase. While many works agree on the hypothesis of a complete evaporation, it is still possible that a finite MBH remnant may remain.
Following progress in theoretical understanding, several numerical event generators have been developed in the last ten years for the simulation of MBH generation and decay, in particular {\texttt{Groke}}~\cite{ahn} in the framework of cosmic ray studies, and {\texttt{Charybdis}}~\cite{harris}, {\texttt{Catfish}}~\cite{cavaglia}, {\texttt{BlackMax}}~\cite{frost,dai} and {\texttt{QBH}}~\cite{gingrich} in the framework of LHC physics.
The heavy particles (top quarks, Higgs and EW bosons, etc.) emitted by MBH evaporation decay quickly, i.e. before entering the detectors, and the partons and charged leptons emitted both by MBH evaporation and by these decays are further subject to parton and photon shower emissions, degrading their energy down to a scale where perturbative QCD cannot be applied anymore, and hadronization takes place, followed by hadron decays. Non-perturbative effects in this context are described by means of phenomenological models.
This same chain of processes also occurs in p-p collisions in the framework of the SM, and the corresponding physics and model parameters have been constrained over the years by results obtained at accelerators. In particular, shower Monte Carlo (SMC) programs like {\texttt{PYTHIA}}, {\texttt{HERWIG}} and {\texttt{SHERPA}} are commonly used to describe these processes\cite{buckley}. The largest uncertainties in this framework concern the complications that may arise when considering beams with a nuclear structure (as for instance in p-A and A-A collisions) and the propagation of the MBH decay products in a medium instead of the vacuum~\footnote{Actually, in the case of p-p collisions, medium effects reduce to the so-called ``underlying event'', which is already one of the largest sources of uncertainty in SMC simulations.}.
Searches for MBH's have been conducted by the CMS and ATLAS experimental collaborations at LHC in the framework of the more general ``searches for exotica''. The analyses conducted so far~\cite{cms0,atlas0,cms1,cms2} have not led to any evidence for MBH formation in p-p collisions at $E_{CM}$ = 7 TeV. However, these analyses have been criticized, since QG effects, expected to be important at LHC energies, have been neglected or treated too naively in the event generators used. Very recently, some theoretical work has also appeared in the literature pointing out, on the basis of other arguments like the generalized uncertainty principle or the extrapolation of the results of numerical simulations of colliding self-gravitating fluid objects, that the present LHC energy is in any case too low for the formation of MBH's~\cite{ali,rezzolla}. However, the situation is globally still controversial, and the exclusion at the present LHC energy certainly does not limit the possible formation of MBH's at higher energies.
In this contribution we investigate the behaviour of event generators, usually adopted (and adapted) at LHC energies, at higher energies such as those reachable in the interactions of ultra-high-energy cosmic rays with the Earth's atmosphere, leading to extended air showers (EAS). In particular, we work with the latest version of {\texttt{BlackMax}} (2.02.0), both in standalone mode and interfaced to the {\texttt{PYTHIA}} SMC code \cite{pythia}.
We perform simulations of the formation of non-rotating MBH's in p-p collisions in the 14 TeV $<$ $E_{CM}$ $<$ 100 TeV energy range, with two different values of the fundamental gravity mass scale, $M_D = 4$ and 15 TeV, an MBH mass constrained to the range 2\,$M_D < M_{MBH} < E_{CM}$, and $n$ = 2 spatial extra dimensions without fermion splitting. In the simulation of MBH evolution, the mass, linear and angular momentum loss fractions were assumed to be equal to 0.3, whereas the angular momentum, charge and color suppression factors were assumed to be equal to 0.2, and baryon and lepton numbers, as well as their difference, were conserved.
With these settings we investigated the kinematical properties of particles emitted during the MBH evolution as computed by {\texttt{BlackMax}} and also after the Parton Shower + Hadronization + Hadron decay chain, as computed by the interface of {\texttt{BlackMax}} with {\texttt{PYTHIA}}. Examples of selected results are presented in Figs.\ref{fig3}, \ref{fig2}, \ref{fig4} and \ref{fig5}.
\begin{figure}
\includegraphics[width=0.49\textwidth]{./figsproc/pz50.eps}
\includegraphics[width=0.49\textwidth]{./figsproc/pt50.eps}
\caption{\label{fig3}{Parallel ($left$) and transverse ($right$) momentum distributions for different SM degrees of freedom (quarks and antiquarks with positive charge, quarks and antiquarks with negative charge, gluons, positively charged leptons, negatively charged leptons, neutrinos, photons) and gravitons as computed by {\texttt{BlackMax}} for a MBH formed at a CM p-p collision energy $E_{CM}$ = 50 TeV for $M_D$ = 4 TeV. See text for more detail.}}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{./figsproc/gluonpz.eps}
\includegraphics[width=0.49\textwidth]{./figsproc/gluonpt.eps}
\caption{\label{fig2}{Gluon parallel and transverse momentum distributions as computed by {\texttt{BlackMax}} for a MBH formed at four different CM p-p collision energies ($E_{CM}$ = 14, 28, 50, 100 TeV) for $M_D$ = 4 TeV and at two different CM energies ($E_{CM}$ = 50, 100 TeV) for $M_D$ = 15 TeV. See text for more detail.}}
\end{figure}
In Fig.\ref{fig3}.a and \ref{fig3}.b the longitudinal and transverse momentum distributions (expressed in terms of number of particles/bin/event) are shown for different SM particle species for the case of MBH production at $E_{CM} =$ 50 TeV. After evaporation of the MBH, the (anti-)quarks give rise to the largest contributions, followed by gluons, (anti-)leptons and photons. Contributions from particles with opposite charges are shown separately: for any given flavour the contribution of positively charged particles is larger than that coming from negatively charged particles, probably due to the fact that during the final burst in the MBH evolution ({\texttt{BlackMax}} implements the hypothesis of complete evaporation), positively charged particles are predominantly emitted, because the majority of MBH's are positively charged (see also Ref.\cite{dai} for similar conclusions in a lower energy p-p study).
The $p_z$ distributions are almost monotonically decreasing with similar slopes for all SM particles, whereas the $p_T$ distributions show some broad peaks, located at different $p_T$ values according to the particle species. (Anti-)leptons are emitted in pairs, i.e. as $\ell\nu_\ell$, $\ell^+ \ell^-$ or $\nu_\ell \bar{\nu}_\ell$, due to imposed lepton number conservation.
Graviton distributions are also shown; they display a profile that falls off more rapidly at high $p_T$ than those of the SM particles, leading to a suppression of gravitons with respect to SM degrees of freedom at high $p_T$.
In Fig.\ref{fig2}.a and Fig.\ref{fig2}.b, the $p_z$ and $p_T$ distributions of a specific particle species, i.e. the gluon in this example, are shown as a function of the p-p collision $E_{CM}$ (leading to the formation of a MBH), for different values of $M_D$. It is evident that, for a fixed value of $M_D$, the shape of the distributions at different $E_{CM}$'s is preserved with the total number of gluons increasing with $E_{CM}$. This is as expected because the cross-section for MBH formation increases with $E_{CM}$. On the other hand, changing the value of $M_D$ leads to distributions with different shapes in addition to a changing value of the total cross-section. In particular, the position of the $p_T$ maximum for gluon emission increases with $M_D$, ranging from $p_T \sim$ 1.1 TeV for $M_D$ = 4 TeV to
$p_T \sim$ 4.3 TeV for $M_D$ = 15 TeV.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{./figsproc/tracks.eps}
\caption{\label{fig4}{Photon (upper part) and all lepton
(lower part) yields as a function of the yield of all hadronic tracks after {\texttt{BlackMax + PYTHIA}}. Each point corresponds to a different simulated event. Regions with different colors correspond to different $E_{CM}$ and $M_D$ parameters adopted in the MBH simulation, as labelled in the figure. See text for more detail.}}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[width=0.49\textwidth]{./figsproc/compara50.eps}
\includegraphics[width=0.49\textwidth]{./figsproc/compara100.eps}
\caption{\label{fig5}{Energy distributions of photons and gravitons
emitted by a MBH at a CM p-p collision energy $E_{CM}$ = 50 TeV ($left$)
and $E_{CM}$ = 100 TeV ($right$), for $M_D$ = 4 TeV. Results after both
{\texttt{BlackMax}}
and {\texttt{BlackMax~+~PYTHIA}} are presented in each panel for comparison. See text for more detail.}}
\end{figure}
The SM yields from MBH evaporation are in general modified after the parton and photon shower + hadronization + hadron decay chain, as simulated by SMC codes such as {\texttt{PYTHIA}}, which leads to hundreds of hadrons and photons. In particular, the number of emitted photons in each event turns out to be correlated with the number of emitted hadronic tracks, with a constant slope at increasing $E_{CM}$, as shown in Fig.\ref{fig4}. This slope is also independent of $M_D$, at a fixed $E_{CM}$. On the other hand, the total yield of emitted leptons turns out to be small (a few tens of particles) and does not show evident correlations with the number of hadronic tracks. This points towards the conclusion that the large number of photons is probably due to light hadron (in particular $\pi^0$) decays, whereas electromagnetic shower effects are suppressed.
It is also interesting to compare the shapes of particle spectra at different stages of the evolution of the entire system. In particular, this can be carried out for distributions of particles that are not subject to hadronization, such as leptons, photons and gravitons. In Fig.\ref{fig5}.a and \ref{fig5}.b the energy distributions of photons and gravitons at the parton level after MBH evaporation and at the hadron level after {\texttt{PYTHIA}} are shown, for two different $E_{CM}$ energies. It is evident that the contributions of the parton shower, the hadronization and hadron decays lead to a complete distortion of the original photon spectrum, disproportionately populating the region of low energies with photons emitted in these last processes. The photon distributions at the evaporation level are very similar for both $E_{CM} = 50$ and 100 TeV, whereas, at the hadron level, the photon distribution at $E_{CM} = 100$ TeV is clearly much more populated than the corresponding one at $E_{CM} = 50$ TeV, due to the stronger SMC effects. On the other hand, the graviton distributions are completely unaffected by shower effects, and in the case of $E_{CM} = 100$ TeV display a flatter profile in comparison to that at $E_{CM} = 50$ TeV.
In conclusion, we have provided some examples of theoretical simulations demonstrating how parton shower + hadronization + hadron decay effects may dramatically modify particle distributions after MBH evaporation, especially in the case of some SM quanta, such as photons. This is certainly a challenge that must be confronted when trying to distinguish the effects of different MBH models, potentially observable through MBH formation, evaporation and decay in high-energy and ultra-high-energy collisions, such as those explored at LHC and in cosmic ray experiments. From our preliminary investigations it appears that lepton distributions are less affected than photon ones and should thus be preferred for these MBH studies.
\section*{Acknowledgments}
This work was financed by the Slovenian Research Agency (ARRS) and by the Slovenian Ministry of Work, under the AD-FUTURA program. We wish to thank the organizers of the 13$^{th}$ International Conference on Nuclear Reaction Mechanisms for financial support and for stimulating discussions.
\section{Introduction}
Unconventional superconductivity in strongly correlated electron systems is attracting renewed interest
because it may be a platform for topological superconductivity accompanied by Majorana surface/edge/vortex/end
states~\cite{Qi-Zhang,Tanaka_review,Sato-Fujimoto_review}.
Although previous studies focused on the proximity-induced topological superconductivity in $s$-wave superconductor (SC)
heterostructures~\cite{Lutchyn2010,Mourik,Nadj-Perge,Fu-Kane2008,Sun2016}, natural $s$-wave SCs are mostly trivial
from the viewpoint of topology.
On the other hand, unconventional SCs may have topologically nontrivial properties originating from
non-$s$-wave Cooper pairing. In particular, time-reversal symmetry (TRS) broken chiral SCs~\cite{Read-Green}
and odd-parity spin-triplet SCs~\cite{Kitaev2001,Schnyder,Sato2010} are known to be candidates of
topological superconductivity.
However, from the viewpoint of materials science, evidence for chiral and/or odd-parity superconductivity has been reported
for only a few materials, such as URu$_2$Si$_2$~\cite{Kasahara2007,Yano2008,Kittaka2016}, SrPtAs~\cite{BiswasSrPtAs},
Sr$_2$RuO$_4$~\cite{Sr2RuO4_review2012}, Cu$_x$Bi$_2$Se$_3$~\cite{Sasaki2011}, and ferromagnetic heavy fermion
SCs~\cite{Aoki_review}.
Superconductivity in UPt$_3$ was discovered in the 1980's~\cite{Stewart}. Multiple superconducting phases illustrated in
Fig.~\ref{phasediagram}~\cite{Fisher,Bruls,Adenwalla,Tou_UPt3} unambiguously exhibit exotic Cooper pairing which is probably
categorized into the two-dimensional (2D) irreducible representation of point group $D_{\rm 6h}$~\cite{Sigrist-Ueda}.
After several theoretical proposals examined by experiments for more than three decades, the $E_{\rm 2u}$ representation
has been regarded as the most reasonable symmetry of superconducting order parameter~\cite{Sauls,Joynt}.
In particular, the multiple phase diagram in the temperature-magnetic field plane is naturally reproduced by assuming
a weak symmetry breaking term of hexagonal symmetry~\cite{Sauls}. Furthermore, a phase-sensitive measurement~\cite{Strand} and
the observation of spontaneous TRS breaking~\cite{Schemm} in the low-temperature and low-magnetic field B-phase,
which was predicted in the $E_{\rm 2u}$-state, support the $E_{\rm 2u}$ symmetry of superconductivity.
The order parameter of $E_{\rm 2u}$ symmetry represents odd-parity spin-triplet Cooper pairs.
Therefore, topologically nontrivial superconductivity is expected in UPt$_3$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=65mm]{crystal2.eps}
\caption{(Color online)
Crystal structure of UPt$_3$. Uranium ions form AB stacked triangular lattice.
2D vectors, ${\bm e}_i$ and ${\bm r}_i$, are shown by arrows.
}
\label{crystal}
\end{center}
\end{figure}
Furthermore, UPt$_3$ has an intriguing feature in the crystal structure, which is illustrated in Fig.~\ref{crystal}.
The symmetry of the crystal is represented by nonsymmorphic space group $P6_{3}/mmc$~\cite{Comment2};
glide and screw symmetries including a half translation along the {\it c}-axis are preserved
in spite of broken mirror and rotation symmetries. Exotic properties ensured by glide and/or screw symmetry
are one of the central topics in modern condensed matter physics.
This topic for SCs traces back to Norman's work in 1995 for UPt$_3$~\cite{Norman1995};
a counterexample of Blount's theorem~\cite{Blount,Kobayashi-Sato}.
The line nodal excitation predicted by Norman has been revisited by recent studies; group-theoretical
proof~\cite{Micklitz2009,Micklitz2017-1}, microscopic origin~\cite{Yanase_UPt3_Weyl}, and
topological protection~\cite{Kobayashi-Yanase-Sato} have been elucidated, and they have been confirmed
by a first principles-based calculation~\cite{Nomoto}.
Recent developments in the theory of nonsymmorphic topological states of matter~\cite{Fang-Fu2015,Shiozaki2015,Shiozaki2016,Varjas2015,Liu2016}
have uncovered novel topological insulators and SCs enriched by glide and/or screw symmetry, which are distinct from
those classified by the existing topological periodic table for symmorphic systems~\cite{Schnyder,Kitaev2009,Ryu2010,Morimoto2013,Shiozaki2014}.
Since eigenvalues of glide and two-fold screw operators are $4\pi$-periodic, a M\"obius structure appears in the wave function
and changes the topological classification.
Although such topological nonsymmorphic crystalline insulators have been proposed in KHgX (X = As, Sb, Bi)~\cite{KHgX,KHgX_ARPES}
and CeNiSn~\cite{CeNiSn}, topological nonsymmorphic crystalline superconductor (TNSC) has not been identified in materials.
In this paper we identify, by means of $K$-theory, the topological invariant specifying the TNSC, and demonstrate its nontrivial value
in UPt$_3$.
Multiband structures give rise to both intriguing and complicated aspects of many heavy fermion systems.
However, the band structure of UPt$_3$ has been shown to be rather simple~\cite{Joynt,Taillefer1988,Kimura_UPt3,McMullan}.
Fermi surfaces (FSs) are classified into two classes.
The FSs of one class enclose the $A$-point in the Brillouin zone (BZ) [band 1 and band 2 in Ref.~\onlinecite{McMullan}],
while those of the other class are centered on the $\Gamma$-point or $K$-point [bands 3, 4, and 5
in Ref.~\onlinecite{McMullan}].
The two classes are not hybridized in the surface states on the (100)-surface, where the glide symmetry is preserved.
Therefore, we can separately study the topological invariants and surface states arising from the multiband FSs.
The TNSC is attributed to the former FSs in Sec.~V.
The latter FSs are also accompanied by various topological surface states, for which we identify topological invariants in Sec.~VI.
The paper is organized as follows.
In Sec.~II, we introduce a minimal two-sublattice model for nonsymmorphic superconductivity in UPt$_3$.
In Sec.~IIB, Dirac nodal lines protected by $P6_3/mmc$ space group symmetry are proved.
In Sec.~IIC, the order parameter of $E_{\rm 2u}$ symmetry is explained.
The calculated surface states on the glide invariant (100)-surface are shown in Sec.~III.
In Sec.~IV, three-dimensional (3D) TNSCs of class DIII are classified on the basis of the $K$-theory.
In Sec.~V, we show that the glide-$\Z_2$ invariant characterizing the TNSC is nontrivial in UPt$_3$ A-phase.
The underlying origin of the TNSC accompanied by double Majorana cone surface states is discussed.
In Sec.~VI, we characterize other topological surface states by low-dimensional topological invariants
enriched by crystal mirror, glide, and rotation symmetries. Constraints on these topological invariants by
nonsymmorphic space group symmetry are also revealed. Finally, a brief summary is given in Sec.~VII.
Ingredients giving rise to rich topological properties of UPt$_3$ are discussed.
\section{Model}\label{sec:model}
\subsection{Nonsymmorphic two-sublattice model}
We study the superconducting state in UPt$_3$ by analyzing the Bogoliubov-de Gennes (BdG) Hamiltonian
for the nonsymmorphic two-sublattice model~\cite{Yanase_UPt3_Weyl},
\begin{align}
{\cal H}_{\rm BdG}&= \sum_{{\bm k},m,s} \xi({\bm k}) c_{{\bm k}ms}^\dagger c_{{\bm k}ms}
+ \sum_{{\bm k},s} \left[a({\bm k}) c^\dagger_{{\bm k}1s}c_{{\bm k}2s} + {\rm h.c.}\right]
\nonumber \\
& +\sum_{{\bm k},m,s,s'} \alpha_m {\bm g}({\bm k}) \cdot {\bm s}_{ss'}c^\dagger_{{\bm k}ms}c_{{\bm k}ms'}
\nonumber \\
& + \frac{1}{2} \sum_{{\bm k},m,m',s,s'} \left[\Delta_{mm'ss'}({\bm k}) c^\dagger_{{\bm k}ms}c^\dagger_{-{\bm k}m's'} + {\rm h.c} \right],
\label{eq:model}
\end{align}
where ${\bm k}$, $m=1,2$, and $s=\uparrow,\downarrow$ are indices of momentum, sublattice, and spin, respectively.
The last term represents the gap function, and the others constitute the normal-part Hamiltonian.
Taking into account the crystal structure of UPt$_3$ illustrated in Fig.~\ref{crystal},
we adopt an intra-sublattice kinetic energy,
\begin{align}
\xi({\bm k}) = 2 t \sum_{i=1,2,3}\cos{\bm k}_\parallel\cdot{\bm e}_i + 2 t_z \cos k_z -\mu,
\end{align}
and an inter-sublattice hopping term,
\begin{align}
a({\bm k}) = 2 t' \cos\frac{k_z}{2} \sum_{i=1,2,3} e^{i{\bm k}_\parallel\cdot{\bm r}_i},
\end{align}
with ${\bm k}_\parallel=(k_x,k_y)$. The basis translation vectors in two dimension are
${\bm e}_1 = (1,0)$, ${\bm e}_2 = (-\frac{1}{2},\frac{\sqrt{3}}{2})$, and
${\bm e}_3 = (-\frac{1}{2},-\frac{\sqrt{3}}{2})$. The interlayer neighboring vectors projected onto the
basal plane are given by ${\bm r}_1 = (\frac{1}{2},\frac{1}{2\sqrt{3}})$, ${\bm r}_2 = (-\frac{1}{2},\frac{1}{2\sqrt{3}})$,
and ${\bm r}_3 = (0,-\frac{1}{\sqrt{3}})$. These 2D vectors are illustrated in Fig.~\ref{crystal}.
Although the crystal point group is the centrosymmetric $D_{\rm 6h}$,
the local point group symmetry at the uranium ions is $D_{\rm 3h}$, which lacks inversion symmetry. Therefore, a Kane-Mele spin-orbit coupling
(SOC)~\cite{Kane-Mele} with the
$g$-vector~\cite{Saito_MoS2}
\begin{align}
{\bm g}({\bm k})= \hat{z} \sum_{i=1,2,3} \sin{\bm k}_\parallel\cdot{\bm e}_i,
\label{g-vector}
\end{align}
is allowed by symmetry. The coupling constant has to be sublattice-dependent,
$(\alpha_1,\alpha_2)=(\alpha,-\alpha)$,
so as to preserve the global $D_{\rm 6h}$ point group symmetry~\cite{Kane-Mele,Fischer,JPSJ.81.034702}.
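This condition can be checked directly. The inversion $I=\sigma_x$ (represented in the sublattice space; see Sec.~\ref{sec:Dirac-node}) exchanges the two sublattices and reverses the momentum, while the $g$-vector is odd in momentum, ${\bm g}(-{\bm k})=-{\bm g}({\bm k})$. Invariance of the SOC term under inversion therefore requires
\begin{align}
\alpha_2 \, {\bm g}({\bm k}) = \alpha_1 \, {\bm g}(-{\bm k}) = - \alpha_1 \, {\bm g}({\bm k}),
\end{align}
that is, $\alpha_2 = -\alpha_1$.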
Quantum oscillation measurements combined with band structure
calculations~\cite{Joynt,Taillefer1988,Kimura_UPt3,McMullan,Nomoto} have shown a pair of FSs
centered at the $A$-point ($A$-FSs) on the BZ face.
Interestingly, the paired bands are degenerate on the $A$-$L$ lines and form
Dirac nodal lines~\cite{Burkov-Hook-Balents2011},
which have been observed in de Haas-van Alphen experiments~\cite{McMullan}.
In the next subsection we show that the Dirac nodal lines are protected by the nonsymmorphic space group
symmetry of $P6_3/mmc$ (No.~194)~\cite{Yanase_UPt3_Weyl,Kobayashi-Yanase-Sato}.
Thus, the two $A$-FSs are naturally paired by the nonsymmorphic crystal symmetry.
By choosing the parameter set $(t,t_z,t',\alpha,\mu)=(1,-4,1,2,12)$, our two-band model reproduces
the paired $A$-FSs. In this paper we show that this peculiar band structure
results in exotic superconductivity in terms of symmetry and topology.
First principles band structure calculations also predict three FSs centered on the $\Gamma$-point ($\Gamma$-FSs),
and two FSs enclosing the $K$-point ($K$-FSs)~\cite{Joynt,Taillefer1988,Nomoto},
although the existence of the $K$-FSs is still under debate experimentally~\cite{McMullan}.
We show that a variety of topological surface states may arise from these bands.
A parameter set $(t,t_z,t',\alpha,\mu)=(1,4,1,0,16)$ reproduces one of the $\Gamma$-FSs, while
another set $(t,t_z,t',\alpha,\mu)=(1,-1,0.4,0.2,-5.2)$ is adopted for the $K$-FSs.
\subsection{Dirac nodal lines in space group $P6_3/mmc$}\label{sec:Dirac-node}
The single particle states are four-fold degenerate on the $A$-$L$ lines
[$\k = (0, k_y, \pi)$ and symmetric lines].
In addition to the usual Kramers degeneracy, the sublattice degree of freedom gives an additional two-fold degeneracy.
This feature is reproduced in the normal part Hamiltonian, because the inter-sublattice hopping vanishes
on the BZ face ($k_z =\pi$) and the SOC disappears on the $A$-$L$ lines.
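Explicitly, from the definition of $a({\bm k})$ and Eq.~(\ref{g-vector}),
\begin{align}
a({\bm k})\big|_{k_z=\pi} = 2 t' \cos\frac{\pi}{2} \sum_{i} e^{i{\bm k}_\parallel\cdot{\bm r}_i} = 0,
\end{align}
while on the $A$-$L$ line, ${\bm k}_\parallel=(0,k_y)$,
\begin{align}
{\bm g}(0,k_y) = \hat{z}\left[\sin 0 + \sin\frac{\sqrt{3}k_y}{2} + \sin\left(-\frac{\sqrt{3}k_y}{2}\right)\right] = 0.
\end{align}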
Below we show that the existence of Dirac line nodes is ensured by the space group symmetry.
First, we show the additional degeneracy in the absence of the SOC.
In the SU(2) symmetric case, the two spin states are equivalent and naturally degenerate.
Then, we can define the TRS, $T=K$, and the screw symmetry $S^z_\pi(k_z)$ in each spin sector,
where $K$ is the complex conjugation operator.
On the BZ face, $k_z=\pi$, we have $S^z_\pi(\pi)=i\sigma_y$, where $\sigma_i$ are the Pauli matrices in the sublattice space.
Because the combined magnetic-screw symmetry satisfies $\left[S^z_\pi(\pi)T\right]^2=-1$,
the two-fold degeneracy in each spin sector follows from the familiar Kramers theorem.
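For the representation used here, this is verified in one line: $i\sigma_y$ is a real matrix, so the antiunitary operator $S^z_\pi(\pi)T = i\sigma_y K$ satisfies
\begin{align}
\left[S^z_\pi(\pi)T\right]^2 = (i\sigma_y)(i\sigma_y)^* = (i\sigma_y)^2 = -1.
\end{align}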
Taking into account the spin-degeneracy, we obtain four-fold degenerate bands on the entire BZ face.
The four-fold degeneracy is partly lifted by the SOC. However, the degeneracy of two spinful bands
is protected on the $A$-$L$ lines, which is proved as follows.
The little group on the $A$-$L$ lines includes the rotation symmetry $IG^{xz}(k_z)$, mirror symmetry $M^{yz}$,
and magnetic-inversion symmetry $IT$.
We here represent $T=is_y K$, $I=\sigma_x$, and $M^{yz}= is_x$, respectively.
The glide symmetry is represented by $G^{xz}(k_z)=s_y \sigma_y$ at $k_z=\pm \pi$
while $G^{xz}(k_z)=is_y \sigma_x$ at $k_z=0$.
The nonsymmorphic character of this rotation symmetry is emphasized by denoting it as $IG^{xz}(k_z)$.
The symmetry operations satisfy the algebra
\begin{align}
&\left[IG^{xz}(\pi)\right]^2=-1,
\label{AL_algebra1}
\\
&\{IG^{xz}(\pi),IT\}=0,
\label{AL_algebra2}
\\
&\{IG^{xz}(\pi),M^{yz}\}=0.
\label{AL_algebra3}
\end{align}
The first relation ensures the sector decomposition into the $\lambda=\pm i$ eigenstates of the rotation operator.
Because the $IT$ symmetry is preserved in each subsector, the Kramers theorem holds.
The anti-commutation relation of two unitary symmetries, Eq.~(\ref{AL_algebra3}), shows that
a Kramers pair in one subsector has to be degenerate with another Kramers pair in the other subsector.
Therefore, the four-fold degeneracy on the $A$-$L$ lines is protected by symmetry.
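The mechanism can be made explicit: for an eigenstate $IG^{xz}(\pi)|\psi_\lambda\rangle = \lambda|\psi_\lambda\rangle$ with $\lambda=\pm i$, Eqs.~(\ref{AL_algebra2}) and (\ref{AL_algebra3}) give
\begin{align}
IG^{xz}(\pi)\left[IT|\psi_\lambda\rangle\right] &= -\lambda^* \, IT|\psi_\lambda\rangle = \lambda \, IT|\psi_\lambda\rangle,
\\
IG^{xz}(\pi)\left[M^{yz}|\psi_\lambda\rangle\right] &= -\lambda \, M^{yz}|\psi_\lambda\rangle,
\end{align}
since $IT$ is antiunitary while $M^{yz}$ is unitary. Thus, $IT$ generates the Kramers partner within each subsector, and $M^{yz}$ maps the Kramers pair to a degenerate pair in the opposite subsector.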
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=75mm]{FSpi.eps}
\caption{(Color online)
FSs on the BZ face, $k_z=\pi$. Thin red lines show the FSs in the presence of the SOC ($\alpha=1$),
while the thick black line shows the overlapping FSs in the absence of the SOC ($\alpha=0$).
Dashed lines show the $A$-$L$ lines in the first BZ.
We set parameters $(t,t_z,t',\mu)=(1,-4,1,12)$ to reproduce the $A$-FSs.
}
\label{Dirac_node}
\end{center}
\end{figure}
Figure~\ref{Dirac_node} shows the FSs in our model. It is illustrated that the FSs completely overlap
on the BZ face in the absence of the SOC. Although the SOC splits the FSs, the degeneracy remains on the $A$-$L$ lines.
These features are consistent with the above group-theoretical analysis.
The Dirac nodal lines in the $P6_3/mmc$ space group are a typical example of band degeneracy
protected by nonsymmorphic crystal symmetry~\cite{Young2012,Watanabe-Po-Vishwanath,Niu2016,Yang2017,Bradley}.
On the BZ face away from the $A$-$L$ lines, the non-Kramers degeneracy is lifted purely by the SOC.
Therefore, the SOC gives particularly significant effects on the BZ face.
This is the underlying origin of the SOC-induced nodal loop in the superconducting gap~\cite{Yanase_UPt3_Weyl,Kobayashi-Yanase-Sato}.
\subsection{Order parameter of $E_{\rm 2u}$ representation}\label{sec:order-parameter}
The multiple superconducting phases in UPt$_3$ have been reasonably attributed to two-component order parameters
in the $E_{\rm 2u}$ irreducible representation of $D_{\rm 6h}$ point group~\cite{Sauls,Joynt}.
The gap function is generally represented by
\begin{align}
\hat{\Delta}({\bm k}) = \eta_1 \hat{\Gamma}^{E_{\rm 2u}}_1 + \eta_2 \hat{\Gamma}^{E_{\rm 2u}}_2.
\end{align}
The two-component order parameters are parametrized as
\begin{align}
(\eta_1, \eta_2) = \Delta (1,i \eta)/\sqrt{1+\eta^2},
\end{align}
with a real variable $\eta$.
The basis functions $\hat{\Gamma}^{E_{\rm 2u}}_1$ and $\hat{\Gamma}^{E_{\rm 2u}}_2$
are admixtures of several harmonics.
Adopting the neighboring Cooper pairs in the crystal lattice of U ions, we obtain the basis functions
\begin{align}
&
\hat{\Gamma}^{E_{\rm 2u}}_1 = \Bigl[\delta \left\{p_x({\bm k})s_x - p_y({\bm k})s_y\right\} \sigma_0
\nonumber \\ & \hspace{8mm}
+ f_{(x^2-y^2)z}({\bm k})s_z \sigma_x - d_{yz}({\bm k})s_z \sigma_y \Bigr] i s_y,
\label{Gamma_1}
\\
&
\hat{\Gamma}^{E_{\rm 2u}}_2 = \Bigl[\delta \left\{p_y({\bm k})s_x + p_x({\bm k})s_y\right\} \sigma_0
\nonumber \\ & \hspace{8mm}
+ f_{xyz}({\bm k})s_z \sigma_x - d_{xz}({\bm k})s_z \sigma_y \Bigr] i s_y,
\label{Gamma_2}
\end{align}
which are composed of the $p$-wave, $d$-wave, and $f$-wave components given by
\begin{eqnarray}
p_{x}({\bm k}) = \sum_{i} e_i^{x} \sin {\bm k}_\parallel\cdot{\bm e}_i,
\label{SM_p1}
\\
p_{y}({\bm k}) = \sum_{i} e_i^{y} \sin {\bm k}_\parallel\cdot{\bm e}_i,
\label{SM_p2}
\end{eqnarray}
\begin{eqnarray}
d_{xz}({\bm k}) = - \sqrt{3} \sin\frac{k_z}{2} {\rm Im} \sum_{i} r_i^{x} e^{i {\bm k}_\parallel \cdot {\bm r}_i},
\label{SM_d1}
\\
d_{yz}({\bm k}) = - \sqrt{3} \sin\frac{k_z}{2} {\rm Im} \sum_{i} r_i^{y} e^{i {\bm k}_\parallel \cdot {\bm r}_i},
\label{SM_d2}
\end{eqnarray}
\begin{eqnarray}
f_{xyz}({\bm k}) = -\sqrt{3} \sin\frac{k_z}{2} {\rm Re} \sum_{i} r_i^{x} e^{i {\bm k}_\parallel \cdot {\bm r}_i},
\label{SM_f1}
\\
f_{(x^2-y^2)z}({\bm k}) = -\sqrt{3} \sin\frac{k_z}{2} {\rm Re} \sum_{i} r_i^{y} e^{i {\bm k}_\parallel \cdot {\bm r}_i}.
\label{SM_f2}
\end{eqnarray}
Pauli matrices in the spin and sublattice spaces are denoted by $s_i$ and $\sigma_i$, respectively.
The purely $f$-wave state has been intensively investigated,
and the phase diagram compatible with UPt$_3$ has been obtained~\cite{Sauls}. However, an admixture of
a $p$-wave component is allowed by symmetry and it changes the gap structure and topological
properties~\cite{Yanase_UPt3_Weyl}.
Thus, we here take into account a small $p$-wave component with $0 < |\delta| \ll 1$.
The small $p$-wave component does not alter the phase diagram consistent with experiments.
On the other hand, the predominantly $p$-wave state discussed in Ref.~\onlinecite{Nomoto} would fail to reproduce
the phase diagram.
Besides the $p$-wave component, a sublattice-singlet spin-triplet $d$-wave component accompanies
the $f$-wave component as a result of the nonsymmorphic crystal structure of UPt$_3$~\cite{Yanase_UPt3_Weyl}.
The neighboring Cooper pairs on the ${\bm r}_i$ bonds give equivalent amplitudes of the $d$-wave and $f$-wave
components in Eqs.~(\ref{Gamma_1}) and (\ref{Gamma_2}).
The $d$-wave order parameter plays a particularly important role in the superconducting gap on the BZ face, $k_z =\pi$.
Later we show that the TNSC is induced by the $d$-wave component.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=60mm]{multiplephase3.eps}
\caption{(Color online)
Multiple superconducting phases of UPt$_3$ in the magnetic field-temperature plane~\cite{Sauls,Joynt}.
The A-phase is identified as a TNSC.
The shaded region shows the Weyl superconducting phase~\cite{Yanase_UPt3_Weyl}.
Pair creation of Weyl nodes occurs at the phase boundary.
The dashed line indicates a topological phase transition, which is discussed in Sec.~\ref{sec:magnetic-rotation}.
}
\label{phasediagram}
\end{center}
\end{figure}
Now we review the multiple superconducting phases in UPt$_3$.
Three thermodynamically distinct superconducting phases are illustrated
in Fig.~\ref{phasediagram}~\cite{Fisher,Bruls,Adenwalla,Sauls,Joynt}.
The A-, B-, and C-phases are characterized by the ratio of the two-component order parameters,
$\eta=\eta_2/i\eta_1$, summarized in Table~\ref{tab1}.
A purely imaginary ratio of $\eta_1$ and $\eta_2$ in the B-phase implies a chiral superconducting state, which
maximally gains the condensation energy. Owing to the $p$-wave component, the B-phase is non-unitary.
It has been considered that the A- and C-phases are stabilized by a weak symmetry breaking of the hexagonal structure,
possibly induced by weak antiferromagnetic order~\cite{Sauls,Joynt,Aeppli,Hayden}.
We here assume that the A-phase is the $\Gamma_2$ state ($\eta=\infty$)
while the C-phase is the $\Gamma_1$ state ($\eta=0$), and take $\eta \ge 0$ without loss of generality.
\begin{table}[htbp]
{\renewcommand\arraystretch{1.2}
\begin{tabular}{c|c}
\hline
A-phase & $|\eta|=\infty$
\\ \hline
B-phase & $0 \le |\eta| \le \infty $
\\ \hline
C-phase & $\eta=0$
\\
\hline
\end{tabular}
}
\caption{Range of the parameter $\eta$ in the A-, B-, and C-phases of UPt$_3$~\cite{Joynt,Sauls}.
}
\label{tab1}
\end{table}
Contrary to the experimental indications of the $E_{\rm 2u}$-pairing state mentioned above,
a recent thermal conductivity measurement~\cite{Machida-Izawa} has been interpreted in terms of the
$E_{\rm 1u}$ symmetry of the {\it orbital part} of the order parameter.
However, this interpretation is not incompatible with the $E_{\rm 2u}$ symmetry of the total order parameter.
For instance, the basis functions, Eqs.~\eqref{Gamma_1} and \eqref{Gamma_2}, include components
$p_x({\bm k})s_x - p_y({\bm k})s_y$ and $p_y({\bm k})s_x + p_x({\bm k})s_y$, where the orbital part $p_x({\bm k})$ and $p_y({\bm k})$
belong to the $E_{\rm 1u}$ symmetry.
Although in Ref.~\onlinecite{Machida-Izawa} the superconducting state with TRS has been discussed
along with a theoretical proposal~\cite{Tsutsumi2012}, the spin part of the order parameter cannot
be deduced from thermal conductivity measurements. Thus, we here assume the $E_{\rm 2u}$-pairing state.
For clarity of the discussion of topological properties,
we carry out a unitary transformation of the BdG Hamiltonian.
When the model Eq.~(\ref{eq:model}) is represented in the Nambu space
\begin{align}
{\cal H}_{\rm BdG} = \frac{1}{2} \sum_{{\bm k}} \hat{c}_{\bm k}^\dagger \hat{H}_{\rm BdG}({\bm k}) \hat{c}_{\bm k},
\end{align}
with
\begin{align}
\hat{c}_{\bm k} = \left(c_{{\bm k}1\uparrow},c_{{\bm k}2\uparrow},c_{{\bm k}1\downarrow},c_{{\bm k}2\downarrow},c_{-{\bm k}1\uparrow}^{\dag},c_{-{\bm k}2\uparrow}^\dag,c_{-{\bm k}1\downarrow}^\dag,c_{-{\bm k}2\downarrow}^\dag \right)^{\rm T},
\end{align}
the BdG matrix $\hat{H}_{\rm BdG}({\bm k})$ in this form does not satisfy the periodicity compatible with
the first BZ.
To avoid this difficulty, we represent the BdG Hamiltonian by
\begin{align}
\tilde{H}_{\rm BdG}({\bm k}) =
U({\bm k}) \hat{H}_{\rm BdG}({\bm k}) U({\bm k})^{\dag},
\label{unitary_transformation}
\end{align}
using the unitary matrix
\begin{align}
U({\bm k}) =
\left(
\begin{array}{cc}
1 & 0 \\
0 & e^{i{\bm k} \cdot {\bm \tau}} \\
\end{array}
\right)_{\sigma}
\otimes
s_0
\otimes
\tau_0.
\end{align}
By choosing the translation vector, ${\bm \tau}=(0,-\frac{1}{\sqrt{3}},\frac{1}{2})$,
$\tilde{H}_{\rm BdG}({\bm k})$ is periodic with respect to the translation ${\bm k} \rightarrow {\bm k} + {\bm K}$
for any reciprocal lattice vector ${\bm K}$.
The transformed BdG Hamiltonian has the same form as Eq.~(\ref{eq:model}), although the inter-sublattice
components acquire the phase factor
\begin{align}
& a(\k) \rightarrow \tilde{a}(\k) \equiv a(\k) e^{- i{\bm k} \cdot {\bm \tau}} ,
\\
& f_i(\k) \rightarrow \tilde{f}_i(\k) \equiv f_i(\k) e^{- i{\bm k} \cdot {\bm \tau}} ,
\\
& d_i(\k) \rightarrow \tilde{d}_i(\k) \equiv d_i(\k) e^{- i{\bm k} \cdot {\bm \tau}}.
\end{align}
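As an explicit check of this choice, consider the in-plane reciprocal lattice vector ${\bm K}_1 = 2\pi(1,\frac{1}{\sqrt{3}},0)$, which satisfies ${\bm K}_1\cdot{\bm e}_1 = 2\pi$ and ${\bm K}_1\cdot{\bm e}_2 = 0$. One finds ${\bm K}_1\cdot{\bm r}_i = -\frac{2\pi}{3}$ (mod $2\pi$) for all three bonds, while ${\bm K}_1\cdot{\bm \tau} = -\frac{2\pi}{3}$, so the phases cancel,
\begin{align}
\tilde{a}({\bm k}+{\bm K}_1) = e^{-2\pi i/3} a({\bm k}) \, e^{+2\pi i/3} e^{-i{\bm k}\cdot{\bm \tau}} = \tilde{a}({\bm k}).
\end{align}
For ${\bm K}_z = 2\pi\hat{z}$, both $a({\bm k}) \propto \cos\frac{k_z}{2}$ and $e^{-i{\bm k}\cdot{\bm \tau}}$ change sign, so $\tilde{a}$ is again invariant; the same cancellation applies to the inter-sublattice pairing components.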
\section{Topological surface states}
We calculate the energy spectrum of quasiparticles with the surface normal to the (100)-axis, $E(\k_{\rm sf})=E(k_y,k_z)$,
because the nonsymmorphic glide symmetry is preserved there. Both the glide and screw symmetries
are broken for the other surface directions.
Figures~\ref{eGFSedgestate} and \ref{ZFSedgestate} show results for the $\Gamma$-FS and $A$-FSs, respectively.
The black regions represent the zero energy surface states.
It is revealed that a variety of zero energy surface states appear on the (100)-surface in the A-, B-, and C-phases.
Below we clarify the topological protection of these surface states;
indeed, all of the zero energy surface states turn out to be topologically protected.
In Figs.~\ref{eGFSedgestate} and \ref{ZFSedgestate}, the topological surface states discussed in
Secs.~\ref{sec:glide-DIII} and \ref{sec:Weyl} - \ref{sec:magnetic-rotation} are labeled by
(\ref{sec:glide-DIII}) and (A)-(E), respectively.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=180mm]{eGFSedgestate5.eps}
\caption{(Color online) Energy of surface states on the (100)-surface.
We impose open boundary condition along the [100]-direction and periodic boundary condition along the other directions.
The lowest excitation energy of BdG quasiparticles [$\equiv {\rm min} |E(\k_{\rm sf})|$]
as a function of the surface momentum ${\bm k}_{\rm sf} = (k_y, k_z)$ is shown.
Parameters $(t,t_z,t',\alpha,\mu,\Delta,\delta)=(1,4,1,0,16,4,0.02)$ are assumed so that
the $\Gamma$-FS is reproduced.
(a) C-phase ($\eta=0$), (b)-(d) B-phase ($\eta=0.7$, $1$, and $1.5$), and (e) A-phase ($\eta=\infty$).
Arrows with characters (A), (B), (C) and (E) indicate surface states clarified
in Secs.~\ref{sec:Weyl}, \ref{sec:mirror}, \ref{sec:glide-AIII}, and \ref{sec:magnetic-rotation}, respectively.
The green circles show the projections of Weyl point nodes.
}
\label{eGFSedgestate}
\end{center}
\end{figure*}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=180mm]{ZFSedgestate5.eps}
\caption{(Color online) (a)-(e) Energy of surface states on the (100)-surface
for parameters reproducing the paired $A$-FSs, $(t,t_z,t',\alpha,\mu,\Delta,\delta)=(1,-4,1,0,12,0.7,0.04)$.
(a) C-phase ($\eta=0$), (b)-(d) B-phase ($\eta=0.6$, $1$, and $2$), and (e) A-phase ($\eta=\infty$).
We choose $\alpha=2$ in (f) while the other parameters are the same as (c).
Comparison between (c) and (f) reveals the effect of the SOC.
Arrows with characters (V), (A), and (C) indicate surface states discussed
in Secs.~\ref{sec:glide-DIII}, \ref{sec:Weyl}, and \ref{sec:glide-AIII}, respectively.
The green circles show the projections of Weyl point nodes.
}
\label{ZFSedgestate}
\end{center}
\end{figure*}
Most panels of Figs.~\ref{eGFSedgestate} and \ref{ZFSedgestate} show the results for $\alpha=0$,
i.e., neglecting the SOC. Most of the surface states are indeed robust against the SOC.
Exceptionally, the surface states around $k_{z}=\pi$ are affected by the SOC,
because nodal bulk excitations may be induced by the SOC~\cite{Yanase_UPt3_Weyl,Kobayashi-Yanase-Sato}.
For our choice of parameters, the bulk excitation gap remains finite at $k_z=\pi$ for $\alpha=1$, although the gap
may be suppressed for $\alpha=2$.
Thus, we show the surface states for $\alpha=2$ in Fig.~\ref{ZFSedgestate}(f) for a comparison.
The gapless bulk excitations, which are absent for $\alpha=0$ [Fig.~\ref{ZFSedgestate}(c)], appear around the surface BZ boundary.
One of the main results of this paper is a signature of the TNSC in UPt$_3$,
which is indicated by the label (V) in Fig.~\ref{ZFSedgestate}(e).
This surface state is robust against the SOC unless the bulk excitation gap is closed.
According to first principles band structure calculations, the band splitting by the SOC is tiny
along the $A$-$H$ lines~\cite{Joynt}, and it is significantly reduced by the mass renormalization factor~\cite{Maruyama-Yanase2015},
$z \sim 1/100$ in UPt$_3$~\cite{Joynt}.
Thus, it is reasonable to assume a small SOC leading to the gapped bulk excitations at the BZ face.
This assumption is compatible with the recent field-angle-dependent thermal conductivity measurement
which has shown nodal lines/points lying away from the BZ face~\cite{Machida-Izawa}.
In the next section, superconducting phases of 3D DIII class with additional glide symmetry
are classified on the basis of the $K$-theory, and the topological invariants are derived.
In Sec.~\ref{sec:glide-DIII}, we show that a surface state labeled by (V) is protected
by the strong topological index characterizing the TNSC.
The topological protection of other surface states is revealed in Sec.~\ref{sec:low-dimension}.
\section{Classification of class DIII superconductors with glide symmetry}
\label{sec:classification}
Topological classification of the TNSC is carried out for both glide-even and glide-odd superconducting states
of class DIII. For simplicity, a cubic first BZ with volume $(2\pi)^3$ is assumed in this section.
We do not rely on any specific model, and therefore, the results obtained in this section
are valid for all the superconducting states preserving the glide symmetry and TRS.
\subsection{Glide-even superconductor}\label{sec:glide-even}
First, we study glide-even superconducting states.
The $\Gamma_2$-state (A-phase) of UPt$_3$ corresponds to this case.
The symmetries for the BdG Hamiltonian are summarized as
\begin{align}
&C {\cal H}(\bk) C^{-1} = -{\cal H}(-\bk), && C = \tau_x K,
\label{glide-even-algebra1} \\
&T {\cal H}(\bk) T^{-1} = {\cal H}(-\bk), && T = i s_y K,
\label{glide-even-algebra2} \\
&G(\bk) {\cal H}(\bk) G^{-1}(\bk) = {\cal H}(m_y\bk), && G(m_y \bk) G(\bk) = - e^{-i k_z},
\label{glide-even-algebra3} \\
&T G(\bk) = G(-\bk) T, && C G(\bk) = G(-\bk) C,
\label{glide-even-algebra4}
\end{align}
where $m_y \bk = (k_x, -k_y, k_z)$ is the momentum flipped by glide operation,
and $K$ is the complex conjugation operator.
The stable classification of bulk superconductors is given by the $K$-theory
over the bulk 3D BZ torus with symmetries (\ref{glide-even-algebra1})-(\ref{glide-even-algebra4}).
From Ref.~\onlinecite{Shiozaki2016}, the result is
\begin{equation}\begin{split}
&({\rm The\ stable\ classification\ of\ bulk\ gapped\ SCs}) \\
&= \underbrace{\bf Z_2}_{(k_x,k_y,k_z)} \oplus \underbrace{\Z_2 \oplus {\bf Z_2}}_{(k_x,k_z)} \oplus \underbrace{\bf Z_2}_{(k_y,k_z)} \oplus \underbrace{\Z_2}_{(k_z)}.
\end{split}\label{eq:diii+gr_even_bulk}\end{equation}
The bold-style ${\bf Z_2}$ expresses an emergent topological phase which disappears if the glide symmetry is broken.
Each underbrace represents the momentum dependence of the generating Hamiltonian.
For instance, $\underbrace{\Z_2}_{(k_x,k_z)}$ means that the generating Hamiltonian of the $\Z_2$ phase can be $k_y$-independent,
that is, the stacking of layered Hamiltonians $H_y(k_x,k_z)$ in the $xz$-plane along the $y$-direction.
We focus on the gapless states on the surfaces preserving the glide symmetry, i.e., the $x=$ constant surfaces.
The classification of the surface gapless states is given by a similar $K$-theory over the surface 2D BZ torus
under the same symmetries (\ref{glide-even-algebra1})-(\ref{glide-even-algebra4}) with the $k_x$-direction excluded.
The bulk-boundary correspondence holds:~\cite{Shiozaki-Sato-Gomi2017} the $K$-group for the surface gapless states is given by the
direct summand whose generators depend on $k_x$ in Eq.\ (\ref{eq:diii+gr_even_bulk}).
Thus,
\begin{align}
& \left(\begin{array}{ll}
{\rm The\ classification\ of\ gapless\ states} \\
{\rm on\ the\ } x= {\rm constant\ surface}
\end{array}\right)
\nonumber \\
& = \underbrace{\bf Z_2}_{(k_x,k_y,k_z)} \oplus \underbrace{\Z_2 \oplus {\bf Z_2}}_{(k_x,k_z)}.
\end{align}
All three $\Z_2$ invariants relevant to the surface gapless states are constructed on the $k_z=\pi$ plane~\cite{Shiozaki2016}.
At $k_z = \pi$, the glide symmetry is reduced into the mirror symmetry
\begin{align}
&G(k_x,k_y,\pi) {\cal H}(k_x,k_y,\pi) G^{-1}(k_x,k_y,\pi) = {\cal H}(k_x,-k_y,\pi),
\\
& G(k_x,-k_y,\pi) G(k_x,k_y,\pi) = 1,
\\
&T G(k_x,k_y,\pi) = G(-k_x,-k_y,\pi) T,
\\
& C G(k_x,k_y,\pi) = G(-k_x,-k_y,\pi) C.
\end{align}
On the $k_y = \Gamma_y \equiv 0, \pi$ lines, since the TRS and the particle-hole symmetry (PHS) commute with the glide symmetry,
we can define the $\Z_2$ invariants $\nu(\Gamma_y,\pm) \in \{0,1\}$ of one-dimensional (1D) class DIII SCs
for each glide-subsector $G(k_x,\Gamma_y,\pi) = \pm 1$,
\begin{widetext}
\begin{align*}
\nu(\Gamma_y,\pm)
= \frac{i}{\pi} \oint_0^{2 \pi} d k_x \sum_n \Braket{u^{(I)}_{\pm,n}(k_x,\Gamma_y,\pi) | \partial_{k_x} | u^{(I)}_{\pm,n}(k_x,\Gamma_y,\pi)} \ \ ({\rm mod\ }2), &&
(\Gamma_y = 0, \pi),
\end{align*}
\end{widetext}
where $u^{(I)}_{\pm,n}(k_x,\Gamma_y,\pi)$ represents one of the Kramers pair of occupied states in the glide-subsector
$G(k_x,\Gamma_y,\pi) = \pm 1$.
Noticing that the combined symmetries $TG(k_x,k_y,\pi)$ and $CG(k_x,k_y,\pi)$,
\begin{align}
&\left[TG(k_x,k_y,\pi)\right] {\cal H}(k_x,k_y,\pi) \left[TG(k_x,k_y,\pi)\right]^{-1} \nonumber \\ &= {\cal H}(-k_x,k_y,\pi),
\\
& TG(-k_x,k_y,\pi)TG(k_x,k_y,\pi) = -1,
\\
&\left[CG(k_x,k_y,\pi)\right] {\cal H}(k_x,k_y,\pi) \left[CG(k_x,k_y,\pi)\right]^{-1} \nonumber \\ &= -{\cal H}(-k_x,k_y,\pi),
\\
& CG(-k_x,k_y,\pi)CG(k_x,k_y,\pi) = 1,
\end{align}
indicate the emergent class DIII symmetry for all $k_y$,
we have a constraint
\begin{align}
\nu(0,+) + \nu(0,-) = \nu(\pi,+) + \nu(\pi,-), \ \ ({\rm mod\ }2).
\end{align}
Because of this emergent class DIII symmetry, all the surface states on the $k_z= \pi$ plane show two-fold degeneracy.
The three kinds of surface states may be generated by
\begin{widetext}
\begin{align}
&\underbrace{\bf Z_2}_{(k_x,k_y,k_z)} : && \bigl(\nu(0,+), \nu(0,-); \nu(\pi,+), \nu(\pi,-)\bigr) = (1,1;0,0) \ {\rm or\ } (0,0;1,1),
\label{glide_strong_Z2}
\\
&\underbrace{\Z_2}_{(k_x,k_z)} : && \bigl(\nu(0,+), \nu(0,-); \nu(\pi,+), \nu(\pi,-)\bigr) = (1,0;1,0), (1,0;0,1), (0,1;1,0) \ {\rm or\ } (0,1;0,1),
\label{glide_weak_Z2flat}
\\
&\underbrace{\bf Z_2}_{(k_x,k_z)} : && \bigl(\nu(0,+), \nu(0,-); \nu(\pi,+), \nu(\pi,-)\bigr) = (1,1;1,1).
\label{glide_weak_Z2}
\end{align}
\end{widetext}
Here, $\underbrace{\Z_2}_{(k_x,k_z)}$ represents the flat surface band on the $k_z = \pi$ plane.
The $\underbrace{\bf Z_2}_{(k_x,k_y,k_z)}$ is the strong index of the TNSC, which we denote as the
glide-$\Z_2$ invariant $\nu_{\rm G}$.
It is given by
\begin{align}
\nu_{\rm G} = \nu(0,+) \nu(0,-) - \nu(\pi,+) \nu(\pi,-) \,\,\,\,\, ({\rm mod} \,\,\, 2).
\label{glide_Z2}
\end{align}
Later, we show that the glide-$\Z_2$ invariant $\nu_{\rm G}$ is nontrivial in the A-phase of UPt$_3$.
\subsection{Glide-odd superconductor}\label{glide-odd}
Next, we study glide-odd superconducting states, which may be realized in the $\Gamma_1$-state of UPt$_3$ (C-phase).
Symmetries for the BdG Hamiltonian are
\begin{align}
&T {\cal H}(\bk) T^{-1} = {\cal H}(-\bk), && T = i s_y K,
\label{glide-odd-algebra1}
\\
&C {\cal H}(\bk) C^{-1} = -{\cal H}(-\bk), && C = \tau_x K,
\label{glide-odd-algebra2}
\\
&G(\bk) {\cal H}(\bk) G^{-1}(\bk) = {\cal H}(m_y\bk), && G(m_y \bk) G(\bk) = -e^{-i k_z},
\label{glide-odd-algebra3}
\\
&T G(\bk) =G(-\bk) T, && C G(\bk) = -G(-\bk) C.
\label{glide-odd-algebra4}
\end{align}
From Ref.~\onlinecite{Shiozaki2016}, the $K$-theory classification of the bulk reads
\begin{equation}\begin{split}
&({\rm The\ stable\ classification\ of\ bulk\ gapped\ SCs}) \\
&= \underbrace{\Z \oplus {\bf Z_2}}_{(k_x,k_y,k_z)} \oplus \underbrace{\bf Z_4}_{(k_x,k_z)} \oplus \underbrace{\Z_2 \oplus {\bf Z_2}}_{(k_y,k_z)} \oplus \underbrace{\Z_2}_{(k_z)}.
\end{split}\label{eq:diii+gr_odd_bulk}\end{equation}
The bold-style indices express emergent topological phases which require the glide symmetry.
From the bulk-boundary correspondence,
it holds that
\begin{align}
& \left(\begin{array}{ll}
{\rm The\ classification\ of\ surface\ states} \\
{\rm on\ the\ } x= {\rm constant\ surface}
\end{array}\right)
\nonumber \\ &
= \underbrace{\Z \oplus {\bf Z_2}}_{(k_x,k_y,k_z)} \oplus \underbrace{\bf Z_4}_{(k_x,k_z)}.
\end{align}
The 3D $\underbrace{\Z}_{(k_x,k_y,k_z)}$ index is the ordinary winding number~\cite{Schnyder},
\begin{align}
N := \frac{1}{48 \pi^2} \int {\rm tr} \Gamma ({\cal H}^{-1} d {\cal H})^3, && \Gamma = i T C.
\end{align}
By imposing the glide symmetry, we have two $\Z_4$ invariants $\theta(\Gamma_y = 0, \pi) \in \{0,1,2,3\}$
on the glide invariant $k_y = 0$ and $\pi$ planes~\cite{Shiozaki2016},
\begin{widetext}
\begin{align}
\theta(\Gamma_y):= \frac{2 i}{\pi}
\Big[ \oint_0^{2 \pi} d k_x {\rm tr} {\cal A}^{(I)}_{+}(k_x, \Gamma_y, \pi) + \frac{1}{2} \int_0^{\pi} d k_z \oint_0^{2 \pi} d k_x {\rm tr} {\cal F}_+(k_x,\Gamma_y,k_z) \Big] \ \ ({\rm mod\ } 4), &&
(\Gamma_y = 0, \pi).
\end{align}
\end{widetext}
Here, ${\cal A}^{(I)}_{+}(k_x, \Gamma_y, \pi)$ and ${\cal F}_+(k_x,\Gamma_y,k_z)$ are the Berry connection of one of Kramers pair of occupied states and
the Berry curvature of the occupied states, respectively, with the positive glide eigenvalue $G(k_x,\Gamma_y,\pi) = 1$.
Modulo 2, $\theta(\Gamma_y)$ is recast into the $\Z_2$ invariant on the $(k_x,\Gamma_y,k_z=0)$ lines as~\cite{Shiozaki2016}
\begin{align}
\theta(\Gamma_y) = \frac{i}{\pi} \oint_0^{2 \pi} d k_x {\rm tr} {\cal A}_{+}(k_x, \Gamma_y, 0) \ \ ({\rm mod\ } 2),
\end{align}
by Stokes' theorem.
The three invariants $\{ N, \theta(0), \theta(\pi) \}$ are not independent, since there is a constraint
\begin{align}
N + \theta(0) + \theta(\pi) = 0 \ \ ({\rm mod\ }2),
\label{eq:constraint}
\end{align}
which can be understood as follows.
On the $k_z=0$ plane, the $\Z_2$ invariant $\nu = \theta(0) + \theta(\pi) \ ({\rm mod\ }2)$ is equivalent
to the 2D class DIII $\Z_2$ invariant.
Since one can show that an odd number of Majorana cones is allowed only on the $k_z=0$ plane,
$N$ (mod 2) is also equivalent to the $\Z_2$ invariant.
Therefore, $N = \nu \ ({\rm mod \ 2})$, which implies Eq.~(\ref{eq:constraint}).
\section{Topological nonsymmorphic superconductivity in A-phase}\label{sec:glide-DIII}
Now we go back to the superconductivity in UPt$_3$.
Let us focus on the surface zero mode at $\bk_{\rm sf} =(0,\pi)$ in the TRS invariant A-phase.
Naturally, the $A$-FSs are considered in this section.
The surface states labeled by (V) in Fig.~\ref{ZFSedgestate}(e) have the spectrum shown in Fig.~\ref{glide-cone}.
As we proved in Sec.~\ref{sec:glide-even}, the quasiparticle states are two-fold degenerate on the
$\k_{\rm sf} = (k_y, \pi)$ line in the glide-even A-phase.
Therefore, the spectrum of surface states shows a double Majorana cone with four zero energy states at $\bk_{\rm sf} =(0,\pi)$.
In this section we show that the surface double Majorana cone is protected by the strong $\mathbb{Z}_2$ index $\nu_{\rm G}$
for glide-even TNSCs, which has been introduced in Eq.~\eqref{glide_Z2}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=85mm]{glide_cone.eps}
\caption{(Color online)
Double Majorana cone in the A-phase.
Energy spectrum of (100)-surface states around $\k_{\rm sf}=(0,\pi)$ is shown.
Parameters are the same as Fig.~\ref{ZFSedgestate}(e) for the paired $A$-FSs.
}
\label{glide-cone}
\end{center}
\end{figure}
The glide symmetry of the $P6_3/mmc$ space group is $G^{xz} = \{M^{xz}|\frac{z}{2}\}$, composed of a mirror reflection
and a half translation along the $z$-axis. Thus, the nonsymmorphic glide operator is intrinsically $k_z$-dependent.
For the normal-part Hamiltonian the operator is $G^{xz}(k_z) = i s_y \sigma_x V_{\sigma}(k_z)$, where
\begin{align}
&
V_{\sigma}(k_z) =
\left(
\begin{array}{cc}
1 & 0 \\
0 & e^{-i k_z} \\
\end{array}
\right)_{\sigma},
\end{align}
acts in the sublattice space.
The superconducting state preserves the glide symmetry in the TRS invariant A- and C-phases, although
the glide symmetry is spontaneously broken in the B-phase.
The glide operator in the Nambu space depends on the glide-parity of the superconducting state;
$G^{xz}_{\rm BdG}(k_z) = G^{xz}(k_z) \tau_0$ in the glide-even A-phase while
$G^{xz}_{\rm BdG}(k_z) = G^{xz}(k_z) \tau_z$ in the glide-odd C-phase.
Then, the BdG Hamiltonian respects the glide symmetry
\begin{align}
&
G^{xz}_{\rm BdG}(k_z) \tilde{H}_{\rm BdG}(\k) G^{xz}_{\rm BdG}(k_z)^{-1} = \tilde{H}_{\rm BdG}(k_x,-k_y,k_z).
\nonumber \\
\label{glide-symmetry}
\end{align}
The symmetries satisfy the algebra (\ref{glide-even-algebra1})-(\ref{glide-even-algebra4})
and (\ref{glide-odd-algebra1})-(\ref{glide-odd-algebra4}) in the A-phase and C-phase, respectively.
\subsection{Glide-$\Z_2$ invariant}\label{1D_invariant}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=70mm]{BZ2.eps}
\caption{(Color online) Unfolded BZ (solid line) and folded BZ (dashed line)
projected onto a $k_z =$ constant plane.
The latter is compatible with the surface BZ.
$\K_x = 2 \pi \hat{x}$ and $\K_y = \frac{2 \pi}{\sqrt{3}} \hat{y}$ are reciprocal lattice vectors of
the folded BZ.
}
\label{folded_BZ}
\end{center}
\end{figure}
As we showed in Sec.~\ref{sec:glide-even}, only the 2D plane at $k_z=\pi$ determines the topological properties of
the glide-even A-phase. From Eq.~(\ref{glide_Z2}), the glide-$\Z_2$ invariant $\nu_G$ is given by
the 1D $\Z_2$ invariant of DIII class, $\nu(\Gamma_y,\pm)$.
When we choose the rectangular BZ shown in Fig.~\ref{folded_BZ}, we have $\Gamma_y=0,\pi/\sqrt{3}$.
For our choice of parameters, the FSs do not cross the line $k_y=\pi/\sqrt{3}$ on the BZ face.
Therefore, $\nu(\pi/\sqrt{3},\pm)$ is trivial, and the glide-$\Z_2$ invariant is obtained by evaluating $\nu(0,\pm)$.
Below, we show that $\nu(0,\pm) = 1$, and thus, the glide-$\Z_2$ invariant is nontrivial.
First, the Hamiltonian is block-diagonalized using the basis that diagonalizes $G^{xz}_{\rm BdG}(\pi) = s_y \sigma_y \tau_0$,
\begin{align}
\tilde{H}_{\rm BdG}(k_x,0,\pi) = \tilde{H}_{1}^{\rm 1d}(k_x) \oplus \tilde{H}_{-1}^{\rm 1d}(k_x),
\end{align}
on the 1D BZ $k_x \in [-2 \pi,2 \pi]$.
The glide-subsector with eigenvalue $\lambda_{\rm G}=\pm 1$ is obtained as,
\begin{align}
\tilde{H}_{\pm 1}^{\rm 1d}(k_x) &=
\left(
\begin{array}{cc}
\hat{H}_{\pm 1}^{(0)}(k_x) & \hat{\Delta}(k_x) \\
\hat{\Delta}(k_x)^\dag & - \hat{H}_{\pm 1}^{(0)}(-k_x)^T \\
\end{array}
\right),
\end{align}
with
\begin{align}
\hat{H}_{\pm 1}^{(0)}(k_x) &=
\left(
\begin{array}{cc}
\xi^{\rm 1d}(k_x) & \mp \alpha g^{\rm 1d}(k_x) \\
\mp \alpha g^{\rm 1d}(k_x) & \xi^{\rm 1d}(k_x) \\
\end{array}
\right),
\\
\hat{\Delta}(k_x) &= i
\Delta
\left(
\begin{array}{cc}
i \delta p_{x}^{\rm 1d}(k_x) & - d_{xz}^{\rm 1d}(k_x) \\
- d_{xz}^{\rm 1d}(k_x) & i \delta p_{x}^{\rm 1d}(k_x) \\
\end{array}
\right).
\end{align}
We defined $\xi^{\rm 1d}(k_x) \equiv \xi(k_x,0,\pi)$,
$g^{\rm 1d}(k_x) \equiv g(k_x,0,\pi) = \sin k_x - 2 \sin \frac{k_x}{2}$,
$p_{x}^{\rm 1d}(k_x) \equiv p_x(k_x,0,\pi) = \sin k_x + \sin \frac{k_x}{2}$,
and $d_{xz}^{\rm 1d}(k_x) \equiv d_{xz}(k_x,0,\pi) = -\sqrt{3} \sin \frac{k_x}{2}$.
Thus, the glide-subsector is equivalent to the TRS invariant $p$-wave SC.
It is easy to confirm that both TRS and PHS are preserved in each glide-subsector as expected from
Sec.~\ref{sec:glide-even}.
In the A-phase we adopt the time-reversal operator in the Nambu space, $T_{\rm BdG} =i T \tau_z$,
since the gap function is chosen to be purely imaginary.
Then, the commutation relation $\left[C, T_{\rm BdG} \right] =0$ is satisfied.
Although the inversion symmetry is broken in the glide-subsector by the SOC,
we can adiabatically eliminate the SOC as $\alpha \rightarrow 0$, unless the SOC is large enough
to suppress the superconducting gap~\cite{Yanase_UPt3_Weyl,Kobayashi-Yanase-Sato}.
Then, the glide-subsector reduces to an odd-parity spin-triplet SC, and the $\mathbb{Z}_2$ invariant
is obtained by counting the number of Fermi points $N(\lambda_{\rm G})$ (per Kramers pair)
between the time-reversal invariant momenta $k_x=0$ and $2 \pi$~\cite{Sato2010}.
Since each glide-subsector represents a single band model with $N(\pm 1)=1$,
the nontrivial $\mathbb{Z}_2$ invariant, $\nu(0,\pm) =1$ (mod 2), is obtained from the formula
$(-1)^{\nu(0,\pm)} = (-1)^{N(\pm 1)}$.
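As a concrete check with our parameters $(t,t_z,\mu)=(1,-4,12)$ for the paired $A$-FSs, the dispersion in the glide-subsector at $\alpha=0$ is $\xi^{\rm 1d}(k_x) = 2t(\cos k_x + 2\cos\frac{k_x}{2}) - 2t_z - \mu$, so that
\begin{align}
\xi^{\rm 1d}(0) &= 6t - 2t_z - \mu = 2 > 0,
\\
\xi^{\rm 1d}(2\pi) &= -2t - 2t_z - \mu = -6 < 0.
\end{align}
The band therefore crosses the Fermi level an odd number of times between the two time-reversal invariant momenta, consistent with $N(\pm 1)=1$.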
Now we conclude that the glide-$\Z_2$ invariant is nontrivial, namely, $\nu_{\rm G}=1$,
because
\begin{align}
\Bigl(\nu(0,+), \nu(0,-); \nu(\pi/\sqrt{3},+), \nu(\pi/\sqrt{3},-)\Bigr) = (1,1;0,0).
\end{align}
This is the strong topological index characterizing the TNSC with even glide-parity.
It should be noticed that the paired FSs and the sublattice-singlet $d$-wave pairing are essential ingredients.
Both of them are ensured by the nonsymmorphic space group symmetry (see Secs.~\ref{sec:Dirac-node} and \ref{sec:order-parameter}).
The pseudospin degree of freedom in the glide-subsector corresponds to the pair of FSs.
Although the $f$-wave component in the order parameter disappears on the glide invariant plane $k_y =0$,
the $d$-wave component induces the superconducting gap and gives rise to 1D $\Z_2$ nontrivial superconductivity.
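This is seen directly: on the glide invariant plane we have $\sum_{i} r_i^{x}\, e^{i{\bm k}_\parallel\cdot{\bm r}_i}\big|_{k_y=0} = \frac{1}{2}e^{ik_x/2} - \frac{1}{2}e^{-ik_x/2} = i\sin\frac{k_x}{2}$, which is purely imaginary, so Eqs.~(\ref{SM_d1}) and (\ref{SM_f1}) yield
\begin{align}
f_{xyz}(k_x,0,k_z) &= 0,
\\
d_{xz}(k_x,0,k_z) &= -\sqrt{3}\sin\frac{k_z}{2}\sin\frac{k_x}{2}.
\end{align}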
The topological surface state protected by the glide-$\Z_2$ invariant should appear as a signature of the TNSC.
Because the two glide-subsectors discussed above are TRS invariant and $\mathbb{Z}_2$ nontrivial,
two Majorana states per subsector, namely, four Majorana states in total, appear
on the glide invariant (100)-surface. Indeed, the double Majorana cone centered at
$\k_{\rm sf}=(0,\pi)$ (Fig.~\ref{glide-cone}) is the characteristic topological surface state of the glide-even TNSC.
\begin{table}[htbp]
{\renewcommand\arraystretch{1.2}
\begin{tabular}{c|c|c|c|c|c|c}
& $k_z$ & $\left(G_{\rm BdG}^{xz}\right)^2$ & $\eta_T$ & $\eta_C$ & 1D invariant & 2D invariant \\
\hline
C-phase & 0 & -1 & 1 & -1 & $\mathbb{Z}_2$ & $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ \\ \cline{2-7}
($\eta = 0$) & $\pi$ & 1 & 1 & -1 & $0$ & $0$ \\ \hline
A-phase & 0 & -1 & 1 & 1 & $\mathbb{Z}$ & $\mathbb{Z}$ \\ \cline{2-7}
($\eta = \infty$) & $\pi$ & 1 & 1 & 1 & $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ & $\mathbb{Z}_2$ \\
\hline
\end{tabular}
}
\caption{Classification of 1D and 2D BdG Hamiltonian in the TRS invariant A- and C-phases.
The low-dimensional Hamiltonian on the basal plane ($k_z=0$) and the BZ face ($k_z=\pi$) is classified.
We show $\left(G_{\rm BdG}^{xz}\right)^2$, $\eta_T$, and $\eta_C$. (Anti-)commutation relations with
time-reversal and particle-hole operators are represented as
$T_{\rm BdG} \, G_{\rm BdG}^{xz} = \eta_T \, G_{\rm BdG}^{xz} \, T_{\rm BdG}$ and
$C \, G_{\rm BdG}^{xz} = \eta_C \, G_{\rm BdG}^{xz} \, C$.
The right two columns show the 1D topological index on the $(k_y, k_z) =(0,0)$ and $(0,\pi)$ lines and
the 2D topological index on the $k_z=0$ and $\pi$ planes.
}
\label{table:glide}
\end{table}
For confirmation, we show the topological indices of 1D Hamiltonian along the $\k = (k_x,0,0)$ and $(k_x,0,\pi)$ lines
and 2D Hamiltonian on the $k_z=0$ and $\pi$ planes in Table~\ref{table:glide}.
For these low-dimensional Hamiltonians, the glide operator is momentum-independent,
and therefore the topological classification can be carried out without taking care of the nonsymmorphic property.
The (anti-)commutation relations of symmetry operators are summarized in Table~\ref{table:glide},
and accordingly the topological indices are obtained on the basis of the periodic table for symmorphic
topological crystalline insulators and SCs~\cite{Shiozaki2014}.
Indeed, in the A-phase we have the $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ index for the 1D Hamiltonian
on the $\k = (k_x, 0, \pi)$ line, which is nothing but $\nu(0,\pm)$.
The $\mathbb{Z}_2$ index of 2D Hamiltonian on the $k_z=\pi$ plane is equivalent to the glide-$\Z_2$ invariant
discussed in this section.
On the other hand, the $k_z=\pi$ plane is trivial in the glide-odd C-phase, consistent with the absence of
topological surface states in Fig.~\ref{ZFSedgestate}(a).
\subsection{Folded Brillouin zone}\label{2D_invariant}
For consistency with the classification based on the $K$-theory in Sec.~\ref{sec:classification},
we need to consider the folded Brillouin zone compatible with the surface BZ.
To be specific, the translation symmetry along the [010]-axis is partially broken on the (100)-surface.
The basic translation vectors on the surface are $(y,z)=(\sqrt{3},0)$ and $(0,1)$,
and the reciprocal lattice vectors are $\K_y = \frac{2}{\sqrt{3}}\pi \hat{y}$ and
$\K_z = 2\pi \hat{z}$. Thus, the surface first BZ is a rectangle with
$k_y \in [-\pi/\sqrt{3},\pi/\sqrt{3})$ and $k_z \in [-\pi,\pi)$.
We have already adopted the bulk BZ compatible with the surface BZ. However, the
periodicity with respect to $\K_y$ is lost in the BdG Hamiltonian.
To satisfy the periodicity, we equate $\k$ with $\k + \K_y$, and accordingly, adopt the folded BZ
in Fig.~\ref{folded_BZ}.
The folded BdG Hamiltonian is obtained as
\begin{align}
\widehat{H}_{\rm BdG}(\k) =
U_{\rm sf}(\k)
\left(
\begin{array}{cc}
\tilde{H}_{\rm BdG}(\k) & 0 \\
0 & \tilde{H}_{\rm BdG}(\k + \K_y) \\
\end{array}
\right)_\rho
U_{\rm sf}(\k)^\dag,
\end{align}
with the unitary matrix
\begin{align}
U_{\rm sf}(\k) =
\frac{1}{\sqrt{2}}
\left(
\begin{array}{cc}
1 & 0 \\
0 & e^{i \k \cdot {\bm \tau}'} \\
\end{array}
\right)_\rho
\left(
\begin{array}{cc}
1 & 1 \\
1 & -1 \\
\end{array}
\right)_\rho,
\label{folded_BdG_Hamiltonian}
\end{align}
with ${\bm \tau}'=(\frac{1}{2}, \frac{\sqrt{3}}{2}, 0)$.
It is easy to check the periodicity of the folded BdG Hamiltonian,
$\widehat{H}_{\rm BdG}(\k + \K_i)
= \widehat{H}_{\rm BdG}(\k)$ with respect to $\K_x = 2\pi \hat{x}$, $\K_y$, and $\K_z$.
The glide symmetry is recast as
\begin{align}
&
\widehat{G}^{xz}_{\rm BdG}(\k) \, \widehat{H}_{\rm BdG}(\k) \, \widehat{G}^{xz}_{\rm BdG}(\k)^{-1} = \widehat{H}_{\rm BdG}(k_x,-k_y,k_z),
\label{glide-symmetry-folded}
\end{align}
using the glide operator for the folded BdG Hamiltonian,
\begin{align}
&
\widehat{G}^{xz}_{\rm BdG}(\k) =
G^{xz}_{\rm BdG}(k_z) \otimes
\left(
\begin{array}{cc}
1 & 0 \\
0 & e^{-i \sqrt{3} k_y} \\
\end{array}
\right)_{\rho}.
\label{glide-operator-folded}
\end{align}
The TRS and PHS are also preserved,
\begin{align}
&
T_{\rm BdG} \, \widehat{H}_{\rm BdG}(\k) \, T_{\rm BdG}^{-1} = \widehat{H}_{\rm BdG}(-\k),
\\
& C \, \widehat{H}_{\rm BdG}(\k) \, C^{-1} = - \widehat{H}_{\rm BdG}(-\k).
\label{TC-symmetry-folded}
\end{align}
The symmetry operators satisfy the following relations,
\begin{align}
&
T_{\rm BdG} \, C = C \, T_{\rm BdG},
\\
&
\widehat{G}^{xz}_{\rm BdG}(m_y \k) \, \widehat{G}^{xz}_{\rm BdG}(\k) = - e^{-i k_z},
\\
&
T_{\rm BdG} \, \widehat{G}^{xz}_{\rm BdG}(\k) = \widehat{G}^{xz}_{\rm BdG}(-\k) \, T_{\rm BdG},
\\
&
C \, \widehat{G}^{xz}_{\rm BdG}(\k) = \pm \widehat{G}^{xz}_{\rm BdG}(-\k) \, C.
\label{symmetry-class-folded}
\end{align}
The signs $+$ and $-$ in Eq.~(\ref{symmetry-class-folded}) correspond to the A-phase and the C-phase, respectively.
Thus, the algebra (\ref{glide-even-algebra1})-(\ref{glide-even-algebra4}) and (\ref{glide-odd-algebra1})-(\ref{glide-odd-algebra4})
are satisfied.
The 1D $\Z_2$ invariants in the folded BZ are equivalent to those obtained in the unfolded BZ.
This fact is simply understood by looking at the surface states.
The energy spectrum is not changed by the unitary transformation, and
we have obtained an odd number of Majorana cones at $\bk_{\rm sf} =(0,\pi)$ per glide-subsector.
This fact indicates $\nu(0,\pm) =1$ and $\nu(\pi/\sqrt{3},\pm) =0$ (mod 2). Therefore, the glide-$\Z_2$ invariant
is nontrivial, that is, $\nu_{\rm G}=1$.
\subsection{Deformation to M\"obius surface state}\label{3D_invariant}
The glide-$\Z_2$ invariant $\nu_{\rm G}$ is the strong topological index specifying the gapped TNSC.
However, the A-phase is actually gapless because of the point nodes of gap function at the poles of 3D FSs.
Figure~\ref{3DFS_glide} shows the surface spectrum $E(0,k_z)$, and indeed, we observe gapless bulk
excitations away from the surface BZ boundary $k_z=\pi$ in addition to the double Majorana cone at $k_z=\pi$.
Therefore, the UPt$_3$ A-phase does not realize the characteristic ``M\"obius surface state''~\cite{Shiozaki2015,Shiozaki2016,KHgX,CeNiSn}
of topological nonsymmorphic insulators/superconductors.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=75mm]{ky0a2l.eps}
\caption{(Color online) Energy spectrum on the (100)-surface in the A-phase. Parameters
$(t,t_z,t',\alpha,\mu,\Delta,\delta)=(1,-4,1,2,12,0.7,0.04)$ reproduce the paired $A$-FSs of UPt$_3$.
Spectrum on the $\k_{\rm sf}=(0, k_z)$ line is shown. Surface states are highlighted by green lines.
}
\label{3DFS_glide}
\end{center}
\end{figure}
However, the nontrivial glide-$\mathbb{Z}_2$ invariant ensures that a gapped TNSC can be realized
once the point nodes are removed by some symmetry-preserving perturbation.
Then, we obtain the M\"obius surface states while keeping the nontrivial glide-$\Z_2$ invariant
and the associated double Majorana cone.
In other words, the double Majorana cone around $\k_{\rm sf} = (0,\pi)$ can be regarded as a remnant of
the M\"obius surface states of the glide-even TNSC.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=85mm]{2DFS.eps}
\caption{(Color online)
M\"obius surface states in the 3D glide-even TNSC.
We choose parameters $(t,t_z,t',\alpha,\mu,\Delta,\delta)=(1,0,0.3,0.5,3.5,0.4,0.5)$ for 2D cylindrical FSs.
Bulk and surface states are shown by blue and green lines, respectively.
(a) $E(0, k_z)$, (b) $E(k_y, 0)$, and (c) $E(k_y, \pi)$.
The glide eigenvalues are illustrated in (a).
}
\label{2DFS}
\end{center}
\end{figure}
A simple way to remove the point nodes is to deform the FS into a cylinder.
Then, the surface spectrum in Fig.~\ref{2DFS} is obtained.
In Fig.~\ref{2DFS}(a), the surface states detached from the bulk excitations show the M\"obius
structure typical of the glide-even TNSC. At $\k_{\rm sf}=0$, the Kramers degeneracy is ensured by the TRS.
The Kramers pair is formed by the $\pm i$ glide eigenstates, since the TRS and PHS are not preserved
within a glide-subsector.
When we look at the $\k_{\rm sf} = (k_y, 0)$ line, Fig.~\ref{2DFS}(b) shows two helical modes protected by the
mirror Chern number $\nu_{\rm M}^0= 4$, which is introduced in Sec.~\ref{sec:mirror}.
The nontrivial relationship between the mirror Chern number and the glide-$\mathbb{Z}_2$ invariant
will be shown elsewhere~\cite{Shiozaki-Yanase2017}.
\subsection{Broken glide symmetry by crystal distortion}\label{sec:broken_glide}
Strictly speaking, the symmetry of the crystal structure of UPt$_3$ is still under debate, because
a tiny crystal distortion has been indicated by an x-ray diffraction measurement~\cite{Walko}.
The distortion leads to a layer dimerization that breaks the glide and screw symmetries.
Then, the space group is reduced from the nonsymmorphic $P6_3/mmc$ to the symmorphic $P\bar{3}m1$.
If the crystal distortion actually occurs in UPt$_3$, the double Majorana cone protected
by the glide-$\Z_2$ invariant may be gapped.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=70mm]{Bglide.eps}
\caption{(Color online) Energy spectrum on the (100)-surface in the presence of layer dimerization
that breaks the glide symmetry. Parameters are the same as Fig.~\ref{3DFS_glide}, while the inter-sublattice
hybridization is replaced by Eq.~(\ref{layer_dimerization}) with $d=0.2$.
Surface states are highlighted by green lines.
}
\label{3DFS_Bglide}
\end{center}
\end{figure}
The layer dimerization makes the inter-sublattice hybridization asymmetric between the $+z$ and $-z$ directions.
The asymmetry is taken into account by the replacement
\begin{align}
& a({\bm k}) = 2 t' \cos\frac{k_z}{2} \sum_{i=1,2,3} e^{i{\bm k}_\parallel\cdot{\bm r}_i}
\nonumber \\
&
\Rightarrow \,\,
t' \left[(1+d) e^{i k_z/2} + (1-d) e^{-i k_z/2} \right] \sum_{i=1,2,3} e^{i{\bm k}_\parallel\cdot{\bm r}_i}.
\label{layer_dimerization}
\end{align}
The parameter $d$ represents the strength of the layer dimerization.
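Indeed, the bracket in Eq.~(\ref{layer_dimerization}) can be rewritten as
\begin{align}
(1+d)\, e^{i k_z/2} + (1-d)\, e^{-i k_z/2} = 2\cos\frac{k_z}{2} + 2 i d \sin\frac{k_z}{2},
\end{align}
which no longer vanishes at $k_z=\pi$: the finite inter-sublattice hybridization $2 i t' d \sum_{i} e^{i{\bm k}_\parallel\cdot{\bm r}_i}$ on the BZ face lifts the band degeneracy that protected the double Majorana cone.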
For a finite $d$, the double Majorana cone at $\k_{\rm sf}=(0,\pi)$ indeed acquires a mass term.
In Fig.~\ref{3DFS_Bglide}, we show the surface spectrum gapped at $\k_{\rm sf}=(0,\pi)$.
In the figure, a large layer dimerization $d=0.2$ is assumed in order to visualize the effect of the glide symmetry breaking.
In reality, the parameter $d$ is expected to be tiny even if it is finite, because the reported crystal distortion
is small~\cite{Walko}.
Therefore, the gap in the double Majorana cone may be tiny, and a fingerprint of topological
glide-$\Z_2$ superconductivity will appear even in the symmorphic $P\bar{3}m1$ structure.
\section{Other topological surface states}\label{sec:low-dimension}
In contrast to toy models, the model specific to the real material shows rich topological properties.
In Figs.~\ref{eGFSedgestate} and \ref{ZFSedgestate}, we have observed a variety of surface states
other than the double Majorana cone discussed in Sec.~\ref{sec:glide-DIII}.
In this section, we clarify the topological invariant protecting the surface states.
In addition to the glide symmetry, we take the mirror symmetry into account.
The Weyl charge, mirror Chern number, glide winding number, and rotation winding number
are discussed below.
\subsection{Chiral Majorana arc in Weyl B-phase}\label{sec:Weyl}
The TRS broken B-phase, identified as a Weyl superconducting state~\cite{Yanase_UPt3_Weyl,Goswami},
hosts surface Majorana arcs, analogous to the Fermi arcs in Weyl
semimetals~\cite{Murakami,Wan-Vishwanath,Burkov-Balents,Xu,Lv,Yang,Huang}.
The existence of Majorana arcs is ensured by the topological Weyl charge
\begin{equation}
q_i = \frac{1}{2\pi} \oint_{S_i} {\rm d}{\bm S} \cdot {\bm F}({\bm k}),
\end{equation}
where $S_i$ is a small closed surface enclosing the $i$-th point node; the Weyl charge is nothing but the monopole charge of the Berry flux,
\begin{align}
F_{i}({\bm k}) = -i \varepsilon^{ijk}\sum_{E_n(\k)<0} \partial_{k_j} \langle u_n(\k)| \partial_{k_k} u_n(\k)\rangle.
\label{Berry_flux}
\end{align}
The wave functions and energies of Bogoliubov quasiparticles are denoted by $|u_n(\k)\rangle$ and $E_n(\k)$, respectively.
A nontrivial Weyl charge protects the Weyl point node in the bulk excitation spectrum.
Indeed, the B-phase of UPt$_3$ is a point nodal SC compatible with Blount's theorem~\cite{Blount,Kobayashi-Sato}
when the $p$-wave and $d$-wave order parameters are appropriately taken into account~\cite{Norman1995,Yanase_UPt3_Weyl}.
Although the purely $f$-wave state has a nodal line at $k_z=0$~\cite{Sauls}, it is an accidental node
removed by symmetry-preserving perturbations.
In accordance with the bulk-boundary correspondence, the Majorana arcs appear on the surface and terminate
at the projection of Weyl point nodes illustrated by green circles in
Figs.~\ref{eGFSedgestate} and \ref{ZFSedgestate}.
Interestingly, the position of Weyl nodes is tunable. In the $E_{\rm 2u}$ scenario for UPt$_3$~\cite{Sauls},
the parameter $\eta$ smoothly changes from $\infty$ to $0$ in the B-phase
by decreasing the temperature and/or increasing the magnetic field (see Fig.~\ref{phasediagram} and Table~\ref{tab1}).
Then, the pair creation, pair annihilation, and coalescence of Weyl nodes occur as a consequence of
the $p$-$f$ mixing in the order parameter~\cite{Yanase_UPt3_Weyl}.
Accordingly, the projection of Weyl nodes moves as illustrated in Figs.~\ref{eGFSedgestate}(b)-(d)
and \ref{ZFSedgestate}(b)-(d). The Majorana arcs follow the Weyl nodes.
In the generic $E_{\rm 2u}$-state studied in this paper, the Weyl nodes are protected purely by topology,
and no crystal symmetry is needed.
Therefore, the positions of the Weyl nodes are not constrained by any symmetry.
Although the Weyl nodes are pinned at the poles of the FS in the purely $f$-wave $E_{\rm 2u}$-state~\cite{Goswami},
this is an accidental result.
In another candidate Weyl SC, URu$_2$Si$_2$, the 3D $d_{xz} \pm id_{yz}$-wave
superconductivity has been revealed by experiments~\cite{Kasahara2007,Yano2008,Kittaka2016,Yamashita2015}.
There, the Weyl nodes are pinned and do not travel,
in contrast to UPt$_3$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=90mm]{ZFSedgestate2de0.6.eps}
\caption{(Color online) Energy spectrum on the (100)-surface in the B-phase ($\eta=0.6$).
Surface and bulk quasiparticle states on slices of BZ at $k_z=$ constant planes are shown.
The surface states are emphasized by green lines.
The $k_z$-dependent Chern number is shown in each panel.
(a)-(e) Parameters are the same as Fig.~\ref{ZFSedgestate}(b).
(f) $\alpha=1$ while the others are the same as (e).
The surface states are almost two-fold degenerate in (d) and (f) and four-fold degenerate in (e).
Comparison of (e) and (f) reveals that the four-fold degeneracy in the absence of the SOC is lifted by the SOC.
}
\label{ZFSedgestate0.6}
\end{center}
\end{figure}
Here the number of Majorana arcs is verified by calculating the Chern number~\cite{Thouless,Kohmoto,Fukui}
of effective 2D models on $k_z=$ constant planes,
\begin{align}
&
\nu(k_z) = \frac{1}{2\pi} \int {\rm d}{\bm k}_\parallel F_{z}({\bm k}),
\label{Chern}
\end{align}
that is, a $k_z$-dependent Chern number of class A.
The Chern number indicates the number of chiral surface modes.
In Weyl SCs, the Chern number may change across a gapless $k_z=$ constant plane hosting Weyl nodes.
Therefore, the zero energy surface states form arcs terminating at the projection of Weyl nodes.
For parameters reproducing the $A$-FSs, the Chern number changes as
$\nu(k_z)= 0 \rightarrow 4 \rightarrow 8 \rightarrow -4$ with increasing $k_z$ from $0$ to $\pi$,
while $\nu(k_z)= 0 \rightarrow 4 \rightarrow 0$ for the $\Gamma$-FS~\cite{Yanase_UPt3_Weyl}.
The bulk-boundary correspondence is confirmed by showing the surface spectrum on the $k_z=$ constant lines
in Fig.~\ref{ZFSedgestate0.6}. The number of chiral modes coincides with the $k_z$-dependent Chern number.
We also observe the sign reversal of chirality in accordance with the sign change of the Chern number.
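As a practical aside (not part of the original calculation), $k_z$-resolved Chern numbers of this type can be evaluated with the gauge-invariant plaquette discretization of Ref.~\onlinecite{Fukui}. A minimal Python sketch follows; the function names are ours, a square BZ is assumed for simplicity, and the test model is a generic single-band chiral $p$-wave toy Hamiltonian rather than the UPt$_3$ model:
\begin{verbatim}
import numpy as np

def chern_number(h_k, n_occ, nk=60):
    # Lattice Chern number of the n_occ lowest bands of a 2D
    # Bloch/BdG matrix h_k(kx, ky), following Fukui-Hatsugai-Suzuki.
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    u = np.empty((nk, nk), dtype=object)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_k(kx, ky))
            u[i, j] = v[:, :n_occ]          # occupied eigenvectors
    def link(a, b):                         # U(1) link variable
        d = np.linalg.det(a.conj().T @ b)
        return d / abs(d)
    c = 0.0
    for i in range(nk):
        for j in range(nk):
            ii, jj = (i + 1) % nk, (j + 1) % nk
            # gauge-invariant lattice field strength on one plaquette
            f = np.log(link(u[i, j], u[ii, j]) * link(u[ii, j], u[ii, jj])
                       / link(u[i, jj], u[ii, jj]) / link(u[i, j], u[i, jj]))
            c += f.imag
    return c / (2.0 * np.pi)

# Toy check: single-band chiral p-wave BdG model, expected |C| = 1.
def h_pwave(kx, ky, t=1.0, mu=1.0, gap=0.5):
    xi = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
    return np.array([[xi, gap * (np.sin(kx) + 1j * np.sin(ky))],
                     [gap * (np.sin(kx) - 1j * np.sin(ky)), -xi]])

print(round(chern_number(h_pwave, n_occ=1)))
\end{verbatim}
For our UPt$_3$ model, one would instead pass the $8\times 8$ BdG matrix at fixed $k_z$ with $n_{\rm occ}=4$, discretizing the hexagonal BZ accordingly.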
Finally, we discuss the Weyl superconducting phase in the phase diagram illustrated in Fig.~\ref{phasediagram}.
Because the TRS has to be broken in Weyl SCs, the A- and C-phases are non-Weyl superconducting states.
Furthermore, the B-phase in the vicinity of the A-B and B-C phase boundaries is also a
non-Weyl state, because a gap closing is required for the topological transition.
Therefore, the transition from the non-Weyl state to the Weyl state occurs in the B-phase.
The shaded region in Fig.~\ref{phasediagram} schematically illustrates the Weyl superconducting phase.
\subsection{Majorana cone and mirror Chern number}\label{sec:mirror}
Next we discuss the surface state around ${\bm k}_{\rm sf} = (0,0)$, which is observed
in all the A-, B-, and C-phases (Fig.~\ref{eGFSedgestate}).
In the TRS broken B-phase, the spectrum resembles a tilted Majorana cone, as shown in Fig.~\ref{mirror-cone},
while the cone is not tilted in the A- and C-phases.
Naturally, the $\Gamma$-FS and $K$-FS are considered in this subsection.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=85mm]{mirror_cone.eps}
\caption{(Color online) Tilted Majorana cone at $\k_{\rm sf}=(0,0)$ in the B-phase ($\eta=0.5$).
Parameters are $(t,t_z,t',\alpha,\mu,\Delta,\delta)=(1,4,1,0,16,4,0.02)$ reproducing a $\Gamma$-FS.
}
\label{mirror-cone}
\end{center}
\end{figure}
We can understand the topological protection by invoking the crystal mirror reflection symmetry
with respect to the $xy$-plane.
The mirror reflection operator for the normal-part Hamiltonian is
\begin{align}
& M^{xy}(k_z) = i s_z V_{\sigma}(k_z).
\label{Mirror-operator}
\end{align}
The mirror reflection symmetry is equivalent to the product of the inversion and screw symmetries,
that is, $M^{xy}(k_z)=I S_\pi^{z}(k_z)$.
The nonsymmorphic screw symmetry $S_\pi^{z} = \{R_\pi^{z}|\frac{z}{2}\}$ involves half translation along
the $z$-axis, and therefore, the screw operator $S_\pi^{z}(k_z)$ is $k_z$-dependent.
Thus, the mirror operator is also $k_z$-dependent, and we have $M^{xy}(\pi) = i s_z \sigma_z$ while $M^{xy}(0) = i s_z $.
This momentum dependence of $M^{xy}(k_z)$ may yield unusual line nodes in nonsymmorphic odd-parity
SCs~\cite{Kobayashi-Yanase-Sato}, a counterexample to Blount's theorem.
The normal part Hamiltonian is invariant under the mirror reflection symmetry
\begin{align}
&
M^{xy}(k_z) \tilde{H}_0(\k) M^{xy}(k_z)^{-1} = \tilde{H}_0(k_x,k_y,-k_z),
\end{align}
and the order parameter is mirror-odd irrespective of $\eta$,
\begin{align}
&&
M^{xy}(k_z) \tilde{\Delta}(\k) M^{xy}(-k_z)^{\rm T} = -\tilde{\Delta}(k_x,k_y,-k_z).
\end{align}
Thus, the BdG Hamiltonian respects mirror reflection symmetry,
\begin{align}
&& \hspace{-0mm}
M^{xy}_{\rm BdG}(k_z) \tilde{H}_{\rm BdG}(\k) M^{xy}_{\rm BdG}(k_z)^{-1} = \tilde{H}_{\rm BdG}(k_x,k_y,-k_z),
\end{align}
by defining the operator in the Nambu space,
\begin{align}
M^{xy}_{\rm BdG}(k_z)
&=
\left(
\begin{array}{cc}
M^{xy}(k_z) & 0 \\
0 & -M^{xy}(-k_z)^* \\
\end{array}
\right)_{\tau}
\\
&= M^{xy}(k_z) \otimes \tau_0.
\end{align}
According to the $K$-theory for topological crystalline insulators and SCs~\cite{Shiozaki2014},
the effective 2D Hamiltonian at mirror invariant planes, namely, $k_z=0$ and $\pi$,
is specified by a topological index of class D, $\mathbb{Z} \oplus \mathbb{Z}$, in the TRS broken B-phase.
This is ensured by the algebra
$\left[M^{xy}_{\rm BdG}(0)\right]^2 = \left[M^{xy}_{\rm BdG}(\pi)\right]^2=-1$
and $\{M^{xy}_{\rm BdG}(0),C \} = \{M^{xy}_{\rm BdG}(\pi),C\} = 0$.
One of the two integer topological invariants is nothing but the Chern number $\nu(k_z)$ introduced
in Sec.~\ref{sec:Weyl}.
The other is the mirror Chern number, $\nu_{\rm M}^{\Gamma_z} \in \mathbb{Z}$ $\,\,$($\Gamma_z \equiv 0$ or $\pi$),
which is defined below by using the mirror reflection symmetry~\cite{Ueno-Sato,Yoshida2015}.
In the TRS invariant A- and C-phases, the Chern number must be zero, and the mirror Chern number
is naturally the $\mathbb{Z}$ topological index of class DIII appearing in Ref.~\onlinecite{Shiozaki2014}.
The commutation relation,
$\left[M^{xy}_{\rm BdG}(\Gamma_z), \tilde{H}_{\rm BdG}(\k_\parallel,\Gamma_z)\right] =0$, ensures that
the BdG Hamiltonian is block-diagonalized at the mirror invariant planes in the basis diagonalizing
$M^{xy}_{\rm BdG}(\Gamma_z)$. In other words, the BdG Hamiltonian is decomposed into two mirror-subsectors
with mirror eigenvalues $\pm i$,
\begin{align}
\tilde{H}_{\rm BdG}(\k_\parallel,\Gamma_z) = \tilde{H}^{\Gamma_z}_{i}(\k_\parallel) \oplus \tilde{H}^{\Gamma_z}_{-i}(\k_\parallel).
\end{align}
The PHS is preserved in the mirror-subsector, because of
$[M^{xy}_{\rm BdG}(\Gamma_z)]^2 = -1$ and $\{M^{xy}_{\rm BdG}(\Gamma_z), C\}=0$. On the other hand, the TRS
is not preserved even in the TRS invariant A- and C-phases since $[M^{xy}_{\rm BdG}(\Gamma_z), T]=0$.
Thus, the symmetry of the mirror-subsector is class D irrespective of $\eta$, and
the Chern number of the mirror-subsector Hamiltonian given by
\begin{align}
&
\nu^{\Gamma_z}_{\pm i} = \frac{1}{2\pi} \int {\rm d}{\bm k}_\parallel F_{z, \pm i}^{\Gamma_z}({\bm k}_\parallel),
\label{mirror-Chern}
\end{align}
may be nontrivial.
Here, $F_{z, \pm i}^{\Gamma_z}({\bm k}_\parallel)$ is the Berry curvature of $\tilde{H}^{\Gamma_z}_{\pm i}(\k_\parallel)$.
The mirror Chern number is defined by
\begin{align}
\nu_{\rm M}^{\Gamma_z} = \nu^{\Gamma_z}_{+i} \, - \, \nu^{\Gamma_z}_{-i},
\end{align}
while the total Chern number is given by $\nu(\Gamma_z) = \nu^{\Gamma_z}_{+i} \, + \, \nu^{\Gamma_z}_{-i}$.
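For later reference we note the elementary inversion of these two relations (simple algebra, not specific to our model):
\begin{align*}
\nu^{\Gamma_z}_{\pm i} = \frac 12 \left[ \nu(\Gamma_z) \pm \nu_{\rm M}^{\Gamma_z} \right].
\end{align*}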
\subsubsection{Mirror Chern number at $k_z=0$}
Later we show that the mirror Chern number at $k_z= \pi$ has to vanish owing to the constraint
by glide symmetry. On the other hand, the mirror Chern number may be nontrivial at $k_z =0$,
and the surface states around $\k_{\rm sf}=(0,0)$ are indeed protected by the mirror Chern number.
Because we have $M^{xy}_{\rm BdG}(0) = i s_z \sigma_0 \tau_0$, the mirror-subsector Hamiltonian
$\tilde{H}^0_{\pm i}(\k_\parallel)$ is equivalent to the spin sector for $s=\uparrow$ and $\downarrow$,
respectively. Thus, we obtain
\begin{align}
\tilde{H}^0_{\pm i}(\k_\parallel)
&=
\left(
\begin{array}{cc}
\hat{h}_{\pm i}(\k_\parallel) & \hat{\Delta}_{\pm i}(\k_\parallel) \\
\hat{\Delta}_{\pm i}(\k_\parallel)^\dag & - \hat{h}_{\pm i}(-\k_\parallel)^{T} \\
\end{array}
\right),
\end{align}
with
\begin{align}
\hspace{-3mm}
\hat{h}_{\pm i}(\k_\parallel)
&=
\left(
\begin{array}{cc}
\varepsilon(\k_\parallel) \pm \alpha g(\k_\parallel) & \tilde{a}(\k_\parallel) \\
\tilde{a}(\k_\parallel)^* & \varepsilon(\k_\parallel) \mp \alpha g(\k_\parallel) \\
\end{array}
\right),
\end{align}
and
\begin{align}
\hat{\Delta}_{\pm i}(\k_\parallel)
&=
-(\eta \pm 1) \Delta_{\rm p} \left[p_{x}(\k_\parallel) \pm i p_y(\k_\parallel) \right] \sigma_0.
\label{subsector-OP_0}
\end{align}
We denoted $A(\k_\parallel)=A(\k_\parallel,0)$ and $\Delta_{\rm p} = \delta \Delta/\sqrt{1+\eta^2}$.
It turns out that the mirror-subsector Hamiltonian is equivalent to the BdG Hamiltonian of
a two-band chiral $p$-wave SC.
In our model for the $\Gamma$-FS, only one band crosses the Fermi level,
and we obtain $\nu^0_{\pm i}= \pm 1$.
The sign of Chern number is opposite between the two mirror-subsectors because of
the opposite chirality of $p$-wave order parameter [see Eq.~(\ref{subsector-OP_0})].
Therefore, the total Chern number of the 2D BdG Hamiltonian is zero,
$\nu(0) = \nu^0_{+i} + \nu^0_{-i} = 0$, even in the TRS broken B-phase.
On the other hand, the mirror Chern number is nontrivial,
\begin{align}
\nu_{\rm M}^0= 2.
\label{mirror_Chern_Gamma}
\end{align}
We now understand that the (tilted) Majorana cone in Fig.~\ref{mirror-cone} represents the topological surface states
ensured by the bulk-boundary correspondence.
Since the chirality of Majorana modes corresponding to $\nu^0_{\pm i}= \pm 1$ is opposite between
two mirror-subsectors, the (tilted) helical mode appears at $k_z=0$, and the helical mode is gapped
at $k_z \ne 0$, implying the Majorana cone.
Finally, we comment on the multiband effect.
Although Eq.~(\ref{mirror_Chern_Gamma}) is obtained for a hole $\Gamma$-FS, we obtain $\nu_{\rm M}^0= -2$ for
an electron $\Gamma$-FS consistent with UPt$_3$~\cite{Taillefer1988,Kimura_UPt3,McMullan,Nomoto}.
Because the mirror Chern number is additive, we will obtain the mirror Chern number $\nu_{\rm M}^0= -6$ from
three $\Gamma$-FSs. Then, the surface states form a (tilted-) Majorana cone at ${\bm k}_{\rm sf} = (0,0)$
and two cones away from the $\Gamma$-point, ${\bm k}_{\rm sf} = (\pm k_y^0,0)$.
Although the $K$-FSs have also been predicted by band structure calculations~\cite{Taillefer1988,Nomoto},
their existence is still under debate~\cite{McMullan}.
The $K$-FSs would also give a nontrivial mirror Chern number, $\nu_{\rm M}^0= -8$, if they exist.
Then, the mirror Chern number is $\nu_{\rm M}^0= -14$ by taking into account all the FSs.
In any case, $\nu_{\rm M}^0 \in 4 \Z +2$ indicates the existence of a Majorana cone at ${\bm k}_{\rm sf} = (0,0)$.
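As a consistency check of the above counting (our own bookkeeping, based on the additivity of the mirror Chern number stated above): three $\Gamma$-FSs contribute $3\times(-2)=-6$ and the $K$-FSs contribute $-8$, so that
\begin{align*}
\nu_{\rm M}^0 = -6 - 8 = -14 = 4\times(-4)+2 \in 4\Z+2,
\end{align*}
while without the $K$-FSs one has $-6 = 4\times(-2)+2 \in 4\Z+2$ as well. Hence the conclusion is insensitive to the debated existence of the $K$-FSs.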
\subsubsection{Vanishing mirror Chern number at $k_z=\pi$}
We here show that the mirror Chern number at $k_z=\pi$ must be trivial owing to the glide symmetry,
namely,
\begin{align}
\nu_{\rm M}^\pi= 0.
\label{mirror-Chern-pi}
\end{align}
First, we consider the glide invariant A- and C-phases.
The glide symmetry is also preserved in the mirror-subsectors at $k_z =\pi$ because of
$\left[M^{xy}_{\rm BdG}(\pi), G^{xz}_{\rm BdG}(\pi)\right] = 0$, although
$\left\{M^{xy}_{\rm BdG}(0), G^{xz}_{\rm BdG}(0)\right\} = 0$ indicates the broken glide-symmetry in
the mirror-subsectors at $k_z=0$.
Then, we can prove the relation for Berry curvature,
\begin{align}
F_{z, \pm i}^{\pi}(k_x,k_y) = - F_{z, \pm i}^{\pi}(k_x,-k_y).
\end{align}
Integration over the $(k_x,k_y)$ plane then yields a vanishing Chern number, $\nu^{\pi}_{\pm i} =0$, and thus
the mirror Chern number also vanishes.
In the B-phase, the glide symmetry is spontaneously broken. However, considering the magnetic-glide
symmetry $T G^{xz}_{\rm BdG}(\pi)$, we can show the relation
\begin{align}
F_{z, \pm i}^{\pi}(k_x,k_y) = F_{z, \mp i}^{\pi}(-k_x,k_y),
\end{align}
which leads to $\nu^{\pi}_{+i} = \nu^{\pi}_{-i}$.
Therefore, the mirror Chern number at $k_z = \pi$ vanishes in the B-phase as well.
The trivial mirror Chern number is confirmed in our model as follows.
Using the mirror reflection operator $M^{xy}_{\rm BdG}(\pi) = i s_z \sigma_z \tau_0$,
we obtain the mirror-subsector Hamiltonian respecting the PHS,
\begin{align}
\tilde{H}^\pi_{\pm i}(\k_\parallel)
&=
\left(
\begin{array}{cc}
\hat{h}_{\pm i}(\k_\parallel) & \hat{\Delta}_{\pm i}(\k_\parallel) \\
\hat{\Delta}_{\pm i}(\k_\parallel)^\dag & - \hat{h}_{\pm i}(-\k_\parallel)^{T} \\
\end{array}
\right).
\end{align}
The normal part is given by
\begin{align}
\hspace{-3mm}
\hat{h}_{\pm i}(\k_\parallel)
&=
\left[\varepsilon(\k_\parallel) \pm \alpha g(\k_\parallel)\right] \sigma_0.
\end{align}
For instance, the order parameter part is
\begin{align}
\hat{\Delta}_{\pm i}(\k_\parallel)
&=
\Delta \times
\nonumber \\
& \hspace{-12mm}
\left(
\begin{array}{cc}
\mp \delta \left[p_{x}(\k_\parallel) \pm i p_y(\k_\parallel) \right]
&
\tilde{f}_{(x^2-y^2)z}(\k_\parallel) + i \tilde{d}_{yz}(\k_\parallel)
\\
\tilde{f}_{(x^2-y^2)z}(\k_\parallel)^* - i \tilde{d}_{yz}(\k_\parallel)^*
&
\pm \delta \left[p_{x}(\k_\parallel) \mp i p_y(\k_\parallel) \right]
\end{array}
\right),
\nonumber \\
\label{subsector-OP-pi}
\end{align}
in the C-phase.
When the $d+f$-wave component is dominant as we assume in this paper, the $p$-wave component can be
adiabatically reduced to zero without closing the gap. Then, it turns out that the Chern number of
mirror-subsectors is trivial because the phase winding of
$\tilde{f}_{(x^2-y^2)z}(\k_\parallel) \pm i \tilde{d}_{yz}(\k_\parallel)$ along the FS is zero.
Even when the $p$-wave component is dominant, the Chern number vanishes
because the chirality of gap function $p_{x}(\k_\parallel) \pm i p_y(\k_\parallel)$ is opposite
between the pseudospin up and down Cooper pairs.
Thus, we obtain $\nu^\pi_{\pm i} =0$ and $\nu_{\rm M}^\pi =0$ in the C-phase.
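Schematically (a band-by-band count under the single-FS-crossing assumption of our model, with a fixed sign convention for the chirality): each mirror-subsector contains one pseudospin band of chirality $+1$ and one of chirality $-1$, so the contributions cancel,
\begin{align*}
\nu^{\pi}_{\pm i} = (+1) + (-1) = 0.
\end{align*}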
It is straightforward to show $\nu^\pi_{\pm i} =0$ in the A-phase as well.
In the B-phase we have obtained the nontrivial Chern number $\nu(\pi)=-4$ for the $A$-FSs. However,
the mirror Chern number remains trivial, because $\nu^\pi_{\pm i}=-2$.
We have numerically confirmed Eq.~(\ref{mirror-Chern-pi}) in the entire A-, B- and C-phases for all the FSs.
\subsection{Majorana flat band and glide winding number}\label{sec:glide-AIII}
As shown in Figs.~\ref{eGFSedgestate}(d) and (e) and Figs.~\ref{ZFSedgestate}(d) and (e),
the zero energy surface flat band appears in the A-phase and in a ``half'' of the B-phase ($|\eta| > 1$).
We here show that the flat band is topologically protected by the glide winding number.
Below we first demonstrate the topological protection in the A-phase, and later investigate the B-phase.
\subsubsection{A-phase}
Let us consider the glide invariant plane $k_y=0$ in the A-phase.
The glide winding number is defined for 1D models $\tilde{H}_{\rm BdG}(k_x,0,k_z)$ parametrized by $k_z$.
The 1D models do not respect TRS and PHS unless $(0,0,k_z)$ is a time-reversal invariant momentum.
On the other hand, the combined chiral symmetry, $\Gamma = i T_{\rm BdG} \, C$, is preserved.
Thus, a winding number of the 1D class AIII can be defined. However, it turns out to be zero.
The nontrivial winding number is obtained by implementing the
glide symmetry, which has been represented by Eq.~(\ref{glide-symmetry}).
The glide symmetry ensures the sector decomposition,
\begin{align}
\tilde{H}_{\rm BdG}(k_x,0,k_z) = \tilde{H}_{\lambda_+}(k_x,k_z) \oplus \tilde{H}_{\lambda_-}(k_x,k_z),
\end{align}
for eigenvalues $\lambda_{\pm} = \pm i e^{- i k_z/2}$ of the glide operator.
The chiral symmetry is preserved in the glide-subsector Hamiltonian
$\tilde{H}_{\lambda_\pm}(k_x,k_z)$ because $\left[ \Gamma, G^{xz}_{\rm BdG}(k_z)\right]=0$
in the A-phase~\cite{Comment1}.
Now we have two winding numbers of AIII class, ${\cal \omega}_{\rm G}(+, k_z)$ and ${\cal \omega}_{\rm G}(-, k_z)$,
which correspond to the $\mathbb{Z} \oplus \mathbb{Z}$ topological index of 1D AIII class
with $U_+$ crystal symmetry~\cite{Shiozaki2014}.
We here estimate the winding numbers by analyzing the original BdG Hamiltonian $\hat{H}_{\rm BdG}({\bm k})$,
instead of $\tilde{H}_{\rm BdG}({\bm k})$.
The periodicity along the $k_x$-axis is satisfied in $\hat{H}_{\rm BdG}({\bm k})$, and
the unitary transformation (\ref{unitary_transformation}) does not alter the winding number,
since $\left[\Gamma,U(\k) \right]=0$.
The glide operator for the original BdG Hamiltonian $\hat{H}_{\rm BdG}({\bm k})$ is
$G^{xz}_{\rm BdG} = i s_y \sigma_x \tau_0 e^{- i k_z/2}$ in the A-phase while $G^{xz}_{\rm BdG} = i s_y \sigma_x \tau_z e^{- i k_z/2}$
in the C-phase.
In the A-phase, the glide-subsectors of $\hat{H}_{\rm BdG}({\bm k})$ are,
\begin{align}
\hat{H}_{\lambda_\pm}(k_x,k_z) =& \, \varepsilon^{k_z}(k_x) \sigma_0 \tau_z \pm a^{k_z}(k_x) \sigma_z \tau_z
+ \alpha g(k_x) \sigma_y \tau_0
\nonumber \\ &
-\Delta \delta p_x(k_x) \sigma_0 \tau_x
\pm \Delta d_{xz}^{\, k_z}(k_x) \sigma_y \tau_y.
\label{glide_subsector_AIII}
\end{align}
The chiral symmetry is confirmed by $\left\{\Gamma_{\rm s}, \hat{H}_{\lambda_\pm}(k_x, k_z)\right\}=0$,
where $\Gamma_{\rm s} = \sigma_z \tau_y$ is the chiral operator in the subsector space.
Thus, we obtain the off-diagonal form
\begin{align}
&\hspace{-0mm}
U_{\Gamma_{\rm s}} \hat{H}_{\lambda_\pm}(k_x, k_z) U_{\Gamma_{\rm s}}^\dagger =
\left(
\begin{array}{cc}
0 & \hat{q}_{\pm}(k_x,k_z) \\
\hat{q}_{\pm}^\dagger(k_x,k_z) & 0 \\
\end{array}
\right),
\label{glide-off-diagonal}
\end{align}
by choosing the basis diagonalizing the chiral operator.
From Eq.~(\ref{glide_subsector_AIII}), we obtain
\begin{align}
q_{\pm}(k_x, k_z) = &
i\varepsilon^{k_z}(k_x) \sigma_z \pm i a^{k_z}(k_x) \sigma_0 + \Delta \delta p_x(k_x) \sigma_0
\nonumber \\ &
+ [\alpha g(k_x) \pm \Delta d_{xz}^{\, k_z}(k_x)] \sigma_y,
\end{align}
for the $\lambda_\pm$ glide-subsector, respectively.
We used the abbreviation $A^{k_z}(k_x) =A(k_x,0,k_z)$.
The winding number of the glide-subsectors is given by
\begin{align}
\label{glide-winding}
\hspace{-0mm}
{\cal \omega}_{\rm G}(\pm, k_z) = \frac{1}{4\pi i} \int_{0}^{4\pi} dk_x {\rm Tr}
& \Big[ \hat{q}_{\pm}(k_x,k_z)^{-1}\partial_{k_x}\hat{q}_{\pm}(k_x,k_z)
\nonumber \\ & \hspace{-5mm}
- \hat{q}_{\pm}^\dagger(k_x,k_z)^{-1}\partial_{k_x}\hat{q}_{\pm}^\dagger(k_x,k_z) \Big],
\end{align}
By adiabatically reducing $\alpha g(k_x) \rightarrow 0$ and $d_{xz}^{\, k_z}(k_x) \rightarrow 0$
without closing the excitation gap, we obtain the winding number as
\begin{align}
& {\cal \omega}_{\rm G}(\pm, k_z)
\nonumber \\ & =
\begin{cases}
\mp 1 & [\varepsilon({\bm 0},k_z) + a({\bm 0},k_z) > 0 > \varepsilon({\bm 0},k_z) - a({\bm 0},k_z)] \\
0 & [{\rm otherwise}]
\end{cases},
\nonumber \\
\label{glide-windingA}
\end{align}
for $t>0$, $t'>0$ and $\Delta \delta >0$.
This means that the $\lambda_\pm$ glide-subsectors of $\hat{H}_{\rm BdG}({\bm k})$ [and equivalently
the subsectors of the periodic BdG Hamiltonian $\tilde{H}_{\rm BdG}({\bm k})$]
are topologically characterized by the glide-winding number
${\cal \omega}_{\rm G}(\pm, k_z) = \mp 1$, when the condition
$\varepsilon({\bm 0},k_z) + a({\bm 0},k_z) > 0 > \varepsilon({\bm 0},k_z) - a({\bm 0},k_z)$ is satisfied.
This condition is equivalent to the number of FSs (per Kramers pair) being odd.
In Figs.~\ref{eGFSedgestate}(e) and \ref{ZFSedgestate}(e), the flat band appears
on the $k_y=0$ line of the surface BZ where only one FS is projected.
The nontrivial glide-winding number demonstrated above protects this Majorana flat band.
The zero energy states are two-fold degenerate in accordance with the bulk-boundary correspondence.
One comes from the $\lambda_{+} = i e^{- i k_z/2}$ glide-subsector and the other comes from
the $\lambda_{-} = - i e^{- i k_z/2}$ glide-subsector.
Note that the flat band is robust against the multiband effect. We find that the glide-winding number
of the $K$-FSs is zero. Taking into account three $\Gamma$-FSs, we will have the glide-winding number
${\cal \omega}_{\rm G}(\pm, 0) = \mp 3$, $\mp 1$, $\pm 1$, or $\pm 3$,
depending on the signs of the order parameter on the three FSs (see the bookkeeping below). In any case, the glide-winding number is nontrivial.
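For completeness, the sign bookkeeping behind the listed values (the notation $s_j$ is ours): writing $s_j = \pm 1$ for the sign of the order parameter on the $j$-th $\Gamma$-FS, the additivity of the winding number gives
\begin{align*}
{\cal \omega}_{\rm G}(\pm, 0) = \mp (s_1 + s_2 + s_3) \in \{ \mp 3, \mp 1, \pm 1, \pm 3 \},
\end{align*}
which is always odd and hence nonzero.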
\subsubsection{B-phase}
The glide-subsector is no longer well-defined in the B-phase, because the glide symmetry
is spontaneously broken. However, the glide-winding number is well-defined
by the magnetic-glide symmetry $G^{xz}_{\rm BdG} T$ preserved in the B-phase.
Then, the glide-winding number is given by
\begin{align}
{\cal \omega}_{\rm G}(k_z) = \frac{i}{4\pi} \int_{0}^{4\pi} dk_x {\rm Tr} & \Big[
\Gamma_{\rm G} \tilde{H}_{\rm BdG}(k_x,0,k_z)^{-1}
\nonumber \\
& \times \partial_{k_x} \tilde{H}_{\rm BdG}(k_x,0,k_z) \Big],
\label{glide-winding2}
\end{align}
where $\Gamma_{\rm G} = e^{i \phi} G^{xz}_{\rm BdG}(k_z) T_{\rm BdG} C $ is the glide-chiral operator
with $\Gamma_{\rm G}^2=1$.
In the A-phase, Eq.~(\ref{glide-winding2}) is reduced to
\begin{align}
{\cal \omega}_{\rm G}(k_z) ={\cal \omega}_{\rm G}(+,k_z) - {\cal \omega}_{\rm G}(-,k_z).
\end{align}
Thus, we obtain ${\cal \omega}_{\rm G}(k_z)=-2$ in the A-phase.
The nontrivial glide-winding number is robust as long as the gap is finite.
Therefore, the Majorana flat band appears in the B-phase under the condition (\ref{glide-windingA}),
when the parameter $|\eta|$ is large [see Fig.~\ref{schematic-glide}(a)].
When $|\eta|$ is decreased from infinity, the pair creation of Weyl nodes occurs in the bulk BZ
on the $k_y=0$ plane~\cite{Yanase_UPt3_Weyl}. Then, a part of the Majorana flat band disappears
in between the pair of projected Weyl points [see Fig.~\ref{schematic-glide}(b)].
Therefore, the projected Weyl points are end points not only of the Majorana arc but also of the Majorana flat band.
This feature has been shown in Figs.~\ref{eGFSedgestate}(d) and \ref{ZFSedgestate}(d).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=80mm]{schematic_flatband2.eps}
\caption{(Color online)
Illustration of the Majorana flat band
(a) in the A-phase and non-Weyl B-phase ($\eta > \eta_{\rm c}$),
(b) in the Weyl B-phase ($\eta_{\rm c} > \eta > 1$), and
(c) at the critical point ($\eta = 1$).
Thick solid (purple) lines show the Majorana flat band. Thin lines illustrate the projection of a $\Gamma$-FS onto
the (100)-surface BZ. The closed (blue) circles indicate projections of Weyl point nodes.
(a), (b), and (c) correspond to the numerical results in Figs.~\ref{eGFSedgestate}(e), (d), and (c), respectively.
}
\label{schematic-glide}
\end{center}
\end{figure}
At $|\eta| =1$, a pair of Weyl nodes is annihilated on the $\k = (k_x,0,0)$ line, and other Weyl nodes
coalesce on the poles of FSs~\cite{Yanase_UPt3_Weyl}.
Then, the Majorana flat band completely disappears [Fig.~\ref{schematic-glide}(c)].
The fate of the Majorana flat band in the B-phase is schematically illustrated in Fig.~\ref{schematic-glide},
and shown in Figs.~\ref{eGFSedgestate} and \ref{ZFSedgestate} by the numerical diagonalization
of the BdG Hamiltonian.
\subsection{Symmetry constraint on winding numbers}\label{sec:symmetry}
The crystal symmetries preserved on the (100)-surface are as follows.
\begin{itemize}
\item Mirror symmetry $M^{xy}$.
\item Glide symmetry $G^{xz}$.
\item $\pi$-rotation symmetry $R^x$.
\end{itemize}
The $\pi$-rotation is given by the product of mirror and glide operations.
In addition to the glide-winding number studied in Sec.~\ref{sec:glide-AIII},
we can define the mirror-winding number~\cite{Tsutsumi_UPt3} and the rotation-winding number~\cite{Mizushima_3He}
in the same manner.
They are given by
\begin{align}
{\cal \omega}_{\rm M}^{\Gamma_z}(k_y) = \frac{i}{4\pi} \int_{0}^{4\pi} dk_x {\rm Tr} & \Big[
\Gamma_{\rm M}(\Gamma_z) \tilde{H}_{\rm BdG}(k_x,k_y,\Gamma_z)^{-1}
\nonumber \\
& \times \partial_{k_x} \tilde{H}_{\rm BdG}(k_x,k_y,\Gamma_z) \Big],
\label{mirror-winding}
\end{align}
and
\begin{align}
{\cal \omega}_{\rm R}^{\Gamma_z} = \frac{i}{4\pi} \int_{0}^{4\pi} dk_x {\rm Tr} & \Big[
\Gamma_{\rm R} \tilde{H}_{\rm BdG}(k_x,0,\Gamma_z)^{-1}
\nonumber \\
& \times \partial_{k_x} \tilde{H}_{\rm BdG}(k_x,0,\Gamma_z) \Big].
\label{rotation-winding}
\end{align}
$\Gamma_{\rm M}(\Gamma_z) = e^{i\theta} M^{xy}_{\rm BdG}(\Gamma_z) \Gamma$ and
$\Gamma_{\rm R} = e^{i\theta'}R^{x}_{\rm BdG} \Gamma$ are mirror-chiral operator and rotation-chiral operator, respectively.
The phase factors $e^{i\theta}$ and $e^{i\theta'}$ are chosen so that $\Gamma_{\rm M}(\Gamma_z)^2 =\Gamma_{\rm R}^2=1$.
The mirror-winding number is defined on the mirror invariant planes at $k_z = \Gamma_z = 0, \pi$
and is $k_y$-dependent.
On the other hand, the rotation-winding number is defined on the rotation invariant lines.
The mirror-winding number is defined only in the TRS invariant A- and C-phases, since the mirror-chiral symmetry is broken
in the TRS broken B-phase.
From the algebra of symmetry operations we can prove that most of the winding numbers vanish.
The proof relies on the fact that the winding number disappears when any unitary symmetry preserved
on the surface anti-commutes with the chiral operator, $\left\{ U, \Gamma_{V} \right\} =0$.
This fact, ${\cal \omega}_{\rm V}=0$, is understood by
\begin{align}
{\cal \omega}_{\rm V} & = \frac{i}{4\pi} \int_{0}^{4\pi} dk_x {\rm Tr} \Big[U
\Gamma_{V} \tilde{H}_{\rm 1D}(k_x)^{-1} \partial_{k_x} \tilde{H}_{\rm 1D}(k_x) U^\dag \Big]
\nonumber \\ & = \frac{i}{4\pi} \int_{0}^{4\pi} dk_x {\rm Tr} \Big[
\left(-\Gamma_{V}\right) \tilde{H}_{\rm 1D}(k_x)^{-1} \partial_{k_x} \tilde{H}_{\rm 1D}(k_x) \Big]
\nonumber \\ & = - {\cal \omega}_{\rm V}.
\end{align}
Furthermore, the TRS has to satisfy $[T, \Gamma_V]=0$ when the winding number is nontrivial.
All of the mirror, glide, and rotation symmetries are preserved at the rotation invariant lines in the A- and C-phases,
although the glide and rotation symmetries are spontaneously broken in the B-phase.
Thus, we obtain some constraints on the winding numbers at ${\bm k}_{sf} =(0, 0)$ and $(0, \pi)$ in the A- and C-phases.
\begin{table}[htbp]
\begin{tabular}{c|c|c|c|c|c}
& $\Gamma_z$ & $c(M^{xy},\Gamma_{\rm M})$ & $c(G^{xz},\Gamma_{\rm M})$ & $c(R^{x},\Gamma_{\rm M})$ & $c(T,\Gamma_{\rm M})$ \\
\hline
A-phase & $0$ & -1 & -1 & +1 & -1 \\ \cline{2-2}
& $\pi$ & -1 & +1 & -1 & -1 \\ \cline{1-2}
C-phase &$0$ & -1 & +1 & -1 & -1 \\ \cline{2-2}
&$\pi$ & -1 & -1 & +1 & -1 \\
\hline
\end{tabular}
\caption{Commutation (anti-commutation) relations of the mirror-chiral operator $\Gamma_{\rm M}$ with
the crystal symmetry and time-reversal operators are represented by $+1$ ($-1$).
}
\label{tab3}
\end{table}
\begin{table}[htbp]
\begin{tabular}{c|c|c|c|c|c}
& $\Gamma_z$ & $c(M^{xy},\Gamma_{\rm G})$ & $c(G^{xz},\Gamma_{\rm G})$ & $c(R^{x},\Gamma_{\rm G})$ & $c(T,\Gamma_{\rm G})$ \\
\hline
A-phase & $0$ & +1 & +1 & +1 & +1 \\ \cline{2-2}
& $\pi$ & -1 & +1 & -1 & -1 \\ \cline{1-2}
C-phase &$0$ & +1 & -1 & -1 & -1 \\ \cline{2-2}
&$\pi$ & -1 & -1 & +1 & +1 \\
\hline
\end{tabular}
\caption{Commutation (anti-commutation) relations of the glide-chiral operator $\Gamma_{\rm G}$ with
the crystal symmetry and time-reversal operators.
}
\label{tab4}
\end{table}
\begin{table}[htbp]
\begin{tabular}{c|c|c|c|c|c}
& $\Gamma_z$ & $c(M^{xy},\Gamma_{\rm R})$ & $c(G^{xz},\Gamma_{\rm R})$ & $c(R^{x},\Gamma_{\rm R})$ & $c(T,\Gamma_{\rm R})$ \\
\hline
A-phase & $0$ & +1 & -1 & -1 & -1 \\ \cline{2-2}
& $\pi$ & -1 & +1 & -1 & -1 \\ \cline{1-2}
C-phase &$0$ & +1 & +1 & +1 & +1 \\ \cline{2-2}
&$\pi$ & -1 & -1 & +1 & +1 \\
\hline
\end{tabular}
\caption{Commutation (anti-commutation) relations of the rotation-chiral operator $\Gamma_{\rm R}$ with
the crystal symmetry and time-reversal operators.
}
\label{tab5}
\end{table}
The commutation (anti-commutation) relations between crystal symmetry operators
$M^{xy}$, $G^{xz}$, $R^x$ and chiral operators $\Gamma_{\rm M}$, $\Gamma_{\rm G}$, and $\Gamma_{\rm R}$
are summarized in Tables~\ref{tab3}, \ref{tab4}, and \ref{tab5}.
From this algebra, we find that only ${\cal \omega}_{\rm G}(0)$ and ${\cal \omega}_{\rm R}^{0}$ may be nontrivial.
Interestingly, all the winding numbers at $k_z=\pi$ vanish as a consequence of the nonsymmorphic glide symmetry.
The mirror-winding number at $k_z=0$ also vanishes in both A- and C-phases.
Furthermore, we see that the rotation-winding number ${\cal \omega}_{\rm R}^{0}$ disappears in the A-phase,
while the glide-winding number ${\cal \omega}_{\rm G}(0)$ disappears in the C-phase.
These symmetry constraints are consistent with our numerical calculations summarized in Table~\ref{tab6},
and also consistent with recently obtained general rules for winding numbers~\cite{Xiong}.
\begin{table}[htbp]
{\renewcommand\arraystretch{1.2}
\begin{tabular}{c|c|c}
& $|\eta|>1$ & $|\eta| < 1$ \\
\hline
${\cal \omega}_{\rm G}(0)$ & -2 & \textcolor{blue}{0} \\
\hline
${\cal \omega}_{\rm R}^{0}$ & \textcolor{blue}{0} & -2 \\
\hline
\end{tabular}
}
\caption{Nontrivial winding numbers of the $\Gamma$-FS.
The other winding numbers are proved to be zero owing to the symmetry constraints.
The zeros in the table are also ensured by the adiabatic connection from
the TRS invariant A- or C-phases.
}
\label{tab6}
\end{table}
In addition to the glide-winding number ${\cal \omega}_{\rm G}(0)$ discussed in Sec.~\ref{sec:glide-AIII},
we may have a nontrivial rotation-winding number, which is introduced below for completeness.
Combining the $\pi$-rotation symmetry
with TRS, we define the magnetic $\pi$-rotation symmetry by $T' = R_\pi^x T = -i s_z \sigma_x K$.
The BdG Hamiltonian is invariant
\begin{align}
&
T'_{\rm BdG} \tilde{H}_{\rm BdG}(\k) T_{\rm BdG}'^{\,\,\,\,\,\,\,\,-1} = \tilde{H}_{\rm BdG}(-k_x,k_y,k_z),
\end{align}
under the magnetic $\pi$-rotation in the Nambu space,
\begin{align}
T'_{\rm BdG} &=
\left(
\begin{array}{cc}
T' & 0 \\
0 & T'^* \\
\end{array}
\right)_{\tau}
= T' \otimes \tau_z,
\end{align}
not only in the rotation invariant A- and C-phases but also in the B-phase.
According to the classification by $K$-theory~\cite{Shiozaki2014}, the 2D Hamiltonian of D class on the $k_z = 0$ or $\pi$ plane
is specified by a $\mathbb{Z} \oplus \mathbb{Z}$ topological invariant by implementing the magnetic $\pi$-rotation symmetry.
The relations $\left(T'_{\rm BdG}\right)^2 = 1$ and $\left[\,T'_{\rm BdG}, C \,\right] = 0$ are used there.
One of the integer topological numbers is the rotation-winding number given by Eq.~(\ref{rotation-winding}),
where the rotation-chiral operator is $\Gamma_{\rm R} = T'_{\rm BdG} C = s_z \sigma_x \tau_y$.
\subsection{Topological transition in B-phase}\label{sec:magnetic-rotation}
In Sec.~\ref{sec:symmetry}, symmetry constraints on the winding numbers have been proved
in the A- and C-phases. In this subsection the B-phase is discussed.
We again see that ${\cal \omega}_{\rm G}(\pi) = {\cal \omega}_{\rm R}^{\pi}=0$ owing to the mirror symmetry.
On the other hand, we obtain ${\cal \omega}_{\rm G}(0) =-2$ when $|\eta|>1$,
while ${\cal \omega}_{\rm R}^{0}=-2$ when $|\eta|<1$ (see Table~\ref{tab6}).
The Majorana cone discussed in Sec.~\ref{sec:mirror} is protected by these winding numbers as well.
At $|\eta|=1$ the jump of the winding numbers ${\cal \omega}_{\rm G}(0)$ and ${\cal \omega}_{\rm R}^{0}$ indicates
the gap closing. Indeed, Eq.~(\ref{subsector-OP_0}) shows that the superconducting gap on the $k_z=0$
plane actually disappears at $|\eta|=1$: the mirror-subsector gap is proportional to $\eta \pm 1$, so one subsector becomes gapless at $\eta = \mp 1$.
This gap node has been reported as an unusual ``quadratic line node''~\cite{Yanase_UPt3_Weyl}.
In contrast to the usual linear line node with $\Delta({\bm k}) \propto |k_z|$, which appears in the purely
$f$-wave $E_{\rm 2u}$-state~\cite{Sauls,Joynt}, the line node of the generic $E_{\rm 2u}$-state is accompanied by
the quadratic behavior, $\Delta({\bm k}) \propto |k_z|^2$.
Such an unusual nodal structure at $|\eta|=1$ has been attributed to the pair annihilation of
Weyl nodes~\cite{Yanase_UPt3_Weyl}. It can also be viewed as a criticality of topological phase transition
specified by ${\cal \omega}_{\rm G}(0)$ and ${\cal \omega}_{\rm R}^{0}$.
In contrast to the $k_z=0$ plane, all of the winding numbers on the $k_z=\pi$ plane are zero irrespective of $\eta$.
Thus, the gap closing enforced by the change of winding numbers does not occur at $k_z=\pi$.
This is consistent with the numerical result showing the finite superconducting gap on the $k_z=\pi$ plane.
\section{Summary and discussions}
We investigated topologically nontrivial superconducting phases in UPt$_3$.
Taking into account the FSs reported by first-principles band structure calculations
and quantum oscillation experiments, we have calculated the topological invariants specifying the
superconducting states and demonstrated topological surface states.
Among a variety of topological properties in UPt$_3$, the most intriguing result is the nontrivial
glide-$\Z_2$ invariant in the TRS invariant A-phase. By using the $K$-theory for topological nonsymmorphic
insulators/superconductors, we showed that the glide-$\Z_2$ invariant is the strong topological index
specifying the 3D glide-even superconductivity of class DIII.
Although UPt$_3$ is a gapless SC in the bulk, the glide-$\Z_2$ invariant is well-defined and nontrivial.
Thus, the UPt$_3$ A-phase can be reduced to a 3D gapped TNSC while keeping the
double Majorana cone surface states, when the point nodes are removed by some perturbations.
Based on these findings, UPt$_3$ is identified as a 3D gapless TNSC.
To the best of our knowledge, this is the first proposal for the material realization of emergent topological
superconductivity enriched by nonsymmorphic space group symmetry.
Not only the A-phase but also the B- and C-phases have been identified as symmetry-enriched topological superconducting states.
Combining the crystal symmetries of UPt$_3$ with the TRS and PHS, we find
topological invariants and surface states as follows.
\begin{itemize}
\item
Double Majorana cone protected by the glide-$\Z_2$ invariant in the A-phase
\item
Chiral Majorana arcs in the Weyl B-phase
\item
Majorana cone protected by the mirror Chern number in the A-, B-, and C-phases
\item
Majorana flat band protected by the glide-winding number in the A-phase and ``half'' of the B-phase
\end{itemize}
It has been proved that the other mirror Chern numbers and winding numbers must be trivial because of
the symmetry constraints.
From the results obtained in this paper, we notice rich topological properties of superconducting UPt$_3$.
Underlying origins of such topological superconducting phases are as follows.
(1) Spin-triplet odd-parity superconductivity, which is often a platform of topological SC.
(2) 2D $E_{\rm 2u}$ representation, which allows multiple superconducting phases distinguished by symmetry.
(3) Nonsymmorphic space group symmetry $P6_{3}/mmc$, which gives rise to the following features distinct from symmorphic systems:
\begin{enumerate}
\item
The classification of topological insulators and SCs changes, allowing emergent topological phases.
\item
Dirac nodal lines yield the paired FSs which correspond to the pseudospin degree of freedom
in glide-subsectors.
\item
The sublattice-singlet $d$-wave pairing naturally admixes with the $f$-wave pairing, and
leads to the nontrivial glide-$\Z_2$ invariant.
\item
Most mirror Chern numbers and winding numbers are forced to be zero,
and do not support topological surface states.
\end{enumerate}
Thus, the old heavy-fermion superconductor UPt$_3$ is a valuable platform for
topological superconductivity enriched by nonsymmorphic space group symmetry.
\begin{acknowledgments}
The authors are grateful to A. Daido, S. Kobayashi, M. Sato, and S. Sumita for fruitful discussions.
This work was supported by Grant-in-Aid for Scientific Research on Innovative Areas ``J-Physics'' (JP15H05884)
and ``Topological Materials Science'' (JP16H00991) from JSPS of Japan, and by JSPS KAKENHI Grant Numbers
JP15K05164 and JP15H05745.
K.S.\ is supported by JSPS Postdoctoral Fellowship for Research Abroad.
\end{acknowledgments}
\section{Introduction}
In this note we consider a frequency localized Bernstein-type inequality which
has useful applications in fluid dynamics. Let $\alpha>0$ and consider the fractional Laplacian operator
$|\nabla|^\alpha$ defined via Fourier transform by the relation
\begin{align*}
\widehat{|\nabla|^\alpha f }(\xi)=(2\pi |\xi|)^\alpha \hat f(\xi), \quad \xi \in \mathbb R^d.
\end{align*}
Here $\hat f(\xi)$ is the usual Fourier transform of a scalar-valued function $f$ on $\mathbb R^d$.
To fix the notations, we adopt the following conventional definition of Fourier transform pair:
\begin{align}
\text{Fourier transform:}\qquad &(\mathcal F f)(\xi)=\hat f(\xi) = \int_{\mathbb R^d} f(x) e^{-2\pi i x \cdot \xi} dx, \notag \\
\text{Inverse Fourier transform:}\quad & f(x) = \int_{\mathbb R^d}
\hat f(\xi) e^{2\pi i x \cdot \xi}d \xi. \notag
\end{align}
Occasionally we also use the notation $\mathcal F^{-1}$ to denote inverse Fourier transform.
Let $0 < \alpha \le 2$ and $1<q<\infty$. The Bernstein-type inequality we are interested in
takes the following form: for any $A_2>A_1>0$ and any $f \in L^q(\mathbb R^d)$ with
\begin{align} \label{sup_cond1}
\text{supp}(\hat f) \subset \{ \xi:\;
A_1 \le |\xi| \le A_2\},
\end{align}
there is a constant $C$ depending only on $(d,q,\alpha,A_1,A_2)$ such that
\begin{align}
C \| f\|_{q}^q\ge \int_{\mathbb R^d} (|\nabla|^{\alpha} f) |f|^{q-2} f dx \ge \frac 1 {C} \| f\|_{q}^q. \label{Bern1}
\end{align}
Here $\|f \|_q$ is the usual Lebesgue norm of $f$ on $\mathbb R^d$. An equivalent and more commonly used formulation of
\eqref{Bern1} is stated in Corollary \ref{cor1} in which the frequency support condition \eqref{sup_cond1}
is replaced by the Littlewood-Paley operators. In \eqref{Bern1}, the upper bound is trivial: it is a consequence of
H\"older's inequality and the usual Bernstein inequality. As for the lower bound, the case $q=2$ is a simple consequence
of the Plancherel theorem since by \eqref{sup_cond1},
\begin{align*}
\int_{\mathbb R^d} (|\nabla|^{\alpha} f)\, f\, dx & = \int_{A_1 \le |\xi| \le A_2} (2\pi|\xi|)^{\alpha} |\hat f(\xi)|^2
d\xi \notag \\
& \ge {(2\pi A_1)^{\alpha}} \int_{\mathbb R^d} |\hat f(\xi)|^2 d\xi = {(2\pi A_1)^{\alpha}} \| f \|_2^2.
\end{align*}
It is the case $1<q<\infty$, $q\ne 2$ which requires more elaborate analysis.
For the full Laplacian case $\alpha=2$, the inequality \eqref{Bern1} can be reduced
to the form (still under the condition \eqref{sup_cond1})
\begin{align}
\int_{\mathbb R^d} | \nabla f |^2 |f|^{q-2} dx \ge C \| f \|_q^q \label{Bern2}
\end{align}
after an integration by parts argument. The inequality \eqref{Bern2} was first proved by Danchin
\cite{Dan1} when $q$ is an even integer and under a certain $q$-dependent small angle condition on the frequency
support. Planchon
\cite{Pl1} proved the case $\alpha=2$, $2<q<\infty$ by using an integration by parts argument.
In \cite{Dan2}, Danchin settled the remaining case $\alpha=2$, $1<q<2$ in the appendix of
that paper (see Lemma A.5 therein). The fractional Laplacian formulation of \eqref{Bern1} for $0<\alpha<2$ first
appeared in Wu \cite{Wu1} and it is of fundamental importance in the wellposedness theory for the dissipative quasi-geostrophic
equations. In \cite{CMZ07}, Chen, Miao and Zhang proved the inequality \eqref{Bern1} for $0<\alpha<2$,
$2<q<\infty$ by using an interpolation definition of Besov spaces. Recently Hmidi \cite{Hmidi11}
even generalized \eqref{Bern1} to some logarithm-damped fractional Laplacian operators of the form
\begin{align*}
\frac{|\nabla|^{\gamma}} {\log^{\beta}(\lambda +|\nabla|)}, \qquad 0\le \beta\le \frac 12,\; 0\le \gamma \le 1,
\lambda \ge e^{\frac{3+2\alpha}{\beta}}
\end{align*}
in dimensions $d=1,2,3$. The maximum principle for these nonlocal operators is obtained in \cite{Hmidi11} and \cite{DL12}.
The purpose of this note is to give a completely new proof of \eqref{Bern1} which works for $0<\alpha \le 2$ and for all
$1<q<\infty$. We begin by reformulating \eqref{Bern1} in terms of a (fractional) heat flow estimate.
The following result is the key step. See Remark \ref{rem_weak} for a slightly weaker result.
\begin{thm}[Improved heat flow estimate] \label{thm0}
Let the dimension $d\ge 1$. Let $0 < \alpha < 2$ and $1\le q \le \infty$. There exists a constant $c>0$ depending {only
on the dimension $d$ and $\alpha$} such that
for any dyadic $N>0$ and any function $f\in L_x^q (\mathbb R^d)$, we have
\begin{align}
\|e^{-t|\nabla|^{\alpha}} P_N f \|_{q} \le e^{-c t N^{\alpha} } \| P_N f \|_q, \qquad \forall\, t\ge 0. \label{e00}
\end{align}
Here $P_N$ is the Littlewood-Paley operator defined in \eqref{lp_def}. For $\alpha=2$, there is an absolute constant $\tilde c>0$ such that
for any $1<q<\infty$, $f \in L_x^q(\mathbb R^d)$, we have
\begin{align}
\|e^{t\Delta} P_N f \|_{q} \le e^{-\tilde c \frac{q-1} {q^2} t N^2 } \| P_N f \|_q, \qquad \forall\, t\ge 0. \label{e00a}
\end{align}
\end{thm}
\begin{rem}
The usual Young's inequality together with the fact $\|\mathcal F^{-1} (e^{-t (2\pi|\xi|)^{\alpha}}) \|_{L^1_x}=1$
easily yields that
\begin{align}
\|e^{-t|\nabla|^{\alpha}} P_N f \|_{q} \le \| P_N f\|_q, \qquad\forall\, t \ge 0. \notag
\end{align}
The inequalities \eqref{e00}--\eqref{e00a} give a strengthening (thus the name ``improved'') of the above estimate.
It is of course fairly easy to prove the estimate
\begin{align}
\|e^{-t|\nabla|^{\alpha}} P_N f \|_{q} \le C_1 e^{-c t N^{\alpha} } \| P_N f \|_q, \qquad \forall\, t\ge 0
\end{align}
with a non-sharp constant $C_1$. The main point of \eqref{e00} is that $C_1$ can take the sharp value $1$. This is
very important for deriving the later inequality \eqref{Bern1}.
We shall only need the estimate near $t=0$ to prove
the inequality \eqref{Bern1}. Also, it is worthwhile to point out that in \eqref{e00} $P_N$ can
be replaced by $P_{\ge N}$ or $P_{>N}$ since the main property
needed in the proof is a certain spectral gap condition.
\end{rem}
\begin{rem}
We stress that the two bounds \eqref{e00} and \eqref{e00a} are essentially optimal.
In particular the constant $c$ in \eqref{e00} cannot be taken to be uniform for all $0<\alpha\le 2$ and will actually blow up at $\alpha=2$.
This is deeply connected with the fact that the decay of $e^{-t|\nabla|^{\alpha}}$ is power-like only for $0<\alpha<2$.
For the full Laplacian, even with frequency localization, one \emph{should not} expect the inequality
\begin{align*}
\| e^{t \Delta} P_N f\|_{\infty} \le e^{-ctN^2} \| P_N f\|_{\infty}.
\end{align*}
To see this point it suffices to consider the periodic case, see Remark \ref{rem_counter_p} below.
\end{rem}
\begin{rem} \label{rem_weak}
If we do not care so much about the constant dependence on $q$, we can give a much shorter (and almost trivial) proof.
We sketch the argument as follows. By interpolating
the obvious inequalities (here $0<\alpha\le 2$, $N>0$, and $c_1$ is an absolute constant):
\begin{align}
\| e^{-t|\nabla|^{\alpha} } P_N f \|_2 &\le e^{-c_1t N^{\alpha}} \| P_N f \|_2, \notag \\
\| e^{-t|\nabla|^{\alpha} } P_N f \|_{q} &\le \| P_N f \|_{q}, \quad q=1\, \text{or}\, \infty, \label{einfty_weak}
\end{align}
we obtain
\begin{align} \label{erem_00a}
\| e^{-t |\nabla|^{\alpha} } P_N f \|_q \le e^{-c_1 tN^{\alpha} \cdot \frac{q-1} {q^2} } \|P_N f\|_q, \quad \forall\, 1<q<\infty.
\end{align}
Comparing \eqref{erem_00a} with \eqref{e00}, the main improvement there is at the endpoints $q=1$ and $q=\infty$ for $0<\alpha<2$.
\end{rem}
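For the reader's convenience, we write out the elementary Riesz--Thorin bookkeeping behind \eqref{erem_00a}. For $2<q<\infty$, interpolation between the $L^2$ and $L^\infty$ bounds with $\frac 1q = \frac{\theta}2$, i.e. $\theta = \frac 2q$, yields the decay factor
\begin{align*}
\left( e^{-c_1 t N^{\alpha}} \right)^{\theta} = e^{-\frac{2c_1}{q} t N^{\alpha}} \le e^{-c_1 \frac{q-1}{q^2} t N^{\alpha}},
\end{align*}
since $\frac{q-1}{q^2} \le \frac 2q$; the range $1<q<2$ is handled analogously with the $L^1$ endpoint and $\theta = \frac{2(q-1)}{q}$.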
\begin{rem}
One may wonder whether it is possible to absorb the frequency localization into the kernel and prove directly the bound
(say for $N=1$)
\begin{align} \label{eNov29_1}
\left \| \mathcal F^{-1} ( P_1 e^{-t |\nabla|^{\alpha} } ) \right\|_{L_x^1} \le e^{-ct}, \quad t>0.
\end{align}
We show that \eqref{eNov29_1} is impossible even for $t$ sufficiently small. Let $\phi_1(\xi) = \varphi(\xi) -\varphi(2\xi)$ (see \eqref{lp_def} for
the definition of $\varphi$)
and consider the function
\begin{align*}
g(t,x) = \int_{\mathbb R^d} e^{-t(2\pi|\xi|)^{\alpha}} \phi_1(\xi) e^{2\pi i \xi \cdot x} d\xi.
\end{align*}
For $ |\xi| =1$, we have by definition
\begin{align*}
1= |\phi_1(\xi)| & = \left| \int_{\mathbb R^d} g(0,x) e^{-2\pi ix \cdot \xi} dx \right| \notag \\
& \le \int_{\mathbb R^d} |g(0,x)| |\cos (2\pi x \cdot \xi) | dx \notag \\
& \le \int_{\mathbb R^d} |g(0,x) | dx.
\end{align*}
By examining the conditions for equality, it is not difficult to disprove the possibility $\| g(0,\cdot) \|_1=1$. Therefore
we have $\|g(0,\cdot)\|_1>1$. Since $\| g(t,\cdot)\|_1$ is continuous in $t$, we get $\| g(t,\cdot)\|_1 >1$ for $t$ sufficiently small.
This disproves \eqref{eNov29_1}.
\end{rem}
The Bernstein inequality \eqref{Bern1} can be regarded as an infinitesimal version of the decay
estimate \eqref{e00}--\eqref{e00a}. We state it as the following corollary.
\begin{cor} \label{cor1}
Let the dimension $d\ge 1$. Let $0<\alpha<2$ and $1< q <\infty$. Then for any dyadic $N>0$,
any $f\in L_x^q(\mathbb R^d)$,
\begin{align}
\int_{\mathbb R^d} (P_N |\nabla|^\alpha f )|P_N f |^{q-2} P_N f dx
\ge c N^{\alpha} \| P_N f\|_q^q, \label{e30a}
\end{align}
where the constant $c$ depends only on the dimension $d$ and $\alpha$.
For $\alpha=2$, there is an absolute constant $\tilde c>0$ such that for any $1<q<\infty$, any dyadic $N>0$ and
any $f\in L_x^q(\mathbb R^d)$, we have the inequality
\begin{align}
- \int_{\mathbb R^d} (P_N \Delta f )|P_N f |^{q-2} P_N f dx
\ge \tilde c \frac {q-1} {q^2} N^{2} \| P_N f\|_q^q. \label{e30b}
\end{align}
\end{cor}
\begin{rem}
In \eqref{e30a} $P_N$ can be replaced by $P_{\ge N}$ or $P_{>N}$ or other similar frequency projection operators.
Note that in Corollary \ref{cor1} the restriction
of $q$ is $1<q<\infty$. This is because we shall deduce
\eqref{e30a}--\eqref{e30b} from \eqref{e00}--\eqref{e00a} through a differentiation
argument. A rigorous justification of differentiating under the integral
requires $1<q<\infty$.
\end{rem}
\begin{rem}
We stress that for $0<\alpha<2$ the constant $c$ in \eqref{e30a} depends only on $(d,\alpha)$. In particular it does not depend on
the constant $q$. This is in sharp contrast with the full Laplacian case where the constant is proportional to $(q-1)/q^2$ which
vanishes at $q=1$ and $q=\infty$. In \cite{Dan2} (see equation (88) on page 1228 therein),
Danchin effectively proved the inequality \eqref{e30b} by using an integration by parts argument. Our new proof here also reproduces
the same constants. It should be possible to show that \eqref{e30a}--\eqref{e30b} are essentially optimal. But we will not dwell on this
issue here.
\end{rem}
\begin{rem}
If we do not insist on obtaining the sharp constant dependence on $q$ (especially for $0<\alpha<2$), we can give a much shorter
proof of a weaker version
of \eqref{e30a} which includes \eqref{e30b} as a special case. The starting point is the almost trivial inequality \eqref{erem_00a}.
By a rigorous differentiation and comparison argument at $t=0$ (see e.g. \eqref{pt_50c}--\eqref{e32} in the proof of Corollary \ref{cor1}), we
arrive at the inequality
\begin{align}
\int_{\mathbb R^d} (P_N |\nabla|^{\alpha} f )|P_N f |^{q-2} P_N f dx
\ge c^{\prime} \cdot \frac{q-1} {q^2} N^{\alpha} \| P_N f\|_q^q, \label{e30_new}
\end{align}
where $1<q<\infty$ and $c^{\prime} >0$ is an absolute constant. This is already enough for most applications in the local wellposedness theory
of PDEs with fractional Laplacian dissipation.
\end{rem}
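To make the inequality concrete, the following one-dimensional numerical sanity check may be helpful; the grid parameters, the test function, and the sharp-cutoff stand-in for $P_N$ are our own ad hoc choices and are not taken from the references.
\begin{verbatim}
# 1D sanity check of the frequency-localized Bernstein inequality;
# all parameters below are arbitrary illustrative choices.
import numpy as np

alpha, q = 1.0, 3.0            # fractional exponent and Lebesgue exponent
L, M = 200.0, 2**14            # box size and number of grid points
x = np.linspace(-L/2, L/2, M, endpoint=False)
dx = x[1] - x[0]
xi = np.fft.fftfreq(M, d=dx)   # frequency variable, in cycles per unit

f = np.exp(-x**2) * np.cos(7*x)   # arbitrary Schwartz-type test function
fhat = np.fft.fft(f)

for N in [0.5, 1.0, 2.0, 4.0]:
    # sharp band-pass onto N/2 <= |xi| <= 2N, a crude stand-in for P_N
    mask = (np.abs(xi) >= N/2) & (np.abs(xi) <= 2*N)
    PNf = np.real(np.fft.ifft(fhat * mask))
    frac = np.real(np.fft.ifft(fhat * mask * (2*np.pi*np.abs(xi))**alpha))
    lhs = np.sum(frac * np.abs(PNf)**(q-2) * PNf) * dx
    rhs = N**alpha * np.sum(np.abs(PNf)**q) * dx
    print(N, lhs / rhs)        # expected to stay bounded below by some c > 0
\end{verbatim}
For bands that capture a nonnegligible part of $f$, the printed ratios stay bounded away from zero, in agreement with the frequency-localized lower bound.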
Before we move on to other similar results, let us explain the ``mechanism'' of our proof.
In some sense our proof is an upgraded version of the proof in Remark \ref{rem_weak}. In particular \eqref{e00} is a strengthened
version of \eqref{einfty_weak} in the case $0<\alpha<2$ (and hence the improvement). The proof of
\eqref{e00} is based
on frequency localization and Young's inequality. We briefly
explain the idea as follows. First notice that by scaling
it suffices to prove \eqref{e00} for $N=1$. By frequency
localization we have
\begin{align*}
e^{-t (2\pi|\xi|)^{\alpha}} \widehat{P_1 f}(\xi) &= e^{-t(2\pi|\xi|)^{\alpha}}
\psi(\xi) \hat f(\xi) \notag \\
&=e^{-t ( (2\pi|\xi|)^{\alpha} + \epsilon \phi_1(\xi))}
\psi (\xi) \hat f(\xi),
\end{align*}
where $\psi(\xi)=\varphi(\xi)-\varphi(2\xi)$ (see \eqref{lp_def}) and $\phi_1(\xi)= \varphi(6 \xi)$.
In the last equality we used the fact that $\psi(\xi)=0$ for $|\xi| \le 1/2$
and $\phi_1(\xi)=0$ for $|\xi| \ge 1/3$. Now to prove \eqref{e00} it suffices to
show that the modified kernel
\begin{align*}
k_\epsilon(t,x) = \mathcal F^{-1} ( e^{-t ((2\pi|\xi|)^{\alpha} + \epsilon \phi_1(\xi))})
\end{align*}
has the bound $\| k_\epsilon(t,\cdot)\|_{L_x^1} \le e^{-ct}$ for some $c>0$. Since
$\hat k_\epsilon(t, 0)= e^{-t \epsilon \phi_1(0)} = e^{-t \epsilon}$,
we only need to prove that $k_{\epsilon}(t,x)$ is non-negative for $\epsilon$ sufficiently small.
The proof of this fact is given in Lemma \ref{lem0}. The main idea is to use the slow (power-like) decay (see \eqref{slow_decay})
of the L\'{e}vy semigroup when $0<\alpha<2$ which is stable under smooth perturbations.
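The key positivity fact can also be observed numerically. The following one-dimensional sketch (the discretization parameters and the explicit bump profile are our own choices) approximates $k_\epsilon(t,\cdot)$ by a discrete inverse Fourier transform and checks that it is essentially nonnegative, so that its $L^1$ norm is close to $\hat k_\epsilon(t,0)=e^{-\epsilon t}$.
\begin{verbatim}
# Numerical illustration of the positivity of the perturbed kernel k_eps.
import numpy as np

alpha, t, eps = 1.0, 0.5, 0.05    # 0 < alpha < 2, 0 < t <= 1, small eps
M, L = 2**16, 2000.0              # fine grid on a large box
xi = np.fft.fftfreq(M, d=L/M)     # frequency grid, in cycles per unit

def phi1(z):
    # smooth radial bump: equal to 1 for |z| <= 1/4 and 0 for |z| >= 1/3
    z = np.abs(z)
    out = np.zeros_like(z)
    out[z <= 0.25] = 1.0
    mid = (z > 0.25) & (z < 1/3)
    s = (z[mid] - 0.25) / (1/3 - 0.25)   # s in (0,1) on the transition
    out[mid] = np.exp(-1/(1-s)) / (np.exp(-1/(1-s)) + np.exp(-1/s))
    return out

symbol = np.exp(-t * ((2*np.pi*np.abs(xi))**alpha + eps*phi1(xi)))
k_eps = np.real(np.fft.ifft(symbol)) * (M/L)   # approximate inverse FT
print("min k_eps   :", k_eps.min())            # essentially nonnegative
print("L^1 norm    :", np.sum(np.abs(k_eps)) * (L/M))
print("exp(-eps*t) :", np.exp(-eps*t))
\end{verbatim}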
It is fairly interesting to establish some analogues of Theorem \ref{thm0} and Corollary \ref{cor1} in the periodic
domain case. To fix notations, let the dimension $d\ge 1$ and $\mathbb T^d=\mathbb R^d/\mathbb Z^d$ be the usual periodic
torus. For a smooth periodic function $f:\, \mathbb T^d \to \mathbb R$, we adopt the Fourier expansion and
inversion formulae:
\begin{align*}
f(x) & = \sum_{k\in \mathbb Z^d} \hat f(k) e^{2\pi i k\cdot x}, \quad x \in \mathbb T^d, \notag \\
\hat f(n) &= \int_{\mathbb T^d} f(x) e^{-2\pi i n\cdot x} dx, \quad n \in \mathbb Z^d.
\end{align*}
For $0<\alpha \le 2$, the fractional Laplacian operator $|\nabla|^{\alpha}$ is defined by the relation
\begin{align*}
\widehat{|\nabla|^{\alpha} f} (n) = (2\pi|n|)^{\alpha} \hat f(n),\quad n \in \mathbb Z^d.
\end{align*}
We shall say a function $f\in L^1(\mathbb T^d)$ has mean zero if $\hat f(0)=0$ or equivalently
\begin{align*}
\int_{\mathbb T^d} f(x) dx =0.
\end{align*}
With these notations, we have
\begin{thm}[Improved heat flow estimate, periodic case] \label{thm0_period}
Let the dimension $d\ge 1$. For any $0<\alpha < 2$, there is a constant $c_1>0$ depending only
on $(\alpha,d)$ such that for any $1\le q\le \infty$ and any $f\in L^q(\mathbb T^d)$ with mean zero, we have
\begin{align}
\| e^{-t |\nabla|^{\alpha} } f \|_q \le e^{-c_1 t} \| f\|_q, \quad \forall\, t>0. \label{eNov30_1}
\end{align}
For $\alpha=2$, there is an absolute constant $c_2>0$ such that for any $1<q<\infty$, $f\in L^q(\mathbb T^d)$ with
mean zero, we have
\begin{align}
\| e^{t\Delta} f\|_q \le e^{- c_2 \frac{q-1} {q^2} t} \| f\|_q, \quad \forall\, t>0. \label{eNov30_2}
\end{align}
\end{thm}
\begin{rem}
One should notice again the subtle difference between the bounds \eqref{eNov30_1} for $0<\alpha<2$ and \eqref{eNov30_2} for $\alpha=2$.
The main reason is that in the periodic setting our perturbation argument also relies heavily on the pointwise lower bound of the
L\'{e}vy semigroup for sufficiently small $t$. When $0<\alpha<2$ the decay of $e^{-t|\nabla|^{\alpha}}$ is power-like. However
for $\alpha=2$ this is no longer the case and the perturbation argument does not work.
\end{rem}
\begin{rem} \label{rem_counter_p}
We stress that \eqref{eNov30_2} is optimal. In particular one should not expect the inequality
\begin{align}
\| e^{t\Delta} f \|_\infty \le e^{-ct} \| f\|_{\infty}, \label{erem_impossible}
\end{align}
even for $t>0$ sufficiently small. To see this, we take any smooth $f$ on $\mathbb T^d$ with $\| f\|_{\infty} =1$ and zero mean.
Suppose $x_0\in \mathbb T^d$ and $f(x)\equiv 1$ in some neighborhood $|x-x_0|\le \delta_0$ with $\delta_0>0$. Write
\begin{align*}
e^{t\Delta} f = k(t,\cdot)*f,
\end{align*}
where $*$ denote the usual convolution on $\mathbb T^d$ and $k(t,\cdot)$ is the periodic heat kernel. By using the Poisson summation formula
(see Lemma
\ref{lem_poi}),
it is not difficult to check that for $|y|\le \frac 12$, $0<t<1$, we have
\begin{align*}
k(t,y)= \frac 1{(4\pi t)^{\frac d 2} } e^{-\frac{|y|^2}{4t}} + O(e^{-\frac C t}),
\end{align*}
where $C>0$ is some constant. We then have for $t>0$ sufficiently small and some constant $C_{0}>0$,
\begin{align*}
|(e^{t\Delta} f)(x_0)| &\ge \int_{|y|\le \frac {\delta_0} 2} \frac 1 {(4\pi t)^{\frac d2} } e^{-\frac{|y|^2}{4t} } dy +
O(e^{-\frac {C} t}) \\
& \ge 1 + O( e^{-\frac {C_0} t} ).
\end{align*}
This disproves \eqref{erem_impossible}.
\end{rem}
An immediate consequence of Theorem \ref{thm0_period} is a family of generalized Poincar\'e-type inequalities.
\begin{cor}[Generalized Poincar\'e-type inequalities for periodic domains] \label{cor_period}
Let the dimension $d\ge 1$. For any $0<\alpha < 2$, there is a constant $c_1>0$ depending only
on $(\alpha,d)$ such that for any $f\in C^{2}(\mathbb T^d)$ with mean zero, we have
\begin{align}
\int_{\mathbb T^d} (|\nabla|^{\alpha} f) |f|^{q-2} f dx \ge c_1 \| f\|_q^q, \quad \forall\, 1<q<\infty. \label{eNov30_3}
\end{align}
For $\alpha=2$, there is an absolute constant $c_2>0$ such that for any $f\in C^{2}(\mathbb T^d)$ with
mean zero, we have
\begin{align}
- \int_{\mathbb T^d} (\Delta f) |f|^{q-2} f dx \ge c_2 \frac{q-1} {q^2}\| f\|_q^q, \quad \forall\, 1<q<\infty. \label{eNov30_4}
\end{align}
\end{cor}
The proof of Corollary \ref{cor_period} is quite similar to the proof of Corollary \ref{cor1} and therefore we omit it.
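As an elementary illustration of \eqref{eNov30_3} (a sanity check only): for the mean-zero function $f(x) = \cos(2\pi x_1)$ one has $|\nabla|^{\alpha} f = (2\pi)^{\alpha} f$, and therefore
\begin{align*}
\int_{\mathbb T^d} (|\nabla|^{\alpha} f) |f|^{q-2} f\, dx = (2\pi)^{\alpha} \| f\|_q^q,
\end{align*}
which is consistent with \eqref{eNov30_3} for any $c_1 \le (2\pi)^{\alpha}$.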
\begin{rem}
In \cite{KS_book} (see Proposition A.14.1 on page 291 therein), by using a contradiction argument,
the authors proved the inequality \eqref{eNov30_4} for the case $2\le q<\infty$
with a dimension-dependent constant. Our new proof here covers the whole range $1<q<\infty$ with a dimension-independent constant $c_2$.
\end{rem}
We conclude the introduction by setting up some
\subsubsection*{Notations}
We will need to use the
Littlewood-Paley frequency projection operators. Let $\varphi(\xi)$ be a smooth bump
function supported in the ball $|\xi| \leq 2$ and equal to one on
the ball $|\xi| \leq 1$. For each dyadic number $N \in 2^{\mathbb Z}$ we
define the Littlewood-Paley operators
\begin{align}
\widehat{P_{\leq N}f}(\xi) &:= \varphi(\xi/N)\hat f (\xi), \notag\\
\widehat{P_{> N}f}(\xi) &:= [1-\varphi(\xi/N)]\hat f (\xi), \notag\\
\widehat{P_N f}(\xi) &:= [\varphi(\xi/N) - \varphi (2 \xi /N)] \hat
f (\xi). \label{lp_def}
\end{align}
Similarly we can define $P_{<N}$, $P_{\geq N}$, and $P_{M < \cdot
\leq N} := P_{\leq N} - P_{\leq M}$, whenever $M$ and $N$ are dyadic
numbers.
\subsection*{Acknowledgements}
The author would like to thank Prof. Ya.G. Sinai for informing him of the book \cite{KS_book}.
D. Li was supported in part by NSF under agreement No. DMS-1128155. Any opinions, findings
and conclusions or recommendations expressed in this material are those of the authors and
do not necessarily reflect the views of the National Science Foundation. D. Li was also supported in part
by an NSERC Discovery Grant.
\section{Proof of main theorems}
We begin with a simple lemma. Let $\phi_1 \in C_c^{\infty} (\mathbb R^d)$ be a radial function such that
\begin{align} \label{e_phi1}
\phi_1(x)=
\begin{cases}
1, \quad |x|\le \frac 14, \\
0, \quad |x| \ge \frac 13.
\end{cases}
\end{align}
Let $\epsilon>0$ and define for $t>0$,
\begin{align} \label{eq_F}
F_{\epsilon} (t,x) =
\int_{\mathbb R^d} e^{-t (2\pi|\xi|)^{\alpha}} { (e^{-\epsilon t \phi_1(\xi) } -1)} e^{2\pi i \xi \cdot x} d\xi.
\end{align}
Also denote
\begin{align}
p(t,x) = \mathcal F^{-1} ( e^{-t(2\pi|\xi|)^{\alpha}}) = \int_{\mathbb R^d}
e^{-t(2\pi |\xi|)^{\alpha}} e^{ 2\pi i\xi \cdot x} d\xi, \label{eq_P}
\end{align}
and
\begin{align}
k_{\epsilon}(t,x) = \mathcal F^{-1} ( e^{-t ((2\pi |\xi|)^{\alpha} + \epsilon \phi_1(\xi))} )
= \int_{\mathbb R^d}
e^{-t ((2\pi |\xi|)^{\alpha} +\epsilon \phi_1(\xi) )} e^{2\pi i\xi \cdot x} d\xi. \label{eq_k}
\end{align}
Clearly
\begin{align} \label{e_23a}
k_{\epsilon}(t,x) = p(t,x) + F_{\epsilon}(t,x).
\end{align}
The following lemma shows that for $\epsilon>0$ sufficiently small $k_{\epsilon}(t,x)$ is still
a positive kernel.
\begin{lem} \label{lem0}
Let $d\ge 1$ and $0<\alpha<2$. There exists a constant $C_1=C_1(d,\alpha)>0$ such
that for any $x \in \mathbb R^d$, $t>0$,
\begin{align} \label{to1a}
\frac 1 {C_1} \cdot \frac {t} { (t^{\frac 1 {\alpha}} + |x| )^{d+\alpha} }
\le p(t,x) \le {C_1} \cdot \frac {t} { (t^{\frac 1 {\alpha}} + |x| )^{d+\alpha} } .
\end{align}
There exists a constant $\epsilon_0=\epsilon_0(d,\alpha)>0$
such that if $0<\epsilon<\epsilon_0$, then
\begin{align} \label{e4_7}
p(t,x) - |F_{\epsilon}(t,x)| > 0, \qquad \forall\, x \in \mathbb R^d, 0<t \le 1.
\end{align}
In particular
\begin{align} \label{39a}
\| k_{\epsilon} (t,\cdot) \|_{L_x^1} \le e^{- \epsilon t}, \qquad \forall\, 0<t \le 1.
\end{align}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem0}]
The bound \eqref{to1a} is a well-known result, cf. \cite{BG60}:
\begin{align} \label{slow_decay}
\frac 1 {C_1} \min\{ t^{-d/\alpha},\; \frac t
{|x|^{d+\alpha}} \}
\le p(t,x) \le {C_1} \min\{ t^{-d/\alpha},\; \frac t
{|x|^{d+\alpha}} \}
\end{align}
which is clearly equivalent to \eqref{to1a}. By \eqref{eq_F} we have
\begin{align}
F_{\epsilon}(t,x) = p(t)*w(t)= \int_{\mathbb R^d} p(t,x-y) w(t,y)dy, \label{e_conv}
\end{align}
where
\begin{align*}
w(t,y)&= \epsilon t \int_{\mathbb R^d} \frac {e^{-\epsilon t \phi_1(\xi)} -1}
{\epsilon t} e^{2\pi i \xi \cdot y} d\xi \notag \\
&=: \epsilon t \, w_1(t,y).
\end{align*}
It is not difficult to check that for any $0<\epsilon t \le 1$, we have
\begin{align*}
|w_1(t,y)| \le C(d,\alpha)\cdot \frac 1 {(1+|y|^2)^{d+10}}, \qquad \forall\, y \in \mathbb R^d.
\end{align*}
Comparing this with \eqref{to1a}, we get
\begin{align*}
|w(t,y)| \le C(d, \alpha) \cdot \epsilon\cdot p(t,y), \qquad \forall\, y \in \mathbb R^d, \, 0<t \le 1.
\end{align*}
Plugging it into \eqref{e_conv} and using the fact that $p(t)*p(t)=p(2t)$, we get
\begin{align}
|F_{\epsilon}(t,x)| &\le C(d,\alpha) \cdot \epsilon\, p(2t,x), \notag \\
& \le \frac 12 p(t,x), \qquad \forall\, x \in \mathbb R^d, \, 0<t \le 1, \notag
\end{align}
where we used the fact $p(2t,x) \le \mathrm{const} \cdot p(t,x)$ and chose $\epsilon$ sufficiently small.
Clearly \eqref{e4_7} follows. Finally \eqref{e4_7} and
\eqref{e_23a} imply that $k_{\epsilon}(t,x)$ is positive everywhere. The bound
\eqref{39a} then follows since $\| k_{\epsilon} (t,\cdot) \|_{L_x^1} = \hat k_{\epsilon}(t,0) = e^{-\epsilon t \phi_1(0)} = e^{-\epsilon t}$, using the fact that $\phi_1(0)=1$.
\end{proof}
Now we are ready to complete the
\begin{proof}[Proof of Theorem \ref{thm0}]
First we note that \eqref{e00a} is already proved in Remark \ref{rem_weak}. Therefore we only need to prove
\eqref{e00} for $0<\alpha<2$.
By a scaling argument
we only need to prove \eqref{e00} for $N=1$. In view of the usual convolution property of the heat semigroup, i.e.
\begin{align*}
e^{-t|\nabla|^{\alpha} } = \Bigl( e^{- \frac t {m} |\nabla|^{\alpha}} \Bigr)^m,
\end{align*}
it suffices to prove
\eqref{e00} for $0<t\le 1$.
Now recall that
\begin{align*}
\widehat{P_1 f}(\xi) = \psi(\xi) \hat f(\xi),
\end{align*}
where $\psi$ is compactly supported in the annulus $\{\xi:\; 1/2 \le |\xi| \le 2 \}$. In view of this
localization property, we can smoothly redefine the kernel $e^{-t(2\pi|\xi|)^{\alpha}}$ on the tail part $\{\xi:\; |\xi| < \frac 12\}$
such that
\begin{align} \label{id_1}
e^{-t(2\pi|\xi|)^{\alpha}} \widehat{P_1 f}(\xi) = e^{-t \left( (2\pi|\xi|)^{\alpha}+ \epsilon \phi_{1} (\xi) \right)} \widehat{P_1 f}(\xi),\qquad \forall\, \xi \in \mathbb R^d,
\end{align}
where we choose $\phi_{1} (\xi) $ as in \eqref{e_phi1}.
Now denote $\phi_{\epsilon}(\xi)=(2\pi |\xi|)^{\alpha} + \epsilon \phi_1(\xi)$.
Therefore we only need to prove for $\epsilon>0$ sufficiently small,
\begin{align*}
\left\| \Bigl(\mathcal F^{-1} (e^{-t \phi_{\epsilon} } ) \Bigr)
* P_1 f \right\|_q \le e^{-\epsilon t} \| P_1 f\|_q, \quad \forall\, 0< t \le 1.
\end{align*}
But this follows easily from Lemma \ref{lem0} and Young's inequality.
\end{proof}
To prove Corollary \ref{cor1}, we need the following simple lemma.
\begin{lem} \label{lem5}
Let $0<\alpha\le 2$. Let $f \in L^1_{loc}(\mathbb R^d)$. Then there is
a constant $C_{\alpha,d}>0$ depending only on $(\alpha,d)$, such that
\begin{align}
\sup_{0< t <\infty} | (e^{-t |\nabla|^{\alpha}} f) (x) | \le C_{\alpha,d} (Mf)(x),
\label{Mf_bound}
\end{align}
where $Mf$ is the Hardy-Littlewood maximal function defined as
\begin{align*}
(Mf)(x)= \sup_{B\ni x} \frac 1{|B|} \int_B |f|.
\end{align*}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem5}]
Denote $p(y)=\mathcal F^{-1}(e^{-(2\pi |\xi|)^{\alpha}})$. By \eqref{to1a} we have
for some constant $C_1>0$,
$$p(y) \le C_1 \cdot (1+|y|)^{-(d+\alpha)}.$$
Therefore for $0<t <\infty$,
\begin{align*}
| &(e^{-t|\nabla|^{\alpha}} f )(x) | \notag \\
& \le \int_{\mathbb R^d} p(y)|f(x+t^{\frac 1 {\alpha}} y) | dy \notag\\
& \le \int_{|y| \le 1} p(y) |f(x+ t^{\frac 1 {\alpha}} y)| dy+ \sum_{k=1}^{\infty} \int_{2^{k-1} \le |y|
\le 2^k} p(y) |f(x+ t^{\frac 1 {\alpha}} y)| dy \notag \\
& \le C_1 \int_{|y| \le 1} |f(x+t^{\frac 1 {\alpha}} y)|dy+C_1 \sum_{k=1}^{\infty} 2^{-(k-1)(d+\alpha)}
\int_{|y| \le 2^k} |f(x+ t^{\frac 1 {\alpha}} y) | dy \notag \\
& \le C_1 2^{d+\alpha} \sum_{k\ge 0} 2^{-k\alpha} \sup_{k^{\prime} \ge 0} \int_{|y| \le 1}
|f(x+ 2^{k^{\prime}} t^{\frac 1 {\alpha}} y )|dy \notag \\
& \le C_{d,\alpha} (Mf)(x).
\end{align*}
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor1}]
Define
\begin{align*}
F(t) = \int_{\mathbb R^d} | e^{-t|\nabla|^{\alpha}} P_N f |^q dx.
\end{align*}
Note that for each $t\ge 0$,
\begin{align*}
\partial_t ( e^{-t |\nabla|^{\alpha} } P_N f) & = -e^{-t |\nabla|^{\alpha}} |\nabla|^{\alpha} P_N f. \notag
\end{align*}
Since $1<q<\infty$, we get
\begin{align}
&\partial_t ( |e^{-t |\nabla|^{\alpha} } P_N f |^q) \notag \\
=&\; - q(e^{-t |\nabla|^{\alpha}} |\nabla|^{\alpha} P_N f)
\cdot |e^{-t|\nabla|^{\alpha}}P_N f|^{q-2} \cdot
e^{-t |\nabla|^{\alpha}} P_N f. \label{pt_50a}
\end{align}
By Lemma \ref{lem5} and \eqref{pt_50a}, we have
\begin{align}
\sup_{t\ge 0} &\left| \biggl(\partial_t (|e^{-t |\nabla|^{\alpha} } P_N f |^q) \biggr)(t,x) \right| \notag \\
& \le q M(|\nabla|^{\alpha}P_N f)(x) \cdot \left( M(P_N f)(x) \right)^{q-1} \notag \\
& \le \left(M(|\nabla|^{\alpha}P_N f)(x)\right)^q + (q-1) \left( M(P_N f)(x) \right)^{q} \notag \\
& =: H(x). \label{pt_50b}
\end{align}
Since $1<q<\infty$ and $f \in L^q_x(\mathbb R^d)$, it is easy to check that $H \in L_x^1(\mathbb R^d)$.
Now denote $b(t,x) = \partial_t ( |e^{-t |\nabla|^{\alpha}} P_N f|^q)(t,x)$. Observe that
by \eqref{pt_50a}, it is easy to check that $b$ is continuous in $(t,x)$ and consequently
\begin{align}
\lim_{\delta \downarrow 0} \frac {\int_0^\delta b(s,x) ds} \delta = -q (|\nabla|^{\alpha}P_N f)(x)
|P_N f(x)|^{q-2} P_N f(x), \qquad \forall\, x \in \mathbb R^d. \label{pt_50c}
\end{align}
Now let $0<h<1$ and write
\begin{align*}
\frac{F(h)-F(0)} h
= \int_{\mathbb R^d} \frac{ \int_0^h b(s,x) ds} h dx.
\end{align*}
By \eqref{pt_50b}, we have
\begin{align}
\sup_{0 <h <1} \Bigl|\frac{ \int_0^h b(s,x) ds} h\Bigr| \le H(x). \label{pt_50d}
\end{align}
By \eqref{pt_50c}--\eqref{pt_50d} and the Lebesgue Dominated Convergence Theorem, we obtain that $F$ is right-differentiable
at $t=0$ and
\begin{align*}
F^{\prime}(0+) = -q \int_{\mathbb R^d} (|\nabla|^{\alpha}P_N f)(x)
|P_N f(x)|^{q-2} P_N f(x) dx.
\end{align*}
In particular we have
\begin{align}
\text{LHS of \eqref{e30a}} = - \frac 1 q F^{\prime}(0+). \label{e31}
\end{align}
On the other hand by using Theorem \ref{thm0}, we have for any $t>0$,
\begin{align*}
\frac {F(0) -F(t)} {t} \ge \frac {1-e^{-ctqN^{\alpha}}} t \| P_N f \|_q^q.
\end{align*}
Taking the limit $t\to 0+$ immediately gives us
\begin{align}
- F^{\prime}(0+) \ge c q N^{\alpha} \|P_N f \|_q^q. \label{e32}
\end{align}
By \eqref{e31} this gives us \eqref{e30a}.
\end{proof}
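As a quick numerical sanity check of \eqref{e30a} (an illustration only; the periodic grid, $d=1$, $q=4$, and random band-limited data are our own choices), one can compare the left-hand side with $N^{\alpha}\|P_N f\|_q^q$ for functions with Fourier support in $[N,2N]$; the ratio stays bounded away from zero:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, alpha, q = 4096, 1.0, 4.0

def band_limited(N):
    # real random function with Fourier support in N <= |k| <= 2N
    c = np.zeros(M, dtype=complex)
    ks = np.arange(N, 2 * N + 1)
    c[ks] = rng.normal(size=ks.size) + 1j * rng.normal(size=ks.size)
    c[-ks] = np.conj(c[ks])            # Hermitian symmetry -> real g
    return np.fft.ifft(c).real * M     # values at x_j = j/M

k = np.fft.fftfreq(M, d=1.0 / M)       # integer frequencies
for N in (4, 8, 16, 32, 64):
    g = band_limited(N)
    Lg = np.fft.ifft((2 * np.pi * np.abs(k))**alpha * np.fft.fft(g)).real
    lhs = np.mean(Lg * np.abs(g)**(q - 2) * g)
    rhs = N**alpha * np.mean(np.abs(g)**q)
    print(N, lhs / rhs)                # ratio bounded below, as predicted
\end{verbatim}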
For the periodic case we recall the following standard Poisson summation formula.
\begin{lem}[Poisson summation formula] \label{lem_poi}
Let $f$ be a Schwartz function on $\mathbb R^d$ and denote by $\hat f$ its Fourier transform on $\mathbb R^d$. Then
\begin{align*}
\sum_{n \in \mathbb Z^d} f(x+n) = \sum_{n \in \mathbb Z^d} \hat f(n) e^{2\pi i n \cdot x}, \quad \forall\, x \in \mathbb R^d.
\end{align*}
\end{lem}
As is well known, the Poisson summation formula holds for general continuous functions of moderate decrease including the heat
kernels
\begin{align}
k_{\alpha}(t,\cdot) = \mathcal F^{-1} ( e^{-t (2\pi |\xi|)^{\alpha}}), \quad 0<\alpha\le 2, t>0. \notag
\end{align}
Denote the periodic kernel
\begin{align}
k_{\alpha}^{per} (t,x) = \sum_{n\in \mathbb Z^d} e^{-t(2\pi |n|)^{\alpha}} e^{2\pi i n\cdot x}. \notag
\end{align}
The next lemma gives a lower bound on $k_{\alpha}^{per}(t,x)$. It is amusing that later this lower bound is used to prove
an upper bound of the heat kernel.
\begin{lem}[Short time lower bound of the fractional heat kernel] \label{lem_lower}
Let the dimension $d\ge 1$ and let $0<\alpha<2$. There is a constant $c_3>0$ depending only on $(\alpha,d)$ such that
\begin{align}
k_{\alpha}^{per}(t,x) \ge c_3 t, \quad \forall\, 0<t\le 1, \, x \in \mathbb T^d. \label{eDe_1}
\end{align}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem_lower}]
By Lemma \ref{lem_poi} and \eqref{to1a}, we have for $|x|\le 1$, $0<t\le 1$,
\begin{align*}
k_{\alpha}^{per}(t,x) & = \sum_{n \in \mathbb Z^d} k_{\alpha}(t,x+n) \notag \\
& \ge \sum_{n \in \mathbb Z^d, \, |n|=3} k_{\alpha}(t,x+n) \notag \\
& \ge c_3 t,
\end{align*}
where $c_3$ depends only on $(d,\alpha)$.
\end{proof}
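The identity behind this proof is easy to test numerically. The sketch below (illustration only; $d=1$, $\alpha=1$, where $k_1(t,x)=t/(\pi(t^2+x^2))$) compares the lattice sum with the Fourier series of $k_{\alpha}^{per}$ and reports the effective constant $\min_x k_{\alpha}^{per}(t,x)/t$:
\begin{verbatim}
import numpy as np

def k_per_lattice(t, x, K=2000):
    n = np.arange(-K, K + 1)
    return np.sum(t / (np.pi * (t**2 + (x + n)**2)))

def k_per_fourier(t, x, K=200):
    n = np.arange(1, K + 1)
    return 1.0 + 2.0 * np.sum(np.exp(-2 * np.pi * t * n)
                              * np.cos(2 * np.pi * n * x))

xs = np.linspace(0.0, 1.0, 11)
for t in (0.05, 0.2, 1.0):
    lat = np.array([k_per_lattice(t, x) for x in xs])
    fou = np.array([k_per_fourier(t, x) for x in xs])
    # Poisson summation: the two sums agree; min(lat)/t estimates c3
    print(t, np.max(np.abs(lat - fou)), np.min(lat) / t)
\end{verbatim}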
We now complete the
\begin{proof} [Proof of Theorem \ref{thm0_period}]
We only need to show \eqref{eNov30_1} since \eqref{eNov30_2} follows the same argument as in \eqref{erem_00a}.
Now assume $0<\alpha<2$. By the convolution property we only need to prove the case $0<t\le 1$.
Since $\hat f(0)=0$ we may freely adjust the $\text{zero}^{th}$ Fourier coefficient of the heat kernel.
Doing so gives us
\begin{align*}
e^{-t|\nabla|^{\alpha}} f = \tilde k(t,\cdot)*f,
\end{align*}
where
\begin{align*}
\tilde k(t,x) = k_{\alpha}^{per}(t,x) -c_3 t.
\end{align*}
By Lemma \ref{lem_lower}, we have
\begin{align*}
\tilde k(t,x) \ge 0, \quad \forall\, 0<t\le 1, \, x \in \mathbb T^d.
\end{align*}
Hence
\begin{align*}
\| \tilde k(t,\cdot) \|_1 = 1-c_3 t \le e^{-c_1 t}, \quad \forall\, 0<t \le 1.
\end{align*}
Obviously \eqref{eNov30_1} follows from the above bound and Young's inequality.
\end{proof}
|
1,108,101,563,904 | arxiv | \section{Introduction}
{\bf{Flat rotation curves of spiral galaxies cannot be explained by Newtonian or Einsteinian gravity, since neither theory is satisfied there; such deviations from these two established theories constitute the case for the existence of dark matter (DM). In our Galaxy, meticulous observations have confirmed that rotational velocities range between 200--300 km/s, provided that the gas clouds are considered to be moving in circular orbits.\\}}
Yet there are alternative remedies that modify Newtonian or Einsteinian gravity by changing the corresponding gravitational potential $\phi$ to $\phi = -\frac{GM}{r}\,[1+\alpha \exp{(-r/r_{0})}]/(1+\alpha)$, with $\alpha = -0.9$ and $ r_{0} \approx 30\ \mathrm{kpc} $, in order to explain the behavior of the flat rotation curves of spiral galaxies [1]. Consequently, such an explanation of the discrepancy between theory and observation is still under debate. This may lead us to revisit the notion of DM, expressed in terms of a mass excess quantity.
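As an aside, the rotation curve implied by such a potential is easy to tabulate. The sketch below is an illustration only: it assumes the quoted potential carries the usual Newtonian $1/r$ factor, and uses units with $GM=1$ and $r$ in kpc. It evaluates $v^2 = r\, d\phi/dr$ and shows a curve falling off much more slowly than the Keplerian one over galactic scales:
\begin{verbatim}
import numpy as np

# phi(r) = -(G M / r) [1 + a exp(-r/r0)] / (1 + a), a = -0.9, r0 = 30 kpc
a, r0 = -0.9, 30.0
r = np.linspace(1.0, 100.0, 12)
phi = -(1.0 / r) * (1 + a * np.exp(-r / r0)) / (1 + a)
dphi = np.gradient(phi, r)          # numerical d(phi)/dr
v = np.sqrt(r * dphi)               # circular velocity, v^2 = r dphi/dr
for ri, vi in zip(r, v):
    print(f"r = {ri:6.1f} kpc   v = {vi:.3f}  (arbitrary units)")
\end{verbatim}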
This type of explanation can be represented by the behavior of objects following non-geodesic equations, or by the projection of the fifth component of a higher-dimensional geodesic equation onto its four-dimensional components, as in space-time-matter (STM) theory [2]. Also, DM can be explained by the behavior of dipolar particles in the presence of a polarization field [3]. However, this description has been amended to dipolar fluids, once the effect of dark energy (DE) in the halos is taken into account [4].
Thus, one must take into account that DE occupies $74 \%$ of the invisible content of the universe, while DM occupies about $ 23 \%$ of it [5]. This may give an indication that DM particles can be detected in several regions of the universe besides the halos.
Accordingly, some incidents, like the excess of gamma-ray radiation near the core of the Galaxy, can be revealed as due to DM annihilation [6]. In this case, the effect of dark matter is observed as an excess of mass in the hydrodynamical equations of the accretion disk [7]. Moreover, the slight deviation of the perihelion motion has been attributed to the influence of dark matter particles [8].
Now, it is essential to stress the significance of studying the problem of motion for the suspected particles or fluids, in order to give a possible scenario for the behavior of DM at different scales in the universe. Accordingly, one must seek an appropriate theory of gravity able to detect its existence at different scales. One candidate is a class of bi-metric theories of gravity, able to describe strong gravitational fields, such as SgrA*, neutron stars, and binary pulsars, while also describing weak gravitational fields, where it plays the same role as general relativity [9].
From this perspective, it is vital in our study to derive candidate equations of motion showing that the mass excess term is due to the existence of dark matter. The following vital question should be addressed: What is dark matter?\\
In our present work, {\bf{it is possible to illustrate its origin by three different rival explanations}}:\\
(I) The existence of a scalar field associated with the Galaxy's gravitational field? [1]\\
(II) The projection of a higher dimension spatial dimension on the 4-dim manifold? [2]\\
(III) Motion of dipolar particles /fluids as claimed in spiral galaxies? [4] \\
Consequently, we are going to express the behavior of dark matter in terms of non-geodesic equations, derived via a Lagrangian formalism using Bazanski-like Lagrangians [10]. This type of equation may give rise to a geometrization of all trajectories associated with the appearance of dark matter; in other words, the appropriate path equations in Riemannian geometry represent the dipolar particles or fluids of the halos, and the corresponding path equations represent the hydrostatic stream of fluids in the accretion disk. By solving the non-geodesic deviation equation we can examine stability conditions, i.e., an indication of the persistence of the DM effect in each observed region.
On the other hand, another approach to resolving the above discrepancy between theory and observation at the galactic level is modified Newtonian dynamics (MOND) [11], or its bi-metric version BIMOND [12]. These theories reject the existence of dark matter and dark energy (DE), and attribute the anomaly to the lack of an appropriate theory of gravity able to cure the Newtonian description. Even so, Blanchet has regarded MOND as a gravitational polarization effect [13].
From this perspective, we are going to apply the appropriate Bazanski-like Lagrangians [14] to examine the equivalence of non-geodesic trajectories with each of the following equations: the dipolar-moment, dipolar-fluid, and hydrodynamic stream equations of motion as described in general relativity, in Section 2. We extend these equations to different versions of bi-metric theories of gravity in Section 3.
Finally, it turns out that the problem of detecting the existence of DM is connected with studying the behavior of the stream of fluids in different gravitational fields. \\
{This raises the necessity of examining the stability of these systems as affected by dark matter. This can be done by solving the corresponding deviation equations and examining the stability condition, using a method independent of coordinate transformations [15-16], as described in Sec. 4. }
{\bf{
\section{Dark Matter: Equations of Motion from Different Perspectives }
\subsection{ Dark Matter: Non-Geodesic Equations}
{\bf{The presence of dark matter can be detected through the excess of mass appearing in non-geodesic trajectories}}. These equations are obtained by applying the Euler-Lagrange equation to the following Lagrangian [1]:
\begin{equation}
L {\stackrel{def.}{=}} m(s) g_{\mu \nu}U^{\mu} \frac{D \Psi^\nu}{Ds} + m(s),_{\rho}\Psi^{\rho}
\end{equation}
where $U^{\mu}$ is a unit tangent vector, $\Psi^{\nu}$ its corresponding deviation vector, $m(s)$ its mass, considered as a function of the parameter $s$, and $\mu=1,2,3,4$; provided that
\begin{equation}
\frac{d}{ds}\frac{\partial L}{ \partial{\dot\Psi^{\alpha}}} - \frac{\partial L}{ \partial \Psi^{\alpha}} =0
\end{equation}
one gets,
\begin{equation}
\frac{dU^{\alpha}}{ds}+ \Gamma^{\alpha}_{\beta \delta} U^{\beta}U^{\delta} = \frac{m{(s)}_{,\beta}}{m{(s)}} (g^{\alpha \beta}- U^{\alpha}U^{\beta})
\end{equation}
such that
$$
m(s)= - \nabla[g(\psi)\psi],
$$
where $g(\psi) \psi$ is a scalar function; the right-hand side of equation (3) then behaves as a force term representing the presence of dark matter.
Also, its corresponding non-geodesic deviation equation is obtained by using the commutation relation on equation (3) i.e.
$$
A^{\mu}_{; \nu \rho} - A^{\mu}_{; \rho \nu} = R^{\mu}_{\beta \nu \rho} A^{\beta},
$$
where $A^{\mu}$ is an arbitrary vector,$R^{\mu}_{\beta \nu \rho}$ is the curvature tensor .\\
Multiplying both sides by arbitrary vectors, $U^{\rho} \Psi^{\nu}$ as well as taking into consideration the following condition [15]
$$
U^{\alpha}_{; \rho} \Psi^{\rho} = \Psi^{\alpha}_{; \rho } U^{\rho}.
$$
Thus, we obtain the corresponding deviation equations
\begin{equation}
\frac{D^{2}\Psi^{\mu}}{Ds^2}= R^{\mu}_{\nu \rho \sigma}U^{\nu}U^{\rho} \Psi^{\sigma} + (\frac{m{(s)}_{,\beta}}{m{(s)}} (g^{\alpha \beta}- U^{\alpha}U^{\beta}))_{;\rho} \Psi^{\rho}.
\end{equation}
Yet, for examining the flat rotation curves, it has been found [2], taking $ \sigma$ as a parameter describing the trajectories of particles in this region, with $ s \sim \sigma$, that
\begin{equation}
\frac{1}{m}\frac{dm}{d \sigma} \equiv {\sqrt{\Lambda/2}},
\end{equation}
which may be expressed as
\begin{equation}
\frac{1}{m}\frac{dm}{d \sigma} \approx 2 a_{0}/c^2,
\end{equation}
where $a_{0}$ is a constant acceleration, $a_{0} \sim 2 \times 10^{-10}\ \mathrm{m/s^2}$, the one known from MOND, and $c$ is the speed of light.
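A brief numerical aside (the estimate is ours, using the SI values quoted above): combining (5) and (6) gives $\sqrt{\Lambda/2}\approx 2a_0/c^2$, i.e. $\Lambda \approx 2(2a_0/c^2)^2$, which lands at the observed order of magnitude of the cosmological constant:
\begin{verbatim}
a0 = 2e-10                # m/s^2, the value quoted above
c = 2.998e8               # m/s
Lam = 2 * (2 * a0 / c**2)**2
print(f"Lambda ~ {Lam:.2e} m^-2")  # ~4e-53, same order as observed ~1e-52
\end{verbatim}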
Accordingly, we can find that the non-geodesic equation can be related to MOND [11] in the following way:
\begin{equation}
\frac{d\hat{U}^{\alpha}}{d\sigma}+ \Gamma^{\alpha}_{\beta \delta} \hat{U}^{\beta}\hat{U}^{\delta} = 2 \frac{ a_{0}}{c^2} \hat{U}_{\beta} (g^{\alpha \beta}- \hat{U}^{\alpha}\hat{U}^{\beta})
\end{equation}
where $\hat{U}^{\alpha} = \frac{dx^{\alpha}}{d \sigma}$ is its associated unit tangent vector. \\
Consequently, its corresponding deviation equation becomes
\begin{equation}
\frac{D^{2}\hat{\Psi}^{\mu}}{D{\sigma}^2}= R^{\mu}_{\nu \rho \sigma}\hat{U}^{\nu}\hat{U}^{\rho} \hat{\Psi}^{\sigma} + 2 \frac{ a_{0}}{c^2}( \hat{U}_{\beta} (g^{\alpha \beta}- \hat{U}^{\alpha}\hat{U}^{\beta}))_{; \rho} \hat{\Psi}^{\rho}
\end{equation}
where $\hat{\Psi}^{\mu}$ is its corresponding non-geodesic deviation vector. }}
\subsection{ Dark Matter: An Extra-dimensional Effect}
It is well known that the non-geodesic equations can be expressed as the four components of a geodesic equation for a test particle [1] in a non-compact space-time, $g_{AB,5} \neq 0$, following Wesson's space-time-matter approach [2].
Thus, the characteristics of dark matter can appear when solving the geodesic equation in 5 dimensions, provided that
$$
\frac{dS}{ds}= \sqrt{(1 + \epsilon \hat{\Phi}^2 (U^5)^2 )}
$$
such that $\hat{\Phi}$ is a scalar function, and $\epsilon = \pm 1 $.\\
Thus, it is possible to suggest the following Lagrangian:
\begin{equation}
L= g_{AB} U^{A}\frac{D \Psi^{B}}{DS},
\end{equation}
where $A=1,2,3,4,5$. \\
Thus, taking the variation with respect to $\Psi^{C}$ and $U^{C}$ respectively, one can find\\
(i) Equation of Geodesic:
\begin{equation}
\frac{D U^{C}}{DS}=0,
\end{equation}
(ii) Equation of Geodesic Deviation:
\begin{equation}
\frac{D^2 \Psi^{C}}{DS^2} = R^{C}_{BDE}U^{B}U^{D}\Psi^{E}.
\end{equation}
We take into account that the force appearing on the right-hand side is expressed through the fifth-dimensional component of the 5-dim manifold.
Accordingly, equation (10) may be expressed as
$$
\frac{d^2 x^{\mu} }{d S^{2}} + \Gamma^{\mu}_{AB}\frac{d x^{A}}{dS}\frac{d x^{B}}{dS}=0,
$$
i.e.,
$$
\frac{d^2 x^{\mu}}{d S^{2}} + \Gamma^{\mu}_{\nu \rho}\frac{d x^{\nu}}{dS}\frac{d x^{\rho}}{dS}= -\left( 2\Gamma^{\mu}_{\nu 5}\frac{d x^{\nu}}{dS}\frac{d x^{5}}{dS} + \Gamma^{\mu}_{5 5}\frac{d x^{5}}{dS}\frac{d x^{5}}{dS}\right).
$$
Meanwhile, solving equation (10) and substituting its fifth component into the other four components may be regarded as reproducing the behavior of dark matter particles in (3).\\
Thus, we find that the indication of dark matter may be represented as an excess of mass on the right-hand side of the non-geodesic equation. Such an equation is obtained as the projection of the fifth component of the geodesic equation onto its four-dimensional counterpart.
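The $4+1$ split used above is pure index algebra and can be checked directly on random data (a sanity check only; the arrays below are generic, not a specific metric):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(5, 5, 5))
G = 0.5 * (G + G.transpose(0, 2, 1))   # symmetric lower indices, Gamma^M_AB
U = rng.normal(size=5)

full = np.einsum('mab,a,b->m', G, U, U)
split = (np.einsum('mab,a,b->m', G[:, :4, :4], U[:4], U[:4])
         + 2 * np.einsum('ma,a->m', G[:, :4, 4], U[:4]) * U[4]
         + G[:, 4, 4] * U[4]**2)
print(np.allclose(full, split))        # True: Gamma U U splits as claimed
\end{verbatim}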
\subsection{ Dark Matter: Equations of Motion Dipolar Moment Particles in The Halo }
{\bf{A rival explanation for the flat rotation curves of spiral galaxies invokes the presence of dipolar dark matter particles [3]. Such particles are not purely dipolar, as they involve a monopole contribution in the stress-energy tensor entering the Einstein field equations.}} It has been proposed by Blanchet et al. that these particles be examined through their corresponding equations of motion, composed of two systems of equations: one for $P^{\mu}$, the (passive) linear momentum vector, and one for $\Omega^{\mu}$, the evolution vector describing the microscopic (active) momentum, which acts like the spin tensor $S^{\mu \nu}$ in the Papapetrou equation of motion for spinning objects [15]. These equations are obtained using a Lagrangian formalism analogous to its counterpart for the motion of spinning objects with precession{\footnote {see Appendix A}}.
Thus, we suggest the following Lagrangian:
\begin{equation}
L{\stackrel{def.}{=}} g_{\alpha \beta} P^{\alpha}\frac{D \Psi_{(1)}^{\beta}}{Ds} + \Omega_{\alpha} \frac{D \Psi_{(2)}^{\alpha}}{Ds} + f_{\alpha}\Psi_{(1)}^{\alpha}+ \hat{f}_{\alpha}\Psi_{(2)}^{\alpha},
\end{equation}
in which
$$
P^{\mu} = (2mU^{\mu} + \frac{D \pi^{\mu}}{D s}),
$$
where $\pi^{\mu}$ is the dipolar vector, $\Psi_{(1)}^{\mu}$ is the deviation vector of the non-geodesic world line, and $\Psi_{(2)}^{\mu}$ is the evolution deviation vector associated with the dipole moment; the indices of the evolution vector are raised and lowered by the projector tensor $h^{\mu \nu}$, i.e.
\begin{equation}
h^{\mu \nu} = g^{\mu \nu}- U^{\mu}U^{\nu},
\end{equation}
$$ \bar{\Omega}^{\mu} = h^{\mu \nu} \Omega_{\nu}.
$$
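The projector property of $h^{\mu\nu}$ is easily verified; a minimal sketch (flat metric with signature $(+,-,-,-)$ and an arbitrary unit timelike $U^{\mu}$, chosen only for illustration) checks idempotency and orthogonality to $U^{\mu}$:
\begin{verbatim}
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
ginv = np.linalg.inv(g)
u_sp = np.array([0.3, -0.1, 0.2])              # arbitrary spatial part
u0 = np.sqrt(1.0 + u_sp @ u_sp)                # enforce g_{mn} U^m U^n = 1
U = np.array([u0, *u_sp])

h_up = ginv - np.outer(U, U)                   # h^{mu nu}
h_mix = h_up @ g                               # h^mu_nu
print(np.allclose(h_mix @ h_mix, h_mix))       # idempotent: True
print(np.allclose(h_up @ g @ U, 0.0))          # h U = 0: True
\end{verbatim}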
Taking the variation with respect to $\Psi_{(1)}^{\mu}$ and $\Psi_{(2)}^{\mu}$ separately, we obtain the following equations of motion and evolution, respectively:
\begin{equation}
\frac{D P^{\mu}}{D s} = f^{\mu},
\end{equation}
and
\begin{equation}
\frac{D \Omega^{\mu}}{D s} = \hat{f}^{\mu},
\end{equation}
such that{\bf{
$$ f^{\mu}= 2m \frac{\bar{\pi}^{\mu}}{\bar \pi}\, V'\!\left(\frac{\bar \pi}{m}\right), $$
where
$\bar{\pi}^{\mu} = h^{\mu \nu} {\pi}_{\nu}$, $\bar{\pi}$ is its magnitude, and $V$ is an associated potential function of the dipolar vector.}}
While the evolution equation becomes
\begin{equation}
\frac{D \bar{\Omega}^{\mu}}{D s} = \hat{f}^{\mu},
\end{equation}
provided that $\hat{f}^{\mu} = R^{\mu}_{\nu \rho \sigma} \hat{\pi}^{\sigma} U^{\rho} U^{\nu}.$
Similarly, using (A.4) and (A.5) as in [2.1], we obtain the corresponding geodesic deviation equations:
\begin{equation}
\frac{D^2 \Psi_{(1)}^{\mu}}{DS^2}= R^{\mu}_{\nu \rho \sigma}P^{\nu} U^{\rho} \Psi_{(1)}^{\sigma}+ f^{\mu}_{; \rho} \Psi_{(1)}^{\rho},
\end{equation}
and,
\begin{equation}
\frac{D^2 \Psi_{(2)}^{\mu}}{DS^2}= R^{\mu}_{\nu \rho \sigma}\Pi^{\nu} U^{\rho} \Psi_{(2)}^{\sigma}+ \hat{f}^{\mu}_{; \rho} \Psi_{(2)}^{\rho}.
\end{equation}
{\bf{Equations (17) and (18) are essential for examining the stability of different celestial objects in various gravitational fields, in the presence of dark matter particles.}}
\subsection{ Equations of Motion of Dipolar Fluid in The Halos}
{\bf{The involvement of the cosmological constant, a candidate for DE, has a vital role in identifying the mystery of dark matter. This led Blanchet et al. to revisit the description of dipolar dark matter, from a particle content to a fluid-like description [4]. This is done by replacing $V$ in the Lagrangian by $W$, a polarization potential, to express the interaction of DE with the system.}}
From this perspective, Blanchet and Le Tiec [4] have postulated that the dynamics of the dipolar fluid in a prescribed gravitational field $g_{\mu \nu}$ is derived from an action of the type found
\begin{equation}
S= \int d^{4}x \sqrt{- g}\,L [ J^{\mu}, \xi^{\mu}, \dot{\xi}^{\mu}, g_{\mu \nu} ],
\end{equation}
where the density current $J^{\mu}$ and the polarization vector ${\Pi^{\mu}}$ are the new quantities introduced for dipolar fluids, such that
{\bf{$ J^{\mu} = \rho U^{\mu}$ and $ \Pi^{\mu} = \rho \xi^{\mu} $, where $\rho =2mn$ is the inertial mass density of the dipole particles and $n$ the number density of the dipole moments.}} Applying the least-action principle to (19), we obtain the corresponding set of path equations
\begin{equation} \frac{D K^{\mu}}{Ds} = \frac{{f}^{\mu}}{m} \end{equation}
and
$$ \frac{D \Omega^{\mu}}{Ds} = \frac{1}{\hat{\sigma}} \nabla^{\mu} (W-\hat{\Pi} \hat{W}) - R^{\mu}_{\rho \nu \lambda}u^{\rho}\xi^{\nu}K^{\lambda}, $$
where $\hat{\sigma} = \sqrt{- J^{\mu}J_{\mu}}$, $W$ is the density-dependent potential, and $K^{\mu}$ is another linear momentum parameterizing the dipolar contribution [4], such that
$$
K^{\mu} = \frac{P^{\mu}}{2m}.
$$
and
$$
\hat{\Pi} = \hat{\sigma} \hat{\pi},
$$
where $K^{\mu}$ is the linear momentum per unit mass defined above and $\hat{\Pi}$ is the polarization density of the dipole moment.
The above set of equations can be obtained using its associated Bazanski-Like Lagrangian,
\begin{equation}
L = g_{\mu \nu} K^{\mu} \frac{D \Psi_{(1)}^{\nu}}{Ds} + \Omega_{\nu}\frac{D \Psi_{(2)}^{\nu}}{Ds}+ \bar{f}_{(1)\nu}\Psi_{(1)}^{\nu} +\bar{f}_{(2)\nu}\Psi_{(2)}^{\nu},
\end{equation}
by taking the variation with respect to the path deviation vector $\Psi^{\mu}_{(1)}$ and the evolution deviation vector $\Psi^{\mu}_{(2)}$ simultaneously, provided that
$$ \bar{f}^{\mu}_{(1)} = \hat{\Pi}^{\mu} \frac{d W}{d \hat{\Pi}},$$
and $$ \bar{f}^{\mu}_{(2)}= \frac{1}{\hat{\sigma}} \nabla^{\mu} (W-\hat{\Pi} \hat{W}) - R^{\mu}_{\rho \nu \lambda}u^{\rho}\xi^{\nu}K^{\lambda}. $$
Thus, using the commutation rule (A.4) and the condition (A.5) we obtain their corresponding path deviation and evolution deviation equations respectively,
\begin{equation}
\frac{D^2 \Psi^{\mu}_{(1)}}{Ds^2} = R^{\mu}_{\nu \rho \sigma} K^{\nu} U^{\rho} \Psi^{\sigma}_{(1)} + \bar{f}^{\mu}_{(1);\rho}\Psi_{(1)}^{\rho},
\end{equation}
and
\begin{equation}
\frac{D^2 \Psi^{\mu}_{(2)}}{Ds^2} = R^{\mu}_{\nu \rho \sigma} \Omega^{\nu} U^{\rho} \Psi^{\sigma}_{(2)} + \bar{f}^{\mu}_{(2); \rho}\Psi_{(2)}^{\rho}.
\end{equation}
{\bf{From equations (22) and (23) we may also examine the corresponding deviation vectors, which test the stability of the dipolar fluid in the halo in the presence of DM, taking into consideration the influence of DE.}}
\subsection{Equations of Motion of Fluids in The Accretion Disk}
Owing to the role of non-geodesic equations in explaining the behavior of dark matter particles in the accretion disk, as a collisionless fluid, we focus on their contribution to the mass of the accretion disk; consequently, the accretion process is less efficient than that expected for a dissipative fluid. Dark matter gives a significant contribution to the mass of the accretion disk, producing an important inflow, as in our Galaxy, e.g. a mass growth scaling as $M_{bh} = \mathrm{const.}\; t^{9/16}$ [16].
Thus, we can find that the equivalence between non-geodesic motions and hydrodynamic flows appears in the following two sets of equations:
\begin{equation}
\frac{dU^{\alpha}}{ds}+ \Gamma^{\alpha}_{\beta \delta} U^{\beta}U^{\delta} = f^{\alpha},
\end{equation}
where $f^{\alpha}$ is described as a non-gravitational force, whose vanishing turns the equation into a geodesic; for the hydrodynamic stream it becomes
\begin{equation}
\frac{dU^{\alpha}}{ds}+ \Gamma^{\alpha}_{\beta \delta} U^{\beta}U^{\delta} = \frac{1}{E+\hat{P}}h^{\alpha \beta} \hat{P}_{, \beta},
\end{equation}
where $\hat{P}$ is the pressure of the fluid, $E$ is the overall mass-energy density [7], and $\rho$ is the rest-mass density.\\
If the fluid flow of equation (25) satisfies the first law of thermodynamics,
\begin{equation}
\hat{P}_{, \beta}= \rho c^2 ( \frac{(E + \hat{P})}{\rho c^2})_{, \beta},
\end{equation}
then its associated equation of motion of fluids becomes,
\begin{equation}
\frac{dU^{\alpha}}{ds}+ \Gamma^{\alpha}_{\beta \delta} U^{\beta}U^{\delta}= \frac {(\frac{E + \hat{P}}{\rho c^2})_{, \beta}}{ (\frac{E + \hat{P}}{\rho c^2})} h^{\alpha \beta} . \end{equation}
Meanwhile, in the case of isobaric pressure, the stream equation becomes conditionally equivalent to a geodesic.
Thus, the appearance of the extra term on the right-hand side of equation (3) inspired many authors to relate it to the problem of dark matter, as an excess of mass, through the Lagrangian suggested by Kahil and Harko (2009) [1].
From the above equations, we find that the excess of mass for a test particle is equivalent to the hydrodynamic equation of motion of a perfect fluid satisfying the first law of thermodynamics. Such an analogy is needed to describe the behavior of the fluid surrounding active galactic nuclei (AGN), where the annihilation of dark matter particles has been detected through an increase of the $\gamma$-ray density in the accretion disk [6].\\
Accordingly, we can obtain the hydrodynamic flow of the accretion disk by applying the Euler-Lagrange equation (2) to the Lagrangian (1), taking into account that
\begin{equation}
m(s) {\stackrel{def.}{=}} \frac{(\hat{P}+E)}{\rho c^2}
\end{equation}
Using (28) together with (6), we find that
\begin{equation}
\frac{\rho c^{2}}{E+\hat{P}}\,\frac{d}{d \sigma}\!\left(\frac{E+\hat{P}}{\rho c^{2}}\right) \approx 2 a_{0}/c^{2}.
\end{equation}
Such a result ensures that the stream of hydrodynamic equations may be expressed in terms of the MOND constant, for an arbitrary parameter $\sigma$ describing the motion.
\section{Dark Matter : Equations of Motion in Bimetric Theories}
Implementing the concept of geometrization of physics, it is essential to express the non-geodesic equations of motion and their corresponding deviation equations, whether in particle or fluid-like form, in the presence of different bi-metric gravitational fields able to explain DM in the various regions of spiral galaxies.
\subsection{ Non-Geodesic Trajectories for Bi-gravity }
Hossenfelder [17] has introduced an alternative version of bi-metric theory, with two different metrics $\bf{g}$ and $\bf{h}$ of Lorentzian signature on a manifold $\bf{M}$, defined on the tangent space TM and the co-tangent space T*M, respectively. These describe two types of matter, ordinary matter and twin matter, existing individually; each has its own field equations, defined within Riemannian geometry.
It is well known that implementing a bi-gravity theory without cosmological constants is vital for describing the motion of dipolar objects in the halos [28], while the conformal type may be able to describe dark matter as a mass-excess quantity, as in the accretion disk surrounding the center of the Galaxy, described by strong gravitational fields.\\
Meanwhile, other bi-metric theories have a single metric combining the two, with a cosmological constant, describing a variable speed of light to replace the effect of dark energy in the big-bang scenario [18].\\
From the previous versions of bi-metric theories [19], we are going to present a generalized form of the path and path-deviation equations, valid for any bi-metric theory with two different metrics and curvatures as defined in Riemannian geometry [20]. The corresponding Lagrangian can be expressed in the following way [21]:
\begin{equation}
L{\stackrel{def.}{=}}m_{g}(s)\, g_{\mu \nu} U^{\mu} \frac{D \Psi^{\nu}}{Ds} + m_{f}(\tau)\, f_{\mu \nu} V^{\mu} \frac{D \Phi^{\nu}}{D \tau} +\frac{m_{g}{(s)}_{,\beta}}{m_{g}{(s)}} (g^{\alpha \beta}- U^{\alpha}U^{\beta})\, \Psi_{\alpha} + \frac{m_{f}{(\tau)}_{,\beta}}{m_{f}{(\tau)}} (f^{\alpha \beta}- V^{\alpha}V^{\beta})\, \Phi_{\alpha} .
\end{equation}
Thus, regarding\\
{(1)} $ \frac{d \tau}{ds} =0$: \\
this gives two separate sets of path equations, one for each parameter, obtained from the above Lagrangian:
\begin{equation}
\frac{DU^{\alpha}}{DS}= \frac{m_{(g)}{(s)}_{,\beta}}{m_{(g)}{(s)}} (g^{\alpha \beta}- U^{\alpha}U^{\beta}) ,
\end{equation}
and
\begin{equation}
\frac{DV^{\alpha}}{D \tau}=\frac{m_{(f)}{(\tau)}_{,\beta}}{m_{(f){(\tau})}} (f^{\alpha \beta}- V^{\alpha}V^{\beta}).
\end{equation}
While their corresponding path deviation equations:
\begin{equation}
\frac{D^2\Psi^{\alpha}}{DS^2}= R^{\alpha}_{\beta \gamma \delta} U^{\gamma} U^{\beta} \Psi^{\delta} + (\frac{m_{(g)}{(s)}_{,\beta}}{m_{(g)}} (g^{\alpha \beta}- U^{\alpha}U^{\beta}))_{;\rho}\Psi^{\rho},
\end{equation}
and
\begin{equation}
\frac{D^2\Phi^{\alpha}}{D\tau^2}= S^{\alpha}_{\beta \gamma \delta} V^{\gamma} V^{\beta} \Phi^{\delta} + (\frac{m_{(f)}{(\tau)}_{,\beta}}{m_{(f)}} (f^{\alpha \beta}- V^{\alpha}V^{\beta}))_{; \rho}\Phi^{\rho},
\end{equation}
{(2)} $ \frac{d \tau}{dS} \neq 0 $ [19]: \\
the two metrics can be related to each other by means of a quasi-metric one [22],
\begin{equation}
\tilde{g}_{\mu \nu} = g_{\mu \nu} - f_{\mu \nu} + \alpha_{g} ( g_{\mu \nu} - U_{\mu}U_{\nu} ) + \alpha_{f} ( f_{\mu \nu} - V_{\mu}V_{\nu}),
\end{equation}
where $\alpha_{g}$ and $\alpha_{f}$ are arbitrary constants. \\
Such an assumption gives rise to a related Lagrangian of Bazanski flavor, describing the geodesic and geodesic deviation equations of this version of bi-gravity theory:
\begin{equation}
L {\stackrel{def.}{=}} \tilde{g}_{\alpha \beta} U^{\alpha}\frac{\tilde{D} \Psi^{\beta}}{\tilde{D}S},
\end{equation}
$$
\tilde{\Gamma}^{\alpha}_{\beta \sigma} = \frac{1}{2}\tilde{g}^{\alpha \delta}( \tilde{g}_{\sigma \delta ,\beta } +\tilde{g}_{\delta \beta , \sigma } -\tilde{g}_{\beta \sigma ,\delta} ),
$$
and its corresponding Lagrangian:
\begin{equation}
L= \tilde{m}(\tilde{S})\, \tilde{g}_{\mu \nu} \tilde{U}^{\mu} ( \frac{d \tilde{\Psi}^{\nu} }{d\tilde{S}} + \tilde{\Gamma}^{\nu}_{\rho \delta} \tilde{\Psi}^{\rho} \tilde{U}^{\delta} )+ \tilde{f}_{\mu}\tilde{\Psi}^{\mu}.
\end{equation}
Thus, the path equation can be obtained by taking the variation with respect to $\tilde{\Psi}^{\mu}$:
\begin{equation}
\frac{d\tilde{U}^{\alpha}}{d\tilde{S}}+ \tilde{\Gamma}^{\alpha}_{\beta \delta} \tilde{U}^{\beta}\tilde{U}^{\delta} = \frac{\tilde{m}(\tilde{S})_{,\beta}}{\tilde{m}(\tilde{S})} (\tilde{g}^{\alpha \beta}- \tilde{U}^{\alpha}\tilde{U}^{\beta})
, \end{equation}
and using the commutation relation (A.4) and the condition (A.5), we obtain its corresponding deviation equation;
\begin{equation}
\frac{D^{2}\tilde{\Psi}^{\mu}}{D\tilde{S}^2}= \tilde{R}^{\mu}_{\nu \rho \sigma}\tilde{U}^{\nu}\tilde{U}^{\rho} \tilde{\Psi}^{\sigma} + (\frac{\tilde{m}(\tilde{S})_{,\beta}}{\tilde{m}(\tilde{S})} (\tilde{g}^{\alpha \beta}- \tilde{U}^{\alpha}\tilde{U}^{\beta}))_{;\rho} \tilde{\Psi}^{\rho}
,\end{equation}
where
$$
\tilde{R}^{\alpha}_{.\mu \nu\rho}= \tilde{\Gamma}^{\alpha}_{\mu \rho ,\nu} - \tilde{\Gamma}^{\alpha}_{\mu \nu ,\rho}
+ \tilde{\Gamma}^{\sigma}_{\mu \rho } \tilde{\Gamma}^{\alpha}_{\sigma \nu } - \tilde{\Gamma}^{\sigma}_{\mu \nu } \tilde{\Gamma}^{\alpha}_{\sigma \rho }.
$$
\subsection{Equations of Dipolar Moment in Bi-gravity Theory }
The equations of motion of the dipolar moment in a bi-metric theory are candidates to represent DM as an interaction between ordinary and twin matter, as described by ghost-free bi-gravity theory.
Accordingly, we suggest the following Lagrangian;
\begin{equation}
L {\stackrel{def.}{=}} g_{\alpha \beta} P^{\alpha}\frac{D \Psi_{(1)}^{\beta}}{Ds} + \Omega_{\alpha} \frac{D \Psi_{(2)}^{\alpha}}{Ds} + f_{\alpha}\Psi_{(1)}^{\alpha}+ \hat{f}_{\alpha}\Psi_{(2)}^{\alpha} + f_{\alpha \beta} Q^{\alpha}\frac{D \Phi_{(1)}^{\beta}}{D\tau} + \Delta_{\alpha} \frac{D \Phi_{(2)}^{\alpha}}{D\tau} + k_{\alpha}\Phi_{(1)}^{\alpha}+ \hat{k}_{\alpha}\Phi_{(2)}^{\alpha},
\end{equation}
where $Q^{\alpha}$ is the twin-matter momentum vector, $\Delta^{\alpha}$ the twin-matter dipole-moment vector, $k^{\alpha}$ the twin non-gravitational force on the momentum, and $\hat{k}^{\alpha}$ the twin non-gravitational force on the dipole moment.
Consequently, taking the variation with respect to $\Psi_{(1)}$, $\Psi_{(2)}$, $\Phi_{(1)}$ and $\Phi_{(2)}$, we obtain, respectively, the dipolar momentum equation of ordinary matter, the evolution equation of ordinary matter, the equation of the twin dipolar momentum, and the equation of the twin evolution of the dipolar moment:
\begin{equation}
\frac{D P^{\mu}}{D s} = f^{\mu},
\end{equation}
and its corresponding evolution equation for dipolar moment
\begin{equation}
\frac{D \Omega^{\mu}}{D s} = \hat{f}^{\mu}.
\end{equation}
While, for the twin matter we obtain the equation of its dipolar moment
\begin{equation}
\frac{D Q^{\mu}}{D \tau} = k^{\mu}
\end{equation}
where $k^{\mu}$ is its corresponding non-gravitational force.
Also, the evolution equation of the twin dipolar moment is expressed as follows
\begin{equation}
\frac{D \Delta^{\mu}}{D \tau} = \hat{k}^{\mu},
\end{equation}
in which $\hat{k}^{\mu}$ is its associated non-gravitational force.
Moreover, in order to obtain the corresponding deviation equations, following the same procedure for both metrics $g$ and $f$ independently, we get after some manipulation the following sets of deviation equations for ordinary matter and twin matter. For the ordinary matter,
\begin{equation}
\frac{D^2 \Psi_{(1)}^{\mu}}{DS^2}= R^{\mu}_{\nu \rho \sigma}P^{\nu} U^{\rho} \Psi_{(1)}^{\sigma}+ f^{\mu}_{; \rho} \Psi_{(1)}^{\rho},
\end{equation}
and
\begin{equation}
\frac{D^2 \Psi_{(2)}^{\mu}}{DS^2}= R^{\mu}_{\nu \rho \sigma}\Pi^{\nu} U^{\rho} \Psi_{(2)}^{\sigma}+ \hat{f}^{\mu}_{; \rho} \Psi_{(2)}^{\rho},
\end{equation}
and for the twin matter
\begin{equation}
\frac{D^2 \Phi_{(1)}^{\mu}}{D\tau^2}= S^{\mu}_{\nu \rho \sigma}Q^{\nu} V^{\rho} \Phi_{(1)}^{\sigma} + k^{\mu}_{; \rho} \Phi_{(1)}^{\rho},
\end{equation}
and
\begin{equation}
\frac{D^2 \Phi_{(2)}^{\mu}}{D\tau^2}= S^{\mu}_{\nu \rho \sigma} {\hat\Pi^{\nu}} V^{\rho} \Phi_{(2)}^{\sigma} + \hat{k}^{\mu}_{; \rho} \Phi_{(2)}^{\rho},
\end{equation}
where $S^{\alpha}_{\beta \gamma \delta}$, $V^{\alpha}$, and $\hat{\Pi}^{\alpha}$ are the associated curvature tensor, four-velocity vector, and polarization vector of the particles defined as twin matter, respectively.
\subsection{ Dipolar Fluid in Bi-gravity Theory}
Extending the previous ideas, as discussed in Sec. [3.2], to examine the existence of DM using ghost-free bi-gravity theory, describing both the ordinary fluid and the twin fluid simultaneously, we suggest the following Lagrangian:
\begin{equation}
L {\stackrel{def.}{=}} g_{\mu \nu} K^{\mu} \frac{D \Psi_{(1)}^{\nu}}{Ds} + \Omega_{\nu}\frac{D \Psi_{(2)}^{\nu}}{Ds}+ f_{\mu \nu}\hat{K}^{\mu} \frac{D \Phi_{(1)}^{\nu}}{D\tau} + \hat{\Omega}_{\nu}\frac{D \Phi_{(2)}^{\nu}}{D\tau},
\end{equation}
where $\hat{K}^{\mu}$ is the twin-matter linear momentum and $\Phi_{(1)}^{\mu}$ its associated deviation vector, while $\hat{\Omega}^{\mu}$ is the evolution vector associated with the twin matter and $\Phi_{(2)}^{\mu}$ its corresponding deviation vector,
in which $ \hat{f}_{(1)}^{\mu}= \frac{1}{\hat{\sigma}} \nabla^{\mu} (W-\hat{\Pi} \hat{W}) - R^{\mu}_{\rho \nu \lambda}u^{\rho}\xi^{\nu}K^{\lambda}.$
Thus, taking the variation with respect to $\Psi_{1}$, $\Psi_{2}$ , $\Phi_{1}$ and $\Phi_{2}$ we obtain
for the ordinary fluid
\begin{equation}
\frac{D K^{\mu}}{Ds} = f_{1}^{\mu},
\end{equation}
and
\begin{equation}
\frac{D \Omega^{\mu}}{Ds} = f_{2}^{\mu}. \end{equation}
Also, for the twin fluid
\begin{equation}
\frac{D \hat{K}^{\mu}}{D\tau} = \hat{f}_{(1)}^{\mu} ,\end{equation}
and
\begin{equation}
\frac{D \hat{\Omega}^{\mu} }{D\tau} = \frac{1}{\sigma} \nabla^{\mu} (\tilde{W}-\tilde{\Pi} \tilde{W}) - S^{\mu}_{\rho \nu \lambda}V^{\rho}\tilde{\xi}^{\nu}\tilde{K}^{\lambda} , \end{equation}
where $\tilde{W}$, $\tilde{\Pi}^{\mu}$, and $\tilde{K}^{\mu}$ are the corresponding twin density-dependent potential, polarization vector, and linear momentum vector parameterized by the dipolar description, as expressed in bi-gravity theory.
\subsection{Non-Geodesic Equations in AGN: Bimetric theory }
The bi-metric versions of equations (3) and (4) can be obtained by applying the Euler-Lagrange equation to the following Lagrangian
\begin{equation}
\tilde{L}= \tilde{g}_{\alpha \beta} \tilde{U}^{\alpha} \frac{D \tilde{\Psi}^{\beta}}{D \tilde{s}}.
\end{equation}
The resulting path equation is
\begin{equation}
\frac{d\tilde{U}^{\alpha}}{d\tilde{s}}+ \tilde{\Gamma}^{\alpha}_{\beta \delta} \tilde{U}^{\beta}\tilde{U}^{\delta} = \frac{\tilde{m}{(s)}_{,\beta}}{\tilde{m}{(\tilde{s})}} (\tilde{g}^{\alpha \beta}- \tilde{U}^{\alpha}\tilde{U}^{\beta}),
\end{equation}
and using the commutation relation (A.4) and the condition (A.5), we obtain its corresponding deviation equation;
\begin{equation}
\frac{D^{2}\tilde{\Psi}^{\mu}}{D\tilde{s}^2}= \tilde{R}^{\mu}_{\nu \rho \sigma}\tilde{U}^{\nu}\tilde{U}^{\rho} \tilde{\Psi}^{\sigma} + (\frac{\tilde{m}{(s)}_{,\beta}}{\tilde{{m}\tilde{(s)}}} (\tilde{g}^{\alpha \beta}- \tilde{U}^{\alpha}\tilde{U}^{\beta}))_{;\rho} \tilde{\Psi}^{\rho}.
\end{equation}
\section{Dark Matter: Problem of Stability }
\subsection{Testing Stability of Celestial Objects by The Geodesic Deviation Vector}
{\bf{Solving the geodesic (non-geodesic) deviation equations together with the corresponding path equation of an object, whether it is counted as a test particle or not, is essential for examining the stability of the system. Here, stability means assessing the amount of perturbation, through the deviation vector along the course of motion, so as to reveal the status of objects in the presence of DM.\\
In the present work, we implement a technique that has been applied previously to examine the stability of some cosmological models using two geometric structures [23].
Recently, this approach has been modified [24] so that the stability condition is obtained from the scalar magnitude of the deviation vector, in covariant form, independent of any coordinate system; this works for examining the stability problem of any planetary system, and has been extended to stellar systems orbiting strong gravitational fields [25].}} \\
Thus, the geodesic deviation equation (11) has a solution expressed in the following manner:
$$
\Psi^{\mu} = f(S) C^{\mu},
$$
where the $ C^{\mu}$ are constants and $f(S)$ is a function known from the metric, on a given interval $[a,b]$ in which $\Psi^{\alpha}(S)$ behaves monotonically. If $ f(S) \rightarrow \infty$, the system becomes unstable; otherwise it is stable. The following quantity then becomes a sensor for measuring the stability of the system:
\begin{equation} q~~ {\stackrel{def.}{=}}~~ \lim_{s \rightarrow b} \sqrt{\Psi^{\alpha}\Psi_{\alpha}} . \end{equation} If $q \rightarrow \infty$, the system is unstable; otherwise it is stable. \\
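As a toy illustration of this sensor (the flat background and the two profiles $f(s)=\cos s$ and $f(s)=\cosh s$ are hypothetical choices, used only to display the two behaviors of $q$):
\begin{verbatim}
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])         # flat metric
C = np.array([0.0, 1.0, 0.5, 0.0])             # constant spacelike C^mu
s = np.linspace(0.0, 20.0, 5)

for name, f in (("stable  ", np.cos(s)), ("unstable", np.cosh(s))):
    Psi = f[:, None] * C[None, :]              # Psi^mu(s) = f(s) C^mu
    q = np.sqrt(np.abs(np.einsum('si,ij,sj->s', Psi, eta, Psi)))
    print(name, np.round(q, 2))                # bounded versus divergent
\end{verbatim}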
Yet this condition alone is not sufficient if one studies the case of dipolar particles (fields).\\
{\underline{The necessary and sufficient conditions}} should be related to the solutions of the geodesic (non-geodesic) and evolution deviation equations simultaneously, i.e.
$$
\Psi_{1}^{\mu} = f(S) C_{1}^{\mu},
$$
and
$$
\Psi_{2}^{\mu} = f(S) C_{2}^{\mu},
$$
where $ C_{1}^{\alpha}$, $ C_{2}^{\alpha}$ are constants and $f(S)$ is a function known from the metric, on a given interval $[a,b]$ in which $\Psi_{1}^{\alpha}(S)$ and $\Psi_{2}^{\alpha}(S)$ behave monotonically. If $ f(S) \rightarrow \infty$, the system becomes unstable; otherwise it is stable. The corresponding scalars become sensors for measuring the stability of the system.\\
Yet, these conditions can be extended to the case of bi-metric theories in the following way.
In the case $ \frac{d \tau}{d s} \neq 0$, the solutions of the set of deviation equations (33) and (34) are
\begin{equation}
\Psi_{(1)}^{\alpha} = \hat{C}_{1}^{\alpha}f(s),
\end{equation}
and,
\begin{equation}
\Phi_{(1)}^{\alpha} = \hat{C}_{1}^{\alpha}f(\tau).
\end{equation}
Thus, we obtain two stability conditions in the following way:
\begin{equation} q_{1}~~ {\stackrel{def.}{=}}~~ \lim_{s \rightarrow b} \sqrt{\Psi_{(1)}^{\alpha}\Psi_{(1) \alpha}} , \end{equation}
and
\begin{equation} {q}_{2}~~ {\stackrel{def.}{=}}~~ \lim_{\tau \rightarrow b} \sqrt{\Phi_{(1)}^{\alpha}\Phi_{(1)\alpha}} . \end{equation}
Meanwhile, in the case of dipolar particles in a bi-metric theory, we get another two conditions, since
\begin{equation}
\Psi_{(2)}^{\alpha} = \hat{C}_{2}^{\alpha}f(s),
\end{equation}
and,
\begin{equation}
\Phi_{(2)}^{\alpha} = \hat{C}_{2}^{\alpha}f(\tau).
\end{equation}
Accordingly, in the case of the Verozub bi-metric version [9], with $ \frac{d \tau}{ds} =0 $, the above stability conditions for a test particle and for a dipolar particle are reduced from two to one and from four to two, respectively.
\section*{Discussion and Conclusion}
Dark matter may be regarded either as a particle or as a fluid, according to how it is detected from the source of the gravitational field. This has led many authors to revisit its notion and to offer alternatives, such as dipolar particles or fluids, an effect of a scalar field and its additional gravitational field, or even a result of the projection of higher dimensions onto the four-dimensional components.
Due to the variety of its differing definitions or notation, a class of bimetric theories of gravity have been presented to describe the status of these gravitational fields, whether it is very strong as in the core of the galaxy or a neutron star or weak ones like the Sun that still satisfy the tests of relativity.
This type of theory consists of studying the motion of particles in terms of their path and deviation vectors. The deviation equations provide a schematic approach for estimating the stability of these systems in covariant form, as mentioned in Section 4. It has been demonstrated that two conditions are essential to examine the stability of a test particle in a bi-metric theory, and that the number of conditions doubles for their counterparts in bi-gravity theories. However, applying the Verozub version of bi-metric gravity shows behavior identical to GR. Owing to the inter-relation between the deviation equations and the stability conditions, it is vital to examine the stability of these regions by solving the corresponding deviation equations.
In our present work, it has been found that non-geodesic equations, as described in bi-metric theory of gravity, may be regarded as a good representative to DM at different regions [26-29].
Nevertheless, DM has another rival explanation, to be examined near active galactic nuclei such as SgrA*: the excess of mass appearing in the equations of relativistic hydrodynamics (27), which take the form of the non-geodesic equation (3). Also, we have connected the MOND parameter with the rate of the mass-excess term, upon parametrization, as shown in equations (6) and (29).
{\bf{Finally, we sum up that the quest to identify precisely the nature of DM is still under debate. Yet, some authors believe that it may be regarded as a massive neutrino, a supersymmetric neutralino, or even an axion [30].
The problem of motion, described here within Riemannian geometry, will be extended to different geometries admitting non-vanishing curvature and torsion simultaneously.\\
Our future work will continue to emphasize the concept of the geometrization of physics in determining the existence of DM and DE within different classes of non-Riemannian geometry, as a further step in demystifying the various notions of both DM and DE.
}}
\section*{ Acknowledgment} {The author would like to thank Mr. Andrew Gordon for his comments.}
\section*{References}
{1.} M. E. Kahil, and T. Harko, Mod. Phys. Lett. {\bf{A24}},667(2009). \\
{2.}P. Wesson, J. Math Phys, {\bf{43}},2423 (2002).\\
{3.}L. Blanchet , Class. Quant Grav.,{\bf{24}},3541 (2007)\\
{4.} L. Blanchet and A. Le Tiec, Phys. Rev. D{\bf{78}},024031 (2008).\\
{\bf{{5.} A. G. Riess et al, Astron. J. {\bf{116}},1009 (1998)}}.\\
{6.} B.C. Bromly, ApJs {\bf{197}},2 (2011). \\
{7.} K. Kleids and N. K. Spyrou, Class. Quant. Grav. {\bf{17}},2965 (2000). \\
{8.} L. Iorio, Galaxies {\bf {1}},6 (2013). \\
{9.} L. V. Verozub, {\it{Space-time Relativity and Gravitation, Lambert, Academic Publishing}}(2015).\\
{10.} S. L. Bazanski, J. Math. Phys., {\bf {30}},1018 (1989). \\
{11.} M. Milgrom, Astrophys. J. {\bf{270}},365 (1983). \\
{12.} M. Milgrom, Phys. Rev. D{\bf{89}},024027 (2014).\\
{13.} L. Blanchet, Class. Quant. Grav. {\bf{24}},3541 (2007).\\
{14.} M. E. Kahil, J. Math. Physics {\bf {47}},052501 (2006). \\
{15.} M. Heydrai-Fard, M. Mohseni, and H. R. Sepanigi, Phys. Lett. B{\bf{626}}, 230 (2005). \\
{16.} S. Peirani and J. A. de Freitas Pacheco, Phys. Rev. D{\bf{78}},024031 (2008)\\
{17.} S. Hossenfelder, Phys. Rev. D. {\bf{78}}, 044015 (2008).\\
{18.} J. W. Moffat, Int. J. Mod. Phys. {\bf{A 20}}, 1105 (2005). \\
{19.} Y. Akrami, T. Koivisto, and A. R. Solomon, Gen. Relativ. Gravit. {\bf{47}}, 1838 (2015).\\
{20.} K. Aoki and K. Maeda, Phys. Rev. D{\bf{90}}, 124089 (2014). \\
{21.} Magd E. Kahil, Gravit. Cosmol.,{\bf{23}}, 70 (2017). \\
{22.} J. D. Bekenstein, Phys. Rev. D {\bf{48}},3641 (1993). \\
{23.} M.I. Wanas, and M.A Bakry, Proc. MG XI, Part C, 2131(2008). \\
{24.} M.I. Wanas and M.A. Bakry, Astrophys. Space Sci., {\bf{228}},239 (1995). \\
{25.} Magd E. Kahil, Odessa Astronomical Publications, {\bf{28/2}}, 126 (2015) \\
{26.} N. Rosen, Gen. Relativ. and Gravit., {\bf{4}}, 435 (1973). \\
{27.} S. F. Hassan and R. A. Rosen, JHEP {\bf{02}}, 126 (2012). \\
{28.} L. Blanchet and L. Heisenberg, Phys. Rev. D {\bf{96}}, 083512 (2017). \\
{29.} L. Blanchet and L. Heisenberg, NORDITA-2015-38 (2015). \\
{30.} I. Pestov, Proceedings of 5th International Workshop on Complex Structures and Vector Fields, ed. S. Dimiev and K. Sekigawa, World Scientific, Singapore, 180 (2001). \\
{31.} Magd E. Kahil, Grav. Cosmol. {\bf{24}}, 83 (2018). \\
{32.} M. Roshan, Phys.Rev. D{\bf{87}}, 044005 (2013). \\
\section*{Appendix (A) }
\subsection*{The Papapertrou Equation in General Relativity: Lagrangian Formalism}
It is well known that the equations of spinning objects in the presence of a gravitational field have been studied extensively. This led us to suggest the corresponding Lagrangian formalism, using a modified Bazanski Lagrangian [31], for a spinning and precessing object, together with the corresponding deviation equations, in Riemannian geometry, in the following way:
$$
L= g_{\alpha \beta} P^{\alpha} \frac{D \Psi^{\beta}}{Ds} + S_{\alpha \beta}\ \frac{D \Psi^{\alpha \beta}}{Ds}+ F_{\alpha}\Psi^{\alpha}+ M_{\alpha \beta}\Psi^{\alpha \beta} \eqno{(A.1)}
$$
where
$ P^{\alpha}= m U^{\alpha}+ U_{\beta} \frac{D S^{\alpha \beta}}{DS}$ and $\Psi^{\mu \nu}$ is the spin deviation tensor.\\
Taking the variation with respect to $ \Psi^{\mu}$ and $\Psi^{\mu \nu}$ simultaneously we obtain
$$
\frac{DP^{\mu}}{DS}= F^{\mu},\eqno{(A.2)}
$$
$$
\frac{DS^{\mu \nu}}{DS}= M^{\mu \nu} \eqno{(A.3)} ,
$$
where $P^{\mu}$ is the momentum vector, $ F^{\mu} = \frac{1}{2} R^{\mu}_{\nu \rho \delta} S^{\rho \delta} U^{\nu},$ and $R^{\alpha}_{\beta \rho \sigma}$ is the Riemann curvature, $\frac{D}{Ds}$ is the covariant derivative with respect to a parameter $S$,$S^{\alpha \beta}$ is the spin tensor, $ M^{\mu \nu} =P^{\mu}U^{\nu}- P^{\nu}U^{\mu}$, and $U^{\alpha}= \frac{d x^{\alpha}}{ds}$ is the unit tangent vector to the geodesic. \\
Using the following identity on both equations (A.2) and (A.3),
$$
A^{\mu}_{; \nu \rho} - A^{\mu}_{; \rho \nu} = R^{\mu}_{\beta \nu \rho} A^{\beta}, \eqno{(A.4)}
$$
where $A^{\mu}$ is an arbitrary vector. \\
Multiplying both sides by the arbitrary vectors $U^{\rho} \Psi^{\nu}$, and using the following condition [15], \\
$$
U^{\alpha}_{; \rho} \Psi^{\rho} = \Psi^{\alpha}_{; \rho } U^{\rho}, \eqno{(A.5)}
$$
and $\Psi^{\alpha}$ is its deviation vector associated to the unit vector tangent $U^{\alpha}$.
Also in a similar way:
$$
S^{\alpha \beta}_{; \rho} \Psi^{\rho} = \Psi^{\alpha \beta}_{; \rho } U^{\rho}, \eqno{(A.6)}
$$
one obtains the corresponding deviation equations [32]
$$
\frac{D^2 \Psi^{\mu}}{DS^2}= R^{\mu}_{\nu \rho \sigma}P^{\nu} U^{\rho} \Psi^{\sigma}+ F^{\mu}_{; \rho} \Psi^{\rho}, \eqno{(A.7)}
$$
and
$$
\frac{D^2\Psi^{\mu \nu}}{DS^2}= S^{\rho [ \mu} R^{\nu ]}_{\rho \sigma \epsilon} U^{\sigma} \Psi^{\epsilon} + M^{\mu \nu}_{; \rho} \Psi^{\rho}.\eqno{(A.8)}
$$
\end{document}
|
1,108,101,563,905 | arxiv | \section{Introduction}
The importance of quantum teleportation \cite{Ben93} is widely
recognized today. Not only does it enable the remote transmission
of the state describing a quantum system to another one, without
ever knowing the state, but it also allows the construction of a
new way to perform quantum computation \cite{Got99,Kni01}. In the
previous and in many other applications of teleportation, it is
desirable, if not crucial, that the teleported state arrives at
its destination (Bob) exactly as it left the preparation station
(Alice). In other words, we want a unity fidelity output state,
which is always achieved
if Alice and Bob share a maximally entangled state (MES)
\cite{Ben93}.
However, there might happen that our parties do not share a MES
or, in addition, intermediate teleportations to other parties must
be done before the state reaches Bob. This limitation can be
overcome by distilling out of an ensemble of partially entangled
states (PES's) maximally entangled ones \cite{Ben96}. But this
approach requires a large amount of copies of PES's to succeed and
is ineffective when just a few copies are available. Another way
to achieve unity fidelity teleportation with limited resources is
based on the probabilistic quantum teleportation (PQT) protocols
of Refs. \cite{Agr02,Gor06,Guo00}.
Recently, in an interesting work, Mod{\l}awska and Grudka
\cite{Gru08} presented yet another way of achieving
probabilistically unity fidelity teleportation. Their strategy was
developed in the framework of the KLM scheme \cite{Kni01} for
linear optical teleportation. The main idea behind their approach
was the recognition that multiple (successive) teleportations
using the \textit{same} PES increased the chances of getting a
perfect teleported qubit. We can also see the ideas of Ref.
\cite{Gru08}, as generalized here, as a way to extend the
usefulness of quantum relays \cite{Bri98} whenever non MES's are
at stake and entanglement concentration is not practical (only a
few copies of entangled states are available).
In this contribution we show that the features of the multiple teleportation protocol (MTP) of Ref.
\cite{Gru08} are not restricted to the KLM teleportation scheme.
In order to show that we build in Sec. \ref{MTPs} a similar
protocol (protocol $1$) without relying on the intricacies of the
KLM scheme. Actually, we use the same language of the original
Bennett \textit{et al.} proposal \cite{Ben93}, which allows us to
express $\mathcal{P}_{suc}$, the total probability of getting
unity fidelity outcomes, as a function of the number of
teleportations and of the shared entanglement between Alice and
Bob. We then present two new protocols (protocols $2$ and $3$,
see Fig. \ref{Fig1}), both of which are more efficient than the
previous one. An important feature of these protocols is that they
give $\mathcal{P}_{suc}>1/2$ for a huge class of PES's. This is
particularly useful when we have a few copies of the qubit to be
teleported, since after a few runs of the MTP the overall
$\mathcal{P}_{suc}\rightarrow 1$. On top of that, protocol $2$
possesses the same efficiency of the first one but needs only
\textit{half} the number of teleportations to achieve the same
$\mathcal{P}_{suc}$. We also show that this protocol is connected
to the PQT of Refs. \cite{Agr02,Gor06}. Protocol $3$, on the other
hand, in addition to requiring just half the number of
teleportations of protocol $1$ also achieves the highest
$\mathcal{P}_{suc}$. Actually, we show that for some set of PES's
$\mathcal{P}_{suc} \approx 1$ after just a few teleportations
within a single run of the MTP. Moreover, and surprisingly, at
each successive teleportation this last protocol requires less and
less entanglement to properly work. In Sec. \ref{comparison} we
compare the efficiencies of all the three protocols presented here
with a different strategy to achieve unity fidelity teleportation
based on entanglement swapping \cite{Bos99}. In particular, we
compare our results with those obtained for multiple entanglement
swapping as presented in Ref. \cite{Per08}. We show that, under
certain conditions, we can achieve a better performance using the
protocols here presented.
\section{Multiple teleportation protocols}
\label{MTPs}
\textit{Protocol 1.} Let us assume that we have $N$ PES's, $j=1,\dots,N$,
described by $|\Phi^+_{n_j}\rangle = f_{n_j}\ket{00} +
g_{n_j}\ket{11}$, with $f_{n_j}=1/\sqrt{1+n_j^2}$ and
$g_{n_j}=n_j/\sqrt{1+n_j^2}$. (See panel (a) of Fig. \ref{Fig1}.)
We assume the first PES is shared between Alice and Bob while the
remaining $N-1$ are with Bob. Without loss of generality we set
$0<n_j<1$ \cite{Gor06} and for this protocol also that $n_j=n$,
$j=1,\dots, N$ \cite{Gru08}, i.e., the same entanglement at each
teleportation. We can also build a generalized Bell basis as
follows,
\begin{eqnarray*}
\ket{\Phi_{m_j}^{+}} = f_{m_j}\ket{00} + g_{m_j}\ket{11}, &
\ket{\Phi_{m_j}^{-}} = g_{m_j}\ket{00} - f_{m_j}\ket{11},\\
\ket{\Psi_{m_j}^{+}} = f_{m_j}\ket{01} + g_{m_j} \ket{10},&
\ket{\Psi_{m_j}^{-}} = g_{m_j}\ket{01} - f_{m_j}\ket{10},
\end{eqnarray*}
with $m_j=1$ being the original Bell basis and the choice for
protocol $1$. Alice wants to teleport the qubit
$\ket{\phi^A}=\alpha\ket{0}+\beta\ket{1}$ and at each step $j$ a
Bell measurement (BM) is implemented whose result is known to Bob
(See Fig. \ref{Fig1}). This information allows him to correct the
final state applying the proper unitary operations conditioned on
the results of each BM \cite{Ben93}, i.e., $I$ if the BM yields
$\ket{\Phi^+}$, $\sigma_z$ for $\ket{\Phi^-}$, $\sigma_x$ for
$\ket{\Psi^+}$, and $\sigma_z\sigma_x$ for $\ket{\Psi^-}$, where
$I$ is the identity and $\sigma_{z,x}$ the
standard Pauli matrices.
\begin{figure}[!ht]
\includegraphics[angle=0,width=7cm]{Fig1a.eps}
\caption{\label{Fig1}(Color online) Pictorial view of all MTP's
after $q$ teleportations. Note that only the first PES is
shared between Alice and Bob; all the others are at Bob's. (a)
Protocol $1$: Boxes denote standard BM's ($m=1$) and at each
teleportation the quantum channel is the same state
$\ket{\Phi^+_n}$. (b) Protocol $2$: Boxes now denote GBM's with
$m=n<1$ and the same state $\ket{\Phi^+_n}$ at each stage. (c)
Protocol $3$: Boxes denote standard BM's ($m=1$) but the quantum
channel's entanglement is successively reduced after the second
teleportation according to the following rule,
$\ket{\Phi^+_{n_j}}\rightarrow \ket{\Phi^+_{n^2_j}}$.}
\end{figure}
Before the first teleportation the state describing all qubits is
$\ket{\Phi}=\ket{\phi^A}\otimes_{j=1}^{N}\ket{\Phi^+_{n_j}}$,
which can be written as
\begin{eqnarray*}
\ket{\Phi} &=& \big[\ket{\Phi^+}\,(f_1f_n\alpha\ket{0} + g_1g_n\beta\ket{1})
+ \ket{\Phi^-}\,\sigma_z\,(g_1f_n\alpha\ket{0} + f_1g_n\beta\ket{1})\\
&& +\, \ket{\Psi^+}\,\sigma_x\,(f_1g_n\alpha\ket{0} + g_1f_n\beta\ket{1})
+ \ket{\Psi^-}\,\sigma_z\sigma_x\,(g_1g_n\alpha\ket{0} + f_1f_n\beta\ket{1})\big]
\otimes_{j=2}^{N}\ket{\Phi^+_{n_j}}.
\end{eqnarray*}
Unity fidelity teleportation occurs only if $f_1f_n=g_1g_n$ or
$f_1g_n=g_1f_n$. But this is only possible if we have an MES ($n=1$
$\rightarrow$ $f_n=f_1=g_n=g_1=1/\sqrt{2}$). Hence, after the
first teleportation $P^{(1)}_{suc}=0$. It is important to note
that at each teleportation, the previous teleported qubit is
changed to $\alpha_{j-1} \rightarrow \alpha_j = h_j^\alpha
\alpha_{j-1}$ and $\beta_{j-1} \rightarrow \beta_j = h_j^\beta
\beta_{j-1}$, with $(h_j^\alpha, h_j^\beta)$ $=$ $(f_{1}f_{n},
g_{1}g_{n})$, or $(g_{1}f_{n}, f_{1}g_{n})$, or $(f_{1}g_{n},
g_{1}f_{n})$, or $(g_{1}g_{n}, f_{1}f_{n})$, for $j>1$ and
$\alpha_0=\alpha$ and $\beta_0 = \beta$. We are neglecting
normalization for the moment. After the second teleportation there
exist $16$ possible outcomes ($4 \times 4$ pairs of BM's) for the
teleported qubit, which is described by one of $16$ states whose
coefficients are given by terms like $(\alpha_2,\beta_2)$ $=$
$(h_2^\alpha\alpha_1,h_2^\beta\beta_1)$ $=$ $(f_1f_n
f_1f_n\alpha,g_1g_n g_1g_n\beta)$, $(g_1f_n f_1f_n\alpha,f_1g_n
g_1g_n\beta)$, $\dots$, $(g_1g_n g_1g_n\alpha,f_1
f_nf_1f_n\beta)$. Of all possibilities, those giving unity
fidelity are such that $h_2^\alpha=h_2^\beta$, since we can factor
out the terms multiplying $\alpha$ and $\beta$ obtaining the exact
original state $|\phi^A\rangle$. To determine those successful
cases we first note that whenever $\ket{\Phi^{\pm}}$ is a result
of a BM the teleported coefficients change to $\alpha_{j}
\rightarrow \alpha_{j}$ with $\beta_{j}\rightarrow n \beta_{j}$.
Second, whenever the BM results in $\ket{\Psi^{\pm}}$ we get
$\alpha_{j} \rightarrow n \alpha_{j}$ and $\beta_{j}\rightarrow
\beta_{j}$. Therefore, it is not difficult to see that we always
get unity fidelity teleportation when we have an equal number of
$|\Phi^{\pm}\rangle$ and $\ket{\Psi^{\pm}}$ in a sequence of BM's,
or equivalently, an equal number of functions $g_n$ multiplying
$\alpha$ and $\beta$. For the case of two teleportations the
successful cases are given by eight possibilities:
$\ket{\Phi^{\pm}}\ket{\Psi^{\pm}}$ and
$\ket{\Psi^{\pm}}\ket{\Phi^{\pm}}$. The probabilities of all those
cases are equal, each given by $P_{event}^{(2)}$ $=$
$n^2/[4(1+n^2)^2]$. Thus, $P_{suc}^{(2)}=2n^2/(1+n^2)^2$. If we
are successful, we do not need another teleportation. However, if
we fail, we need to proceed with successive teleportations, hoping
to get a balanced sequence of $|\Phi^{\pm}\rangle$ and
$\ket{\Psi^{\pm}}$ BM's. We can show that at the $q$-th
teleportation
\begin{equation}
P_{suc}^{(q)} = A(q)n^{q}/[2^q(1+n^2)^{q}], \label{A}
\end{equation}
where for $q$ odd $A(q)=0$ and for $q$ even $A(q)$ is the number of
all possible combinations of $q$ BM's in which we have an equal
number of $|\Phi^{\pm}\rangle$ and $|\Psi^{\pm}\rangle$,
excluding, of course, those cases where we already had a balanced
number in the previous even teleportations. For the first $12$
teleportations we have $A(2)=8$, $A(4)=32$, $A(6)=256$,
$A(8)=2560$, $A(10)=28672$, and $A(12)=344064$. In Fig.~\ref{Fig2}
we plot the total probability of success after the $q$-th
teleportation, $\mathcal{P}_{suc}=\sum_{j=1}^q P_{suc}^{(j)},$ as
a function of $n$ (the greater $n$ the greater the entanglement).
Note that here and in the remainder of this section
$\mathcal{P}_{suc}$ is given by the sum of the probabilities of
all previous successful teleportations since the $N-1$ PES's are
with Bob. In Sec. \ref{comparison} we also study other scenarios,
in particular the one in which Bob possesses just one PES.
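The coefficients $A(q)$ quoted above can be checked by brute-force
enumeration. The following minimal sketch (our own illustration, in
plain Python, not part of the original derivation) counts the
length-$q$ sequences of BM outcomes whose running excess of
$\ket{\Phi^{\pm}}$'s over $\ket{\Psi^{\pm}}$'s first vanishes exactly
at step $q$; since each outcome type comes in two sign flavors, every
step value is listed twice:
\begin{verbatim}
from itertools import product, accumulate

# Count A(q): sequences of q Bell-measurement outcomes, each outcome
# one of Phi+/Phi- (step +1) or Psi+/Psi- (step -1), whose running
# sum is nonzero at every step j < q and zero at step q.
def A(q):
    count = 0
    for seq in product((+1, +1, -1, -1), repeat=q):
        sums = list(accumulate(seq))
        if all(s != 0 for s in sums[:-1]) and sums[-1] == 0:
            count += 1
    return count

print([A(q) for q in (2, 4, 6, 8)])  # [8, 32, 256, 2560], as quoted
\end{verbatim}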
\begin{figure}[!ht]
\includegraphics[width=7cm]{Fig2.eps}
\caption{\label{Fig2}(Color online) For protocol 1: From bottom to
top the curves represent $\mathcal{P}_{suc}$ after $q=2$, $4$,
$6$, $8$, $10$, and $12$ successive teleportations. For protocol
2: From bottom to top the curves show $\mathcal{P}_{suc}$ after
$q=1$, $2$, $3$, $4$, $5$, and $6$ successive teleportations. The
dashed curve shows the optimal probability (1/2) using the PQT
protocol. All quantities are dimensionless.}
\end{figure}
Looking at Fig. \ref{Fig2} we see that after each teleportation
$\mathcal{P}_{suc}$ increases at a lower rate. Also, after the
$10$-th teleportation we are already close to the maximal value of
$\mathcal{P}_{suc}$ for any value of $n$. We should remark
that we count as success only unity-fidelity
teleportations. That is why $\mathcal{P}_{suc}$ does not tend to
one as $n\rightarrow 1$. Indeed, no matter how close $n$ is to one
we are always discarding the sequences of BM's where we do not get
a balanced set of measurements involving the Bell states
$|\Phi^{\pm}\rangle$ and $|\Psi^{\pm}\rangle$.
\textit{Protocol 2.} As before, we assume that one has $N$ PES's
described by $|\Phi^+_{n_j}\rangle$, with $0<n_j<1$ and $n_j=n$,
$j=1,\dots, N$. (See panel (b) of Fig. \ref{Fig1}.) However,
differently from protocol $1$, we now assume $m_j=m=n$, any $j$.
The state to be teleported is
$\ket{\phi^A}=\alpha\ket{0}+\beta\ket{1}$ and at each step $j$ one
implements a generalized Bell measurement (GBM)
\cite{Agr02,Gor06}. A GBM is a projective measurement of two
qubits onto one of the four generalized Bell states given above.
(See Ref. \cite{Kim04} for ways of implementing a GBM.) The result
of each GBM is known to Bob who uses this information to apply the
right unitary operations on his qubit as described in the first
protocol. The rest of the present protocol is nearly the same as
before and is inspired by the PQT of Refs. \cite{Agr02,Gor06}.
Before any teleportation the state describing all qubits can be
written as
\begin{eqnarray*}
\ket{\Phi} &=& \big[\ket{\Phi^+_{m}}\,(f_mf_n\alpha\ket{0} + g_mg_n\beta\ket{1})
+ \ket{\Phi^-_m}\,\sigma_z\,(g_mf_n\alpha\ket{0} + f_mg_n\beta\ket{1})\\
&& +\, \ket{\Psi^+_m}\,\sigma_x\,(f_mg_n\alpha\ket{0} + g_mf_n\beta\ket{1})
+ \ket{\Psi^-_m}\,\sigma_z\sigma_x\,(g_mg_n\alpha\ket{0} + f_mf_n\beta\ket{1})\big]
\otimes_{j=2}^{N}\ket{\Phi^+_{n_j}}.
\end{eqnarray*}
Note that now we have rewritten the first two qubits using the
generalized Bell basis with $m=n$, i.e., we have imposed the
`matching condition', where the entanglement of the channel and of
the measuring basis are the same \cite{Agr02,Gor06}. This allows
us to obtain unity fidelity teleportation right after the first
teleportation whenever we measure $|\Phi^-_m\rangle$ or
$|\Psi^+_m\rangle$ with $P^{(1)}_{suc}=2n^2/(1 + n^2)^2$. The
previous step is precisely the PQT \cite{Agr02,Gor06}.
To analyze the other teleportations we need to keep in mind three
facts. (1) The $j$-th teleported qubit is changed to
$(\alpha_{j},\beta_j) \rightarrow (\alpha_{j}, n^2\beta_{j})$
whenever $\ket{\Phi^+_m}$ is a result of a GBM; (2) If the GBM
yields $\ket{\Phi^-_m}$ or $\ket{\Psi^+_m}$ we get
$(\alpha_{j},\beta_j) \rightarrow n(\alpha_{j},\beta_{j})$; (3) if
we measure $\ket{\Psi^-_m}$ the qubit goes to
$(\alpha_{j},\beta_j) \rightarrow (n^2\alpha_{j},\beta_{j})$.
Therefore, when we have an equal number of $|\Phi^{+}_m\rangle$
and $\ket{\Psi^{-}_m}$, $m=n$, in a sequence of GBM's we get unity
fidelity. The $n^2\beta_j$ coming from the measurement of
$|\Phi^{+}_m\rangle$ is compensated by the $n^2\alpha_j$ coming
from another GBM giving $\ket{\Psi^-_m}$. Note that the states
$\ket{\Phi^-_m}$ and $\ket{\Psi^+_m}$ are `neutral', giving an
overall $n$ that can be ignored for the determination of the
successful cases.
For example, after the second teleportation we have two possible
GBM outcomes where we have a unity fidelity teleportation, namely,
$\ket{\Phi^+_m}\ket{\Psi^-_m}$ and $\ket{\Psi^-_m}\ket{\Phi^+_m}$
with $P^{(2)}_{suc}=2n^4/(1 + n^2)^4$. And after the third
teleportation the successful cases are four:
$\ket{\Phi^+_m}\ket{\Phi^-_m}\ket{\Psi^-_m}$,
$\ket{\Phi^+_m}\ket{\Psi^+_m}\ket{\Psi^-_m}$,
$\ket{\Psi^-_m}\ket{\Phi^-_m}\ket{\Phi^+_m}$, and
$\ket{\Psi^-_m}\ket{\Psi^+_m}\ket{\Phi^+_m}$, with
$P^{(3)}_{suc}=4n^6/(1 + n^2)^6$.
In general, after the $q$-th teleportation we have,
\begin{equation}
P_{suc}^{(q)} = B(q)n^{2q}/[(1+n^2)^{2q}], \label{B}
\end{equation}
where $B(q)$ is the number of all possible combinations of $q$ GBM's
where we have an equal number of $|\Phi^{+}_m\rangle$ and
$|\Psi^{-}_m\rangle$, excluding, as we did in protocol $1$, the
cases where we already got an equal number of those two states in
the previous teleportations. For the first six teleportations we
have $B(1)=2$, $B(2)=2$, $B(3)=4$, $B(4)=10$, $B(5)=28$, and
$B(6)=84$.
Noting that $A(2q)/2^{2q}=B(q)$ we immediately see that
Eq.~(\ref{B}) at the $q$-th teleportation coincides with
Eq.~(\ref{A}) at the $2q$-th one. However, in protocol
$2$, we just need \textit{half} the number of teleportations to
achieve the same efficiency, which is a quite remarkable economy
of entanglement resources. Also, the need for fewer teleportations
reduces other possible errors introduced by imperfect projective
measurements. Furthermore, this result connects the PQT of Refs.
\cite{Agr02,Gor06} to protocol $1$. This is true because two
successive teleportations using that protocol are equivalent to one
using protocol $2$, the latter being an extension of the PQT.
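The identity $A(2q)/2^{2q}=B(q)$ and the quoted values of $B(q)$ can
again be checked by direct enumeration (our own illustration, in
plain Python): $\ket{\Phi^+_m}$ contributes a step $+1$,
$\ket{\Psi^-_m}$ a step $-1$, and the two `neutral' outcomes a step
$0$:
\begin{verbatim}
from itertools import product, accumulate

# Count B(q): sequences of q GBM outcomes (Phi+ -> +1, Psi- -> -1,
# Phi-/Psi+ -> 0) whose running sum is nonzero at every step j < q
# and zero at step q.
def B(q):
    count = 0
    for seq in product((+1, -1, 0, 0), repeat=q):
        sums = list(accumulate(seq))
        if all(s != 0 for s in sums[:-1]) and sums[-1] == 0:
            count += 1
    return count

print([B(q) for q in range(1, 7)])  # [2, 2, 4, 10, 28, 84]
A = {2: 8, 4: 32, 6: 256, 8: 2560, 10: 28672, 12: 344064}  # quoted above
print(all(A[2*q] / 2**(2*q) == B(q) for q in range(1, 7)))  # True
\end{verbatim}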
\textit{Protocol 3.} Like protocol $1$, here we do not need GBM's.
(See panel (c) of Fig. \ref{Fig1}.) The projective measurements
are made using the standard Bell basis, i.e., $m_j=1$, any $j$.
However, and differently from the previous protocols, we assume
that at each teleportation the entanglement of the quantum channel
is reduced according to the following rule: $n_{j}=n_{j-1}^2$,
$j\geq 3$ with $n_1=n_2=n<1$. In words, the first two
teleportations are done spending two copies of the entangled state
$|\Phi^+_n\rangle$, and after that we use less and less
entanglement. The first two steps of this protocol are identical
to the first two of protocol $1$ yielding $P_{suc}^{(1)}=0$ and
$P_{suc}^{(2)}=2n^2/(1+n^2)^2$. After the second teleportation,
the unsuccessful cases are described by the state
$\alpha\ket{0}+n^2\beta\ket{1}$, if the BM's resulted in
$|\Phi^{\pm}\rangle|\Phi^{\pm}\rangle$, or by the state
$n^2\alpha\ket{0}+\beta\ket{1}$, if the two successive BM's
yielded $|\Psi^{\pm}\rangle|\Psi^{\pm}\rangle$. Since in the third
teleportation the entangled state spent is $\ket{\Phi^+_{n^2}}$,
the previous teleported qubit changes to $(\alpha_2,\beta_2)
\rightarrow (\alpha_{2}, n^2\beta_{2})$ if we measure
$|\Phi^{\pm}\rangle$ or to $(\alpha_{2},\beta_2) \rightarrow
(n^2\alpha_{2}, \beta_{2})$ if we get $|\Psi^{\pm}\rangle$. Hence,
whenever we get the following sequences of BM's,
$|\Phi^{\pm}\rangle|\Phi^{\pm}\rangle|\Psi^{\pm}\rangle$ or
$|\Psi^{\pm}\rangle|\Psi^{\pm}\rangle|\Phi^{\pm}\rangle$ we
achieve unity fidelity with
$P_{suc}^{(3)}=2n^4/[(1+n^2)^2(1+n^4)]$. The unsuccessful cases
are given by the following $16$ cases,
$|\Phi^{\pm}\rangle|\Phi^{\pm}\rangle|\Phi^{\pm}\rangle$ and
$|\Psi^{\pm}\rangle|\Psi^{\pm}\rangle|\Psi^{\pm}\rangle$, with the
unsuccessful teleported qubits being either $(\alpha, n^4\beta)$
or $(n^4\alpha, \beta)$, respectively.
It is now clear why we will use $\ket{\Phi^+_{n^4}}$ to implement
the fourth teleportation. We are trying to catch up with the $n^4$
that multiplies either $\alpha$ or $\beta$. And since the
unsuccessful cases after this step will turn out to have an $n^8$
multiplying either $\alpha$ or $\beta$, we will need
$\ket{\Phi^+_{n^8}}$ at the fifth teleportation to catch up with
it. In general, after the $(q-1)$-th teleportation the
unsuccessful cases are those where we got the following sequences
of $q-1$ BM's: $\otimes_{j=1}^{q-1}\ket{\Phi^{\pm}}$ or
$\otimes_{j=1}^{q-1}\ket{\Psi^{\pm}}$, giving a total of $2\times
2^{q-1}$ cases with unsuccessful (not normalized) teleported
qubits described by $\alpha \ket{0} + n^{2^{q-2}}\beta\ket{1}$ or
$n^{2^{q-2}} \alpha \ket{0} + \beta\ket{1}$, respectively. At the
$q$-th teleportation we succeed if we have either
$(\otimes_{j=1}^{q-1}\ket{\Phi^{\pm}})\ket{\Psi^{\pm}}$ or
$(\otimes_{j=1}^{q-1}\ket{\Psi^{\pm}})\ket{\Phi^{\pm}}$ as our
sequence of BM's, with the probability to get any single
successful sequence being identical and given by
$P_{event}^{(q)}=n^{2^{q-1}}/[2^q(1+n^2)\prod_{j=1}^{q-1}(1+n^{2^j})]$.
But since we have a total of $2 \times 2^q$ successful sequences
we get ($q\geq 2$),
\begin{equation}
P_{suc}^{(q)}=2n^{2^{q-1}}(1-n^2)/[(1+n^2)(1-n^{2^{q}})],
\end{equation}
where we used that $\prod_{j=1}^{q-1}(1+n^{2^{j}})= (1 -
n^{2^{q}})/(1- n^2)$. There is also a peculiar way of writing
$P_{suc}^{(q)}$ in terms of the concurrence \cite{Woo98},
$C_{n_j}=2n_j/(1+n_j^2)$, an entanglement monotone/quantifier for
the state $|\Phi^{+}_{n_j}\rangle$,
$$
P_{suc}^{(q)}=2\prod_{j=1}^{q}\frac{C_{n_j}}{2}, \hspace{1cm}
q\geq 2.
$$
Actually, for the other two protocols we can write similar
expressions for $P_{suc}^{(q)}$. The difference comes from the
factor multiplying the product of concurrences. Here, this factor
is $2$; for protocols $1$ and $2$ it is $A(q)/2^{q}$ and $B(q)$,
respectively.
In protocol $2$ we must also consider the concurrences of the GBM.
This changes, in the above expression for $P^{(q)}_{suc}$, the
term $C_{n_j}/2$ to $C_{n_j}C_{m_j}/4$.
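As a quick consistency check (our own), setting $q=2$ in this
expression, with $n_1=n_2=n$, gives
$$
P_{suc}^{(2)}=2\left[\frac{C_{n}}{2}\right]^2
=2\left[\frac{n}{1+n^2}\right]^2=\frac{2n^2}{(1+n^2)^2},
$$
in agreement with the value derived above from the direct counting
of the Bell-measurement outcomes.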
In Fig. \ref{Fig3} we plot the total probability of success after
$q$ teleportations, $\mathcal{P}_{suc}=\sum_{j=1}^q
P_{suc}^{(j)}$, as a function of $n$.
\begin{figure}[!ht]
\includegraphics[width=7cm]{Fig3.eps}
\caption{\label{Fig3}(Color online) From bottom to top the curves
show $\mathcal{P}_{suc}$ after $q=2$, $3$, $4$, $5$, and $6$
successive teleportations using protocol $3$. The dashed curve
shows the optimal probability (1/2) obtained using the PQT
protocol.}
\end{figure}
Comparing Fig. \ref{Fig2} with Fig. \ref{Fig3} we see that
protocol $3$ is far better than the previous two in every aspect we
might consider. First, it achieves the greatest
$\mathcal{P}_{suc}$. Indeed, for values of $n\approx 0.9$ we can
get $\mathcal{P}_{suc}\approx 0.9$, a feat unattainable by the
previous protocols. Second, it achieves its maximum
$\mathcal{P}_{suc}$ using \textit{half} the teleportations of
protocol $1$. Third, it uses much less entanglement to achieve
those highest $\mathcal{P}_{suc}$ since after the second
teleportation the entangled states employed change from
$\ket{\Phi^+_{n_j}}$ to $\ket{\Phi^+_{n_j^2}}$. This last result
is really remarkable and surprising. It means that in the
framework of MTP less entanglement at each step of the protocol is
more useful to achieve a higher $\mathcal{P}_{suc}$ than keeping
the same degree of entanglement for the quantum channel. Also,
since entanglement is a precious and difficult resource to obtain,
this property of the MTP can be really useful in practical
applications. It is worth mentioning that one interesting question
remains to be answered: is this protocol the optimal one? For just
a few teleportations a partial analysis suggests that protocol $3$
may be optimal. However, no general proof, even a numerical one, is
available yet.
There is another property which is also present in the previous two
protocols. Looking at $\mathcal{P}_{suc}$ as a function of the
number of teleportations we see that it achieves its maximal value
after a small number of steps. This is more evident the lower the
entanglement of the quantum channel. Looking at Fig. \ref{Fig3} we
see that for $n<0.6$ just three teleportations are enough to
achieve the maximal $\mathcal{P}_{suc}$, and for higher values of
$n$ only a few more are needed. This is a practical property of MTP
since we do not need to implement a prohibitively large number of
teleportations to get the optimal value of $\mathcal{P}_{suc}$.
One last remark. We can also look
at protocol $3$
as a way to correct
errors in previous teleportations. If it is discovered that in a
previous step of the protocol an error changed the entangled state
used in the teleportation process we can correct it
by properly choosing the right entangled state for the next
teleportation.
\section{Comparison with multiple entanglement swapping}
\label{comparison}
So far we have considered a ``direct approach'' to teleport a qubit
using PES's. By direct we mean that we use the PES's as they are
offered to us, without any pre-processing. We have also assumed
that Bob has access to $N-1$ PES's out of a total of $N$. But we
can change this scenario in at least two ways. On the one hand we
can impose that Bob has access to only one PES. The other $N-2$
states lie between Alice and Bob. See the bottom of Fig.
\ref{Fig6}. On the other hand we can first try to extract a
maximally entangled state out of those $N$ PES's and only then
implement the usual, single-shot, teleportation protocol. See the
top-left of Fig. \ref{Fig6}, for example. Our goal in this section
is to compare the efficiencies (probabilities of success) for the
present direct protocols with the ones achieved using the multiple
entanglement swapping protocol (``swapping approach'') of Ref.
\cite{Per08}, whose goal is to obtain out of $N$ PES's linking
Alice and Bob (bottom of Fig. \ref{Fig6}, for example) one
maximally entangled state (a Bell state). In this ``indirect
approach", a sequence of $N-1$ joint measurements (not only Bell
measurements) are implemented on qubits from different entangled
states (solid rectangles of Fig. \ref{Fig6}), with the hope that
at the end of the protocol the two qubits at the ends of the chain
become entangled. These measurements are chosen in such a way to
maximize the probability of Alice and Bob getting a maximally
entangled two-qubit state (Bell state) at the end of the protocol.
It is this Bell state that afterwards is employed to teleport the
qubit with Alice to Bob. As will be shown, the highest probability
of success (unity fidelity teleportation) is achieved sometimes with
the direct and sometimes with the swapping approach. The best
strategy is
dictated by the degree of entanglement of the PES's and also by
the way they are distributed between Alice and Bob.
\begin{figure}[!ht]
\includegraphics[width=7cm]{Fig6.eps}
\caption{\label{Fig6}(Color online) (1), (2), and (3) show three
possible configurations involving six PES's with decreasing
entanglement while (4) shows configuration (3) with six PES's
possessing the same entanglement. The dashed vertical lines
delimit which qubits Alice and Bob have access to, the solid
line rectangles represent Bell measurements, and solid lines mean
entanglement between the connected qubits.}
\end{figure}
We start our analysis comparing the total probability of success
for protocols $2$ and $3$ against the total probability of success
for the swapping approach as given by Eq. (D3) of Ref.
\cite{Per08}, the best strategy for multiple swapping
teleportation. Equation (D3) gives the probability ($P_{swap}$) of
getting one maximally entangled state out of $N$ PES's, which can
then be used to implement the usual teleportation scheme. To
derive Eq. (D3) it is assumed that all PES's have the
\textit{same} entanglement and that Alice and Bob have access to
only one PES, as depicted at the bottom of Fig. \ref{Fig6}. In the
present notation, Eq. (D3) reads $$P_{swap} =
1-(f_n^2-g_n^2)\sum_{j=0}^{[N/2]}f_n^{2j}g_n^{2j}\binom{2j}{j},$$
with $[N/2]$ denoting the integer part of $N/2$, $\binom{2j}{j}$
the binomial coefficient, $f_n= 1/\sqrt{1 + n^2}$, and
$g_n=n/\sqrt{1 + n^2}$.
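For a feeling of the numbers (an illustrative evaluation of this
formula, ours rather than one from Ref. \cite{Per08}): for $N=6$
and $n=0.8$ one has $f_n^2-g_n^2\approx 0.220$ and
$f_n^2g_n^2\approx 0.238$, so that $P_{swap}\approx
1-0.220\,(1+2\times 0.238+6\times 0.238^2+20\times
0.238^3)\approx 0.54$.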
In our first analysis we consider for protocols $2$ and $3$ that
the $N-1$ PES's are with Bob. For protocol $2$ they all have the
same entanglement while for protocol $3$ the entanglement
decreases as explained in the previous section. (See top-right of
Fig. \ref{Fig6}.) Note that for protocol $2$ we have generalized
Bell measurements. For the swapping approach, we consider the
configuration given at the bottom of Fig. \ref{Fig6}. The results
for this scenario are illustrated in Fig. \ref{Fig4}, where we
plot the probabilities of success for $N=6$ PES's. Note that in
this situation, protocols $2$ or $3$ are superior over almost the
whole range of the parameter $n$.
\begin{figure}[!ht]
\includegraphics[width=7cm]{Fig4.eps}
\caption{\label{Fig4}(Color online) Upper curves represent in
ascending order $\mathcal{P}_{suc}$ for protocols $2$ (green) and
$3$ (blue) in configuration $(2)$ of Fig. \ref{Fig6}. The bottom
curve (red) gives $\mathcal{P}_{suc}$ for the swapping approach at
configuration $(4)$ of Fig. \ref{Fig6}. See text for more
details.}
\end{figure}
We now compare the swapping approach as given by configuration
$(1)$ of Fig. \ref{Fig6}, the optimal way for a swapping-based
protocol, with protocol $3$ as given by configuration $(2)$ of
Fig. \ref{Fig6}. Since we have three chances (three pairs of
PES's) for succeeding, we get for the swapping protocol
$\mathcal{P}_{suc} = S_1 + (1-S_1)S_2 + (1-S_1)(1-S_2)S_3$. Here
$S_j$, $j=1,2,3$, gives the optimal probability to obtain a
maximally entangled state out of two pairs of PES's. One can show
that \cite{Per08} $S_j=2n_j^2/(1+n_j^2)$, with $n_1=n$, $n_2=n^4$,
and $n_3=n^{16}$. Looking at Fig. \ref{Fig5} we see that in this
case the swapping protocol is slightly superior for
$n\gtrsim 0.6$ while for small $n$ they both give the
same efficiencies. We should also mention that \textit{if} all six
PES's are shared between Alice and Bob, a completely
different scenario from the ones depicted in Fig. \ref{Fig6},
entanglement concentration/filtering techniques applied
individually to all six pairs \cite{Vid99} give a better
performance. This is true because the optimal probability to
locally concentrate a maximally entangled state from a
non-maximally entangled pure one is $P_{con} = S_j$ \cite{Vid99}. However,
entanglement concentration can only be applied if Alice and Bob
initially do share entangled states. In the majority of the
situations studied here, though, Alice and Bob do not initially
share any entangled state and we have no choice but to rely on the
multiple teleportation or on the multiple swapping techniques.
\begin{figure}[!ht]
\includegraphics[width=7cm]{Fig5.eps}
\caption{\label{Fig5}(Color online) The upper/red curve represents
$\mathcal{P}_{suc}$ for the swapping protocol in the configuration
(1) of Fig. \ref{Fig6} and the lower/blue one $\mathcal{P}_{suc}$
for protocol $3$ at configuration (2) of Fig. \ref{Fig6}.}
\end{figure}
We end this section comparing both approaches at the same
configuration, namely, configuration (4) of Fig. \ref{Fig6}. For
the direct approach we employ protocol $1$. In this scenario
$\mathcal{P}_{suc}$ for the swapping approach is given by Eq. (D3)
of Ref. \cite{Per08}, where we assume all PES's to be described by
the state $|\Phi^+_n\rangle$. For protocol $1$ $\mathcal{P}_{suc}$
is calculated considering \textit{only} those instances in which
the qubit arrives with unity fidelity at its final destination.
This always happens whenever the Bell measurements after the six
teleportations yield a balanced number of $|\Phi^{\pm}\rangle$ and
$|\Psi^{\pm}\rangle$. A simple count gives $\binom{6}{3}2^6=1280$
possible ways that this can happen, each occurring with probability
$n^6/[2^6(1+n^2)^6]$, so that
$\mathcal{P}_{suc}=20n^6/(1+n^2)^6$. Fig. \ref{Fig7} shows
$\mathcal{P}_{suc}$ for both approaches when we have six PES's. It
is interesting to note that for $n<0.557$ the direct approach is
the best choice. We have numerically checked that the lower the
number of PES's, the greater the value of $n$ below which the
direct approach wins. For more than $10$ PES's the
swapping protocol can be considered the best choice.
\begin{figure}[!ht]
\includegraphics[width=6cm]{Fig7.eps}
\caption{\label{Fig7}(Color online) Top: For small values of $n$,
the upper/blue curve represents $\mathcal{P}_{suc}$ for protocol 1
while the bottom/red curve $\mathcal{P}_{suc}$ for the swapping
approach. Both cases are analyzed at configuration (4) of Fig.
\ref{Fig6}. Bottom: The vertical line ($n=0.557$) marks the critical
value below which the direct approach yields a better
performance.}
\end{figure}
Finally, we have compared the efficiency of protocol $1$ in
configuration (4) of Fig. \ref{Fig6} against protocol $3$ in
configuration (3). We always obtained better results for protocol
$1$ in this case.
\section{Conclusion}
We have shown that the properties of the multiple teleportation
protocol (MTP) are a general feature of successive teleportations,
not being restricted to the Knill-Laflamme-Milburn (KLM) scheme.
We have also connected one formulation of MTP to the probabilistic
quantum teleportation (PQT), another approach that aims to achieve
unity fidelity teleportation via partially entangled states
(PES's). Moreover, we have presented two new MTP's that are more
efficient than the original one. Indeed, in those two new MTP's we
just need \textit{half} the number of teleportations of the
original MTP to achieve at least the same probability of success
(unity fidelity teleportation). On top of that, we have shown that
the protocol furnishing the highest probability of success
(protocol $3$) is the one requiring, surprisingly, the least
amount of entanglement for its full implementation. On the one
hand, this result may have important practical applications, since
it is known that entanglement is a difficult resource to produce
experimentally, and, on the other hand, it suggests that whenever
PES's are at stake, perhaps the best strategy to achieve a certain
goal is not the one that uses the greatest amount of entanglement.
Finally, we have compared the three MTP's here developed with the
multiple entanglement swapping approach developed in Ref.
\cite{Per08}. We have checked that one or the other
approach furnishes a better performance, depending on the amount
of entanglement available and on the way the PES's are distributed
between Alice and Bob.
\begin{acknowledgments}
The author thanks the Brazilian agency Coordena\c{c}\~ao de
Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior (CAPES) for
funding this research.
\end{acknowledgments}
\section{Introduction}
Provable entanglement has been shown to be a
necessary precondition for secure quantum key-distribution (QKD)
in the context of any protocol \cite{CLL,AG}.
Recently \cite{NA}, we investigated the maximal average disturbance
(error rate) up to which the two legitimate users (Alice and Bob) of a QKD protocol
can prove the presence of quantum correlations in their sifted classical data.
In particular, we focused on qudit-based QKD protocols using two Fourier-dual
bases (to be referred to hereafter as $2d$-state protocols).
Under the assumption of arbitrary joint (coherent) attacks we
showed that the threshold disturbance for provable entanglement scales
with the qudit-dimension $d$ as
\begin{eqnarray}
D_{\rm th}(d)=\frac{d-1}{2d}.
\label{ThdistilConst}
\end{eqnarray}
This theoretical upper bound on tolerable error rates for $2d$-state protocols
is valid for arbitrary dimensions, provided that Alice and Bob
focus on their sifted key and do not apply any collective measurements on
their halves.
Its implications are obvious for estimated disturbances above $D_{\rm th}$:
namely, Alice and Bob are not able to infer whether the correlations in their
data have originated from an entangled state or not, and the protocol must
be aborted.
However, for detected disturbances below $D_{\rm th}$, the picture is
incomplete. In particular, based on the above result we only know that the
two honest parties can be confident that they share provable
entanglement with high probability. Thus, the necessary precondition for
secret-key distillation is satisfied for disturbances up to $D_{\rm th}$.
Nevertheless, the details of a prepare-and-measure (P\&M) scheme which will be
capable of attaining this theoretical bound are unknown.
In fact, it is not at all clear whether such a P\&M scheme
exists.
So far, the highest tolerable error rates in the
framework of P\&M QKD schemes have been reported for protocols
using a two-way Gottesman-Lo-type procedure for key distillation \cite{GL}.
This procedure was introduced and improved in the
context of the standard qubit-based $(d=2)$ QKD protocols \cite{GL,C-2}.
It is based on local quantum operations and
two-way classical communication (LOCC2) and is able to provide the
two legitimate users with an unconditionally secure key up to high
error rates. In particular for the standard $4$-state qubit-based protocol
(BB84) the tolerable error rate is $20\%$ \cite{C-2,RA}, which is well below the
corresponding theoretical upper bound given by Eq. (\ref{ThdistilConst}),
that is $25\%$. The natural question arises therefore whether this gap
still persists for higher dimensions $(d>2)$ and, in particular, how it scales
with the dimension $d$ of information carriers.
Recently, extending the Gottesman-Lo two-way key distillation (GL2KD)
procedure to higher dimensions, Chau addressed this open question in the context of
fully-symmetric qudit-based QKD schemes using all $(d+1)$ possible
mutually unbiased bases \cite{C-d}. More precisely he showed that
if $d$ is a prime power, the tolerable error-rate scales with dimension
as $1-(3+\sqrt{5})/2d$, for $d\to\infty$.
In this paper, our purpose is to analyze the error tolerance
of $2d$-state QKD protocols using a GL2KD process. In contrast to the protocols
considered in \cite{C-d}, the protocols considered here are not necessarily
fully symmetric.
In general, we have only one symmetry constraint, i.e., the symmetry
between the two Fourier-dual bases used in the protocol.
Hence, the problem in its most general form is analytically solvable
to some extent only. Specifically, we are able to derive a sufficient condition for
secret-key distillation in which the number of open parameters
scales quadratically with $d$. However, the derivation of an analytic expression
for the tolerable error rate is possible under additional symmetry assumptions
related to isotropic quantum channels.
In this case, we find that the asymptotic $(d\to\infty)$ tolerable error-rate
scales with dimension as $1/2-1/(4\sqrt{d})$, and therefore slowly approaches
its theoretical upper bound determined by Eq. (\ref{ThdistilConst}), that is $1/2$.
The organization of the paper follows the three phases of a
typical P\&M QKD scheme. In Sec. \ref{intro-2}, for the sake of
completeness we briefly summarize basic facts about the
first two phases of a $2d$-state QKD protocol, i.e.,
quantum state distribution and verification test.
Subsequently, in Sec. \ref{secII} we focus on the key-distillation phase
which is the main subject of this work. In particular
we consider a GL2KD procedure. Our analysis is based on the
entanglement-based version of the $2d$-state QKD protocol, whose
reduction to a P\&M scheme is summarized at the end of the section.
An analytic expression for the tolerable error-rate is derived in Sec. \ref{Sec-Iso}
under the assumption of isotropic quantum channels.
Finally, we conclude with a short summary and outlook in Sec. \ref{secIV}.
\section{The first two stages of Two-basis QKD protocols}
\label{intro-2}
For the sake of simplicity, and without loss of generality,
we will focus on prime dimensions only. Thus, throughout this work all the
arithmetics are performed in the finite (Galois) field
$\field{d}=\{0,1,\ldots,d-1\}$ \cite{ECC-book}. It has to be noted, however,
that similar arguments hold if $d$ is a prime power but the formalism
is more involved (e.g., see \cite{NA}).
In general, theoretical investigations of $d$-level quantum systems (qudits)
are performed conveniently with the help of the generalized Pauli operators
\begin{eqnarray}
\Er{mn} := \sum_{l\in\field{d}} \Phi(l\cdot n)
\ket{l-m}\bra{l}\quad {\rm for}\, m,n\in\field{d},
\label{err2}
\end{eqnarray}
where $\Phi(x)\equiv \exp(\frac{{\rm i}2\pi x}{d})$.
These $d^2$ operators form a faithful projective unitary representation
of $\left (\mathbb{Z}/ d\mathbb{Z}\right )\times \left (\mathbb{Z}/ d\mathbb{Z}\right )$
and an error basis on the Hilbert space of a qudit $\mathbb{C}^d$
\cite{ErrorGroup}.
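For instance, in the qubit case $d=2$ Eq. (\ref{err2}) reproduces
the familiar Pauli operators: $\Er{00}=\openone$,
$\Er{01}=\ket{0}\bra{0}-\ket{1}\bra{1}=\sigma_z$,
$\Er{10}=\ket{0}\bra{1}+\ket{1}\bra{0}=\sigma_x$, and
$\Er{11}=\ket{1}\bra{0}-\ket{0}\bra{1}=\sigma_x\sigma_z$.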
In a typical $2d$-state P\&M scheme, Alice and Bob use for their
purposes two mutually unbiased bases. Following \cite{NA,BKBGC,AGS},
throughout this work we choose the eigenbasis
$\{\ket{\alpha} : \alpha\in\field{d}\}$ of $\Er{01}$ as the
standard (computational) basis ${\cal B}_1$, while the second basis
${\cal B}_2$ is the Fourier dual of the computational basis with the
discrete Fourier transformation given by
\[
\mathfrak{F} := \frac{1}{\sqrt{d}}\sum_{i,j\in\field{d}}\Phi(i\cdot j)
\ket{i}\bra{j}.
\]
Hence, the indices $m$ and $n$ in Eq. (\ref{err2}) refer to dit-flip
and phase errors in the standard basis ${\cal B}_1$, respectively.
Moreover, $\mathfrak{F}^\dag \Er{mn} \mathfrak{F} = \Phi(-m\cdot n)\Er{nm}^*$ which
indicates that dit-flip errors in the computational basis become phase
errors in the complementary basis and vice-versa.
In general, the first stage of a QKD protocol is the quantum state
distribution stage which involves quantum state (signal) preparation
and transmission via an insecure quantum channel.
The purpose of this phase is to establish correlations between
Alice and Bob, which may also involve correlations
with a third untrusted party (eavesdropper).
As far as a typical $2d$-state P\&M scheme is concerned,
this first stage proceeds
as follows \cite{NA,C-d,BKBGC,AGS}.
Alice sends to Bob a sequence of qudits each of which is
randomly prepared in one of the $2d$ non-orthogonal basis-states
($d$ states for each basis). Bob measures each received particle
randomly in ${\cal B}_1$ or ${\cal B}_2$.
Alice and Bob publicly discuss the bases chosen, discarding all the
dits where they have selected different bases (sifting).
Generalizing the ideas presented in \cite{BBM},
the aforementioned state-distribution process can be viewed
as follows \cite{NA,C-d,BKBGC,AGS}.
Alice prepares each of $N\gg 1$ entangled-qudit pairs in the
maximally entangled state $\ket{\Psi_{00}}$. Thereby, the
generalized maximally entangled states in the Hilbert space
of two distinguishable qudits
$\mathbb{C}_{\rm A}^d\otimes\mathbb{C}_{\rm B}^d$ are defined
as $\ket{\Psi_{mn}} := \sum_{j\in\field{d}}\ket{j_{\rm A}}\otimes
\Er{mn}^{\rm (B)}\ket{j_{\rm B}}/\sqrt{d}$,
where from now on the subscripts A and B refer to Alice and Bob,
respectively \cite{C-d,BKBGC,AGS,ADGJ,MDN}.
Alice keeps half of each pair and submits the other
half to Bob after having applied at random and independently,
a unitary transformation chosen from the set $\{\openone, \mathfrak{F}\}$.
As soon as Bob receives the particles, he acknowledges the fact
and applies at random $\openone$ or $\mathfrak{F}^{-1}$ on each qudit independently.
Alice reveals the sequence of operations she performed and
all the pairs which involve different operations on the
transmitted qudit are discarded. This is the associated
entanglement-based (EB) version of the $2d$-state QKD protocol
and offers many advantages, in particular with respect to security issues
and error tolerance.
The second stage of the QKD protocol is the verification test
(also called signal-quality test) which we discussed in detail elsewhere \cite{NA}.
In this stage, the two legitimate users sacrifice part of their (quantum)
signal in order to quantify the eavesdropping rate during the transmission stage.
More precisely, after a random
permutation of their sifted (qu)dit pairs, Alice and Bob randomly select
a sufficiently large number of them and determine their average
error probability (disturbance). If as a result of a noisy quantum channel
(from now on all the noise in the channel is attributed to eavesdropping)
the estimated disturbance is too high, the protocol is aborted.
Otherwise, Alice and Bob proceed to the key-distillation phase which
will be discussed in detail in the following section.
At any rate, it is always worth keeping in mind that the success of
the verification test (and thus security) relies on two key points.
First, an eavesdropper does not know in advance which qudit-pairs will be chosen
for quality checks and which qudit-pairs will contribute to the final key.
Second, any joint eavesdropping attack can be reduced to a
classical (probabilistic) cheating strategy for which classical
sampling theory can be safely applied
\cite{GL,C-d,LC,SP}.
In particular, the action of the quantum channel can be regarded as a
Pauli one \cite{GL,C-d}. At the end of the distribution stage
of the $2d$-state protocol, each transmitted qudit may have undergone any
of the $d^2$ possible types of errors $\Er{mn}$.
Let $p_{mn}$ denote the rate (probability) of errors of the
form $\Er{mn}$ in the particles shared between
Alice and Bob, with
\begin{eqnarray}
\sum_{m,n\in\field{d}}p_{mn}=1.
\label{norm}
\end{eqnarray}
In general, any symmetries underlying the QKD protocol under
consideration may imply additional constraints on $p_{mn}$.
For the protocols under consideration, both Fourier-dual bases are
used at random and independently on each qudit-pair during the transmission.
Moreover, the choices of the bases are not
known to an eavesdropper, and they are publicly announced only after
all the particles are in Bob's possession. Thus, as a result of the
symmetry between the two bases, the quantum channel
connecting Alice and Bob yields different sets of identical
error-probabilities \cite{NA}.
In particular, we have that
\begin{eqnarray}
p_{mn}=p_{n,d-m}=p_{d-m,d-n}=p_{d-n,m},\quad \forall\, m,n\in\field{d}.
\label{SymBases}
\end{eqnarray}
Note that in highly symmetric protocols, the corresponding symmetry between
all $(d+1)$ mutually unbiased bases leads to a depolarizing quantum channel
with $p_{mn}=p_{01}$ for all $(m,n)\neq(0,0)$ \cite{C-d}.
In view of the symmetries (\ref{SymBases}), the estimated disturbance during
the verification test is given by \cite{NA}
\begin{eqnarray}
D=\sum_{m\in\field{d}^*}p_{m0}
+\sum_{m\in\field{d}^*}\sum_{n\in\field{d}^*}p_{mn},
\label{estD}
\end{eqnarray}
where $\field{d}^*:=\field{d}\backslash\{0\}$. This estimated error rate
should not be confused with the so-called quantum-channel (overall)
error rate $Q=1-p_{00}$, which is not estimable in a typical verification
test of a P\&M $2d$-state QKD protocol.
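As a simple illustration, for $d=2$ the symmetries (\ref{SymBases})
give $p_{01}=p_{10}$, and Eq. (\ref{estD}) reduces to
$D=p_{10}+p_{11}$, whereas $Q=p_{01}+p_{10}+p_{11}=D+p_{01}$: the
pure phase-error rate $p_{01}$ contributes to $Q$ but does not show
up in the estimated disturbance.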
At this point, we have all the necessary formalism and we turn
to investigate the error tolerance of $2d$-state P\&M protocols.
\section{Analysis of the two-way key distillation}
\label{secII}
Throughout this work we focus on the GL2KD procedure in the context
of which the highest tolerable error rates have been reported
for various P\&M QKD schemes \cite{GL,C-2,C-d}.
Our purpose is to investigate the conditions
under which an insecure quantum channel allows the distillation
of a secret key in the context of $2d$-state QKD protocols and
the GL2KD procedure. Such an analysis can be performed conveniently
in the EB version of the protocols, which we described in the previous
section and adopt from now on.
We will close this section with the reduction of the EB scheme to
a P\&M one.
\subsection{Dit-flip error rejection (DER)}
As any other key-distillation process, the GL2KD has two stages
\cite{GL,C-2,C-d}. The first stage
is a typical two-way entanglement purification with LOCC2
\cite{ADGJ,MDN,DEJ,BDSW}. More precisely, in order to reduce
the dit-flip-error rate in their signal Alice and Bob
apply a number of D-steps. In each D-step, they form tetrads
of particles by randomly pairing up their qudit-pairs.
Then, within each tetrad of particles they apply a bilateral
exclusive OR (BXOR) operation. Specifically, Alice and Bob individually
apply to their halves the unitary operation
\begin{eqnarray}
{\rm XOR}_{{\rm c}\to{\rm t}}: \ket{x}_{\rm c}\otimes
\ket{y}_{\rm t}\mapsto\ket{x}_{\rm c}\otimes
\ket{x-y}_{\rm t},
\end{eqnarray}
where ${\rm c}$ and ${\rm t}$ denote the control and target qudit, respectively.
Subsequently, they measure their target qudits in the computational basis
and compare their outcomes. The control qudit-pair is kept if and only if
their outcomes agree, while the target pair is always discarded.
In general, this procedure is repeated many times
(many rounds of D-step) until the dit-flip-error rate
in the surviving qudit-pairs is sufficiently low to
guarantee an arbitrarily small total error rate
at the end of the key-distillation protocol. We are
going to make this statement more precise later on.
For the time being, we turn to analyze the effect
of the D-steps on the signal shared between Alice and Bob.
Following \cite{GL,C-2,C-d}, our analysis will be based on classical
probability arguments since any eavesdropping attack can be reduced to
a classical probabilistic one. In particular, let
$S=\{p_{mn}|~m,n\in\field{d}\}$ be the set of error rates
(error-probability distribution) at the beginning of
DER (i.e., at the end of the first stage of the QKD protocol).
It has been shown \cite{MDN} that the effect of $k$ rounds of
D-step (with $k\in\mathbb{N}$) on
$S$ can be identified by a mapping
${\cal D}_k: S\mapsto S_k$,
where $S_k=\{p_{mn}^{(k)}|~m,n\in\field{d}\}$
and
\begin{eqnarray}
p_{mn}^{(k)} &=& \frac{
\sum_{l\in\field{d}}\Phi(-n\cdot l)
\left [ \sum_{j\in\field{d}}\Phi(l\cdot j)~p_{mj}
\right ]^{2^k}}{d\sum_{i\in\field{d}}\left (
\sum_{j\in\field{d}}p_{ij}\right )^{2^k}}.
\label{map1}
\end{eqnarray}
One can readily check that by setting $d=2$, this mapping reduces to
the well-known mapping for qubit-based protocols \cite{GL,C-2}.
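For concreteness, the map (\ref{map1}) is straightforward to iterate
numerically. The following minimal sketch (our own illustration,
assuming Python with \texttt{numpy}; \texttt{p} is the $d\times d$
array of rates $p_{mn}$) implements it:
\begin{verbatim}
import numpy as np

# Numerical sketch of the DER map, Eq. (map1): p is the d x d array
# of error rates p_{mn}; k is the number of D-step rounds applied.
def der_map(p, k):
    d = p.shape[0]
    idx = np.arange(d)
    W = np.exp(2j * np.pi * np.outer(idx, idx) / d)  # W[j,l] = Phi(j*l)
    F = p @ W                      # F[m,l] = sum_j Phi(l*j) p[m,j]
    G = F ** (2 ** k)              # each entry raised to the power 2^k
    num = G @ W.conj()             # sum_l Phi(-l*n) G[m,l]
    den = d * np.sum(p.sum(axis=1) ** (2 ** k))
    return (num / den).real
\end{verbatim}
For example, for $d=3$ and a nearly depolarizing input with
$p_{mn}=0.02$ for all $(m,n)\neq (0,0)$, a few rounds ($k\simeq 5$)
already suppress the rows $m\neq 0$ almost completely, while the
phase-error rates $p_{0n}^{(k)}$ grow toward $1/d$, in line with the
limiting behavior discussed below.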
Clearly, $p_{mn}^{(k)*}=p_{mn}^{(k)}$ since the summations in
Eq. (\ref{map1}) run over all the finite field $\field{d}$.
Furthermore, for the same reason, Eq. (\ref{map1}) can be rewritten as
\begin{eqnarray}
p_{mn}^{(k)} &=&
\frac{
\left [ C(m)\right ]^{2^k}+
\sum_{l\in\field{d}^*}\Phi(-l\cdot n)
\left [
A(m,l)
\right ]^{2^k}}
{d\left [1 +
\sum_{l\in\field{d}^*}
\left [
C(l)
\right ]^{2^k}\right ]},
\label{map2}
\end{eqnarray}
where
\begin{subequations}
\label{AB-par}
\begin{eqnarray}
A(m,l)&=&
\frac{\sum_{j\in\field{d}} \Phi(l\cdot j)p_{m j}}{\sum_{j\in\field{d}} p_{0j}},
\\
C(m)&=&\frac{\sum_{j\in\field{d}} p_{m j}}{\sum_{j\in\field{d}} p_{0j}},
\end{eqnarray}
\end{subequations}
for $m,l\in\field{d}$.
From now on we restrict ourselves to estimated disturbances $D< D_{\rm th}$, since
for $D \geq D_{\rm th}$ Alice and Bob do not share provable entanglement
\cite{NA,GL,C-2}. Furthermore, for $D< D_{\rm th}$ we also have
\begin{eqnarray}
\sum_{n\in\field{d}}p_{0n}>
\sum_{n\in\field{d}}p_{mn}\quad\forall\, m\in\field{d}^*,
\label{dist_id}
\end{eqnarray}
which implies that
$0\leq C(m)< 1$, $\forall~m\in\field{d}^*$ (while $C(0)=1$ by definition).
Besides, a necessary condition for $0\leq p_{mn}^{(k)}\leq 1$
after many rounds of D-step is $|A(m,l)|<1$, for all
$m\in\field{d}$ and $l\in\field{d}^*$. Thus, as $k\to\infty$, we have
$|A|^{2^k}\to 0$
and $|C|^{2^k}\to 0$ which imply that $p_{0n}^{(k)} \to 1/d$ and
$p_{mn}^{(k)} \to 0$, for $m,n\in\field{d}$ and $m\neq 0$.
In other words, the main effect of DER on the surviving particles
shared between Alice and Bob is to reduce errors of
the form $\Er{mn}$ with $m\neq 0$, while increasing the rate of pure phase
errors of the form $\Er{0n}$ with $n\neq 0$.
In particular, let
\begin{subequations}
\label{tot_Rk}
\begin{eqnarray}
R_{\rm D}^{(k)}=\sum_{m\in\field{d}^*}\sum_{n\in\field{d}}p_{mn}^{(k)}
\end{eqnarray}
and
\begin{eqnarray}
R_{\rm P}^{(k)}=\sum_{m\in\field{d}}\sum_{n\in\field{d}^*}p_{mn}^{(k)}\equiv
\sum_{n\in\field{d}^*}q_{n}^{(k)}
\end{eqnarray}
\end{subequations}
be the total dit-flip- and phase-error rates after $k$ rounds of D-step, respectively.
As $k\to\infty$, $R_{\rm D}^{(k)}\to 0$ whereas $R_{\rm P}^{(k)}\to (d-1)/d$.
We must therefore have a closer look at the corresponding individual phase-error
rates $q_{n}^{(k)}$ which, using Eq. (\ref{map2}), are given by
\begin{eqnarray}
q_{n}^{(k)}=\sum_{m\in\field{d}}p_{mn}^{(k)}&=&
\frac{1}{d}+\frac{\xi_n^{(k)}}{d
\left [1+\chi^{(k)}\right ]}
\label{qD2}
\end{eqnarray}
for all $n\in\field{d}$, where
\begin{subequations}
\label{small-par}
\begin{eqnarray}
\xi_n^{(k)}&=&\sum_{m\in\field{d}}\sum_{l\in\field{d}^*}\Phi(-l\cdot n)\left[
A(m,l)\right ]^{2^k},\label{small-par-xi}\\
\chi^{(k)}&=&\sum_{m\in\field{d}^*}\left[C(m)\right ]^{2^k}.
\end{eqnarray}
\end{subequations}
Clearly, the parameters $\xi_n^{(k)}$ and $\chi^{(k)}$ also take arbitrarily
small values as $k\to\infty$, since $|A|^{2^k}\to 0$ and $|C|^{2^k}\to 0$.
{\em Observation 1}. The phase-error rates after $k$ rounds of D-step satisfy the
inequality
\begin{eqnarray}
q_{0}^{(k)}>q_{n}^{(k)} \quad\forall n\in\field{d}^*,
\label{mvec_nc}
\end{eqnarray}
where $q_0^{(k)}$ is the no-phase-error probability.
{\em Proof}. First of all, recall that throughout this work we assume
prime dimensions only. Starting from Eq. (\ref{qD2}), we have to show that
$\xi_0^{(k)} > \xi_n^{(k)}$, for all $n\neq 0$.
Using the symmetry condition (\ref{SymBases}),
Eq. (\ref{small-par-xi}) reads
\begin{eqnarray}
\xi_n^{(k)}&=&
\frac{2\sum_{m=0}^{\lfloor d/2 \rfloor}\sum_{l=1}^{\lfloor d/2 \rfloor}
\cos(2\pi l\cdot n/d)\,T(m,l)}{\left [\sum_{j\in\field{d}}p_{0j}\right ]^{2^k}}\quad
\forall n\in\field{d},\nonumber\\
\label{xi_n_Eq2}
\end{eqnarray}
where all $T(m,l)$ are real and positive. In particular, we have that
\begin{eqnarray}
T(0,l)&=&\left [p_{00}+2\sum_{j=1}^{\lfloor d/2\rfloor}\cos(2\pi l\cdot j/d)\,p_{0j}
\right ]^{2^k},
\nonumber \\
T(m,l)&=&2\Re\left \{
\left [\sum_{j\in\field{d}}\Phi(l\cdot j)p_{mj}\right ]^{2^k}\right \},
\quad {\rm for}\, m\neq 0.
\nonumber
\end{eqnarray}
where $\Re(x)$ denotes the real part of $x$.
In view of Eq. (\ref{xi_n_Eq2}), the inequality $\xi_0^{(k)} > \xi_n^{(k)}$,
and hence Eq. (\ref{mvec_nc}), follows immediately from the fact that
$\cos(2\pi x/d)<1,\, \forall\, x\in\field{d}^*$. A similar but more involved
calculation can be performed if $d$ is a prime power. \hfill $\blacksquare$
\subsection{Phase error correction (PEC)}
Assume now that Alice and Bob have applied a DER process involving many
$(k\gg 1)$ rounds of D-step. As we have just discussed, at
this point the dit-flip-error rate in their surviving pairs will
be negligible (i.e., $p_{mn}^{(k)}\simeq 0$ for $m\neq 0$),
whereas the
phase-error rate has possibly increased.
It is therefore reasonable that the second stage of the GL2KD
(usually called privacy amplification) deals with phase error
correction (PEC) \cite{GL,C-2,C-d}.
In general, at the beginning of the PEC we have a $d$-ary asymmetric channel
with respect to phase errors. In particular, we have $(d-1)$ possible
phase errors with corresponding probabilities (rates) $q_{n}^{(k)}$
given by Eq. (\ref{qD2}). To correct the phase errors, Alice and Bob apply
an $[r,1,r]_d$ repetition code with a relative majority-vote decoding
\cite{ECC-book}.
The key point is that, according to
inequality (\ref{mvec_nc}), the necessary condition \cite{ECC-book}
for such an error correction to work is satisfied at the end of the DER process.
For the sake of completeness, let us briefly summarize the main steps of the PEC
procedure \cite{GL,C-2,C-d}. Alice and Bob randomly divide their qudit-pairs into
sets (blocks), each containing $r$ qudit-pairs. Within each block, they perform a
discrete Fourier transform $\mathfrak{F}_{\rm A}\otimes\mathfrak{F}_{\rm B}$ on each pair.
Subsequently, they perform a sequence of $(r-1)$ BXOR operations with the
same control pair (say the first one) and targets each one of the remaining pairs.
For each target pair, they measure their corresponding halves and estimate the
parity of their outcomes. Finally, they apply $\mathfrak{F}^{-1}_{\rm A}\otimes\mathfrak{F}^{-1}_{\rm B}$
on the control pair and Bob performs $\Er{0s}$ on his control-qudit, where
$s\in\field{d}$ is the parity corresponding to the relative majority of their $(r-1)$
outcomes. If the relative majority of the outcomes is ambiguous, Bob applies $\Er{00}$.
In this way, each block may result in one phase-error-free qudit-pair at most.
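As a toy illustration of the decoding rule: for $d=3$ and $r=5$, if
the four parities obtained are $(1,1,2,0)$, the relative majority is
$s=1$ and Bob applies $\Er{01}$ to his control qudit.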
Our task now is to investigate the effect of such a PEC process on the
signal shared between Alice and Bob. Let us denote by $p_{mn}^{\rm P}$ the various
error rates in the remaining qudit-pairs at the end of the process.
We are mainly interested in the corresponding total dit-flip- and phase-error rates.
\subsubsection{Phase-error rate}
Let us start with the estimation of an upper bound on the total
phase-error rate $R_{\rm P}\equiv\sum_{m}\sum_{n\neq 0} p_{mn}^{\rm P}$
of the signal at the end of PEC.
We are basically interested in the limit of large block-lengths $r$,
that is in the limit of a large number of distributed qudit-pairs.
Before we proceed further, it is worth noting that the problem under
consideration belongs to a well known class of stochastic processes,
the so-called occupancy problems or Balls-and-Bins experiments.
In this picture, our problem can be viewed as a probabilistic experiment
where $r$ balls (qudit-pairs) are randomly distributed among
$d$ different (error-)bins. This class of problems is fundamental to the
analysis of
randomized algorithms and has been extensively studied in the literature
(e.g., see \cite{brics,SSS,C-H-book-1}). Particularly useful results in this context
are the so-called Chernoff-Hoeffding bounds \cite{C-H-cite}, which are
basically large-deviation estimates. In general, these bounds are applicable
to sums of negatively associated, identically distributed random variables.
Their precise derivation can be found in various papers and
standard textbooks (e.g., see \cite{SSS,C-H-cite,C-H-book-1,C-H-book-2}).
{\em Observation 2}. The phase-error rate in the surviving pairs
at the end of PEC satisfies the condition
\begin{eqnarray}
R_{\rm P}\leq
\sum_{n\in\field{d}^*}\left[1-\left (
\sqrt{q_0^{(k)}}-\sqrt{q_n^{(k)}}~
\right )^2\right ]^{r}.
\label{RPbound}
\end{eqnarray}
{\em Proof.}
Clearly, we have that $R_{\rm P}$ is upper bounded by the probability of
failure for the repetition code, $P_{\rm fail}$. It suffices, therefore, to estimate
an upper bound on $P_{\rm fail}$.
As we mentioned before, PEC is applied on a particular asymmetric channel
with phase-error rates $q_0 > q_j$ for all $j\neq 0$
(to simplify notation throughout this proof we write $q_j$ instead of
$q_j^{(k)}$).
Let us denote by $\eta_j$ the total number of qudit-pairs within a block of
length $r$ suffering from phase errors of the form $\Er{mj}$, with $m\in\field{d}$.
Clearly, majority voting fails only if $\eta_j>\eta_0$ for some $j\neq 0$,
where $\eta_0$ denotes the number of error-free pairs in the block.
For asymmetric channels satisfying Eq. (\ref{mvec_nc}), this may occur for sufficiently
large deviations of $\eta_j$ from their mean values.
In particular, we expect for the failure probability of the majority-vote decoding,
\begin{eqnarray}
P_{\rm fail}\leq P\left [
\bigvee_{j\in\field{d}^*} \left (\eta_j\geq \eta_0\right )\right ]
\leq \sum_{j\in\field{d}^*} P\left (\eta_j\geq \eta_0\right ),
\label{Bonf}
\end{eqnarray}
where $\bigvee$ is the logical OR operator.
The next step now is to upper bound each of the probabilities
$P\left (\eta_j\geq \eta_0\right )$ appearing in the last summation.
Let us focus on a particular term, say $P\left (\eta_i\geq \eta_0\right )$.
We will work with the random variables
$\eta_i$, $\eta_0$ and $\eta_{\rm rest}$, where $\eta_i+\eta_0+\eta_{\rm rest}=r$ and
$\eta_{\rm rest}=\sum_{j\not{\in}\{0,i\}} \eta_j$.
Accordingly, the corresponding probability distribution of interest is
$(q_0, q_i, q_{\rm rest})$ with $q_i+q_0+q_{\rm rest}=1$. Obviously,
$(\eta_0, \eta_i, \eta_{\rm rest})$ have a trinomial distribution which is given by
\begin{eqnarray}
P(\eta_0,\eta_i,\eta_{\rm rest})=\sum_{\eta_{\rm rest}=0}^r\binom{r}{\eta_{\rm rest}} q_{\rm rest}^{\eta_{\rm rest}}
\left [
\sum_{\eta_i=0}^{r_i}\binom{r_i}{\eta_i}q_0^{\eta_0}q_{i}^{\eta_i} \right ],
\nonumber
\end{eqnarray}
where $r_i=\eta_0+\eta_i=r-\eta_{\rm rest}$. Introducing the new normalized
probabilities $\tilde{q}_l=q_l/(q_0+q_i)$ with $l\in\{0,i\}$,
the trinomial distribution can be rewritten as
\begin{eqnarray}
P(\eta_0,\eta_i,\eta_{\rm rest})&=&\sum_{\eta_{\rm rest}=0}^r\binom{r}{\eta_{\rm rest}}
q_{\rm rest}^{\eta_{\rm rest}} (q_0+q_i)^{r-\eta_{\rm rest}}\nonumber\\
&&\times\left [
\sum_{\eta_i=0}^{r_i}\binom{r_i}{\eta_i}\tilde{q}_0^{\eta_0}
\tilde{q}_{i}^{\eta_i} \right ].
\nonumber
\end{eqnarray}
Note now that the expression in the brackets is the well-known
binomial distribution involving the two events of interest,
i.e., the event of phase-error $i$, and the event of no-phase-error.
In particular, for a given $\eta_{\rm rest}$ the probability that $\eta_i\geq\eta_0$
is given by
\begin{eqnarray}
P\left (\eta_i\geq \eta_0~|~\eta_{\rm rest} \right )&=&
\sum_{\eta_i=\lceil r_i/2 \rceil}^{r_i}
\binom{r_i}{\eta_i}
\tilde{q}_0^{\eta_0}\tilde{q}_{i}^{\eta_i}\nonumber\\
&\leq&
\left (4\tilde{q}_0\tilde{q}_{i}\right )^{r_i/2}=
\left [\frac{4 q_0 q_i}{(q_0+q_i)^2} \right ]^{r_i/2}.
\nonumber
\end{eqnarray}
The above inequality is the well-known Chernoff-Hoeffding bound
for the binomial distribution \cite{C-H-book-2},
which also applies here since $q_0>q_i$ $\forall\, i\in\field{d}^*$.
Thus, in total we have
\begin{widetext}
\begin{eqnarray}
P\left (\eta_i\geq \eta_0\right )&=&
\sum_{\eta_{\rm rest}=0}^r\binom{r}{\eta_{\rm rest}}
q_{\rm rest}^{\eta_{\rm rest}} (1-q_{\rm rest})^{r-\eta_{\rm rest}}
P\left (\eta_i\geq \eta_0~|~\eta_{\rm rest} \right )
\nonumber\\
&\leq&
\sum_{\eta_{\rm rest}=0}^r\binom{r}{\eta_{\rm rest}}
q_{\rm rest}^{\eta_{\rm rest}} (1-q_{\rm rest})^{r-\eta_{\rm rest}}
\left [\frac{4 q_0 q_i}{(q_0+q_i)^2} \right ]^{(r-\eta_{\rm rest})/2}.
\label{p-ineq}
\end{eqnarray}
\end{widetext}
Finally, given that $R_{\rm P}\leq P_{\rm fail}$,
inequality (\ref{RPbound}) is obtained
from the condition (\ref{Bonf}), by using inequality
(\ref{p-ineq}) and the identity
$\sum_{a=0}^r\binom{r}{a}p^a(1-p)^{r-a}x^{r-a}=
\left[ p+(1-p)x\right ]^r$. \hfill $\blacksquare$
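To get a feeling for the numbers (an illustrative evaluation of our
own): for $d=3$ with, say, $q_0^{(k)}=0.4$ and
$q_1^{(k)}=q_2^{(k)}=0.3$, the bound (\ref{RPbound}) gives
$R_{\rm P}\leq 2\,[1-(\sqrt{0.4}-\sqrt{0.3})^2]^r\approx
2\,(0.993)^r$, which drops below $2\times 10^{-3}$ already for
$r=1000$.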
According to observation 2,
the phase-error rate in the signal after PEC decreases exponentially in the
block-length $r$. If we are not interested in a tight upper bound on $R_{\rm P}$,
we may upper-bound the right-hand side of this condition as follows
\begin{eqnarray}
R_{\rm P}&\leq& \sum_{n\in\field{d}^*}\left[1-\left (
\sqrt{q_0^{(k)}}-\sqrt{q_n^{(k)}}~
\right )^2\right ]^{r}\nonumber \\
&\leq&(d-1)\left[1-\left (
\sqrt{q_0^{(k)}}-\sqrt{q_{\tilde{n}}^{(k)}}~
\right )^2\right ]^{r}.
\label{RPbound2}
\end{eqnarray}
where $q_{\tilde{n}}^{(k)}=\max\left \{q_{n}^{(k)}~\left
|~\right.n\in\field{d}^*\right\}$, while equality in the latter part holds
if and only if
$q_{n}^{(k)}=q_{\tilde{n}}^{(k)}$, $\forall\,n\in\field{d}^*$.
Although this last step is not at all
necessary, it considerably simplifies the subsequent notation and discussion.
Recall now that the quantities $\xi_{n}^{(k)}$ and $\chi^{(k)}$
become arbitrarily small as $k\to\infty$. Thus, in view of Eq. (\ref{qD2}),
Eq. (\ref{RPbound2}) may be further simplified to
\begin{eqnarray}
R_{\rm P}&\leq& (d-1)\left[1- ~\frac{\left (
\xi_0^{(k)}-
\xi_{\tilde{n}}^{(k)}
\right )^2}{4d}+O\left (3 \right )\right ]^{r},
\nonumber
\end{eqnarray}
where $O(3)$ denotes third-order terms in $\xi_{\tilde{n}}^{(k)},\,\chi^{(k)}$
and $\xi_{0}^{(k)}$.
Inclusion of such higher-order terms may only lead to negligible corrections
in the argument of the exponent. At any rate, the phase-error rate will always
be upper-bounded by a quantity which decreases exponentially fast in $r$.
Alternatively, using the inequality $(1 - x)^r\leq \exp(-rx)$ for all $x<1$,
we obtain
\begin{eqnarray}
R_{\rm P}&\leq& (d-1) \exp
\left [-r \frac{\left ( \xi_0^{(k)}-
\xi_{\tilde{n}}^{(k)} \right)^2}{4d}\right ].
\label{RP-bound3}
\end{eqnarray}
We now turn to estimating the corresponding dit-flip-error rate
in the signal.
\subsubsection{Dit-flip-error rate}
As we mentioned before, the PEC involves $(r-1)$ BXOR gates
in the complementary basis. During these gates the dit-flip errors propagate
backwards from the target to the control qudit.
As a result, at the end of the PEC the dit-flip-error
rate in the remaining particles increases at most $r$-fold
(the control qudit-pair itself may initially suffer from a dit-flip error), i.e.,
\begin{eqnarray}
R_{\rm D}\equiv \sum_{m\in\field{d}^*}\sum_{n\in\field{d}}p_{mn}^P
&\leq& r\sum_{m\in\field{d}^*}\sum_{n\in\field{d}}p_{mn}^{(k)}.
\label{RD-bound}
\end{eqnarray}
According to the preceding discussion the net effect of the PEC is
to reduce any phase errors of the form $\Er{mn}$ with $n\neq 0$,
while possibly increasing dit-flip errors of the form $\Er{m0}$
with $m\neq 0$. Thus, at first sight, the whole situation seems to be a
vicious circle, since PEC tends to destroy what was achieved in DER
and vice versa. A way out of this impasse relies on the judicious
combination of DER and PEC.
\subsection{A judicious combination of DER and PEC}
For a given $2d$-state protocol (i.e., for a fixed $d$)
Alice and Bob agree in advance upon a fixed and arbitrarily small security parameter
$\epsilon>0$. They apply many rounds ($k\gg 1$) of D-step, until there exists an integer
$r>0$ such that a single application of the PEC will bring the quantum-channel error
rate in the finally surviving pairs to values below $\epsilon$.
Clearly, the protocol has to be aborted if the estimated integer $r$ exceeds
the number of remaining pairs immediately after the DER procedure.
More precisely, at the
end of DER, Alice and Bob may choose the block length for the repetition code to be
\begin{eqnarray}
r\approx
\frac{\epsilon}{2\sum_{m\in\field{d}^*}\sum_{n\in\field{d}}p_{mn}^{(k)}}=
\frac{\epsilon}{2}~\bigg(1+\frac{1}{\chi^{(k)}}\bigg)\geq
\frac{\epsilon}{2\chi^{(k)}}.
\label{r_opt}
\end{eqnarray}
Note that for this particular choice of the block-length, $r\to\infty$ as $k\to\infty$.
The key point now is that for such a choice of $r$, the overall channel error rate
$Q=1-p_{00}^{\rm P}$ can be upper-bounded as follows
\begin{eqnarray}
Q&\leq&R_{\rm D}+R_{\rm P}\nonumber\\
&\leq&\frac{\epsilon}{2}+(d-1)\exp\left [-\frac{\epsilon}{8}
\frac{\left (\xi_0^{(k)}-\xi_{\tilde{n}}^{(k)} \right )^2}{d\chi^{(k)}} \right],
\end{eqnarray}
where inequalities (\ref{RD-bound}) and (\ref{RP-bound3}) have been used.
Thus, for any given dimension of the information carriers,
$Q<\epsilon$ provided that
\begin{eqnarray}
\frac{\left [\xi_0^{(k)}-\xi_{\tilde{n}}^{(k)}\right ]^2}{d\chi^{(k)}}>
\frac{8}{\epsilon}\ln\left [\frac{2(d-1)}{\epsilon}\right ].
\label{distil-cond-as}
\end{eqnarray}
As long as $Q<\epsilon$, Alice and Bob share a number of
nearly perfect pairs whose fidelity with respect to the ideal state
$\ket{\Psi_{00}}$ is exponentially close to one. The final key can
then be obtained by measuring each pair separately along the standard
basis, and the information that an eavesdropper may have on it
is also upper-bounded by the security parameter $\epsilon$.
The condition (\ref{distil-cond-as}) is a sufficient condition for secret-key
distillation in the context of $2d$-state QKD protocols using two Fourier-dual
bases. In particular, it determines the error rates
which can be tolerated by such protocols using a GL2KD procedure.
From that point of view, it is a generalization of the corresponding
condition for fully symmetric qudit-based protocols obtained by Chau \cite{C-d}.
Unfortunately, the number of independent parameters in inequality (\ref{distil-cond-as})
scales quadratically with $d$, and thus an analytical (or even numerical) solution
becomes rather difficult for $d>3$. Hence, in order to obtain an analytic expression for the
tolerable error rate for arbitrary dimensions we had to resort to isotropic
quantum channels. The related results will be discussed in detail in
Sec. \ref{Sec-Iso}. For the time being we close this section by summarizing the
main points in the reduction of the EB version of the $2d$-state QKD protocol to
a P\&M one.
\subsection{Reduction to a P\&M QKD scheme}
In general, not every EB QKD protocol can be reduced to a P\&M one.
The main difficulty appears in the reduction of the underlying
quantum key-distillation procedure to a purely classical one.
The advantage of the GL2KD is that by construction it allows for such
a reduction \cite{GL}.
The reduction of the EB $2d$-state QKD protocol to a P\&M one,
which tolerates precisely the same error rates, follows the
same steps as for other protocols \cite{GL,C-d,SP}.
Here, for the sake of completeness, we would like to summarize the four
cornerstones of such a reduction.
First, during the distribution stage, Alice can measure all the halves
of the pairs before sending the other halves to Bob. This is equivalent to
choosing a random dit-string and encoding each dit in the corresponding
qudit-state, in one of the two Fourier-dual bases.
Second, the XOR operation used in the quantum key-distillation
procedure can be easily replaced by its classical analogue.
Thus, the DER stage is immediately reduced to a classical error-rejection
(advantage distillation) process.
Third, the quantum circuit of the PEC can also
be reduced to a classical one. Such a reduction relies on the fact
that the sequence of gates applied independently by Alice and Bob in each block of $r$
qudits during PEC, i.e.,
$\mathfrak{F}_{1}^{-1}\left (
{\rm XOR}_{1\to r}\ldots {\rm XOR}_{1\to 2}\right )\bigotimes_{j=1}^{r}\mathfrak{F}_{j}$,
is equivalent to $\bigotimes_{j=2}^r\mathfrak{F}_{j}^{-1}
\left ({\rm XOR}_{r\to 1}^{(+)}\ldots {\rm XOR}_{2\to 1}^{(+)}\right )$.
This equivalence follows by induction from the fact that
for any two qudits,
$\left (\mathfrak{F}_{\rm c}^{-1}\otimes \openone_{\rm t} \right )
{\rm XOR}_{{\rm c}\to {\rm t}} \left (\mathfrak{F}_{\rm c}\otimes \mathfrak{F}_{\rm t} \right )
= \left ( \openone_{\rm c} \otimes \mathfrak{F}_{\rm t}^{-1}\right )
{\rm XOR}_{{\rm t}\to {\rm c}}^{(+)}$, where
${\rm XOR}_{{\rm c}\to{\rm t}}^{(+)}: \ket{x}_{\rm c}\otimes
\ket{y}_{\rm t}\mapsto\ket{x}_{\rm c}\otimes
\ket{x+y}_{\rm t}$.
Finally, the last essential point in the reduction is the observation that the
key-distillation procedure does not rely on phase information.
The above steps lead to a P\&M $2d$-state QKD protocol with the
distribution and the verification-test stages discussed in Sec. \ref{intro-2}.
The corresponding classical key-distillation stage of the protocol proceeds
as follows \cite{GL,C-2,C-d}.
{\bf DER:} Alice and Bob perform many rounds of D-step. In each round they randomly
form tetrads of their dits. For each tetrad $j$, Alice announces the
parity of her dits, i.e., she announces $X_{1}^{(j)}-X_{2}^{(j)}$, where
$X_{i}^{(j)}$ denotes the $i$-th pair of tetrad $j$.
Similarly, Bob announces the parity of his corresponding dits $Y_{1}^{(j)}-Y_{2}^{(j)}$.
One of the dit-pairs (say $X_{1}^{(j)}$ and $Y_{1}^{(j)}$) survives if and only if
the announced parities agree. This process is
repeated (many rounds of D-step), until there is an integer $r>0$ such that
a single application of the following phase-error correction will bring the overall
error rate in the remaining signal below $\epsilon$. The protocol is aborted if the estimated
parameter $r$ exceeds the number of remaining dits.
{\bf PEC:} In the classical PEC (which is essentially privacy amplification),
Alice and Bob randomly divide their remaining dit-pairs into blocks each containing
$r$ dit-pairs. Let us denote by $(X_{i}^{(j)},Y_{i}^{(j)})$ the $i$-th dit-pair in
block $j$. Alice and Bob replace each block by the parity of its dits, i.e.,
by $\sum_{i=1}^rX_{i}^{(j)}$ and
$\sum_{i=1}^rY_{i}^{(j)}$, respectively.
In this way, the final secret key essentially consists of the estimated
parities for each one of the blocks.
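To make this classical stage concrete, a minimal (toy) Python sketch is given below; it illustrates the logic only, with an invented data layout, and is not meant to reproduce the authors' implementation.
\begin{verbatim}
import random

d = 5  # dimension of the information carriers

def d_step(alice, bob):
    # One round of DER: form random pairs of dit-pairs, announce
    # parities, and keep the first dit-pair whenever they agree.
    keep_a, keep_b = [], []
    idx = list(range(len(alice)))
    random.shuffle(idx)
    for i, j in zip(idx[::2], idx[1::2]):
        if (alice[i] - alice[j]) % d == (bob[i] - bob[j]) % d:
            keep_a.append(alice[i])
            keep_b.append(bob[i])
    return keep_a, keep_b

def pec(dits, r):
    # Classical PEC: each block of r dits is replaced by its parity.
    return [sum(dits[i:i + r]) % d
            for i in range(0, len(dits) - r + 1, r)]
\end{verbatim}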
In closing, it has to be noted here that for a more efficient secret-key distillation
the two legitimate users may follow the adaptive key-distillation procedure introduced
by Chau \cite{C-2,C-d}. The main difference is that Alice and Bob do not apply many
rounds of D-step and PEC in order to bring the overall error rate below the security
parameter $\epsilon$. Instead, they simply adjust their DER and PEC in order to
bring the overall error rate below, say, $5\%$. From that point on, they
switch to more efficient error correction and privacy amplification using
concatenated Calderbank-Shor-Steane codes.
\section{Isotropic quantum channels}
\label{Sec-Iso}
An isotropic channel is characterized by $p_{0j}=p_{j0}=p_{10}$ and
$p_{ij}=p_{ji}=p_{11}$ for $i,j\in\field{d}^*$.
It turns out that isotropy is an inherent property of the two-basis protocols
using qubits (standard BB84) or qutrits \cite{NA}. However, in general for $2d$-state
protocols using higher dimensions $(d>3)$, isotropy cannot be
justified so easily, unless the quantum channel itself is
isotropic (e.g., open-space quantum cryptography).
The robustness and security of various QKD protocols under the assumption of
isotropic eavesdropping has been extensively studied in the QKD literature
\cite{BKBGC,AGS,PABM,PT,DKCK,CG-FGNP}. In particular, we know that at any rate the isotropy
assumption does not affect the threshold disturbance for secret-key distillation
which, for $2d$-state protocols, is given by Eq. (\ref{ThdistilConst}) \cite{NA}.
In this section, our purpose is to further analyze the sufficient condition
for key distillation (\ref{distil-cond-as}) in the framework of isotropic
quantum channels and derive an analytic expression for the tolerable error
rate of $2d$-state QKD protocols.
Instead of isotropic channels, we may consider a slightly more general class of
channels for which $p_{0j}\neq p_{j0}$, that is
\begin{eqnarray}
p_{mn}=\left (
\begin{array}{cccc}
p_{00} & p_{01} & \ldots & p_{01}\\
p_{10} & p_{11} & \ldots & p_{11}\\
\vdots & \vdots & \ddots & \vdots\\
p_{10} & p_{11} & \ldots & p_{11}
\end{array}
\right ).
\label{mat_iso2}
\end{eqnarray}
Given the normalization condition (\ref{norm}), such a channel involves three independent
parameters and thus the derivation of an analytic expression for the tolerable error rate
is possible. Moreover, by setting $p_{01}=p_{10}$ we can easily obtain the corresponding
expressions for isotropic channels.
\subsection{Tolerable error rates}
For channels satisfying Eq. (\ref{mat_iso2}), Eq. (\ref{map2}) yields for the probabilities
after $k$ rounds of D-step
\begin{eqnarray}
p_{00}^{(k)} &=& \frac{[p_{00}+(d-1)p_{01}]^{2^k}+(d-1)(p_{00}-p_{01})^{2^k}}{d~\Pi},\nonumber\\
p_{0n}^{(k)} &=& \frac{[p_{00}+(d-1)p_{01}]^{2^k}-(p_{00}-p_{01})^{2^k}}{d~\Pi},\nonumber\\
p_{m0}^{(k)} &=& \frac{[p_{10}+(d-1)p_{11}]^{2^k}+(d-1)(p_{10}-p_{11})^{2^k}}{d~\Pi},\nonumber\\
p_{mn}^{(k)} &=& \frac{[p_{10}+(d-1)p_{11}]^{2^k}-(p_{10}-p_{11})^{2^k}}{d~\Pi},\nonumber
\end{eqnarray}
where $\Pi=[p_{00}+(d-1)p_{01}]^{2^k}+(d-1)[p_{10}+(d-1)p_{11}]^{2^k}$.
In view of these relations, the form (\ref{mat_iso2}) is invariant under D-steps since we have
that $p_{0n}^{(k)}=p_{01}^{(k)}$, $p_{m0}^{(k)}=p_{10}^{(k)}$ and
$p_{mn}^{(k)}=p_{11}^{(k)}$, $\forall\, m,n\neq 0$. Therefore, all the phase-error rates
$q_{n}^{(k)}$ with $n\neq 0$ are equal
at the end of DER, and the corresponding quantum channel is therefore symmetric with respect
to phase errors.
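As a numerical illustration of these closed-form expressions (with invented channel parameters, chosen only to satisfy the normalization condition), the following Python snippet evaluates them for a given $k$ and verifies that the resulting $d\times d$ probability matrix remains normalized:
\begin{verbatim}
d, k = 5, 3
p00, p01, p10 = 0.90, 0.005, 0.01
p11 = (1 - p00 - (d - 1) * (p01 + p10)) / (d - 1) ** 2

e = 2 ** k
X, Z = p00 + (d - 1) * p01, p10 + (d - 1) * p11
Pi = X ** e + (d - 1) * Z ** e
p00k = (X ** e + (d - 1) * (p00 - p01) ** e) / (d * Pi)
p0nk = (X ** e - (p00 - p01) ** e) / (d * Pi)
pm0k = (Z ** e + (d - 1) * (p10 - p11) ** e) / (d * Pi)
pmnk = (Z ** e - (p10 - p11) ** e) / (d * Pi)

total = p00k + (d - 1) * (p0nk + pm0k) + (d - 1) ** 2 * pmnk
print(total)  # = 1.0 up to rounding errors
\end{verbatim}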
As in the previous section, we may also introduce the parameters $A(m,n)$ and $C(m)$.
In fact, for the particular class of channels under consideration $A(m,n)=A(m)$ for all
$m\in\field{d}$ and
\begin{subequations}
\label{AB_iso}
\begin{eqnarray}
A(0)&=&\frac{p_{00}-p_{01}}{p_{00}+(d-1)p_{01}},\quad \\
A(m)&=&A(1)=\frac{p_{10}-p_{11}}{p_{00}+(d-1)p_{01}}\quad{\rm for}\,\, m\neq 0,\quad\\
C(m)&=&C(1)=\frac{p_{10}+(d-1)p_{11}}{p_{00}+(d-1)p_{01}}\quad{\rm for}\,\, m\neq 0,\quad
\end{eqnarray}
\end{subequations}
while $C(0)=1$.
To proceed further, we note that $A(m)=B(m)C(m)$, where
\begin{eqnarray}
B(m)=\frac{p_{m0}-p_{m1}}{p_{m0}+(d-1)p_{m1}}=B(1),
\label{Beq_iso}
\end{eqnarray}
and $[B(m)]^{2^k}\to 0$, as $k\to\infty$.
Thus, using Eqs. (\ref{AB_iso}) and (\ref{Beq_iso}),
Eqs. (\ref{small-par}) can be simplified to
\begin{subequations}
\begin{eqnarray}
\xi_0^{(k)}&=&(d-1)\sum_{m\in\field{d}}
\left [A(m)\right ]^{2^k}, \\
\xi_{n}^{(k)}&=&-\sum_{m\in\field{d}}
\left [A(m)\right ]^{2^k}\quad{\rm for}\,\, n\neq 0, \\
\chi^{(k)}&=& (d-1)\left [C(1)\right ]^{2^k},
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
\sum_{m\in\field{d}} [A(m)]^{2^k}&=&[A(0)]^{2^k}+
\sum_{m\in\field{d}^*} [B(m)]^{2^k}[C(m)]^{2^k}\nonumber\\
&=& [A(0)]^{2^k}+(d-1)[B(1)C(1)]^{2^k}.
\end{eqnarray}
Accordingly, condition (\ref{distil-cond-as}) now reads
\begin{eqnarray}
\frac{d\left \{ [A(0)]^{2^k}+(d-1)[B(1)C(1)]^{2^k}
\right \}^2}{(d-1)[C(1)]^{2^k}}>
\frac{8}{\epsilon}\ln\left [\frac{2(d-1)}{\epsilon}\right ],
\nonumber
\label{distil-cond-iso2}
\end{eqnarray}
or equivalently [setting $A=A(0)$, $B=B(1)$ and $C=C(1)$]
\begin{eqnarray}
\frac{dA^{2^{k+1}}}{(d-1)C^{2^k}}+d(d-1)C^{2^k} B^{2^{k+1}}
+2dA^{2^k}B^{2^k}> f(d,\epsilon),\nonumber\\
\label{distil-cond-iso2a}
\end{eqnarray}
where $f(d,\epsilon)=8~\epsilon^{-1}\ln\left [2(d-1)/\epsilon\right ]$.
Recall now that the positive quantities $A^{2^k}\to 0$, $C^{2^k}\to 0$
and $B^{2^k}\to 0$ for $k\to\infty$.
Thus, inequality (\ref{distil-cond-iso2a}) can always be satisfied for any
$k$ such that
\begin{eqnarray}
\frac{dA^{2^{k+1}}}{(d-1)C^{2^k}}> f(d,\epsilon).
\label{distil-cond-iso2b}
\end{eqnarray}
For a given dimension, this latter inequality defines the critical
number of D-steps $k_{\rm c}$, such that for $k>k_{\rm c}$ inequality
(\ref{distil-cond-iso2a}) is satisfied. In particular, solving
(\ref{distil-cond-iso2b}) with respect to $k$ we obtain
\begin{eqnarray}
k_{\rm c} = \log_{2}\left \{
\frac{\ln \left [ (d-1) f(d,\epsilon)/d\right ]}{\ln(A^{2}/C)}\right \}.
\end{eqnarray}
This is a well-defined quantity provided that $A^2>C$, i.e., for
\begin{eqnarray}
(p_{00}-p_{01})^2>[p_{10}+(d-1)p_{11}][p_{00}+(d-1)p_{01}],
\label{final_ineq}
\end{eqnarray}
where Eqs. (\ref{AB_iso}) have been used.
The same inequality holds for isotropic channels, with $p_{01}=p_{10}$.
This is therefore a sufficient condition for secret-key distillation
in the context of any $2d$-state QKD protocol under the assumption of
isotropic quantum channels. In particular, it determines the error rates
which can be tolerated by such protocols using a GL2KD process.
Recall now that according to Eq. (\ref{estD}) the estimated disturbance for
the isotropic channel is $D=[1-p_{00}+(d-1)^2p_{11}]/2$.
Moreover, due to the normalization condition (\ref{norm}),
inequality (\ref{final_ineq}) actually involves two independent
parameters (say $p_{00},\,p_{11}$). Thus, estimating the values of
$p_{00}$ which satisfy it, we obtain the tolerable error
rate (disturbance) which depends on both $d$ and $p_{11}$,
i.e., $D_{\rm 2CC}(d, p_{11})$.
In fact, we find that $D_{\rm 2CC}(d, p_{11})$ increases monotonically with
respect to $p_{11}$. Hence, the worst-case scenario
(from Alice's and Bob's point of view) corresponds to $p_{11}=0$, for which we
obtain for the tolerable disturbance
\begin{eqnarray}
D_{\rm 2CC}(d)&=&\frac{1-p_{00}}2=\frac{2(d-1)}{4d-1+\sqrt{1+4d}},
\end{eqnarray}
where $D_{\rm 2CC}(d)=D_{\rm 2CC}(d, p_{11}=0)$. Given a particular dimension of the
information carriers (i.e., a specific $2d$-state protocol), the GL2KD procedure
enables Alice and Bob to generate a provably secure key whenever the estimated
disturbance is below $D_{\rm 2CC}(d)$ or else, the quantum channel
error rate is below $2D_{\rm 2CC}(d)$.
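For illustration, this expression is easily evaluated numerically (the snippet below is provided only for convenience):
\begin{verbatim}
from math import sqrt

def D2CC(d):
    return 2 * (d - 1) / (4 * d - 1 + sqrt(1 + 4 * d))

for d in (2, 3, 5, 7, 17):
    print(d, round(D2CC(d), 4))
# d = 2 yields 0.2, i.e. the 20% threshold of the standard BB84
\end{verbatim}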
\\
\\
\begin{figure}[h]
\resizebox{0.75\columnwidth}{!}{%
\includegraphics{fig1.eps}
}
\caption{$2d$-state QKD protocols : The tolerable error rate
$D_{\rm 2CC}$ (dashed line) and its theoretical upper bound
$D_{\rm th}$ (solid line) as functions of the dimension $d$.
Secret-key distillation is impossible in the regime (I), while
it may be possible for error rates below $D_{\rm th}$.
In the regime (II) a secret key can be distilled by means of the
key-distillation procedure considered here.
Inset: The gap between the two regimes $\delta(d)=D_{\rm th}-D_{\rm 2CC}$
is plotted as a function of the dimension. The symbols
(triangles, circles and squares) correspond to prime dimensions.
}
\label{Dth:fig}
\end{figure}
\subsection{Discussion}
The tolerable disturbance $D_{\rm 2CC}$ and its theoretical upper bound $D_{\rm th}$
are plotted as functions of the dimension $d$, in Fig. \ref{Dth:fig}.
First of all, we see that $D_{\rm 2CC}(d)<D_{\rm th}$ for all $d$.
Actually, the difference between the two bounds $\delta(d)\equiv D_{\rm th}-D_{\rm 2CC}$
scales with dimension as
\begin{eqnarray}
\delta(d)=\frac{(d-1)\left (-2+\sqrt{1+4d}\right )}{2d(4d-3)},
\end{eqnarray}
and is also plotted in the inset of Fig. \ref{Dth:fig}.
It is also worth noting that $\delta$ increases as we go from qubits $(d=2)$
to qutrits $(d=3)$. It reaches its maximum value around $d=4$ (i.e., for quatrits) and
decreases monotonically for higher dimensions. Note that the same behavior also appears
in the case of $(d+1)$-basis protocols \cite{C-d}. Moreover, as $d\to\infty$, we have
that
\[
D_{\rm 2CC}(d)\approx \frac{1}{2} - \frac{1}{4\sqrt{d}},
\]
while $\delta(d)\approx 1/(4\sqrt{d})$. In other words, we see that the tolerable
error rate for the $2d$-state QKD protocols approaches its theoretical upper
bound as $1/\sqrt{d}$ for $d\to\infty$. This is in contrast to the $(d+1)$-basis
protocols where the corresponding asymptotic behavior scales with dimension as $1/d$.
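For completeness, the stated asymptotics follow directly from the closed-form expression for $D_{\rm 2CC}(d)$: using $\sqrt{1+4d}=2\sqrt{d}+O(1/\sqrt{d})$, one finds
\[
D_{\rm 2CC}(d)=\frac{2(d-1)}{4d+2\sqrt{d}+O(1)}
=\frac{1}{2}\,\frac{1-1/d}{1+1/(2\sqrt{d})+O(1/d)}
=\frac{1}{2}-\frac{1}{4\sqrt{d}}+O\!\left(\frac{1}{d}\right).
\]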
A special case of the isotropic channel we have just considered is the so-called depolarizing
channel for which $p_{mn}=p_{01}$ for $(m,n)\neq (0,0)$. In this case,
condition (\ref{final_ineq}) reduces to
Eq. (36) of Ref. \cite{C-d}, i.e.,
\[
(p_{00}-p_{10})^2>d~p_{10}\left [p_{00}+(d-1)p_{10}\right ].
\]
Note also that for $d=2$ we recover the well-known tolerable
error rate of the standard BB84 protocol, i.e., $D_{\rm 2CC}(2)=20\%$ \cite{C-2,RA}.
In closing, it is worth noting that condition (\ref{final_ineq}) can also be
obtained by generalizing the ideas of Ref. \cite{RA} to higher dimensions.
More precisely, let us define the characteristic exponent
$r_{\rm ch}^{(d)}\in\mathbb{R}$ with the defining property that there exists an
$\alpha>0$ such that
\begin{eqnarray}
\lim_{k\to\infty}\frac{R_{\rm D}^{(k)}}{\left (
\frac{d-1}d-R_{\rm P}^{(k)}\right )^{r_{\rm ch}^{(d)}}}
=\alpha,
\end{eqnarray}
where $R_{\rm D}^{(k)}$ and $R_{\rm P}^{(k)}$ are given by Eqs. (\ref{tot_Rk}), respectively.
For channels satisfying (\ref{mat_iso2}), the quantities $R_{\rm D}^{(k)}$ and
$[(d-1)/d]-R_{\rm P}^{(k)}$ tend to zero from above, as $k\to\infty$. Moreover, we obtain the
following expression for the characteristic exponent
\[
r_{\rm ch}^{(d)} = \ln\left[ \frac{p_{00}+(d-1)p_{01}}{p_{10}+(d-1)p_{11}}\right ] \bigg /
\ln\left[ \frac{p_{00}+(d-1)p_{01}}{p_{00}-p_{11}}\right ].
\]
Following \cite{RA}, Eq. (\ref{final_ineq}) can now be obtained from the condition
for asymptotic correctability, that is $r_{\rm ch}^{(d)}>2$. However, we would like to stress that
it is still an open problem why this particular correctability condition, which was originally
derived for qubit-based QKD protocols, is also valid for $2d$-state protocols and
isotropic channels.
\section{Conclusions}
\label{secIV}
We have discussed the error-tolerance of qudit-based QKD protocols
using two mutually unbiased (Fourier-dual) bases. In particular,
we focused on Gottesman-Lo-type key-distillation procedures.
For arbitrary quantum channels subject only to the symmetry between
the two bases used in the protocol, we derived a sufficient condition
for secret-key distillation, thus extending known results on depolarizing
quantum channels.
In the case of isotropic quantum channels, we were able to analyze
this condition further and to obtain an analytical expression for the tolerable error
rate as a function of the dimension $d$ of the information carriers.
Specifically, as $d\to\infty$, the tolerable error rate scales with dimension
as $1/2-1/(4\sqrt{d})$, thus approaching its theoretical upper bound of $1/2$.
This asymptotic behavior is substantially different from the corresponding behavior
of the fully symmetric $(d+1)$-basis protocols, where the tolerable error rate scales
as $1-(3+\sqrt{5})/(2d)$.
Unfortunately, for moderate values of $d$, the tolerable error rate is always well below
its corresponding theoretical upper bound $D_{\rm th}(d)$. Hence, the development of new
classical key-distillation protocols which will be able to bridge this gap still
remains an interesting open problem.
\section{Acknowledgments}
This work is supported by the EU within the IP SECOQC. K.~S.~Ranade is supported by a graduate-student
scholarship of the Technische Universit\"at Darmstadt.
|
1,108,101,563,907 | arxiv | \section{Introduction}
\label{intro}
Neural Machine Translation (NMT) has recently achieved great success on machine translation
tasks \cite{bahdanau+:2014,sutskever+:2015}. Generally, it relies on a recurrent neural network under the
Encode-Decode framework: it first encodes a source sentence into context vectors and then generates
its translation token-by-token, selecting from the target vocabulary.
Among the different variants of NMT, attention based NMT, which is the focus of this paper, is attracting increasing interest in the community \cite{bahdanau+:2014,luong+:2015}.
One of its advantages is that it is able to dynamically make use of the encoded context through an attention mechanism
thereby allowing the use of fewer hidden layers while still maintaining high levels of translation performance.
An attention mechanism is designed to predict the alignment of a target word with respect to source words.
In order to facilitate incremental decoding, it tries to make this alignment prediction
without any information about the target word itself, and thus this attention can be considered to be a form of a reordering model
(see \S \ref{rnmt} for more details). However, it differs from conventional alignment models that are able to use the target word to infer
its alignments \cite{och+ney:2000,dyer+:2013,liu+sun:2015}, and as a result there is a substantial gap in quality between the alignments derived by this attention based NMT and conventional alignment models
(54 vs.\ 30 in terms of AER for Chinese-to-English, as reported in \cite{cheng+:2016}).
This discrepancy might be an indication that the potential of NMT is limited.
In addition, the attention in NMT is learned in an unsupervised manner without explicit prior knowledge about alignment.\footnote{
We do agree that NMT is a supervised model with respect to translation rather than reordering.}
In contrast, in conventional statistical machine translation (SMT), it is standard practice to learn reordering models in a supervised manner with the guidance
from conventional alignment models.
Inspired by the supervised reordering in conventional SMT, in this paper, we propose a {\em Supervised Attention} based NMT (SA-NMT) model.
Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ \cite{och+ney:2000} or fast\_align \cite{dyer+:2013} etc.)
to obtain the alignment of the bilingual training corpus in advance.
Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in supervised
manners. Since conventional aligners deliver higher-quality alignments, it is expected that the alignment in the supervised-attention NMT
will be improved, leading to better end-to-end translation performance.
One advantage of the proposed SA-NMT is that it implements the supervision of attention as a regularization in the joint training objective (\S 3.2).
Furthermore, since the supervision of attention lies in the middle of the entire network architecture rather than at the top, as does the supervision of translation (see Figure 1(b)),
it serves to mitigate the vanishing gradient problem during the back-propagation \cite{szegedy+:2015}.
This paper makes the following contributions:
\begin{itemize}
\item
It revisits the attention model from the point of view of reordering (\S 2), and proposes a supervised attention
for NMT that is guided by statistical alignment models (\S 3).
The proposed approach is simple and easy to implement, and it is generally applicable to any attention-based NMT model,
although in this case it is implemented on top of the model in \cite{bahdanau+:2014}.
\item
On two Chinese-to-English translation tasks, it empirically shows that the proposed approach gives rise to improved performance (\S 4):
on a large scale task, it outperforms three baselines including a state-of-the-art Moses,
and leads to improvements of up to 2.5 BLEU points over the strongest baseline;
on a low resource task, it even gains about 5 BLEU points over the attention based NMT system on which it is based.
\end{itemize}
\section{Revisiting Neural Machine Translation}
\label{rnmt}
\begin{figure*}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=4.5cm]{anmt.pdf} &
\includegraphics[width=4.5cm]{sanmt.pdf} \\
(a) { NMT } &
(b) { SA-NMT}
\end{tabular}
\caption{
The computational graphs of both (a) NMT and (b) SA-NMT at timestep $t$.
Circles denote hidden variables, while squares denote the
observable variables, which receive supervision during training. The difference (marked in red) in (b) with respect to (a) is that $\alpha_t$ is treated as an observable variable instead of a hidden variable.
\label{fig:search_error}}
\end{figure*}
\noindent Suppose $\seq{x}=\left \langle x_1,x_2,\cdots,x_m \right \rangle$ denotes a source sentence,
$\seq{y}=\left \langle y_1,y_2,\cdots,y_n \right \rangle$ a target sentence. In addition, let
$y_{<t}=\left \langle y_1,y_2,\cdots,y_{t-1} \right \rangle$ denote a prefix of $\seq{y}$.
Neural Machine Translation (NMT) directly maps a source sentence into a target under an encode-decode
framework. In the encoding stage, it uses a bidirectional recurrent neural network to encode $\seq{x}$
into a sequence of vectors $E_{\seq{x}}=\left \langle E_{x_1},E_{x_2},\cdots,E_{x_m} \right \rangle$, with
$E_{x_i}$ representing the concatenation of the two vectors for the $i$-th source word from the two directional RNNs.
In the decoding stage, it generates the target translation
from the conditional probability over the pair
of sequences $\seq{x}$ and $\seq{y}$ via a recurrent neural network parametrized by $\theta$ as follows:
\begin{equation}
p(\seq{y}\mid \seq{x}; \theta) = \prod_{t=1}^{n}p(y_t\mid y_{<t}, E_{\seq{x}})
= \prod_{t=1}^{n} \text{softmax}\big(g(y_{t-1},h_t,c_t)\big)[y_t]
\label{eq-nmt}
\end{equation}
\noindent where
$h_t$ and $c_t$ respectively denote an RNN hidden state (i.e. a vector) and a context vector at timestep $t$;
$g$ is a transformation function mapping into a vector with
dimension of the target vocabulary size;
and $[i]$ denotes the $i_{th}$ component of a vector.\footnote{In that sense,
$y_t$ in Eq.\eqref{eq-nmt} also denotes the index of this word in its vocabulary.}
Furthermore, $h_t=f(h_{t-1},y_{t-1},c_t)$ is defined by an activation function, i.e. a Gated Recurrent Unit \cite{chung+:2014};
and the context vector $c_t$ is a dynamical source representation at timestep $t$, and calculated as the weighted sum of
source encodings $E_{\seq{x}}$, i.e. $c_t = \alpha_t^{\top} E_{\seq{x}}$.
Here the weight $\alpha_t$ implements an attention mechanism, and $\alpha_{t,i}$ is the alignment probability of
$y_t$ being aligned to $x_i$.
$\alpha_t$ is derived through a feedforward neural network $a$ as follows:
\begin{equation}
\alpha_t = a(y_{t-1},h_{t-1},E_{\seq{x}})
\label{att}
\end{equation}
\noindent where $a$ consists of two layers, the top one being a softmax layer.
We skip the detailed definitions of $a$ together with $E_{\seq{x}}$, $f$ and $g$, and refer the readers to \cite{bahdanau+:2014} instead.\footnote{In the original paper,
$\alpha_t$ is independent of $y_{t-1}$ in Eq.\eqref{att}, but this dependency was retained in our direct baseline NMT2.}
Figure 1(a) shows one slice of the computational graph of the NMT definition at timestep $t$.
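For illustration, one attention step of Eq.~\eqref{att} can be sketched in a few lines of NumPy (a schematic re-implementation with random toy parameters, not the actual model code; the dependence on $y_{t-1}$ is omitted for brevity):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, dim_e, dim_h, dim_a = 7, 8, 6, 4      # toy sizes
E_x = rng.normal(size=(m, dim_e))        # source encodings
h_prev = rng.normal(size=dim_h)          # decoder state h_{t-1}
W = rng.normal(size=(dim_h, dim_a))
U = rng.normal(size=(dim_e, dim_a))
v = rng.normal(size=dim_a)

hidden = np.tanh(h_prev @ W + E_x @ U)   # first layer of a
scores = hidden @ v                      # one score per source word
alpha = np.exp(scores - scores.max())    # top softmax layer
alpha /= alpha.sum()                     # alpha_t: a distribution
c_t = alpha @ E_x                        # context vector c_t
\end{verbatim}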
To train NMT, the following negative log-likelihood is minimized:
\begin{equation}
-\sum_i\log p(\seq{y}^i \mid \seq{x}^i; \theta)
\label{loss}
\end{equation}
\noindent where $\left \langle \seq{x}^i,\seq{y}^i \right \rangle$ is a bilingual sentence pair from a given training corpus, $p(\seq{y}^i \mid \seq{x}^i; \theta)$
is as defined in Eq.\eqref{eq-nmt}.
Note that even though the training is conducted in a supervised manner with respect to translation, i.e., $\seq{y}$ is observable in Figure 1(a),
the attention is learned in an unsupervised manner,
since $\alpha$ is hidden.
In Figure 1(a), $\alpha_t$ cannot depend on $y_{t}$, as the target word $y_t$ is unknown at timestep $t-1$ during testing.
Therefore, at timestep $t-1$,
NMT first tries to calculate $\alpha_t$, through which it figures out which source words will be translated next, even though the next target word $y_t$ is unavailable.
From this point of view, the attention mechanism plays a role in reordering and thus can be considered as a reordering model.
Unlike this attention model, conventional alignment models define the alignment $\alpha$ directly over
$\seq{x}$ and $\seq{y}$ as follows:
\begin{equation*}
p(\alpha\mid \seq{x},\seq{y})=\frac{\exp(F(\seq{x}, \seq{y}, \alpha))}{\sum_{\alpha'}\exp(F(\seq{x}, \seq{y},\alpha'))}
\end{equation*}
where $F$ denotes either a log-probability $\log p(\seq{y},\alpha\mid \seq{x})$ for a generative model like IBM models \cite{brown+:1993} or a feature function for discriminative models \cite{liu+sun:2015}.
In order to infer $\alpha_t$, alignment models can readily use the entire $\seq{y}$, of course including $y_t$ itself, and can thereby model the alignment between $\seq{x}$ and $\seq{y}$ more
adequately. As a result, attention based NMT might not deliver satisfying alignments compared to conventional alignment models, as reported in \cite{cheng+:2016}.
This may be a sign that the potential of NMT is limited in end-to-end translation.
\section{Supervised Attention}
\label{spv}
In this section, we introduce supervised attention to improve the
alignment, which consequently leads to better translation performance
for NMT. Our basic idea is simple: similar to conventional SMT, it
firstly uses a conventional aligner to obtain the alignment on
the training corpus; then it employs these alignment results as
supervision to train the NMT. During testing, decoding proceeds in exactly the same
manner as standard NMT, since there is no alignment supervision available for
unseen test sentences.
\subsection{Preprocessing Alignment Supervision}
As described in \S 2, the attention model outputs a soft alignment $\alpha$,
such that $\alpha_t$ is a normalized probability distribution.
In contrast, most aligners are typically oriented to grammar induction
for conventional SMT, and they usually output `hard' alignments, such as \cite{och+ney:2000}.
They only indicate whether a target word is aligned to a source word or not, and
this might not correspond to a distribution for each target word. For
example, one target word may align to multiple source words, or no
source words at all.
Therefore, we apply the following heuristics to preprocess the hard
alignment: if a target word does not align to any source words, we
inherit its affiliation from the closest aligned word with preference
given to the right, following \cite{devlin+:2014}; if a target word is aligned to
multiple source words, we assume it aligns to each one evenly. In
addition, in the implementation of NMT, there are two special tokens `eol'
added to both source and target sentences. We assume they are aligned to
each other. In this way, we can obtain the final supervision of
attention, denoted as $\hat{\alpha}$.
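A possible implementation of these heuristics is sketched below (illustrative Python of our own; hard alignment links are given as (source, target) index pairs):
\begin{verbatim}
def hard_to_soft(links, src_len, tgt_len):
    # Collect the source positions aligned to each target word.
    aligned = [[] for _ in range(tgt_len)]
    for i, j in links:
        aligned[j].append(i)
    # Unaligned target words inherit the affiliation of the
    # closest aligned word, with preference given to the right.
    for j in range(tgt_len):
        off = 1
        while not aligned[j] and off < tgt_len:
            for jj in (j + off, j - off):     # right first
                if 0 <= jj < tgt_len and aligned[jj]:
                    aligned[j] = aligned[jj]
                    break
            off += 1
    # One-to-many alignments are spread evenly.
    soft = [[0.0] * src_len for _ in range(tgt_len)]
    for j, srcs in enumerate(aligned):
        for i in srcs:                  # non-empty in practice:
            soft[j][i] += 1.0 / len(srcs)  # the `eol' pair aligns
    return soft
\end{verbatim}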
\subsection{Jointly Supervising Translation and Attention}
We propose a soft constraint method to jointly supervise the translation
and attention as follows:
\begin{equation}
\sum_i\Big[-\log p(\seq{y}^i \mid \seq{x}^i; \theta) + \lambda \, \Delta(\alpha^i,\hat{\alpha}^i; \theta)\Big]
\label{spv-obj}
\end{equation}
\noindent where $\alpha^i$ is as defined in Eq. \eqref{att}, $\Delta$ is a loss
function that penalizes the disagreement between $\alpha^i$ and
$\hat{\alpha}^i$, and $\lambda>0$ is a hyper-parameter that balances the
preference between likelihood and disagreement. In this way, we treat
the attention variable $\alpha$ as an observable variable as shown in Figure
1(b), and this is different from the standard NMT as shown in Figure
1(a) in essence. Note that this training objective resembles that of multi-task learning \cite{evgeniou+pontil:2004}.
Our supervised attention method has two further
advantages: firstly, it is able to alleviate overfitting
by means of $\lambda$; and secondly, it is capable of mitigating the
vanishing gradient problem, because the supervision of $\alpha$ lies
closer to $E_{\seq{x}}$ than $\seq{y}$ does, as shown in Figure 1(b).
In order to quantify the disagreement between $\alpha^i$ and
$\hat{\alpha}^i$, three different methods are investigated in our
experiments (a schematic implementation of all three is given after the list):
\begin{itemize}
\item {\em Mean Squared Error} (MSE)
\begin{equation*}
\Delta(\alpha^i,\hat{\alpha}^i; \theta) = \sum_m \sum_{n} \frac{1}{2}\big(\alpha(\theta)_{m,n}^i-\hat{\alpha}_{m,n}^i\big)^2
\end{equation*}
\noindent MSE is widely used as a loss for regression tasks \cite{lehmann+casella:1998}, and it
directly encourages $\alpha(\theta)_{m,n}^i$ to be equal to
$\hat{\alpha}_{m,n}^i$.
\item {\em Multiplication} (MUL)
\begin{equation*}
\Delta(\alpha^i,\hat{\alpha}^i; \theta) = -\log \big(\sum_m \sum_{n}\alpha(\theta)_{m,n}^i\times\hat{\alpha}_{m,n}^i\big)
\end{equation*}
MUL is particularly designed for agreement in word alignment and it
has been shown to be effective \cite{liang+:2006,cheng+:2016}. Note that
different from those in \cite{cheng+:2016}, $\hat{\alpha}$ is not a parametrized
variable but a constant in this paper.
\item {\em Cross Entropy} (CE)
\begin{equation*}
\Delta(\alpha^i,\hat{\alpha}^i; \theta) = -\sum_m \sum_{n} \hat{\alpha}_{m,n}^i\times\log\alpha(\theta)_{m,n}^i
\end{equation*}
Since for each $t$, $\alpha(\theta)_t$ is a distribution, it is
natural to use CE as the metric to evaluate the
disagreement \cite{rubinstein+kroese:2004}.
\end{itemize}
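All three losses are straightforward to implement; a schematic NumPy version is given below, where alpha and alpha_hat are $n\times m$ matrices whose rows are the distributions $\alpha_t$ and $\hat{\alpha}_t$, and a small constant guards the logarithms:
\begin{verbatim}
import numpy as np

def mse(alpha, alpha_hat):
    return 0.5 * np.sum((alpha - alpha_hat) ** 2)

def mul(alpha, alpha_hat, eps=1e-12):
    return -np.log(np.sum(alpha * alpha_hat) + eps)

def ce(alpha, alpha_hat, eps=1e-12):
    return -np.sum(alpha_hat * np.log(alpha + eps))
\end{verbatim}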
\section{Experiments}
\label{exps}
We conducted experiments on two Chinese-to-English translation tasks:
one is the NIST task oriented to the news domain, which is a large scale task and thus suitable for
NMT; the other is a speech translation task oriented to the travel domain, which
is a low resource task and thus very challenging for NMT.
We used case-insensitive BLEU4 to evaluate translation quality
and adopted multi-bleu.perl as its implementation.
\subsection{The Large Scale Translation Task}
\subsubsection{Preparation}
We used the data from the NIST2008 Open Machine Translation
Campaign.
The training data consisted of 1.8M sentence
pairs, the development set was nist02 (878 sentences),
and the test sets were nist05 (1082 sentences),
nist06 (1664 sentences) and nist08 (1357
sentences).
We compared the proposed approach with three strong baselines:
\begin{itemize}
\vspace{-0.3cm}
\item Moses: a phrase-based machine translation system \cite{koehn+:2007};
\vspace{-0.3cm}
\item NMT1: an attention based NMT \cite{bahdanau+:2014} system at https://github.com/lisa-groundhog/GroundHog;
\vspace{-0.3cm}
\item NMT2: another implementation of \cite{bahdanau+:2014} at https://github.com/nyu-dl/dl4mt-tutorial.
\vspace{-0.3cm}
\end{itemize}
We developed the proposed approach based on NMT2, and denoted it as {\bf SA-NMT}.
\begin{table}[t]
\centering
\begin{tabular}{l|c}
Alignment Losses & BLEU \\
\hline
\hline
Mean Squared Error (MSE) &39.4 \\
Multiplication (MUL) &39.6 \\
Cross Entropy (CE) &40.0 \\
\end{tabular}
\caption{Performance of SA-NMT on development set for different loss functions to supervise the attention in terms of BLEU.}
\label{table:losses}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{l|c}
Alignment Methods & BLEU \\
\hline
\hline
fast\_align &39.6\\
GIZA++ &40.0 \\
\end{tabular}
\caption{Comparison of the fast\_align and GIZA++ aligners for SA-NMT in terms of BLEU on the development set.}
\label{table:aligner}
\end{table}
We followed the standard pipeline to
run Moses. GIZA++ with
grow-diag-final-and was used to build the translation
model. We trained a 5-gram target language model on
the Gigaword corpus, and used a lexicalized distortion
model. All experiments were run with the default settings.
To train NMT1, NMT2 and SA-NMT, we employed the
same settings for fair comparison. Specifically,
except the stopping iteration which
was selected using development data, we used the
default settings set out in \cite{bahdanau+:2014} for
all NMT-based systems: the dimension of word embedding
was 620, the dimension of hidden units was
1000, the batch size was 80, the source and target
side vocabulary sizes were 30000, the maximum sequence length was 50,
\footnote{This excludes all the sentences longer than 50 words in either source or target side only for NMT systems, but for Moses we use the entire training data.}
the beam size for decoding was 12, and the optimization was
done by Adadelta with all hyper-parameters suggested by \cite{zeiler:2012}.
Particularly for SA-NMT, we employed a conventional word aligner to obtain the word alignment on the training data before training SA-NMT.
In this paper, we used two different aligners, which are fast\_align and GIZA++.
We tuned the hyper-parameter $\lambda$ to be 0.3 on the development set, to balance the preference between the translation and alignment.
Training was conducted on a single Tesla K40 GPU machine.
Each update took about 3.0 seconds for both NMT2 and SA-NMT, and 2.4 seconds for NMT1.
Roughly, it took about 10 days for NMT2 to finish 300000 updates.
\subsubsection{Settings on External Alignments}
We implemented three different losses to supervise the attention as described in \S 3.2.
To explore their behaviors on the development set, we employed GIZA++ to generate the alignment on the training set prior to training SA-NMT.
In Table \ref{table:losses}, we can see that MUL is better than MSE.
Furthermore, CE performs best among all losses, and thus we adopt it for the following experiments.
In addition, we also ran fast\_align to generate alignments as the supervision for SA-NMT, and the results are reported in Table \ref{table:aligner}.
We can see that GIZA++ performs slightly better than fast\_align and thus we fix the external aligner as GIZA++ in the following
experiments.
\subsubsection{Results on Large Scale Translation Task}
\begin{figure}[t]
\begin{center}
\includegraphics[width=7cm]{learn_curve.pdf}
\vspace{-0.3cm}
\caption{ Learning curves of NMT2 and SA-NMT on the development set.
\label{fig:learn_curve} }
\vspace{-0.3cm}
\end{center}
\end{figure}
Figure \ref{fig:learn_curve} shows the learning curves of NMT2 and SA-NMT on the development set.
We can see that NMT2 generally obtains higher BLEU as the number of updates increases, peaking at
update $150000$, but it is unstable from then on.
On the other hand, SA-NMT delivers much better BLEU during the early updates and
performs more steadily as the updates proceed, although it takes
more updates to reach its peak.
\begin{table}[t]
\centering
\begin{tabular}{c|c|ccc}
Systems & nist02 & nist05 & nist06 & nist08\\
\hline
\hline
Moses & 37.1 & 35.1 & 33.4 & 25.9\\
NMT1 & 37.8 & 34.1 & 34.7 & 27.4\\
\hline
NMT2 & 38.7 & 35.3 & 36.0 & 27.8\\
SA-NMT & $40.0^{*}$ & $37.8^{*}$ & $37.6^{*}$ & $29.9^{*}$\\
\end{tabular}
\caption{BLEU comparison for large scale translation task.
The development set is nist02, and the test sets are nist05,nist06 and nist08.
`*' denotes that SA-NMT is significantly better than Moses, NMT1 and NMT2 with $p<0.01$.
Note that Moses is trained with more bilingual sentences and an additional monolingual corpus.
}
\label{table:main}
\end{table}
Table \ref{table:main} reports the main end-to-end translation results for the large scale task.
We find that both standard NMT systems generally outperform Moses, with the exception of NMT1 on nist05.
The proposed SA-NMT achieves significant and consistent improvements over all three baseline systems,
and it obtains average gains of 2.2 BLEU points on the test sets over its direct baseline NMT2.
It is clear from these results that our supervised attention mechanism is highly effective in practice.
\subsubsection{Results and Analysis on Alignment}
\begin{figure}[t]
\begin{center}
\includegraphics[width=15.7cm]{alignment.pdf}
\caption{Example (soft) alignments of (a) NMT2 (i.e., standard NMT with unsupervised attention), (b) SA-NMT (i.e. NMT with supervised attention),
and (c) GIZA++ on two Chinese-English sentence pairs. The soft alignments in (c) are converted from hard alignments as in \S 3.1.
The first row shows the alignments of the sentence pair from the training set while the second row shows the alignments from test sets.
\label{fig:alignment} }
\vspace{-0.3cm}
\end{center}
\end{figure}
As explained in \S 2, standard NMT can not use the target word
information to predict its aligned source words, and thus might fail to
predict the correct source words for some target words. For example, for
the sentence in the training set in Figure \ref{fig:alignment} (a), NMT2
aligned `following' to `皮诺契特 (gloss: pinochet)'
rather than `继 (gloss: follow)', and worse still it
aligned the word `.' to `在 (gloss: in)' rather than `。' even though this word is
relatively easy to align correctly. In contrast, with the help of information from
the target word itself, GIZA++ successfully aligned both `following' and
`.' to the expected source words (see Figure\ref{fig:alignment}(c)). With
the alignment results from GIZA++ as supervision, we can see that our
SA-NMT can imitate GIZA++ and thus align both words correctly. More
importantly, for sentences in the unseen test set, like GIZA++,
SA-NMT confidently aligned `but' and `.' to their correct source words
respectively as in Figure\ref{fig:alignment}(b), where NMT2 failed.
It seems that SA-NMT can learn its alignment behavior from
GIZA++, and subsequently apply the alignment abilities it has learned to unseen test
sentences.
\begin{table}[t]
\centering
\begin{tabular}{c|l}
Methods & AER \\
\hline
\hline
GIZA++ & $30.6^{*}$ \\
\hline
NMT2 & 50.6 \\
SA-NMT & $43.3^{*}$ \\
\end{tabular}
\caption{Results on word alignment task for the large scale data. The evaluation metric is Alignment Error Rate (AER).
`*' denotes that the corresponding result is significantly better than NMT2 with $p<0.01$.}
\vspace{-0.5cm}
\label{table:alignment}
\end{table}
Table \ref{table:alignment} shows the overall alignment results on word alignment task in terms of the metric, alignment
error rate. We used the manually-aligned dataset as in \cite{liu+sun:2015} as the test set.
Following \cite{luong+manning:2015}, we force-decode the bilingual sentence pairs (source and reference sentences) to
obtain the alignment matrices, and then for each target word
we extract one-to-one alignments by picking up the source word with the highest alignment confidence as the hard alignment.
From Table \ref{table:alignment}, we can see clearly that standard NMT (NMT2) is far behind GIZA++ in alignment quality.
This shows that it is possible and promising to supervise the attention with GIZA++.
With the help from GIZA++, our supervised attention based NMT (SA-NMT) significantly reduces the AER,
compared with the unsupervised counterpart (NMT2). This shows that the proposed approach is able to realize our intuition:
the alignment is improved, leading to better translation performance.
Note that there is still a gap between SA-NMT and GIZA++ as indicated in Table \ref{table:alignment}.
Since SA-NMT was trained for machine translation instead of word alignment,
it is possible to reduce its AER if we aim at the word alignment task only.
For example, we can enlarge $\lambda$ in Eq.\eqref{spv-obj} to bias the training objective towards the word alignment task, or we can
change the architecture slightly to add the target-word information crucial for alignment, as in \cite{yang+:2013,tamura+:2014}.
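For reference, the hard-alignment extraction and the AER computation used above can be sketched as follows (illustrative Python; S and P denote the sure and possible gold links of the standard AER definition):
\begin{verbatim}
import numpy as np

def extract_hard(alpha):
    # For each target word t, pick the source word with the
    # highest attention weight alpha[t, i].
    return {(int(np.argmax(row)), t)
            for t, row in enumerate(alpha)}

def aer(A, S, P):
    # Alignment error rate (S is a subset of P).
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
\end{verbatim}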
\subsection{Results on the Low Resource Translation Task}
\begin{table}[t]
\centering
\begin{tabular}{c|c|c}
Systems & CSTAR03 & IWSLT04 \\
\hline
\hline
Moses & 44.1 & 45.1 \\
\hline
NMT1 & 33.4 & 33.0 \\
NMT2 & 36.5 & 35.9 \\
SA-NMT & $39.8^{*}$ & $40.7^{*}$ \\
\end{tabular}
\caption{BLEU comparison for low-resource translation task.
CSTAR03 is the development set while IWSLT04 is the test set.
`*' denotes that SA-NMT is significantly better than both NMT1 and NMT2 with $p<0.01$.}
\label{table:small-main}
\end{table}
For the low resource translation task, we used the BTEC corpus as the training data,
which consists of 30k sentence pairs with 0.27M Chinese words and 0.33M English words.
As development and test sets, we used the CSTAR03 and IWSLT04 held out sets, respectively.
We trained a 4-gram language model on the target side of training corpus for running Moses.
For training all NMT systems, we employed the same settings as those in the large scale task,
except that vocabulary size is 6000, batch size is 16, and the hyper-parameter $\lambda=1$ for SA-NMT.
Table \ref{table:small-main} reports the final results. Firstly, we can see that
both standard neural machine translation systems, NMT1 and NMT2, fall behind Moses by a substantial gap.
This result is not difficult to understand: neural network systems typically require sufficient
data to boost their performance, and thus low resource translation tasks are very challenging
for them. Secondly, the proposed SA-NMT gains considerably over NMT2, as in the large scale task,
and the gap towards Moses is narrowed substantially.
While our SA-NMT does not surpass the state-of-the-art Moses as in large scale translation,
this is a strong result if we consider previous work on low resource translation tasks:
\newcite{arthur+:2016} gained over Moses on the Japanese-to-English BTEC corpus,
but they resorted to a corpus consisting of 464k sentence pairs;
\newcite{luong+manning:2015} reported performance comparable to Moses on English-to-Vietnamese with 133k sentence pairs,
which is more than 4 times our corpus size.
Our method could possibly surpass Moses by using reranking as in \cite{neubig+:2015,cohn+:2016},
but this is beyond the scope of this paper and we leave it as future work.
\section{Related Work}
\label{rlw}
Many recent works have led to notable improvements in the attention mechanism
for neural machine translation. \newcite{tu+:2016} introduced an explicit coverage
vector into the attention mechanism to address the over-translation and
under-translation inherent in NMT. \newcite{feng+:2016} proposed an additional
recurrent structure for attention to capture long-term
dependencies. \newcite{cheng+:2016} proposed an agreement-based
bidirectional NMT model
for symmetrizing alignment.
\newcite{cohn+:2016} incorporated multiple structural alignment biases
into attention learning for better alignment. All of them improved
the attention models that were learned in an unsupervised manner.
While we do not modify the attention model itself, we learn it in a
supervised manner; therefore our approach is orthogonal to theirs.
It has always been standard practice to learn reordering models from
alignments for conventional SMT either at the phrase level or word
level. At the phrase level, \newcite{koehn+:2007} proposed a lexicalized
MSD model for phrasal reordering; \newcite{xiong+:2006} proposed a
feature-rich model to learn phrase reordering for BTG; and
\newcite{li+:2014} proposed a neural network method to learn a BTG
reordering model. At the word level, \newcite{bisazza+federico:2016}
surveyed many word reordering models learned from alignment models for
SMT, and in particular there are some neural network based reordering
models, such as \cite{zhang+:2016}. Our work is inspired by these works in spirit,
and it can be considered to be a recurrent neural network based word-level reordering
model. The main difference is that in our approach the reordering model
and translation model are trained jointly, rather than separately as in their work.
\section{Conclusion}
It has been shown that the attention mechanism in NMT is worse than conventional word alignment models in terms of alignment accuracy.
This paper first provides an explanation for this by viewing the attention mechanism from the point of view of reordering.
It then proposes a supervised attention for NMT with guidance from external conventional alignment models,
inspired by the supervised reordering models in conventional SMT.
Experiments on two Chinese-to-English translation tasks show that the proposed approach achieves better alignment results,
leading to significant gains relative to standard attention based NMT.
\section*{Acknowledgements}
We would like to thank Xugang Lu for invaluable discussions on this work.
|
1,108,101,563,908 | arxiv | \section{Introduction}
Research on quantum measurement theory has a long history, and a broad framework has already been established \cite{Braginsky1992,Wiseman2009,Jacobs2014,Arthurs1965,Busch1985,Stenholm1992,Jordan2005}. Driven by advances in quantum experimental technology \cite{Blais2004,Campagne-Ibarcq2014,Xiang2013,Slichter2012,Hatridge2013,Murch2013}, some new sub-areas of quantum measurement theory have emerged and attracted much attention recently; for instance, quantum non-commuting measurement \cite{Wei2008,Ruskov2010,Ruskov2012,Hacohen-Gourgy2016,Atalaya2018}. Heisenberg's uncertainty principle points out that non-commuting observables such as position and momentum cannot be precisely measured simultaneously. Moreover, such a measurement is impossible with projective measurements, while it has been proved that simultaneous non-commuting measurements can be performed using continuous weak quantum measurements \cite{Arthurs1965}. Furthermore, a continuous quantum measurement eventually converts to a projective measurement when the measurement strength increases beyond a certain extent; thus, when a simultaneous measurement of non-commuting observables is given, whether it can be physically realized is a problem worth considering.
The theoretical analysis of the simultaneous measurement of non-commuting observables can be traced back to the middle and late 20th century \cite{Arthurs1965,Busch1985,Stenholm1992,Jordan2005}. In the last decade, it has attracted renewed attention thanks to the development of quantum experimental technology \cite{Wei2008,Ruskov2010,Ruskov2012,Hacohen-Gourgy2016,Atalaya2018,Garcia-Pintos2016,Garcia-Pintos2017}. Ref.~\cite{Wei2008} analyzed the statistics of the measured outputs, and the fidelity of monitoring the system state via the measured outputs, when a non-commuting measurement is performed on a qubit. The dynamics caused by the simultaneous measurement of non-commuting observables has been theoretically discussed in Ref.~\cite{Ruskov2010}, and experimentally demonstrated in Ref.~\cite{Hacohen-Gourgy2016}.
In Ref.~\cite{Atalaya2018}, the temporal correlation of the two output signals of quantum non-commuting measurement has been discussed and further applied to quantum parameter estimation.
Up to now, a measure for the degree of non-commuting behavior of quantum measurements has not been discussed in the literature. This paper therefore considers defining such a measure, and further determining whether a quantum measurement can be physically realized based on it. With a measure for the non-commutability of quantum measurements, much related work in the field of quantum non-commuting measurement may be able to make further progress: when analyzing data coming from experiments or simulations, this measure can help to understand the phenomena presented by the data, and can also provide a new idea for data processing; the measure is also expected to provide guidance for the design of experiments. Most importantly, this measure is expected to deepen the understanding of quantum non-commuting measurement, thereby driving the discovery of new physical phenomena.
Unlike measurements in the classical world, quantum measurements change the dynamics of the measured system \cite{Braginsky1992,Wiseman2009,Jacobs2014}. To keep the measure general, it should not refer to any specific mathematical representation of these dynamics, so its inputs are restricted to the initial and final states of the measurement. Moreover, the dynamics induced by a given quantum measurement is stochastic, so the post-measurement states must be represented by a set of density matrices. Accordingly, to quantify the non-commutability of a quantum measurement, $N$ (sufficiently many) copies with the same initial state $\rho_0$ are prepared, the measurement is performed on each copy, and the set of the resulting $N$ final states is denoted $R_f$. The available input data are thus $\rho_0$ and $R_f$.
The dynamics of quantum continuous weak measurements considered in this paper can be described by the following stochastic master equation \cite{Wiseman2009}:
\begin{eqnarray}
d\rho&=&-\kappa_0\big[\sigma_\phi,[\sigma_\phi,\rho]\big]dt \nonumber\\&&+\sqrt{2\kappa_0}\big(\sigma_\phi\rho+ \rho\sigma_\phi-2{\rm Tr}(\sigma_\phi\rho)\rho\big)dW,
\label{eq1}
\end{eqnarray}
where $dW$ is a standard Wiener process, and $\kappa_0$ and $\sigma_\phi$ denote the measurement strength and the measured observable, respectively. Furthermore, following Ref.~\cite{Hacohen-Gourgy2016}, when two non-commuting observables are measured simultaneously the system dynamics obeys the stochastic master equation
\begin{eqnarray}
d\rho&=&\sum\limits_{i=1}^{2}{-\kappa_i\big[\sigma_{\phi i},[\sigma_{\phi i},\rho]\big]dt}\nonumber \\&&+ \sum\limits_{i=1}^{2}{\sqrt{2\kappa_i}\big(\sigma_{\phi i}\rho+\rho\sigma_{\phi i}-2{\rm Tr}(\sigma_{\phi i}\rho)\rho\big)dW_i},
\label{eq2}
\end{eqnarray}
where $\sigma_{\phi i}$ is the measured observable, $\kappa_i$ is the measurement strength, and $dW_i$ is the corresponding standard Wiener process.
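As a concrete illustration, the set of final states $R_f$ can be generated numerically by integrating Eq.~(\ref{eq2}) with a simple Euler--Maruyama scheme. The following Python sketch is only illustrative: the step size, number of copies, and the renormalization of $\rho$ after each step are our own choices, and $\kappa$ is taken in units of inverse seconds.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def final_states(rho0, theta, kappa, T=200e-6,
                 dt=2e-8, N=100, seed=1):
    """Euler-Maruyama integration of Eq. (2) for N
    identically prepared copies; returns R_f."""
    rng = np.random.default_rng(seed)
    obs = [sz,
           np.sin(theta) * sx + np.cos(theta) * sz]
    Rf = []
    for _ in range(N):
        rho = np.array(rho0, dtype=complex)
        for _ in range(int(T / dt)):
            drho = np.zeros_like(rho)
            for s in obs:
                dW = rng.normal(0.0, np.sqrt(dt))
                # double commutator [s, [s, rho]]
                ddc = (s @ s @ rho - 2 * s @ rho @ s
                       + rho @ s @ s)
                # innovation term of Eq. (2)
                inn = (s @ rho + rho @ s
                       - 2 * np.trace(s @ rho).real * rho)
                drho += (-kappa * ddc * dt
                         + np.sqrt(2 * kappa) * inn * dW)
            rho = rho + drho
            rho = 0.5 * (rho + rho.conj().T)  # Hermitian
            rho = rho / np.trace(rho).real    # unit trace
        Rf.append(rho)
    return Rf
\end{verbatim}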
For a measurement of a single observable, which exhibits no non-commuting behavior, the system state gradually evolves toward one of the two eigenstates of the measured observable \cite{Braginsky1992}; the set of final states can therefore be divided into two subsets, one for each eigenstate. When non-commuting observables are measured, four eigenstates influence the evolution of the system state. However, when the angle $\theta$ between the two observables is less than $\pi/2$, each eigenstate of one observable lies closer to one particular eigenstate of the other, so the set of final states can again be divided into two subsets, each corresponding to a pairing of an eigenstate of one observable with the closer eigenstate of the other.
Thus, a clustering method is needed to divide the set of final states into two subsets whose statistical characteristics can then be analyzed to construct the measure. We choose the $K$-means method, a prototype-based clustering method that minimizes the sum of squared Euclidean distances between each object and its assigned prototype \cite{Lloyd1982,Jain2010}. A brief description of the $K$-means procedure follows.
To cluster the objects into $K$ classes, first select $K$ initial prototypes at random, assign each object to the prototype at the smallest Euclidean distance to form $K$ clusters, and take the mean of each cluster as the new prototype. This is iterated until a stopping condition is met \cite{Lloyd1982}; in this paper the iteration stops when the distance between the new and old prototypes falls below a small threshold $\lambda=0.01$. In this way all objects are easily partitioned into $K$ classes.
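A minimal Python sketch of this clustering step, applied to the Bloch vectors of the final states and using the stopping threshold $\lambda=0.01$ given above; for simplicity it assumes qubit states and that neither cluster becomes empty.
\begin{verbatim}
import numpy as np

def bloch(rho):
    """Bloch vector of a qubit density matrix."""
    return np.array([2.0 * rho[0, 1].real,
                     -2.0 * rho[0, 1].imag,
                     (rho[0, 0] - rho[1, 1]).real])

def kmeans2(points, lam=0.01, seed=1):
    """Two-means clustering with the stopping
    rule from the text."""
    rng = np.random.default_rng(seed)
    cent = points[rng.choice(len(points), size=2,
                             replace=False)]
    while True:
        dist = np.linalg.norm(
            points[:, None, :] - cent[None, :, :],
            axis=2)
        labels = dist.argmin(axis=1)
        new = np.array([points[labels == k].mean(axis=0)
                        for k in (0, 1)])
        if np.linalg.norm(new - cent) < lam:
            return labels
        cent = new
\end{verbatim}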
\begin{figure}
\setlength{\abovecaptionskip}{6pt} \centerline{\scalebox{0.8}[0.8]{\includegraphics{F1}}}
\caption{(Color online) The intermediate variables $D(M)$ and $V(M)$ as functions of the measurement strength $\kappa$ and of the angle $\theta$ between the two measured observables ($\sigma_{\phi 1}=\sigma_z$ and $\sigma_{\phi 2}=\sin\theta\,\sigma_x+\cos\theta\,\sigma_z$). The initial state is $\rho_0=[0.8,0.4; 0.4,0.2]$, and the system dynamics is calculated from Eq.~(\ref{eq2}) with measurement duration $T=200\,\mu\mathrm{s}$.}\label{fig1}
\end{figure}
We now give the expression for the proposed measure of the non-commutability of a quantum measurement and show that it has the properties required of such a measure. For $N$ (sufficiently many) copies initialized in $\rho_0$, the measurement $M$ is performed on each copy to obtain the set of final states $R_f$, which is clustered by the $K$-means method with $K=2$ into the two subsets $R_{f1}$ and $R_{f2}$. The measure is then defined as follows:
\begin{eqnarray}
{P}_i(M)&=&\rho_{fi}\bigg(\mathop {\arg \min}\limits_{k=1,2,...,N_i} {\sum\limits_{j=1}^{N_i}{\parallel B(\rho_{fi}(k))-B(\rho_{fi}(j))\parallel_2^2}}\bigg),\nonumber\\
D(M)&=&\sum\limits_{i=1}^{2} {{\parallel {B(\rho_0)-B({P}_i(M))}\parallel_2^2}},\nonumber\\
V(M)&=&{1\over N}\sum\limits_{i=1}^{2} {\sum\limits_{j=1}^{N_i} {\parallel B(\rho_{fi}(j))-{1\over N_i}\sum\limits_{k=1}^{N_i}{B(\rho_{fi}(k))}}\parallel_2^2},\nonumber\\
\Phi(M)&=&\alpha {V(M)\over {D(M)(4-D(M)+\gamma)}}-\beta.
\end{eqnarray}
\begin{figure}
\setlength{\abovecaptionskip}{6pt}
\centerline{\scalebox{0.6}[0.6]
{\includegraphics{F2}}}
\caption{(Color online) Panels (a) and (b) show the defined geometric measure $\Phi(M)$ of the non-commuting simultaneous measurement as a function of the measurement strength $\kappa$ and of the non-commutability angle $\theta$, respectively, for several values of the other parameter. The remaining system parameters are the same as in Fig.~\ref{fig1}.}\label{fig2}
\end{figure}
Here $P_i(M)$, $i=1,2$, is the element of subset $R_{fi}$ that minimizes the sum of squared distances to the other elements of that subset, and is intended to reflect the average position of the subset; $D(M)$ is an intermediate quantity, computed from $\rho_0$ and $P_i(M)$, that reflects the measurement strength; $V(M)$ is the weighted sum of the variances of the two subsets, which is intended to reflect the angle $\theta$ between the two observables; and $\Phi(M)$ is the measure finally constructed. $N_1$ and $N_2$ are the cardinalities of $R_{f1}$ and $R_{f2}$, $\rho_{fi}(j)$ is the $j$-th element of $R_{fi}$, and $B(\rho)$ denotes the Bloch vector of the density matrix $\rho$. $\alpha$, $\beta$, and $\gamma$ are auxiliary parameters to be determined, subject to $\alpha,\beta,\gamma>0$ and $\gamma\to0$.
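For completeness, the following sketch evaluates the measure from the clustered final states; it reuses the \texttt{bloch()} helper from the clustering sketch above, and the default parameter values match those used later in the text.
\begin{verbatim}
import numpy as np

def phi_measure(rho0, Rf, labels,
                alpha=1.0, beta=0.0, gamma=0.01):
    """Evaluate D(M), V(M) and Phi(M) from the
    clustered final states."""
    b0 = bloch(rho0)
    B = np.array([bloch(r) for r in Rf])
    D, V = 0.0, 0.0
    for k in (0, 1):
        Bk = B[labels == k]
        # P_k: element minimizing the summed squared
        # distances to the rest of its subset
        cost = ((Bk[:, None, :] - Bk[None, :, :]) ** 2
                ).sum(axis=2).sum(axis=1)
        Pk = Bk[cost.argmin()]
        D += ((b0 - Pk) ** 2).sum()
        V += ((Bk - Bk.mean(axis=0)) ** 2).sum()
    V /= len(B)
    return alpha * V / (D * (4.0 - D + gamma)) - beta
\end{verbatim}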
Fig.~\ref{fig1} shows $D(M)$ and $V(M)$ as functions of the measurement strength $\kappa_i$ and of the non-commuting angle $\theta$ between the two observables. We consider the simplest case of two identical detectors, $\kappa_1=\kappa_2=\kappa$.
We can see that $D(M)$ simply increases with $\kappa$, as expected, while $V(M)$ is affected not only by $\theta$ but also by $\kappa$.
Moreover, Fig.~\ref{fig2}(a) plots the defined geometric measure $\Phi(M)$ as a function of the measurement strength $\kappa$ for various non-commuting angles, and Fig.~\ref{fig2}(b) shows its dependence on the non-commuting angle $\theta$ for various measurement strengths $\kappa$. The auxiliary parameters are set to
$\alpha=1$, $\beta=0$, $\gamma=0.01$. The defined
geometric measure clearly increases with both $\kappa$ and $\theta$, as expected; a more detailed analysis is given below.
In addition, a simple simulation experiment demonstrates that our measure is a useful tool in the field of quantum non-commuting measurement. For the dynamics described by the stochastic master equation~(\ref{eq2}) with $\kappa_1=\kappa_2=\kappa$, there must exist a bound above which the quantum measurement becomes physically unrealizable. In the case considered here only two parameters can be varied, $(\kappa,\theta)$, so the bound can be represented by a curve in the two-dimensional coordinate system whose axes are these parameters. Without loss of generality, we assume three candidate reference curves ($L_i$, $i=1,2,3$) close to the real bound and use our measure to distinguish among them and select the optimal one.
In our view, the values of $\Phi(M)$ below and above the real bound should differ the most. We therefore denote the sets of values of $\Phi(M)$ below and above a given curve $L_i$ ($i=1,2,3$) by $S1_{L_i}$ and $S2_{L_i}$, use the Matlab function \textit{ksdensity} to estimate the probability density curves of $S1_{L_i}$ and $S2_{L_i}$, and compute the proportion of overlap between the two density curves to determine the optimal bound curve.
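The original analysis was carried out with the Matlab function \textit{ksdensity}; an equivalent Python sketch using SciPy's Gaussian kernel density estimator is given below. Interpreting the ``proportion of overlapping parts'' as the integral of the pointwise minimum of the two densities is our reading of the text.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def overlap_proportion(s1, s2, n_grid=512):
    """Overlap of two kernel density estimates,
    computed as the integral of min(p1, p2)."""
    grid = np.linspace(min(s1.min(), s2.min()),
                       max(s1.max(), s2.max()),
                       n_grid)
    p1 = gaussian_kde(s1)(grid)
    p2 = gaussian_kde(s2)(grid)
    return np.trapz(np.minimum(p1, p2), grid)
\end{verbatim}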
Fig.~\ref{fig4} shows the three chosen reference curves and the corresponding pairs of probability density curves. Tab.~\ref{table1} lists the overlap proportions for the three reference curves, from which we finally select $L_1$ as the optimal bound curve.
\begin{figure}
\setlength{\abovecaptionskip}{6pt} \centerline{\scalebox{0.68}[0.68]{\includegraphics{F3}}}
\caption{(Color online) (a) The three reference curves ($L_1$, $L_2$, and $L_3$) assumed to be closest to the real bound. Panels (b), (c), and (d) show the pairs of probability density curves obtained from the sets of values of $\Phi(M)$ below and above $L_1$, $L_2$, and $L_3$, respectively.}\label{fig4}
\end{figure}
\renewcommand\arraystretch{2}
\begin{table}
\caption{The proportion of overlapping parts of two probability density curves}
\begin{tabular}{|p{2cm}<{\centering}|p{2cm}<{\centering}|p{2cm}<{\centering}|p{2cm}<{\centering}|}
\hline
Reference Curve & $L_1$ & $L_2$ & $L_3$ \\
\hline
Proportion & 20.08\% & 21.82\% & 21.00\%\\
\hline
\end{tabular}
\label{table1}
\end{table}
\renewcommand\arraystretch{0.5}
\begin{figure*}
\setlength{\abovecaptionskip}{6pt} \centerline{\scalebox{0.8}[0.8]{\includegraphics{F4}}}
\caption{(Color online) Several typical cases in quantum measurements, where (a) and (b) correspond to projection measurements of a single observable and non-commuting observables, respectively; (c) and (d) correspond to continuous weak measurements of a single observable and non-commuting observables, respectively.}\label{fig3}
\end{figure*}
Finally, we examine whether the measure defined above gives reasonable results for several typical cases of quantum measurement, shown in Fig.~\ref{fig3}, thereby establishing its physical soundness. We write $M_{\rm proj}$, $M_{\rm weak}$, and $M_{\rm ncpr}$ for projective measurement, continuous weak measurement, and non-commuting projective measurement, respectively. First consider the simplest projective measurement of a single observable $\sigma_z$ (Fig.~\ref{fig3}(a)), after which the system is in one of the basis states of $\sigma_z$, $\rho_0=\left|0\right\rangle\left\langle0\right|$ or $\rho_1=\left|1\right\rangle\left\langle1\right|$, with probability $p_{0}={\rm Tr}(\rho\rho_{0})$ or $p_{1}={\rm Tr}(\rho\rho_{1})$. The elements of one subset obtained by the $K$-means method can then only be one basis state of $\sigma_z$, and those of the other subset only the other basis state, so $V(M_{\rm proj})=0$ and $\Phi(M_{\rm proj})=-\beta$ attains its minimum. Since a projective measurement of a single observable clearly exhibits no non-commuting behavior, the measure behaves correctly in this case.
Next consider the influence of increasing the angle $\theta$ between the two observables while the measurement strength $\kappa$ is small and fixed (Fig.~\ref{fig3} (b) and Fig.~\ref{fig3} (d)). Physically, the non-commutability of the measurement in this case must increase with $\theta$. For small $\kappa$ the value of $\Phi(M_{\rm weak})$ is determined mainly by $V(M_{\rm weak})$, and increasing $\theta$ spreads the final states more evenly over the surface formed by the initial state and the steady plane determined by the eigenstates of the two observables. This increases $V(M_{\rm weak})$ and hence $\Phi(M_{\rm weak})$, consistent with physical intuition.
The last typical case is the projective measurement of non-commuting observables (Fig.~\ref{fig3} (c)). Such a measurement is physically unrealizable for any $\theta$ other than $0$, so the measure should exceed a certain boundary value. When the measurement strength reaches the projective limit, $D(M_{\rm ncpr})=4$; since the denominator $D(M)(4-D(M)+\gamma)$ then reduces to $4\gamma$ with $\gamma\to0$, $\Phi(M_{\rm ncpr})$ becomes sufficiently large whenever $V(M_{\rm ncpr})$ is nonzero. Moreover, $\Phi(M_{\rm ncpr})=V(M_{\rm ncpr})=0$ holds only for $\theta=0$. These typical cases illustrate the physical soundness of the defined measure.
In conclusion, this paper proposes a measure for the non-commutability of quantum measurements based on the $K$-means clustering method and demonstrates its soundness. We further consider the application of the measure to several typical cases of quantum measurement to indicate its practicality. Our work helps advance the understanding of quantum non-commuting measurement.
As quantum measurement has been applied to many other fields such as quantum control and quantum state estimation \cite{Yang2018,Gong2018,Weber2014,Vijay2012,Zhang2017,Gillett2010}, applications of non-commuting quantum measurement in these fields are also highly anticipated \cite{Ruskov2010,Hacohen-Gourgy2016}. However, the more information is gained from a non-commuting measurement, the more backaction and uncertainty it introduces. How to choose a measurement that extracts as much information as possible while keeping backaction and uncertainty acceptable is therefore of great interest and importance, and our measure is expected to offer a starting point for this problem. Moreover, we wonder whether there is a connection between our measure and Heisenberg's uncertainty principle. Since a quantum measurement becomes physically unrealizable once its measure exceeds a certain boundary, we hope to obtain this boundary precisely from Heisenberg's uncertainty principle in future work.
\begin{acknowledgements}
This work was supported by the National Natural Science Foundation of China under Grant 61873317, and by the Fundamental Research Funds for the Central Universities.
\end{acknowledgements}
\section{Introduction}
Navigation systems like Google Maps became rapidly popular by providing expert knowledge and real-time personalized contextual information to guide travel. No comparable personal lifestyle guidance exists in health care systems. To shift the focus in health systems from temporary fixes to long-term solutions, such a guidance system must be implemented \cite{Sagner2016TheHealthspan} \cite{McElwaine2015SystematicClinicians}.
Commonly, physicians focus primarily on medical methods to manage health when a patient becomes ill. By evolutionary design, optimal health is universally desired (and should be provided) at all times. True health outcomes result from actions taken in every moment and place, not just medical intervention during sickness. Future advancements in health must continuously sense individual needs and rapidly provide the relevant resources so corrective actions ensure health stability.
For example, optimal health for chronic diseases like type 2 diabetes (T2D) remains a challenge. Insulin resistance, obesity and other biological changes that lead to uncontrolled T2D start many years before a formal diagnosis and treatment plan is given. These biological changes can be reversed by lifestyle choices. If insulin resistance is caught early in a prediabetic state, the course of the disease is potentially reversible \cite{Perreault2014ApproachingPre-diabetes}.
Although biological scientific understanding has greatly progressed in the past few decades, diabetes and other lifestyle associated diseases continue to rapidly rise across the globe. Given this progress in research, we would expect the opposite. Historically, monitoring lifestyle factors has been difficult, and computational power with effective methods to address the needs of each individual have been limited. Trying to produce changes in routine lifestyle habits is also a tremendous psychological hurdle.
\begin{figure}
\centering
\includegraphics[width=0.96\columnwidth]{Images/p5cybernetichealth.png}
\caption{P5 Cybernetic Health coordinates the elements of personalized, predictive and precision medicine through persuasion techniques that result in disease prevention.}
~\label{fig:p5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{Images/Cybernetics.png}
\caption{Cybernetic Control pairs the individual user and digital health assistance to enact real-world changes to optimize health}~\label{fig:Cybernetics}
\end{figure}
This work targets three significant problems in current health care delivery through a cybernetic approach. First: Health systems largely react to problems, rather than avoiding problems through preventive measures. Second: Gold-standard medical practices depend on evidence-based medicine taken from population averages. A lack of individual contextual analysis results in compromised care and sub-optimal outcomes. Third: Access to medical guidance is limited due to poor information dissemination, and restricted physical time and space. If doctors give lifestyle suggestions to patients, they are hard to translate into everyday life decisions. When a need or question arises for health advice, such as "What should I eat?" or "Should I take this medicine now or later?", there is a large time delay to receive meaningful assistance. Patients usually scramble for unreliable information via web search engines when faced with such decisions. The difficulty of scaling physical systems, like hospitals and personnel, further limits high quality care. This is especially the case for under-served populations across the globe.
P5 Cybernetic Health (P5C) transforms these three major hurdles into opportunities (Figure \ref{fig:p5}). First: By analyzing individual data with context, we can predict problems as they arise and give the best solutions. In the instance of diabetes, we begin to predict an increase in insulin resistance risk factors for increasing complications. This leads to actionable information for the patient. Second: We tackle the issue of traditional "evidence-based" medicine by combining sub-population and individual data dynamically into a system to give "enhanced real-time personalized evidence-based" medicine unique for each patient. For a diabetic patient, we give specific actions that would result in the best blood glucose management. Third: We reduce the delay of health advice through real-time sensors and specific feedback guidance loops, while retaining the ability to scale to millions of patients through a virtual platform.
\section{Cybernetic Principles}
Cybernetic principles transformed the design of complex systems \cite{wiener1948cybernetics}. Continuous measurements are a key component in closed loop feedback control systems. Airplanes, ovens, and other machines use these feedback loops to safely and efficiently operate. Imagine if the thermometer in an oven gave a reading once every year. How would the oven know to heat or turn off? The thermometer, heater, and other components of the oven must all be coordinated and continuously working for the machine to operate correctly. Similarly, the human body maintains homeostasis amongst a remarkable array of perturbations. Biological systems use an intricate play of real-time sensors and actuators within the body to do this. Cellular sensors collect personalized information based on the individual body or tissue changes. These signals affect outputs for corrective action, and are especially effective if early warning signs are detected. These cybernetic mechanisms keep the human body naturally stable.
Occasionally, this human system becomes unstable which results in deteriorating health. With current day medical practice, the detection and corrective actions are greatly delayed, resulting in further unstable systems and further deteriorating health for the patient. Our vision is to reduce this latency of the health care system, by using continuous sensors that are specific to the individual, while enabling corrective actions to transform an unstable health condition back to full health stability. In the operating room, anesthesiologists are beginning to develop closed-loop drug administration systems that will replace much of their own job during a surgery \cite{Rinehart2015Closed-LoopCare}.
For type 1 diabetics, mechanical devices can replace the biological pancreas with a hormone pump and continuous glucose monitoring. Both of these examples are mechanical implementations of cybernetics for health. Unique to our platform, we begin to mesh real world information into a virtual environment that patients can use to peek into their live health status with persuasive guidance for optimal decision making. We call an application of this Health-Butler (HB).
We chose to implement HB for T2D for two key reasons. First, diabetes is a growing global health problem: the disease burden has nearly doubled in the last 30 years, reaching 8.5 percent of the global population, and it is a leading cause of blindness, kidney failure, heart attacks, stroke, and lower limb amputation \cite{diabetes}. Second, T2D and its resulting complications are largely preventable if corrective action is taken promptly. These corrective measures are actionable by individuals through lifestyle factors such as nutrition, physical activity, stress management, and environmental exposure. By continuously monitoring these factors, we can predict when the human system will start deviating from its homeostatic set point, allowing HB to intervene at the right time and place.
\subsection{P5 Cybernetic Health Concept}
Leading medical professionals have advocated for integrating more technology into health care \cite{topol-2}. Future health care systems will intertwine lifestyle data with medical knowledge to develop a new paradigm that optimizes individual health. We have developed a 5 component system to bring this vision to reality.
First, we use a multi-layer modeling system to understand how to build an increasingly accurate personalized model. Second, using this dynamic model connected with real time sensors allows us to predict evolving situations that an individual may encounter. Third, we use the predictions in conjunction with validated expert medical knowledge to give the most precise solutions to avoid emerging problems. Fourth, we effectively persuade the individual as an actuator in the system by optimizing their preferences, convenience, and specific health needs. Fifth, we give feedback on how the patient's actions have quantitatively affected their health. The realization of this personalized, predictive, precise, persuasive, and preventive system depends upon the coordination of available and future technologies. We call this above approach P5C (Figure \ref{fig:p5}).
\section{Related work}
Clinical and medical research in personalized and preventive medicine struggles to gain traction. Patients continue to receive sparse feedback on how best to face disease burdens in their unique daily life circumstances. Medications prevail as the primary tool to manage diabetes because they are easy to scale physically and have standardized instructions for all patients. Currently, live exchange of personalized feedback on lifestyle changes from human (doctor) to human (patient), although beneficial, is extremely costly in time and resources \cite{Schmitt2016}.
There is a significant demand for this type of virtual platform. Patients have a much higher probability of making better lifestyle choices that would combat diabetes if given guidance \cite{Sherifali2016EvaluatingDiabetes}. Unfortunately, modern (2016) smartphone tracking applications have not shown any benefit in improving glucose control \cite{Porter2016TheReview}. Physicians continue to vocalize this void in matching glycemic patterns with lifestyle history \cite{Goyal2016}.
Health monitoring research has quickly grown with pervasive computing methods. There are some research groups who have focused on lifestyle monitoring of diabetic patients. Smartphones have been used for data collection (e.g. GPS, wifi, activity) to power machine learning and symbolic reasoning to recognize lifestyle activities of diabetic patients \cite{luvstrek2015recognising}. Daily life data of diabetic pregnant women has been integrated with their network of health care institutions \cite{ballegaard2008healthcare}. Other groups have focused on health-related data monitoring for chronic disease care. Waki et al. implemented a smartphone self-management system which consisted of 4 modules; 1) data transmission, 2) evaluation, 3) communication, and 4) dietary evaluation, which resulted in improved HbA1c in 3 months \cite{waki2014dialbetics}. Mukherjee et al. provided an environment for caregivers to monitor patient data in real-time \cite{mukherjee2014patient}. Katz et al. and Mamykina et al. designed mobile systems to merge and analyze data streamed from multiple sensors to give user recommendations \cite{katz2016investigating, mamykina2015adopting}. Banos et al. explored existing personalized health data applications to develop a framework, called Mining Minds, to assimilate health data in order to better serve patients \cite{Banos2015MiningSupport}. However, to the best of our knowledge, there is no joint research between medical and computing fields that cover the scope of cybernetics to coordinate the elements of personalized, predictive and precision medicine through persuasion techniques to result in disease prevention (Figure \ref{fig:p5}).
\begin{figure}
\centering
\includegraphics[width=0.89\columnwidth]{Images/venn.png}
\caption{Cybernetic systems apply in all time scales of health from acute to chronic diseases. Here are just a few examples of components that can have both acute and chronic effects on health homeostasis. Other than genetics, most of these factors are controllable to some degree.}~\label{fig:venn}
\end{figure}
Computing work in this field has primarily focused on giving the user figures and statistics of past data. This is true for both hardware and software in personal health. Hardware such as the Fitbit, and health software like Apple's HealthKit only function to acquire and accumulate data. This does not fulfill the function of providing timely and personalized health advice in a predictive manner. Most importantly current digital health mechanisms are rudimentary in detecting context for each individual. Second, recommendation engines that are used in health applications ignore mechanisms to maintain retention and trust of the user. Users quickly get alert fatigue from poor recommendations. To sustain users, applications must give users autonomy, cater to their desires and convenience, while also informing them in an encouraging manner. Additionally, many lifestyle data parameters are gathered through manual mechanisms. For example, popular nutrition tracking applications ask users to manually enter information. This further causes a high loss in user retention, while having poor accuracy of input values \cite{Krebs2015HealthSurvey}. Users desire low data entry burden, with high functionality to help reach their goals \cite{Krebs2015HealthSurvey}.
\section{HealthButler Application}
To illustrate the delivery of P5C through HB, we describe how a T2D patient named "Bruce Uberschweet", uses the system to optimally manage his health condition in both positive and negative scenarios. This will include analysis to better control blood sugars and reduce drug dependency through improved metabolism \cite{McGarrah2016TheAction}. Bruce is looking for lunch on his commute to work on Monday and knows that HB always gives him the fastest access point to tasty and nutritious food. An intelligent recommendation engine takes into consideration his real-time personal tastes, logistical convenience, and current health needs to provide him a curated list of specific dishes that he can easily pre-order. He can also clearly see how each dish affects his diabetes so he can feel empowered to choose what is good for himself (Figure \ref{fig:screens}). After attending a wedding on Sunday, HB predicts a rising insulin resistance based on his previous lifestyle data, and gives him immediate actions to take in order to address the worsening condition. It simplifies his next steps such as booking an urgent appointment with his doctor to change his medication dose (Figure \ref{fig:screens}). These are two examples of how HB is actively engaged in predicting Bruce's health status, merging in with his daily life in an unobtrusive and useful way. Bruce can actively see how his external world and internal body are interacting through HB.
\begin{figure}
\centering
\includegraphics[width=1.01\columnwidth]{Images/Screens.png}
\caption{Left: Nutrition guidance is catered to the user's preferences, needs, and available resources. Center: Easy one-touch food and mood tracking. Right: real-time health status is shown, with direct actions to take for help.}~\label{fig:screens}
\end{figure}
\section{P5 Cybernetic System Framework}
Producing the exemplary application of HB requires the coordination of multiple modules in the P5C system. Each component is integrated into the system architecture in (Figure \ref{fig:system}) to produce the front end user interface of HB.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9999\columnwidth]{Images/Event-based_Approach_Diagram.png}
\caption{System Architecture of P5 Cybernetic Health}~\label{fig:system}
\end{figure}
\subsection{A. Personalization}
Allopathic medicine provides solutions based on averages from large clinical trials. This method improved population health dramatically in the last century, but is now hitting a bottleneck. Various diseases are on the rise despite the latest advances in biomedical science. Physicians and researchers do not have the capacity to maintain and analyze detailed records for every individual. With the recent advances in sensors, smartphones, and pervasive technology, it is becoming possible to record data to create a digital imprint of each user,resulting in the concept of quantified self \cite{swan2013quantified}. For example, mobile applications such as Google Fit, Moves \cite{moves_app}, or Fitbit, are actively recording user life data. Dey et al. started building a conceptual framework, named Context Toolkit and AWARE, for developing quantified self applications that understand context \cite{dey2001conceptual, ferreira2015aware}. Creating individual models became the next logical step in personalized systems. For example, Objective Self (OS) began to build a comprehensive human model using heterogeneous data sources for each individual \cite{jain2014objective}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\columnwidth]{Images/anecdotal_to_objective.png}
\caption{As data availability increases over time, we are able to build more accurate models of individuals. In ancient times, anecdotal oral and diary traditions documented life. With increasingly quantitative data we can build increasingly accurate models of human life.}
\label{fig:anecdotal}\textbf{}
\end{figure}
P5C builds upon these concepts to develop a more comprehensive understanding of an individual. Smartphones can recognize many events in our daily life \cite{JordanThesis:2015, oh2015intelligent, ferreira2015aware, biegel2004framework}. The timeline of these life events can be referred to as a personal chronicle (personicle) \cite{jalali2014personicle}, and in the near future it will be gathered lifelong, from the 'womb to tomb'.
P5C is primarily focused on changing the lifestyle of a person to bring about clinically relevant positive health outcomes. Patient data is segregated into four levels of increasingly personalized rules (Figure \ref{fig:4layer}). The cybernetic system will target the optimally desired health state. In the first level, we apply universal general rules to build a skeleton individual based on medical and biological expert knowledge. Second, we incorporate specific knowledge that applies to sub-categories of people, such as gender, ethnic background, and more. This layer is applied as a function to the individual at a specific time and place. Third, we take into consideration firm variables about the person, such as genetics, age, home location, socioeconomic status, etc. Fourth, we build dynamic individual models using the above three rule layers in addition to an event mining platform that ingests individual sensor data in real-time. The fourth layer captures the user's personicle based on life activities, food intake, medical and physiological parameters, emotional status, and environmental conditions to build the live user OS. In the future, additional data streams can easily be incorporated into this data mining platform.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Images/4layer.png}
\caption{We integrate increasingly personalized layers of data modeling to build an objective self.}~\label{fig:4layer}
\end{figure}
\subsubsection{ActivitySense}
Lifelogging through sensors makes it possible to record the totality of an individual's experiences \cite{gurrin2014lifelogging}. HB extracts semantic-level activities from the personalized lifelog raw data in real time. These real-time events allow intuitive analysis of a dynamic human life; life activity events are to health computation what objects in a picture are to intelligent visual computing. To build realistic systems, we can never capture everything, owing to the lack of standard data formats and the complexity of life \cite{sellen2010beyond}. Therefore, the range of our semantic activities also needs to be restricted to a set of meaningful activities rather than recognizing all possible things. Quantitative information on time use, frequency, intensity of stress, enjoyment, and other affective states is most meaningful to medical researchers \cite{kahneman2004survey}. HB currently targets 17 standard semantic-level activities, which include: socializing, relaxing, prayer, eating, exercising, home events (watching TV, preparing food, sleeping, housework), shopping, conversations, computer/e-mail/Internet usage, working, commuting, and important diabetes-related activities such as the use of the toilet and hospital visits. High blood sugar increases urination frequency in diabetic patients.
Some research groups have been working on semantic-based life event recognition. Routinely visited locations such as home, work, or school can be tracked and these can indicate pursued activities such as leisure, working, or traveling \cite{liao2007extracting}. Contextual information can be processed together to infer everyday activities on a high level such as eating, cooking, walking, or talking etc \cite{wang2012semantics}. Mobile Lifelogging tracks activity \cite{moves_app, Fitbit_app} but also detects high-level life events \cite{Life_Cycle,JordanThesis:2015}.
\subsubsection{FoodSense}
Patients suffering from obesity, diabetes, cardiac disease, and other chronic conditions continue to have difficulty following nutritional guidelines. Primary reasons include the failure to address individual differences, poor resource planning, and the high burden of manual data entry \cite{my_fitness_pal}. A quantitative diary of food intake may help regulate dietary habits, but this type of system is still not pervasive \cite{Darby2016}. Patients who measure food in conjunction with glucose and insulin see only a slight benefit \cite{Friedman2013}. An essential requirement is a personalized and objective approach that demands minimal user input.
Many companies and researchers try to encourage people to manually enter information whenever they eat. The "Accu-Chek 360" by Roche uses 7 glucose time points a day for 3 days and has been shown to be only slightly beneficial to clinicians; in most cases, moreover, only partial qualitative information was reported.
FoodSense captures and analyzes photos or purchase transactions of food to create a quantitative nutrition diary, along with emotions, in one touch. From these pictures, food is recognized using open deep-learning techniques from Google and Clarifai \cite{clarifai_food}. Nutritional parameters are analyzed for each person based on their personal health status \cite{Goyal2016, Wood2015a} to give them the most relevant guidance. Bruce receives his lunch recommendations on HB from this analysis, in conjunction with all his other data (Figure \ref{fig:screens}). For example, Bruce exercised before receiving his nutritional recommendations, so the options he was given were suited for glycogen replenishment. When he purchases an item through HB, it is automatically recorded into the system with no extra user burden. On a broader scale, this decision-making support shifts consumers toward healthier menus and persuades businesses to offer healthier food options \cite{walmart, mcdonald}.
\subsubsection{MoodSense}
MoodSense measures the user's mood at a particular time. Relating mood to life events is vital for making effective recommendations based on what the user enjoys or dislikes (described in later sections). We estimate the user's emotional state from two types of input: 1. Active/explicit input allows the user to directly mark a moment or an event on their timeline with their emotional state (Figure \ref{fig:screens}). We can correlate the emotional state with events (co-occurring and delayed), and if the correlation is significant we associate that emotional state with those events. 2. Passive/implicit input allows us to monitor the user's emotional state by tracking their various interactions with the environment and with people, including their social media content and text communication. Similar emotion detection studies show strong promise with these methods \cite{Tausczik2010,Gamon2010}.
\subsubsection{MedicalSense}
Incorporating medical data into OS is essential to make informed health decisions. For diabetic patients, we collect blood glucose values through bluetooth glucose meters, their medication compliance with bluetooth pill boxes, and their pre-existing health conditions data from the hospital electronic health records. As continuous glucose monitoring becomes technologically advanced, patients will be able to report their blood sugar values without any invasive interventions \cite{AlphabetVerilySciences}.
\subsubsection{EnviroSense}
Environmental factors continuously affect the health of every individual. By measuring the local environment of each individual, we can give insight into how they are affected by factors they are otherwise unaware of. Air and water pollution, for instance, have been shown to increase the risk of diabetes \cite{Eze2016, Chen2013, Eze2014, Brauner2014}.
Long term exposure to particulate matter in the air can activate pathophysiological responses that can induce insulin resistance \cite{Chen2013, Eze2015}.
While public data regarding the quality of the environment is readily available, it is not incorporated and tracked at the individual level.
We are using an open-source software platform called \textit{EventShop} to ingest and assimilate different data streams \cite{pongpaichet2013eventshop,tang2015habits,singh_situation_2012}.
It combines different environmental data streams, including climate data, air quality, pollen counts, and micro-blogs (like Instagram and Twitter), to understand how the environment is evolving. For each individual, the environment stream will be stored by obtaining this information from EventShop. Additionally, we use available data from open sources such as Yelp, Google Maps, and government websites to understand what resources are available to a user at any given location and time. These data sources will be used in the need-to-resource matching for daily life, such as food suggestions, activity recommendations, emergency hospital directions, and more.
With the input from the above modules we can create a behavior profile for the user, referred to as the \textit{habit waveform}.
The habit waveform represents a user's behavior averaged over a long period of time, and it is continuously affected by the person's actions and environment. We can view this as a control system in which the habit waveform represents the person's equilibrium or steady state and each event acts as an impulse applied to the system. Our goal is to modify the steady state over time toward a configuration known to represent a healthier lifestyle and to minimize the effects of the events/impulses that are detrimental to the user's health.
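One simple way to operationalize this control-system view, offered only as an illustrative sketch rather than the system's actual implementation, is to maintain the habit waveform as a slow exponential moving average of event feature vectors, so that each event acts as a small impulse on an otherwise stable steady state; the smoothing constant below is arbitrary.
\begin{verbatim}
import numpy as np

def update_habit(waveform, event_vec, alpha=0.02):
    """One impulse-response step: the habit waveform
    drifts slowly toward the feature vector of each
    new event."""
    return ((1.0 - alpha) * np.asarray(waveform)
            + alpha * np.asarray(event_vec))
\end{verbatim}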
\subsection{B. Prediction}
We use data-driven analysis and pattern-mining algorithms to find event patterns in a personicle to build individual OS models. These models estimate and predict how future events will affect the habit waveform and the variables of interest.
Event relationship operators formulate compound events and compute co-occurrences, which are then tested with a new set of data \cite{jalali2016interactive, jalali2015bringing, jalali2016human}. This framework extends traditional complex event processing \cite{buchmann2009complex} significantly by including space, multiple event streams, and both point and interval events to enable real-world data analysis.
In using the event based computational paradigm, our analysis follows very intuitively from raw data to events to situations, which can be directly related to physiological measurements (Figure \ref{fig:data_pyramid}). Thus we are extracting live raw data to find a direct relationship between events of the user and their physiological condition (situations). This allows for clinically valid interpretations that feed into the recommendation engine.
Merged event streams in a personicle \cite{jalali2014personicle} produce a stream of time-indexed events, e.g., high fat meal eaten at 3pm Monday or 40 minutes of exercise at 2pm on Saturday or high blood glucose level at 5pm Thursday. Statistical models \cite{heins2014statistical} are used to identify recurring patterns in a sequence of events. By fitting such models it is possible to identify sequences of events that may predict high likelihood of an adverse medical event. Fitting the model to an individual's event stream data is a challenge that may require weeks of observations. Bayesian hierarchical models \cite{gelman2014bayesian} can be used to leverage information from a population of users to give upfront meaningful analysis until the data from a single user is sufficient. This approach provides an intuitive data-determined degree of synergistic sharing between individual and population information. Parameters of models that fit separate individuals can be described by a population distribution where recurring patterns are shared while some remain unique to each individual \cite{heins2014statistical}.
Essentially, our system initially provides strong population data driven assistance while becoming increasingly personalized as data accumulates.
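A minimal sketch of this synergistic sharing, under the simplifying assumption of a beta-binomial model for the probability that a given event pattern is followed by an adverse event: the estimate starts at the population rate and converges to the individual's empirical rate as personal observations accumulate. The prior strength below is an illustrative tuning parameter, not a value used by our system.
\begin{verbatim}
def pooled_rate(k_user, n_user, pop_rate,
                prior_strength=50.0):
    """Posterior-mean estimate under a Beta(a, b)
    prior centered on the population rate; becomes
    personalized as n_user grows large."""
    a = pop_rate * prior_strength
    b = (1.0 - pop_rate) * prior_strength
    return (k_user + a) / (n_user + a + b)
\end{verbatim}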
We can also use these event patterns to identify the food preferences of a person. Clustering and factor analysis are used to identify eating habits using empirical methods \cite{Newby2004}, in addition to using defined diet quality scores \cite{Waijers2007}. In the example of diet, once we have identified the relevant preferences, we create the personalized food habit waveform to illustrate the degree to which a user consumes particular food groups. A similar analysis is done for activities as well, which gives us the historical habits of the person.
Life habits coupled with MoodSense data identify preferred activities. For example, a person may be stuck in traffic every day while commuting to work (a life habit), but from the emotional response we can ascertain that they do not prefer waiting in traffic. User preference prediction identifies the activities and food items that are most likely to be executed. We focus this prediction on triggering positive changes in the user's health. As we will see in the persuasive aspect of the system, food events and activities that have a positive impact on the user's health and satisfy the above constraints are the most appropriate suggestions for the user.
Aside from this, we can use the event history of the user to identify anomalies in their behavior. Some of these anomalies may represent medically significant behavior changes; for example, if MoodSense identifies sudden mood shifts, there is a high likelihood of hyper- or hypoglycemia. This activates the system to provide emergency relief services. We also use the personalized data to predict developing insulin resistance over time. An accumulation of low activity and high-fat, high-sugar foods, with associated lethargy, indicates the individual is not on track to improve their glucose sensitivity. After Bruce attended the wedding on Sunday, a combination of these factors triggers an alert and actions based on a predicted increase in insulin resistance, indicating a deteriorating diabetic condition (Figure \ref{fig:screens}).
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{Images/data_pyramid.png}
\caption{Observations are signals gathered from sensors. Events bring semantics to the raw data. Situations give a cognitive understanding of the current and past states. We use this to predict future states of the system.}
~\label{fig:data_pyramid}
\end{figure}
\subsection{C. Precision}
A lack of data on personal lifestyle in relation to biomarkers has been a struggle in the quest to provide the most precise treatments for patients \cite{Valencia2017}. President Barack Obama also launched the Precision Medicine Initiative to follow various cohorts of patients and understand what makes some treatments better than others \cite{Fradkin2016NIHResearch}. Researchers are also trying to link genetic factors to diabetes outcomes, but their research is confounded by a lack of high-fidelity lifestyle data \cite{Type2DiabetesGeneticsTypeGenetics}.
By predicting the probability of physiologic events, we can send the most appropriate control signals to the user to take corrective action to maintain their health status before it starts becoming unstable. We develop an algorithm that incorporates various factors such as the severity of the adverse event along with the likelihood of occurrence. To prevent alarm fatigue, we dispatch the control signal only after the threshold of maintaining optimal health is crossed.
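A minimal sketch of such a dispatch rule, with a hypothetical threshold: a control signal is sent only when the expected harm, the product of likelihood and severity, crosses the alert threshold, which limits alarm fatigue.
\begin{verbatim}
def should_alert(p_event, severity, threshold=0.5):
    """Dispatch a control signal only when expected
    harm (likelihood x severity, both in [0, 1])
    crosses the alert threshold."""
    return p_event * severity >= threshold
\end{verbatim}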
Precise diagnostic tools, medications, and other medical interventions are also suggested to the physician so as to reduce the waste of resources and ensure better outcomes. Most importantly, giving the most precise treatment for an individual relies on the generation of actionable interventions \cite{Valencia2017}. To produce effective changes in diabetes status, glycemic patterns need to be accurately related to a continuously monitored lifestyle \cite{Goyal2016}. This allows for a clear understanding of how different factors affect a patient's blood glucose. Early lifestyle corrections are a primary method to prevent microvascular complications of diabetes, and they include potentially non-intuitive actions such as having a moderate amount of alcohol in the diet \cite{Valencia2017} or switching to a flexitarian diet (reduced meat intake) \cite{Derbyshire2017FlexitarianLiterature}. Specific exercises improve insulin resistance more than others, especially aerobic training over resistance training \cite{Valencia2017} \cite{Marson2016EffectsMeta-analysis}. Certain patients are at higher risk of hypoglycemia from medications and thus may have a less stringent glucose target to prevent severe hypoglycemic episodes, while also needing to be better informed of the factors that may cause hypoglycemia \cite{Yun2016RiskMellitus}. Furthermore, exercising to lose weight is not necessarily the best therapy for poor glucose control \cite{Franz2015LifestyleTrials}.
Some clinical tests, such as monofilament testing for diabetic peripheral neuropathy, can also be easily performed by family or friends near the patient. P5C uses a host of verified medical data in conjunction with physicians to direct precise diagnostic, treatment, or control actions \cite{UpToDateUpToDate}. HB will suggest these actions in addition to prompting a doctor's visit when the glucose control and insulin resistance predicted from lifestyle data are outside the normal range (Figure \ref{fig:screens}). We deliver these non-intuitive signals to the user via HB so they can take action. Ultimately, by merging individual quantified models with expert knowledge, the translation of medical research to the unique case of each individual is accelerated.
\subsection{D. Persuasion}
A 2016 review of modern smartphone food tracking has not shown benefits for glucose control \cite{Porter2016TheReview}. This highlights the need for considering the user's preferences while making recommendations. The incentives also need to be aligned for the patient to take positive actions \cite{Goyal2016}. The goal of our system is to induce gradual habit changes via suggestions which cater to the user's preferences and cause incremental improvements in their long-term habits and health.
Recommendations are a cost function of preferences along with health impact. Healthy options which are diametrically opposite to the person's preferences have a high cost as the user is unlikely to act on the suggestions. Thus we align their preferences by inducing minimal changes which are good for their health (Figure \ref{fig:persuasion}). Concepts from persuasive technology help us in generating recommendations that are most suited for the individual\cite{Fritz2014, Arteaga2009}. According to Fogg's Behavior Model \cite{Fogg2009}, there are three factors which determine behavior: motivation, ability and trigger.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{Images/persuasion.png}
\caption{ 1. The upper right quadrant of Ability represents options that are both healthy and preferred for a given user. Natural triggers (such as hunger) cause an increase in motivation to look for food. 2. As the motivation increases, the user mobilizes to access resources to fulfill the motivation, generally aligned with their preferences. HB tilts the Ability to optimize for convenience, preferences and health factors in real-time. 3. As the user is presented with choices, we use a synthetic trigger to increase the probability of the optimized action.}~\label{fig:persuasion}
\end{figure}
\begin{description}
\item[Motivation] refers to the individual's willingness to follow through with the suggestion. There are three factors which can affect motivation for any activity/event:
\textbf{Preference/Instant gratification} motivates the events which the user enjoys and would always prefer to perform (if the other factors permit it). These recommendations take into account the user's preferences, which increases the probability of follow through for the given suggestion. This is calculated based on past events similar to the recommendations with a positive emotional response.
\textbf{Goals/Fear} motivates the events which the user doesn't necessarily like but which are important for achieving a long-term goal or avoiding an outcome. Waiting in traffic while commuting to work is an example of this type of motivation. These events can be determined from frequent patterns in event history and include events which are repeated regardless of emotional response.
\textbf{Social/External pressure} motivates the events where external pressure from other people may be a factor. This factor controls significant portions of social behavior and hence may influence the user to pursue activities which they usually would not pursue. We use social media connections along with social media activity to understand which events are influenced by other people or social groups.
\item[Ability] represents the accessibility in performing the suggested behavior. This factor is an interplay of the individual's surroundings and their intrinsic capability. When ranking the recommendations, we will integrate the information about environment from Eventshop (as specified in EnviroSense) and the event history of the person. This will let us know whether the person's ability and the environmental constraints required for the event are satisfied or not, and matches their needs to the available resources. For example, we would only suggest driving to get a healthy lunch if Bruce had access to a car (intrinsic capability), and the traffic conditions (surrounding resources) allowed him to travel in time for his next meeting.
\item[Triggers] are reminders or suggestions personalized to the individual's needs to accomplish the task \cite{Fogg2009}. Two broad categories of triggers exist: 1. Natural triggers arise from physiological events occurring in the body, such as hunger. 2. Synthetic triggers are provided by the system and aimed at facilitating the occurrence of an event. These can be in the form of a notification on a smartphone or an intervention from a friend or relative. Synthetic triggers are synergistic when coupled with relevant natural triggers, for example a notification about healthy food options when the person is hungry. In our system, synthetic triggers in the form of parsed dish menus are used to enhance the person's ability by recommending items which are similar to the person's preferred activities but have a positive impact on the user's health (in our case, food items similar to what the user likes but comparatively healthier). This helps us match the user's needs to the available resources and generate recommendations which meet the criteria of the behavior model. Bruce's lunch menu on HB follows from this analysis (Figure \ref{fig:screens}).
\end{description}
The variance in recommendations is calibrated based on the different types of events in the user's history. HB caters to the range of events that lie in the vicinity of the user's event history, which increases the effectiveness of the recommendations.
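The following sketch illustrates one way such a ranking could be scored, in the spirit of Fogg's behavior model described above; the field names, weights, and the multiplicative form are hypothetical and are not the deployed recommendation engine.
\begin{verbatim}
def rank_candidates(candidates):
    """Rank actions by motivation x ability, tilted
    toward positive predicted health impact; all
    scores are assumed to lie in [0, 1]."""
    def score(c):
        motivation = (c["preference"] + c["goal_fit"]
                      + c["social"]) / 3.0
        return (motivation * c["ability"]
                * (1.0 + c["health_impact"]))
    return sorted(candidates, key=score, reverse=True)

# e.g. rank_candidates([{"preference": 0.8,
#     "goal_fit": 0.5, "social": 0.2,
#     "ability": 0.9, "health_impact": 0.4}, ...])
\end{verbatim}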
\subsection{E. Prevention}
By preventing negative outcomes, we accomplish several tasks. Users receive direct feedback showing that the actions they take are benefiting them. This sustains user motivation and encourages further ownership of their own health. Prevention also depends on informing the individual of the risks attached to their choices. In the USA, calories are printed on the menus of large franchise restaurants by law, giving consumers direct basic information about what they are consuming. Similarly, the practice of printing cancer warnings on cigarette packages or alcohol warnings for pregnant women is designed to inform the consumer of their choice. HB and any other system derived from the P5C concept focus on informing the user in a personalized fashion.
Preventing problems and managing health information benefits quality of life. Morbidity attached to disease management is a large factor in reduced quality of life. Giving patients the ability to prevent the progression of their disease has been shown to improve quality of life \cite{uczynski2016EmpowermentObesity}. As a bonus, corrective actions through lifestyle factors positively affect multiple comorbidities at the same time. These interventions reduce diabetes and dyslipidemia and prevent cardiovascular disease \cite{Khavandi2017,Moon2017PreventionMellitus}.
Prevention relies on early detection through continuous monitoring. Most importantly, earlier diagnosis makes treatment through lifestyle interventions much more effective. Continuous monitoring also allows the medical team to modify treatment more appropriately through tighter coordination. Ultimately, the prevention of deteriorating health conditions keeps the individual in the steady state of optimal health. This is the original goal of cybernetic systems, and it aligns perfectly with the goals of an optimal health system.
\section{Current Status}
At the time of writing, several modules of this project are fully working, some are under active construction, and a couple will begin construction soon (Figure \ref{fig:status}). The resulting closed-loop system will ultimately be deployed in the hospital setting. The construction status of the system modules, in integration order, is as follows (a schematic sketch of one pass through the loop is given after the list).
\begin{enumerate}
\item Realtime data: Heterogeneous data sources from real-world events, such as social sources (e.g. Twitter, Facebook and Flickr), environmental sources (e.g. flood, hurricane, asthma, flu, population, pollution, and weather), cameras, and traffic \cite{gao2012eventshop, pongpaichet2013eventshop}.
\item Eventshop: Providing operators for data stream ingestion, visualization, integration, situation characterization, and sending out alerts \cite{singh2016situation}.
\item Resource aggregation: Situation recognition obtaining actionable insights from observed spatio-temporal data \cite{singh2016situation, gao2012eventshop, pongpaichet2013eventshop, tang2015geospatial, tang2016integration}.
\item Realtime data: Heterogeneous data sources from human-related sensor data by smartphone and wearable sensors (e.g. activity, step, GPS, venue, call, calendar, wifi connection, smartphone application, photo, ambient light, ambient sound etc.) \cite{jalali2014personicle,oh2015intelligent,JordanThesis:2015}.
\item Personicle: Identifying semantic-level life events using heterogeneous data sources and creating a chronicle of life events \cite{jalali2014personicle,JordanThesis:2015}.
\item Quantified model: Comprehensive human model using objective quality data from heterogeneous data sources for individuals \cite{jain2014objective}.
\item Evolving situation: Human behavior analysis with causal modeling across multimedia data streams \cite{jalali2015bringing, jalali2016interactive, jalali2016human, jalali2016framework}.
\item Matching: Merging environmental situation and personal situation \cite{tang2015habits, tang2016research}.
\item Recommendation: We are actively engaged in developing a recommendation engine.
\item Reporting: We will begin integrating this module soon.
\item Expert Knowledge: We will begin integrating this module soon.
\end{enumerate}
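To make the intended data flow concrete, the following is a minimal Python sketch of one pass through the loop described above. It is illustrative only: every function name and interface here is a hypothetical placeholder standing in for the cited modules, not their actual APIs.
\begin{verbatim}
# Minimal sketch of one pass through the P5C closed loop.
# All functions are hypothetical placeholders, not real module APIs.

def eventshop_situation(location):
    # Stand-in for Eventshop: environmental situation at a location.
    return {"air_quality": "good", "traffic": "light"}

def personicle_events(user_id):
    # Stand-in for Personicle: recent semantic life events.
    return [{"event": "lunch", "venue": "fast_food"}]

def match_situations(environment, events):
    # Stand-in for the matching module: merge environmental
    # and personal situations into one actionable context.
    return {"need": "healthier lunch", "context": environment}

def recommend(matched):
    # Stand-in for the recommendation engine: rank candidate
    # actions by motivation, ability, and trigger fit.
    return ["salad place 0.4 mi away (light traffic)"]

def run_loop_once(user_id, location):
    environment = eventshop_situation(location)
    events = personicle_events(user_id)
    matched = match_situations(environment, events)
    for suggestion in recommend(matched):
        print(suggestion)  # delivered as a synthetic trigger

run_loop_once("bruce", "irvine")
\end{verbatim}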
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{Images/status.png}
\caption{A visualization of our progress to build P5C.}
~\label{fig:status}
\end{figure}
\section{Conclusions and Future Challenges}
Cybernetic principles lay the foundation for building health systems that are responsive in keeping individuals at optimal health over the course of a lifetime. Our conceptual foundation of P5C is constructed from the most fundamental principles of control theory while incorporating the ability to seamlessly integrate both present and future technological advancements. This is an absolute necessity for transforming the archaic practices of current-day health care, especially in light of how many disease complications are preventable. The most important concept we describe is how these parts integrate to form a true closed-loop system. Such closed-loop systems have been refined in mechanical systems over decades and in biological systems over millions of years.
This also ties the whole span of our health ecosystem together to work in synergy. Using these principles to bridge the virtual and real worlds is absolutely necessary to move human health forward.
Each individual block of P5C (personalized, predictive, precision, persuasion, prevention) in HB is under active development, to be completed for beta testing in patients by March 2017. Real health progress relies on an interdisciplinary effort between hospital clinicians, engineers, computer scientists, and bioscience researchers. Our work focuses on translating interdisciplinary academic progress into real systems that patients will benefit from. HB is just one incarnation of P5C: a hospital-focused application that we are launching with the UCI Health Diabetes Center, and a cornerstone of the interdisciplinary UCI Institute for Future Health. There will be various technical challenges during large-scale deployment of any P5C system; consideration of how countries differ in habit patterns, regulations, medical systems, sensor and network connectivity, and environments is essential.
Tackling the reliability of sensor data is essential for systems like this to function properly. Integrating natural language processing for the medical literature will also improve the ability to quickly disseminate actionable information to the masses. Security and privacy are of the utmost concern in all circumstances, for patients and providers alike. These challenges are continually being addressed as P5C systems grow.
\section{Acknowledgments}
We thank Jonathan Lam for his design and user interface contributions. This research is partially funded by the National Institutes of Health (NIH, United States of America) as part of the Medical Scientist Training Program (MSTP) and the Cardiovascular Applied Research and Entrepreneurship (CARE) grant under \#T32GM008620-15. Additional funding is provided by the UC Irvine Donald Bren School of Information and Computer Science.
\balance{}
\bibliographystyle{SIGCHI-Reference-Format}
|
1,108,101,563,910 | arxiv | \section{Introduction}
\label{sec:introduction}
This paper has two parts. First, we characterize some equivalent statements of aperiodicity of an element of a locally compact group. Second, we use these equivalent statements to establish the existence of hypercyclic weighted translations. In some cases we can in fact find an explicit form of a hypercyclic weighted translation.
In the field of linear chaos, one studies linear operators acting on a Banach space and their dynamical properties, such as hypercyclicity and chaoticity. Our discussion builds on \cite{Hypercyclic_on_groups}, see also \cite{Chaotic_on_groups, non-torsion}, which characterizes the chaoticity and hypercyclicity of a weighted translation operator on the $L^p$ space of a locally compact group.
An operator $T$ on a Banach space $X$ is called {\it hypercyclic} if there exists a vector $x\in X$ whose orbit is dense in the whole space (i.e. $orb(T,x):=\left\{T^nx|n\in \mathbb{N}\right\}$ is dense in $X$). An operator $T$ is called {\it weakly mixing} if $T\oplus T$, defined on $X\times X$, is hypercyclic. An operator $T$ is called {\it mixing} if for any nonempty open sets $U,V$ in $X$, there exists $N\in \mathbb{N}$ such that $T^nU\cap V\neq \varnothing$ for all $n>N$. An operator $T$ is called {\it chaotic} if it is hypercyclic and its set of periodic points is dense. An operator $T$ is called {\it frequently hypercyclic} if there is some $x \in X$ such that for any nonempty open subset $U$ of $X$ we have $n_k= O(k)$, where $(n_k)$ is the strictly increasing sequence of integers such that $T^{n_k}x$ is the $k$-th element of the orbit lying in $U$ (by \cite[Proposition 9.3. p.237]{Linear_chaos}, this is an equivalent statement of \cite[Definition 9.2. p.237]{Linear_chaos}). (Note that we only consider operators which are weighted translations in this paper.)
\begin{theorem*}
(Frequent Hypercyclicity Criterion, \cite[Theorem 9.9 and Proposition 9.11.]{Linear_chaos}). Let $T$ be an operator on a separable Fr\'echet space $X$. If there is a dense subset $X_0$ of $X$ and a map $S : X_0 \to X_0$ such that, for any $x\in X_0$,
\begin{enumerate}
\item
$\sum\limits ^{\infty}_{n=0}T^nx$ converges unconditionally,
\item
$\sum\limits ^{\infty}_{n=0}S^nx$ converges unconditionally,
\item
$TSx = x$,
\end{enumerate}
then $T$ is frequently hypercyclic. Moreover, $T$ is also chaotic and mixing. In particular, it is also weakly mixing and hypercyclic.
\end{theorem*}
The diagram below summarizes the relations between the dynamical properties discussed above; see \cite{Linear_chaos}:
\vskip1em
\hskip-2em
\begin{tikzpicture}
\node (conI) {Frequent Hypercyclicity};
\node (conII) [below = of conI] {Chaos};
\node (conIII) [Mylong, left = of conII] {Frequent\break Hypercyclicity Criterion};
\node (conIV) [right = of conII] {Weakly Mixing};
\node (conV) [below = of conII] {Mixing};
\node (conVI) [right = of conIV] {Hypercyclicity};
\draw [myarr] (conIII) -- (conII);
\draw [myarr] (conII) -- (conIV);
\draw [myarr] (conI) -- (conIV);
\draw [myarr] (conIII) -- (conI);
\draw [myarr] (conIII) -- (conV);
\draw [myarr] (conV) -- (conIV);
\draw [myarr] (conIV) -- (conVI);
\end{tikzpicture}
\vskip1em
The above describes the general picture. In this article, we focus exclusively on weighted translation operators.
Let $G$ be a locally compact group and $a$ be an element of $G$. The weighted translation operator $T_{a,w}$ is a bounded linear self-map on the Banach space $L^p(G)$ (by using the right Haar measure on $G$), for some $p \in [1, \infty)$, defined by
\[T_{a,w}(f)(x):=w(x)f(xa^{-1}),\]
where the weight $w$ is a bounded continuous function from $G$ to $(0,\infty )$. We denote $T_{a,1}$ by $T_a$, so that $T_a w$ is the function $w$ translated by $a$, while $T_{a,w}$ is a weighted translation operator.
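To fix ideas, here is the simplest instance of this definition, recorded for orientation: for $G=\mathbb{Z}$ (whose right Haar measure is the counting measure) and $a=1$, the operator becomes
\[
T_{1,w}f(k)=w(k)f(k-1), \qquad k\in\mathbb{Z},
\]
a bilateral weighted shift on $\ell^p(\mathbb{Z})$; up to the relabeling $k\mapsto -k$, this is the weighted backward shift mentioned below.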
To analyze $T_{a,w}$, we classify the elements of $G$ by their topological properties. We call an element $a$ of $G$ {\it torsion} if it has finite order. An element $a$ is {\it periodic} if the closed subgroup $G(a)$ generated by $a$ (i.e. $G(a)=\overline{<a>}$) is compact in $G$. An element $a$ is {\it aperiodic} if it is not periodic.
Note that we do not assume that $G$ is Hausdorff in this paper, unless we explicitly say that $G$ is a Hausdorff group.
\begin{lemma*} $ $
\cite[Lemma 2.1., C. CHEN AND C-H. CHU]{Hypercyclic_on_groups} An element $a$ in a second countable group $G$ is aperiodic if, and only
if, for each compact subset $K\subseteq G$, there exists $N \in\mathbb{N}$ such that $K\cap Ka^n=\varnothing$ for
$n > N$.
\end{lemma*}
In \cite[Lemma 2.1]{Hypercyclic_on_groups}, an equivalent statement of aperiodicity is given when $G$ is a second countable locally compact Hausdorff group. We give another equivalent statement when $G$ is second countable (Theorem \ref{equaperterminal}), in terms of what we call a {\it terminal pair} (Definition \ref{defterminal}).
\begin{lemma*} $ $
\cite[Lemma 1.1., C. CHEN AND C-H. CHU]{Hypercyclic_on_groups} Let $G$ be a locally compact group and let $a\in G$ be a torsion element.
Then any weighted translation $T_{a,w} : L^p(G) \to L^p(G)$ is not hypercyclic, for
$1 \le p <\infty$.
\end{lemma*}
On the other hand, \cite[Lemma 1.1]{Hypercyclic_on_groups} gives the non-existence of hypercyclic weighted translations when the element $a$ is torsion. So the question of the existence of hypercyclic weighted translations reduces to the case in which $a$ is non-torsion, that is, $a$ is either non-torsion periodic or aperiodic.
All examples of hypercyclic weighted translation operators in the previously known literature are associated with aperiodic $a$.
One of the most important concrete examples is the weighted backward shift operator on ${\ell}^p(\mathbb{Z})$ \cite[Example 4.15. p.102]{Linear_chaos}, and there are also classical analogous examples for semigroups \cite{semigroups}; in fact, these correspond to our setting, for several admissible weights, via conjugation.
We unify them in our Main Theorem \ref{main}.
\begin{theorem*}[Main Theorem]
Let $G$ be a second countable locally compact group and $a$ an aperiodic element of $G$. Then there exists a weighted translation operator $T_{a,w}$ which is mixing, chaotic and frequently hypercyclic on $L^p(G)$ for all $p\in [1,\infty)$, simultaneously.
\end{theorem*}
\section{Equivalent statements of aperiodicity}
\label{sec:equivalent_statements_of_aperiodicity}
\begin{proposition}
\label{homoaper}
Let $G,G'$ be locally compact groups and $\phi :G \to G'$ a continuous homomorphism. If $\phi (a)$ is an aperiodic element of $G'$, then $a$ is an aperiodic element of $G$. In other words, continuous homomorphisms pull back aperiodicity.
\end{proposition}
\begin{proof}
If $a$ is periodic, then $\overline{<a>}$ is a compact group and so is $\phi(\overline{<a>})$; but $\phi (a)\in \phi(\overline{<a>})$, a contradiction, since a compact group contains no aperiodic elements.
\end{proof}
Let $G$ be a topological group. Define the {\it Hausdorffication} of $G$ via the natural continuous quotient map $\pi :G\to \widetilde{G}$, where $\widetilde{G}:=G/\overline{\{e\}}$ and $e$ denotes the identity element of $G$. (This is related to the ``Hausdorffication'' defined as a left adjoint of the forgetful functor in general topology.) The reason we consider the Hausdorffication is that many statements in this paper do not assume that $G$ is Hausdorff; in the proofs, however, we first treat the Hausdorff case and then handle the general case by passing to the Hausdorffication, showing that it preserves or pulls back the conclusions we want. We therefore record several facts below:
\begin{enumerate}
\item
Each open or closed subset of $G$ is a union of the cosets of $\overline{\{e\}}$.
Since $\overline{S}x=\overline{Sx}$ for any $S\subseteq G$ and $x\in G$, choosing $S=\{e\}$ and $x\in \overline{\{e\}}$ gives $\overline{\{x\}}=\overline{\{e\}x}=\overline{\{e\}}x=\overline{\{e\}}$ (the last equality follows since $x\in \overline{\{e\}}$ and $\overline{\{e\}}$ is a subgroup of $G$). This implies that $\overline{\{e\}}$ carries the indiscrete topology, and so do all cosets of $\overline{\{e\}}$, which yields the statement.
\item
There is a one-to-one correspondence between the open (closed) subsets of $G$ and those of $\widetilde{G}$. In particular, $G$ is first (second) countable if and only if $\widetilde{G}$ is, and $\widetilde{G}$ is Hausdorff.
This correspondence is given by $U \mapsto \pi (U)$ and $\widetilde{U}\mapsto \pi^{-1}(\widetilde{U})$ for $U$ open in $G$ and $\widetilde{U}$ open in $\widetilde{G}$. To verify that these two maps compose to the identity on both sides, it suffices to show $\pi^{-1}\pi(U)\subseteq U$ for any open $U$ in $G$, the other inclusions being relatively obvious. Suppose there exists $x\in \pi^{-1}\pi(U)\setminus U$. Then $\pi(x)\in \pi(U)$, so there is some $y\in U$ with $\pi(x)=\pi(y)$; hence $\pi(xy^{-1})=\pi(e)$, so $xy^{-1}\in \ke \pi =\overline{\{e\}}$ and $x\in \overline{\{e\}}y$. But by (1), $\overline{\{e\}}y\subseteq U$ since $y\in U$, a contradiction, as this implies $x\in U \cap \bigl(\pi^{-1}\pi(U)\setminus U\bigr)=\varnothing$.
\item
$\pi$ is an open, closed and proper mapping. In particular, $G$ is locally compact iff $\widetilde{G}$ is. Moreover, the one-to-one correspondence in (2) restricts not only to closed sets but also to closed compact sets (since $\pi$ is proper).
The openness and closedness follow from (1) and (2) immediately. For the properness, let $\widetilde{K}$ be compact in $\widetilde{G}$; we check that $\pi^{-1}(\widetilde{K})$ is compact as well. Let $\{U_{\alpha}\}$ be an open cover of $\pi^{-1}(\widetilde{K})$; by the correspondence, $\{\pi(U_{\alpha})\}$ is an open cover of $\widetilde{K}$. Extract a finite subcover $\{\pi(U_{i})\}$; then one checks that $\{U_{i}\}$ is a finite subcover of $\pi^{-1}(\widetilde{K})$.
\item
Let $Y$ be an arbitrary Hausdorff topological space; then $\Hom(G,Y)\cong \Hom(\widetilde{G},Y)$. (That is, there is a natural one-to-one correspondence between the continuous functions from $G$ to $Y$ and those from $\widetilde{G}$ to $Y$.)
Given $w\in \Hom(G,Y)$, define $\widetilde{w}(\widetilde{x}):=w(x)$, where $\widetilde{x}=\overline{\{e\}}x$; then $\widetilde{w}$ is a well-defined continuous function on $\widetilde{G}$. Conversely, given $\widetilde{w}\in \Hom(\widetilde{G},Y)$, we get $w:=\widetilde{w} \circ \pi$, which is a well-defined continuous function on $G$. It is easy to check that these two mappings between $\Hom(G,Y)$ and $\Hom(\widetilde{G},Y)$ are inverses of each other. (Equivalently, every continuous function from $G$ to $Y$ factors through $\widetilde{G}$; in other words, this is the universal property of the Hausdorffication.)
\item
$a$ is aperiodic in $G$ iff $\pi(a)$ is aperiodic in $\widetilde{G}$.
If $\pi(a)$ is periodic, then $\overline{<\pi (a)>}$ is compact and so is $\pi^{-1}(\overline{<\pi (a)>})$, since $\pi $ is proper; but $a\in \pi^{-1}(\overline{<\pi (a)>})$, a contradiction. The other direction follows from Proposition \ref{homoaper} immediately.
\end{enumerate}
The idea of the proofs of Proposition \ref{runawayaper}, Lemmas \ref{secondcompact} and \ref{discrete} and Proposition \ref{aperrunaway} follows from \cite[Lemma 2.1]{Hypercyclic_on_groups}.
\begin{proposition}
\label{runawayaper}
Let $G$ be a locally compact group, and suppose $a\in G$ has the following property: for any compact subset $K$ of $G$, there exists $N\in \mathbb{N}$ such that $K\cap Ka^n = \varnothing$ for $n>N$. Then $a$ is an aperiodic element of $G$.
\end{proposition}
\begin{proof}
Suppose $a$ is a periodic element, so the closed subgroup $G(a)$ generated by $a$ is compact. Now set $K=G(a)$; then $K\cap Ka^n =G(a) \neq \varnothing$ for all $n\in \mathbb{Z}$, contradicting the stated property.
\end{proof}
\begin{lemma}
\label{secondcompact}
Let $G$ be a first countable locally compact Hausdorff group with $a\in G$ an aperiodic element. Then $G(a)$ is a second countable, compactly generated abelian group.
\end{lemma}
\begin{proof}
Since $a$ is an aperiodic element, $G(a)$ is a non-compact closed abelian subgroup of $G$ (the commutativity follows by a net argument, since $G(a)$ is Hausdorff). Moreover, by \cite[Theorem 5.14]{E_Hewitt}, there exists a compactly generated subgroup $G'$ of $G$ containing $G(a)$, since $\left\{e,a\right\}$ is a compact subset of $G$.
First, we check that it is second countable. $G(a)$ is a first countable locally compact Hausdorff group, hence metrizable by the Birkhoff--Kakutani metrization theorem \cite{Birkhoff}. On the other hand, since the set $\left\{a^j\right\}_{j\in \mathbb{Z}}$ is dense in $G(a)$, $G(a)$ is a separable metrizable space, hence second countable.
Finally, we check that it is compactly generated. An important caveat is that not every subgroup of a compactly generated group is compactly generated (not even every closed subgroup), but in our case it works. Let $G'$ be generated by the compact set $K_0$, where $K_0=\overline{V_0}$ for some open $V_0$ containing $\left\{e,a\right\}$ (the existence follows from the proof of \cite[Theorem 5.14]{E_Hewitt}). We claim that $G(a)$ is generated by the compact set $K_1$, where $K_1=\overline{V_1}$ and $V_1=V_0\cap G(a)$. Since the set $\left\{a^j\right\}_{j\in \mathbb{Z}}$ is dense in $G(a)$, the family $\left\{V_1a^j\right\}_{j\in \mathbb{Z}}$ covers $G(a)$ (we prove this in the Remark below). So $G(a)\subseteq \cup_{j\in \mathbb{Z}}V_1a^j\subseteq \cup_{j\in \mathbb{Z}}K_1a^j\subseteq G(a)$ (the last inclusion follows from $K_1\subseteq G(a)$), which means that any $x\in G(a)$ can be written in the form $ka^j$ for some $k\in K_1$ and some $j\in \mathbb{Z}$; that is, $G(a)$ is compactly generated, since $a$ also lies in $K_1$.
\end{proof}
\begin{remark}
To prove that $\left\{V_1a^j\right\}_{j\in \mathbb{Z}}$ covers $G(a)$, we need to show that every $x\in G(a)$ which is a limit of a subsequence $\left\{a^{n_k}\right\}$ is covered by some $V_1a^{n_k}$ (here we view $V_1$ as a relatively open neighborhood of $e$ in $G(a)$). Choose a symmetric open neighborhood $V_2$ of $e$ contained in $V_1$. Then $a^{n_k}\in V_2x$ for $k$ large enough, hence $x\in V_2^{-1}a^{n_k}=V_2a^{n_k}\subseteq V_1a^{n_k}$, and we are done.
\end{remark}
\begin{lemma}
\label{discrete}
Let $G$ be a first countable locally compact Hausdorff group with $a\in G$ an aperiodic element. Then $G(a)$ is topologically isomorphic to $\mathbb{Z}$.
\end{lemma}
\begin{proof}
By the previous lemma and \cite[Theorem 9.8]{E_Hewitt}, $G(a)\cong \mathbb{R}^n\times \mathbb{Z}^m\times \mathbb{F}$ for some $n,m\in \mathbb{N}$, where $\mathbb{F}$ is a compact group; moreover, $a$ is identified with an element of $\mathbb{R}^n\times \mathbb{Z}^m\times \mathbb{F}\setminus \left(\{0\}\times \{0\}\times \mathbb{F}\right)$, and the cyclic subgroup $<a>$ generated by $a$ has no accumulation points. This means that $G(a)$ is actually a discrete group, hence $G(a)=<a>$, which is isomorphic to $\mathbb{Z}$.
\end{proof}
\begin{proposition}
\label{aperrunaway}
Let $G$ be a first countable locally compact group with $a\in G$ an aperiodic element. Then for any compact subset $K$ of $G$, there exists $N\in \mathbb{N}$ such that $K\cap Ka^n = \varnothing$ for $n>N$.
\end{proposition}
\begin{proof}
We first consider the case that $G$ is Hausdorff.
Suppose there exists a compact set $K$ such that $K\cap Ka^n\neq \varnothing$ for infinitely many $n$'s. Then for those $n$'s, $a^n\in K^{-1}K$, so the compact set $K^{-1}K$ contains a convergent subsequence of $(a^n)$. This contradicts the previous lemma, since no subsequence of $(a^n)$ converges in $G(a)\cong\mathbb{Z}$; so the Hausdorff case is verified.
Now consider the general case. Let $G$ be a first countable locally compact group and $\pi :G\to \widetilde{G}$ its Hausdorffication. For any compact set $K$ in $G$, $\pi(K)$ is compact in $\widetilde{G}$, so there exists $N$ such that
\[\pi(K)\cap \pi(K)\pi(a)^n=\varnothing \text{ for } n>N\]
then
\begin{align*}
K\cap Ka^n &\subseteq (\pi ^{-1}\circ \pi)(K)\cap (\pi ^{-1}\circ \pi)(K)a^n\\
&= \pi ^{-1}(\pi(K))\cap \pi ^{-1}(\pi(K)\pi(a)^n)\\
&= \pi ^{-1}(\pi(K)\cap \pi(K)\pi(a)^n)\\
&= \pi ^{-1}(\varnothing )\\
&=\varnothing \text{ for } n>N.
\end{align*}
\end{proof}
\begin{theorem}
Let $G$ be a first countable locally compact group, then the following are equivalent:
\begin{enumerate}
\item
$a\in G$ is an aperiodic element.
\item
For any compact subset $K$ of $G$, there exists $N\in \mathbb{N}$ such that $K\cap Ka^n = \varnothing$ for $n>N$.
\end{enumerate}
\end{theorem}
\begin{definition}
\label{defterminal}
Let $G$ be a locally compact group and $a\in G$. We say $G$ has a terminal pair $(A,B)$ w.r.t. $a$ if there exists a pair of disjoint closed subsets $(A,B)$ of $G$ such that for any given compact subset $K$ in $G$, we have
\[Ka^{n}\subseteq A,\]
\[Ka^{-n}\subseteq B\]
for $n$ large enough.
\end{definition}
More intuitively, $a$ shifts any compact subset positively (resp. negatively) into $A$ (resp. $B$).
\begin{example}
One of the simplest cases is $G=\mathbb{Z}\text{ or }\mathbb{R}$ and $a=1$, the terminal pair w.r.t. $1$ can be given by $(A,B)=([100,\infty),(-\infty,-100])$.
\end{example}
\begin{example}
\label{terexam}
Let $G$ be a general linear group $GL(n,\mathbb{C})$, $a\in G$ with some eigenvalue $\lambda $ such that $|\lambda |\neq 1$, then $G$ admits a terminal pair w.r.t. $a$.
\end{example}
\begin{proof}
Without loss of generality, we may assume that $a$ is itself in Jordan form, by applying a conjugation automorphism of $G$:
\[a= \begin{bmatrix}
\lambda & * & \cdots & 0 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & \lambda '& * \\
0 & \cdots & 0 & \lambda ''
\end{bmatrix}\]
Notation: for any $x\in G$, we write $x=[x_1|x_2|...|x_n]$, where $x_i\in \mathbb{C}^n$ is the $i$-th column of $x$. (Note that no $x_i$ is ever the zero vector, since $x$ is invertible.)
Consider the map $f:G\to \mathbb{R}$, $f(x)=\ln\|x_1\|$. A direct calculation, given below, shows that $f(xa^n)=f(x)+n\ln|\lambda|$.
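In detail, the calculation is the following elementary verification (added for completeness): since $a$ is upper triangular with $(1,1)$ entry $\lambda$, the first column of $a^n$ is $\lambda^n e_1$ for every $n\in\mathbb{Z}$, where $e_1$ denotes the first standard basis vector, hence
\begin{align*}
(xa^n)_1 &= x\,(a^n)_1=\lambda^n x_1,\\
f(xa^n) &= \ln\|\lambda^n x_1\| = n\ln|\lambda| + f(x).
\end{align*}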
Set $(A,B)=(f^{-1}([1,\infty )),f^{-1}((-\infty,-1]))$, so for any compact subset $K$ in $G$,
\[\inf_{x\in K} f(xa^n)=\Bigl(\inf_{x\in K} f(x)\Bigr)+n\ln|\lambda|,\]
\[\sup_{x\in K} f(xa^n)=\Bigl(\sup_{x\in K} f(x)\Bigr)+n\ln|\lambda|,\]
which implies that $(A,B)$ is a terminal pair w.r.t. $a$ when $|\lambda|>1$ (and $(B,A)$ is one when $|\lambda|<1$).
\end{proof}
\begin{proposition}
\label{teraper}
Let $G$ be a locally compact group admitting a terminal pair w.r.t. $a$. Then $a$ is an aperiodic element.
\end{proposition}
\begin{proof}
Suppose $a$ is a periodic element but $G$ admits a terminal pair $(A,B)$ w.r.t. $a$. Then $G(a)a^n=G(a)$ for all $n\in\mathbb{Z}$, so taking $K=G(a)$ in the definition forces $G(a)\subseteq A\cap B$, contradicting the disjointness of $A$ and $B$.
\end{proof}
\begin{proposition}
\label{homoterminal}
Let $G,G'$ be locally compact groups and $\phi :G \to G'$ a continuous homomorphism. If $G'$ admits a terminal pair w.r.t. $\phi (a)$, then $G$ admits a terminal pair w.r.t. $a$. In other words, continuous homomorphisms pull back terminal pairs.
\end{proposition}
\begin{proof}
Let $(A_{G'},B_{G'})$ be a terminal pair w.r.t. $\phi (a)$. Equivalently, for any given compact subset $K'$ in $G'$, we have
\[K'\phi (a)^{n}\subseteq A_{G'},\]
\[K'\phi (a)^{-n}\subseteq B_{G'}\]
for $n$ large enough.
Set $(A_G,B_G)=(\phi ^{-1}(A_{G'}),\phi ^{-1}(B_{G'}))$. By continuity of $\phi$, $(A_G,B_G)$ is also a pair of disjoint closed sets in $G$. Now, for any given compact subset $K$ in $G$, $\phi (K)$ is also compact, so we have
\vskip 1.1em
\[Ka^{n}\subseteq \phi ^{-1}(\phi (Ka^{n}))=\phi ^{-1}(\phi (K)\phi (a)^{n})\subseteq \phi ^{-1}(A_{G'})=A_G,\]
\[Ka^{-n}\subseteq \phi ^{-1}(\phi (Ka^{-n}))=\phi ^{-1}(\phi (K)\phi (a)^{-n})\subseteq \phi ^{-1}(B_{G'})=B_G\]
for $n$ large enough.
\end{proof}
\begin{example}
Let $G=S^1\times\mathbb{R}$ and $a=(0,1)\in S^1\times\mathbb{R}$, where $S^1$ denotes the circle group. The natural quotient map $\phi :G \to \mathbb{R}$ sends $a$ to $1$, so a terminal pair w.r.t. $a$ is given by $(A,B)=(\phi^{-1}([100,\infty)),\phi^{-1}((-\infty,-100]))$.
\end{example}
\begin{proposition}
\label{terminalHausdorffication}
Let $G$ be a locally compact group, $\pi :G\to \widetilde{G}$ the Hausdorffication of $G$, and $A,B$ closed subsets of $G$. Then $(A,B)$ is a terminal pair w.r.t. $a$ iff $(\widetilde{A},\widetilde{B})=(\pi(A),\pi(B))$ is a terminal pair w.r.t. $\pi(a)$.
\end{proposition}
\begin{proof}
Suppose $(A,B)$ is a terminal pair w.r.t. $a$. Then $\widetilde{A}$ and $\widetilde{B}$ are disjoint and closed: closedness follows immediately from the closedness of $\pi$, and disjointness from the fact that $A$ and $B$ are unions of cosets of $\overline{\{e\}}$. That $(\widetilde{A},\widetilde{B})$ is a terminal pair w.r.t. $\pi(a)$ follows from the one-to-one correspondence between the closed compact subsets of $G$ and those of $\widetilde{G}$ (i.e. $\widetilde{K}=\pi(\pi^{-1}(\widetilde{K}))$). The other direction follows from Proposition \ref{homoterminal}.
\end{proof}
Recall that any second countable locally compact Hausdorff group is compatible with a proper ``right'' invariant metric $d$ \cite{plig}. In this metric space the Heine--Borel property holds. Therefore, $B_R(x):=\{y\in G|d(x,y)<R\}$ is a precompact open ball, and by right invariance $B_R(xa)=B_R(x)a$.
Note that \cite{plig} provides a proper ``left'' invariant metric, say $d_L$; it is easy to induce a proper ``right'' invariant metric $d$ by setting $d(x,y):=d_L(x^{-1},y^{-1})$.
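For the reader's convenience, here is the verification of right invariance (properness and the metric axioms transfer along the homeomorphism $x\mapsto x^{-1}$): by the left invariance of $d_L$,
\[
d(xg,yg)=d_L\bigl((xg)^{-1},(yg)^{-1}\bigr)=d_L\bigl(g^{-1}x^{-1},g^{-1}y^{-1}\bigr)=d_L(x^{-1},y^{-1})=d(x,y).
\]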
\begin{theorem}
\label{gotoinfty}
Let $G$ be a second countable locally compact Hausdorff group, then the following are equivalent:
\begin{enumerate}
\item
$a\in G$ is an aperiodic element.
\item
For any compact subsets $K$ and $K'$ in $G$, $d(Ka^{\ell},K')\to \infty$ as $\ell \to \infty$.
\end{enumerate}
\end{theorem}
\begin{proof}
(2$\Rightarrow $1). Choose $K=K'=\{e\}$; then $d(e,a^{\ell})\to \infty$ as $\ell \to \infty$. But if $a$ is periodic, then $G(a)$ is compact, hence bounded, so the $a^n$ are uniformly bounded, a contradiction.
(1$\Rightarrow $2). Suppose $d(Ka^{\ell},K')$ is uniformly bounded by $C>0$. Then for any $\ell\in\mathbb{N}$ there exist $k_{\ell}\in K$ and $k'_{\ell}\in K'$ such that $d(k_{\ell}a^{\ell},k'_{\ell})=d(k_{\ell}a^{\ell}k'^{-1}_{\ell},e)<C$. Since the closed ball centered at $e$ is compact, there are a subsequence $\left\{\ell'\right\}$, elements $k^{-1}\in K^{-1}$, $k'\in K'$ and an element $b$ in the ball such that
\[k^{-1}_{\ell'}\to k^{-1},\]
\[k'_{\ell'}\to k',\]
\[k_{\ell'}a^{\ell'}k'^{-1}_{\ell'}\to b \text{ as } \ell' \to \infty.\]
Hence $a^{\ell'}\to k^{-1}bk' \text{ as } \ell' \to \infty$, which contradicts Lemma \ref{discrete}, since $(a^{\ell'})$ never converges in $G(a)\cong \mathbb{Z}$.
\end{proof}
\begin{theorem}
\label{equaperterminal}
Let $G$ be a second countable locally compact group, then the following are equivalent:
\begin{enumerate}
\item
$a\in G$ is an aperiodic element.
\item
For any compact subset $K$ of $G$, there exists $N\in \mathbb{N}$ such that $K\cap Ka^n = \varnothing$ for $n>N$.
\item
$G$ has a terminal pair w.r.t. $a$.
\end{enumerate}
\end{theorem}
\begin{proof}
By discussion above, we only need to prove the case (1$\Rightarrow $3).
We first consider the case that $G$ is Hausdorff.
Set
\[J:=\left\{xa^n\in G|x\in G\text{ and } d(xa^n,e)\le d(xa^{n'},e) \text{ for all }{n'}\in \mathbb{Z}\right\}\]
\[N_x:=2\min\left\{N\in \mathbb{N}\,\middle|\,2d(x,e)+2< d(xa^n,e)\text{ for all }n\in\mathbb{Z}\text{ with }|n|>N\right\}\]
The definition of $J$ is motivated by the special case $G=\mathbb{R}^2$, $a=(1,0)$. In this special case, a terminal pair w.r.t. $a$ can be taken to be $(\{(x,y)|x\geq 100\},\{(x,y)|x\le -100\})$. To construct such a pair in general, we need an analogue of the $y$-axis, ``orthogonal'' to $a$; the set $J$ plays this role.
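To make this concrete, here is a short computation, added for illustration, in the motivating case: for $G=\mathbb{R}^2$ with the Euclidean metric and $a=(1,0)$, we have $xa^n=(x_1+n,x_2)$, and $d(xa^n,e)$ is minimized over $n\in\mathbb{Z}$ exactly when $|x_1+n|\le \frac{1}{2}$, so
\[
J=\left\{(s,y)\in\mathbb{R}^2 \,\middle|\, |s|\le \tfrac{1}{2}\right\},
\]
a thickened $y$-axis.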
Note that these are well defined, since Theorem \ref{gotoinfty} implies that both $d(xa^n,e)$ and $d(xa^{-n},e)$ tend to $\infty$ as $n\to \infty$. Moreover, for each $x\in G$ there exists $n$ with $xa^n\in J$ (i.e. $G=\left\{xa^n|x\in J,n\in \mathbb{Z}\right\}$), for the same reason.
Note that $\mathfrak{B}:=\left\{B_{\frac{1}{4}}(xa^n)\right\}_{x\in J,n\in \mathbb{Z}}$ is a covering of $G$.
\textbf{Claim}: For any $x,y\in J$, $n>N_x$ and $m>N_y$, we have $d(xa^n,ya^{-m})>1$.
Suppose $1\geq d(xa^n,ya^{-m})=d(x,ya^{-n-m})=d(xa^{n+m},y)$.
Then
\begin{align*}
2+2d(x,e) &< d(xa^{n+m},e)\\
&\le d(xa^{n+m},y)+d(y,e)\\
&\le 1+d(y,e)\\
&\le 1+d(ya^{-n-m},e)\\
&\le 1+d(ya^{-n-m},x)+d(x,e)\\
&\le 2+d(x,e).\\
\end{align*}
\vskip -2.1em
There is a contradiction.
\textbf{Claim}: For any $x,y\in J$, $n>N_x$ and $m>N_y$, we have
\[d(B_{\frac{1}{4}}(xa^n),B_{\frac{1}{4}}(ya^{-m}))>\frac{1}{4}.\]
Suppose $d(B_{\frac{1}{4}}(xa^n),B_{\frac{1}{4}}(ya^{-m}))\le\frac{1}{4}<\frac{1}{3}$.
Then there exist $x'\in B_{\frac{1}{4}}(xa^n)$ and $y'\in B_{\frac{1}{4}}(ya^{-m})$ such that $d(x',y')<\frac{1}{3}$, whence $d(xa^n,ya^{-m})<\frac{1}{4}+\frac{1}{4}+\frac{1}{3}<1$, a contradiction.
Now, we are ready to set our terminal pair. Set $(A,B):=(\overline{A'},\overline{B'})$, where
\[A':=\bigcup_{x\in J,n>N_x}B_{\frac{1}{4}}(xa^n),\]
\[B':=\bigcup_{x\in J,n>N_x}B_{\frac{1}{4}}(xa^{-n}).\]
By the second claim, $d(A',B')\geq\frac{1}{4}>0$, so $A$ and $B$ are disjoint closed sets.
Now we show that every compact set $K$ is eventually shifted into $A$ and into $B$. By compactness, there exist finitely many balls in $\mathfrak{B}$, say $\left\{B_{\frac{1}{4}}(x_ia^{n_i})\right\}^{\ell}_{i=1}$, which cover $K$. Set\\
$N:=2\max_{1\le i\le\ell}\left\{|n_i-N_{x_i}|,|n_i+N_{x_i}|\right\}$, so that $n_i+n>N_{x_i}$ and $n_i-n<-N_{x_i}$ for $n>N$. Then
\[Ka^n\subseteq \cup^{\ell}_{i=1} B_{\frac{1}{4}}(x_ia^{n_i})a^n= \cup^{\ell}_{i=1} B_{\frac{1}{4}}(x_ia^{n_i+n})\subseteq A,\]
\[Ka^{-n}\subseteq \cup^{\ell}_{i=1} B_{\frac{1}{4}}(x_ia^{n_i})a^{-n}= \cup^{\ell}_{i=1} B_{\frac{1}{4}}(x_ia^{n_i-n})\subseteq B\]
for $n>N$, so the case of Hausdorff has been verified.
Now consider the general case. Let $G$ be a second countable locally compact group and $\pi :G\to \widetilde{G}$ its Hausdorffication; then $\pi (a)$ is also aperiodic, hence $\widetilde{G}$ has a terminal pair w.r.t. $\pi(a)$. By Proposition \ref{homoterminal}, we are done.
\end{proof}
\section{Existence of hypercyclic weighted translations}
\label{sec:existence_of_hypercyclic_weighted_translations}
We now discuss how the existence of a terminal pair yields the existence of hypercyclic weighted translation operators.
\begin{lemma}
\label{existencelemma}
Let $G$ be a second countable locally compact group admitting a terminal pair w.r.t. $a$. Then there exists a weighted translation operator $T_{a,w}$ which satisfies the Frequent Hypercyclicity Criterion on $L^p(G)$ for all $p\in [1,\infty)$, simultaneously.
\end{lemma}
\begin{proof}
We construct the weight $w$ using Urysohn's lemma and verify the Frequent Hypercyclicity Criterion \cite[Theorem 9.9 and Proposition 9.11.]{Linear_chaos} directly.
Let $\pi :G\to \widetilde{G}$ be the Hausdorffication of $G$.
Recall that any locally compact Hausdorff group is normal \cite[Theorem 8.13]{E_Hewitt}, so $\widetilde{G}$ is normal. We use the notation of Proposition \ref{terminalHausdorffication}. By Urysohn's lemma there is a continuous function $\widetilde{w}$ on $\widetilde{G}$ with $\widetilde{w}|_{\widetilde{A}}=2^{-1}$, $\widetilde{w}|_{\widetilde{B}}=2$ and image lying in $[2^{-1},2]\subset(0,\infty)$; we then obtain $w:=\widetilde{w}\circ \pi$ with $w|_A=2^{-1}$, $w|_B=2$ and image lying in $[2^{-1},2]\subset(0,\infty)$.
To verify the hypotheses of the Frequent Hypercyclicity Criterion, we set \[X_0:=\{\text{bounded compactly supported functions on } G\},\] which is a dense subspace of $L^p(G)$ for all $p\in[1,\infty)$, and also set
\[T=T_{a,w} \text{ and } S=T^{-1}_{a,w}=T_{a^{-1},w'},\]
where $w':=\left(T_{a^{-1}}w\right)^{-1}$.
Now fix $\varphi\in X_0$ and let $K:=supp (\varphi)$ (note that $supp(\varphi(\cdot\, a^i))=Ka^{-i}$). We first claim:
\[\left\|T^n \varphi\right\|_{\infty} \text{ and } \left\|S^n \varphi\right\|_{\infty} \text{ decay to zero at an exponential rate.}\]
That is,
\begin{align*}
\left\|T^n \varphi\right\|_{\infty} \le C \gamma ^{-n}&,\\
\left\|S^n \varphi\right\|_{\infty} \le C \gamma ^{-n}&\text{ for some $C>0$, $\gamma>1$ and $n$ large enough.}
\end{align*}
We only need to prove the first estimate, since the second is symmetric to it upon replacing $A$ by $B$, $B$ by $A$, $a$ by $a^{-1}$, $w$ by $w'$ and $T$ by $S$. Since $K$ is compact, there is a large number $N$ such that
\[Ka^{n}\subseteq A,\]
\[Ka^{-n}\subseteq B\]
for $n\ge N$, then
\begin{align*}
\left\|T^n \varphi\right\|_{\infty}&=\left\|\prod \limits^{n-1}_{i=0}w(xa^{-i}) \varphi(xa^{-n})\right\|_{\infty}\\
&=\left\|\prod \limits^{n-1}_{i=0}w(xa^{n-i}) \varphi(x)\right\|_{\infty}\\
&=\left\|\prod \limits^{n}_{j=1}w(xa^{j}) \varphi(x)\right\|_{\infty}\quad(j=n-i)\\
&\le \left\|\prod \limits^{n}_{j=1}w(xa^{j})|_K\right\|_{\infty}\|\varphi\|_{\infty}\\
&\le \left\|\prod \limits^{N}_{j=1}w(xa^{j})\right\|_{\infty}\left\|\varphi\right\|_{\infty}\prod \limits^{n}_{j=N+1}\left\|w(xa^{j})|_K\right\|_{\infty}\\
&= \left\|\prod \limits^{N}_{j=1}w(xa^{j})\right\|_{\infty}\left\|\varphi\right\|_{\infty}\prod \limits^{n}_{j=N+1}\left\|w(x)|_{Ka^{j}}\right\|_{\infty}\\
&\le \left\|\prod \limits^{N}_{j=1}w(xa^{j})\right\|_{\infty}\left\|\varphi\right\|_{\infty}\prod \limits^{n}_{j=N+1} 2^{-1}\quad (\text{since }Ka^j\subseteq A).
\end{align*}
This verifies that $\|T^n \varphi\|_{\infty}$ and $\|S^n \varphi\|_{\infty}$ decay to zero at an exponential rate.
Finally, we will show that
\[\sum \limits^{\infty}_{n=0}T^n \varphi \text{ converges unconditionally}\]
and
\[\sum \limits^{\infty}_{n=0}S^n \varphi \text{ converges unconditionally}.\]
As above, we only need to verify the first case; in fact, we show that the series converges absolutely. By the aperiodicity of $a$, there is a large number $N_0$ such that
\[K\cap Ka^{n}=\varnothing \text{ for }|n|\ge N_0\]
(indeed, $K\cap Ka^{-n}=(Ka^{n}\cap K)a^{-n}$); then one easily checks that $\mathcal{C}_\ell:=\{Ka^{N_0j+\ell}\}_{j\in \mathbb{Z}}$ is a collection of mutually disjoint sets for each $\ell=0,1,...,N_0-1$. So
\begin{align*}
\left\|\sum \limits^{\infty}_{n=0}T^n \varphi\right\|_p &\le \left\|\sum \limits^{N_0-1}_{\ell=0}\sum \limits^{\infty}_{j=0}\left|T^{N_0j+\ell} \varphi\right|\right\|_p\\
&\le \sum \limits^{N_0-1}_{\ell=0}\left\|\sum \limits^{\infty}_{j=0}\left|T^{N_0j+\ell} \varphi\right|\right\|_p.\\
\end{align*}
Note that the computation below uses $supp(T^n\varphi)\subseteq Ka^n$, the mutual disjointness of the collections $\mathcal{C}_\ell$, and the monotone convergence theorem. Fix $\ell=0,1,...,N_0-1$.
\begin{align*}
\left\|\sum \limits^{\infty}_{j=0}\left|T^{N_0j+\ell} \varphi\right|\right\|^p_p
&=\int {\Bigl(\sum \limits^{\infty}_{j=0}\left|T^{N_0j+\ell}{\varphi}\right|\Bigr)^p}\\
&=\int {\sum \limits^{\infty}_{j=0}\left|T^{N_0j+\ell}{\varphi}\right|^p}\text{ ($\mathcal{C}_{\ell}$ is mutually disjoint)}\\
&=\sum \limits^{\infty}_{j=0}\int {\left|T^{N_0j+\ell}{\varphi}\right|^p}\\
&\le \sum \limits^{\infty}_{n=0}\left\|T^n \varphi\right\|^p_p\\
&= \sum \limits^{\infty}_{n=0}\int_{Ka^n} {\left|T^n{\varphi}\right|^p}\\
&\le \sum \limits^{\infty}_{n=0}\left|Ka^n\right| \left\|T^n \varphi\right\|^p_{\infty}\text{ ($\left|Ka^n\right|=\left|K\right|$ by right invariance)}\\
&= \left|K\right|\sum \limits^{\infty}_{n=0} \left\|T^n \varphi\right\|^p_{\infty} < \infty.\\
\end{align*}
Note that the last step follows since $\left\|T^n \varphi\right\|_{\infty}$ decays to zero at an exponential rate.
So the conditions of the Frequent Hypercyclicity Criterion have been verified.
\end{proof}
\begin{remark}
The weighted translation operator $T_{a,w}$ in this lemma is not unique. In fact, there are uncountably many weighted translations satisfying the lemma, obtained by setting $w|_A\equiv \alpha$ and $w|_B\equiv \beta$ in the proof, for any $\alpha \in (0,1)$ and $\beta \in (1,\infty)$.
\end{remark}
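As a purely numerical illustration (not used in the proofs), the following Python snippet simulates the simplest instance $G=\mathbb{Z}$, $a=1$ of the construction above, with an ad hoc continuous weight equal to $2$ on $(-\infty,-5]$ and $2^{-1}$ on $[5,\infty)$, and prints $\|T^n\varphi\|_\infty$; the exponential decay used in the proof of Lemma \ref{existencelemma} is visible numerically. The truncation window and the interpolation of the weight are illustrative choices.
\begin{verbatim}
import numpy as np

# Illustrative check on G = Z, a = 1: (Tf)(k) = w(k) f(k-1).
# Weight: w = 2 on B = (-inf, -5], w = 1/2 on A = [5, inf),
# linearly interpolated in between (any continuous choice works).
K = 200                                      # window: indices -K..K
idx = np.arange(-K, K + 1)
w = np.interp(idx, [-5, 5], [2.0, 0.5])

phi = np.where(np.abs(idx) <= 2, 1.0, 0.0)   # bounded, compact support

f = phi.copy()
for n in range(1, 31):
    shifted = np.roll(f, 1)                  # realizes f(k-1)
    shifted[0] = 0.0                         # discard the wrapped entry
    f = w * shifted
    if n % 5 == 0:
        print(n, f.max())                    # sup norm decays like 2^{-n}
\end{verbatim}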
\vskip 3em
The next theorem answers our main question.
\begin{theorem}
\label{main}
Let $G$ be a second countable locally compact group and $a$ an aperiodic element of $G$. Then there exists a weighted translation operator $T_{a,w}$ which is mixing, chaotic and frequently hypercyclic on $L^p(G)$ for all $p\in [1,\infty)$, simultaneously.
\end{theorem}
\begin{proof}
By Theorem \ref{equaperterminal} and Lemma \ref{existencelemma}.
\end{proof}
\begin{example}
Let $G$ be an arbitrary Lie group and $a$ an aperiodic element of $G$; then there exists a weighted translation operator $T_{a,w}$ which is mixing, chaotic and frequently hypercyclic on $L^p(G)$ for all $p\in [1,\infty)$, simultaneously.
\end{example}
\begin{example}
Let $G$ be the general linear group $GL(n,\mathbb{C})$. Then $a$ is a periodic element of $G$ iff $a$ is diagonalizable and every eigenvalue of $a$ has norm $1$ (i.e. if $\lambda$ is an eigenvalue of $a$, then $|\lambda |=1$). In some cases it is hard to verify by hand that $G$ has a terminal pair w.r.t. an aperiodic $a$; for instance, when
$a= \begin{bmatrix}
-1 & \hbox{\hskip1.7ex}1 \\
0 & -1
\end{bmatrix}$; nevertheless, by the discussion above, $G$ has a terminal pair w.r.t. $a$.
\end{example}
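For this particular $a$, an elementary induction (recorded here for the reader) gives
\[
a^n=(-1)^n\begin{bmatrix} 1 & -n \\ 0 & 1 \end{bmatrix},\qquad n\in\mathbb{Z},
\]
so the second column of $a^n$ has norm $\sqrt{1+n^2}\to\infty$ and no subsequence of $(a^n)$ converges; this is an instance of the nondiagonalizable case treated in the following remark.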
\begin{remark}
To explain why $a$ is periodic if and only if $a$ is diagonalizable with all eigenvalues of norm $1$, we only need to prove that if $a$ is nondiagonalizable then $a$ is aperiodic: the case of an eigenvalue of norm $\neq 1$ follows from Example \ref{terexam} and Proposition \ref{teraper}, and if $a$ is diagonalizable with all eigenvalues of norm $1$, then $\overline{<a>}$ lies in a conjugate of the compact group of diagonal matrices with unit-modulus entries, hence is compact. As in Example \ref{terexam}, we may assume that $a$ is itself in Jordan form:
\[a= \begin{bmatrix}
\lambda & 1 & \cdots & 0 \\
0 & \lambda & \cdots & \vdots \\
\vdots & \cdots & \ddots & * \\
0 & \cdots & 0 & \lambda '
\end{bmatrix}.\]
We use the same column notation $x=[x_1|x_2|...|x_n]$ as in Example \ref{terexam}.
Consider the map $f:G\to \mathbb{R}$, $f(x)=\left|\ln\|x_2\|\right|$. A direct calculation gives $f(a^n)=\left|(n-1)\ln|\lambda |+\ln n+\frac{1}{2}\ln \bigl(1+\frac{|\lambda |^2}{n^2}\bigr)\right|\to \infty$ as $n\to \infty$, whatever $\lambda$ may be ($\lambda$ is never zero since $a\in GL(n,\mathbb{C})$). Suppose $a$ is periodic; then $f(G(a))$ is compact in $\mathbb{R}$, a contradiction, as $\{f(a^n)\}$ is unbounded in $\mathbb{R}$.
\end{remark}
\newpage
\phantomsection
\addcontentsline{toc}{section}{Summary}
\noindent
\textbf{Summary.}
Given an aperiodic element $a$ in $G$, in order to find an explicit form of a hypercyclic weighted translation associated to $a$, we can first use Proposition \ref{homoterminal} or similar techniques to find an explicit terminal pair $(A,B)$ w.r.t. $a$ by a pullback argument, and then construct an explicit continuous function $w$ such that $w|_A=2^{-1}$, $w|_B=2$ and the image lies in $[2^{-1},2]\subset(0,\infty)$. Then the operator $T_{a,w}$ satisfies the Frequent Hypercyclicity Criterion on $L^p(G)$ for all $p\in [1,\infty)$, simultaneously, as Lemma \ref{existencelemma} shows.
\begin{example}
Let $G=GL(n,\mathbb{C})$ and let $a\in G$ have some eigenvalue $\lambda$ with $|\lambda |> 1$. Write
\[a= P
\begin{bmatrix}
\lambda & * & \cdots & 0 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & \lambda '& * \\
0 & \cdots & 0 & \lambda ''
\end{bmatrix}
P^{-1},\]
for some $P\in G$. Then $a$ is an aperiodic element by Example \ref{terexam} and Proposition \ref{teraper}.
Moreover, $(A,B)$ is an explicit terminal pair w.r.t.\ $a$, where
\[A:=\left\{PxP^{-1} \,\big|\, \|x_1\|\ge 2\right\}\]
and
\[B:=\left\{PxP^{-1} \,\big|\, \|x_1\|\le \tfrac{1}{2}\right\}.\]
(Note that $x_1$ denotes the first column of $x$.)
Set
\[w(PxP^{-1})=
\begin{cases}
\frac{1}{2} & \text{ if } \|x_1\|\ge 2 \\
2 & \text{ if } \|x_1\|\le \frac{1}{2} \\
\|x_1\|^{-1} & \text{ otherwise.}
\end{cases}
\]
Clearly, $w$ is a continuous function.
Then, as Lemma \ref{existencelemma} shows, the operator $T_{a,w}$ satisfies the frequent hypercyclicity criterion on $L^p(G)$ for all $p\in [1,\infty)$ simultaneously.
\end{example}
\phantomsection
\addcontentsline{toc}{section}{Acknowledgements}
\noindent
\textbf{Acknowledgements.}
The author would like to thank Chung-Chuan Chen, Chiun-Chuan Chen and Chun-Wei Lee for their useful suggestions. Special thanks to Yi-Chiuan Chen, who advised me to consider the common properties of all known examples of hypercyclic weighted translation operators when $a$ is aperiodic. His advice led to this paper.
\bibliographystyle{abbrv}
\section{Introduction}
\label{S:intro}
Consider a stationary sequence of random
variables $(X_n)_{n\geq 1}$ and its accompanying sequence of partial
sums $S_n=X_1+\cdots +X_n,\ {n\geq 1}$. The main goal of this paper
is to investigate the asymptotic distributional behavior of the
$D[0,1]$ valued process
$$V_{n}(t) = a_{n}^{-1} (S_{\floor{nt}} - \floor{nt} b_{n}), \quad t \in [0,1],$$
under the properties of weak dependence and regular variation with index $\alpha \in
(0,2)$, where
$(a_{n})_{n}$ is a sequence of positive real numbers such that
\begin{equation}\label{e:niz}
n \Pr( |X_{1}| > a_{n}) \to 1,
\end{equation}
as $n \to \infty$, and
$$ b_{n} = \Exp \bigl( X_{1} \, 1_{\{ |X_{1}| \le a_{n} \}} \bigr).$$
Here, $\floor{x}$ represents the integer part of the real number $x$ and $D[0, 1]$ is the space of real-valued c\`adl\`ag functions on $[0, 1]$.
Recall that if the sequence $(X_n)$ is i.i.d.\ and if there exist
real sequences $(a_n)$ and $(b_n)$ and a nondegenerate random
variable $S$ such that as $n\to\infty$
\begin{equation} \label{E:CLT}
\frac{S_n-b_n}{a_n} \xrightarrow{d} S\,,
\end{equation} then $S$ is necessarily an $\alpha$--stable random variable.
In
standard terminology, the law of $X_1$ belongs to the domain of
attraction of $S$. The domain of attraction of non-Gaussian stable
random variables can be completely characterized by an appropriate
regular variation condition, see \eqref{e:regvar1} below. Classical
references in the i.i.d.\ case are the books by Feller
\cite{Feller71} and Petrov \cite{Petrov95}, while in LePage et al.\
\cite{LePage81} one can find an elegant probabilistic proof of
sufficiency and a nice representation of the limiting distribution.
Weakly dependent sequences can exhibit very similar behavior. The
first results in this direction were rooted in martingale theory,
see Durrett and Resnick~\cite{Durrett78}. In \cite{Da83}, Davis
proved that if a regularly varying sequence $(X_n)_n$ of random
variables has tail index $0<\alpha<2$ and satisfies a strengthened
version of Leadbetter's $D$ and $D'$ conditions familiar from
extreme value theory, then \eqref{E:CLT} holds for some
$\alpha$--stable random variable $S$ and properly chosen sequences
$(a_n)_n$ and $(b_n)_n$. These conditions are quite restrictive however,
even excluding $m$-dependent sequences. For strongly mixing random
sequences, a necessary and sufficient condition was obtained in
Denker and Jakubowski~\cite{DeJa89} for the weak convergence of
partial sums towards an $\alpha$--stable distribution. Later, in
\cite{DaHs95} Davis and Hsing showed that sequences which satisfy a
regular variation condition for some $\alpha\in (0,2)$ and certain
(even milder) mixing conditions also satisfy \eqref{E:CLT} with an
$\alpha$--stable limit. Building upon the same point process
approach, Davis and Mikosch~\cite{DaMi98} generalized these results
to multivariate sequences. Most recently, Bartkiewicz et al.~\cite{BaJaMiWi09} provided a detailed study of the conditions for
the convergence
of the partial sums of a strictly stationary process to an
infinite variance stable distribution. They also determined the
parameters of the limiting distribution in terms of some tail
characteristics of the underlying stationary sequence.
The asymptotic behaviour of the processes $V_n$ as $n \to \infty$ is
an extensively studied subject in the probability literature too. As
the index of regular variation $\alpha$ is assumed to be less than
$2$, the variance of $X_{1}$ is infinite. In the finite-variance
case, functional central limit theorems differ considerably and have
been investigated in greater depth, see for instance
Billingsley~\cite{Billingsley68}, Herrndorf~\cite{Herrndorf85},
Merlev\`ede and Peligrad~\cite{Merlevede00}, and Peligrad and
Utev~\cite{Peligrad05}.
A very readable proof of the functional limit theorem for the
processes $V_n$ for infinite variance i.i.d.\ regularly varying
sequences $(X_n)_n$ can be found in Resnick~\cite{Resnick07}.
Leadbetter and Rootz\'{e}n~\cite{Leadbetter88} studied
this question for more general sequences in the context of extreme
value theory. They found necessary and sufficient conditions for the
functional limit theorem to hold in Skorohod's $J_1$ topology.
However, this choice of topology excludes many important applied
models.
Avram and
Taqqu \cite{Avram92} obtained a functional limit theorem in $D[0,
1]$ endowed with Skorohod's $M_1$ topology for sums of moving
averages with nonnegative coefficients (see Section~\ref{S:flt} for the definition of the $M_1$ topology). They also showed why
the $J_1$ metric is not well suited for studying weak convergence of
the processes $V_n$ when the variables $X_n$ are not independent.
For some more recent articles with related but somewhat different
subjects we refer to Sly and Heyde~\cite{Sly08} who obtained
nonstandard limit theorems for functionals of regularly varying
sequences with a long-range Gaussian dependence structure,
and also to Aue et al.~\cite{Aue08} who investigated the limit
behavior of the functional CUSUM statistic and its randomly permuted
version for i.i.d.\ random variables which are in the domain of
attraction of a strictly $\alpha$--stable law, for $\alpha \in
(0,2)$.
The main theorem of our article shows that for a stationary,
regularly varying sequence for which clusters of high-threshold
excesses can be broken down into asymptotically independent blocks,
the properly centered partial sum process $(V_n(t))_{t\in[0,1]}$
converges to an $\alpha$--stable L\'evy process in the space
$D[0,1]$ endowed with Skorohod's $M_1$ metric under the condition
that all extremes within one such cluster have the same sign.
Our method of proof combines some ideas used in the
i.i.d.\ case by Resnick~\cite{Resnick86, Resnick07} with a new point
process convergence result and some particularities of the $M_1$
metric on $D[0,1]$ that can be found in Whitt~\cite{Whitt02}. The
theorem can be viewed as a generalization of results in Leadbetter
and Rootz\'{e}n~\cite{Leadbetter88}, where clustering of extremes is
essentially prohibited, and in Avram and Taqqu~\cite{Avram92}.
The paper is organized as follows. In Section~\ref{S:statpoint} we
determine precise conditions needed to separate clusters of extremes
asymptotically. We also prove a new limit theorem for point
processes which is the basis for the rest of the paper and which is
of independent interest too. In Section~\ref{S:flt} we state and
prove our main functional limit theorem. We also discuss possible
extensions of this result to other topologies. Finally, in
Section~\ref{S:examples} several examples of stationary sequences
covered by our main theorem are discussed, in particular moving
average and squared GARCH(1,1) processes.
\section{Stationary regularly varying sequences}
\label{S:statpoint}
The extremal dynamics of a regularly varying stationary time series
can be captured by its tail process, which is the conditional
distribution of the series given that at a certain moment it is far
away from the origin (Subsection~\ref{SS:statpoint:tail}). In
particular, the tail process allows explicit descriptions of the
limit distributions of various point processes of extremes
(Subsection~\ref{SS:statpoint:point}). The main result in this
section is Theorem~\ref{T:pointprocess:complete}, providing the weak
limit of a sequence of time-space point processes, recording both
the occurrence times and the values of extreme values.
\subsection{Tail processes}
\label{SS:statpoint:tail}
Denote $\mathbb{E}=\overline{\mathbb{R}} \setminus \{ 0 \}$ where
$\overline{\mathbb{R}}=[-\infty, \infty]$. The space $\mathbb{E}$ is
equipped with the topology which makes it homeomorphic to $[-1, 1]
\setminus \{0\}$ (Euclidean topology) in the obvious way. In
particular, a set $B \subset \mathbb{E}$ has compact closure if and
only if it is bounded away from zero, that is, if there exists $u >
0$ such that $B \subset \mathbb{E}_u = \mathbb{E} \setminus [-u,
u]$. Denote by $C_{K}^{+}(\mathbb{E})$ the class of all nonnegative, continuous
functions on $\mathbb{E}$ with compact support.
We say that a strictly stationary process $(X_{n})_{n \in
\mathbb{Z}}$ is \emph{(jointly) regularly varying} with index
$\alpha \in (0,\infty)$ if for any nonnegative integer $k$ the
$k$-dimensional random vector $\boldsymbol{X} = (X_{1}, \ldots ,
X_{k})$ is multivariate regularly varying with index $\alpha$, i.e.\
for some (and then for every) norm $\| \, \cdot \, \|$ on
$\mathbb{R}^{k}$ there exists a random vector $\boldsymbol{\Theta}$
on the unit sphere $\mathbb{S}^{k-1} = \{ x \in \mathbb{R}^{k} :
\|x\|=1 \}$ such that for every $u \in (0,\infty)$ and as $x \to
\infty$,
\begin{equation}\label{e:regvar1}
\frac{\Pr(\|\boldsymbol{X}\| > ux,\,\boldsymbol{X} / \| \boldsymbol{X} \| \in \cdot \, )}{\Pr(\| \boldsymbol{X} \| >x)}
\xrightarrow{w} u^{-\alpha} \Pr( \boldsymbol{\Theta} \in \cdot \,),
\end{equation}
the arrow ``$\xrightarrow{w}$'' denoting weak convergence of finite measures.
For an extensive and highly readable account of (multivariate)
regular variation, see the monograph by Resnick \cite{Resnick07}.
Theorem~2.1 in Basrak and Segers \cite{BaSe} provides a convenient
characterization of joint regular variation: it is necessary and
sufficient that there exists a process $(Y_n)_{n \in \mathbb{Z}}$
with $\Pr(|Y_0| > y) = y^{-\alpha}$ for $y \ge 1$ such that as
$x \to \infty$,
\begin{equation}\label{e:tailprocess}
\bigl( (x^{-1}X_n)_{n \in \mathbb{Z}} \, \big| \, |X_0| > x \bigr)
\xrightarrow{\text{fidi}} (Y_n)_{n \in \mathbb{Z}},
\end{equation}
where ``$\xrightarrow{\text{fidi}}$'' denotes convergence of finite-dimensional
distributions. The process $(Y_{n})_{n \in \mathbb{Z}}$ is called
the \emph{tail process} of $(X_{n})_{n \in \mathbb{Z}}$. Writing
$\Theta_n = Y_n / |Y_0|$ for $n \in \mathbb{Z}$, we also have
\[
\bigl( (|X_0|^{-1}X_n)_{n \in \mathbb{Z}} \, \big| \, |X_0| > x \bigr)
\xrightarrow{\text{fidi}} (\Theta_n)_{n \in \mathbb{Z}},
\]
see Corollary 3.2 in \cite{BaSe}. The process $(\Theta_n)_{n \in
\mathbb{Z}}$ is independent of $|Y_0|$ and is called the \emph{spectral
(tail) process} of $(X_n)_{n \in \mathbb{Z}}$. The law of $\Theta_0 = Y_0 /
|Y_0| \in \mathbb{S}^{0} = \{-1, 1\}$ is the spectral measure of the
common marginal distribution of the random variables $X_i$. Regular
variation of this marginal distribution can be expressed in terms of
vague convergence of measures on $\mathbb{E}$: for $a_n$ as in
\eqref{e:niz} and as $n \to \infty$,
\begin{equation}
\label{e:onedimregvar}
n \Pr( a_n^{-1} X_i \in \cdot \, ) \xrightarrow{v} \mu( \, \cdot \,),
\end{equation}
the Radon measure $\mu$ on $\mathbb{E}$ being given by
\begin{equation}
\label{E:mu}
\mu(\mathrm{d} x) = \bigl( p \, 1_{(0, \infty)}(x) + q \, 1_{(-\infty, 0)}(x) \bigr) \, \alpha |x|^{-\alpha-1} \, \mathrm{d} x,
\end{equation}
where
\begin{align*}
p &= \Pr(\Theta_0 = +1) = \lim_{x \to \infty} \frac{\Pr(X_i > x)}{\Pr(|X_i| > x)}, \\
q &= \Pr(\Theta_0 = -1) = \lim_{x \to \infty} \frac{\Pr(X_i < -x)}{\Pr(|X_i| > x)}.
\end{align*}
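For instance (a standard illustrative case, included only for orientation), if the marginal distribution has balanced Pareto tails, $\Pr(X_i > x) = \Pr(X_i < -x) = \frac{1}{2} x^{-\alpha}$ for $x \ge 1$, then $p = q = \frac{1}{2}$, one may take $a_n = n^{1/\alpha}$ in \eqref{e:niz}, and \eqref{e:onedimregvar} holds with
\[
\mu(\mathrm{d} x) = \tfrac{1}{2} \, \alpha |x|^{-\alpha-1} \, \mathrm{d} x .
\]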
\subsection{Point processes convergence}
\label{SS:statpoint:point}
Define the time-space point processes
\begin{equation}
\label{E:ppspacetime}
N_{n} = \sum_{i=1}^{n} \delta_{(i / n,\,X_{i} / a_{n})} \qquad \text{ for all $n\in\mathbb{N}$,}
\end{equation}
with $a_n$ as in \eqref{e:niz}. The aim of this section is to
establish weak convergence of $N_n$ in the state space $[0, 1]
\times \mathbb{E}_u$ for $u > 0$, where $\mathbb{E}_u = \mathbb{E} \setminus [-u, u]$.
The limit process is a Poisson superposition of cluster processes,
whose distribution is determined by the tail process $(Y_i)_{i \in
\mathbb{Z}}$. Convergence of $N_n$ was already alluded to without proof in
Davis and Hsing~\cite{DaHs95} with a reference to
Mori~\cite{Mori77}.
To control the dependence in the sequence $(X_n)_{n \in \mathbb{Z}}$ we first have to assume that clusters of large values of $|X_{n}|$ do not last for too long.
\begin{cond}
\label{c:anticluster}
There exists a positive integer sequence $(r_{n})_{n \in \mathbb{N}}$ such that $r_{n} \to \infty $ and $r_{n} / n \to 0$ as $n \to \infty$ and such that for every $u > 0$,
\begin{equation}
\label{e:anticluster}
\lim_{m \to \infty} \limsup_{n \to \infty}
\Pr \biggl( \max_{m \le |i| \le r_{n}} |X_{i}| > ua_{n}\,\bigg|\,|X_{0}|>ua_{n} \biggr) = 0.
\end{equation}
\end{cond}
Put $M_{1,n} = \max \{ |X_{i}| : i=1, \ldots , n \}$ for $n \in \mathbb{N}$. In Proposition~4.2 in \cite{BaSe}, it has been shown that under the finite-cluster Condition~\ref{c:anticluster} the following value
\begin{multline}
\label{E:theta:spectral}
\theta = \lim_{r \to \infty} \lim_{x \to \infty} \Pr \bigl(M_{1,r} \le x \, \big| \, |X_{0}|>x \bigr) \\
= \Pr ({\textstyle\sup_{i\ge 1}} |Y_{i}| \le 1) = \Pr ({\textstyle\sup_{i\le -1}} |Y_{i}| \le 1)
\end{multline}
is strictly positive. By Remark~4.7 in \cite{BaSe}, alternative expressions for $\theta$ in \eqref{E:theta:spectral} are
\begin{eqnarray*}
\theta
&=& \int_1^\infty
\Pr \biggl( \sup_{i \geq 1} \norm{\Theta_i}^\alpha \leq y^{-\alpha} \biggr)
\, \mathrm{d}(-y^{-\alpha}) \\
&=& \Exp \biggl[ \max \biggl( 1 - \sup_{i \geq 1} \norm{\Theta_i}^\alpha, 0 \biggr) \biggr]
= \Exp \biggl[ \sup_{i \geq 0} \norm{\Theta_i}^\alpha
- \sup_{i \geq 1} \norm{\Theta_i}^\alpha \biggr].
\end{eqnarray*}
Moreover it also holds that $\Pr( \lim_{|n| \to \infty} \norm{Y_n} = 0 ) = 1$, and that for every $u \in (0, \infty)$
\begin{equation}
\label{E:runsblocks}
\Pr(M_{1,r_n} \leq a_n u \mid \norm{X_0} > a_n u)
= \frac{\Pr(M_{1,r_n} > a_n u)}{r_n \Pr(\norm{X_0} > a_n u)} + o(1)
\to \theta
\end{equation}
as $n \to \infty$.
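As a quick sanity check, for an i.i.d.\ sequence the tail process satisfies $Y_i = 0$ for all $i \ne 0$ (see Example~\ref{ex:D'cond} below), so that $\sup_{i \ge 1} |\Theta_i| = 0$ and the expressions above reduce to
\[
\theta = \Exp \bigl[ \max (1 - 0, \, 0) \bigr] = 1,
\]
reflecting the absence of clustering of extremes.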
Since $\Pr(M_{1,r_n} > a_n u) \to 0$ as $n \to \infty$, we call the point process
\[
\sum_{i=1}^{r_n} \delta_{(a_n u)^{-1} X_i} \quad \text{conditionally on} \quad M_{1,r_n} > a_n u
\]
a \emph{cluster process}, to be thought of as a cluster of exceptionally large values occurring in a relatively short time span. Theorem~4.3 in \cite{BaSe} yields the weak convergence of the sequence of cluster processes in the state space $\mathbb{E}$:
\begin{equation}
\label{E:clusterprocess}
\biggl( \sum_{i=1}^{r_n} \delta_{(a_n u)^{-1} X_i} \, \bigg| \, M_{1,r_n} > a_n u \biggr)
\xrightarrow{d} \biggl( \sum_{n \in \mathbb{Z}} \delta_{Y_n} \, \bigg| \, \sup_{i \le -1} |Y_i| \le 1 \biggr).
\end{equation}
Note that since $|Y_n| \to 0$ almost surely as $|n| \to \infty$, the point process $\sum_n \delta_{Y_n}$ is well-defined in $\mathbb{E}$. By \eqref{E:theta:spectral}, the probability of the conditioning event on the right-hand side of \eqref{E:clusterprocess} is nonzero.
To establish convergence of $N_n$ in \eqref{E:ppspacetime}, we need to impose a certain mixing condition called $(\mathcal{A}'(a_n))$ which is slightly stronger than the condition~$\mathcal{A}(a_n)$ introduced in Davis and Hsing~\cite{DaHs95}.
\begin{cond}[$\mathcal{A}'(a_{n})$]
\label{c:mixcond}
There exists a sequence of positive integers $(r_{n})_{n}$ such that $r_{n} \to \infty $ and $r_{n} / n \to 0$ as $n \to \infty$ and such that for every $f \in C_{K}^{+}([0,1] \times \mathbb{E})$, denoting $k_{n} = \lfloor n / r_{n} \rfloor$, as $n \to \infty$,
\begin{equation}\label{e:mixcon}
\operatorname{E} \biggl[ \exp \biggl\{ - \sum_{i=1}^{n} f \biggl(\frac{i}{n}, \frac{X_{i}}{a_{n}}
\biggr) \biggr\} \biggr]
- \prod_{k=1}^{k_{n}} \operatorname{E} \biggl[ \exp \biggl\{ - \sum_{i=1}^{r_{n}} f \biggl(\frac{kr_{n}}{n}, \frac{X_{i}}{a_{n}} \biggr) \biggr\} \biggr] \to 0.
\end{equation}
\end{cond}
It can be shown that Condition~\ref{c:mixcond} is implied by the strong mixing property, see Krizmani\'c~\cite{Kr10}.
\begin{thm}
\label{T:pointprocess:complete}
If Conditions~\ref{c:anticluster} and \ref{c:mixcond} hold, then for every $u \in (0, \infty)$ and as $n \to \infty$,
\[
N_n \xrightarrow{d} N^{(u)}
= \sum_i \sum_j \delta_{(T^{(u)}_i, u Z_{ij})} \bigg|_{[0, 1] \times \mathbb{E}_u}\,,
\]
in $[0, 1] \times \mathbb{E}_u$, where $\mathbb{E}_u = \mathbb{E} \setminus [-u,u]$ and
\begin{enumerate}
\item $\sum_i \delta_{T^{(u)}_i}$ is a homogeneous Poisson process on $[0, 1]$ with intensity $\theta u^{-\alpha}$,
\item $(\sum_j \delta_{Z_{ij}})_i$ is an i.i.d.\ sequence of point processes in $\mathbb{E}$, independent of $\sum_i \delta_{T^{(u)}_i}$, and with common distribution equal to the weak limit in \eqref{E:clusterprocess}.
\end{enumerate}
\end{thm}
In the setting of Theorem~\ref{T:pointprocess:complete}, the quantity $\theta$ in \eqref{E:theta:spectral} is the \emph{extremal index} of the sequence $(\norm{X_n})_{n \in \mathbb{Z}}$: for all $u \in (0, \infty)$ and as $n \to \infty$,
\[
\Pr (M_{1,n} \le a_n u)
= \bigl( \Pr (\norm{X_1} \le a_n u) \bigr)^{n \theta} + o(1)
\to e^{-\theta u^{-\alpha}},
\]
see \cite[Remark~4.7]{BaSe}. It can be shown that Theorem~\ref{T:pointprocess:complete} is still valid if $\mathbb{E}_u$ is replaced by $\overline{\mathbb{E}}_u = [-\infty, -u] \cup [u, \infty]$.
\begin{proof}[Proof of Theorem~\ref{T:pointprocess:complete}]
Let $(X_{k,j})_{j \in \mathbb{N}}$, with $k \in \mathbb{N}$, be independent copies
of $(X_j)_{j \in \mathbb{N}}$, and define
\[
\hat{N}_n
= \sum_{k=1}^{k_n} \hat{N}_{n,k}
\qquad \mbox{with} \qquad
\hat{N}_{n,k} = \sum_{j=1}^{r_n}
\delta_{(k r_n/n, X_{k,j}/a_n)}.
\]
By Condition~\ref{c:mixcond}, the weak limits of $N_n$ and
$\hat{N}_n$ must coincide. By Kallenberg \cite[Theorem
4.2]{Kallenberg83} it is enough to show that the Laplace functionals
of $\hat{N}_n$ converge to those of $N^{(u)}$. Take $f \in
C_K^+([0,1] \times \mathbb{E}_u)$. We extend $f$ to the whole of $[0, 1]
\times \mathbb{E}$ by setting $f(t,x)=0$ whenever $|x| \le u$; in this way,
$f$ becomes a bounded, nonnegative and continuous function on $[0,
1] \times \mathbb{E}$. There exists $M \in (0, \infty)$ such that $0 \le
f(t, x) \le M \, 1_{[-u,u]^c}(x)$. Hence as $n \to \infty$,
\begin{align*}
1
\ge \operatorname{E} e^{-\hat{N}_{n,k} f}
&\ge \operatorname{E} e^{-M\sum_{i=1}^{r_n} 1(\norm{X_i} > a_n u)} \\
&\ge 1 - M r_n \Pr(\norm{X_0} > a_n u) = 1 - O(k_n^{-1}).
\end{align*}
In combination with the elementary bound $0 \le - \log z -
(1-z) \le (1-z)^2/z$ for $z \in (0,1]$, it follows that as $n
\to \infty$,
\[
-\log \Exp e^{- \hat{N}_{n} f}
= - \sum_{k=1}^{k_n} \log \Exp e^{- \hat{N}_{n,k} f}
= \sum_{k=1}^{k_n} (1 - \Exp e^{- \hat{N}_{n,k} f}) + O(k_n^{-1}).
\]
By \eqref{E:runsblocks}, $k_n \Pr (M_{1,r_n} >a_n u ) \to \theta
u^{-\alpha}$ for $u \in (0, \infty)$ and as $n \to \infty$. Hence
\begin{multline}\label{E:Kall1}
\sum_{k=1}^{k_n} (1 - \Exp e^{- \hat{N}_{n,k} f}) \\
=
k_n \Pr (M_{1,r_n}>a_n u ) \;
\frac{1}{k_n} \sum_{k=1}^{k_n}
\operatorname{E} \biggl[ 1- e^{- \sum_{j=1}^{r_n} f(k r_n/n, X_{j}/a_n) } \,\bigg|\,
M_{1,r_n}>a_n u \biggr]\\
=
\theta u^{-\alpha} \; \frac{1}{k_n} \sum_{k=1}^{k_n}
\operatorname{E} \biggl[ 1- e^{- \sum_{j=1}^{r_n} f(k r_n/n, X_j/a_n)}
\,\bigg|\, M_{1,r_n}>a_n u \biggr] + o(1).
\end{multline}
Let the random variable $T_n$ be uniformly distributed on $\{k r_n /
n : k = 1, \ldots, k_n\}$ and independent of $(X_j)_{j \in
\mathbb{Z}}$. By the previous display, as $n \to \infty$,
\[
\sum_{k=1}^{k_n} (1 - \operatorname{E} e^{- \hat{N}_{n,k} f})
= \theta u^{-\alpha} \operatorname{E} \biggl[ 1- e^{- \sum_{j=1}^{r_n}
f(T_n, u X_j/(ua_n)) } \,\bigg|\, M_{1,r_n}>a_n u \biggr] + o(1).
\]
The sequence $(T_n)_n$ converges in law to a random variable $T$
uniformly distributed on $(0,1)$. By \eqref{E:clusterprocess} and by the
independence of the sequences $(T_n)_n$ and $(X_n)_n$,
$$
\biggl( T_n, \sum_{i=1}^{r_n} \delta_{a_n^{-1} X_i} \, \bigg| \,
M_{1,r_n} > a_n u \biggr)
\xrightarrow{d} \biggl(T, \sum_{n \in \mathbb{Z}} \delta_{u Z_n} \biggr),
$$
where $\sum_n \delta_{Z_{n}}$ is a point process on $\mathbb{E}$,
independent of the random variable $T$, and with distribution equal
to the weak limit in \eqref{E:clusterprocess}.
Thus, the expressions in \eqref{E:Kall1}
converge as $n \to \infty$ to
\begin{equation}
\label{E:complete:limit}
\theta u^{-\alpha}
\operatorname{E} \biggl[ 1- e^{- \sum_j f(T, u Z_j) } \biggr]
= \int_0^1 \operatorname{E} \biggl[ 1 - e^{-\sum_j f(t, u Z_j) } \biggr]
\theta u^{-\alpha} \, \mathrm{d} t.
\end{equation}
It remains to be shown that the right-hand side above equals $-\log
\operatorname{E} e^{-N^{(u)}f}$ for $N^{(u)}$ as in the theorem.
Define $g(t) = \operatorname{E} \exp \{ - \sum_j f(t, u Z_j) \}$ for $t \in [0,
1]$. Since $\sum_i \delta_{T_i^{(u)}}$ is independent of the i.i.d.\
sequence $(\sum_j \delta_{Z_{ij}})_i$,
\begin{align*}
\operatorname{E} e^{- N^{(u)}f}
&= \operatorname{E} e^{- \sum_i \sum_j f(T^{(u)}_i, u Z_{ij})} \\
&= \operatorname{E} \biggl[ \prod_{i} \operatorname{E} \biggl( e^{- \sum_j f(T^{(u)}_i, u Z_{ij})} \,\bigg|\,(T^{(u)}_k)_k \biggr) \biggr]
= \operatorname{E} e^{\sum_i \log g(T^{(u)}_i)}.
\end{align*}
The right-hand side is the Laplace functional of a
homogeneous Poisson process on $[0,1]$ with intensity $\theta
u^{-\alpha}$ evaluated in the function $- \log g$. Therefore, it is
equal to
\[
\exp \biggl( - \int_0^1 \{1 - g(t)\} \theta u^{-\alpha} \, \mathrm{d} t \biggr),
\]
see for instance Embrechts et al.~\cite[Lemma~5.1.12]{Em97}; note
that $0 \le g \le 1$. By the definition of $g$, the
integral in the exponent is equal to the one in
\eqref{E:complete:limit}. This completes the proof of the
theorem.
\end{proof}
\section{Functional limit theorem}
\label{S:flt}
The main result in the paper states convergence of the partial sum process $V_n$ to a stable L\'evy process in the space $D[0, 1]$ equipped with Skorohod's $M_1$ topology. The core of the proof rests on an application of the continuous mapping theorem: the partial sum process $V_n$ is represented as the image of the time-space point process $N_n$ in \eqref{E:ppspacetime} under a certain summation functional. This summation functional enjoys the right continuity properties by which the weak convergence of $N_n$ in Theorem~\ref{T:pointprocess:complete} transfers to weak convergence of $V_n$.
The definition and basic properties of the $M_1$ topology are recalled in Subsection~\ref{SS:M1}. In Subsection~\ref{SS:sumfunct}, the focus is on the summation functional and its continuity properties. The main result of the paper then comes in Subsection~\ref{SS:main}, some discussion of which is provided in Subsection~\ref{SS:disc}.
\subsection{The $M_1$ topology}
\label{SS:M1}
The metric $d_{M_{1}}$ that generates the $M_{1}$ topology on $D[0, 1]$ is defined using completed graphs. For $x \in D[0,1]$ the \emph{completed graph} of $x$ is the set
\[
\Gamma_{x}
= \{ (t,z) \in [0,1] \times \mathbb{R} : z= \lambda x(t-) + (1-\lambda)x(t) \ \text{for some}\ \lambda \in [0,1] \},
\]
where $x(t-)$ is the left limit of $x$ at $t$. Besides the points of the graph $ \{ (t,x(t)) : t \in [0,1] \}$, the completed graph of $x$ also contains the vertical line segments joining $(t,x(t))$ and $(t,x(t-))$ for all discontinuity points $t$ of $x$. We define an \emph{order} on the graph $\Gamma_{x}$ by saying that $(t_{1},z_{1}) \le (t_{2},z_{2})$ if either (i) $t_{1} < t_{2}$ or (ii) $t_{1} = t_{2}$ and $|x(t_{1}-) - z_{1}| \le |x(t_{2}-) - z_{2}|$. A \emph{parametric representation} of the completed graph $\Gamma_{x}$ is a continuous nondecreasing function $(r,u)$ mapping $[0,1]$ onto $\Gamma_{x}$, with $r$ being the time component and $u$ being the spatial component. Let $\Pi(x)$ denote the set of parametric representations of the graph $\Gamma_{x}$. For $x_{1},x_{2} \in D[0,1]$ define
\[
d_{M_{1}}(x_{1},x_{2})
= \inf \{ \|r_{1}-r_{2}\|_{[0,1]} \vee \|u_{1}-u_{2}\|_{[0,1]} : (r_{i},u_{i}) \in \Pi(x_{i}), i=1,2 \},
\]
where $\|x\|_{[0,1]} = \sup \{ |x(t)| : t \in [0,1] \}$. This definition introduces $d_{M_{1}}$ as a metric on $D[0,1]$. The induced topology is called Skorohod's $M_{1}$ topology and is weaker than the more frequently used $J_{1}$ topology which is also due to Skorohod.
The $M_{1}$ topology allows for a jump in the limit function $x \in D[0, 1]$ to be approached by multiple jumps in the converging functions $x_n \in D[0, 1]$. Let for instance
\begin{align*}
x_n(t) &= \frac{1}{2} 1_{[\frac{1}{2}-\frac{1}{n},\,\frac{1}{2})}(t) + 1_{[\frac{1}{2},\,1]}(t), &
x(t) &= 1_{[\frac{1}{2},\,1]}(t),
\end{align*}
for $n \geq 3$ and $t \in [0, 1]$. Then $d_{M_{1}}(x_{n},x) \to 0$ as $n \to \infty$, although $(x_{n})_n$ does not converge to $x$ in either the uniform or the $J_{1}$ metric. For more discussion of the $M_{1}$ topology we refer to Avram and Taqqu~\cite{Avram92} and Whitt~\cite{Whitt02}.
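To see explicitly why $d_{M_{1}}(x_{n},x) \to 0$ in this example (a sketch using only the definitions above): the completed graph $\Gamma_{x}$ contains the vertical segment $\{\frac{1}{2}\} \times [0,1]$, so one can choose parametric representations $(r,u) \in \Pi(x)$ and $(r_{n},u_{n}) \in \Pi(x_{n})$ with identical spatial components, $u = u_{n}$, tracing the values $0 \to \frac{1}{2} \to 1$, and with time components satisfying $\|r - r_{n}\|_{[0,1]} \le \frac{1}{n}$. Hence
\[
d_{M_{1}}(x_{n},x) \le \tfrac{1}{n} \to 0, \qquad n \to \infty.
\]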
\subsection{Summation functional}
\label{SS:sumfunct}
Fix $0 < v < u < \infty$. The proof of our main theorem depends on the continuity properties of the summation functional
\[
\psi^{(u)} \colon \mathbf{M}_{p}([0,1] \times \mathbb{E}_{v}) \to D[0,1]
\]
defined by
\[
\psi^{(u)} \bigl( {\textstyle\sum}_{i}\delta_{(t_{i},\,x_{i})} \bigr) (t)
= \sum_{t_{i} \le t} x_{i} \,1_{\{u < |x_i| < \infty\}}, \qquad t \in [0, 1].
\]
Observe that $\psi^{(u)}$ is well defined because $[0,1] \times \mathbb{E}_{u}$ is a relatively compact subset of $[0,1] \times \mathbb{E}_{v}$. The space $\mathbf{M}_p$ of Radon point measures is equipped with the vague topology and $D[0, 1]$ is equipped with the $M_1$ topology.
We will show that $\psi^{(u)}$ is continuous on the set $\Lambda = \Lambda_{1} \cap \Lambda_{2}$, where
\begin{multline*}
\Lambda_{1} =
\{ \eta \in \mathbf{M}_{p}([0,1] \times \mathbb{E}_{v}) :
\eta ( \{0,1 \} \times \mathbb{E}_{u}) = 0 = \eta ([0,1] \times \{ \pm \infty, \pm u \}) \}, \\[1em]
\shoveleft \Lambda_{2} =
\{ \eta \in \mathbf{M}_{p}([0,1] \times \mathbb{E}_{v}) :
\eta ( \{ t \} \times (v, \infty]) \wedge \eta ( \{ t \} \times [-\infty,-v)) = 0 \\
\text{for all $t \in [0,1]$} \};
\end{multline*}
we write $s \wedge t$ for $\min (s,\,t)$. Observe that the elements
of $\Lambda_2$ have the property that atoms with the same time
coordinate are all on the same side of the time axis.
\begin{lem}
\label{l:prob1}
Assume that with probability one, the tail process $(Y_{i})_{i \in \mathbb{Z}}$ in \eqref{e:tailprocess} has no two values of the opposite sign. Then $ \Pr ( N^{(v)} \in \Lambda ) = 1$.
\end{lem}
\begin{proof}
From the definition of the tail process $(Y_{i})_{i \in \mathbb{Z}}$ we know that $\Pr(Y_{i}= \pm \infty)=0$ for any $i \in \mathbb{Z}$. Moreover, by the spectral decomposition $Y_i = |Y_0| \Theta_i$ into independent components $|Y_0|$ and $\Theta_i$ with $|Y_0|$ a Pareto random variable, it follows that the distribution of $Y_i$ cannot have any atoms except perhaps at the origin. As a consequence, it holds with probability one that $\sum_j \delta_{ v Y_{j}} (\{\pm u\} ) =0$ and thus that $\sum_j \delta_{ v Z_{ij}} (\{\pm u\} ) =0$ as well. Together with the fact that $\Pr( \sum_{i}\delta_{T_{i}^{(v)}} (\{0,1\}) = 0 )=1$ this implies $\Pr( N^{(v)} \in \Lambda_{1})=1$.
Second, the assumption that with probability one the tail process $(Y_{i})_{i \in \mathbb{Z}}$ has no two values of the opposite sign yields $\Pr(N^{(v)} \in \Lambda_{2})=1$.
\end{proof}
\begin{lem}
\label{L:contsf}
The summation functional $\psi^{(u)} \colon \mathbf{M}_{p}([0,1] \times \mathbb{E}_{v}) \to D[0,1]$ is continuous on the set $\Lambda$, when $D[0,1]$ is endowed with Skorohod's $M_{1}$ metric.
\end{lem}
\begin{proof}
Suppose that $\eta_{n} \xrightarrow{v} \eta$ in $\mathbf{M}_p$ for some $\eta \in \Lambda$. We will show that $\psi^{(u)}(\eta_n) \to \psi^{(u)}(\eta)$ in $D[0, 1]$ according to the $M_1$ topology. By Corollary~12.5.1 in Whitt~\cite{Whitt02}, $M_1$ convergence for monotone functions amounts to pointwise convergence in a dense subset of points plus convergence at the endpoints. Our proof is based on an extension of this criterion to piecewise monotone functions. This cut-and-paste approach is justified in view of Lemma~12.9.2 in Whitt~\cite{Whitt02}, provided that the limit function is continuous at the cutting points.
As $[0, 1] \times \mathbb{E}_u$ is relatively compact in $[0, 1] \times \mathbb{E}_v$ there exists a nonnegative integer $k=k(\eta)$ such that
$$ \eta ( [0,1] \times \mathbb{E}_{u}) = k < \infty.$$
By assumption, $\eta$ does not have any atoms on the horizontal lines at heights $u$ and $-u$. As a consequence, by Lemma~7.1 in Resnick~\cite{Resnick07} there exists a positive integer $n_{0}$ such that for all $n \ge n_{0}$ it holds that
$$ \eta_{n} ( [0,1] \times \mathbb{E}_{u} ) = k.$$
If $k = 0$, there is nothing to prove, so assume $k \ge 1$ and let $(t_i, x_i)$ for $i \in \{1, \ldots, k\}$ be the atoms of $\eta$ in $[0,1] \times {\mathbb{E}}_u$. By the same lemma, the $k$ atoms $(t_{i}^{(n)},\,x_{i}^{(n)})$ of $\eta_n$ in $[0, 1] \times \mathbb{E}_u$ (for $n \ge n_0$) can be labelled in such a way that for $i \in \{1, \ldots, k\}$ we have
\[
(t_{i}^{(n)},x_{i}^{(n)}) \to (t_{i},x_{i}), \qquad \text{as} \ n \to \infty.
\]
In particular, for any $\delta >0$ we can find a positive integer $n_{\delta}$ such that for all $n \ge n_{\delta}$,
\begin{gather}
\eta_{n} ([0,1] \times \mathbb{E}_{u}) = k, \nonumber \\
|t_{i}^{(n)} - t_{i}| < \delta \quad \text{and} \quad
|x_{i}^{(n)} - x_{i}| < \delta, \quad \text{for} \ i=1, \ldots , k. \label{e:pointsdif}
\end{gather}
Let the sequence
$$ 0< \tau_{1} < \tau_{2} < \ldots < \tau_{p} < 1$$
be such that the sets $\{ \tau_{1}, \ldots , \tau_{p} \}$ and
$\{t_{1}, \ldots , t_{k} \}$ coincide. Note that $p \le k$
always holds, but since $\eta$ can have several atoms with the same
time coordinate, equality does not hold in general. Put
$\tau_{0}=0$, $\tau_{p+1}=1$ and take
$$ 0 < r < \frac{1}{2}\min_{0 \le i \le p}|\tau_{i+1} -
\tau_{i}|.$$
For any $t \in [0,1] \setminus \{ \tau_{1}, \ldots ,
\tau_{p} \}$ we can find $\delta \in (0,u)$ such that
$$ \delta < r \quad \textrm{and} \quad \delta < \min_{1 \le i \le
p} |t - \tau_{i}|.$$
Then relation \eqref{e:pointsdif}, for $n \ge n_{\delta}$,
implies that $t_{i}^{(n)} \le t$ is equivalent to $t_{i} \le
t$, and we obtain
$$ |\psi^{(u)}(\eta_{n})(t) - \psi^{(u)}(\eta)(t)| = \bigg| \sum_{t_{i}^{(n)} \le
t}x_{i}^{(n)} - \sum_{t_{i} \le
t}x_{i} \bigg| \le \sum_{t_{i} \le t}\delta \le k\delta.$$
Therefore
$$\limsup_{n \to \infty} |\psi^{(u)}(\eta_{n})(t) - \psi^{(u)}(\eta)(t)| \le k\delta,$$
and if we let $\delta \to 0$, it follows that
$\psi^{(u)}(\eta_{n})(t) \to \psi^{(u)}(\eta)(t)$ as $n \to
\infty$.
Put
\[
v_i = \tau_i + r, \qquad i \in \{1, \ldots, p\}.
\]
For any $\delta < u \wedge r$, relation \eqref{e:pointsdif} and the
fact that $\eta \in \Lambda$ imply that the functions
$\psi^{(u)}(\eta)$ and $\psi^{(u)}(\eta_{n})$ ($n \ge n_{\delta}$)
are monotone on each of the intervals $[0,v_{1}], [v_{1},v_{2}],
\ldots , [v_{p},1]$. A combination of Corollary~12.5.1 and Lemma
12.9.2 in Whitt~\cite{Whitt02} yields
$d_{M_{1}}(\psi^{(u)}(\eta_{n}), \psi^{(u)}(\eta)) \to 0$ as $n \to
\infty$. The application of Lemma~12.9.2 is justified by continuity
of $\psi^{(u)}(\eta)$ in the boundary points $v_{1}, \ldots ,
v_{p}$. We conclude that $\psi^{(u)}$ is continuous at $\eta$.
\end{proof}
\subsection{Main theorem}
\label{SS:main}
Let $(X_n)_n$ be a strictly stationary sequence of random variables,
jointly regularly varying with index $\alpha \in (0, 2)$ and tail
process $(Y_i)_{i \in \mathbb{Z}}$. The theorem below gives conditions
under which its partial sum process satisfies a nonstandard
functional limit theorem with a non-Gaussian $\alpha$--stable
L\'{e}vy process as a limit. Recall that the distribution of a
L\'{e}vy process $V(\,\cdot\,)$ is characterized by its
\emph{characteristic triple}, i.e.\ the characteristic triple of the
infinitely divisible distribution of $V(1)$. The characteristic
function of $V(1)$ and the characteristic triple $(a, \nu, b)$ are
related in the following way:
\[
\Exp [e^{izV(1)}] = \exp \biggl( -\frac{1}{2}az^{2} + ibz + \int_{\mathbb{R}} \bigl( e^{izx}-1-izx 1_{[-1,1]}(x) \bigr)\,\nu(\mathrm{d} x) \biggr)
\]
for $z \in \mathbb{R}$; here $a \ge 0$, $b \in \mathbb{R}$ are constants, and $\nu$ is a measure on $\mathbb{R}$ satisfying
\[
\nu ( \{0\})=0 \qquad \text{and} \qquad \int_{\mathbb{R}}(|x|^{2} \wedge 1)\,\nu(\mathrm{d} x) < \infty,
\]
that is, $\nu$ is a L\'{e}vy measure. For a textbook treatment of
L\'{e}vy processes we refer to Bertoin~\cite{Bertoin96} and
Sato~\cite{Sato99}. The description of the L\'{e}vy triple of the
limit process will be in terms of the measures $\nu^{(u)}$ ($u > 0$)
on $\mathbb{E}$ defined for $x > 0$ by
\begin{equation}
\label{E:nuu}
\begin{array}{rl}
\nu^{(u)}(x, \infty) &= \displaystyle u^{-\alpha} \, \Pr \biggl( u \sum_{i \ge 0} Y_i \, 1_{\{|Y_i| > 1\}} > x, \, \sup_{i \le -1} |Y_i| \le 1 \biggr), \\[1em]
\nu^{(u)}(-\infty, -x) &= \displaystyle u^{-\alpha} \, \Pr \biggl( u \sum_{i \ge 0} Y_i \, 1_{\{|Y_i| > 1\}} < -x, \, \sup_{i \le -1} |Y_i| \le 1 \biggr).
\end{array}
\end{equation}
In case $\alpha \in [1, 2)$, we will need to assume that the contribution of the smaller increments of the partial sum process is close to its expectation.
\begin{cond}
\label{c:step6cond}
For all $\delta > 0$,
\[
\lim_{u \downarrow 0} \limsup_{n \to \infty} \Pr \bigg[
\max_{0 \le k \le n} \bigg| \sum_{i=1}^{k} \bigg( \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg) \bigg| > \delta
\bigg]=0.
\]
\end{cond}
Condition~\ref{c:step6cond} holds for instance if $(X_n)_n$ is $\rho$--mixing at a certain rate, see Proposition~\ref{p:rhomix} in Subsection~\ref{SS:disc}, in which some variations of Theorem~\ref{t:2} are discussed as well.
\begin{thm}
\label{t:2}
Let $(X_{n})_{n \in \mathbb{N}}$ be a strictly stationary sequence of random variables, jointly regularly varying with index $\alpha\in(0,2)$, and of which the tail process $(Y_{i})_{i \in \mathbb{Z}}$ almost surely has no two values of the opposite sign. Suppose that Conditions~\ref{c:anticluster} and \ref{c:mixcond} hold. If $1 \le \alpha < 2$, also suppose that Condition~\ref{c:step6cond} holds. Then the partial sum stochastic process
\begin{equation*}
V_{n}(t) =
\sum_{k=1}^{\lfloor nt \rfloor} \frac{X_{k}}{a_{n}} - \lfloor nt \rfloor \Exp \bigg( \frac{X_{1}}{a_{n}} 1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le 1 \big\} } \bigg), \quad t \in [0,1],
\end{equation*}
satisfies
\[
V_{n} \xrightarrow{d} V, \qquad n \to \infty,
\]
in $D[0,1]$ endowed with the $M_{1}$ topology, where $V(\,\cdot\,)$
is an $\alpha$--stable L\'{e}vy process with L\'{e}vy triple $(0,
\nu, b)$ given by the limits
\begin{align*}
\nu^{(u)} &\xrightarrow{v} \nu, & \int_{x : u < |x| \le 1} x \, \nu^{(u)}(\mathrm{d} x) - \int_{x : u < |x| \le 1} x \, \mu(\mathrm{d} x) &\to b
\end{align*}
as $u \downarrow 0$, with $\nu^{(u)}$ as in \eqref{E:nuu} and $\mu$ as in \eqref{E:mu}.
\end{thm}
\begin{proof}
Note that from Theorem~\ref{T:pointprocess:complete} and the fact
that $|Y_{n}| \to 0$ almost surely as $|n| \to \infty$, the random
variables
\[
u \sum_{j}Z_{ij}1_{\{ |Z_{ij}|>1 \}}
\]
are i.i.d.\ and almost surely finite. Define
$$ \widehat{N}^{(u)} = \sum_{i} \delta_{(T_{i}^{(u)},\,u\sum_{j}Z_{ij}1_{\{ |Z_{ij}|>1
\}})}.$$
Then by Proposition~5.3 in Resnick~\cite{Resnick07}, $\widehat{N}^{(u)}$ is a Poisson process (or a Poisson random measure) with mean measure \begin{equation}\label{e:prodmeas}
\theta u^{-\alpha} \lambda \times F^{(u)},
\end{equation}
where $\lambda$ is the Lebesgue measure and $F^{(u)}$ is the distribution of the random variable $u \sum_{j}Z_{1j}1_{\{ |Z_{1j}|>1 \}}$. But for $0 \le s < t \le 1$ and $x>0$, using the fact that the distribution of $\sum_{j}\delta_{Z_{1j}}$ is equal to the one of $\sum_{j}\delta_{Y_{j}}$ conditionally on the event $\{ \sup_{i \le -1}|Y_{i}| \le 1 \}$, we have
\begin{align*}
\lefteqn{ \theta u^{-\alpha} \lambda \times F^{(u)} ([s,t] \times
(x,\infty)) = \theta u^{-\alpha} (t-s)
F^{(u)}((x,\infty))} \\
& = \theta u^{-\alpha} (t-s)
\Pr \biggl( u \sum_{j}Z_{1j}1_{\{ |Z_{1j}|>1 \}} >x \biggr)
\\
& = \theta u^{-\alpha} (t-s) \Pr \biggl( u \sum_{j}Y_{j}1_{\{ |Y_{j}|>1 \}}
>x\,\bigg|\, \sup_{i \le -1}|Y_{i}| \le 1 \biggr)\\
& = \theta u^{-\alpha} (t-s)
\frac{\Pr \bigl( u \sum_{j}Y_{j}1_{\{ |Y_{j}|>1 \}} >x,\,\sup_{i\le -1}|Y_{i}| \le 1
\bigr)}{\Pr(\sup_{i \le -1}|Y_{i}| \le 1)} \\
& = u^{-\alpha} (t-s)
\Pr \biggl( u \sum_{j}Y_{j}1_{\{ |Y_{j}|>1 \}}
>x,\,\sup_{i\le -1}|Y_{i}| \le 1 \biggr)\\%[0.4em]
& = \lambda \times \nu^{(u)}([s,t] \times (x, \infty)).
\end{align*}
The same can be done for the set $[s, t] \times (-\infty, -x)$, so that the mean measure in \eqref{e:prodmeas} is equal to $\lambda \times \nu^{(u)}$.
Consider now $0<v<u$ and
$$ \psi^{(u)} (N_{n}\,|\,_{[0,1] \times \mathbb{E}_{u}}) (\,\cdot\,)
= \psi^{(u)} (N_{n}\,|\,_{[0,1] \times \mathbb{E}_{v}}) (\,\cdot\,)
= \sum_{i/n \le \, \cdot} \frac{X_{i}}{a_{n}} 1_{ \big\{ \frac{|X_{i}|}{a_{n}} > u
\big\} },$$
which by Lemma~\ref{L:contsf} converges in distribution in $D[0,1]$ under the $M_{1}$ metric to
$$
\psi^{(u)}
(N^{(v)})(\,\cdot\,)
=\psi^{(u)} (N^{(v)}\,|\,_{[0,1] \times \mathbb{E}_{u}})(\,\cdot\,).
$$
However, by the definition of the process $N^{(u)}$ in
Theorem~\ref{T:pointprocess:complete} it holds that
$$ N^{(u)} \stackrel{d}{=}
N^{(v)} \bigg|_{[0, 1] \times \mathbb{E}_u}\,, $$ for every $v\in (0,u)$.
Therefore the last expression above is equal in distribution to
$$
\psi^{(u)} (N^{(u)})(\,\cdot\,)
= \sum_{T_{i}^{(u)} \le \, \cdot}
\sum_{j}uZ_{ij}1_{ \{ |Z_{ij}| > 1 \} }.
$$
But since
$\psi^{(u)}(N^{(u)}) = \psi^{(u)} (\widehat{N}^{(u)})\,\stackrel{d}{=}\,\psi^{(u)} (\widetilde{N}^{(u)})$,
where
$$ \widetilde{N}^{(u)} = \sum_{i} \delta_{(T_{i},\,K_{i}^{(u)})}
$$
is a Poisson process with mean measure $\lambda \times \nu^{(u)}$,
we obtain
$$ \sum_{i = 1}^{\lfloor n \, \cdot \, \rfloor} \frac{X_{i}}{a_{n}} 1_{ \big\{ \frac{|X_{i}|}{a_{n}} > u
\big\} } \xrightarrow{d} \sum_{T_{i} \le \, \cdot} K_{i}^{(u)}, \quad \text{as} \ n \to \infty,$$
in $D[0,1]$ under the $M_{1}$ metric. From (\ref{e:onedimregvar}) we
have, for any $t \in [0,1]$, as $n \to \infty$,
\begin{eqnarray*}
\lfloor nt \rfloor \Exp \bigg( \frac{X_{1}}{a_{n}} \, 1_{ \big\{ u < \frac{|X_{1}|}{a_{n}} \le 1 \big\} }
\bigg) & = & \frac{\lfloor nt \rfloor}{n} \int_{\{x\,:\,u < |x| \le 1 \}}x
n \Pr \bigg( \frac{X_{1}}{a_{n}} \in \mathrm{d} x \bigg) \\[0.5em]
& \to & t \int_{\{x\,:\,u < |x| \le 1 \}}x \, \mu(\mathrm{d} x).
\end{eqnarray*}
This convergence is uniform in $t$ and hence
$$ \lfloor n \, \cdot \, \rfloor \Exp \bigg( \frac{X_{1}}{a_{n}} 1_{ \big\{ u < \frac{|X_{1}|}{a_{n}} \le 1 \big\} }
\bigg) \to (\,\cdot\,) \int_{\{x\,:\,u < |x| \le 1 \}}x \, \mu(\mathrm{d} x)$$
in $D[0,1]$.
Since the latter function is continuous, we can apply Corollary~12.7.1 in Whitt~\cite{Whitt02}, giving a sufficient
criterion for addition to be continuous. We obtain, as $n \to \infty$,
\begin{multline}
\label{e:mainconv}
V_{n}^{(u)}(\,\cdot\,) = \sum_{i = 1}^{\lfloor n \, \cdot \, \rfloor} \frac{X_{i}}{a_{n}}
1_{ \bigl\{ \frac{|X_{i}|}{a_{n}} > u \bigr\} } - \lfloor n \,\cdot \, \rfloor
\Exp \biggl( \frac{X_{1}}{a_{n}}
1_{ \bigl\{ u < \frac{|X_{1}|}{a_{n}} \le 1 \bigr\} }
\biggr) \\%[0.4em]
\xrightarrow{d} V^{(u)}(\,\cdot\,) := \sum_{T_{i} \le \, \cdot}
K_{i}^{(u)} - (\,\cdot\,) \int_{\{x\,:\,u < |x| \le 1 \}}x \, \mu(\mathrm{d} x).
\end{multline}
The limit (\ref{e:mainconv}) can be rewritten as
\begin{multline*}
\sum_{T_{i} \le \,\cdot}
K_{i}^{(u)} - (\,\cdot\,) \int_{\{x\,:\,u < |x| \le 1 \}}x \, \nu^{(u)}(\mathrm{d} x) \\
+ (\,\cdot\,) \biggl( \int_{\{x\,:\,u < |x| \le 1 \}}x \, \nu^{(u)}(\mathrm{d} x)
- \int_{\{x\,:\,u < |x| \le 1 \}}x \, \mu(\mathrm{d} x) \biggr).
\end{multline*}
Note that the first two terms constitute the L\'{e}vy--It\^{o}
representation of the L\'{e}vy process with characteristic triple
$(0, \nu^{(u)}, 0)$, see Resnick~\cite[p.\ 150]{Resnick07}. The
remaining term is just a linear function of the form $t \mapsto t \,
b_{u}$. As a consequence, the process $V^{(u)}$ is a L\'{e}vy
process for each $u<1$, with characteristic triple $(0, \nu^{(u)},
b_{u})$, where
$$ b_{u} = \int_{\{x\,:\,u < |x| \le 1 \}}x \,
\nu^{(u)}(\mathrm{d} x) - \int_{\{x\,:\,u < |x| \le 1 \}}x \,
\mu(\mathrm{d} x).$$
By Theorem~3.1 in Davis and Hsing~\cite{DaHs95}, for $t=1$,
$V^{(u)}(1)$ converges to an $\alpha$--stable random variable. Hence
by Theorem~13.17 in Kallenberg~\cite{Kallenberg97}, there is a
L\'{e}vy process $V(\,\cdot\,)$ such that, as $u \to 0$,
$$ V^{(u)}(\,\cdot\,) \xrightarrow{d} V(\,\cdot\,)$$
in $D[0,1]$ with the $M_{1}$ metric. It has characteristic triple
$(0, \nu, b)$, where $\nu$ is the vague limit of $\nu^{(u)}$ as $u
\to 0$ and $b=\lim_{u \to 0}b_{u}$, see Theorem~13.14
in~\cite{Kallenberg97}. Since the random variable $V(1)$ has an
$\alpha$--stable distribution, it follows that the process
$V(\,\cdot\,)$ is $\alpha$--stable.
If we show that
$$ \lim_{u \downarrow 0} \limsup_{n \to \infty}
\Pr[d_{M_{1}}(V_{n}^{(u)}, V_{n}) > \delta]=0$$
for any $\delta>0$, then by Theorem~3.5 in Resnick~\cite{Resnick07} we
will have, as $n \to \infty$,
$$ V_{n} \xrightarrow{d} V$$
in $D[0,1]$ with the $M_{1}$ metric. Since the Skorohod $M_{1}$
metric on $D[0,1]$ is bounded above by the uniform metric on
$D[0,1]$, it suffices to show that
$$ \lim_{u \downarrow 0} \limsup_{n \to \infty} \Pr \biggl(
\sup_{0 \le t \le 1} |V_{n}^{(u)}(t) - V_{n}(t)| >
\delta \biggr)=0.$$
Recalling the definitions, we have
\begin{equation*}
\begin{split}
\lim_{u \downarrow 0} & \limsup_{n \to \infty} \Pr \bigg(
\sup_{0 \le t \le 1} |V_{n}^{(u)}(t) - V_{n}(t)| > \delta \bigg) \\
& = \lim_{u \downarrow 0} \limsup_{n \to \infty} \Pr \bigg[
\sup_{0 \le t \le 1} \bigg| \sum_{i=1}^{\lfloor nt \rfloor} \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \lfloor nt \rfloor \Exp \bigg( \frac{X_{1}}{a_{n}}
1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le u \big\} } \bigg) \bigg| > \delta
\bigg]\\
& =\lim_{u \downarrow 0} \limsup_{n \to \infty} \Pr \bigg[
\sup_{0 \le t \le 1} \bigg| \sum_{i=1}^{\lfloor nt \rfloor} \bigg\{ \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg\} \bigg| > \delta
\bigg]\\
& = \lim_{u \downarrow 0} \limsup_{n \to \infty} \Pr \bigg[
\max_{1 \le k \le n} \bigg| \sum_{i=1}^{k} \bigg\{ \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg\} \bigg| > \delta
\bigg].
\end{split}
\end{equation*}
Therefore we have to show
\begin{equation}\label{e:slutskycond}
\lim_{u \downarrow 0} \limsup_{n \to \infty} \Pr \bigg[
\max_{1 \le k \le n} \bigg| \sum_{i=1}^{k} \bigg\{ \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}}
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg\} \bigg| > \delta
\bigg]=0.
\end{equation}
For $\alpha \in [1,2)$ this relation is simply Condition~\ref{c:step6cond}. Therefore it remains to show
(\ref{e:slutskycond}) for the case when $\alpha \in (0,1)$. Hence
assume $\alpha \in (0,1)$. For an arbitrary (and fixed)
$\delta >0$ define
$$ I(u,n) = \Pr \bigg[
\max_{1 \le k \le n} \bigg| \sum_{i=1}^{k} \bigg\{ \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg\} \bigg| > \delta
\bigg].$$
Using stationarity and Chebyshev's inequality we get the bound
\begin{eqnarray}\label{e:alpha01}
\nonumber I(u,\,n) & \le & \Pr \bigg[ \sum_{i=1}^{n} \bigg| \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg| > \delta
\bigg]\\%[0.3em]
\nonumber & \le & \delta^{-1} \Exp \bigg[ \sum_{i=1}^{n} \bigg| \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg|
\bigg]\\%[0.3em]
\nonumber & \le & 2 \delta^{-1} n \Exp \bigg( \frac{|X_{1}|}{a_{n}} \, 1_{ \big\{ \frac{|X_{1}|}{a_{n}}
\le u \big\} } \bigg)\\%[0.3em]
& = & 2 \delta^{-1}u \cdot n \Pr (|X_{1}|>a_{n}) \cdot \frac{\Pr(|X_{1}|>ua_{n})}{\Pr(|X_{1}|>a_{n})}
\cdot \frac{\Exp(|X_{1}| \, 1_{ \{ |X_{1}| \le u a_{n} \}
})}{ua_{n}\Pr(|X_{1}|>ua_{n})}.
\end{eqnarray}
Since $X_{1}$ is a regularly varying random variable with index
$\alpha$, it follows immediately that
$$ \frac{\Pr(|X_{1}|>ua_{n})}{\Pr(|X_{1}|>a_{n})} \to
u^{-\alpha},$$
as $n \to \infty$. By Karamata's theorem
$$ \lim_{n \to \infty} \frac{\Exp(|X_{1}| \, 1_{ \{ |X_{1}| \le u a_{n} \}
})}{ua_{n}\Pr(|X_{1}|>ua_{n})} =
\frac{\alpha}{1-\alpha}.$$
Thus from (\ref{e:alpha01}), taking into account
relation (\ref{e:niz}), we get
$$ \limsup_{n \to \infty} I(u,\,n) \le 2\delta^{-1}
\frac{\alpha}{1-\alpha}u^{1-\alpha}.$$
Letting $u \to 0$, since $1-\alpha >0$, we finally obtain
$$ \lim_{u \downarrow 0} \limsup_{n \to \infty} I(u,\,n)=0,$$
and relation (\ref{e:slutskycond}) holds.
Therefore $V_{n} \xrightarrow{d} V$ as $n \to \infty$ in
$D[0,1]$ endowed with the $M_{1}$ topology.
\end{proof}
\subsection{Discussion}
\label{SS:disc}
Here we revisit the conditions and the conclusions of
Theorem~\ref{t:2} and provide some additional insights. If the
spectral tail process $(\Theta_i)_{i \in \mathbb{Z}}$ satisfies a certain
integrability condition, the formula for the L\'evy measure $\nu$
simplifies considerably (Remark~\ref{R:LevyChar3}). In case $\alpha
\in (0, 1)$, the centering function in the definition of $V_n$ can
be removed (Remark~\ref{r:cent}). In the other case, $\alpha \in [1,
2)$, the centering function cannot be omitted, and one way of
checking Condition~\ref{c:step6cond} is via $\rho$--mixing
(Proposition~\ref{p:rhomix}). Finally, convergence in the $L_1$
metric does not require the assumption about the signs in the tail
process, while convergence in the $J_1$ metric is possible if the
definition of the partial sum process is altered in suitable way
(Remark~\ref{r:L1conv}).
\begin{rem}\label{R:LevyChar3}
The L\'evy measure $\nu$ satisfies the scaling property
\[
\nu (s \, \cdot \,) = s^{-\alpha} \nu(\, \cdot \,),
\]
see Theorem~14.3 in Sato~\cite{Sato99}. In particular, $\nu$ can be written as
\[
\nu(\mathrm{d} x)
= \bigl( c_+ \, 1_{(0, \infty)}(x) + c_- \, 1_{(-\infty,0)}(x) \bigr) \, \alpha |x|^{-\alpha-1} \, \mathrm{d} x,
\]
for some nonnegative constants $c_+$ and $c_-$, and therefore $\nu(\{x \})=0$ for every $x \in \mathbb{E}$. Thus, from Theorem~3.2 in Resnick~\cite{Resnick07} we have
\begin{align*}
c_+
&= \nu(1,\infty) = \lim_{u \to 0} \nu^{(u)}(1,\infty) \\
&= \lim_{u \to 0} u^{-\alpha} \, \Pr \biggl( u \sum_{i \ge 0} Y_i \, 1_{\{|Y_i| > 1\}} > 1, \, \sup_{i \le -1} |Y_i| \le 1 \biggr) \\
&= \lim_{u \to 0} u^{-\alpha} \, \int_1^\infty \Pr \biggl( u \sum_{i \ge 0} r \Theta_i \, 1_{\{r|\Theta_i| > 1\}} > 1, \, \sup_{i \le -1} r|\Theta_i| \le 1 \biggr) \,
\mathrm{d}(-r^{-\alpha}) \\
&= \lim_{u \to 0} \int_{u}^{\infty}
\Pr \biggl( \sum_{i \ge 0} r \Theta_{i} \, 1_{\{ r|\Theta_{i}|>u \}} > 1, \, \sup_{i \le -1} r|\Theta_{i}| \le u \biggr)\,\mathrm{d}(-r^{-\alpha}),
\end{align*}
and similarly
\[
c_-
= \lim_{u \to 0} \int_{u}^{\infty}
\Pr \bigg( \sum_{i \ge 0} r \Theta_{i} \, 1_{\{ r|\Theta_{i}|>u \}} < -1,\,\sup_{i \le -1}r|\Theta_{i}| \le u \bigg)
\, \mathrm{d}(-r^{-\alpha}).
\]
Now suppose further that
\begin{equation}
\label{E:sum:finite}
\Exp \bigl[ \bigl( {\textstyle\sum}_{i \ge 0} |\Theta_i| \bigr)^\alpha \bigr] < \infty.
\end{equation}
Then by the dominated convergence theorem,
\begin{align}
\label{E:cplus}
c_+
&= \int_0^\infty \Pr \biggl( \sum_{i \ge 0} r \Theta_i > 1 ; \; \forall i \le -1 : \Theta_i = 0 \biggr) \, \mathrm{d}(-r^{-\alpha}) \\
&= \Exp \bigl[ \bigl\{ \max \bigl( {\textstyle\sum}_{i \ge 0} \Theta_i, 0 \bigr) \bigr\}^\alpha \, 1_{\{\forall i \le -1 : \Theta_i = 0\}} \bigr], \nonumber \\[1em]
\label{E:cminus}
c_-
&= \Exp \bigl[ \bigl\{ \max \bigl( - {\textstyle\sum}_{i \ge 0} \Theta_i, 0 \bigr) \bigr\}^\alpha \, 1_{\{\forall i \le -1 : \Theta_i = 0\}} \bigr].
\end{align}
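As a quick check of \eqref{E:cplus} and \eqref{E:cminus}, consider the i.i.d.\ case: there $\Theta_i = 0$ for all $i \ne 0$, condition \eqref{E:sum:finite} holds trivially, and
\[
c_+ = \Exp \bigl[ \{ \max (\Theta_0, 0) \}^\alpha \bigr] = \Pr(\Theta_0 = 1) = p,
\qquad
c_- = \Pr(\Theta_0 = -1) = q,
\]
so that $\nu = \mu$, in accordance with the classical stable limit theorem for i.i.d.\ sequences.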
These relations can be applied to obtain the L\'evy measure $\nu$ for certain heavy-tailed moving average processes (Example~\ref{ex:FiniteMA}). \qed
\end{rem}
\begin{rem}\label{r:cent}
If $\alpha \in (0,1)$, the centering function in the definition of
the stochastic process $V_{n}(\,\cdot\,)$ can be removed, and this
removal affects the characteristic triple of the limiting process
as we now describe.
By Karamata's theorem, as $n \to
\infty$,
$$ n \Exp \bigg( \frac{X_{1}}{a_{n}} \, 1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le 1 \big\} }
\bigg) \to (p-q) \frac{\alpha}{1-\alpha},$$
with $p$ and $q$ as in \eqref{E:mu}. Thus, as $n \to \infty$,
$$ \lfloor n \, \cdot \, \rfloor \Exp \bigg( \frac{X_{1}}{a_{n}} \, 1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le 1 \big\} }
\bigg) \to (\,\cdot\,)(p-q) \frac{\alpha}{1-\alpha}$$
in $D[0,1]$, which leads to
$$ \sum_{k=1}^{\lfloor n \, \cdot \, \rfloor} \frac{X_{k}}{a_{n}}
\xrightarrow{d} V(\,\cdot\,) + (\,\cdot\,) (p-q)\frac{\alpha}{1-\alpha}$$
in $D[0,1]$ endowed with the $M_{1}$ topology. The characteristic
triple of the limiting process is therefore $(0, \nu, b')$ with $b' = b + (p-q)\alpha / (1-\alpha)$. \qed
\end{rem}
Condition~\ref{c:step6cond} is in general difficult to check. The
next proposition gives one sufficient condition for Condition~\ref{c:step6cond} to hold. It contains the notion of
$\rho$-mixing. We say that a strictly stationary sequence of random
variables $(X_{i})_{i \in \mathbb{Z}}$ is \emph{$\rho$-mixing} if
$$ \rho_{n} = \sup \{ |\operatorname{corr} (Y, Z)| : Y \in
L^{2}(\mathcal{F}_{-\infty}^{0}),\,Z \in
L^{2}(\mathcal{F}_{n}^{\infty}) \} \to 0 \quad \textrm{as} \ n \to \infty.$$
Note that $\rho$-mixing implies strong mixing, whereas the converse in
general does not hold, see Bradley~\cite{Bradley05}.
\begin{prop}
\label{p:rhomix}
Let $(X_{n})_n$ be a strictly stationary sequence of regularly varying random variables with index $\alpha \in [1,2)$, and $(a_{n})_n$ a sequence of positive real numbers such that \eqref{e:niz} holds. If the sequence $(X_{n})_n$ is $\rho$-mixing with
\[
\sum_{j \ge 0} \rho_{\floor{2^{j/3}}} < \infty,
\]
then Condition~\ref{c:step6cond} holds.
\end{prop}
\begin{proof}
Let $\delta >0$ be arbitrary. As in the proof of Theorem~\ref{t:2}, define
$$ I(u,n) = \Pr \bigg[
\max_{0 \le k \le n} \bigg| \sum_{i=1}^{k} \bigg\{ \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{i}}{a_{n}} \,
1_{ \big\{ \frac{|X_{i}|}{a_{n}} \le u \big\} } \bigg) \bigg\} \bigg| > \delta
\bigg].$$
Then from Corollary~2.1 in Peligrad~\cite{Peligrad99} we obtain
\begin{multline*}
I(u,n) \le \delta^{-2} C \, \exp \bigg( 8 \sum_{j=0}^{\lfloor \log_{2}n \rfloor} \rho_{\lfloor 2^{j/3} \rfloor} \bigg) \\
n \Exp \bigg[ \bigg\{ \frac{X_{1}}{a_{n}} \;
1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le u \big\} } - \Exp \bigg( \frac{X_{1}}{a_{n}} \,
1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le u \big\} } \bigg) \bigg\}^{2} \bigg],
\end{multline*}
for some positive constant $C$. By assumption there exists a constant $L>0$ such that, for all $n \in
\mathbb{N}$,
\begin{eqnarray*}
I(u,n) & \le & CL\delta^{-2} n \Exp \biggl[ \biggl(
\frac{X_{1}}{a_{n}} \, 1_{ \big\{ \frac{|X_{1}|}{a_{n}} \le u \big\} } \biggr)^{2}
\biggr] \\%[0.4em]
& = & CL \delta^{-2} u^{2} \cdot
\frac{\Exp (X_{1}^{2}1_{ \{ |X_{1}| \le ua_{n} \} } )}{(ua_{n})^{2} \Pr (|X_{1}| > ua_{n})}
\cdot n \Pr (|X_{1}| > ua_{n}).
\end{eqnarray*}
Now using Karamata's theorem and the fact
that $X_{1}$ is regularly varying, we obtain
$$ \limsup_{n \to \infty} I(u,n) \le
CL\delta^{-2} \frac{\alpha}{2-\alpha} u^{2-\alpha}.$$
Since $2- \alpha >0$, we find
$\lim_{u \downarrow 0} \limsup_{n \to \infty} I(u,n) = 0$, yielding Condition~\ref{c:step6cond}.
\end{proof}
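Note that the summability condition on $(\rho_{n})_n$ in Proposition~\ref{p:rhomix} holds in particular for geometric mixing rates (a simple sufficient check): if $\rho_{n} \le C r^{n}$ for some $C>0$ and $r \in (0,1)$, then
\[
\sum_{j \ge 0} \rho_{\floor{2^{j/3}}} \le C \sum_{j \ge 0} r^{2^{j/3}-1} < \infty,
\]
since $2^{j/3} \ge 1 + \frac{j}{3} \ln 2$ for all $j \ge 0$, so the terms are dominated by a geometric sequence.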
\begin{rem}
\label{r:L1conv} The assumption that the tail process has no two
values of the opposite sign almost surely is crucial to obtain
weak convergence of the partial sum process in the $M_{1}$
topology. If we drop this assumption, then it is still possible to
obtain the convergence, but in the topology induced by the $L_{1}$
metric on $D[0,1]$. It is known that the $L_{1}$ topology is
weaker than the $M_{1}$ topology (see Whitt~\cite[p.~460]{Whitt02}).
The key step is to prove a version of Lemma~\ref{L:contsf} with
$D[0,1]$ endowed with the $L_{1}$ topology. Then one can repeat the
proof of Theorem \ref{t:2}, with all the assumptions from that
theorem apart from the assumption on the tail process, to obtain the
weak convergence of the partial sum process $V_{n}(\,\cdot\,)$ to a
stable L\'{e}vy process in $D[0,1]$ under the $L_{1}$ metric. For
details, we refer to~Krizmani\'c~\cite{Kr10}.
Theorem~\ref{t:2} becomes false if we replace the $M_1$ topology by Skorohod's $J_{1}$ topology: for finite order moving average processes with at least two nonzero coefficients, Theorem~1 in Avram and Taqqu~\cite{Avram92} shows that the sequence of partial sum stochastic processes $V_{n}$ cannot have a weak limit in the $J_{1}$ topology. Still, by altering the definition of the partial sum process so as to kill the within-cluster fluctuations, one can recover the $J_{1}$ convergence for mixing sequences as well. Instead of the process $t \mapsto S_{\lfloor nt \rfloor}$, one might for instance consider the process $t \mapsto S_{r_{n} \lfloor k_{n}t \rfloor}$. Again, we decided not to pursue this here. \qed
\end{rem}
\section{Examples}
\label{S:examples}
In case of asymptotic independence, the limiting stable L\'evy process is the same as in the case of an i.i.d.\ sequence with the same marginal distribution (Examples~\ref{ex:D'cond} and \ref{ex:StochVol}). Heavy-tailed moving averages and GARCH(1,1) processes (Examples~\ref{ex:FiniteMA} and~\ref{ex:GARCH}, respectively) yield more interesting limits.
\begin{ex}[Isolated extremes models]
\label{ex:D'cond}
Suppose $(X_{n})$ is a strictly stationary and strongly mixing sequence of regularly varying random variables with index $\alpha \in (0,2)$ that satisfies the dependence condition $D'$ in Davis~\cite{Da83}, i.e.
\[
\lim_{k \to \infty} \limsup_{n \to \infty} n \sum_{i=1}^{\lfloor n/k
\rfloor} \Pr \bigg( \frac{|X_{0}|}{a_{n}} > x,
\frac{|X_{i}|}{a_{n}} > x \bigg) = 0 \quad \textrm{for all} \ x
>0,
\]
where $(a_{n})_n$ is a positive real sequence such that $n \Pr(|X_{0}|>a_{n}) \to 1$ as $n \to \infty$. Condition $D'$ implies
\[
\Pr(|X_i| > a_n \mid |X_0| > a_n) = \frac{n \Pr(|X_0| > a_n, \, |X_i| > a_n)}{n \Pr(|X_0| > a_n)} \to 0, \qquad \text{as $n \to \infty$},
\]
for every positive integer $i$, that is, the variables $|X_0|$ and
$|X_i|$ are asymptotically independent. As a consequence, the series
$(X_n)_n$ is regularly varying and its tail process is the same as
that for an i.i.d.\ sequence, that is, $Y_n = 0$ for $n \ne 0$ and
$Y_0$ is as described in Subsection~\ref{SS:statpoint:tail}. Trivially,
no two values of $(Y_{n})_n$ are of the opposite sign.
Since the sequence $(X_{n})$ is strongly mixing, Condition~\ref{c:mixcond} is verified. Condition~\ref{c:anticluster} follows from condition $D'$, for the latter implies
\[
\lim_{n \to \infty} n \sum_{i=1}^{r_{n}} \Pr \bigg( \frac{|X_{0}|}{a_{n}} > x,
\frac{|X_{i}|}{a_{n}} > x \bigg) = 0 \quad \textrm{for all} \ x
>0,
\]
for any positive integer sequence $(r_{n})_n$ such that $r_{n} \to \infty$ and $r_{n} / n \to 0$ as $n \to \infty$.
If we additionally assume that the sequence $(X_{n})$ satisfies
Condition~\ref{c:step6cond} in case $\alpha \in [1,2)$, then by
Theorem~\ref{t:2} the sequence of partial sum stochastic processes
$V_{n}(\,\cdot\,)$ converges in $D[0,1]$ with the $M_{1}$ topology
to an $\alpha$--stable L\'{e}vy process $V(\,\cdot\,)$ with
characteristic triple $(0, \mu, 0)$ with $\mu$ as in \eqref{E:mu},
just as in the i.i.d.\ case. It can be shown that the above
convergence holds also in the $J_{1}$ topology, see
Krizmani\'{c}~\cite{Kr10}.
Condition~\ref{c:step6cond} applies for instance if the series
$(X_{n})_n$ is a function of a Gaussian causal ARMA process, i.e.\
$X_{n} = f(A_{n})$, for some Borel function $f : \mathbb{R} \to
\mathbb{R}$ and some Gaussian causal ARMA process $(A_{n})_n$. From
the results in Brockwell and Davis~\cite{BrDa91} and Pham and
Tran~\cite{PhTr85} (see also Davis and Mikosch~\cite{DaMi09}) it
follows that the sequence $(A_{n})_n$ satisfies the strong mixing
condition with geometric rate. In this particular case this implies
that the sequence $(A_{n})_n$ satisfies the $\rho$-mixing condition
with geometric rate (see Kolmogorov and Rozanov
\cite[Theorem~2]{KoRo60}), a property which transfers immediately to
the series $(X_{n})_n$. Hence by Proposition~\ref{p:rhomix},
Condition~\ref{c:step6cond} holds.
\end{ex}
\begin{ex}[Stochastic volatility models]
\label{ex:StochVol}
Consider the stochastic volatility model
$$ X_{n} = \sigma_{n} Z_{n}, \qquad n \in \mathbb{Z},$$
where the noise sequence $(Z_{n})$ consists of i.i.d.\ regularly
varying random variables with index $\alpha \in
(0,2)$, whereas the volatility sequence $(\sigma_{n})_n$ is strictly
stationary, is independent of the sequence $(Z_{n})_n$, and consists of
positive random variables with finite $4 \alpha$-th moment.
Since the random variables
$Z_{i}$ are independent and regularly varying, it follows that the
sequence $(Z_{n})_n$ is regularly varying with index $\alpha$. By
an application of the multivariate version of Breiman's lemma (see
Proposition~5.1 in Basrak et al.\ \cite{BDM02b}), the
sequence $(X_{n})_n$ is regularly varying with index $\alpha$ too.
From the results in Davis and Mikosch~\cite{DaMi10}, it follows that
\begin{equation}
\label{e:anticl1}
n \sum_{i=1}^{r_{n}} \Pr(|X_{i}|>ta_{n}, |X_{0}|>ta_{n}) \to 0, \qquad \text{as $n \to \infty$},
\end{equation}
for any $t>0$, where $(r_{n})_n$ is a sequence of positive integers
such that $r_{n} \to \infty$ and $r_{n} / n \to 0$ as $n \to
\infty$, and $(a_{n})_n$ is a positive real sequence such
that $n \Pr(|X_{1}|>a_{n}) \to 1$ as $n \to \infty$.
From this relation, as in Example~\ref{ex:D'cond}, it follows that
Condition \ref{c:anticluster} holds. Moreover, the tail process $(Y_n)_n$ is the same as in the case of an i.i.d.\ sequence, that is, $Y_n = 0$ for $n \ne 0$. In particular, the tail process has no two values of the opposite sign.
Assume that $(\log \sigma_{n})_n$ is a Gaussian causal ARMA process.
Then $(X_{n})_n$ satisfies the strong mixing condition with geometric
rate; see Davis and Mikosch \cite{DaMi09}. Hence Condition
\ref{c:mixcond} holds.
In case $\alpha \in [1, 2)$, we also assume Condition~\ref{c:step6cond} holds. Then all conditions in Theorem \ref{t:2}
are satisfied and we obtain
the convergence of the partial sum stochastic process toward an
$\alpha$--stable L\'{e}vy process with characteristic triple
$(0,\mu,0)$, with $\mu$ as in \eqref{E:mu}.
\end{ex}
\begin{ex}[MA($m$) models]
\label{ex:FiniteMA}
Consider the finite order moving average defined by $$ X_{n} = \sum_{i=0}^{m} c_{i}Z_{n-i}, \quad n \in \mathbb{Z},$$ where $(Z_{i})_{i\in \mathbb{Z}}$ is an i.i.d.\ sequence of regularly
varying random variables with index $\alpha \in (0,2)$, $m \in \mathbb{N}$, and $c_{0}, \ldots , c_{m}$ are nonnegative constants with $c_{0}$ and $c_{m}$ both nonzero. Take a sequence of positive real numbers $(a_{n})$ such that
\begin{equation}\label{e:nizan}
n \Pr (|Z_{1}|>a_{n}) \to 1 \qquad \textrm{as} \ n \to \infty.
\end{equation}
The finite-dimensional distributions of the series $(X_n)_n$ can be seen to be multivariate regularly varying by an application of Proposition~5.1 in Basrak et al.\ \cite{BDM02b} (see also Davis and Resnick~\cite{DaRe85}). Moreover, if we assume (without loss of generality) that $\sum_{i=0}^{m} c_{i}^{\alpha} = 1$, then also
\[
n \Pr (|X_{0}|>a_{n}) \to 1 \qquad \textrm{as} \ n \to \infty.
\]
The tail process $(Y_n)_n$ in \eqref{e:tailprocess} of the series
$(X_n)_n$ can be found by direct calculation. First, $Y_0 = |Y_0|
\Theta_0$ where $|Y_0|$ and $\Theta_0 = \operatorname{sign}(Y_0)$
are independent with $P(|Y_0| > y) = y^{-\alpha}$ for $y \ge 1$ and
$\Pr(\Theta_0 = 1) = p = 1 - \Pr(\Theta_0 = -1)$. Next, let $K$
denote a random variable with values in the set $\{0,\ldots, m\}$,
independent of $Y_0$, and such that $\Pr (K=k)= |c_k|^{\alpha}$
(recall the assumption $\sum_{i=0}^{m} c_{i}^{\alpha}=1$). To
simplify notation, put $c_i :=0 $ for $i \not\in \{ 0,\ldots, m \}$.
Then
\[
Y_n = (c_{n+K}/c_K) \, Y_0, \qquad \Theta_n = (c_{n+K}/c_K) \, \Theta_0, \qquad n \in \mathbb{Z},
\]
represent the tail process and the spectral process of $(X_n)_n$,
respectively. Clearly, at most $m+1$ of the values $Y_n$ and $\Theta_n$ are
different from 0 and all have the same sign.
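For concreteness, consider the case $m=1$, i.e.\ $X_{n} = c_{0} Z_{n} + c_{1} Z_{n-1}$ with $c_{0}^{\alpha} + c_{1}^{\alpha} = 1$. Then $K$ takes the values $0$ and $1$ with probabilities $c_{0}^{\alpha}$ and $c_{1}^{\alpha}$, and the displayed formula reduces to
\[
Y_{1} = (c_{1}/c_{0}) \, Y_{0}, \quad Y_{n} = 0 \ \ (n \neq 0, 1) \qquad \textrm{on} \ \{K=0\},
\]
\[
Y_{-1} = (c_{0}/c_{1}) \, Y_{0}, \quad Y_{n} = 0 \ \ (n \neq -1, 0) \qquad \textrm{on} \ \{K=1\},
\]
so that an extreme value is accompanied by exactly one further nonzero value of the tail process, of the same sign and scaled by the ratio of the coefficients.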
Since the sequence $(X_{n})$ is $m$-dependent, it is also strongly mixing,
and therefore Condition~\ref{c:mixcond} holds. By the same property it is easy to see that
Condition~\ref{c:anticluster} holds. Moreover, in view of Proposition~\ref{p:rhomix},
Condition~\ref{c:step6cond} holds as well when $\alpha\in[1,2)$.
As a consequence, the sequence $(X_n)_n$ satisfies all the conditions
of Theorem~\ref{t:2}, and the partial sum process converges towards
a stable L\'evy process $V(\,\cdot\,)$. The L\'evy measure $\nu$ can
be derived from Remark~\ref{R:LevyChar3}: since \eqref{E:sum:finite}
is trivially fulfilled, we obtain from \eqref{E:cplus} and
\eqref{E:cminus},
\[
\nu(\mathrm{d} x) = \bigl( {\textstyle\sum}_{i=0}^{m} c_{i} \bigr)^{\alpha} \, \bigl( p \, 1_{(0,\infty)}(x) + q \, 1_{(-\infty,0)}(x) \bigr) \, \alpha |x|^{-1-\alpha} \, \mathrm{d} x,
\]
which agrees with the results in Davis and
Resnick~\cite{DaRe85} and Davis and Hsing~\cite{DaHs95}. Further, if
$\alpha \in (0,1) \cup (1,2)$, then in the latter two references it
is shown that
\[
b = (p-q) \, \frac{\alpha}{1-\alpha} \, \bigl\{ \bigl( {\textstyle\sum}_{i=0}^{m} c_{i} \bigr)^{\alpha} -1 \bigr\},
\]
with $q = 1-p$. The case when $\alpha =1$ can be treated
similarly, but the corresponding expressions are much more
complicated (see Theorem 3.2 and Remark 3.3 in Davis and
Hsing~\cite{DaHs95}), so we decided to omit them here.
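As a simple consistency check of these expressions, note that in the degenerate case $m=0$ the normalization forces $c_{0}=1$, so that $\bigl( \sum_{i=0}^{m} c_{i} \bigr)^{\alpha} = 1$ and hence
\[
\nu(\mathrm{d} x) = \bigl( p \, 1_{(0,\infty)}(x) + q \, 1_{(-\infty,0)}(x) \bigr) \, \alpha |x|^{-1-\alpha} \, \mathrm{d} x, \qquad b = 0,
\]
in agreement with the i.i.d.\ case discussed in Example~\ref{ex:D'cond}.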
Infinite order moving averages with nonnegative coefficients are
considered in Avram and Taqqu~\cite{Avram92}. In principle, one can
approximate such processes by a sequence of finite order moving
averages, for which Theorem \ref{t:2} applies, and show that the
error of approximation is negligible in the limit. We do not pursue
this here, since the functional limit theorem for these processes
already appears in~\cite{Avram92}.
\end{ex}
\begin{ex}[ARCH/GARCH models]
\label{ex:GARCH}
We consider the GARCH(1,1) model
\[
X_{n}=\sigma_{n} Z_{n},
\]
where $(Z_{n})_{n \in \mathbb{Z}}$ is a sequence of i.i.d.\ random variables with $\Exp (Z_{1}) = 0$ and $\operatorname{var}(Z_{1}) = 1$, and
\begin{equation}\label{e:stochvol}
\sigma_{n}^{2} = \alpha_{0} + (\alpha_{1} Z_{n-1}^{2} +
\beta_{1}) \sigma_{n-1}^{2},
\end{equation}
with $\alpha_{0}, \alpha_{1}, \beta_{1}$ being nonnegative constants. Assume that $\alpha_{0}>0$ and
\begin{equation*}
-\infty \le \Exp \ln (\alpha_{1}Z_{1}^{2} + \beta_{1}) < 0.
\end{equation*}
Then there exists a strictly stationary solution to the stochastic recurrence equation \eqref{e:stochvol}; see Goldie~\cite{Goldie91} and Mikosch and St\u{a}ric\u{a}~\cite{MiSt00}. The process
$(X_{n})$ is then strictly stationary too. If $\alpha_{1}>0$ and $\beta_{1}>0$ it is called a GARCH(1,1) process, while if $\alpha_{1}>0$ and $\beta_{1}=0$ it is called an ARCH(1) process.
In the rest of the example we consider a stationary squared GARCH(1,1) process $(X_{n}^{2})_n$. Assume that $Z_{1}$ is symmetric, has a positive Lebesgue density on $\mathbb{R}$ and there exists $\alpha \in (0,2)$ such that
\begin{equation*}
\Exp [(\alpha_{1}Z_{1}^{2} + \beta_{1})^{\alpha}]=1
\quad \textrm{and} \quad
\Exp [(\alpha_{1}Z_{1}^{2} + \beta_{1})^{\alpha} \ln (\alpha_{1}Z_{1}^{2} + \beta_{1})] < \infty.
\end{equation*}
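Note that in the ARCH(1) case $\beta_{1}=0$ the first of these two conditions takes the fully explicit form
\[
\alpha_{1}^{\alpha} \, \Exp \bigl( |Z_{1}|^{2\alpha} \bigr) = 1,
\]
so that the tail index $\alpha$ is then determined directly by $\alpha_{1}$ and the distribution of the noise $Z_{1}$.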
Then it is known that the processes $(\sigma_{n}^{2})_n$ and
$(X_{n}^{2})_n$ are regularly varying with index $\alpha$ and
strongly mixing with geometric rate \cite{BDM02b, MiSt00}. Therefore
the sequence $(X_{n}^{2})_n$ satisfies Condition \ref{c:mixcond}.
Condition~\ref{c:anticluster} for the sequence $(X_{n}^{2})_n$
follows immediately from the results in Basrak et al.\
\cite{BDM02b}.
The (forward) tail process of the bivariate sequence
$((\sigma_{n}^{2},X_{n}^{2}))_n$ is not too difficult to
characterize, see Basrak and Segers~\cite{BaSe}. Obviously, the tail
process of $(X_{n}^{2})_n$ cannot have two values of the opposite
sign.
If additionally Condition~\ref{c:step6cond} holds when $\alpha \in [1,2)$,
then by Theorem~\ref{t:2}, the sequence of partial sum stochastic processes $(V_{n}(\,\cdot\,))_n$, defined by
\[
V_{n}(t)
= \sum_{k=1}^{\lfloor nt \rfloor} \frac{X_{k}^{2}}{a_{n}} - \lfloor nt \rfloor \Exp \bigg( \frac{X_{1}^{2}}{a_{n}} 1_{ \big\{ \frac{X_{1}^{2}}{a_{n}} \le 1 \big\} } \bigg), \qquad t \in [0,1],
\]
converges weakly to an $\alpha$--stable L\'{e}vy process
$V(\,\cdot\,)$ in $D[0,1]$ under the $M_{1}$ topology. Here
$(a_n)_n$ is a positive sequence such that $n \Pr(X_0^2 > a_n) \to
1$ as $n \to \infty$.
In case $\alpha \in (0,1) \cup (1,2)$, the characteristic triple
$(0, \nu, b)$ of the stable random variable $V(1)$ and thus of the
stable L\'evy process $V(\,\cdot\,)$ can be determined from
Bartkiewicz et al.\ \cite[Proposition~4.6]{BaJaMiWi09}, Davis and
Hsing \cite[Remark~3.1]{DaHs95} and Remark~\ref{r:cent}: after some
calculations, we find
\begin{align*}
\nu(\mathrm{d} x) &= c_+ \, 1_{(0, \infty)}(x) \, \alpha x^{-\alpha-1} \, \mathrm{d} x, &
b &= \frac{\alpha}{1-\alpha} (c_{+}-1),
\end{align*}
where
\begin{align*}
c_{+} &= \frac{\Exp [(Z_{0}^{2} + \widetilde{T}_{\infty})^{\alpha} - \widetilde{T}_{\infty}^{\alpha}]}{ \Exp (|Z_{1}|^{2\alpha})}, &
\widetilde{T}_{\infty} &= \sum_{t=1}^{\infty} Z_{t+1}^{2} \prod_{i=1}^{t} (\alpha_{1} Z_{i}^{2} + \beta_{1}).
\end{align*}
\end{ex}
\section*{Acknowledgements}
Bojan Basrak's research was supported by the research grant
MZO\v{S} project nr.\ 037-0372790-2800 of the Croatian government.
Johan Segers
gratefully acknowledges financial support from IAP research
network grant nr.\ P6/03 of the Belgian government (Belgian
Science Policy) and from the contract ``Projet d'Actions de
Recherche Concert\'ees'' nr.\ 07/12/002 of the Communaut\'e
fran\c{c}aise de Belgique, granted by the Acad\'emie universitaire
Louvain.
\label{sec_intro}
The new era in planetary science started in the 1990s, after the discovery of the first
exoplanet orbiting a main-sequence star \citep{MayorQueloz95}. This discovery was followed
by similar ones at a continuously accelerating pace, and by now the number of known
exoplanet candidates is approaching the notable milestone of a thousand (see \emph{The
Extrasolar Planets Encyclopaedia} at \texttt{exoplanet.eu}).
The main method of exoplanet detection is still precision radial-velocity (RV)
monitoring. Although with the launch of specialized spacecraft like CoRoT
(\texttt{smsc.cnes.fr/COROT}) and Kepler (\texttt{kepler.nasa.gov}) the role of
photometric searches for exoplanetary transits has been considerably emphasized, the RV
method still remains superior in many respects. Even if the discovered transiting
exoplanets eventually outnumber the ones detected by RV monitoring, the photometric
method introduces a severe bias in favour of short-period planets. On the contrary,
long-term RV monitoring allows for the detection of exoplanetary systems with
architectures resembling the Solar System, e.g.\ containing Jupiter analogs, i.e.\ giant
planets with orbital periods of about a decade or more. Giants with short orbital periods
are easier to detect, but they would hardly allow the existence of a terrestrial planet on
a dynamically stable Earth-like orbit (although it is still possible to have an Earth-like
satellite of such a gas giant in the habitable zone). In addition, RV data are typically
necessary to confirm a photometric exoplanetary detection. From the transit photometry
alone we can derive only the transiting body's radius, which does not reliably imply its
mass value, and thus radial-velocity observations are needed to confirm its planetary
nature.
Another promising exoplanet detection method is astrometry. It looks relatively dormant at
present, but it may become much more productive and efficient in the near future, after
the launch of GAIA. However, we are a bit sceptical about its ability to reliably detect
and characterize long-period exoplanets, because of the relatively short $5$-year expected
duration of the mission. In contrast to space missions, ground-based programmes are able
to accumulate much longer time series: the RV exoplanet searches have already reached a
${\sim} 20$-year baseline.
Therefore, the RV technique is the main tool of exoplanetary searches at present, and it
will continue to play at least an important, if not central, role in the future. It is
already quite obvious that efficient RV exoplanetary detections request sophisticated
methods of data analysis, which in turn need specialized software: one good such software
complex is the Systemic Console \citep{Meschiari09}. Our paper represents a scientific
description of another software tool that we developed for similar goals. The need for
another software tool is justified by the following arguments:
\begin{enumerate}
\item Systemic Console relies on rather simple statistical methods and models that appear
inadequate when working with high-precision exoplanetary RV data. For example, it relies
on the plain $\chi^2$ fitting and on the textbook $F$-test, which are unreliable for the
RV noise appearing in our task \citep{Baluev08b}. We needed to implement some more
intricate statistical treatment, especially in what concerns periodograms.
\item Systemic Console was written in JAVA to reach cross-platform compatibility, but this
led to a dramatic decrease in computational performance, which becomes quite obvious when
working with Systemic. In scientific tasks the speed of calculations is usually a more
important matter than wide compatibility. \textbf{PlanetPack}\xspace is written in standard
C++, and thus it is quick. It should be easily compilable by different compilers and for
various platforms, although it was extensively tested only with the GCC and Linux-based
environments.
\item It appears that Systemic Console is targeted at amateur astronomers: e.g.\ it
focuses more on the graphical interface than on dense scientific content. We needed to
focus mainly on the scientific contribution and on scripting capabilities, shifting the
basis of the software to a command-line interface, since it allows for a more controllable
and powerful work environment.
\item According to the information in the official Systemic web page at oklo.org, this
package was last updated in 2009.
\end{enumerate}
A few more recent software tools intended for exoplanetary data fits are available today.
In particular, \citet{WrightHoward09} provide an algorithm of exoplanetary RV fitting
that takes into account the fact that a Keplerian RV variation has a few strictly linear
parameters, which can be efficiently eliminated during the fitting of the remaining
non-linear parameters. This algorithm assumes that the gravitational perturbations in the
exoplanetary system are negligible. \citet{Pal10} provided an RV fitting algorithm for
self-perturbed exoplanetary systems, based on the Lie integration scheme. And finally,
\citet{Eastman13} offer an algorithm of simultaneous ``photometry+RV'' fitting, also
equipped with some Bayesian Markov chain Monte Carlo simulation tools.
We have not done a ``field-test'' performance comparison of these packages with \textbf{PlanetPack}\xspace.
Nevertheless, we may note that \textbf{PlanetPack}\xspace offers some highly important algorithms that are
unavailable in other packages (in particular, the red-noise RV fitting and the advanced
periodogram construction) and, conversely, the mentioned packages offer some important
tools that are absent in \textbf{PlanetPack}\xspace (in particular, the joint analysis of photometry and radial
velocities, and Bayesian statistics). The practical value of our new package, as we see it,
is in its wide task coverage: it unites a large number of very different particular tools
under the same umbrella. When developing \textbf{PlanetPack}\xspace the main effort was made in the direction of
the data-analysis methods, rather than just in programming or optimizing computational
performance. Almost all data-analysis methods that \textbf{PlanetPack}\xspace incorporates belong to the
self-consistent body of theory that we developed over a few years.
This was not just a pure theoretical investigation: we applied our tools to real
exoplanetary systems, so that these data-analysis methods evolved and improved in the
process. Moreover, this allowed us to obtain new results concerning the relevant
exoplanetary systems, and most of these concrete results were eventually confirmed by
independent authors, often based on enlarged and/or improved datasets. Such examples
include the rejection of the planet HD74156~\emph{d} (disclaimed by \citet{Baluev08b},
further retracted by \citet{Wittenmyer09} and \citet{Meschiari11}); the revealing of the
2/1 resonance in the HD37124 planetary system (first revealed in \citet{Baluev08c}, later
confirmed by \citet{Wright11}), and the detection of the hints of the planet
GJ876~\emph{e} (as we discuss in \citet{Baluev08-IAUS249,Baluev11}, a good and stable
orbit for this planet can be found in the old RV data, long before its announcement by
\citet{Rivera10}). We draw the reader's attention to these examples not to boast,
but to highlight the potential of the theory and ideas that we now collect under
the name ``\textbf{PlanetPack}\xspace''. The demonstrated examples prove that this software tool may
significantly increase the outcome of ongoing exoplanetary RV data-analysis work, as
well as prevent too hasty conclusions.
\textbf{PlanetPack}\xspace source, along with its technical manual, is available for download as a project at
\texttt{sourceforge.net/projects/planetpack}. In the further sections of the present
paper, we consider the main \textbf{PlanetPack}\xspace abilities and the related theory. This paper does not say
anything about the use of \textbf{PlanetPack}\xspace commands, its data organization, and other technical
documentation necessary to use it in practice. The mentioned technical documentation is
given in a standalone file downloadable together with
\textbf{PlanetPack}\xspace sources.
\section{Data and basic models}
\label{sec_datamod}
Let us first describe the general structure of the observational data set that we deal
with. Assume that we have $J$ RV time-series, referring to the same star but to different
observatories or spectrographs. The $j$-th such time series contains $N_j$ elementary data
packets, each consisting of the time of an observation, $t_{ji}$, the RV measurement itself,
$v_{ji}$, and its expected uncertainty $\sigma_{\mathrm{meas},ji}$. The total number of
these observations is $N=\sum_{j=1}^J N_j$.
In addition to this raw input data, \textbf{PlanetPack}\xspace uses a time reference epoch, $T_0$, as an
unfittable parameter. Before any fitting, the values of $t_i$ are always shifted by this
quantity and divided by the total time-series span, $T$ (calculated internally).
Therefore, the values that are actually used are $(t_i-T_0)/T$. This process should
normally remain invisible to the user, but to minimize numerical errors, it is recommended
to choose $T_0$ close to the middle of the time series. This $T_0$ is also used as a
reference epoch for the orbital parameters, when such a reference epoch is necessary (see
below). The desired value of $T_0$ can be assigned explicitly by the user or it may be
chosen automatically (a round number close to the weighted mean of $t_i$). Below we assume
that $T_0=0$ for the simplicity of the formulae. The transition to the case of $T_0$ is
obvious.
Now let us specify the general functional model of the RV curve. It is basically the same
as we used in \citet{Baluev08c}. For each of the $J$ time series we have a separate model
that can be represented as the following sum:
\begin{equation}
\mu_j(t,\vec\theta) = \mu_{\mathrm{obs},j}(t,\vec\theta_{\mathrm{obs},j}) +
\mu_\star(t,\vec\theta_\star).
\label{RVmod}
\end{equation}
This is a sum of two terms. The first term, $\mu_{\mathrm{obs},j}$, depends on the time
series through the index $j$, and it represents an observatory-specific part of the
measured radial velocity:
\begin{equation}
\mu_{\mathrm{obs},j} = c_{0,j} +
\sum_{n=1}^{s_j} A_{jn} \cos\left(\frac{2\pi}{P_{jn}}(t-\tau_{jn})\right).
\label{RVmod_obs}
\end{equation}
In this definition, $c_{0,j}$ is a constant denoting the RV offset of the
$j$-th time series, and the remaining (periodic) terms model possible observatory-specific
periodic components, e.g.\ systematic errors. The compound vector
$\vec\theta_{\mathrm{obs},j}$ contains the variables $c_{0,j}$, $A_{jn}$ (the
semi-amplitude of a systematic term), and $\tau_{jn}$ (the epoch of the maximum systematic
variation, treated relatively to $T_0$). The periods $P_{jn}$ are treated as fixed
parameters.
We may recall that annual systematic errors are encountered rather frequently in the
published exoplanetary RV data, especially in the old datasets, where they may exceed
${\sim} 10$~m/s \citep{Baluev08c,Baluev08b}. Although this conclusion of ours was initially
met with some scepticism by other researchers, at present such errors have been
revealed by independent teams \citep{Wittenmyer09,Meschiari11}, and sometimes they can
clearly and undoubtedly be seen when comparing published old and revised RV data for the
same star \citep{Baluev11}. We believe that the existence of such errors in some of the
publicly released RV data of exoplanetary systems is already well proven. Although we must
admit that in recent years the major observing teams seem to have done a good job of
removing this issue, the old data, which are certainly useful, may still suffer from such
errors. Therefore \textbf{PlanetPack}\xspace still allows the user to deal with this issue by means of the
expanded model~(\ref{RVmod_obs}).
The second term in~(\ref{RVmod}) is common for all time series; it refers to the star and
its planetary system and has the general form of
\begin{equation}
\mu_\star(t,\vec\theta_\star) = \sum_{n=1}^r c_n t^n + \mu_\mathrm{pl}(t,\vec\theta_\mathrm{pl}),
\end{equation}
where $c_n$ are coefficients of a polynomial trend modelling some long-term underlying RV
variation (usually it reflects the compound RV contribution from some long-period seen or
unseen bodies in the system), and $\mu_\mathrm{pl}$ describes the RV variation due to the
assumed orbiting exoplanets (each with an individual and independent RV contribution). The
vector $\vec\theta_\star$ contains the coefficients $c_n$ and the elements of
$\vec\theta_\mathrm{pl}$. Notice that $c_n$ are understood in view of the reference epoch
$T_0$.
The first published version of \textbf{PlanetPack}\xspace can set only a \emph{common} polynomial trend for the
whole time series. Sometimes it might be useful to allow separate datasets to have
different trends, reflecting e.g.\ long-term instrumental drifts. This ability has not
been implemented in \textbf{PlanetPack}\xspace so far, because we have not yet faced a practical task where it
would be necessary, but it may be added in the future. At present, models with
different trends can already be constructed with the help of a trick: to obtain, e.g., an
almost quadratic trend in the model of only one specific dataset, we need to specify in
the relevant sum~(\ref{RVmod_obs}) a harmonic term with a very large period value (larger
than the observation time span). A linear trend can be mimicked by setting a
constraint (Sect.~\ref{sec_con}) to fix one of the two parameters of this long-period
harmonic term.
In the simplest and most frequent case, when the interplanetary gravitational
perturbations in the system are negligible, we may assume the multi-Keplerian model
\begin{equation}
\mu_\mathrm{pl} = \sum_{k=1}^{\mathcal N} K_k (\cos(\omega_k+\upsilon_k)+e_k\cos\omega_k).
\end{equation}
Here $\mathcal N$ is the number of orbiting exoplanets, $K_k$ is the RV semi-amplitude
induced by $k^\mathrm{th}$ exoplanet, $e_k$ is the relevant orbital eccentricity,
$\omega_k$ is the pericenter argument, and $\upsilon_k$ is the true anomaly. The true
anomaly can be represented as a function of the time $t$, of the mean-motion $n_k$, of
$e_k$, and of an additional phase parameter $\lambda_k$. We choose this phase parameter to
be the mean longitude at $T_0$. Therefore, the vector $\vec\theta_\mathrm{pl}$ contains
the variables $(n,K,\lambda,e,\omega)_k$ for each of the $\mathcal N$ planets. Notice that
for an exoplanet on a circular orbit the relevant RV variation looks like $K_k
\cos(n_k t + \lambda_k)$.
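As an illustration of this model, the following minimal Python sketch (our own illustrative code, not part of \textbf{PlanetPack}\xspace itself, which is written in C++) evaluates the Keplerian RV contribution of a single planet, solving Kepler's equation by Newton iterations; it assumes the standard convention relating the mean longitude to the mean anomaly, $M = n t + \lambda - \omega$.
\begin{verbatim}
import numpy as np

def kepler_E(M, e, tol=1e-12):
    # Solve Kepler's equation E - e*sin(E) = M by Newton iterations.
    E = np.atleast_1d(M).astype(float).copy()
    for _ in range(50):
        dE = (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_keplerian(t, n, K, lam, e, omega):
    M = n*t + lam - omega          # mean anomaly, with T0 = 0 assumed
    E = kepler_E(M, e)
    # true anomaly from the eccentric anomaly:
    ups = 2.0*np.arctan2(np.sqrt(1.0 + e)*np.sin(E/2.0),
                         np.sqrt(1.0 - e)*np.cos(E/2.0))
    return K*(np.cos(omega + ups) + e*np.cos(omega))
\end{verbatim}
For $e=0$ this indeed reduces to $K\cos(n t + \lambda)$, in agreement with the remark above.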
For some time, we investigated the possibility of fitting the parameters $e\cos\omega$ and
$e\sin\omega$ instead of $e$ and $\omega$, since the latter pair implies an undesired
singularity at $e=0$. However, we did not note any increase in the fitting performance
after the transition to $(e\cos\omega,e\sin\omega)$. Moreover, in practical tests the
resulting convergence rate actually dropped after this transition, even for
small-eccentricity orbits. We therefore abandoned this idea and returned to the direct
fitting of $(e,\omega)$. However, when $e$ is small, the user should be careful with the
interpretation of its uncertainty reported by \textbf{PlanetPack}\xspace: in this case, the uncertainty of $e$
becomes meaningless without an accompanying uncertainty of $\omega$ and without the
correlation between $e$ and $\omega$. Actually, in this case the best course of action
would be to look at the 2D confidence contours (Sect.~\ref{sec_confreg}) plotted in the
plane $(e\cos\omega,e\sin\omega)$, treating $e$ and $\omega$ as polar coordinates. Such a
plot would be much more informative in this case than, e.g., just an upper limit on $e$.
The minimum mass of an exoplanet, $m\sin i$, and the semi-major axis of its orbit, $a$,
can be expressed via the primary fit parameters using the well-known relations
\begin{eqnarray}
m \sin i \simeq \tilde K \left(\frac{M_\star^2}{G n}\right)^{1/3} = \mathcal M \tilde K M_\star^{2/3} n^{-1/3}, \nonumber\\
a \simeq \left(\frac{G M_\star}{n^2}\right)^{1/3} = \mathcal A M_\star^{1/3} n^{-2/3},
\label{ma}
\end{eqnarray}
where $\tilde K = K \sqrt{1-e^2}$, $G$ is the gravitational constant, and $M_\star$ is the
mass of the star (which should be derived from some external considerations), and
$\mathcal M$ and $\mathcal A$ are conversion constants ($\mathcal M \approx 9.077\cdot
10^{-3}$ and $\mathcal A \approx 6.664\cdot 10^{-2}$ when the unit of $m$ is
$M_\mathrm{Jup}$, of $M_\star$ is $M_\odot$, of $n$ is day$^{-1}$, and of $\tilde K$ is
m/s). \textbf{PlanetPack}\xspace uses $\tilde K$ as a primary parameter instead of $K$, since then its conversion
to $m\sin i$ does not involve the eccentricity $e$ (which also eliminates the need to take
into account the correlation with $e$ when evaluating the uncertainty of $m\sin i$).
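As a quick numerical illustration of~(\ref{ma}), the snippet below (ours, for illustration only) applies the quoted conversion constants; with the stated unit conventions the constant $\mathcal A$ then yields $a$ in astronomical units.
\begin{verbatim}
import math

MCONST = 9.077e-3    # m in M_Jup, M_star in M_Sun, n in 1/day, K in m/s
ACONST = 6.664e-2    # same conventions, a in AU

def min_mass(Ktilde, Mstar, n):
    return MCONST * Ktilde * Mstar**(2.0/3.0) * n**(-1.0/3.0)

def semimajor_axis(Mstar, n):
    return ACONST * Mstar**(1.0/3.0) * n**(-2.0/3.0)

# Sanity check with a Jupiter twin: P ~ 4333 d, K ~ 12.5 m/s, e ~ 0.05
n = 2.0*math.pi/4333.0
Kt = 12.5*math.sqrt(1.0 - 0.05**2)
print(min_mass(Kt, 1.0, n), semimajor_axis(1.0, n))  # ~1 M_Jup, ~5.2 AU
\end{verbatim}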
The approximate formulae~(\ref{ma}) are valid when $m\ll M_\star$, which is true in most
practical cases. More accurate formulae, which take into account barycenter effects,
exist \citep{Ferraz-Mello-lec1,Pal10,Beauge12} and are rather popular in practice.
However, for multi-Keplerian fits we do not adopt this approach, due to the following
reasons:
\begin{enumerate}
\item These formulae are implicit and therefore more difficult for practical use.
\item They involve a significant dependence on the orbital inclination (the famous $\sin
i$), which is typically unknown. Eventually we have to assume e.g.\ $i=90^\circ$, and if
this assumption is wrong, the ``corrected'' mass value will anyway contain a remaining
error comparable to the original one.
\item They are not actually more accurate than~(\ref{ma}), unless we deal with a
single-planet system. When the system contains two or more planets we should also take
into account mutual gravitational perturbations, including e.g.\ the offset in the
apparent period value \citep{Ferraz-Mello-lec1}, which would affect the resulting mass
value too. These biases of the order $m/M_\star$ are typically neglected, but then there
is no reason to take into account any other terms with a similar magnitude, including
those due to the barycenter displacement.
\item For the unperturbed exoplanetary case the formulae~(\ref{ma}) are more than
satisfactory, because the errors due to statistical uncertainties are dominating anyway.
\end{enumerate}
We may note that in the case of the Newtonian $N$-body fitting (Sect.~\ref{sec_nbody}),
\textbf{PlanetPack}\xspace will honestly evaluate the correct planet masses, taking into account all
gravitational effects and the best-fit value of $\sin i$. This is achieved using an
artificial ``osculating RV semi-amplitude'' parameter, see the details in
\citep{Baluev11}. Also, we would like to emphasize that the primary fit parameters are $K$
and $P$, not $m$, and the formula used to obtain $m$ does not affect the fitting process in
any way. It only affects the value of $m$ derived \emph{after} the fit.
\textbf{PlanetPack}\xspace deals with parametrized RV noise. The basic noise model assumes that the errors
of all $v_{ji}$ are independent and Gaussian, with the variances expressed as
\begin{equation}
\sigma_{ji}^2 = \sigma_{\mathrm{meas},ji}^2 + \sigma_{\star,j}^2,
\end{equation}
where the quantities $p_j=\sigma_{\star,j}^2$ represent additional unknown parameters (RV
``jitter'') to be estimated from the data. These parameters can be combined in a single
vector $\vec p$. Notice that we understand $\sigma_{\star,j}^2$ as a solid symbol here,
because in practice we may sometimes deal with the cases $p_j<0$, meaning that the values
of $\sigma_{\mathrm{meas},ji}$ supplied by the observers are of rather poor quality, and
the real errors of $v_{ji}$ are systematically smaller \citep{Baluev08b}. As we have
already discussed in that paper, in practice the apparent RV jitter often has little
resemblance to the actual RV instability caused by astrophysical effects on the star
itself. The instrumental errors and various spectrum reduction imperfections may introduce
comparable and even dominating contributions. We should treat $p_j$ just as free
parameters introduced to reach some degree of model consistency, without assigning any
concrete physical meaning to them.
\section{Maximum-likelihood RV curve fitting}
Assuming the uncorrelated Gaussian distribution of RV errors, we can write down the
likelihood function of the task as
\begin{eqnarray}
\ln \mathcal L(\vec\theta,\vec p) = -\frac{1}{2} \sum_{j=1}^J \sum_{i=1}^{N_j} \left[ \ln\sigma_{ji}^2(\vec p) + \phantom{\frac{\left(v_{ji}\right)^2}{\gamma \sigma_{ji}^2}} \right. & & \nonumber\\
\left. +\frac{\left(v_{ji}-\mu_j(t_{ji},\vec\theta)\right)^2}{\sigma_{ji}^2(\vec p)} \right] - \frac{N}{2} \ln{2\pi}. & &
\label{loglik}
\end{eqnarray}
The position of the maximum of~(\ref{loglik}) would yield the classic maximum-likelihood
estimation of the parameters $\vec\theta$ and $\vec p$. However, \textbf{PlanetPack}\xspace uses a modification
of the maximum-likelihood method, which is based on the following modified likelihood
function:
\begin{eqnarray}
\ln \tilde{\mathcal L}(\vec\theta,\vec p) = -\frac{1}{2} \sum_{j=1}^J \sum_{i=1}^{N_j} \left[ \ln\sigma_{ji}^2(\vec p) + \phantom{\frac{\left(v_{ji}\right)^2}{\gamma \sigma_{ji}^2}} \right. & & \nonumber\\
\left. +\frac{\left(v_{ji}-\mu_j(t_{ji},\vec\theta)\right)^2}{\gamma \sigma_{ji}^2(\vec p)} \right] - \frac{N}{2} \ln{2\pi}. & &
\label{loglikmod}
\end{eqnarray}
The best-fitting estimations of $\vec\theta$ and $\vec p$ are obtained as the position of
the maximum of $\ln \tilde{\mathcal L}$. What makes the
definition~(\ref{loglikmod}) differ from the classic one in~(\ref{loglik}) is the
correction divisor $\gamma$. It is equal to $\gamma=1-d/N$, where $d$ is the number of the
degrees of freedom of the RV model, here equal to $\dim\vec\theta$. The purpose of the
corrector $\gamma$ is to reduce the systematic bias in the estimation of $\vec p$ that
would otherwise appear due to the fact that the best-fit residuals are systematically
smaller than the actual measurement errors. See the details in \citep{Baluev08b}.
The larger the resulting maximum value of $\tilde{\mathcal L}$, the better the fit
quality. The value of $\tilde{\mathcal L}$ is not very intuitive, however. As a numerical
measure of the fit quality we offer a more useful quantity
\begin{equation}
\tilde l = 0.2420 \tilde{\mathcal L}^{-1/N},
\end{equation}
because it resembles the traditional r.m.s.\ measure. First, the smaller $\tilde l$,
the better the fit. Second, $\tilde l$ is measured in the same units as the
observations $v_{ji}$ (i.e., in m/s). And third, the normalization of $\tilde l$ is chosen
so that $\tilde l$ approximately reflects an average of the RV residuals.
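In schematic Python form (a sketch of ours with hypothetical names, written for a single dataset so that $\sigma_i^2 = \sigma_{\mathrm{meas},i}^2 + p$), the quantities $\ln\tilde{\mathcal L}$ and $\tilde l$ read:
\begin{verbatim}
import numpy as np

def log_lik_mod(resid, sig_meas, p, d):
    # resid = v_i - mu(t_i, theta);  p = sigma_star^2 (the RV jitter);
    # d = dim(theta) enters the bias-reducing corrector gamma.
    N = len(resid)
    gamma = 1.0 - d/float(N)
    sig2 = sig_meas**2 + p
    return -0.5*np.sum(np.log(sig2) + resid**2/(gamma*sig2)) \
           - 0.5*N*np.log(2.0*np.pi)

def fit_quality(loglik, N):
    # l_tilde = 0.2420 * Ltilde^(-1/N) = 0.2420 * exp(-loglik/N)
    return 0.2420*np.exp(-loglik/N)
\end{verbatim}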
The detailed theory and justification of this method are given in \citep{Baluev08b}. \textbf{PlanetPack}\xspace
performs the non-linear maximization of~(\ref{loglikmod}) using a variant of the
Levenberg--Marquardt (LM) algorithm. Our implementation of this algorithm is different
from e.g.\ the one used in the widespread MINPACK library, because the latter was
designed to deal only with a sum-of-squares objective function, emerging in the
least-squares regression task. This special structure of the objective allows one to use
certain simplifying relations between its gradient and the Hessian matrix, but our
objective~(\ref{loglikmod}) does not belong to this class. Although we describe in
\citep{Baluev08b} a way to ``fool'' MINPACK or MINPACK-like algorithms, forcing them
to solve the task we actually need, we eventually decided to use our own variant of the LM
algorithm, more general than the one used in MINPACK. Our implementation represents a
hybrid between the MINPACK variant and the classic general method described in
\citep{Bard}. It does not rely on the assumption that the objective is a sum of squares.
\section{Advanced periodograms}
\label{sec_prdg}
\textbf{PlanetPack}\xspace is equipped with improved versions of the periodograms, which have many advantages in
comparison with the classic \citet{Lomb76}--\citet{Scargle82} periodogram. Their main
improvements are listed below.
\begin{enumerate}
\item These periodograms are the likelihood-ratio periodograms. Their values basically
represent the likelihood-ratio statistic associated with the modified likelihood
function~(\ref{loglikmod}). The motivation and details of this approach are given in
\citep{Baluev08a,Baluev13b}. In particular, such periodograms involve a built-in
estimation of the RV jitter and other RV noise parameters, which allows for a
self-consistent data fitting already at the period-search stage.
\item At the very beginning of the analysis, these periodograms can be used to detect a
periodic signal in a raw input time series. But they may also be used at further steps,
when one or a few planets have already been extracted from the data, and we need to check
whether the residuals hide an additional planet. However, these periodograms are not just
the plain periodograms of the relevant pre-calculated and then frozen residuals, as is
typically done in this task. \textbf{PlanetPack}\xspace evaluates \emph{each} value of such a periodogram by
means of a full multi-planet fit, which is performed almost \emph{anew}, re-adjusting
e.g.\ the parameters of the already extracted planets (see the schematic sketch after this
list). The advantage of such periodograms is clearly demonstrated by
\citet{Anglada-Escude12}, who call them ``recursive periodograms''. We prefer to call them
the ``residual periodograms'', as opposed to the ``periodograms of residuals''. This can
also be treated as a broad extension of the
generalized ``floating-mean'' periodogram \citep{FerrazMello81,Cumming99,ZechKur09}.
\item \textbf{PlanetPack}\xspace's periodograms can utilize the simple sinusoidal model of the signal, as
well as a more complicated periodic model representing a segment of the Fourier series
(trigonometric polynomial of a given degree). The periodogram with the sinusoidal model
represents an extension of the classic Lomb--Scargle periodogram, while the periodogram
with the Fourier model is a similar extension of the so-called multiharmonic periodogram
\citep{SchwCzerny96,Baluev09a}. The Fourier model may be more suitable for non-sinusoidal
RV signals, which may appear due to planets on highly-eccentric orbits.
\end{enumerate}
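The ``residual periodogram'' idea of item~2 can be conveyed by the following schematic Python sketch (ours; it uses SciPy's general-purpose optimizer, whereas \textbf{PlanetPack}\xspace relies on its own Levenberg--Marquardt implementation). Here \texttt{nll\_base} and \texttt{nll\_full} stand for user-supplied negative modified log-likelihoods of the current model and of the same model augmented by a sinusoid at the trial frequency \texttt{f}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def residual_periodogram(freqs, nll_base, nll_full, xi0):
    # xi0: initial guess for all free parameters of the base model.
    base = minimize(nll_base, xi0, method="Nelder-Mead")
    z = []
    for f in freqs:
        # two extra parameters: the sine/cosine amplitudes at frequency f;
        # all base-model parameters are re-adjusted anew at every f:
        x0 = np.concatenate([base.x, [0.0, 0.0]])
        full = minimize(lambda x: nll_full(x, f), x0, method="Nelder-Mead")
        z.append(base.fun - full.fun)     # likelihood-ratio value z(f)
    return np.array(z)
\end{verbatim}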
Therefore, the individual values of \textbf{PlanetPack}\xspace's periodograms actually represent the
modified likelihood-ratio statistic $\tilde Z$ of Section~\ref{sec_stat} below. The base
RV model describes our knowledge of the planetary system at the current step of the
analysis, while the alternative one also involves a trial periodic signal modelled by a
sinusoid or a trigonometric polynomial (having a given basic period). The issues related to
the statistical significance levels of these periodograms are discussed in detail in
Section~\ref{sec_stat}.
\section{Constrained fitting}
\label{sec_con}
\textbf{PlanetPack}\xspace allows one to perform the maximum-likelihood fitting under some simple equality
constraints. Let us denote the full parameter vector, consisting of the RV
curve parameters $\vec\theta$ and of the noise parameters $\vec p$, as $\vec\xi$. Let us
assume that we need to maximize~(\ref{loglikmod}) under a condition $\vec\eta(\vec\xi) =
\vec\eta_0$, where $\vec\eta$ is a specified vector function of a vector argument, and
$\vec\eta_0$ is a vector constant. In this case we need to find
\begin{eqnarray}
\tilde{\mathcal L}^*(\vec\eta_0) &=& \left. \max_{\vec\xi} \tilde{\mathcal L}(\vec\xi)\right|_{\vec\eta(\vec\xi)=\vec\eta_0}, \nonumber\\
\vec\xi^* (\vec \eta_0) &=& \left. \arg \max_{\vec\xi} \tilde{\mathcal L}(\vec\xi)\right|_{\vec\eta(\vec\xi)=\vec\eta_0}.
\label{likmaxeta}
\end{eqnarray}
At present there is only a rather limited, though useful, set of functions that can be
chosen as constraints. Namely, it is allowed to constrain any single fit parameter (either
a primary or a derived one, including the amplitudes $K$ and $\tilde K$, and the minimum mass
$m\sin i$), a mutual inclination between planetary orbits (in the case when it appears
constrainable from the RV data thanks to the gravitational perturbations), and the mutual
inclination with an accompanying nodal line orientation angle (see \citealt{Baluev11} and
the \textbf{PlanetPack}\xspace manual for further details).
The procedure of the constrained fitting when one or more primary fit parameters are held
fixed is trivial: we just need to ignore the relevant parameters in the LM algorithm. When
a combination of two or more primary parameters is constrained, we use the method of
elimination to perform this constrained fitting. That is, during the fitting we directly
express some of the parameters involved in $\vec\eta$ via the remaining ones by means of
explicit formulae, and also adjust the gradient and the Hessian approximation for
$\tilde{\mathcal L}$ to take this elimination into account.
Notice that the constraint in~(\ref{likmaxeta}) implies a decrease in the number of
degrees of freedom of the RV model, which affects the value of the corrector $\gamma$
in~(\ref{loglikmod}). In the constrained case we have
$\gamma=1-(\dim\vec\theta-\dim\vec\eta)/N$, provided that all constraints in $\vec\eta$
refer to the RV curve model (the RV noise parameters, as well as any their constraints, do
not affect $\gamma$).
\section{Parametric confidence regions}
\label{sec_confreg}
\textbf{PlanetPack}\xspace makes it easy to construct the level contours of the function~(\ref{loglikmod}),
which can serve as asymptotic parametric confidence regions. The method is generally
similar to the one described in \citep{Baluev08c}. Let our full vector of the RV curve
parameters be $\vec\xi$, and suppose we need to construct the confidence region for the
variables $\vec\eta=\vec\eta(\vec\xi)$. This new vector $\vec\eta$ necessarily has a smaller
dimension than $\vec\xi$ (in practice usually there are only one or two parameters in
$\vec\eta$), and it may represent just a subset of $\vec\xi$ or some simple function of
$\vec\xi$ (among those described in Section~\ref{sec_con}). Then, for a given trial
$\vec\eta_0$ from a multi-dimensional grid, we perform the constrained
fitting~(\ref{likmaxeta}).
The partly-maximized function $\tilde{\mathcal L}^*$ in~(\ref{likmaxeta}) can be plotted
on a multi-dimensional grid of $\vec\eta$, and its level contours will represent the
necessary confidence regions. We need to notice that \textbf{PlanetPack}\xspace does not contain any graphical
plotting facilities; it only generates a table of the quantities $\vec\eta$,
$\tilde{\mathcal L}^*(\vec\eta)$, $\vec\xi^*(\vec\eta)$, which is supposed to be used
later by an external graphical plotter (like e.g.\ GNUPLOT).
We still need to calibrate these level contours with the actual significance probability.
For this goal, we also need to define the following quantities, produced by the usual
unconstrained fitting:
\begin{eqnarray}
\tilde{\mathcal L}^{**} &=& \max_{\vec\xi} \tilde{\mathcal L}(\vec\xi) , \nonumber\\
\vec\xi^{**} &=& \arg \max_{\vec\xi} \tilde{\mathcal L}(\vec\xi).
\label{likmax}
\end{eqnarray}
Then, following \citep{Baluev08b}, we can pose a hypothesis testing task, with the
encompassing (alternative) hypothesis $\mathcal K$: ``$\vec\xi$ is arbitrary'' (implying
the best-fitting estimation $\vec\xi = \vec\xi^{**}$ and $\tilde{\mathcal L}_{\mathcal K} =
\tilde{\mathcal L}^{**}$), and the restricted (base) hypothesis $\mathcal H$: ``$\vec\xi$
satisfies the constraint $\vec\eta(\vec\xi) = \mathop{\mathrm{const}}$'' (implying $\vec\xi =
\vec\xi^*(\vec\eta)$ and $\tilde{\mathcal L}_{\mathcal H} = \tilde{\mathcal
L}^*(\vec\eta)$). The numbers of the degrees of freedom in the relevant models are now
$d_{\mathcal H} = \dim\vec\xi-\dim\vec\eta$ and $d_{\mathcal K} = \dim\vec\xi$. Note that
due to the divisor $\gamma$ in~(\ref{loglikmod}), which depends on the number of free
parameters (hence, on the number of constraints too), the function $\tilde{\mathcal L}$ is
a bit different in~(\ref{likmaxeta}) and in~(\ref{likmax}). This means, e.g., that
$\tilde{\mathcal L}^{**} \neq \max_{\vec\eta} \tilde{\mathcal L}^*(\vec\eta)$ in our
case.
The confidence level for a given likelihood contour $\tilde{\mathcal L}^*(\vec\eta) =
\mathop{\mathrm{const}}$ can be mapped to the relevant likelihood-ratio statistic $\tilde Z$ of
Section~\ref{sec_stat} below, with $\tilde{\mathcal L}_{\mathcal H} = \tilde{\mathcal L}^*$ and
$\tilde{\mathcal L}_{\mathcal K} = \tilde{\mathcal L}^{**}$. From the well-known classical
results it follows that when $N\to\infty$, the quantity $2\tilde Z$ asymptotically follows
the $\chi^2$ distribution with $d=d_{\mathcal K}-d_{\mathcal H}=\dim\vec\eta$ degrees of
freedom.\footnote{Possible constraints set on the parameters of the RV \emph{noise}
(rather than \emph{curve}) model do not affect the corrector $\gamma$, but still affect
the asymptotic $\chi^2$ distribution of $\tilde Z$, increasing its number of degrees of
freedom. \textbf{PlanetPack}\xspace is aware of this subtle issue and deals with it properly.} Therefore, the
overall sequence to obtain the asymptotic confidence regions for the parameters $\vec\eta$
looks like the following:
\begin{enumerate}
\item Obtain the necessary table of $\tilde{\mathcal L}^*(\vec\eta)$.
\item For a selected contour value $\tilde{\mathcal L}^*$ and pre-calculated
$\tilde{\mathcal L}^{**}$, construct the statistic $\tilde Z$.
\item Evaluate the corresponding asymptotic $\chi^2$ confidence probability as $P_d(\tilde
Z)$, where $P_d(z) = \left. \Gamma_z\left(\frac{d}{2}\right)\right/
\Gamma\left(\frac{d}{2}\right)$, with $\Gamma_z$ being the incomplete gamma function.
\end{enumerate}
Step two is actually done by \textbf{PlanetPack}\xspace automatically: the relevant value of $\tilde Z$ is
written in the output table along with the other data. The probability $P_d(\tilde Z)$ is
not evaluated by \textbf{PlanetPack}\xspace, because it would require access to non-standard math libraries,
which we try to avoid. However, the necessary gamma function is available in GNUPLOT,
which we recommend to use when plotting the relevant probability contours in a graph.
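Alternatively, if the output table is post-processed programmatically, the asymptotic probability $P_d(\tilde Z)$ is available e.g.\ via SciPy (a snippet of ours, not part of \textbf{PlanetPack}\xspace):
\begin{verbatim}
from scipy.special import gammainc  # regularized lower incomplete gamma

def conf_prob(Ztilde, d):
    # P_d(z) = Gamma_z(d/2)/Gamma(d/2), i.e. the chi^2_d CDF at 2z.
    return gammainc(0.5*d, Ztilde)
\end{verbatim}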
\section{Red-noise analysis}
\label{sec_rednoise}
\textbf{PlanetPack}\xspace can deal with RV data contaminated by correlated (``red'') noise. Such red
noise appears rather frequently in practice and, without proper treatment, introduces many
misleading effects \citep{Baluev11,Baluev13a}. \textbf{PlanetPack}\xspace uses the maximum-likelihood
algorithm of red-noise reduction described in \citep{Baluev13a} in full
detail. The RV noise model is now more complicated than the basic white-noise one
described in Section~\ref{sec_datamod}. It is modelled by a Gaussian random process with
an exponentially decreasing correlation function. In the white-noise case we had only a
single noise parameter, $\sigma_\star^2$ (or a few such parameters for different time
series). The free parameters of the correlated RV noise model are: the variance of the
``white'' noise component $\sigma_\mathrm{white}^2$, the variance of the ``red'' noise
component $\sigma_\mathrm{red}^2$, and the noise correlation timescale
$\tau_\mathrm{red}$, such that the covariance coefficient between two RV measurements
separated by the time gap $\Delta t$ is equal to $\sigma_\mathrm{red}^2 e^{-\Delta
t/\tau_\mathrm{red}}$. This parameter $\tau_\mathrm{red}$ should not be confused with
$\tau_{jn}$ from~(\ref{RVmod_obs}).
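For concreteness, a minimal sketch of this covariance model and of a plain Gaussian log-likelihood built upon it is given below (single dataset, our own notation; the actual algorithm of \citep{Baluev13a} is considerably more elaborate, e.g.\ it also involves bias-reducing corrections analogous to those discussed above):
\begin{verbatim}
import numpy as np

def rednoise_cov(t, sig_meas, sig_white2, sig_red2, tau_red):
    # C_ij = sig_red2 * exp(-|t_i - t_j|/tau_red), plus the white part
    # (stated uncertainties + sig_white2) added on the diagonal.
    dt = np.abs(t[:, None] - t[None, :])
    C = sig_red2*np.exp(-dt/tau_red)
    C[np.diag_indices_from(C)] += sig_meas**2 + sig_white2
    return C

def gauss_loglik(resid, C):
    sign, logdet = np.linalg.slogdet(C)
    return -0.5*(logdet + resid @ np.linalg.solve(C, resid)
                 + len(resid)*np.log(2.0*np.pi))
\end{verbatim}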
\textbf{PlanetPack}\xspace allows for two types of red-noise models: one with a ``shared'' red noise
and one with ``separated'' red noise. In the shared model, \textbf{PlanetPack}\xspace deals with only a single
pair of the red-noise parameters $(\sigma_\mathrm{red}^2,\tau_\mathrm{red})$, and the red
component of the noise is the same for all RV data points of the joint time series. This
model means that the red noise is generated by the star itself, and not by individual
instruments. The white parts of the noise are still assumed different for individual data
sets of the compound time series. In the second, separated, model, the red noise is
treated separately for each individual dataset, and the number of red-noise
parameters is increased accordingly. The correlations between RV data points belonging to
different datasets (i.e., different spectrographs) are set to zero in such a case. It is
also possible to specify a red-noise component for only some of the datasets, leaving the
others purely white. It is not allowed to specify one or more separated red-noise
components when a shared red noise is defined, because such a model would be very close to
degenerate.
Although this method of red-noise reduction is rather new, it has proven highly efficient
in practice, in the cases of the exoplanetary systems GJ876 \citep{Baluev11} and
GJ581 \citep{Baluev13a}. We believe that it should appear useful in other cases too, so we
offer its implementation in \textbf{PlanetPack}\xspace.
\section{Newtonian $N$-body fitting and dynamics}
\label{sec_nbody}
Some exoplanetary systems show detectable hints of non-Keplerian dynamics due to
interplanetary perturbations. In this case a more complicated RV curve model should be
used, based on the numerical integration of the relevant $N$-body task.
The algorithm of $N$-body fitting used by \textbf{PlanetPack}\xspace is the one described in \citep{Baluev11} in
full detail. This algorithm involves an integration of the $N$-body equations for the
planetary coordinates and velocities together with the associated differential equations
planetary coordinates and velocities together with the associated differential equations
for their partial derivatives with respect to the osculating orbital elements (i.e., the
variational or sensitivity equations). This method allows one to calculate the necessary
objective function, its gradient, and its Hessian matrix with a much better speed/error
ratio than e.g.\ evaluating the gradient via finite differences. The osculating orbital
parameters are referenced in the Jacobi coordinate system. Please see \citep{Baluev11} for
the explanation of the method, coordinate system, and other details.
The choice of the Jacobi system is motivated by the fact that it allows much smoother
switching between perturbed and unperturbed RV models. The main difficulty in such a
transition comes from the planetary orbital period estimates: the \emph{apparent} period
(the one seen in an RV periodogram) is different from the \emph{osculating} period. The
first-order formula for this displacement is given in \citep{Ferraz-Mello-lec1}. This
offset appears due to the secular perturbations in the planetary mean longitude, and it
has the following bad consequence. Assume we performed a non-perturbed fit and found a
best-fit (apparent, or periodogram) period for some planet. After that, we may wish to see
what will change if we add the interplanetary perturbations. When making a perturbed fit,
we have no other option than to treat the apparent period value as the osculating one, but
these values are different by definition! In other words, feeding the $N$-body model
with the observed period value generates a biased actual (averaged) model period; it is
displaced from the observed period that we have just substituted as the osculating one. In
the worst cases, we may discover that our fitting algorithm refuses to converge to
anything reasonable at all, because the relevant frequency displacement exceeds (even
significantly exceeds) the periodogram resolution $\sim 1/T$. This was the case for the
planet GJ876~\emph{d} in \citep{Baluev11}, for example. Ideally, we should first reduce
this displacement between the periods, e.g.\ according to the formulae by
\citet{Ferraz-Mello-lec1}. However, in our previous works we have established (rather
empirically) that the use of the Jacobi coordinates practically eliminates the need for
such a period correction: the osculating Jacobi periods appear much closer to the apparent
periods than the osculating periods referenced in other coordinate systems. Such an
effect is achieved because we refer the osculating orbital periods to an increased star
mass value, incorporating the mass of the planet whose osculating period we want to
define, and also the masses of the planets below it (among those included in the integration). Again,
see \citep{Baluev11} for the details.
To make such $N$-body fitting work we obviously need a numerical integrator. \textbf{PlanetPack}\xspace uses
an extension of the old \citet{Everhart74} integrator for this goal. As discussed
by \citet{Avdushev10}, the Everhart integrator is, basically, an implicit Runge--Kutta
integrator, equipped with an efficient predictor evaluation. The original Everhart integrator
was based on the Gauss--Radau or Gauss--Lobatto splitting of each integration step.
\citet{Avdushev10} gives the general formulae suitable for an arbitrary sequence of
splitting nodes. In particular, the Gauss--Legendre and Gauss--Lobatto spacings generate an
integrator with a useful symplectic property (when the integration step is constant). The
integrator used in \textbf{PlanetPack}\xspace is a $16$th-order one, based on $8$ Gauss--Legendre nodes.
In comparison with the \citet{Avdushev10} implementation, we introduced some changes to
increase the calculation speed:
\begin{enumerate}
\item The formulae given in \citep{Avdushev10} are valid for a system of first-order
equations $\dot{\vec x}=F(\vec x)$. We extended them to the second-order case that we
actually deal with, $\ddot{\vec x}=F(\vec x)$. This allowed us to roughly double the
integrator performance, in comparison with the trivial substitution $\vec y = \{\vec
x,\dot{\vec x}\}$ leading to the first-order system $\dot{\vec y} = G(\vec y) =
\{\dot{\vec x}, F(\vec x)\}$. The necessary corrections are fairly obvious when comparing
the formulae of the original Everhart method with the general formulae by Avdushev. We do
not detail these changes here, since this would require replicating a large part of
Avdushev's paper.
\item In contrast to \citep{Avdushev10}, in the source code we define the integration
nodes as compile-time constants. All other derived coefficients and constants of the
scheme are pre-calculated at the compilation stage as well (i.e., before the execution of
\textbf{PlanetPack}\xspace itself). This also improves the calculation speed significantly. However, this goal
was reached by means of rather sophisticated template metaprogramming tools of C++, which
require a fully standard-compliant compiler with clever code optimization. For example,
GCC or the Intel C++ Compiler work well (when proper optimization options are turned on),
while with the MS Visual C++ compiler we failed to achieve equally fast code.
\item The step-size control method of the original Everhart integrator is imperfect. If
$s$ stands for the number of integration nodes, the step size is adjusted as if the
integrator had the order of $s$, but for the specific node systems like, e.g., Lobatto,
Radau, or Legendre ones, the actual integrator order is equal to $2s-2$, $2s-1$, or $2s$,
respectively. In case of our $16$-th order integrator, the step would be chosen in a very
pessimistic manner, as if the integration order was only $8$. Then the resulting
integration errors would be much smaller than what we request. We have established
empirically that the actual integration error appears roughly equal to the square of the
requested one. Therefore, we correct the step-size control procedure by passing the
\emph{square root} of the desired relative precision, instead of the desired relative
precision itself. In practice this simple method works nicely: the step is scaled
according to the actual integrator order ($16$), and the actual precision of the
integrator is in much better agreement with the requested one (within $1$--$2$ orders of
magnitude).
\end{enumerate}
All these changes led to a significant cumulative increase in the speed of the
calculations, in comparison with the FORTRAN code provided by \citet{Avdushev10}, as well
as in comparison with RADAU15, a traditional widespread FORTRAN implementation of the
Everhart integrator.
In addition to the $N$-body fitting, which requires a short-term $N$-body integration,
\textbf{PlanetPack}\xspace can perform the traditional long-term numerical integration. The integration scheme is
the same for both cases: the one based on $8$ Gauss--Legendre nodes. The
difference is in the step-size control: for short-term integrations we use a variable
step size (aimed at maximum performance), while for long-term integrations we
use a constant step (aimed at preserving the symplectic property).
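To convey the flavour of such a scheme, below is a toy two-stage ($4$th-order) Gauss--Legendre implicit Runge--Kutta step in Python, with the implicit stage equations solved by simple fixed-point iterations. \textbf{PlanetPack}\xspace's actual integrator uses $8$ nodes (order $16$), Everhart-style predictors, and the second-order-equation form, so this is only a schematic illustration.
\begin{verbatim}
import numpy as np

# Two-stage Gauss--Legendre coefficients (order 4).
S3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - S3/6.0],
              [0.25 + S3/6.0, 0.25]])
B = np.array([0.5, 0.5])

def gl_step(f, y, h, iters=10):
    # One implicit RK step for y' = f(y); with a constant step this
    # family of schemes possesses the symplectic property.
    k = np.array([f(y), f(y)])                  # trivial predictor
    for _ in range(iters):                      # corrector iterations
        k = np.array([f(y + h*(A[i] @ k)) for i in range(2)])
    return y + h*(B @ k)

# Example: planar two-body problem with GM = 1, state y = (x, v).
def f(y):
    x, v = y[:2], y[2:]
    return np.concatenate([v, -x/np.linalg.norm(x)**3])

y = np.array([1.0, 0.0, 0.0, 1.0])              # circular orbit
for _ in range(1000):
    y = gl_step(f, y, 0.01)
\end{verbatim}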
\section{Statistical issues: analytical methods}
\label{sec_stat}
Statistics is an important component of \textbf{PlanetPack}\xspace. It includes some theoretical results
(classic and recent ones), as well as tools for numerical Monte Carlo simulations. The
newly-developed statistical theory implemented in \textbf{PlanetPack}\xspace mainly concerns the significance
levels of the periodograms. \textbf{PlanetPack}\xspace calculates the false alarm probability (${\rm FAP}$) of
individual periodogram peaks using the method explained in
\citep{Baluev08a,Baluev08b,Baluev09a}, which is based on the theory of extreme values of
random processes (the generalized Rice method). For a periodogram where the signal is
modelled by a trigonometric polynomial of degree $n$, the main ${\rm FAP}$ estimation formula
derived in the works by \citet{Baluev08a,Baluev09a} looks like
\begin{eqnarray}
{\rm FAP}(z) \lesssim M(z) \simeq W \alpha_n e^{-z} z^{n-1/2}, \nonumber\\
\alpha_n = \frac{2^n}{(2n-1)!!} \sum_{k=1}^n \frac{(-1)^{n-k} k^{2n+1}}{(n+k)!(n-k)!}, \nonumber\\
W = \Delta f T_\mathrm{eff},
\label{prdg_fap}
\end{eqnarray}
where $z$ is the observed maximum peak of the periodogram, $\Delta f$ is the width of the
frequency band, and $T_\mathrm{eff}$ is the effective length of the time series. The
latter quantity is defined as $\sqrt{4\pi \mathbb{D} t}$, where $\mathbb{D} t$ is the weighted
variance of the times $t_{ji}$ (with the weights taken as $1/\sigma_{ji}^2$ at the
best-fit $\vec p$). This effective length is usually close to the plain time span of the
time series. The sign `$\lesssim$' in~(\ref{prdg_fap}) means that $M(z)$ represents an
upper bound for ${\rm FAP}(z)$ and simultaneously an asymptotic approximation for ${\rm FAP}(z)$
when $z\to\infty$. The approximation for the function $M(z)$ in~(\ref{prdg_fap}) was
obtained under the so-called ``assumption of the uniform phase coverage''. Despite its
apparently restrictive name, for the stated ${\rm FAP}$ evaluation task this assumption
works well in the majority of practical cases, as we have shown, even under extremely
strong spectral leakage (aliasing).
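For reference, evaluating the bound~(\ref{prdg_fap}) numerically is straightforward. The
fragment below is an illustrative re-implementation, not the \textbf{PlanetPack}\xspace source:
\begin{verbatim}
#include <cmath>

// Upper bound / asymptotic approximation M(z) for the false alarm
// probability of an observed periodogram peak z.
//   n -- degree of the trigonometric polynomial signal model,
//   W -- normalized bandwidth, W = Delta_f * T_eff.
double fap_upper_bound(double z, int n, double W)
{
    double dfact = 1.0;                         // (2n-1)!!
    for (int j = 3; j <= 2 * n - 1; j += 2) dfact *= j;
    double sum = 0.0;                           // alternating sum over k
    for (int k = 1; k <= n; ++k) {
        double term = std::pow(k, 2 * n + 1)
                    / (std::tgamma(n + k + 1) * std::tgamma(n - k + 1));
        sum += ((n - k) % 2 == 0 ? term : -term);
    }
    double alpha_n = std::pow(2.0, n) / dfact * sum;
    return W * alpha_n * std::exp(-z) * std::pow(z, n - 0.5);
}
\end{verbatim}
For $n=1$ the coefficient reduces to $\alpha_1=1$, so the familiar single-harmonic result
$M(z) = W e^{-z}\sqrt{z}$ is recovered.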
Strictly speaking, the formula~(\ref{prdg_fap}) was derived for the case when the RV
models are linear (except for the frequency parameter), and the noise uncertainties are
known a priori (no noise models involved). However, for more practical cases, including
weakly non-linear models and parametrized noise, the same formulae can be used in the
asymptotic sense for $N\to\infty$. See \citep{Baluev08b,Baluev13b} for the details.
Unfortunately, for the periodograms involving models with correlated noise of
Section~\ref{sec_rednoise}, we have not yet developed a reliable theory of the
significance levels. In this case \textbf{PlanetPack}\xspace will evaluate an approximation of the ${\rm FAP}$
according to some suggestive generalization of~(\ref{prdg_fap}) to the red-noise models,
but at present Monte Carlo simulations must be considered superior.
\textbf{PlanetPack}\xspace is tuned to utilize the likelihood-ratio test for comparison of nested models. Given
two rival RV models, a base (simpler) one $\mu_\mathcal{H}$ and an alternative (more
complicated) one $\mu_\mathcal{K}$, we have the classical hypothesis testing task: is the
base hypothesis $\mathcal H$ consistent with the data, or should it be rejected in favour
of its alternative $\mathcal K$? This question can be answered after calculation of the
classic likelihood-ratio statistic
\begin{eqnarray}
Z = \ln \mathcal L_\mathcal{K}^* - \ln \mathcal L_\mathcal{H}^*, \nonumber\\
\mathcal L_\mathcal{H}^* = \max_{\vec\xi_\mathcal{H}} \mathcal L_\mathcal{H}(\vec\xi_\mathcal{H}), \quad
\mathcal L_\mathcal{K}^* = \max_{\vec\xi_\mathcal{K}} \mathcal L_\mathcal{K}(\vec\xi_\mathcal{K}).
\label{likrat}
\end{eqnarray}
The larger $Z$ is, the greater is the observable advantage of $\mathcal K$ over $\mathcal
H$. \textbf{PlanetPack}\xspace, however, should honour the bias-reducing modification~(\ref{loglikmod}), which
leads to the modified likelihood ratio of \citet{Baluev08b}, defined as
\begin{eqnarray}
\tilde Z = \frac{N_\mathcal{K}}{N}\left( \ln\tilde{\mathcal L}_\mathcal{K}^* - \ln\tilde{\mathcal L}_\mathcal{H}^* \right) +
\frac{N_\mathcal{K}}{2} \ln \frac{N_\mathcal{H}}{N_\mathcal{K}}, \nonumber\\
\tilde{\mathcal L}_\mathcal{H}^* = \max_{\vec\xi_\mathcal{H}} \tilde{\mathcal L}_\mathcal{H}(\vec\xi_\mathcal{H}), \quad
\tilde{\mathcal L}_\mathcal{K}^* = \max_{\vec\xi_\mathcal{K}} \tilde{\mathcal L}_\mathcal{K}(\vec\xi_\mathcal{K}),
\label{likratmod}
\end{eqnarray}
where $N_{\mathcal H,\mathcal K} = N - d_{\mathcal H,\mathcal K}$ with $d_{\mathcal H,
\mathcal K}$ being the numbers of the degrees of freedom in the RV models to compare.
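For concreteness, once the two maximized modified log-likelihoods are available, computing
$\tilde Z$ is a one-line operation; a minimal sketch (not the actual \textbf{PlanetPack}\xspace source):
\begin{verbatim}
#include <cmath>

// Modified likelihood-ratio statistic of Eq. (likratmod).  lnLH, lnLK are
// the maximized modified log-likelihoods of the base and alternative
// models, dH, dK their numbers of degrees of freedom, N the number of
// observations.
double modified_lik_ratio(double lnLH, double lnLK, int dH, int dK, int N)
{
    const double NH = N - dH;
    const double NK = N - dK;
    return (NK / N) * (lnLK - lnLH) + (NK / 2.0) * std::log(NH / NK);
}
\end{verbatim}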
The quantity $\tilde Z$ is the critical quantity for the decision: the larger $\tilde Z$
is, the less likely is $\mathcal H$ in comparison with $\mathcal K$. When the RV
models are linearisable, the asymptotic distribution of $2\tilde Z$ (for $N\to\infty$) is
the $\chi^2$-distribution with $d = d_\mathcal{K} - d_\mathcal{H}$ degrees of freedom
(under $\mathcal H$). This framework is used in \textbf{PlanetPack}\xspace to define the generalized
periodograms (Section~\ref{sec_prdg}) and the asymptotic confidence regions
(Section~\ref{sec_confreg}). In practice, at least for the confidence-region
determination task, the asymptotic $\chi^2$ distribution may work well even when the RV
model is rather complicated and non-linear \citep{Baluev13a}. For the periodograms,
however, we should use formula~(\ref{prdg_fap}) and the related statistical theory,
rather than the classical $\chi^2$ distribution. This is because the models involved in
the periodogram definition are not entirely linearisable \citep{Baluev13b}.
The definition~(\ref{likratmod}) differs from the classic one in~(\ref{likrat}) in the
normalization and offset which were introduced to compensate for the corrector $\gamma$
in~(\ref{loglikmod}). This $\gamma$ is different for the model $\mathcal H$ or $\mathcal
K$, so we needed to introduce the bias of $(N_\mathcal{K}/2) \ln
(N_\mathcal{H}/N_\mathcal{K}) \simeq d/2$ to make $\tilde Z$ asymptotically equivalent to
$Z$ (with a possible residual error of ${\sim} 1/N$). The normalizing factor
$N_\mathcal{K}/N$ does not alter the asymptotic properties of $\tilde Z$ and serves a
rather cosmetic purpose: it was chosen so that for the multiplicative noise model,
$\sigma_i^2 = \kappa/w_i$ with fixed weights $w_i$, the statistic $\tilde Z$ appears equal
to the statistic $z_3$ from \citep{Baluev08a}.
It is important that the model $\mathcal{K}$ includes $\mathcal{H}$ as a special case or a
subset of lesser dimension, i.e. these models are nested. This implies, in particular,
that $d_\mathcal{H} < d_\mathcal{K}$ and the fit parameters of $\vec\xi_\mathcal{H}$
represent a subset of $\vec\xi_\mathcal{K}$.
Another small but useful statistical method, implemented in \textbf{PlanetPack}\xspace, is the Vuong test for
the comparison of non-nested rival models \citep{Baluev12}. It can be used to resolve the
period ambiguity due to the aliasing, or other types of ambiguity involving peer
(non-nested) models.
\section{Statistical issues: simulations}
\label{sec_simul}
For more intricate statistical tasks, \textbf{PlanetPack}\xspace allows the user to perform numerical Monte Carlo
simulations in a user-friendly manner. There are a few Monte Carlo algorithms that are
implemented in \textbf{PlanetPack}\xspace.
\subsection{Plain Monte Carlo assuming Gaussian noise and a single nominal model}
\label{subsec_MC}
This is a classical Monte Carlo scheme, which is used to model the distribution function
of the statistic $\tilde Z$ or the probability density of the best-fit estimations
$\vec\xi^*$. In this algorithm we assume that the true values of the parameters are more
or less known. The relevant simulated distribution functions, $P(\tilde Z |
\hat{\vec\xi})$ and $p(\vec\xi^* | \hat{\vec\xi})$, depend on the assumed nominal values
$\hat{\vec\xi}$, which are considered as true. A schematic code sketch follows the list
below.
\begin{enumerate}
\item First of all, select some ``nominal'' (assumed ``true'') values $\hat{\vec\xi}$
somewhere in the region of interest. We may select e.g.\ the best-fitting model for this
goal, although such a choice is not mandatory.
\item Given the chosen nominal vector $\hat{\vec\xi}$, evaluate the implied nominal RV
values and the RV error variances $\sigma_i$ (or, for the red-noise framework, the full
noise covariance matrix).
\item Construct a simulated RV dataset by adding simulated Gaussian errors to the nominal
RV curve, generated on the basis of the previously evaluated uncertainties and
correlations.
\item Based on the simulated dataset, evaluate the value of the likelihood function at the
nominal parameter values from step~1, and the maximum value of $\tilde{\mathcal L}$ for
this trial. Based on these two values, evaluate the necessary modified likelihood ratio
statistic $\tilde Z$ for this trial.
\item Save this value of $\tilde Z$, as well as the set of the simulated best fitting
parameters (when necessary), and return to step~3, if the desired number of trials has not
been accumulated yet.
\end{enumerate}
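The following sketch illustrates the loop above for the white-noise case; the model
interfaces are hypothetical stand-ins for the \textbf{PlanetPack}\xspace fitting machinery, and for brevity the
plain ratio of Eq.~(\ref{likrat}) is accumulated (the modified statistic adds the
normalization and offset of Eq.~(\ref{likratmod})):
\begin{verbatim}
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical interfaces (assumptions of this sketch, not a real API):
std::vector<double> nominal_rv(const std::vector<double>& xi); // model RVs
std::vector<double> rv_sigmas(const std::vector<double>& xi);  // white noise
double loglik_at(const std::vector<double>& xi,
                 const std::vector<double>& rv);   // ln L at fixed xi
double loglik_max(const std::vector<double>& rv);  // maximized ln L

std::vector<double> simulate_Z(const std::vector<double>& xi_hat, int ntrials)
{
    std::mt19937 rng(42);
    const auto mu    = nominal_rv(xi_hat);     // step 2: nominal RV values
    const auto sigma = rv_sigmas(xi_hat);      //         and uncertainties
    std::vector<double> zs;
    for (int trial = 0; trial < ntrials; ++trial) {
        std::vector<double> rv(mu.size());     // step 3: simulated dataset
        for (std::size_t i = 0; i < rv.size(); ++i) {
            std::normal_distribution<double> noise(0.0, sigma[i]);
            rv[i] = mu[i] + noise(rng);
        }
        // steps 4-5: likelihood-ratio statistic for this trial
        zs.push_back(loglik_max(rv) - loglik_at(xi_hat, rv));
    }
    return zs;
}
\end{verbatim}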
Therefore, this classical method is not self-contained: we should feed the simulation with
the nominal vector $\hat{\vec\xi}$. Due to this weakness, the results of the simulation
are usable only when the functions $P(\tilde Z | \hat{\vec\xi})$ and $p(\vec\xi^* |
\hat{\vec\xi})$ do not depend strongly on $\hat{\vec\xi}$ (at least for
the expected realistic values of $\hat{\vec\xi}$, e.g.\ the ones covering the uncertainty
region).
We actually recommend this simulation in practice only to detect a significant
non-linearity of the specified task, or, vice versa, to show that a particular task can be
dealt with by means of the linear asymptotic methods discussed in the previous sections. This
is achieved by comparing the asymptotic confidence regions or the
asymptotic $\chi^2$ likelihood-ratio distribution with the results of simulations. We must
note that, e.g., a non-elliptic shape of the parametric confidence regions (asymptotic or
Monte Carlo ones) does not yet imply any genuine non-linearity at all. Although such a
deviation from ellipticity is usually deemed an indicator of non-linearity, it may
often be caused by other reasons, e.g.\ a careless choice of the parametrization
\citep{Baluev13a}. To check that the non-linearity is indeed genuine (``endogenous'') and
that it really requires the use of a complicated non-asymptotic treatment, it is necessary
to verify the agreement between the asymptotic results and the results of the classic
Monte Carlo simulation.
When the model is proven to be significantly non-linear, the classic Monte Carlo scheme
does not offer more realistic confidence regions or confidence probabilities, so it should
not be used for this goal either. A naive interpretation of the classic Monte Carlo
results leads to some caveats. For example, it may double the statistical bias of the
maximum-likelihood estimations, rather than compensate for it: the Monte Carlo trials will
generate ``mock'' best-fit parametric solutions that are biased relative to the actual
best-fit one in the same manner as this actual best-fit configuration is biased relative
to the truth.
\subsection{Bootstrap simulation}
\label{subsec_bstrp}
The bootstrap is used when there is a danger that the RV errors are not really Gaussian,
although we must note that the actual practical profit from the bootstrap in exoplanet
searches has not yet been investigated in detail. The algorithm is as follows (a short
sketch of one trial is given after the list).
\begin{enumerate}
\item Evaluate the best fitting model and the resulting RV residuals.
\item Apply a random shuffling procedure to the residuals (we do this separately for each
sub-dataset of our combined time series).
\item Evaluate the statistic $\tilde Z$ and best fitting parameters in the same manner as
in the plain Monte Carlo simulation.
\item Save the resulting value of $\tilde Z$ and parameters and return to step~2.
\end{enumerate}
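A single bootstrap trial (steps 2--3) may be sketched as follows; the data layout is
hypothetical and serves illustration only:
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// One bootstrap trial: shuffle the best-fit residuals within each
// sub-dataset and add them back to the best-fit model RV curve.
void bootstrap_trial(std::vector<std::vector<double>>& residuals,
                     const std::vector<std::vector<double>>& model_rv,
                     std::vector<std::vector<double>>& simulated_rv,
                     std::mt19937& rng)
{
    for (std::size_t d = 0; d < residuals.size(); ++d) {  // each sub-dataset
        std::shuffle(residuals[d].begin(), residuals[d].end(), rng);
        simulated_rv[d].resize(residuals[d].size());
        for (std::size_t i = 0; i < residuals[d].size(); ++i)
            simulated_rv[d][i] = model_rv[d][i] + residuals[d][i];
    }
}
\end{verbatim}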
The bootstrap shares all the weaknesses of the classical Monte Carlo scheme, except for the
assumption of noise Gaussianity. In addition, it possesses extra disadvantages: for
example, it is meaningful only with a white-noise RV model, because random shuffling of the
residuals basically destroys any correlation structure of the RV noise, which a
red-noise model tries to deal with.
Another weakness of the bootstrap simulation is that it does not work well for the noise
parameters \citep{Baluev13a}. Their bootstrap-simulated values are concentrated in an
unexpectedly small region, much smaller than the real uncertainty domain. Consequently,
the result of such a simulation looks roughly as if these noise parameters were held fixed
during the simulation. This also results in a different and rather unpredictable behaviour
of the statistic $\tilde Z$ obtained using such a simulation.
\subsection{Genuinely frequentist Monte Carlo simulation}
\label{subsec_FMC}
The existing criticism of the above-described Monte Carlo algorithms is due to their
sensitivity to some assumed constant ``true'' or ``nominal'' vector of the fit parameters.
This is one of the main arguments that many statisticians use to highlight
the advantages of the Bayesian approach. The Bayesian methods do not rely on a single
nominal vector: instead, they deal with scattered prior distributions covering a large
parametric domain.
However, it appears that this has led to an unjustified opposition of the Bayesian methods
to the frequentist ones. Although the above simulation schemes do suffer from the issue of
constant nominal values, this issue can be eliminated in the genuinely frequentist
framework. Leaving the philosophy aside, the main \emph{technical} difference between the
Bayesian and frequentist methods is in how they treat the uncertainty of the nominal
values of the fit parameters. While the Bayesian methods use weighted averaging with some
pre-set prior probability density, the genuine frequentist approach is based on the
worst-case principle, and uses maximization or minimization in place of the averaging.
If in the simplified frequentist approach we dealt with the distribution function
$P(\tilde Z | \hat{\vec\xi})$, in the genuine frequentist treatment we should replace it
with
\begin{equation}
P_\mathrm{worst}(\tilde Z) = \min_{\vec\xi \in \Xi} P(\tilde Z | \vec\xi),
\label{Pw}
\end{equation}
which means the worst-case confidence probability. From the Bayesian standpoint, with
probabilistic $\vec\xi$, the distribution of $\tilde Z$ would be expressed as
\begin{equation}
P_\mathrm{mean}(\tilde Z) = \int\limits_{\Xi} P(\tilde Z | \vec\xi) p(\vec\xi) d\vec\xi
\label{Pm}
\end{equation}
with $p(\vec\xi)$ being the prior distribution of the parameters. Obviously, the
difference between~(\ref{Pw}) and~(\ref{Pm}) is crucial, but we cannot say that one of
them is generally better or, on the contrary, deprecated. Each approach has its own advantages
and disadvantages in concrete special circumstances; some of them are briefly discussed in
\citep{Baluev13a}. Obviously, in the frequentist approach we only need to outline a
parametric domain $\Xi$, and any prior density inside this domain does not play any role
when we find the minimum.
The entire algorithm of the genuine frequentist simulation looks as follows (a small
post-processing sketch is given after the list):
\begin{enumerate}
\item Select the $i$th trial point $\hat{\vec\xi}_i$ (possibly residing inside a given
parametric domain $\Xi$).
\item Run the plain Monte Carlo algorithm of Sect.~\ref{subsec_MC} assuming that the true
parameters correspond to the selected point.
\item Save the simulated distribution $P(\tilde Z | \hat{\vec\xi}_i)$ of the test
statistic of interest ($\tilde Z$ in our case) and return to step~1.
\item When a sufficiently dense coverage of the parametric domain mentioned in step~1 is
reached, evaluate the function $P_\mathrm{worst}(\tilde Z) = \min_i P(\tilde Z |
\hat{\vec\xi}_i)$.
\end{enumerate}
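The final minimization of step~4 is a simple pointwise operation over the tabulated
distributions; an illustrative sketch (in practice this post-processing is done outside of
\textbf{PlanetPack}\xspace, see below):
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <vector>

// Pointwise minimum of the simulated distributions P(Z | xi_i), each
// tabulated on a common grid of Z values (one inner vector per trial
// point xi_i), yielding P_worst(Z) of Eq. (Pw).
std::vector<double> worst_case(const std::vector<std::vector<double>>& P)
{
    std::vector<double> Pworst(P.front().size(), 1.0);
    for (const auto& Pi : P)                          // trial points xi_i
        for (std::size_t j = 0; j < Pi.size(); ++j)
            Pworst[j] = std::min(Pworst[j], Pi[j]);   // pointwise minimum
    return Pworst;
}
\end{verbatim}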
After that, the rigorous frequentist false alarm probability associated with an
\emph{observed} value $\tilde Z_*$ (which was obtained using exactly the same models that
were used during the simulation) can be calculated as $1-P_\mathrm{worst}(\tilde Z_*)$.
At present, \textbf{PlanetPack}\xspace does not incorporate Bayesian tools, but the genuine frequentist
simulation can be organized by calling it repeatedly from an external shell
script. To do this we should first generate some set of $\hat{\vec\xi}_i$, saving it in a
file. This can be done with \textbf{PlanetPack}\xspace by means of the plain Monte Carlo algorithm, or using
another preferred external procedure. After that, \textbf{PlanetPack}\xspace can be executed sequentially for
each saved $\hat{\vec\xi}_i$ to perform the simulation of step~2, saving the relevant
distribution $P(\tilde Z | \hat{\vec\xi}_i)$. Then these distributions should be processed
externally to generate $P_\mathrm{worst}(\tilde Z)$. This is exactly how we estimated the
significance of the planet GJ~581~\emph{e} in \citep{Baluev13a}.
\section{Conclusions}
\label{sec_conc}
We hope that the \textbf{PlanetPack}\xspace functionality will grow further in the future, not limited to the
things that we have described in this paper. In particular, it would be tempting to add
some algorithms of Bayesian simulations, and to have some capabilities of dealing with
astrometric data, because of the forthcoming domination of GAIA astrometry. Among more
technical things, we would like to make \textbf{PlanetPack}\xspace able to work in a multi-threaded mode,
profiting from the full capabilities of modern multi-core CPUs or even from GPU computing
(at present, \textbf{PlanetPack}\xspace is single-threaded).
\textbf{PlanetPack}\xspace is free and open-source software. We do not set any limitation on the use of the
program or of its source code (except for providing a proper reference to the present
paper). Anyone who is interested is allowed to freely modify its code to improve it or to
adapt it to their specific needs, although it would of course be preferable to incorporate
a significant and worthy improvement in \textbf{PlanetPack}\xspace itself rather than to make an independent
fork.
\section*{Acknowledgements}
This work was supported by Russian Foundation for Basic Research (project 12-02-31119
mol\_a) and by the programme of the Presidium of Russian Academy of Sciences
``Non-stationary phenomena in the objects of the Universe''. I would like to express my
gratitude to my colleague, Dr. I.I.~Nikiforov, as well as to the reviewers, for providing
useful comments during the preparation of this manuscript. Also, we acknowledge that a few
of the linear algebra algorithms used in \textbf{PlanetPack}\xspace represent re-worked versions of the relevant
GNU Scientific Library subroutines.
\bibliographystyle{model2-names}
\section{ATLAS detector}
\label{sec:detector}
The ATLAS experiment~\cite{DetectorPaper:2008} at the LHC is a multi-purpose particle detector
with a cylindrical forward-backward and $\phi$-symmetric
geometry
and an approximate $4\pi$ coverage in
solid angle.
It consists of an inner tracking detector surrounded by a thin superconducting solenoid
providing a \SI{2}{\tesla} axial magnetic field, electromagnetic and hadron calorimeters, and a muon spectrometer.
The inner tracking detector covers the pseudorapidity range $|\eta| < 2.5$.
It consists of silicon pixel, silicon microstrip, and transition
radiation tracking detectors. The newly installed innermost layer of pixel sensors~\cite{IBL} was operational for the first time during the 2015 data-taking.
Lead/liquid-argon (LAr) sampling calorimeters provide electromagnetic (EM) energy measurements
with high granularity.
A hadron (steel/scintillator-tile) calorimeter covers the central pseudorapidity range ($|\eta| < 1.7$).
The end-cap and forward regions are instrumented with LAr calorimeters
for both the EM and hadronic energy measurements up to $|\eta| = 4.9$.
The muon spectrometer surrounds the calorimeters and features
three large air-core toroidal superconducting magnets with eight coils
each, providing coverage up to $|\eta| = 2.7$.
The field integral of the toroids ranges between 2.0 and \SI{6.0}{\tesla m} across most of the detector.
It includes a system of precision tracking chambers and fast detectors for triggering.
\section{Background estimation}
\label{sec:background}
The main SM background process in SRA, SRB, SRD, and SRE is $\Znunu$ production in association with heavy-flavour jets. The second most significant background is \ttbar\ production where one
\Wboson\ boson decays into a lepton and a neutrino and the lepton (particularly a hadronically decaying $\tau$
lepton) is either not identified or is reconstructed as a jet. This
process gives the major background contribution in SRC and an important
background in SRB, SRD and SRE as well. Other important background
processes are $\Wboson\to\ell\nu$ plus heavy-flavour jets, single top quark, and the
irreducible background from $\ttZ$, where the $\Zboson$ boson decays into two neutrinos.
The main background contributions are estimated primarily from
comparisons between data and simulation outside the signal regions. Control regions (CRs) are
designed to enhance a particular background process, and are orthogonal to the SRs while probing a similar event topology. The CRs are used to normalize the simulation to data, but extrapolation from the CR to the SR is taken from simulation. Sufficient data are needed to avoid large statistical uncertainties in the background estimates, and the CR definitions are chosen to be
kinematically as close as possible to all SRs, to minimize the
systematic uncertainties associated with extrapolating the background
yield from the CR to the SR. Where CR definitions are farther from the SR definition, validation regions are employed to cross-check the extrapolation. In addition, control-region selection criteria
are chosen to minimize potential contamination from signal that could shadow contributions in the signal regions. The signal contamination is below 8\% in all CRs for all signal points that have not been excluded by previous ATLAS searches. No significant difference in the background estimates was found between the case where only SM backgrounds were considered and when signal is included in the estimation. As the CRs are not $100\%$ pure in the process of
interest, the cross-contamination between CRs from other processes is
estimated. The normalization factors and the cross-contamination are determined simultaneously for all regions using a
fit described below.
Detailed CR definitions are given in Tables~\ref{tab:selectionCRZs},~\ref{tab:selectionCRTs}, and~\ref{tab:selectionCRWSTTTGamma}.
They are used for the \Zboson\ (CRZs), \ttbar\ (CRTs), \Wboson\ (CRW), single top (CRST), and \ttZ\ (CRTTGamma) background estimation.
The \dphijetthreemet\ and
$\mT(\ell,\met)$ requirements are designed
to reduce contamination from SM multijet processes. The number of leptons (from this point on, lepton is used to mean electron or muon) is
indicated by $N_{\ell}$ and the transverse momentum of the lepton is
indicated by $\pT^{\ell}$. In all one-lepton CRs, once the trigger and
minimum $\pT^{\ell}$ selection are applied, the lepton is treated as a
non-$b$-tagged jet (to emulate the hadronic $\tau$ decays in
the SRs) in the computation of all jet-related variables. In the
two-lepton CRZs, a lepton-\pt\ requirement of at least 28 GeV is made to ensure the trigger selection is fully efficient.
The invariant mass of the two oppositely charged
leptons, denoted by $m_{\ell\ell}$, must be consistent
with the leptons having originated from a $\Zboson$ boson. The transverse
momenta of these leptons are
then vectorially added to the \ptmiss\ to mimic the $\Znunu$ decays in the SRs,
forming the quantity \metprime. Quantities that depend on the \met\ are recalculated in the CRZs using \metprime\ and identified by the addition of a prime (e.g. \mtbminprime\ and \mtbmaxprime). Requirements such as the maximum
$\mT(\ell,\met)$ and the minimum $\Delta R$ between the two
highest-weight $b$-tagged jets and the lepton, $\drblmin$, are used to
enforce orthogonality between CRT, CRW, and CRST. In CRST, the requirement on the $\Delta R$ between
the two highest-weight $b$-tagged jets, $\drbjetbjet$, is used to reject
$\ttbar$ contamination from the control region enriched in single-top
events. Finally, the normalization of the \ttV\ background in the
signal region, which is completely dominated by $\ttbar+Z(\to\nu\nu)$,
is estimated with a $\ttbar+\gamma$ control region in a way similar to the method described in Ref.~\cite{GtcStop1L}. The
same lepton triggers and lepton-\pt\ requirements are used for the $\ttbar+\gamma$ control region as in the
CRZs. Additionally, the presence of an isolated photon with $\pt>150\gev$ is
required and it is used to model the \Zboson\ decay in the signal
regions because of the similarity between the diagrams for photon and \Zboson\ production. Similarly to the \Zboson\ control region, the photon is used in the estimation of \met-related variables.
To estimate the \Zjets\ and \ttbar\ background in the different kinematic regions of the signal regions, individual control regions are designed for all signal regions where possible. Only if the statistical power of control regions is low, are they merged to form one control region for multiple signal regions. In the case of CRST, CRW, and CRTTGamma, this results in the use of one common CR for all signal regions. Distributions from the $\Zjets$, $\ttbar$, $\Wjets$, single top, and $\ttbar\gamma$ control regions are shown in Figure~\ref{fig:CRs}.
\begin{table}[htpb]
\caption{Selection criteria for the $\Zjets$ control regions used to estimate the $\Zjets$ background contributions in the signal regions.}
\begin{center}
\def\arraystretch{1.4}
\begin{tabular}{c||c|c|c|c}
\hline \hline
Selection & CRZAB-TT-TW & CRZAB-T0 & CRZD & CRZE \\ \hline \hline
Trigger & \multicolumn{4}{c}{electron or muon} \\
\hline
$N_{\ell}$ & \multicolumn{4}{c}{2, opposite charge, same flavour} \\
\hline
$\pT^{\ell}$ & \multicolumn{4}{c}{$>28 \gev$} \\
\hline
$m_{\ell\ell}$ & \multicolumn{4}{c}{[86,96] \gev} \\
\hline
$N_{\mathrm{jet}}$ & \multicolumn{4}{c}{$\ge 4$} \\
\hline
\ptzero, \ptone, \pttwo, \ptthree & \multicolumn{4}{c}{$80,80,40,40\gev$} \\
\hline
$\met$ & \multicolumn{4}{c}{$<50 \gev$} \\
\hline
$\metprime$ & \multicolumn{4}{c}{$ > 100$ \gev} \\
\hline
\nBJet & \multicolumn{4}{c}{$\ge 2 $} \\
\hline
\mantikttwelvezero & \multicolumn{2}{c|}{$>120\gev$} & \multicolumn{2}{c}{-} \\
\hline
\mantikttwelveone & $>60\gev$ & $<60\gev$ & \multicolumn{2}{c}{-} \\
\hline
\mtbminprime & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{$>200\,$\gev} \\
\hline
\mtbmaxprime & \multicolumn{2}{c|}{-} & $>200\,$\gev & - \\
\hline
\HT & \multicolumn{3}{c|}{-} & $>500\,$\gev \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:selectionCRZs}
\end{table}
\begin{landscape}
\begin{table}[htpb]
\caption{Selection criteria for the \ttbar\ control regions used to estimate the \ttbar\ background contributions in the signal regions.}
\begin{center}
\def\arraystretch{1.4}
\scriptsize
\begin{tabular}{c||c|c|c|c|c|c|c|c|c}
\hline \hline
Selection & CRTA-TT & CRTA-TW & CRTA-T0 & CRTB-TT & CRTB-TW & CRTB-T0 & CRTC & CRTD & CRTE \\ \hline \hline
Trigger & \multicolumn{9}{c}{\met} \\
\hline
$N_{\ell}$ & \multicolumn{9}{c}{1} \\
\hline
$\pT^{\ell}$ & \multicolumn{9}{c}{$>20 \gev$} \\
\hline
$N_{\mathrm{jet}}$ & \multicolumn{9}{c}{$\ge 4$ (including
electron or muon)} \\
\hline
\ptzero, \ptone, \pttwo, \ptthree & \multicolumn{9}{c}{$80,80,40,40\gev$} \\
\hline
\nBJet & \multicolumn{9}{c}{$ \ge 2 $}\\
\hline
$\dphijettwomet$ & \multicolumn{9}{c}{$>0.4$} \\
\hline
$\dphijetthreemet$ & \multicolumn{6}{c|}{$>0.4$} & - & \multicolumn{2}{c}{$>0.4$}\\
\hline
$\mT(\ell,\met)$ & \multicolumn{6}{c|}{$[30,100]\gev$} & $<100\gev$ & \multicolumn{2}{c}{$[30,100]\gev$} \\
\hline
\mtbmin & \multicolumn{6}{c|}{$>100\,$\gev} & - & \multicolumn{2}{c}{$>100\,$\gev} \\
\hline
$\drblmin$ & \multicolumn{6}{c|}{$<1.5$} & $<2.0$ & \multicolumn{2}{c}{$<1.5$}\\
\hline
\mantikttwelvezero & \multicolumn{6}{c|}{$>120 \gev$} & \multicolumn{3}{c}{-} \\
\hline
\mantikttwelveone & $>120 \gev$ & $[60, 120] \gev$ & $<60 \gev$ & $>120 \gev$ & $[60, 120] \gev$ & $<60 \gev$ & \multicolumn{3}{c}{-} \\
\hline
\mantikteightzero & \multicolumn{3}{c|}{$>60 \gev$} & \multicolumn{5}{c|}{-} & $>120 \gev$ \\
\hline
\mantikteightone & \multicolumn{8}{c|}{-} & $>80 \gev$ \\
\hline
$\met$ & $ >250$ \gev & $ >300$ \gev & $ >350$ \gev & \multicolumn{6}{c}{$>250 \gev$} \\
\hline
$\drbjetbjet$ & $>1.0$ & \multicolumn{2}{c|}{-}& \multicolumn{3}{c|}{$>1.2$} & - & $>0.8$ & - \\
\hline
$\mtbmax$ & \multicolumn{3}{c|}{-}& \multicolumn{3}{c|}{$>200\gev$} & - & $>100\gev$ & - \\
\hline
$\ptone$ & \multicolumn{7}{c|}{-} & $>150\gev$ & - \\
\hline
$\ptthree$ & \multicolumn{7}{c|}{-} & $>80\gev$ & - \\
\hline
$\ptbzero+\ptbone$ & \multicolumn{7}{c|}{-} & $>300\gev$ & - \\
\hline
$\NjV$ & \multicolumn{6}{c|}{-}& $ \ge 5$ & \multicolumn{2}{c}{-} \\
\hline
$\NbV$ & \multicolumn{6}{c|}{-}& $ \ge 1$ & \multicolumn{2}{c}{-} \\
\hline
$\PTISR$ & \multicolumn{6}{c|}{-}& $>400\gev$ & \multicolumn{2}{c}{-} \\
\hline
$\pTSFour$ & \multicolumn{6}{c|}{-}& $>40\gev$ & \multicolumn{2}{c}{-} \\
\hline
$\HT$ & \multicolumn{8}{c|}{-}& $>500\gev$ \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:selectionCRTs}
\end{table}
\end{landscape}
\begin{table}[htpb]
\caption{Selection criteria for the common \Wjets, single-top, and $\ttbar+\gamma$ control-region definitions.}
\begin{center}
\def\arraystretch{1.4}
\begin{tabular}{c||c|c|c}
\hline \hline
Selection & CRW & CRST & CRTTGamma \\ \hline \hline
Trigger & \multicolumn{2}{c|}{\met} & electron or muon \\
\hline
$N_{\ell}$ & \multicolumn{3}{c}{1} \\
\hline
$\pT^{\ell}$ & \multicolumn{2}{c|}{$>20 \gev$} & $>28\gev$ \\
\hline
$N_{\gamma}$ & \multicolumn{2}{c|}{-} & 1 \\
\hline
$\pT^{\gamma}$ & \multicolumn{2}{c|}{-} & $>150\gev$ \\
\hline
$N_{\mathrm{jet}}$ & \multicolumn{2}{c|}{$\ge 4$ (including electron or muon)} & $\ge4$ \\
\hline
\ptzero, \ptone,\pttwo,\ptthree & \multicolumn{3}{c}{$80,80,40,40\gev$} \\
\hline
\nBJet & $1$ & \multicolumn{2}{c}{$ \ge 2 $} \\
\hline
$\dphijettwomet$ & \multicolumn{2}{c|}{$>0.4$} & - \\
\hline
$\mT(\ell,\met)$ & \multicolumn{2}{c|}{$[30,100]\gev$} & - \\
\hline
$\drblmin$ & \multicolumn{2}{c|}{$>2.0$} & - \\
\hline
$\met$ & \multicolumn{2}{c|}{$>250 \gev$} & - \\
\hline
$\drbjetbjet$ & - & $>1.5$ & - \\
\hline
$\mantikttwelvezero$ & $<60\gev$ & $>120\gev$ & - \\
\hline
$\mtbmin$ & - & $>200\gev$ & - \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:selectionCRWSTTTGamma}
\end{table}
\begin{figure}[htpb]
\begin{center}
\subfloat[]{\includegraphics[width=0.44\textwidth]{figures/Znunu/MT2Chi2Lep_CRZAB_T0_log}}
\subfloat[]{\includegraphics[width=0.44\textwidth]{figures/Znunu/MetLep_CRZE_log}}\\
\subfloat[]{\includegraphics[width=0.44\textwidth]{figures/ttbar/CA_RISR_CRTopC}}
\subfloat[]{\includegraphics[width=0.44\textwidth]{figures/Wjets/MtBMax_CRW_log}}\\
\subfloat[]{\includegraphics[width=0.44\textwidth]{figures/singleTop/JetPt_1__CRST_log}}
\subfloat[]{\includegraphics[width=0.44\textwidth]{figures/ttGamma/SigPhotonPt_0__CRTTGamma_log}}
\caption{Distributions of (a)~\mttwoprime\ in CRZAB-T0, (b)~\metprime\ in CRZE, (c)~\rISR\ in CRTC,
(d)~\mtbmax\ in CRW, (e)~the transverse momentum of the second-leading-\pT\ jet in CRST, and
(f)~the photon \pT\ in CRTTGamma. The stacked histograms show the SM prediction, normalized using scale factors derived from the simultaneous fit to all backgrounds. The ``Data/SM'' plots show the ratio of data events to the total SM prediction. The hatched uncertainty band around the SM prediction and in the ratio plot illustrates the combination of MC statistical and detector-related systematic uncertainties. The rightmost bin includes overflow events.}
\label{fig:CRs}
\end{center}
\end{figure}
Contributions from all-hadronic \ttbar\ and multijet production are found to be negligible. These are
estimated from data using a procedure described in
Ref.~\cite{jetSmearing}. The procedure determines the jet response from simulated dijet events, and then uses this response function to smear the jet response in low-\met\ events. The jet response is cross-checked with data where the \met\ can
be unambiguously attributed to the mismeasurement of one of the
jets. Diboson production, which is also subdominant, is estimated directly from simulation.
\subsubsection*{Simultaneous fit to determine SM background}
\label{sec:simultaneousfit}
The observed numbers of events in the various control regions are
included in a binned profile likelihood fit~\cite{likelihoodFit} to determine the SM background
estimates for \Zboson, \ttbar, \Wboson, single top, and \ttZ\ in each signal region.
The normalizations of these backgrounds are determined simultaneously to best match the observed data in each control region, taking contributions from all backgrounds into account. A likelihood function is built as the
product of Poisson probability density functions, describing the observed and
expected numbers of events in the control regions~\cite{histFitter}. This procedure takes common
systematic uncertainties (discussed in
Section~\ref{sec:Systematics}) between the control and signal regions and
their correlations into account as they are treated as nuisance
parameters in the fit and are modelled by Gaussian probability density
functions.
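Schematically, and suppressing the detailed form of the constraint terms, this likelihood
can be written as
\[
\mathcal{L}(\vec b, \vec\theta) \;=\; \prod_{i\,\in\,\mathrm{CRs}} \mathrm{Pois}\bigl(n_i \,\big|\, \lambda_i(\vec b, \vec\theta)\bigr) \times \prod_{j} G(\theta_j),
\]
where $n_i$ and $\lambda_i$ are the observed and expected event counts in control region $i$,
$\vec b$ denotes the background normalization factors, and $G(\theta_j)$ are the Gaussian
constraint terms for the nuisance parameters $\theta_j$.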
The contributions from all other background processes (dibosons and multijets) are fixed at the values expected from
the simulation, using the most accurate theoretical cross sections
available, as described in Section~\ref{sec:simulation}, while their uncertainties are used as nuisance parameters in the fit.
\input{valRegions}
\section{Conclusions}
\label{sec:conclusions}
Results from a search for top squark pair production based
on an integrated luminosity of \lumi\ of $\rts = 13 \tev ~pp$
collision data recorded by the ATLAS experiment at the LHC in 2015 and
2016 are presented. Top squarks are searched for in
final states with high-\pT\ jets and large missing transverse
momentum. In this paper, direct top squark production is studied assuming top squarks decay via $\stop
\rightarrow t^{(*)} \ninoone$ with large or small mass differences between the top squark and the neutralino $\Delta
m(\stop,\ninoone)$ and via $\stop\to b \chinoonepm$, where $m_{\chinoonepm}=\mLSP+1\gev$.
Additionally, gluino-mediated \stop\ production is
studied, in which gluinos decay via $\gluino\rightarrow t\stop$, with a
small $\Delta m(\stop,\ninoone)$.
No significant excess above the expected SM background is observed. Exclusion limits at 95\% confidence level in the plane of the top-squark and LSP masses are derived, resulting in the exclusion of top-squark masses in the range
\stopLimLowLSP\ GeV for $\ninoone$ masses below $160\GeV$. For the case where $\mstop\sim m_t+\mLSP$, top-squark masses in the range \stopLimDiag\ GeV are excluded. In addition, model-independent limits and $p$-values for each signal region are reported. Limits that take into account an additional decay of $\stop\to b \chinoonepm$ are also set with an exclusion of top-squark masses between 450 and 850 GeV for $\mLSP<240\gev$ and $B(\stop\to t \LSP)=50\%$ for $m_{\chinoonepm}=\mLSP+1\gev$. Limits are also derived in two pMSSM models, where one model assumes a wino-like NLSP and the other model is constrained by the dark-matter relic density. In addition to limits in pMSSM slices, limits are set in terms of one pMSSM-inspired simplified model where $m_{\chinoonepm}=\mLSP+5\gev$ and $m_{\ninotwo}=\mLSP+10\gev$. Finally, exclusion contours are reported for gluino production where $\mstop=\mLSP+5\gev$, resulting in gluino masses being constrained to be above 1800 GeV for \stop\ masses below 800 GeV.
\section{Introduction}
\label{sec:intro}
Supersymmetry (SUSY)~\cite{Golfand:1971iw,Volkov:1973ix,Wess:1974tw,Wess:1974jb,Ferrara:1974pu,Salam:1974ig} is an extension of the Standard Model (SM) that
can resolve, for example, the gauge hierarchy
problem~\cite{Dimopoulos:1981zb,Sakai:1981gr,PhysRevD.24.1681,IBANEZ1981439}
by introducing supersymmetric partners of the known bosons and fermions. The SUSY partner to the top quark, the top squark ($\tilde{t}$), plays an important role in cancelling potentially large top-quark loop corrections in the Higgs boson mass. The superpartners of the left- and right-handed top quarks, $\stopL$ and $\stopR$, mix to form the two mass eigenstates $\stopone$ and $\stoptwo$, where $\stopone$ is the lighter one. Throughout this paper it is assumed that the analysis is only sensitive to $\stopone$.
In $R$-parity-conserving SUSY models~\cite{Farrar:1978xj}, the supersymmetric partners are produced in pairs. Top squarks are produced by strong interactions through quark--antiquark ($\qqbar$) annihilation or gluon--gluon fusion, and the cross section of direct top-squark pair production is largely decoupled from the specific choice of SUSY model parameters~\cite{Beenakker:1997ut,Beenakker:2010nq,Beenakker:2011fu,Borschensky:2014cia}. The decay of the top squark depends on the mixing of the superpartners of left- and right-handed top quarks, the masses of the top superpartner, and the mixing parameters of the fermionic partners of the electroweak and Higgs bosons. The mass eigenstates of the partners of electroweak gauge and Higgs bosons (binos, winos, higgsinos) are collectively known as charginos,
$\tilde{\chi}_{i}^{\pm}$, $i=1,2$, and neutralinos, $\tilde{\chi}_{i}^{0}$, $i=1, ..., 4$, where \ninoone\ is assumed to be the lightest supersymmetric particle (LSP) which is stable and a dark-matter candidate~\cite{Goldberg:1983nd,Ellis:1983ew}. For the models considered, either \ninotwo\ or \chinoonepm\ is assumed to be the next lightest supersymmetric particle (NLSP).
Three different decay scenarios are considered in this search: (a) both top squarks decay via $\stop\rightarrow t^{(*)} \ninoone$, (b) at least one of the top squarks decays via $\stop\rightarrow b \chinoonepm \rightarrow b W^{(*)}\ninoone$, with various hypotheses for $\mLSP$ and $m_{\chinoonepm}$, and (c) where $m_{\ninotwo}$ is small enough for at least one top squark to decay via $\stop\to t \ninotwo \to h/Z \ninoone$, where $h$ is the SM-like Higgs boson with a mass of 125 GeV, as illustrated in
Figure~\ref{fig:feynDiagModels}(a)$-$(c), respectively. The interpretation of the results uses simplified models
~\cite{Alwall:2008ve,Alwall:2008ag,Alves:2011wf} where only one or two decay steps are allowed. In the case with two allowed decays, referred to later in this paper as a natural SUSY-inspired mixed grid, the mass splitting between the \chinoonepm\ and the \ninoone, $\Delta m(\chinoonepm,\ninoone)$, is assumed to be 1~\gev. A grid of signal samples is generated across the plane of the top-squark and \ninoone\ masses with a grid spacing of $50\gev$ across most of the plane,
assuming
maximal mixing between the partners of the left- and right-handed top quarks.
In both the one- and two-step decay scenarios the LSP is considered to be a pure bino state. Additionally, results are interpreted in two slices of phenomenological MSSM (pMSSM)~\cite{Djouadi:1998di,Berger:2008cq} models, referred to as wino-NLSP and well-tempered neutralino pMSSM models in the remainder of this paper. The pMSSM models are based on the more general MSSM~\cite{Fayet:1976et,Fayet:1977yc} but with the additional requirements of no new sources of CP violation and flavour-changing neutral currents, as well as first- and second-generation sfermion mass and trilinear coupling degeneracy. Finally, results are also interpreted in a simplified model which is inspired by the pMSSM and is referred to as non-asymptotic higgsino. Details of the models that are used in the various interpretations are given in Section~\ref{sec:result}.
In addition to direct pair production, top squarks can be produced indirectly through gluino decays, as shown in Figure~\ref{fig:feynDiagModels}(d). This search considers models where the mass difference between the top squark and the neutralino is small, i.e. $\Delta m(\stop,\ninoone)=5\GeV$. In this scenario, the jets originating from the \stop\ decays have momenta below the experimental acceptance, resulting in a signature nearly identical to that of $\stop\to t \ninoone$ signal models (Figure~\ref{fig:feynDiagModels}(a)).
\begin{figure}[htb]
\begin{center}
\subfloat[$\stop\ra t^{(*)}\ninoone$]{\includegraphics[width=0.25\textwidth]{figures/intro/stst-bqqbqqN1N1-tt}}\hspace{0.05\textwidth}
\subfloat[$\stop\ra b\chinoonepm\ra b \Wboson^{(*)}\ninoone$]{\includegraphics[width=0.25\textwidth]{figures/intro/stst-bbWWN1N1}}\hspace{0.05\textwidth}
\subfloat[$\stop\ra t\ninotwo\to h/Z\ninoone$]{\includegraphics[width=0.25\textwidth]{figures/intro/stst-tbhWN1N1}}\hspace{0.05\textwidth}
\subfloat[$\gluino \ra t\stop\ra t\ninoone +$soft]{\includegraphics[width=0.25\textwidth]{figures/intro/gogo-tsofttsoftN1N1-stst}}\hspace{0.05\textwidth}
\end{center}
\caption{The decay topologies of the signal models considered with experimental signatures of four or more jets plus missing transverse momentum. Decay products that have transverse momenta below detector thresholds are designated by the term ``soft''.}
\label{fig:feynDiagModels}
\end{figure}
This paper presents the search for top-squark pair production using an integrated luminosity of \lumi\ of proton--proton ($pp$) collision data provided by the Large Hadron Collider (LHC) at a centre-of-mass energy of $\rts = 13\tev$. The data were collected by the ATLAS detector in 2015 and 2016. All-hadronic final states with at least four jets and large missing transverse momentum\footnote{\AtlasCoordFootnote} ($\ptmiss$, whose magnitude is referred to as \met) are considered, and the results are interpreted according to a variety of signal models as described above. Signal regions are defined to maximize the experimental sensitivity over a large region of kinematic phase space. Sensitivity to high top-squark masses $\sim1000\GeV$ (as in Figure~\ref{fig:feynDiagModels}(a)) and top squarks produced through gluino decays (as in Figure~\ref{fig:feynDiagModels}(d)) is achieved by exploiting techniques designed to reconstruct top quarks that are Lorentz-boosted in the lab frame. The dominant SM background process for this kinematic region is $Z\rightarrow \nu \bar{\nu}$ produced in association with jets initiated by heavy-flavour quarks (heavy-flavour jets). The sensitivity to the decay into $b\chinoonepm$ is enhanced by vetoing events containing hadronically decaying top-quark candidates to reduce the \ttbar\ background, leaving $Z\rightarrow \nu \bar{\nu}$ as the largest SM background. Sensitivity to the region where $m_{\stop}-m_{\ninoone} \sim m_t$, which typically has relatively low-\pT\ final-state jets and low \MET, is achieved by exploiting events in which high-\pT\ jets from initial-state radiation (ISR) boost the di-top-squark system in the transverse plane. For this regime, \ttbar\ production gives the dominant background contribution. Similar searches based on $\rts = 8\tev$ and $\rts = 13\tev$ data collected at the LHC have been performed by both the ATLAS~\cite{Atlas8TeV,Atlas8TeVSummary,GtcStop1L,stop2L8TeV} and CMS~\cite{Khachatryan:2015pwa,Khachatryan:2016pxa,Sirunyan:2016jpr,Khachatryan:2017rhw,Khachatryan:2016xvy} collaborations.
\part*{Appendix}
\printbibliography
\newpage
\input{atlas_authlist}
\end{document}
\section{Event reconstruction}
\label{sec:reconstruction}
Events are required to have a primary vertex~\cite{vertexReco} reconstructed from at
least two tracks with $\pT>400\mev$.
Among the vertices found, the vertex with the largest summed $\pT^2$ of the associated tracks is chosen.
\begin{sloppypar}
Jets are reconstructed from three-dimensional topological clusters of noise-suppressed calorimeter cells~\cite{caloClusters} using the anti-$k_t$ jet algorithm~\cite{antiKt,Cacciari:2011ma} with a radius parameter
$R=0.4$. An area-based correction is applied to account for energy
from additional $pp$ collisions based on an estimate of the pile-up activity in a given event~\cite{pileupSub}. Calibrated~\cite{jetsCalibrated} jet candidates are required to have
$\pT>20 \gev$ and $|\eta|<2.8$. Events containing jets arising from
non-collision sources or detector noise~\cite{jetCleaning} are removed (``no bad jets'' requirement).
Additional selections based on track information are applied to
jets with $\pT<60 \gev$ and $|\eta|<2.4$ to reject jets that originate from
pile-up interactions~\cite{jetTagger}.
\end{sloppypar}
Jets containing $b$-hadrons and which are within the inner detector
acceptance ($|\eta|<2.5$) are identified (as $b$-tagged jets) with a multivariate algorithm that exploits
the impact parameters of the charged-particle tracks, the presence of secondary
vertices, and the reconstructed flight paths of $b$- and $c$-hadrons inside
the jet~\cite{btagging,MV10c,ATL-PHYS-PUB-2014-014}. The output of the multivariate algorithm is a single $b$-tagging weight which signifies the likelihood of a jet containing $b$-hadrons. The average identification efficiency of jets containing $b$-hadrons is $77\%$ as determined in simulated \ttbar\ events. A rejection factor of approximately 130 is reached for jets initiated by light quarks and
gluons and 6 for jets initiated by charm quarks.
Electron candidates are reconstructed from clusters of energy deposits in the
electromagnetic calorimeter that are matched to a track in the inner
detector. They are required to have $|\eta|<2.47$, $\pT>7\gev$ and
must pass a variant of the ``very loose'' likelihood-based selection~\cite{egamma,egamma2}. The electromagnetic shower of an electron can also form a jet such that a procedure is required to resolve this ambiguity. In the case where the separation between an electron candidate and a non-$b$-tagged ($b$-tagged) jet is $\Delta R < 0.2$,\footnote{For the overlap removal, rapidity, defined as $\frac{1}{2} \ln \frac{E+p_z}{E-p_z}$, is used instead of pseudorapidity in the $\Delta R$ definition.} the candidate is considered to be an electron ($b$-tagged jet). If the
separation between an electron candidate and any jet satisfies $0.2<
\Delta R < 0.4$, the candidate is considered to be a jet, and the electron candidate is removed.
Muons are reconstructed by matching tracks in the inner detector to tracks
in the muon spectrometer and are required to have $|\eta|<2.7$ and
$\pT>6\gev$. If the separation between a muon and any jet is $\Delta R
< 0.4$, the muon is omitted. Events containing muons identified as originating from cosmic rays ($|d_0| > 0.2$ mm and $|z_0| > 1$ mm) or as poorly reconstructed ($\sigma(q/p)/|(q/p)| > 0.2$) are removed (``cosmic and bad muon'' requirement).
Here, $d_0$ is the transverse impact parameter of a track with respect to the primary vertex, $z_0$ is the distance of this point from the primary vertex projected onto the $z$-axis, and $\sigma(q/p)/|(q/p)|$ provides a measure of the momentum uncertainty for a particle with charge $q$.
The $\ptmiss$ vector is the negative vector sum
of the \pT\ of all selected and calibrated electrons, muons, and jets in the
event. An extra term is added to account for small energy depositions in the event that are not associated with any of the selected objects. This ``soft'' term is calculated from inner detector tracks with $\pT > 400 \MeV$ that are matched to the primary vertex but not associated with any physics objects, making it resilient to pile-up contamination~\cite{met}. The missing transverse momentum from the tracking system (denoted by \ptmisstrk, with magnitude \mettrk)
is computed from the vector sum of the reconstructed inner detector tracks with $\pt > 400\MeV$, $|\eta|<2.5$, that are associated with the primary vertex in the event. The \ptmisstrk\ and \mettrk\ are used to reject events with large calorimeter-based \met\ due to pile-up contamination or jet energy mismeasurements. These events, where the \ptmisstrk\ tends to not be aligned with the \ptmiss\ and the \met\ tends to be much larger than the \mettrk, are rejected by requiring that the $\Delta\phi$ between the \ptmiss\ and \ptmisstrk\ is less than $\pi$/3 and that the $\mettrk>30$ GeV.
The requirements on electrons and muons are tightened for the
selection of events in background control regions (described in
Section~\ref{sec:background}) containing leptons. Electron and muon candidates are required to have $\pT>20\GeV$ ($\pT>28\GeV$) for regions using the \met\ (lepton) triggers and to satisfy $\pt$-dependent track- and calorimeter-based isolation criteria.
The calorimeter-based isolation is determined as the ratio of the sum of energy deposits in a cone of $R=0.2$ around the electron or muon candidate to the energy deposits associated with the candidate itself. The track-based isolation is estimated in a similar way but using a variable cone size with a maximum value of $R=0.2$ for electrons and $R=0.3$ for muons. An isolation requirement is applied that is 95\% efficient for electron or muon candidates with $\pt=25$ GeV and 99\% efficient for candidates with $\pt=60$ GeV.
Electron candidates are required to pass a ``tight'' likelihood-based selection~\cite{egamma}.
The impact parameter of the electron in the transverse plane with respect to the reconstructed event primary
vertex is required to be less than five times the impact
parameter uncertainty ($\sigma_{d_0}$). The impact parameter along the
beam direction, $\left|z_0 \times \sin\theta\right|$, is required to be less than $0.5$~mm. Further
selection criteria are also imposed on reconstructed muons: muon candidates are required to pass a ``medium" quality selection~\cite{PERF-2015-10}. In addition, the requirements $|d_0| < 3\sigma_{d_0}$ and
$\left|z_0 \times \sin\theta\right| <0.5$~mm are imposed for muon candidates.
\section{Results and interpretation}
\label{sec:result}
The observed event yields are compared to the expected total number of
background events in
Tables~\ref{tab:srABYields},~\ref{tab:srCYields}, ~\ref{tab:srDEYields},
and Figure~\ref{fig:srSummary}. The total background estimate is
determined from a simultaneous fit to all control regions, based on a procedure described in Section~\ref{sec:simultaneousfit} but including the corresponding signal regions as well as control regions. Figure~\ref{fig:SRs}
shows the distribution of \met, \mttwo, \mtbmax, \mt, \rISR, and \HT\ for the various signal regions, with \rISR\ being shown combining SRC1--5. In these distributions, the background predictions are scaled to the values determined from the simultaneous fit.
\begin{table}[htpb]
\caption{Observed and expected yields, before and after the fit, for SRA and SRB.
The uncertainties include MC statistical uncertainties, detector-related systematic uncertainties, and theoretical uncertainties in the extrapolation from CR to SR.}
\begin{center}
{\renewcommand{\arraystretch}{1.2}
\input{SRABYieldTable.tex}
}
\end{center}
\label{tab:srABYields}
\end{table}
\begin{table}[htpb]
\caption{Observed and expected yields, before and after the fit, for SRC1--5.
The uncertainties include MC statistical uncertainties, detector-related systematic uncertainties, and theoretical uncertainties in the extrapolation from CR to SR.}
\begin{center}
{\renewcommand{\arraystretch}{1.2}
\input{SRC1-5YieldTable.tex}
}
\end{center}
\label{tab:srCYields}
\end{table}
\begin{table}[htpb]
\caption{Observed and expected yields, before and after the fit, for SRD and SRE.
The uncertainties include MC statistical uncertainties, detector-related systematic uncertainties, and theoretical uncertainties in the extrapolation from CR to SR.}
\begin{center}
{\renewcommand{\arraystretch}{1.2}
\input{SRDEYieldTable.tex}
}
\end{center}
\label{tab:srDEYields}
\end{table}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.8\textwidth]{figures/fit/regionSummaryLog}
\caption{Yields for all signal regions after the likelihood fit. The stacked histograms show the SM prediction and the hatched uncertainty band around the SM prediction shows the total uncertainty, which consists of the MC statistical uncertainties, detector-related systematic uncertainties, and theoretical uncertainties in the extrapolation from CR to SR.}
\label{fig:srSummary}
\end{center}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/SRA/Met_SRA_TT}
\includegraphics[width=0.49\textwidth]{figures/SRA/MT2Chi2_SRA_T0}\\%\hspace{0.05\textwidth}
\includegraphics[width=0.49\textwidth]{figures/SRB/MtBMax_SRB_TW}
\includegraphics[width=0.49\textwidth]{figures/SRC/CA_RISR_SRC1_5}\\%\hspace{0.05\textwidth}
\includegraphics[width=0.49\textwidth]{figures/SRD/MtBMax_SRD_high}
\includegraphics[width=0.49\textwidth]{figures/SRE/Ht_SRE}
\caption{Distributions of \met\ for SRA-TT, \mttwo\ for SRA-T0, \mtbmax\ for SRB-TW, \rISR\ for SRC1--5, \mtbmax\ for SRD-high and \HT\ for SRE after the likelihood fit. The stacked histograms show the SM prediction and the hatched uncertainty band around the SM prediction shows the MC statistical and detector-related systematic uncertainties. For each variable, the distribution for a representative signal point is shown.}
\label{fig:SRs}
\end{center}
\end{figure}
No significant excess above the SM prediction is observed in any of
the signal regions.
The smallest $p$-values, which express the probability that the background fluctuates up
to or above the observed data,
are 27\%, 27\%, and 29\% for SRB-T0, SRD-high, and SRA-TT,
respectively. The largest deficit in the data is found in SRC4, where one event is observed while 7.7 background events were expected.
The $95\%$ confidence level (CL) upper limits on the number of beyond-the-SM (BSM) events in each signal region are derived using the CL$_\mathrm{s}$ prescription~\cite{CLs1,CLs2} and calculated from asymptotic formulae~\cite{likelihoodFit}. Model-independent limits on the visible BSM cross sections, defined as $\sigma_{\mathrm{vis}} = S^{95}_{\textnormal{obs}}/\int\!\!{\cal L}\,dt$,
where $S^{95}_{\textnormal{obs}}$ is the 95\% CL upper limit on the number of signal events,
are reported in Table~\ref{tab:upLimits}.
\begin{table}[htpb]
\caption{Left to right: 95\% CL upper limits on the average visible cross section
($\langle\sigma A \epsilon\rangle_{\rm obs}^{95}$) where the average comes from possibly multiple production channels and on the number of
signal events ($S_{\rm obs}^{95}$). The third column
($S_{\rm exp}^{95}$) shows the 95\% CL upper limit on the number of
signal events, given the expected number (and $\pm 1\sigma$
excursions of the expected number) of background events.
The last two columns indicate the CL$_\mathrm{B}$ value, i.e. the confidence level observed for the background-only hypothesis, and the discovery $p$-value ($p$) together with the corresponding significance ($z$).
}
\label{tab:upLimits}
\begin{center}
\input{UpperLimitTableNew}
\end{center}
\end{table}
The detector acceptance multiplied by the efficiency ($A\cdot\epsilon$) is calculated for several signal regions and their
benchmark points. The $A\cdot\epsilon$ values for signal regions aimed at high-energy final states, SRA and SRE, are 9\% and 6\% for their respective signal benchmark points of $\mstop=1000\gev,\mLSP=1\gev$, and $m_{\gluino} = 1700\GeV, \mstop=400\GeV, \mLSP=395\GeV$. SRB, SRD-low, and SRD-high have $A\cdot\epsilon$ of 1.4\%, 0.05\%, and 0.5\% for $\mstop=600\gev,\mLSP=300\gev$; $\mstop =400\GeV, m_{\chinoonepm}=100\GeV, \mLSP=50\GeV$; and $\mstop =700\GeV, m_{\chinoonepm}=200\GeV, \mLSP=100\GeV$ where the branching ratio, $B$($\stop\to b \chinoonepm$) = 100\% is assumed for the SRD samples, respectively. Finally, SRC1--5 (combining the \rISR\ windows) has an $A\cdot\epsilon$ of 0.08\% for $\mstop= 400\GeV, \mLSP=227\GeV$.
The profile-likelihood-ratio test statistic is used to set limits on direct pair production of top squarks. The signal strength parameter is allowed to float in the fit~\cite{histFitter}, and any signal contamination in the CRs is taken into account. Again, limits are derived using the CL$_\mathrm{s}$ prescription and calculated from asymptotic formulae. Orthogonal signal subregions, such as SRA-TT, SRA-TW, and SRA-T0, are statistically combined by multiplying their likelihood functions. A similar procedure is performed for the signal subregions in SRB and SRC. For the overlapping signal regions defined for SRD (SRD-low and SRD-high), the signal region with the smallest expected CL$_\mathrm{s}$ value is chosen for each signal model. Once the signal subregions are combined or chosen, the signal region with the smallest expected CL$_\mathrm{s}$ is chosen for each signal model in the $\stop$--$\ninoone$ signal grid. The nominal event yield in each SR is set to the mean background expectation to determine the expected limits; contours that correspond to $\pm1\sigma$ uncertainties in the background estimates ($\sigma_{\mathrm{exp}}$) are also evaluated. The observed event yields determine the observed limits for each SR; these are evaluated for the nominal signal cross sections as well as for $\pm1\sigma$ theory uncertainties in those cross sections, denoted by $\sigma^{\mathrm{SUSY}}_{\mathrm{theory}}$.
Figure~\ref{fig:SRABC_exclusion} shows the observed (solid red line) and
expected (solid blue line) exclusion contours at 95\% CL in the \stop--\ninoone\ mass plane for \lumi.
The data exclude top-squark masses between
\stopLimLowLSPLow\ and \stopLimLowLSPHigh\ GeV for $\ninoone$ masses below $160\GeV$, extending the Run-1 limits from the combination of zero- and one-lepton channels by 260~\GeV. Additional constraints are set in the case where $\mstop\approx m_t+\mLSP$, for which top-squark masses in the range \stopLimDiag\ GeV are excluded. The limits in this region of the exclusion are new compared to the 8~\TeV\ results and come from the inclusion of SRC, which takes advantage of an ISR system to discriminate between signal and the dominant \ttbar\ background.
For signal models also considering top-squark decays into $b \chinoonepm$ or into additional massive neutralinos, four interpretations are considered:
\begin{description}
\item[\boldmath Natural SUSY-inspired mixed grid:] A simplified model~\cite{naturalSUSY} where $m_{\chinoonepm}=\mLSP+1\gev$ with only two decay modes, $\stop\to b \chinoonepm$ and $\stop\to t\LSP$, and only on-shell top-quark decays are considered. The same
maximal mixing between the partners of the left- and right-handed top quarks
and nature of the \LSP\ (pure bino) as for the $B$($\stop\to t\LSP$)=100\% case is assumed. The branching ratio for $\stop\to t\LSP$ is set to 0\%, 25\%, 50\%, and 75\%, yielding the limits shown in Figure~\ref{fig:tbMet_exclusion}.
\item[\boldmath Non-asymptotic higgsino:] A pMSSM-inspired simplified model with a higgsino LSP, ${m_{\chinoonepm}=\mLSP+5\gev}$, and ${m_{\ninotwo}=\mLSP+10\gev}$, assumes three sets of branching ratios for the considered decays of $\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$~\cite{naturalSUSY}. A set of branching ratios with $B$($\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$) = 33\%, 33\%, 33\% is considered, which is equivalent to a pMSSM model with the lightest top squark mostly consisting of the superpartner of left-handed top quark and $\tanb=60$ (ratio of vacuum expectation values of the two Higgs doublets). Additionally, $B$($\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$) = 45\%, 10\%, 45\% and $B$($\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$) = 25\%, 50\%, 25\% are assumed, which correspond to scenarios with $\mqlthree < \mtr$ (regardless of the choice of \tanb) and $\mtr<\mqlthree$ with $\tanb=20$, respectively. Here \mqlthree\ represents the left-handed third-generation mass parameter and \mtr\ is the mass parameter of the superpartner to the right-handed top-quark. Limits in the \mstop\ and \mLSP\ plane are shown in Figure~\ref{fig:nonAsymhiggsino_exclusion}.
\item[\boldmath Wino-NLSP pMSSM:] A pMSSM model where the LSP is bino-like and has mass \mone\ and where the NLSP is wino-like with mass \mtwo, while $\mtwo=2\mone$ and $\mstop>\mone$~\cite{naturalSUSY}. Limits are set for both positive and negative $\mu$ (the higgsino mass parameter) as a function of the \stop\ and \ninoone\ masses which can be translated to different \mone\ and \mqlthree, and are shown in Figure~\ref{fig:winoNLSP_exclusion}. Only bottom and top-squark production are considered in this interpretation. Allowed decays in the top-squark production scenario are $\stop\to t \ninotwo\to h/Z \LSP$, at a maximum branching ratio of 33\%, and $\stop \to b \chinoonepm$. Whether the $\ninotwo$ dominantly decays into a $h$ or $Z$ is determined by the sign of $\mu$. Along the diagonal region, the $\stop\to t\LSP$ decay with 100\% branching ratio is also considered. The equivalent decays in bottom-squark production are $\sbottom\to t\chinoonepm$ and $\sbottom\to b\ninotwo$. The remaining pMSSM parameters have the following values: $\mthree=2.2$ TeV (gluino mass parameter), $\ms=\sqrt{m_{\stopone} m_{\stoptwo}}=1.2$ TeV (geometric mean of top-squark masses), $\xtms=\sqrt{6}$ (mixing parameter between the superpartners of left- and right-handed states, where $X_{t}=\at-\mu/\tanb$ and $\at$ is the trilinear coupling parameter in the top-quark sector), and $\tanb=20$. All other pMSSM masses are set to $>$3 TeV.
\item[\boldmath Well-tempered neutralino pMSSM:] A pMSSM model in which three light neutralinos and a light chargino, which are mixtures of bino and higgsino states, are considered with masses within $50$~\GeV\ of the lightest state~\cite{atlasDM,wellTemp}. The model is designed to satisfy the SM Higgs boson mass and the dark-matter relic density ($0.10<\Omega h^{2}<0.12$, where $\Omega$ is the energy density parameter and $h$ is the dimensionless Hubble parameter~\cite{relic_density}) with pMSSM parameters: $\mone=-(\mu+\delta)$ where $\delta=20$--$50\gev$, $\mtwo=2.0$ TeV, $\mthree=1.8$ TeV, $\ms=0.8$--$1.2$~\TeV, $\xtms\sim\sqrt{6}$, and $\tanb=20$. For this model, limits are shown in Figure~\ref{fig:wellTemp_exclusion}. Only bottom- and top-squark production are considered in this interpretation. The signal grid points were produced in two planes, $\mu$ vs \mtr\ and $\mu$ vs \mqlthree, and then projected to the corresponding \stop\ and \ninoone\ masses. All other pMSSM masses are set to $>$3 TeV.
\end{description}
\begin{figure}[htpb]
\begin{center} \includegraphics[width=0.7\textwidth]{figures/fit/atlascls_m0m12_wband1_showcms0_StopZL2016_SRABCDE}
\caption{Observed (red solid line) and expected (blue solid line)
exclusion contours at 95\% CL as a function of $\stop$ and
$\ninoone$ masses in the scenario where both top squarks decay
via $\stop\to t^{(*)} \ninoone$. Masses that are within the contours are excluded. Uncertainty bands corresponding to the $\pm 1
\sigma$ variation of the expected limit (yellow band) and the
sensitivity of the observed limit to $\pm 1\sigma$ variations of
the signal theoretical uncertainties (red dotted lines) are also
indicated. Observed limits from all third-generation Run-1 searches~\cite{Atlas8TeVSummary} at $\sqrt{s}=8$ TeV are overlaid in blue for comparison.}
\label{fig:SRABC_exclusion}
\end{center}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/fit/SRABCD_mixed_dm1}
\caption{Observed (solid line) and expected (dashed line) exclusion contours at 95\% CL as a function of $\stop$ and $\ninoone$ masses and branching ratio to $\stop\to t\LSP$ in the natural SUSY-inspired mixed grid scenario where $m_{\chinoonepm}=\mLSP+1\gev$.
}
\label{fig:tbMet_exclusion}
\end{center}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/fit/SRABCD_tN1tN2bC1.pdf}
\caption{Observed (solid line) and expected (dashed line) exclusion contours at 95\% CL as a function of \mstop\ and \mLSP\ for the pMSSM-inspired non-asymptotic higgsino simplified model for a small tan$\beta$ with $B$($\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$) = 45\%, 10\%, 45\% (blue), a large tan$\beta$ with $B$($\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$) = 33\%, 33\%, 33\% (red), and a small $\tilde t_{R}$ with $B$($\stop\to t\ninotwo$, $\stop\to t\LSP$, $\stop\to b\chinoonepm$) = 25\%, 50\%, 25\% (green) assumption. Uncertainty bands correspond to the $\pm 1 \sigma$ variation of the expected limit.}
\label{fig:nonAsymhiggsino_exclusion}
\end{center}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/fit/SRABCD_winoNLSP.pdf}
\caption{Observed (solid line) and expected (dashed line) exclusion contours at 95\% CL as a function of $\stop$ and $\ninoone$ masses for the Wino NLSP pMSSM model for both positive (blue) and negative (red) values of $\mu$. Uncertainty bands correspond to the $\pm 1 \sigma$ variation of the expected limit.
}
\label{fig:winoNLSP_exclusion}
\end{center}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/fit/SRABCD_wellTempered.pdf}
\caption{Observed (solid line) and expected (dashed line) exclusion contours at 95\% CL as a function of $\stop$ and $\ninoone$ masses for the $\tilde t_{L}$ scan (red) as well as for the $\tilde t_{R}$ scan (blue) in the well-tempered pMSSM model. Uncertainty bands correspond to the $\pm 1 \sigma$ variation of the expected limit.}
\label{fig:wellTemp_exclusion}
\end{center}
\end{figure}
The SRE results are interpreted for indirect top-squark production
through gluino decays in terms of the \stop\ vs $\gluino$ mass
plane with $\Delta m(\stop,\ninoone)=5\GeV$.
Gluino masses up to $m_{\gluino}=1800\GeV$ with $\mstop<800\GeV$ are excluded as shown in Figure~\ref{fig:SRE_exclusion}.
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/fit/SRE_exclusion}
\caption{Observed (red solid line) and expected (blue solid line)
exclusion contours at 95\% CL as a function
of $\gluino$ and $\stop$ masses in the scenario where both
gluinos decay via $\gluino\to t\stop\to t\ninoone+$soft
and $\Delta m(\stop,\ninoone)=5\GeV$. Uncertainty bands corresponding to the $\pm 1
\sigma$ variation of the expected limit (yellow band) and the
sensitivity of the observed limit to $\pm 1\sigma$ variations of
the signal theoretical uncertainties (red dotted lines) are also
indicated. Observed limits from previous searches with the ATLAS detector at $\sqrt{s}=8$ and $\sqrt{s}=13$ TeV are overlaid in grey and blue~\cite{GtcStop1L,Gtc1L,GtcMonojet}.}
\label{fig:SRE_exclusion}
\end{center}
\end{figure}
\section{Signal region definitions}
\label{sec:signalregions}
The main experimental signature for all signal topologies is the presence of multiple jets (two of which are $b$-tagged), no muons or electrons, and significant missing transverse momentum.
Five sets of signal regions (SRA--E) are defined to target each topology and kinematic regime. SRA (SRB) is sensitive to production of high-mass \stop\ pairs with large (intermediate) $\Delta m(\stop,\LSP)$. Both SRA and SRB employ top-mass reconstruction techniques to reject background. SRC is designed for the highly compressed region with $\Delta m(\stop,\LSP)\sim m_{t}$. In this signal region, initial-state radiation (ISR) is used to improve sensitivity to these decays. SRD is targeted at $\stop\to b\chinoonepm$ decays, where no top-quark candidates are reconstructed. SRE is optimized for scenarios with highly boosted top quarks that can occur in gluino-mediated top-squark production.
A common preselection is defined for all signal regions. At least four jets are required, of which at least one must be $b$-tagged. The four leading jets (ordered in \pt) must satisfy $\ptzerotothree>80,80,40,40 \gev$ due to the tendency for signal events to have more energetic jets than background. Events containing reconstructed electrons or muons are vetoed. The $\met$ trigger threshold motivates the requirement $\met>250\gev$ and rejects most of the background from multijet and all-hadronic $\ttbar$
events. In order to reject events with mismeasured \met\
originating from multijet and hadronic \ttbar\ decays, an angular
separation between the azimuthal angle of the two highest-\pT\ jets
and the \ptmiss\ is required: $\dphijettwomet
> 0.4$. Further rejection of such events is achieved by requiring the \ptmisstrk\ to be aligned in $\phi$
with respect to the \ptmiss\ calculated from the calorimeter system:
$\mettrk > 30 \gev$ and $\dphimettrk <\pi/3$.
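Schematically, and purely for illustration (the event structure and all names below are placeholders rather than the actual ATLAS software), the common preselection amounts to the following sequence of requirements:
\begin{verbatim}
import math

def passes_preselection(ev):
    """Toy event filter; momenta in GeV, angles in radians."""
    jets = sorted(ev["jet_pt"], reverse=True)
    if len(jets) < 4 or ev["n_btag"] < 1:
        return False
    if not (jets[0] > 80 and jets[1] > 80 and jets[2] > 40 and jets[3] > 40):
        return False
    if ev["n_electrons"] + ev["n_muons"] > 0:   # lepton veto
        return False
    if ev["met"] <= 250:                        # trigger plateau
        return False
    # multijet rejection: leading two jets separated from MET in phi
    if min(ev["dphi_jet_met"][:2]) <= 0.4:
        return False
    # track-based MET consistent in magnitude and direction
    return ev["met_track"] > 30 and ev["dphi_met_track"] < math.pi / 3
\end{verbatim}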
\subsubsection*{Signal Regions A and B}
\begin{figure}[t]
\begin{center}
\subfloat[]{\includegraphics[width=0.5\textwidth]{figures/preselection/AntiKt12M0_preCutSRPlot_withRatio}}
\subfloat[]{\includegraphics[width=0.5\textwidth]{figures/preselection/MtBMin_preCutSRPlot_withRatio}}
\caption{Distributions of the discriminating variables (a)~\mantikttwelvezero\ and (b)~\mtbmin\ after the common preselection and an additional $\mtbmin>50\gev$ requirement. The stacked histograms show the SM prediction before being normalized using scale factors derived from the simultaneous fit (detailed in Section~\ref{sec:simultaneousfit}) to all dominant backgrounds. The ``Data/SM" plots show the ratio of data events to the total SM prediction. The hatched uncertainty band around the SM prediction and in the ratio plots illustrates the combination of statistical and detector-related systematic uncertainties. The rightmost bin includes overflow events.}
\label{fig:preselection}
\end{center}
\end{figure}
SRA and SRB are targeted at direct top-squark pair production where the top squarks decay via $\stop
\rightarrow t \ninoone$ with $\Delta m(\stop,\ninoone) > m_t$. SRA is optimized for $\mstop=1000\gev$ and $\mLSP=1\gev$, while SRB is optimized for $\mstop=600\gev,\mLSP=300\gev$. At least two $b$-tagged jets ($\nBJet\ge2$) are required and an additional requirement on the $\Delta\phi$ of the three leading jets and \ptmiss\ of $\dphijetthreemet > 0.4$ is made.
The decay products of the $\ttbar$ system in the all-hadronic decay mode can often be reconstructed as six distinct $R=0.4$ jets. The transverse shape of these jets is typically circular with a radius equal to this radius parameter, but when two of the jets are less than $2R$ apart in $\eta$--$\phi$ space, the one-to-one correspondence of a jet with a top-quark daughter may no longer hold. Thus, the two hadronic top candidates are reconstructed by applying the \antikt\ clustering algorithm~\cite{antiKt} to the $R=0.4$ jets, using reclustered radius parameters of $R=0.8$ and $R=1.2$. Two $R=1.2$ reclustered jets are required; the mass of the highest-\pT\ $R=1.2$ reclustered jet is shown in Figure~\ref{fig:preselection}(a). The events are divided into three categories based on the resulting $R=1.2$ reclustered jet masses ordered in \pt, as illustrated in Figure~\ref{fig:categories}: the ``TT'' category includes events with two top candidates, i.e.\ with masses $\mantikttwelvezero>120\gev$ and $\mantikttwelveone>120\gev$; the ``TW'' category contains events with one top candidate and a $\Wboson$ candidate, i.e.\ where $\mantikttwelvezero>120\gev$ and $60<\mantikttwelveone<120\gev$; and the ``T0" category represents events with only one top candidate, i.e.\ where $\mantikttwelvezero>120\gev$ and $\mantikttwelveone<60\gev$. Since the signal-to-background ratio is different in each of these categories, they are optimized individually for SRA and SRB.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/SRA/CategoryDefs}
\caption{Illustration of signal-region categories (TT, TW, and T0) based on the $R=1.2$ reclustered top-candidate masses for simulated direct top-squark pair production with $(m_{\stop},m_{\ninoone})=(1000,1) \GeV$ after the loose preselection requirement described in the text. The black lines represent the requirements on the reclustered jet masses.}
\label{fig:categories}
\end{center}
\end{figure}
The most powerful discriminating variable against SM \ttbar\ production is the $\met$ value, which for the signal results from the undetected $\ninoone$ neutralinos. Substantial $\ttbar$ background rejection is provided by additional requirements that reject events in which one $\Wboson$ boson decays via a charged lepton plus neutrino. The first requirement is that the transverse mass (\mt) calculated from the \met\ and
the $b$-tagged jet with minimum distance in $\phi$ to the $\ptmiss$ direction is above $200\GeV$:
\begin{equation*}
\mtbmin\ = \sqrt{2\,\ptb\,\met \left[1-\cos{\Delta\phi\left(\vecptb,\ptmiss\right)}\right]} > 200\,\GeV,
\end{equation*}
since its upper bound (ideally, without consideration of resolution effects) is below the top-quark mass for the \ttbar\ background, as illustrated in Figure~\ref{fig:preselection}(b). An additional requirement is made on the mass of the leading (in \pt) $R=0.8$ reclustered jet to be consistent with a \Wboson\ candidate: $\mantikteightzero>60\gev$. Additionally, requirements on the stransverse mass (\mttwo)~\cite{Lester:1999tx,Barr:2003rg}
are made; these are especially powerful in the T0 category, where a $\chi^2$ method is applied to reconstruct top quarks with lower momenta, for which the reclustering is suboptimal.
The \mttwo\ variable is constructed from the direction and magnitude of the \ptmiss\ vector in the transverse plane as well as the direction of two top-quark candidates reconstructed using a $\chi^2$ method.
The minimization in this method is done in terms of a $\chi^2$-like penalty function, $\chi^2 = (m_{\mathrm{cand}}-m_{\mathrm{true}})^2/m_{\mathrm{true}}$, where $m_{\mathrm{cand}}$ is the candidate mass and $m_{\mathrm{true}}$ is set to 80.4 GeV for \Wboson\ candidates and 173.2 GeV for top candidates.
Initially, single or pairs of $R=0.4$ jets form \Wboson\ candidates which are then combined with additional $b$-tagged jets in the event to construct top candidates. The top candidates selected by the $\chi^2$ method are only used for the momenta in \mttwo\ while the mass hypotheses for the top quarks and the invisible particles are set to 173.2 GeV and 0 GeV, respectively. Finally, a ``$\tau$-veto'' requirement is applied to reject semi-hadronically decaying $\tau$-lepton candidates
likely to have originated from a $W \rightarrow \tau \nu$ decay. Here, events that contain
a non-$b$-tagged jet within $|\eta|<2.5$ with fewer than four associated charged-particle tracks
with $\pT > 500 \mev$, and where the $\Delta \phi$ between the jet and the
$\ptmiss$ is less than $\pi / 5$, are vetoed. The systematic uncertainties for this requirement are found to be negligible~\cite{Atlas8TeV}. In SRB, additional discrimination is provided by $\mtbmax$ and $\Delta R(b,b)$. The former quantity is analogous to \mtbmin\ except that the transverse mass is computed with the $b$-tagged jet that has the largest $\Delta\phi$ with respect to the $\ptmiss$ direction.
The latter quantity provides additional discrimination against background where the two jets with highest $b$-tagging weights originate from a gluon splitting. Table~\ref{tab:SignalRegionAB} summarizes the selection criteria that are used in these two signal regions. The categories are statistically combined within SRA and SRB to maximize the sensitivity to signal.
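For concreteness, the two transverse-mass discriminants and the $\chi^2$-like penalty can be written compactly as follows (a minimal sketch with toy inputs, assuming $\Delta\phi$ values in $[0,\pi]$; none of the names below refer to the actual analysis code):
\begin{verbatim}
import math

def mt(pt_b, met, dphi):
    # transverse mass of a b-jet and the missing transverse momentum
    return math.sqrt(2.0 * pt_b * met * (1.0 - math.cos(dphi)))

def mt_b_min_max(bjets, met):
    """bjets: list of (pt, dphi_to_met) with dphi in [0, pi].
    Returns (mtbmin, mtbmax)."""
    closest = min(bjets, key=lambda b: b[1])
    farthest = max(bjets, key=lambda b: b[1])
    return mt(closest[0], met, closest[1]), mt(farthest[0], met, farthest[1])

def chi2_penalty(m_cand, m_true):
    # chi^2-like penalty used to rank W and top candidates
    return (m_cand - m_true) ** 2 / m_true

print(mt_b_min_max([(150.0, 0.3), (90.0, 2.8)], 400.0))
print(chi2_penalty(165.0, 173.2))  # W: m_true = 80.4, top: 173.2
\end{verbatim}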
\begin{table}[htb]
\caption{Selection criteria for SRA and SRB, in addition to the common preselection requirements described in the text. The signal regions are separated into topological categories based on reconstructed top-candidate masses.}
\begin{center}
\def1.4{1.4}
\begin{tabular}{c||l|c|c|c} \hline\hline
{\bf Signal Region} & & {\bf TT} & {\bf TW} & {\bf T0} \\ \hline \hline
& \mantikttwelvezero & \multicolumn{3}{c}{$>120\gev$} \\ \cline{2-5}
& \mantikttwelveone & $>120\gev$ & $[60,120]\gev$ & $<60\gev$ \\ \cline{2-5}
& \mtbmin & \multicolumn{3}{c}{$>200\gev$} \\ \cline{2-5}
& \nBJet & \multicolumn{3}{c}{$\ge2$} \\ \cline{2-5}
& $\tau$-veto & \multicolumn{3}{c}{yes} \\ \cline{2-5}
& \dphijetthreemet & \multicolumn{3}{c}{$>0.4$} \\ \cline{2-5}\hline \hline
\multirow{3}{*}{{\bf A}} & \mantikteightzero & \multicolumn{3}{c}{$>60\gev$} \\ \cline{2-5}
& \drbjetbjet & $>1$ & \multicolumn{2}{c}{-} \\ \cline{2-5}
& \mttwo & $>400$ GeV & $>400$ GeV & $>500$ GeV \\ \cline{2-5}
& \met & $>400 \gev$ & $> 500 \gev$ & $> 550 \gev$ \\ \hline \hline
\multirow{2}{*}{{\bf B}} & \mtbmax & \multicolumn{3}{c}{$>200\gev$} \\ \cline{2-5}
& \drbjetbjet & \multicolumn{3}{c}{$>1.2$} \\ \cline{2-5}
\hline\hline
\end{tabular}
\end{center}
\label{tab:SignalRegionAB}
\end{table}
\subsubsection*{Signal Regions C}
SRC is optimized for direct top-squark pair production where $\Delta m(\stop,\ninoone)\approx m_t$, a regime in which the signal topology is similar to SM \ttbar\ production. In the presence of high-momentum ISR, which can be reconstructed as multiple jets forming an ISR system, the di-top-squark system is boosted in the transverse plane. The ratio of the \met\ to the \pt\ of the ISR system in the centre-of-mass (CM) frame of the entire (ISR plus di-top-squark) system (\pTISR), defined as \rISR, is proportional to the ratio of the $\ninoone$ and $\stop$ masses~\cite{An,Macaluso}:
\begin{equation*}
\rISR \equiv \frac{\met}{\pTISR} \sim \frac{m_{\ninoone}}{m_{\stop}}.
\end{equation*}
A ``recursive jigsaw reconstruction technique'', as described
in Ref.~\cite{RJR_ISR}, is used to divide each event into an ISR hemisphere
and a sparticle hemisphere, where the latter consists of the pair of
candidate top squarks, each of which decays via a top quark and a $\ninoone$. Objects are grouped together based on their proximity in
the lab frame's transverse plane by minimizing the reconstructed
transverse masses of the ISR system and sparticle system simultaneously over all choices of object assignment. Kinematic variables are then defined based on this assignment of objects to either the ISR system or the sparticle system.
This method is equivalent to grouping the event objects according to the axis of maximum back-to-back \pt\ in the event's CM frame where the \pt\ of all accepted objects sums vectorially to zero. In events with a high-\pT\ ISR gluon, the axis of maximum back-to-back \pt, also known as the thrust axis, approximates the direction of the ISR and sparticles' back-to-back recoil.
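The grouping itself can be caricatured by a brute-force assignment in the transverse plane (a toy stand-in for the recursive jigsaw reconstruction, which operates on the full event; all names and numbers are illustrative):
\begin{verbatim}
from itertools import product
import math

def split_hemispheres(jets):
    """jets: list of transverse momenta (px, py). Choose the
    ISR/sparticle grouping that maximizes the back-to-back pT of the
    two groups, a crude proxy for the thrust-axis criterion above."""
    best, best_val = None, -1.0
    for mask in product([0, 1], repeat=len(jets)):
        if 0 < sum(mask) < len(jets):
            isr  = [j for j, m in zip(jets, mask) if m]
            spar = [j for j, m in zip(jets, mask) if not m]
            p_isr  = (sum(j[0] for j in isr),  sum(j[1] for j in isr))
            p_spar = (sum(j[0] for j in spar), sum(j[1] for j in spar))
            val = math.hypot(p_isr[0] - p_spar[0], p_isr[1] - p_spar[1])
            if val > best_val:
                best_val, best = val, (isr, spar, p_isr)
    return best

jets = [(200.0, 10.0), (-90.0, -60.0), (-80.0, 55.0), (-25.0, -8.0)]
isr, spar, p_isr = split_hemispheres(jets)
print(120.0 / math.hypot(*p_isr))  # R_ISR = MET/pT_ISR for MET = 120 GeV
\end{verbatim}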
The selection criteria for this signal region are summarized in
Table~\ref{tab:SignalRegionC}. The events are divided into five
windows (SRC1--5) defined by non-overlapping ranges of the reconstructed $\RISR$, which target
different top-squark and $\ninoone$ masses: e.g., SRC2 is optimized
for $\mstop = 300\GeV$ and $\mLSP=127\GeV$, and SRC4 is optimized for $\mstop
= 500\GeV$ and $\mLSP=327\GeV$. At least five jets must be
assigned to the sparticle hemisphere of the event (\nJetS), and at least one of those jets (\nBJetS) must be $b$-tagged. Transverse-momentum requirements on \pTISR, the highest-\pt\ $b$-jet in the sparticle hemisphere (\pTSBZero), and the fourth-highest-\pt\ jet in the sparticle hemisphere (\pTSFour) are applied. The transverse mass formed by the sparticle system and the \met, defined as \mS, is required to be $> 300\GeV$. The ISR system is also required to be separated in azimuth from the \ptmiss\ in the CM frame; this variable is defined as $\dPhiISRMET$.
Similarly to the categories defined for SRA and SRB, the individual SRCs are statistically combined to improve signal sensitivity.
\begin{table}[htpb]
\caption{Selection criteria for SRC, in addition to the common preselection requirements described in the text. The signal regions are separated into windows based on ranges of $\RISR$.}
\begin{center}
\def1.4{1.4}
\begin{tabular}{c||c|c|c|c|c} \hline\hline
{\bf Variable} & SRC1 & SRC2 & SRC3 & SRC4 & SRC5 \\ \hline \hline
\nBJet & \multicolumn{5}{c}{$\ge1$} \\ \hline
\nBJetS & \multicolumn{5}{c}{$\ge1$} \\ \hline
\nJetS & \multicolumn{5}{c}{$\ge5$} \\ \hline
\pTSBZero & \multicolumn{5}{c}{$>40\gev$} \\ \hline
\mS & \multicolumn{5}{c}{$>300\gev$} \\ \hline
\dPhiISRMET & \multicolumn{5}{c}{$>3.0$} \\ \hline
\pTISR & \multicolumn{5}{c}{$>400$ GeV} \\ \hline
\pTSFour & \multicolumn{5}{c}{$>50$ GeV} \\ \hline
\rISR & 0.30--0.40 & 0.40--0.50 & 0.50--0.60 & 0.60--0.70 & 0.70--0.80\\ \hline \hline
\end{tabular}
\end{center}
\label{tab:SignalRegionC}
\end{table}
\subsubsection*{Signal Regions D}
SRD is optimized for direct top-squark pair production where both top squarks decay via $\stop\to b \chinoonepm$ where $m_{\chinoonepm}=2\mLSP$. In this signal region, at least five jets are required, two of which must be $b$-tagged. The scalar sum of the transverse momenta of the two jets with the highest $b$-tagging weights (\ptbzero+\ptbone) as well as the second (\ptone), fourth (\ptthree), and fifth (\ptfour) jet transverse momenta are used for additional background rejection. Subregions SRD-low and SRD-high are optimized for $\mstop = 400\GeV$ with $\mLSP=50\GeV$, and $\mstop = 700\GeV$ with $\mLSP=100\GeV$, respectively.
Tighter leading and sub-leading jet $\pT$ requirements are made for SRD-high, as summarized in Table~\ref{tab:SignalRegionD}.
\begin{table}[!htb]
\caption{Selection criteria for SRD, in addition to the common preselection requirements described in the text.}
\begin{center}
\def1.4{1.4}
\begin{tabular}{c||c|c}
\hline\hline
{\bf Variable} & {\bf SRD-low} & {\bf SRD-high} \\
\hline \hline
\dphijetthreemet & \multicolumn{2}{c}{$>0.4$} \\ \hline
\nBJet & \multicolumn{2}{c}{$\geq$2} \\\hline
\drbjetbjet & \multicolumn{2}{c}{$>$ 0.8} \\ \hline
\ptbzero+\ptbone & $>300$ GeV & $>400$ GeV \\ \hline
$\tau$-veto & \multicolumn{2}{c}{yes} \\ \hline
\ptone\ & \multicolumn{2}{c}{$>150\GeV$} \\ \hline
\ptthree\ & $>100\GeV$ & $>80\GeV$ \\ \hline
\ptfour\ & \multicolumn{2}{c}{$>60\GeV$} \\ \hline
\mtbmin\ & $>250\GeV$ & $>350\GeV$ \\ \hline
\mtbmax\ & $>300\GeV$ & $>450\GeV$ \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:SignalRegionD}
\end{table}
\subsubsection*{Signal Region E}
SRE is designed for models which have highly boosted top quarks. Such signatures can arise from direct pair production of high-mass top partners, or from the gluino-mediated compressed \stop\ scenario with large $\Delta m(\gluino,\stop)$. In this regime, reclustered jets with $R=0.8$ are utilized to optimize the experimental sensitivity to these highly boosted top quarks. In this signal region, at least two jets out of the four or more required jets must be $b$-tagged. Additional discrimination is provided by the $\met$ significance: $\htsig$, where $\HT$ is the scalar sum of the $\pT$ of all reconstructed $R=0.4$ jets in an event. The selection criteria for SRE, optimized for $m_{\gluino} = 1700 \GeV, \mstop=400\GeV$, and $\mLSP=395\GeV$, are summarized in Table~\ref{tab:SignalRegionE}.
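As a simple numerical illustration of this discriminant (toy numbers only, not analysis code):
\begin{verbatim}
import math

def ht_and_significance(jet_pts, met):
    """H_T: scalar pT sum of the R=0.4 jets (GeV); the SRE discriminant
    is MET / sqrt(H_T), in units of sqrt(GeV)."""
    ht = sum(jet_pts)
    return ht, met / math.sqrt(ht)

print(ht_and_significance([350.0, 250.0, 150.0, 80.0], 600.0))
# (830.0, ~20.8): passes H_T > 800 GeV and MET/sqrt(H_T) > 18 sqrt(GeV)
\end{verbatim}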
\begin{table}[!htb]
\caption{Selection criteria for SRE in addition to the common preselection requirements described in the text.}
\begin{center}
\def1.4{1.4}
\begin{tabular}{c||c}
\hline\hline
{\bf Variable} & {\bf SRE} \\
\hline \hline
\dphijetthreemet & $>0.4$ \\ \hline
\nBJet & $\geq$2 \\ \hline
\mantikteightzero & $>120$ \gev \\ \hline
\mantikteightone & $>80$ \gev \\ \hline
\mtbmin\ & $>200$ \gev \\ \hline
\met\ & $> 550 \gev$ \\ \hline
\HT & $>800 \gev$ \\ \hline
\htsig & $> 18 \sqrt{\GeV}$ \\
\hline\hline
\end{tabular}
\end{center}
\label{tab:SignalRegionE}
\end{table}
\section{Simulated event samples and signal modelling}
\label{sec:simulation}
Simulated events are used to model the SUSY signal and to aid in the description of the background processes. Signal models were all generated with {\scshape MG5\_aMC\/@NLO} 2.2-2.4~\cite{madgraph} interfaced to \pythiaeight~\cite{pythia8} for the parton showering (PS) and hadronization and with {\scshape EvtGen} 1.2.0~\cite{evtGen} for the $b$- and $c$-hadron decays. The matrix element (ME) calculation was performed at tree level and includes the emission of up to two additional partons for all signal samples. The parton distribution function (PDF) set used for the generation of the signal samples is NNPDF2.3LO~\cite{PDFs} with the A14~\cite{UEtune} set of tuned underlying-event and shower parameters (UE tune). The ME--PS matching was performed with the CKKW-L~\cite{CKKW} prescription, with a matching scale set to one quarter of the mass of the \stop, or \gluino\ for the gluino pair production model. All signal cross sections were calculated to next-to-leading order in the strong coupling constant, adding the resummation of soft-gluon emission at next-to-leading-logarithm accuracy (NLO+NLL)~\cite{Beenakker:1997ut,Beenakker:2010nq,Beenakker:2011fu}. The nominal cross section and the uncertainty were taken from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales, as described in Ref.~\cite{Borschensky:2014cia}. For pMSSM models, the sparticle mass spectra were calculated with Softsusy 3.7.3~\cite{Allanach:2001kg,Allanach:2013kza} while the decays of each sparticle were performed by HDECAY 3.4~\cite{hdecay} and SDECAY 1.5/1.5a~\cite{sdecay}.
SM background samples were generated with different MC event generators depending on the process. The background sources of \Zjets\ and \Wjets\ events were generated with \sherpa\ 2.2.1~\cite{sherpa} using the NNPDF3.0NNLO~\cite{PDFs} PDF set and the UE tune provided by \sherpa. Top-quark pair production where at least one of the top quarks decays semileptonically and single-top production were simulated with \textsc{Powheg-Box}~2~\cite{powheg-box} and interfaced to \pythia 6~\cite{pythia6} for PS and hadronization, with the CT10~\cite{CT10} PDF set and using the {\scshape Perugia2012}~\cite{Perugia2012} set of tuned shower and underlying-event parameters. {\scshape MG5\_aMC\/@NLO} interfaced to \pythiaeight\ for PS and hadronization was used to generate the \ttbar+$V$ (where $V$ is a \Wboson\ or \Zboson\ boson) and \ttbar+$\gamma$ samples at NLO with the NNPDF3.0NLO PDF set. The underlying-event tune used is A14 with the NNPDF2.3LO PDF set. Diboson production was generated with \sherpa~2.2.1 using the CT10 PDF set. Finally, $V\gamma$ processes were generated with \sherpa~2.1 using the CT10 PDF set. Additional information can be found in Refs.~\cite{Vjets,multibosons,top,ttV,MC15val}.
The detector simulation~\cite{ATLdetSim} was performed using either \geant 4~\cite{GEANT} or a fast simulation framework where the showers in the electromagnetic and hadronic calorimeters are simulated with a parameterized description~\cite{fastSim} and the rest of the detector is simulated with \geant 4. The fast simulation was validated against full \geant 4 simulation for several selected signal samples and subsequently used for all signal samples because of the large number of signal grid points needed for interpretation. All SM background samples used the GEANT4 set-up. All MC samples were produced with a varying number of simulated minimum-bias interactions overlaid on the hard-scattering event to account for multiple $pp$ interactions in the same or nearby bunch crossing (pile-up). These events were produced using \pythiaeight\ with the A2 tune~\cite{A2} and MSTW 2008 PDF set~\cite{MSTW}. The simulated events were reweighted to match the distribution of the number of $pp$ interactions per bunch crossing in data. Corrections were applied to the simulated events to correct for differences between data and simulation for the lepton-trigger and reconstruction efficiencies, momentum scale, energy resolution, isolation, and for the efficiency of identifying jets containing $b$-hadrons, together with the probability for mis-tagging jets containing only light-flavour and charm hadrons.
\section{Systematic uncertainties}
\label{sec:Systematics}
Experimental and theoretical systematic
uncertainties in the SM predictions and signal
predictions are included in the profile likelihood fit described
in Section~\ref{sec:background}.
Statistical uncertainties dominate the total uncertainties of the background predictions in all SRs except SRB.
The dominant systematic uncertainties for SRA and SRB are shown in Table~\ref{tab:srABSysts} while the systematic uncertainties for the remaining SRs are shown in Table~\ref{tab:srCDESysts}. The uncertainties are quoted relative to the total background estimate. The main sources of detector-related systematic uncertainty in the
SM background estimates are the jet energy scale
(JES) and jet energy resolution (JER), $b$-tagging efficiency, \met\ soft term, and pile-up. The effect of the JES and JER uncertainties on the
background estimates in the signal regions can reach 17\%. The uncertainty in the $b$-tagging
efficiency does not exceed 9\% in any signal region. All jet- and lepton-related uncertainties are propagated
to the calculation of the \met, and additional uncertainties in the
energy and resolution of the soft term are also included~\cite{met}. The
uncertainty in the soft term of the \met\ is most significant in
SRC5 at 15\%. An uncertainty due to the pile-up modelling is also considered, with a contribution
up to 14\%. Lepton reconstruction and identification uncertainties are also considered but have a small impact.
The uncertainty in the combined 2015+2016 integrated luminosity is 3.2\%. It is derived, following a methodology similar to that detailed in Ref.~\cite{lumi}, from a preliminary calibration of the luminosity scale using $x$--$y$ beam-separation scans performed in August 2015 and May 2016.
Theoretical uncertainties in the modelling of the SM background are
estimated. For the \Vjets\ background processes, the modelling uncertainties
are estimated using \sherpa\ samples by varying the renormalization and
factorization scales, and the merging
and resummation
scales (each varied up and down by a
factor of two). PDF uncertainties were found to have a negligible impact. The resulting impact on the total background yields from the
\Zjets\ theoretical uncertainties is up to 3\% while the uncertainties from the \Wjets\ sample variations are less than 3\%.
For the \ttbar\ background, uncertainties are estimated from the
comparison of different matrix-element calculations, the choice of parton-showering model and
the emission of additional partons in the initial and final
states (comparing \textsc{Powheg-Box}+\pythia\ vs \herwig++ and \sherpa). More details are given in Ref.~\cite{top}.
The largest impact of the \ttbar\ theory
systematic uncertainties on the total background yields arises for SRC, where it
varies from 11\% to 71\% as the \RISR\ requirement is tightened. For the \ttbar+$W/Z$ background, the theoretical uncertainty is
estimated through variations, in both \ttbar+$W/Z$ and $\ttbar\gamma$ MC simulation, including the choice of
renormalization and factorization scales (each varied up and down by a
factor of two), the choice of PDF, as well as a comparison between \mcatnlo\ and OpenLoops+\sherpa\ generators, resulting in a maximum uncertainty of 2\% in SRA-TT.
The single-top background is dominated by the $Wt$ subprocess. Uncertainties are estimated for
the choice of parton-showering model (\pythia\ vs \herwig++) and
for the emission of additional partons in the initial- and final-state
radiation. A 30\% uncertainty is assigned to the single-top background estimate to account for the effect of
interference between single-top-quark and \ttbar\ production. This uncertainty is estimated by comparing yields in the signal and control
regions for a sample that includes resonant and non-resonant $WW$+$bb$ production with the sum of the yields of
resonant \ttbar\ and single-top+$b$ production. The
final single-top uncertainty relative to the total background estimate
is up to 12\%. The detector systematic uncertainties are also applied to the signal
samples used for interpretation. Theoretical uncertainties in the
signal cross section as described in Section~\ref{sec:simulation} are
treated separately and limits on top-squark and neutralino masses are given for the $\pm 1\sigma$
values as well as the central cross section.
Signal systematic uncertainties due to detector and acceptance effects are taken into account.
The main sources of these uncertainties are the JER, ranging from 3\% to
6\%, the JES, ranging from 2\% to 5.7\%, pile-up, ranging from 0.5\% to
5.5\%, and the $b$-tagging efficiency, ranging from 3\% to
5.5\%. Uncertainties in the acceptance due to theoretical variations
are taken into consideration. Those originate from variations of the QCD
coupling constant $\alpha_\mathrm{s}$, the variations of the renormalization
and factorization scales, the CKKW matching scale at which the parton-shower and matrix-element descriptions are separated, and
the parton-shower tune variations (each varied up and down by a
factor of two). These uncertainties range across
the SRs between 10\% and 25\% for the $\stop\to t^{(*)} \ninoone$ grid,
the mixed grid, the non-asymptotic higgsino grid, and the $\gluino\to
t\stop\to t\ninoone+$soft grid. For the wino-NLSP model, they range from 15\% to
20\%, and for the well-tempered neutralino pMSSM model they range from 10\% to 35\%.
Finally, the uncertainty in the estimated number of signal events which arises from the cross-section uncertainties for the various
processes is taken into account by calculating two additional limits considering a $\pm1\sigma$ change in cross section.
The cross-section uncertainty is $\sim$15--20\% for direct top-squark production and $\sim$15--30\% for gluino production~\cite{Beenakker:1997ut,Beenakker:2010nq,Beenakker:2011fu,Borschensky:2014cia} depending on the top-squark and gluino masses.
\begin{table}[htpb]
\caption{Dominant systematic uncertainties (greater than 1\% for at
least one SR) for SRA and SRB in percent relative to the total
background estimates. The uncertainties due to the normalization
from a control region for a given signal region and background are
indicated by $\mu_{\ttZ}$, $\mu_{\ttbar}$, $\mu_{Z}$, $\mu_{W}$,
and $\mu_{\mathrm{single~top}}$. The theory uncertainties are the
total uncertainties for a given background. Additionally, the
uncertainty due to the number of MC events in the background samples is shown as ``MC statistical''. }
\begin{center}
{\renewcommand{1.4}{1.2}
\input{systSummarySRAB.tex}
}
\end{center}
\label{tab:srABSysts}
\end{table}
\begin{table}[htpb]
\caption{Dominant systematic uncertainties (greater than 1\% for at
least one SR) for SRC, SRD, and SRE in percent relative to the
total background estimates. The uncertainty due to the
normalization from a control region for a given signal region and
background are indicated by $\mu_{\ttZ}$, $\mu_{\ttbar}$,
$\mu_{Z}$, $\mu_{W}$, and $\mu_{\mathrm{single~top}}$. The theory
uncertainties are the total uncertainties for a given
background. Additionally, the uncertainty due to the number of MC events in the background samples is shown as ``MC statistical''.}
\begin{center}
{\renewcommand{1.4}{1.2}
\input{systSummarySRCDE.tex}
}
\end{center}
\label{tab:srCDESysts}
\end{table}
\section{Trigger and data collection}
\label{sec:trigger}
The data were collected from August to November 2015 and April to October 2016 at a $pp$ centre-of-mass energy of $13\tev$ with $25\ns$ bunch spacing. A two-level trigger system~\cite{trigger} is used to select events.
The first-level trigger is implemented in hardware and uses a subset of the detector information
to reduce the event rate to at most \SI{100}{\kilo\hertz}.
This is followed by a software-based trigger that reduces the accepted event rate to \SI{1}{\kilo\hertz} for offline storage.
In all search regions, a missing transverse momentum trigger, which is fully efficient for offline calibrated $\met > 250$~\GeV\ in signal events, was used to collect data events.
Data samples enriched in the major sources of background were
collected with electron or muon triggers. The electron trigger
selects events based on the presence of clusters of energy in the electromagnetic calorimeter, with a
shower shape consistent with that expected for an electron, and a matching track
in the tracking system. The muon trigger selects events containing one or more muon candidates based on tracks
identified in the muon spectrometer and inner detector.
The electron and muon triggers used are more than $99\%$ efficient for isolated electrons and muons with \pt\ above 28~\GeV.
Triggers based on the presence of high-\pt\ jets were used to collect data samples for the estimation of
the multijet and all-hadronic \ttbar~background.
The jet \pt~thresholds ranged from $20$ to $400\,\GeV$. In order
to stay within the bandwidth limits of the trigger system, only a
fraction of the events passing these triggers was recorded to permanent
storage.
\section{Setting}
\label{setting} In this section, we present some basic notation and the field theory setting we work in. To wit, we would like to perturbatively calculate Wightman
functions of hermitian
scalar quantum fields on a globally hyberbolic smooth Lorentzian manifold ($M$,
$g$) in $\phi^p$-theory. That is, our quantum fields, operator valued distributions on a Hilbert space, satisfy the formal\footnote{Recall that we work in unrenormalised formal perturbation theory throughout this paper.} equation
$$(\mbox{\large{$\sqcup\!\!\!\!\sqcap$}}\;+\;m^2+\kappa R)\phi = -\lambda\phi^{p-1}$$
with coupling constant $\lambda$, scalar curvature $R$, mass $m$
and $\mbox{\large{$\sqcup\!\!\!\!\sqcap$}} = \nabla_a \nabla^a$,
$\nabla^a$ being the covariant derivative associated with $g$.
\indent We start by introducing the fundamental functions of the
theory. Let $G_r$ ($G_a$) be the unique \cite{Baer} retarded
(advanced) fundamental solution of the Klein-Gordon operator
$(\mbox{\large{$\sqcup\!\!\!\!\sqcap$}}\;+\;m^2+\kappa R)$, {\it i.e.}, $G_{r/a}$ are real valued bidistributions on $M$
satisfying
$$(\mbox{\large{$\sqcup\!\!\!\!\sqcap$}}\;+\;m^2+\kappa R) G_{r/a}(x,y)=\delta(x,y)$$
and supp$_x$ $G_{r/a}(x,y)\subset \overline V_{y}^{\;\pm}$,
$\overline V_{x}^{\;+}$ ($\overline V_{x}^{\;-}$) being the closed
causal forward (backward) cone with base-point $x$. $\delta(x,y)$ is the
delta-distribution associated with $g$, {\it i.e.},
$\int_{M\times M}d_gxd_gy\delta(x,y)f(x)g(y)=\int_Md_gxf(x)g(x)$ for all compactly supported (complex-valued) test functions $f,g\in{\cal D}(M)\;\colon\!\!\!\!= C^\infty_0(M,{\mathbb C})$, where
$d_gx=\sqrt{-\bf g}\,dx$, $ {\bf g}=\det g$, is the canonical
volume form associated with $g$. We note that $G_{r}(x,y) =
G_{a}(y,x)$ and define the antisymmetric bidistribution
\begin{equation}
D(x,y) = G_{r}(x,y) - G_{a}(x,y).\label{def_d}
\end{equation}
Obviously, $D$ fulfils the Klein-Gordon equation in both arguments
and vanishes for $x \perp y$, {\it i.e.}, for spacelike separated $x$
and $y$.
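For orientation, we note that on Minkowski space (and with $\kappa=0$) $D$ reduces to the well-known Pauli-Jordan commutator function which, in a common choice of conventions, reads
$$D(x,y)=-\frac{1}{(2\pi)^3}\int\frac{d^3k}{\omega_{\vec k}}\,\sin\!\left(\omega_{\vec k}(x^0-y^0)\right)e^{i\vec k\cdot(\vec x-\vec y)},\qquad \omega_{\vec k}=\sqrt{{\vec k}^2+m^2},$$
from which the antisymmetry and the equal-time normalisation $\partial_{y^0}D(x,y)|_{x^0=y^0}=\delta^3(\vec x-\vec y)$, consistent with the CCR (\ref{commutator_in}) below, can be read off directly; the overall sign depends on the signature convention.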
To calculate the Wightman functions, we need to specify
initial conditions for the field $\phi(x)$.
We achieve this by postulating that for large asymptotic times $x^0 \to \mp \infty$
the interacting field $\phi(x)$ converges to incoming or outgoing fields $\phi^{\mbox{\scriptsize{in/out}}}(x)$, where we demand that the "in"-field satisfies the free Klein-Gordon equation $(\mbox{\large{$\sqcup\!\!\!\!\sqcap$}}\;+\;m^2+\kappa R)\phi^{\mbox{\scriptsize{in}}}=0$.
For
space-times $(M,g)$ that are "large" enough to allow the fields to disperse
quickly enough to become finally non-interacting, the abovementioned asymptotic conditions are formulated
in terms of the Yang-Feldman equations \cite{YF}
\begin{equation}
\phi^{\text{loc}}(x)\;\colon\!\!\!\!= \phi(x)=
\phi^{\mbox{\scriptsize{in/out}}}(x) + (G_{r/a}\,j)(x),
\label{yf_eq}
\end{equation}
where $(G_{r/a}\,j)(x)$ stands for (the formal expression)
$$G_{r/a}(x,j)\;\colon\!\!\!\!=\int\limits_M d_gy\,G_{r/a}(x,y) j(y)$$
and the current $j$ equals $-\lambda\phi^{p-1}$ in our case.
It is necessary to specify a representation for $\phi^\text{in}(x)$ (or, equivalently, an algebraic "in"-state for the Borchers-Uhlmann algebra of the free scalar field) "by hand" as, in absence of isometries and spectral conditions on general
curved manifolds, the Klein-Gordon equation
and the CCR are not sufficient to fix it uniquely\footnote{The standard "Fourier" spectrum condition has been successfully replaced by a microlocal spectrum condition \cite{BFK} to advance quantum field theory on curved spacetime in many ways. The microlocal spectrum condition does, however, not determine a unique state, but only a class of states \cite{Ver}.} \cite{Wa}.
This can be accomplished by first choosing propagators $D^{\pm}(x,y)$, that is, complex valued bidistributions on $M$ satisfying the Klein-Gordon
equation in both arguments and Im$D^{\pm}$ = $\pm\frac{1}{2} D$,
$D^{+}(x,y) = \overline{D^{+}(y,x)} = D^{-}(y,x)$, such that $\tilde
D\;\colon\!\!\!\!= 2\text{Re}D^{+}$ is symmetric and constitutes the choice of a state. Here, the bar denotes complex conjugation. To select a pure state, we
require $\tilde D(f,f)=\frac{1}{4}\sup_{h\in{\cal
D}(M)}|D(f,h)|^2/\tilde D(h,h)$ for all $f\in{\cal D}(M)$, {\it cf.},
\cite{Wa}. Particularly, this implies that $D^{+}$ is positive,
{\it i.e.}, $D^+(\overline f,f) \ge
0$ $\forall$ $f\in{\cal D}(M)$. We
furthermore demand that $D^{+}$ is invariant under any existing isometric
diffeomorphisms of $(M,g)$ preserving the time direction, which only
constrains $\tilde D$, as $D$ automatically fulfils this
condition. For a discussion of the existence of such (and even
more general) bi-distributions, {\it cf.}, \cite{Wa}.
Before proceeding to select an incoming state, we need to introduce the notion of truncated Wightman functions. These are defined via a cluster
expansion as
\begin{equation}
\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle =
\sum_{I\in\mathcal{P}^{(n)}}\prod_{\{j_1,\cdots,j_l\}\in I}
\langle\Omega,\phi^{a_{j_1}}(x_{j_1})\cdots\phi^{a_{j_l}}(x_{j_l})\Omega\rangle^T,
\label{def_truncated_wf}
\end{equation}
where $a_j\in\{\text{in, loc, out}\}$ and $\mathcal{P}^{(n)}$ is the collection of all partitions of
$\{ 1,\cdots,n\}$ into disjoint, non-empty subsets $\{
j_1,\cdots,j_l\}$ with $j_1 < \cdots < j_l$. A quasifree state is characterised by the property to have vanishing truncated Wightman functions for all $n\neq 2$.
Let us anticipate at this point that, via the Yang-Feldman equations, both $\phi^\text{loc}$ and $\phi^\text{out}$ are formal power series in $-\lambda$ with monomials in $\phi^\text{in}$ as coefficients and can thus formally be understood as operators on the same Hilbert space, namely the incoming one, of which $\Omega$ is the vacuum state, as we will specify in the following. This motivates our choice to write all Wightman functions as VEVs w.r.t. $\Omega$.
We can now finally specify the state for the incoming field as a quasifree state with two-point function $D^+$, {\it i.e.}, \begin{eqnarray}
\langle\Omega,\phi^{\text{in}}(x)\phi^{\text{in}}(y)\Omega\rangle^T
= D^+(x,y),\;\;\;\;\;\; \label{def_truncated_two_point_in} \\
\langle\Omega,\phi^{{\rm
in}}(x_1)\cdots\phi^{\text{in}}(x_n)\Omega\rangle^T = 0 \;\;\;
\mbox{for}\; n\not=2. \label{def_truncated_n_point_in}
\end{eqnarray}
Taking
(\ref{def_truncated_two_point_in})-(\ref{def_truncated_n_point_in})
into account, it follows immediately that (\ref{def_truncated_wf}) simplifies considerably if we only consider Wightman functions of
"in"-fields, namely,
\begin{equation}
\langle\Omega,\phi^{\text{in}}(x_1)\cdots\phi^{{\rm
in}}(x_n)\Omega\rangle = \left\{
\begin{array}{l l}
\displaystyle \sum_{I\in\mathcal{P}^{\prime(n)}}\prod_{\{j_1,j_2\}\in I}
D^+(x_{j_1},x_{j_2}) & \quad\mbox{if $n$ is even,}\\\\
\displaystyle \quad 0 & \quad \mbox{if $n$ is odd.}\\ \end{array}
\right.\label{cluster_expansion_in}
\end{equation}
Here, $\mathcal{P}^{\prime(n)}$ is the collection of all partitions of
$\{ 1,\cdots,n\}$ into disjoint subsets containing two elements
$\{ j_1,j_2\}$ with $j_1 < j_2$, {\it i.e.}, $\mathcal{P}^{\prime(n)}$ is
a collection of all possible pairings made out of $\{
1,\cdots,n\}$.
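The combinatorics of ${\cal P}^{\prime(n)}$ is easily made algorithmic; the following short Python sketch (an illustration only, with $D^+$ kept symbolic) enumerates all pairings and assembles the right-hand side of (\ref{cluster_expansion_in}):
\begin{verbatim}
def pairings(indices):
    """Yield all partitions of 'indices' into ordered pairs (j1 < j2),
    i.e. the set P'^(n) of the cluster expansion."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def free_n_point(n):
    """Symbolic n-point function of the 'in'-field: a sum (list) of
    products (lists) of two-point functions ('D+', j1, j2)."""
    if n % 2:
        return []  # odd n-point functions vanish
    return [[("D+", a, b) for a, b in p]
            for p in pairings(list(range(1, n + 1)))]

print(free_n_point(4))  # the three terms D+(x1,x2)D+(x3,x4), ...
\end{verbatim}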
We now explain how (\ref{def_truncated_two_point_in}) and (\ref{def_truncated_n_point_in}) determine a particle picture in the remote past. By our assumptions, $\tilde D$ is a symmetric bidistribution that fulfils the Klein-Gordon equation in both
arguments. Consequently, the solution part ${\cal S}f$ of any test function defined as ${\cal S}f\;\colon\!\!\!\!=\tilde D\,f$ solves the Klein-Gordon equation and clearly $({\cal S}f,{\cal S}h)\;\colon\!\!\!\!= \tilde D(\overline f,h)$
constitutes a well-defined inner product on the space of complex
solutions of the Klein-Gordon equation with compactly supported
initial data. Let us indicate the completion of this space w.r.t. $(\,.\,,\,.\,)$ by ${\cal H}$
and note that the imaginary part $\frac{1}{2}D$ of
$D^+$ defines a (${\mathbb C}$-bilinear) symplectic form $\Sigma$ on the
space of complex valued solutions via $\Sigma(Df,Dg)\;\colon\!\!\!\!= D(f,g)$
that extends continuously to ${\cal H}$.
Upon comparison with the symplectic form, the inner product then
induces a complex structure $J$ via $(\psi,J\chi)\;\colon\!\!\!\!=\Sigma(\overline
\psi,\chi)$ for any two solutions. One straightforwardly obtains
$J^*=-J$, $J^2=-1$, and, thus, $J=i(K^+-K^-)$ with $K^\pm$ the
projector on the eigenspace of $J$ with eigenvalue $\pm i$. In the following, we call ${\cal H}^\pm\;\colon\!\!\!\!= K^\pm{\cal H}$ the
positive/negative frequency spaces respectively. We note that
$\overline{{\cal H}^\pm}={\cal H}^\mp$ since $J\overline\psi=-i\overline \psi$ for $\psi\in{\cal
H}^+$.
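Explicitly, one may write $K^\pm=\frac{1}{2}(1\mp iJ)$; using $J^2=-1$, it is straightforward to verify $(K^\pm)^2=K^\pm$, $K^+K^-=K^-K^+=0$, $K^++K^-=1$ and $JK^\pm=\pm i K^\pm$, so that indeed ${\cal H}={\cal H}^+\oplus{\cal H}^-$.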
Let now ${\cal F}$ be the symmetric Fock space over ${\cal H}^+$ with Fock-vacuum $\Omega$. By $a^\dagger(\psi)$, $a(\psi)$, $\psi\in{\cal H}^+$, we
denote the usual creation and annihilation operators on $\cal F$.
We use the convention $a(\overline\psi)=a^\dagger(\psi)^*$,
$\psi\in{\cal H}^+$, in order to obtain a ${\mathbb C}$-linear definition
for $a(\chi)$, $\chi \in{\cal H}^-$. Here, $^*$ stands for taking
the adjoint (neglecting domain questions). Let ${\cal S}_\pm\;\colon\!\!\!\!= K^\pm{\cal S}$ be the
operator that maps test functions to the positive/negative frequency solution part.
The incoming field can now finally be defined as the ${\mathbb C}$-linear
operator valued distribution
\begin{equation}
\label{infield.eqa}
\phi^\text{in}(f)=a({\cal S}_-f)+a^\dagger({\cal S}_+f)\, .
\end{equation}
Furthermore, by \cite[Lemma 3.2.1]{Wa}, $(\tilde Df,\tilde
Dh)=\tilde D(\bar f,h)=\Sigma(\overline{Df},\tilde Dh)=(Df,J\tilde Dh)$ for all
test functions $f,h$; since $J^*=-J$, this equals $(-JDf,\tilde Dh)$ for all $h$, from which $-JDf=\tilde Df$ follows.
Application to (\ref{infield.eqa}) gives $\phi^{\rm
in}(f)=ia(K^-Df)-ia^\dagger(K^+Df)$ which is the definition given in
\cite{Wa}.
It follows from the Fock construction,
(\ref{def_truncated_two_point_in}), and the properties of $D^+$ that
\begin{equation}
[\phi^{\text{in}}(x), \phi^{\text{in}}(y)] = iD(x,y)
\label{commutator_in}
\end{equation}
which shows that the incoming field fulfils the CCR; this closes the specification and analysis of the properties of the "in"-field.
Fixing both $\phi^{\text{in}}(x)$ and $\phi^{\text{out}}(x)$ would
over-determine the system; we therefore only employ the Yang-Feldman equations (\ref{yf_eq}) as
a definition of $\phi^{\text{out}}(x)$ without specifying any further properties of it. From (\ref{def_d}) and (\ref{yf_eq}) it follows immediately that
\begin{equation}
\phi^{\text{out}}(x) = \phi^{\text{in}}(x) + (D\,j) (x).
\label{yf_out}
\end{equation}
Furthermore, because both $\phi^{\text{in}}(x)$ and $D$ fulfil the
Klein-Gordon equation,
$\phi^{\text{out}}(x)$ does as well. However, we still need to
check if the outgoing field fulfils the CCR and determine whether it is in a quasifree state or not.
\section{Calculation of the Wightman functions}
\label{calculation_of_wf}
To evaluate Wightman functions, we will make use of generalised
Feynman graphs. In the following figures, we draw all graphs
in $\phi^3$-theory for simplicity.
For the actual calculations, the degree $p$ of the
$\phi^p$-theory is irrelevant. We begin developing
the graphical calculus by introducing the symbols for the
propagators of our theory.
\begin{figure} [htb]\center \includegraphics[width=150pt]{props.eps} \caption{Propagators} \label{fig_propagators}
\end{figure}
$D^+(x,y)$ is being represented by a line with an open arrow, $D(x,y)$ by a line with a closed arrow and $G_r(x,y)$ by a line
with a double open arrow, the arrows pointing to $x$. Furthermore, $\tilde D(x,y)$ is drawn with two arrows pointing apart as shown in figure \ref{fig_propagators}.
Next, we introduce a tree expansion for the fields according to the
Parisi-Wu method \cite{PW}. We expand the fields in powers of the negative coupling
constant $-\lambda$:
\begin{equation}
\phi^{a}(x) = \sum_{\sigma=0}^{\infty}(-\lambda)^{\sigma}
\phi^{a}_{\sigma}(x).\label{field_expansion}\end{equation}
\noindent Clearly,
\begin{equation}
\phi^{\text{in}}_{\sigma}(x) = \left\{
\begin{array}{l l}
\phi^{\text{in}}(x) & \quad\mbox{if}\; \sigma=0,\\
\displaystyle 0 & \quad \mbox{otherwise.}\\ \end{array}\right. \label{field_expansion_in}
\end{equation}
We calculate $\phi^{a}_{\sigma}(x)$ for $a=$ loc/out recursively
using (\ref{yf_eq}) and (\ref{yf_out}):
\begin{equation}
\phi^{a}_{\sigma}(x) = \left\{
\begin{array}{l l}
\phi^{\text{in}}(x) & \quad\mbox{if}\; \sigma=0 \mbox{ and}\\\\
\displaystyle \left(\Delta^a \, \sum^{\infty}_{\sigma_1,\cdots,\:\sigma_{p-1}=0,\atop \sigma_1+\cdots+\sigma_{p-1} =
\sigma-1}\prod_{i=1}^{p-1}\phi^{\text{loc}}_{\sigma_i}\right)(x) & \quad \mbox{otherwise,}\\
\end{array}\right. \label{field_expansion_loc_out}
\end{equation}
\noindent where $\Delta^{\text{loc}}\;\colon\!\!\!\!= G_r$, $\Delta^{\text{out}}\;\colon\!\!\!\!= D$.
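The recursion can be checked, and the proliferation of trees quantified, by the following small Python sketch (illustrative only), which counts the labelled trees contributing to $\phi^{a}_{\sigma}(x)$:
\begin{verbatim}
from functools import lru_cache
from itertools import product

P = 3  # degree p of the phi^p interaction; p - 1 subtrees per vertex

@lru_cache(maxsize=None)
def n_trees(sigma):
    """Number of labelled trees of order sigma: one internal vertex
    plus p - 1 subtrees whose orders sum to sigma - 1."""
    if sigma == 0:
        return 1  # the bare 'in'-field leaf
    total = 0
    for orders in product(range(sigma), repeat=P - 1):
        if sum(orders) == sigma - 1:
            term = 1
            for s in orders:
                term *= n_trees(s)
            total += term
    return total

print([n_trees(s) for s in range(6)])
# [1, 1, 2, 5, 14, 42]: the Catalan numbers for p = 3
\end{verbatim}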
Following the recursion in (\ref{field_expansion_loc_out}), we
define tree graphs corresponding to the summands in
(\ref{field_expansion_loc_out}) by an induction over $\sigma$. To
fix the initial step, we draw $\phi^{a}_{0}(x)$,
{\it i.e.}, an
"in"-field, as a leaf attached to a root corresponding to an
external $x$-vertex. A tree corresponding to
$\phi^{a}_{\sigma}(x)$ is drawn by taking $p-1$ trees
corresponding to perturbative local fields of order $\sigma_1,
\cdots, \sigma_{p-1}$ s.t. $\sum \sigma_l = \sigma - 1$,
assembling their roots $y_1, \cdots, y_{p-1}$ to form a single
internal $y$-vertex and adding a trunk, {\it i.e.}, a new line from $y$
to $x$ corresponding to $\Delta^a(x,y)$. Therefore, a tree
corresponding to $\phi^{a}_{\sigma}(x)$ has a root corresponding to
an external $x$-vertex, a trunk corresponding to $\Delta^a$,
several branches corresponding to $G_r$s, several leaves
corresponding to "in"-fields and $\sigma$ branching points
corresponding to internal vertices with a total number of $p-1$
branches and leaves emerging from them. We note that the causal
flow (the direction of the $G_r$-arrows) always points to the
root.
We label the different tree components inductively,
accounting for the fact that the indices $\sigma_i$ in
(\ref{field_expansion_loc_out}) are distinguishable. The initial
step of the labelling induction is fixed by defining the label of a trunk to
be the index of the external vertex variable attached to it. To
assign a label to a branch/leaf, one takes the label of the branch
or trunk the considered branch/leaf emerges from as a basis.
One then extends it by a dot followed by a number reflecting the
position of the considered branch/leaf (field) at the actual
branching point (in the corresponding current), {\it i.e.}, the index $i$
of $\phi^{\text{loc}}_{\sigma_i}$ in
(\ref{field_expansion_loc_out}). Some examples of trees are displayed in figure \ref{fig_trees} for the convenience of the reader.
\begin{figure} [htb]\center \includegraphics[width=250pt]{trees.eps} \caption{The only possible tree for $\phi^{a}_{0}(x)$, the only possible tree for $\phi^{\text{loc}}_{1}(x)$, and one possible tree for $\phi^{\text{out}}_{3}(x)$ }\label{fig_trees} \end{figure}
We shall proceed to consider the Wightman functions of our theory. To compute them, it is
sufficient to consider only their connected parts, {\it i.e.}, the
truncated Wightman functions.
In order to calculate the truncated Wightman functions, we
will first expand them in powers of the coupling constant:
\begin{eqnarray}
\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle^T
=
\sum_{\sigma=0}^{\infty}(-\lambda)^{\sigma}\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle^T_{\sigma}.\label{perturbative_expansion_wf}
\end{eqnarray}
Inserting (\ref{field_expansion}) into the left side of
(\ref{perturbative_expansion_wf}) and comparing terms of equal
order in $-\lambda$, we get:
\begin{eqnarray}
\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle^T_{\sigma}
= \sum^{\infty}_{\sigma_1,\cdots,\:\sigma_n=0,\; \sum \sigma_l =
\sigma}\langle\Omega,\phi^{a_1}_{\sigma_1}(x_1)\cdots\phi^{a_n}_{\sigma_n}(x_n)\Omega\rangle^T.
\label{wf_+_tree_expansion}
\end{eqnarray}
We know from the tree expansion that, for a fixed
$\sigma$, every $\phi^{a}_{\sigma}(x)$ can be expressed in terms
of incoming fields convoluted with fundamental functions. Therefore, it follows that
$\langle\Omega,\phi^{a_1}_{\sigma_1}(x_1)\cdots\phi^{a_n}_{\sigma_n}(x_n)\Omega\rangle^T$
can be expressed in terms of Wightman functions of "in"-fields integrated with additional propagators.
Finally, combining (\ref{field_expansion_in}),
(\ref{field_expansion_loc_out}), (\ref{cluster_expansion_in}) and
(\ref{wf_+_tree_expansion}), we can express truncated Wightman
functions of arbitrary fields merely in terms of fundamental
functions.
Let us now introduce Feynman graphs corresponding to perturbative
$n$-point Wightman functions. A Feynman graph of order $\sigma$
consists of $n$ external vertices corresponding to the arguments
$x_1, \cdots, x_n$ and type-indices $a_1, \cdots, a_n$ of a
Wightman function and $\sigma$ internal vertices corresponding to
arbitrary variables in $M$. The vertices are connected to the
remainder of the graph by $q$ lines, with $q = 1$ ($q = p$) for
external (internal) vertices. A line is called an external line if
it is connected to an external vertex, an internal line otherwise.
While Wightman functions correspond to Feynman graphs, one can
show that truncated Wightman functions correspond to connected
Feynman graphs. We call a Feynman graph with arrows and labels on
all lines an extended Feynman graph.
On the level of graphs, the resolving of
(\ref{wf_+_tree_expansion}) via (\ref{cluster_expansion_in})
corresponds to gluing the leaves of $n$ trees with a total order
of $\sigma$ together to yield an extended Feynman graph of order
$\sigma$ with $n$ external vertices. As we calculate truncated
Wightman functions, only gluing possibilities that yield connected
Feynman graphs are allowed.
To finish describing the gluing process, we need to analyse the gluing lines in more detail. A line originating from gluing a pair of two leaves together, {\it i.e.},
a $D^{\pm}$-line, is defined to be labelled by combining the
leaves' labels to a pair. Starting from the first slot, we compare
the two labels slot by slot until we find a pair of numbers that
does not match. The arrow on the $D^{\pm}$-line then points to the
leaf corresponding to the lower of these numbers.
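Purely as an illustration of this comparison rule (the function name and the representation of labels as tuples of integers are ours and not part of the formalism), the orientation of a glued $D^{\pm}$-line can be phrased algorithmically as follows:
\begin{verbatim}
def orient_glued_line(label_a, label_b):
    # Arrow rule for a line glued from two leaves: compare the two
    # label tuples slot by slot; the arrow points to the leaf whose
    # label carries the lower number in the first non-matching slot.
    for a, b in zip(label_a, label_b):
        if a != b:
            return 'first leaf' if a < b else 'second leaf'
    raise ValueError('labels coincide -- the leaves are not distinct')
\end{verbatim}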
We have now described how to expand fields to trees that are
subsequently assembled to extended Feynman graphs (see figure
\ref{fig_gluing_fw} for two examples),
\begin{figure} [htb]\center \includegraphics[width=350pt]{glue1.eps} \caption{Two possibilities to glue the same set of trees to graphs
corresponding to $\langle\Omega,\phi^{a}_0(x_1)\phi^{{\rm
loc}}_2(x_2)\phi^{\text{out}}_1(x_3)\Omega\rangle^T$
}\label{fig_gluing_fw}
\end{figure}
\noindent but this also works the other way round. To calculate
$\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle^T_{\sigma}$,
we draw all topologically possible connected Feynman graphs of
order $\sigma$ with $n$ fixed external vertices $x_1, \cdots, x_n$
of type $a_1, \cdots, a_n$. We then consider all possibilities to
partition each Feynman graph into $n$ connected and loop-less
subgraphs, {\it i.e.}, trees and several remaining lines. Each such
subgraph contains exactly one root $x_i$ of type $a_i$. A
partition is fixed by marking a certain number of lines such that the
Feynman graph with these lines removed consists of $n$
disconnected tree graphs without leaves, arrows and labels and
each internal vertex is part of such a tree (see figure \ref{fig_gluing_bw} for two examples, where the marked lines are displayed as dotted lines). In this process, the external lines
connected to external vertices of type $a=$ in always have to be marked.
Next, we assign labels to all external lines according to the
indices of their external vertex variables. Using these labels as
an initial step and starting from the roots, we "walk up" the trees
on the unmarked lines and inductively assign labels to all lines
emerging from the vertices we pass. As a result, the marked lines
have two labels, one from each vertex they connect. The inductive
dependence of the labels on the preceding labels is fixed by the
labelling algorithm defined above in the discussion of the tree expansion. For each
of the possible choices of labels we draw arrows on all lines. The
type of the arrows on the unmarked lines is chosen according
\begin{figure} [htb]\center
\includegraphics[width=350pt]{glue4.eps} \caption{Two
possibilities to extend a Feynman graph corresponding to
$\langle\Omega,\phi^{\text{loc}}(x_1)\phi^{{\rm
loc}}(x_2)\phi^{\text{out}}(x_3)\Omega\rangle^T_3$
}\label{fig_gluing_bw}
\end{figure}to $a_1, \cdots, a_n$ and the tree expansion. Furthermore,
each marked line becomes a $D^{\pm}$-line.
The direction of the arrow on such a line is determined by
comparison of the two labels of the line in the manner described
in the preceding paragraph.
To obtain the analytical expression corresponding to an extended
Feynman graph, we assign variables to all internal vertices, write
down the propagators corresponding to all lines and then integrate
over all internal vertices. Once we have the analytical
expressions, summing over all topologically possible Feynman
graphs and all possibilities to extend them yields
$\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle^T_{\sigma}$
(see figure \ref{fig_gluing_bw} for two examples).
For calculations, it is often convenient to drop
the labels and replace them by combinatorial factors, see \cite{Hack} for a detailed discussion of these issues.
\section{Properties of the Wightman
functions: Invariance, Hermiticity, spectral property, positivity, and the asymptotic condition}
\label{properties_of_wf}
Having at hand the means to compute the Wightman functions of our theory, we now discuss their fundamental properties.
\paragraph*{Invariance under orthochronous isometric
diffeomorphisms} As we have shown in section
\ref{calculation_of_wf}, the Wightman functions of our
theory can be expressed in terms of integrals of products
of fundamental functions. Since we know from section \ref{setting}
that all fundamental functions are invariant under isometric
diffeomorphisms preserving the time direction and the integrals
contain the canonical volume form which is invariant under all
isometric diffeomorphisms by definition, invariance of the
Wightman functions follows immediately.
\paragraph*{Hermiticity} The Wightman functions fulfil Hermiticity if
\begin{equation}
\overline{\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^{a_n}(x_n)\Omega\rangle}
=\langle\Omega,\phi^{a_n}(x_n)\cdots\phi^{a_1}(x_1)\Omega\rangle.\label{wf_commu}
\end{equation}
Since we can express the Wightman functions in terms of
Wightman functions of "in"-fields convoluted with the real-valued
fundamental functions $G_{r/a}$ and $D$ and since the order of
fields in the latter Wightman function corresponds to the order of
fields in the original Wightman function, it is sufficient to
prove (\ref{wf_commu}) for $a_1,\cdots,a_n=$ in.
We know from (\ref{cluster_expansion_in}) how to express Wightman
functions of incoming fields in terms of $D^+$. The complex
conjugation of (\ref{cluster_expansion_in}) exchanges all
$D^+(x_{j_1},x_{j_2})$ with $D^+(x_{j_2},x_{j_1})$ which obviously
corresponds to reversing the total order of fields in the Wightman
function of "in"-fields, thus (\ref{wf_commu}) holds for
$a_1,\cdots,a_n=$ in.
\paragraph*{Spectral condition} On general
spacetimes, there is no well-defined Fourier transformation, and
therefore standard spectral conditions cannot be formulated. However, on
stationary spacetimes, the time translations form a one-parameter group
of isometries, and we assume that Fourier transformations w.r.t.
the associated time parameter are possible. A well-defined standard spectral condition
can thus be formulated in that case: one
restricts $\tilde D$ by requiring that the Fourier transforms of the $n$-point Wightman functions of "in"-fields in the time arguments defined by the global
timelike Killing field
vanish if the sums $\Sigma_j\;\colon\!\!\!\!=\sum_{l=j+1}^{n}E_l$ are not all positive. Here,
$E_l$ is the variable conjugate to the $l$-th time argument in
the VEV. We note that, by our assumptions, all fundamental functions are invariant
under time translations, hence they depend only on
the difference of their time arguments. Therefore, the unitary time
translation operator that is obtained from the time translation
invariance of Wightman functions of interacting and outgoing fields via the GNS construction
coincides with the time translation operator for the "in"-fields,
which has positive spectrum by construction.
\paragraph*{Perturbative positivity} If we expand Wightman functions
perturbatively up to a given order $N$, we can add further terms
of order ${\cal O}(\lambda^{N+1})$ to obtain a VEV of fields
$\phi^{a,N}(x)=\sum_{\sigma=0}^N(-\lambda)^\sigma\phi^{a}_\sigma(x)$
that act as operator-valued distributions on the incoming Fock space.
The VEVs obviously fulfil positivity. Thus, the Wightman
functions expanded in $-\lambda$ up to an arbitrary but fixed order $N$ fulfil positivity
up to a ${\cal O}(\lambda^{N+1})$-term.
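Spelled out in the usual Wightman sense, this means that, {\it e.g.}, for a fixed field type $a$ and any terminating sequence of test functions $f_k\in{\cal D}(M^k)$,
\begin{equation*}
\sum_{k,m}\,\int\limits_{M^{k+m}}\prod_{i=1}^{k}d_gx_i\prod_{j=1}^{m}d_gy_j\;\overline{f_k(x_k,\ldots,x_1)}\,f_m(y_1,\ldots,y_m)\,\langle\Omega,\phi^{a,N}(x_1)\cdots\phi^{a,N}(x_k)\,\phi^{a,N}(y_1)\cdots\phi^{a,N}(y_m)\Omega\rangle\geq 0,
\end{equation*}
which holds exactly for the truncated fields $\phi^{a,N}$ and hence up to a ${\cal O}(\lambda^{N+1})$-term for the Wightman functions themselves.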
\paragraph*{Asymptotic condition} In this work, we have used asymptotic conditions
given by the Yang-Feldman equations (\ref{yf_eq}). It is a
natural question to ask to what extent these asymptotic conditions lead
to the asymptotic conditions in the Heisenberg picture
$\phi(x)\to\phi^\text{in/out}(x)$ as $x^0\to\mp\infty$ in a given
foliation $M\simeq{\mathbb R}\times {\cal C}$ of $M$, where ${\cal C}$ is a
Cauchy surface. In a weak sense, the Heisenberg asymptotic
condition is
\begin{equation}
\label{Heisi}
\lim_{x_j^0\to\pm\infty}\left[\langle\Omega,\phi^{a_1}(x_1)\cdots\phi(x_j)\cdots\phi^{a_n}(x_n)\Omega\rangle^T-\langle\Omega,\phi^{a_1}(x_1)\cdots\phi^\text{out/in}(x_j)\cdots\phi^{a_n}(x_n)\Omega\rangle^T\right]=0.
\end{equation}
Let us consider the case $x_j^0\to-\infty$ first: in the extended
Feynman graph expansion of the left hand side, all connected
graphs where the tree with root $x_j$ is of order $\sigma_j=0$ cancel and
all other graphs survive. Likewise, if $x_j^0\to +\infty$, we obtain
in the expansion into connected extended Feynman graphs each
graph where the order of the $x_j$ tree is larger than zero twice --
once with the trunk of the $j$-th tree being a retarded propagator
for the local field and once, with opposite sign, with
the trunk evaluated with $D$. Using $D=G_{r}-G_{a}$, we see
that, in the case $x_j^0\to +\infty$, one obtains all extended graphs where the tree with root
$x_j$ is of order larger than zero and its trunk is evaluated with
an advanced propagator, {\it cf.}, figure \ref{AsympFig}. We note that in the
limit $x_j^0\to\pm\infty$, the integral over the vertex variable $u$
is restricted to $u$ in the causal future/past of $x_j$ and hence
the domain of integration becomes smaller and smaller. An actual
proof of the vanishing of the left hand side of (\ref{Heisi})
requires technical assumptions on the manifold $M$ and on the
propagators $D^\pm$ -- and hence the state -- and we shall not
go into the details here. It seems, however, that these conditions
are not much stronger than what is needed to assure that the
integrals over the vertices in the Feynman graphs exist.
\begin{figure} [htb]\center
\includegraphics[width=300pt]{asymp.eps} \caption{Heuristic check
of the asymptotic conditions on the level of
graphs}\label{AsympFig}
\end{figure}
\section{Properties of the Wightman
functions: Locality } \label{locality}
To prove locality of the truncated Wightman functions, we have to
show that
\begin{equation}
\langle\Omega,\phi^{a_1}(x_1)\; ...\;
[\phi^{a_i}(x_i),\phi^{a_{i+1}}(x_{i+1})]\cdots
\phi^{a_n}(x_n)\Omega\rangle^T \label{wf_locality}
\end{equation}
vanishes for $x_i \perp x_{i+1}$ for all $i\in\{1,\dots,n-1\}$, where $a_j=$ loc for
$j\in\{i,i+1\}$ and $a_j=$ in/loc/out otherwise. To prove this, it is sufficient to show that the interacting field itself is local. Since we are given the
interacting field as a formal power series in $-\lambda$, locality of $\phi=\phi^\text{loc}$ has to be proven to each order in $-\lambda$ separately. To order
$\sigma$ we have
\begin{equation}
\left[\phi(x),\phi(y)\right]_\sigma = \sum\limits_{\sigma_1+\sigma_2=\sigma} \left[\phi(x)_{\sigma_1},\phi(y)_{\sigma_2}\right].
\end{equation}
For a fixed order, it is hence possible to replace the fields $\phi(x)_{\sigma_1}$, $\phi(x)_{\sigma_2}$ by their tree expansions and
compute $\left[\phi(x),\phi(y)\right]_\sigma$ as sums of commutators of single trees that are in effect commutators of products of free
fields integrated with retarded propagators. Employing the Leibniz rule for the commutator results in gluing together two leaves, one from
each tree, with a $D$-propagator. One can then hope to obtain an expression which
vanishes for spacelike-separated $x$ and $y$. The procedure for $\sigma=1$ in $\phi^3$-theory is depicted in figure \ref{commu1}, where the last step follows from
$D=G_r-G_a$ and "telescope cancellations".
\begin{figure} [htb]\center
\includegraphics[width=300pt]{locality1.eps}\caption{The commutator of two interacting fields to first order}\label{commu1}
\end{figure}
It turns out that one is left with a
"$G_r$-chain" and a "$G_a$-chain", {\it i.e.}, a product of retarded/advanced propagators such that the right slot of one propagator
corresponds to the left slot of another, namely,
\begin{equation}
\left[\phi(x),\phi(y)\right]_1 = 2i\int\limits_M\,d_gx_1\,\left\{G_r(x,x_1)G_r(x_1,y)- G_a(x,x_1)G_a(x_1,y)\right\} \phi^{\text{in}}(x_1).
\end{equation}
Owing to the causal support properties of the retarded and advanced propagators, we see that either $x\preceq x_1 \preceq y$ or $x\succeq
x_1 \succeq y$, where $x \preceq y$ ($x \succeq y$) denotes that $x\in \overline V^+_y$ ($x\in \overline V^-_y$). As a result, both the $G_r$-chain
and the $G_a$-chain vanish for $x \perp y$. To generalise this observation to arbitrary order, we prove the equivalence of the tree expansion to the expansion into retarded products. We then use the GLZ relation \cite{GLZ} to prove locality. See also \cite{Hack} for a proof up to second loop order that works on the level of Feynman graphs and proves locality graph by graph.
The retarded product $R_{1,n}(B_0(x_0)\,|\, B_1(x_1),\dots,B_n(x_n))$ of $n+1$ operators $B_0(x_0)$, $\dots$, $B_n(x_n)$ that are mutually local, {\it i.e.}, $[B_i(x_i),B_j(x_j)]$ vanishes for all $i$, $j$ if $x_i\perp x_j$, is defined as
$$R_{1,0}\left(B_0(x_0)\right)\;\colon\!\!\!\!= B_0(x_0),$$
$$R_{1,n}\left(B_0(x_0)\,|\, B_1(x_1),\dots,B_n(x_n)\right)\;\colon\!\!\!\!=$$
\begin{equation}
\label{defret}
\;\colon\!\!\!\!= (-1)^n\sum\limits_{\pi \in S_n}\left[B_{\pi(n)}(x_{\pi(n)}), \left[B_{\pi(n-1)}(x_{\pi(n-1)}), \cdots \left[B_{\pi(1)}(x_{\pi(1)}),B_0(x_0)\right]\cdots\right]\right] {\bf 1}_{x_0\preceq x_{\pi(1)} \preceq \cdots \preceq x_{\pi(n)}},
\end{equation}
where $S_n$ is the group of permutations of $n$ elements and $ {\bf 1}_A$ denotes the characteristic function of the set $A$. Note that renormalisation is necessary for a proper definition at coinciding points. With the above definition, $R_{1,n}(B_0(x_0)\,|\, B_1(x_1),\dots,B_n(x_n))$ is both symmetric in the last $n$ slots and manifestly vanishing if any of the spacetime positions of the last $n$ operators is not in the causal past of $x_0$. From \eqref{defret} it follows straightforwardly that retarded products of a given order may be expressed in terms of retarded products of one order less:
$$R_{1,n}\left(B_0(x_0)\,|\, B_1(x_1),\dots,B_n(x_n)\right)=$$
\begin{equation}
\label{retrecursion1}
= -\sum\limits_{j=1}^n\left[B_j(x_j), R_{1,n-1}(B_0(x_0)\,|\, B_1(x_1),\dots,\cancel{ B_j(x_j)},\dots,B_n(x_n))\right] {\bf 1}_{x_j\succeq x_i\;\forall i\in\{0,\dots,n\}}.
\end{equation}
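For orientation, the lowest non-trivial instance of \eqref{defret} reads
\begin{equation*}
R_{1,1}\left(B_0(x_0)\,|\, B_1(x_1)\right)=-\left[B_1(x_1),B_0(x_0)\right]{\bf 1}_{x_0\preceq x_{1}},
\end{equation*}
which indeed vanishes whenever $x_1$ is not in the causal past of $x_0$; the recursion \eqref{retrecursion1} for $n=1$ reproduces exactly this expression from $R_{1,0}\left(B_0(x_0)\right)=B_0(x_0)$.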
Employing the above recursion and the Jacobi identity for the commutator, one can prove the GLZ relations \cite{GLZ},
$$R_{1,n}\left(A\,|\, C, B_1(x_1),\dots,B_n(x_n)\right)-R_{1,n}\left(C\,|\, A, B_1(x_1),\dots,B_n(x_n)\right)=$$
\begin{equation}
\label{glz}
=\sum\limits_{I\subset N\;\colon\!=\{1,\dots,n\}}\left[R_{1,|I|}\left(A\,\left|\,\prod\limits_{i\in I}B_i(x_i)\right.\right),R_{1,|N\setminus I|}\left(C\,\left|\prod\limits_{j\in N\setminus I}B_j(x_j)\right.\right)\right].
\end{equation}
The most important feature of retarded products in our context is the fact that the interacting field can be expanded in terms of retarded products, where a retarded product of order $\sigma$ encodes the complete $(-\lambda)^\sigma$-contribution of the interacting field, namely,
\begin{equation}
\label{retexpansion}
\phi(x)=\sum\limits_{\sigma=0}^\infty\frac{(i\lambda)^\sigma}{\sigma!}\int\limits_{M^\sigma}\prod\limits_{i=1}^\sigma d_gx_i \; R_{1,\sigma}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_\sigma)),
\end{equation}
with ${\mathcal L}_\text{int}(x)\;\colon\!\!\!\!= \phi^\text{in}(x)^p/p$ in our case. A combinatorial proof of (\ref{retexpansion}) that makes the relation with the tree expansion explicit will be given at the end of this section.
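As a quick consistency check of \eqref{retexpansion} -- a special case of the combinatorial proof announced above -- consider the term $\sigma=1$. Using $\left[{\mathcal L}_\text{int}(x_1),\phi^\text{in}(x)\right]{\bf 1}_{x\preceq x_1}=iD(x_1,x)\,\phi^\text{in}(x_1)^{p-1}\,{\bf 1}_{x_1\succeq x}=-iG_r(x,x_1)\,\phi^\text{in}(x_1)^{p-1}$, one finds
\begin{equation*}
R_{1,1}\left(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1)\right)=-\left[{\mathcal L}_\text{int}(x_1),\phi^\text{in}(x)\right]{\bf 1}_{x\preceq x_1}=iG_r(x,x_1)\,\phi^\text{in}(x_1)^{p-1},
\end{equation*}
so that the $\sigma=1$ term of \eqref{retexpansion} equals $i\lambda\int_M d_gx_1\, iG_r(x,x_1)\phi^\text{in}(x_1)^{p-1}=(-\lambda)\,G_r\big(x,(\phi^\text{in})^{p-1}\big)$, precisely the first-order contribution to the interacting field obtained from the Yang-Feldman equation.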
To prove locality in terms of retarded products, we start from
$$\left[\phi(x),\phi(y)\right]_\sigma = \int\limits_{M^\sigma}\prod\limits_{i=1}^\sigma d_gx_i\sum\limits_{\sigma_1+\sigma_2=\sigma}\frac{(-i)^{\sigma}}{\sigma_1!\sigma_2!}\;\times$$$$\times\;
\left[ R_{1,\sigma_1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma_1})), R_{1,\sigma_2}(\phi^\text{in}(y)\,|\,{\mathcal L}_\text{int}(x_{\sigma_1+1}),\dots,{\mathcal L}_\text{int}(x_{\sigma}))\right]$$
which, due to the GLZ relations, simplifies to
$$\left[\phi(x),\phi(y)\right]_\sigma = \frac{(-i)^{\sigma}}{\sigma!}
\left\{R_{1,\sigma+1}(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes\sigma})- R_{1,\sigma+1}(\phi^\text{in}(y)\,|\,\phi^\text{in}(x), {\mathcal L}_\text{int}^{\otimes\sigma})\right\},$$ where $R_{1,n+1}\left(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes n}\right)$ is shorthand for $$\int\limits_{M^n}\prod\limits_{i=1}^{n} d_gx_i R_{1,n+1}(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_n)).$$ The result is, as we could have expected from our finger exercise in figure \ref{commu1}, a retarded piece, presumably corresponding to a sum of $G_r$-chains, minus an advanced piece, presumably corresponding to a sum of $G_a$-chains. In fact, we will prove this correspondence in appendix \ref{retapp}, since it will be necessary for the proof of the "out"-field CCR. If $x\perp y$, both the retarded and the advanced piece vanish due to the causal support properties of the retarded products.
Let us now proceed to see why the perturbative expansion of the interacting field in terms of retarded products is equivalent to the perturbative expansion of the interacting field due to the Yang-Feldman equation in terms of tree graphs. Since in the case of the expansion in trees the $(-\lambda)^\sigma$-contribution to the interacting field consists of the sum of all possible tree graphs with $\sigma$ branching points, we have to show that the $(-\lambda)^\sigma$ term of the expansion in retarded products corresponds to exactly such a sum. To achieve this, let us recall how one can inductively obtain all possible trees of order $\sigma$: one starts with the "in"-field and then replaces an "in"-field/leaf by a first-order tree $\sigma$ times and in all possible ways, taking care to discard trees occurring multiply. Starting from the recursion \eqref{retrecursion1}, we shall proceed to understand how it reproduces the aforementioned combinatorial procedure.
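Purely as a bookkeeping illustration of this leaf-replacement procedure (our own sketch, not part of the formalism: it records only tree topologies for $p=3$, ignoring labels and propagators), the following Python fragment enumerates the distinct trees of low order by repeated leaf replacement and duplicate removal:
\begin{verbatim}
LEAF = 'in'   # a leaf stands for an incoming field

def canon(tree):
    # canonical form: branches at a vertex are unordered, so sort them
    if tree == LEAF:
        return LEAF
    return tuple(sorted((canon(c) for c in tree), key=repr))

def replace_one_leaf(tree, p=3):
    # all ways to replace a single leaf by a first-order tree,
    # i.e. a branching point carrying p-1 fresh leaves
    if tree == LEAF:
        yield tuple([LEAF] * (p - 1))
        return
    for i, child in enumerate(tree):
        for sub in replace_one_leaf(child, p):
            yield tree[:i] + (sub,) + tree[i + 1:]

trees = {LEAF}
for sigma in range(6):
    print(sigma, len(trees))  # prints counts 1, 1, 1, 2, 3, 6
    trees = {canon(t) for s in trees for t in replace_one_leaf(s)}
\end{verbatim}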
To this effect, let us first notice that \eqref{retrecursion1} simplifies considerably in the current context, since all operators $B_1,\dots,B_n$ are of the same type and the labelling of the $x_i$ does not matter since they are integration variables. Hence, the symmetrisation can be replaced by a factor and the $(-\lambda)^\sigma$-contribution to the interacting field in terms of retarded products reads
\begin{align}\phi(x)_{\sigma}&=\frac{(-i)^{\sigma}}{\sigma!}\int\limits_{M^\sigma}\prod\limits_{i=1}^{\sigma} d_gx_i \; R_{1,\sigma}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_\sigma))\notag\\
&=-\frac{(-i)^{\sigma}}{(\sigma-1)!}\int\limits_{M^\sigma}\prod\limits_{i=1}^{\sigma} d_gx_i \left[{\mathcal L}_\text{int}(x_{\sigma}),R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1}))\right]{\bf 1}_{x_\sigma\succeq x_i\atop\forall i\in\{0,\dots,\sigma-1\}}\notag.
\end{align}
Since all operators appearing in $R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1}))$ are monomials in the "in"-field, we know due to the Leibniz rule for the commutator that this retarded product is given by sums of products of "in"-fields. As ${\mathcal L}_\text{int}(x_{\sigma})$ is also a monomial in the incoming field and the commutator of two "in"-fields is a c-number, we can compute $[{\mathcal L}_\text{int}(x_{\sigma}),$$R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1}))]$ by means of
$$\left[B^n,\prod\limits_{i=1}^m A_i\right]=\sum\limits_{j=1}^{m}\prod\limits_{i=1}^{j-1} A_i \;[B,A_j]\frac{dB^{n}}{dB} \prod\limits_{i=j+1}^m A_i,$$ which holds under the assumption that $[A_i,B]$ commutes with all other occurring operators for all $i$ and where we have formally written $dB^{n}/dB$ as shorthand for $nB^{n-1}$. This formula implies that $[{\mathcal L}_\text{int}(x_{\sigma}),$$R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1}))] {\bf 1}_{x_\sigma\succeq x_i\,\forall i\in\{0,\dots,\sigma-1\}}$ can be computed by summing over all possibilities to replace one incoming field $\phi^\text{in}(x_j)$ in $R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1}))$ by
\begin{align}
\left[\phi^\text{in}(x_\sigma),\phi^\text{in}(x_j)\right]\frac{d{\mathcal L}_\text{int}(x_{\sigma})}{d\phi^\text{in}(x_\sigma)} {\bf 1}_{x_\sigma\succeq x_j}&=iD(x_\sigma, x_j)\phi^\text{in}(x_\sigma)^{p-1} {\bf 1}_{x_\sigma\succeq x_j}\notag\\&=-iG_r(x_j,x_\sigma)\phi^\text{in}(x_\sigma)^{p-1}.\notag
\end{align}
To account for the Leibniz rule of this procedure, we denote it by means of a formal derivative operator ({\it cf.}, \cite{Duetsch}, where the framework is such that it is a well-defined functional derivative), {\it viz.},
$${\mathcal D}(x)\phi^\text{in}(y)\;\colon\!\!\!\!= G_r(\,\cdot\,,x)\frac{d{\mathcal L}_\text{int}(x)}{d\phi^\text{in}(x)}\frac{d}{d\phi^\text{in}(\,\cdot\,)}\phi^\text{in}(y)\;\colon\!\!\!\!= G_r(y, x)\frac{d{\mathcal L}_\text{int}(x)}{d\phi^\text{in}(x)}.$$
Employing this notation, we can write the interacting field to order $\sigma$ as
\begin{align}\phi(x)_{\sigma}&=-\frac{(-i)^{\sigma}}{(\sigma-1)!}\int\limits_{M^\sigma}\prod\limits_{i=1}^{\sigma} d_gx_i \; \left[{\mathcal L}_\text{int}(x_{\sigma}),R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1}))\right]{\bf 1}_{x_\sigma\succeq x_i\atop\forall i\in\{0,\dots,\sigma-1\}}\notag\\&=\frac{(-i)^{\sigma-1}}{(\sigma-1)!}\int\limits_{M^\sigma}\prod\limits_{i=1}^{\sigma} d_gx_i \; {\mathcal D}(x_{\sigma})R_{1,\sigma-1}(\phi^\text{in}(x)\,|\,{\mathcal L}_\text{int}(x_1),\dots,{\mathcal L}_\text{int}(x_{\sigma-1})){\bf 1}_{x_\sigma\succeq x_i\atop\forall i\in\{0,\dots,\sigma-1\}}\notag,
\end{align} where it is understood that the operator expression replacing the "in"-field on which ${\mathcal D}(x_{\sigma})$ currently acts has to be inserted exactly at the position where that incoming field stood. We can now iterate the recursion \eqref{retrecursion1} to eventually obtain
\begin{equation}\label{pretrees}\phi(x)_{\sigma}=\int\limits_{M^\sigma}\prod\limits_{i=1}^{\sigma} d_gx_i \; {\mathcal D}(x_\sigma)\cdots{\mathcal D}(x_1)\phi^\text{in}(x) {\bf 1}_{x_\sigma\succeq \cdots \succeq x_1 \succeq x}.\end{equation}
This puts us in the position to prove formal equivalence of this expression to the one obtained from the Yang-Feldman equation in terms of tree graphs. Obviously, both expressions are equal at zeroth order. Let us assume that they are equal to order $\sigma$ and show how equivalence to order $\sigma+1$ follows inductively. By our assumptions, the integrand of \eqref{pretrees} can be rewritten in a form devoid of explicit restrictions on the integration domain for $x_1, \dots, x_\sigma$, namely, the sum of the integrands of all trees of order $\sigma$, {\it viz.}, $$\phi(x)_{\sigma}=\colon\;\int\limits_{M^\sigma}\prod\limits_{i=1}^{\sigma} d_gx_i \;I_\sigma(x_1, \dots, x_\sigma).$$ With this notation, we have $$\phi(x)_{\sigma+1}=\int\limits_{M^{\sigma+1}}\prod\limits_{i=1}^{\sigma+1} d_gx_i {\mathcal D}(x_{\sigma+1})I_\sigma(x_1, \dots, x_\sigma) {\bf 1}_{x_{\sigma+1}\succeq x_i\atop\forall i\in\{0,\dots,\sigma\}}.$$ ${\mathcal D}(x_{\sigma+1})$ acts on $I_\sigma$ by replacing an "in"-field by the integrand of a first-order tree graph in all tree integrands $I_\sigma$ consists of and in all possible ways. As a result, we obtain integrands of trees of order $\sigma+1$, still with no mutual restrictions on the integration variables $x_1, \dots, x_\sigma$, but with the constraint that $x_{\sigma+1}$ is causally earlier than all of these spacetime points. It is clear that ${\mathcal D}(x_{\sigma+1})I_\sigma(x_1, \dots, x_\sigma){\bf 1}_{x_{\sigma+1}\succeq x_i \forall i\in\{0,\dots,\sigma\}}$ contains all possible tree integrands of order $\sigma+1$; the question is whether one can rewrite this expression in such a way that it contains every possible tree integrand of order $\sigma+1$ with unit weight and without any restrictions on the integration domain.
To answer this question, let us divide the tree integrands we straightforwardly obtain by the action of ${\mathcal D}(x_{\sigma+1})$ on $I_\sigma(x_1, \dots, x_\sigma)$ into equivalence classes, where two integrands are taken to be equivalent if they can be matched by permuting vertex variables. Let us choose an arbitrary but fixed equivalence class $E$ and assume that it has $m$ members. Their number implies that there are $m$ different possibilities to build the tree graph corresponding to $E$ out of a tree graph of order $\sigma$ by replacing a leaf with a first-order tree. Hence, the tree graph corresponding to $E$ must have $m$ "virgin" vertices, where we call a vertex virgin if it has the maximal number of $p-1$ "in"-fields attached to it; the $m$ members of $E$ then correspond to the $m$ possible ways to remove one of these virgin vertices to obtain a tree of lower order. Now, let us permute the vertex variables of all members of $E$ such that they are all equal but have different restrictions on the integration domain of $x_1, \dots, x_{\sigma+1}$ and let ${\mathcal V}\;\colon\!\!\!\!= \{x_{i_1},\dots,x_{i_m}\}$ be the resulting integration variables of the virgin vertices. Since the virgin vertices are connected to the remainder of the tree by retarded propagators, we can discard the integration constraints on the $m$ integrands under consideration which are automatically fulfilled due to the causal support properties of $G_r$. The remaining restrictions on the integration domain can only be of the form ${\mathcal V}\ni x_j\succeq x_i \;\forall x_i\in {\mathcal V}$. In fact, due to the Leibniz rule for ${\mathcal D}(x_{\sigma+1})$, all $m$ possible integration constraints of this kind must appear. Hence, summing up the $m$ integrands with matching variables but different restrictions on the integration domain, we obtain the same integrand once, but now without any integration constraints. Since the procedure described above is valid for all equivalence classes of tree integrands, $\int_{M^{\sigma+1}}\prod_{i=1}^{\sigma+1} d_gx_i {\mathcal D}(x_{\sigma+1})\cdots{\mathcal D}(x_1)\phi^\text{in}(x) {\bf 1}_{x_{\sigma+1}\succeq \cdots \succeq x_1 \succeq x}$ corresponds to the sum of all possible trees of order $\sigma+1$ weighted with unit multiplicity, and the formal equivalence of the expansion of the interacting field by means of the retarded products on the one hand and the Yang-Feldman equation via Parisi-Wu tree graphs on the other hand is established.
\section{Properties of the outgoing fields}\label{prop_out}
We now examine the properties of the outgoing fields.
\paragraph*{Klein-Gordon Equation}
We have already seen in section \ref{setting} that the
"out"-fields of the theory fulfil the Klein-Gordon equation as is manifest from their definition via the Yang-Feldman equation \eqref{yf_out}.
\paragraph*{Canonical commutation relations}
To prove the CCR, we have to examine the commutator of two outgoing fields to each order in $-\lambda$ separately. Due to the Yang-Feldman equations, we have
$$\left[\phi^\text{out}(x),\phi^\text{out}(y)\right]_\sigma=\left[\phi(x)-(-\lambda)G_a(x,
\phi^{p-1}),\phi(y)-(-\lambda)G_a(y,
\phi^{p-1})\right]_\sigma$$
\begin{equation}\label{retccr}=\underbrace{\left[\phi(x),\phi(y)\right]_\sigma}_{I_\sigma}-\underbrace{\left[\phi(x),G_a(y,
\phi^{p-1})\right]_{\sigma-1}}_{II_\sigma}-\underbrace{\left[G_a(x,
\phi^{p-1}),\phi(y)\right]_{\sigma-1}}_{III_\sigma}+\underbrace{\left[G_a(x,
\phi^{p-1}),G_a(y,
\phi^{p-1})\right]_{\sigma-2}}_{IV_\sigma}.\end{equation} In zeroth order, only the first term of \eqref{retccr} contributes and the result is $$\left[\phi^\text{out}(x),\phi^\text{out}(y)\right]_0=\left[\phi^\text{in}(x),\phi^\text{in}(y)\right]=iD(x,y).$$ To prove CCR for the "out"-field, we thus need to show that $\left[\phi^\text{out}(x),\phi^\text{out}(y)\right]_\sigma$ vanishes identically for $\sigma > 0$. In the case $\sigma=1$, only the first three terms of \eqref{retccr} contribute and we have
\begin{align}
I_1 &= -i\int\limits_M\,d_gx_1\,\left\{R_{1,2}\left(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}(x_1)\right)- R_{1,2}\left(\phi^\text{in}(y)\,|\,\phi^\text{in}(x), {\mathcal L}_\text{int}(x_1)\right)\right\}\notag\\
&=(p-1)i\int\limits_M\,d_gx_1\,\left\{G_r(x,x_1)G_r(x_1,y)- G_a(x,x_1)G_a(x_1,y)\right\} \phi^{\text{in}}(x_1)^{p-2}\notag\\
II_1 &= \left[\phi^\text{in}(x), G_a(y, (\phi^\text{in})^{p-1})\right]= (p-1)i\int\limits_M\,d_gx_1\,D(x,x_1)G_r(x_1,y)\phi^\text{in}(x_1)^{p-2}\notag\\
III_1 &= \left[G_a(x, (\phi^\text{in})^{p-1}), \phi^\text{in}(y)\right]= (p-1)i\int\limits_M\,d_gx_1\,G_a(x,x_1)D(x_1,y)\phi^\text{in}(x_1)^{p-2}\notag.
\end{align}Due to "telescope cancellations" by means of $D=G_r-G_a$, $I_1-II_1-III_1=0$.
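Indeed, substituting $D=G_r-G_a$ into $II_1$ and $III_1$, the integrand of $I_1-II_1-III_1$ is proportional to
\begin{equation*}
G_r(x,x_1)G_r(x_1,y)-G_a(x,x_1)G_a(x_1,y)-\left(G_r-G_a\right)(x,x_1)\,G_r(x_1,y)-G_a(x,x_1)\left(G_r-G_a\right)(x_1,y)=0.
\end{equation*}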
For $\sigma>1$, the structure of cancellations is in principle the same as above, the only difference being the possible appearance of propagators "lying in between" $G_{r/a}(x,x_1)$ and $G_{r/a}(x_1,y)$. Such terms survive in $I_\sigma-II_\sigma-III_\sigma$ and have to be cancelled by $IV_\sigma$. To treat such terms, we need the following identities, proven in appendix B:
$$\frac{(-i)^n}{n!}R_{1,n+1}\left(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes n}\right)$$
\begin{equation}\label{retpull1}=\int\limits_M d_gx_1\;G_r(x,x_1)\sum\limits_{\sum \sigma_i = n-1}\sum\limits_{j=0}^{p-2}\phi(x_1)^j_{\sigma_1}\,\frac{(-i)^{\sigma_2}}{\sigma_2!}R_{1,\sigma_2+1}\left(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes \sigma_2}\right)\,\phi(x_1)^{p-2-j}_{\sigma_3}\end{equation}
\begin{equation}\label{retpull2}=\int\limits_M d_gx_1\sum\limits_{\sum \sigma_i = n-1}\sum\limits_{j=0}^{p-2}\phi(x_1)^j_{\sigma_1}\,\frac{(-i)^{\sigma_2}}{\sigma_2!}R_{1,\sigma_2+1}\left(\phi^\text{in}(x)\,|\,\phi^\text{in}(x_1), {\mathcal L}_\text{int}^{\otimes \sigma_2}\right)\,\phi(x_1)^{p-2-j}_{\sigma_3}\;G_r(x_1,y),\end{equation}
where $\phi(x)^j_\sigma$ stands in shorthand for $\sum\limits_{\sum\sigma_i=\sigma}\prod\limits_{i=1}^j\phi(x)_{\sigma_i}$.
Iterating these identities shows that $\frac{(-i)^n}{n!}R_{1,n+1}\left(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes n}\right)$ can be expressed entirely in terms of $G_r$-chains connecting $x$ with $y$.
Employing the above listed identities, we have for $\sigma > 1$
\begin{align}
I_\sigma &= \frac{(-i)^\sigma}{\sigma!}\left\{R_{1,\sigma+1}\left(\phi^\text{in}(x)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes \sigma}\right)-R_{1,\sigma+1}\left(\phi^\text{in}(y)\,|\,\phi^\text{in}(x), {\mathcal L}_\text{int}^{\otimes \sigma}\right)\right\}\notag\\
&=\int\limits_{M}d_gx_1 \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{i=0}^{p-2}\phi(x_1)^i_{\sigma_1}\frac{(-i)^{\sigma_2}}{\sigma_2!}
\left\{G_r(x,x_1)R_{1,\sigma_2+1}\left(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(y), {\mathcal L}_\text{int}^{\otimes \sigma_2}\right)-\right.\notag\\
&\qquad\left.-\;G_a(x,x_1)R_{1,\sigma_2+1}\left(\phi^\text{in}(y)\,|\,\phi^\text{in}(x_1), {\mathcal L}_\text{int}^{\otimes \sigma_2}\right)\right\}\phi(x_1)^{p-2-i}_{\sigma_3}\notag\\
&=(p-1)i\int\limits_{M}d_gx_1 \left\{G_r(x,x_1)G_r(x_1,y)-G_a(x,x_1)G_a(x_1,y)\right\}\phi(x_1)^{p-2}_{\sigma-1}\;+\notag\\
&\qquad+\int\limits_{M^2}d_gx_1 d_gx_2\sum\limits_{\sum\sigma_i=\sigma-2}\sum\limits_{i,j=0}^{p-2}\phi(x_1)^i_{\sigma_1}\phi(x_2)^j_{\sigma_2}\frac{(-i)^{\sigma_3}}{\sigma_3!}\times\notag\\&\qquad\qquad\times\left\{G_r(x,x_1)R_{1,\sigma_3+1}\left(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(x_2), {\mathcal L}_\text{int}^{\otimes \sigma_3}\right)G_r(x_2,y)\;-\right.\notag\\&\qquad\qquad
\left.-\;G_a(x,x_1)R_{1,\sigma_3+1}\left(\phi^\text{in}(x_2)\,|\,\phi^\text{in}(x_1), {\mathcal L}_\text{int}^{\otimes \sigma_3}\right)G_a(x_2,y)\right\}\phi(x_2)^{p-2-j}_{\sigma_4}\phi(x_1)^{p-2-i}_{\sigma_5},\notag
\end{align}where the first summand of the last line corresponds to the case $\sigma_2=0$ in the second line.
For $II_\sigma$ and $III_\sigma$, we again need \eqref{retpull1}, \eqref{retpull2} and, furthermore, the general commutator identity
\begin{align}\left[\prod\limits_{i=1}^nA_i,\prod\limits_{j=1}^mB_j\right]&=\sum\limits_{k=1}^{n}\sum\limits_{l=1}^{m}\prod\limits_{i=1}^{k-1}A_i\prod\limits_{j=1}^{l-1}B_j[A_k,B_l]\prod\limits_{j=l+1}^{m}B_j\prod\limits_{i=k+1}^{n}A_i\notag\\
&=\sum\limits_{k=1}^{n}\sum\limits_{l=1}^{m}\prod\limits_{j=1}^{l-1}B_j\prod\limits_{i=1}^{k-1}A_i[A_k,B_l]\prod\limits_{i=k+1}^{n}A_i\prod\limits_{j=l+1}^{m}B_j
\label{commprod},
\end{align} to obtain
\begin{align}
II_\sigma &= \int\limits_M d_gx_1 \left[\phi(x),\phi(x_1)^{p-1}\right]_{\sigma-1}G_r(x_1,y)\notag\\
&=\int\limits_M d_gx_2 \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{j=0}^{p-2}\phi(x_2)^j_{\sigma_1}\left[\phi(x),\phi(x_2)\right]_{\sigma_2}\phi(x_2)^{p-2-j}_{\sigma_3}G_r(x_2,y)\notag\\
&=\int\limits_M d_gx_2 \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{j=0}^{p-2}\phi(x_2)^j_{\sigma_1}\frac{(-i)^{\sigma_2}}{\sigma_2!}\left\{R_{1,\sigma_2+1}(\phi^\text{in}(x)\,|\,\phi^\text{in}(x_2),{\mathcal L}_\text{int}^{\otimes \sigma_2})\;-\right.\notag\\
&\qquad \left.- \;R_{1,\sigma_2+1}(\phi^\text{in}(x_2)\,|\,\phi^\text{in}(x),{\mathcal L}_\text{int}^{\otimes \sigma_2})\right\}\phi(x_2)^{p-2-j}_{\sigma_3}G_r(x_2,y)\notag\\
&=(p-1)i\int\limits_M d_gx_1 \left\{G_r(x,x_1)-G_a(x,x_1)\right\}G_r(x_1,y)\phi(x_1)^{p-2}_{\sigma-1}\;+\notag\\
&\qquad+\;\int\limits_{M^2} d_gx_1 d_gx_2 \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{i,j=0}^{p-2}\phi(x_1)^i_{\sigma_1}\phi(x_2)^j_{\sigma_2}\frac{(-i)^{\sigma_3}}{\sigma_3!}\;\times\notag\\
&\qquad\qquad\times\;\left\{G_r(x,x_1)R_{1,\sigma_3+1}(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(x_2),{\mathcal L}_\text{int}^{\otimes \sigma_3})\;-\right.\notag\\
&\qquad\qquad \left.- \;G_a(x,x_1)R_{1,\sigma_3+1}(\phi^\text{in}(x_2)\,|\,\phi^\text{in}(x_1),{\mathcal L}_\text{int}^{\otimes \sigma_3})\right\}\phi(x_2)^{p-2-j}_{\sigma_4}\phi(x_1)^{p-2-i}_{\sigma_5}G_r(x_2,y),\notag\end{align}
\begin{align}
III_\sigma &= \int\limits_M d_gx_1 G_a(x,x_1)\left[\phi(x_1)^{p-1},\phi(y)\right]_{\sigma-1}\notag\\
&=\int\limits_M d_gx_1 G_a(x,x_1) \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{j=0}^{p-2}\phi(x_1)^j_{\sigma_1}\left[\phi(x_1),\phi(y)\right]_{\sigma_2}\phi(x_1)^{p-2-j}_{\sigma_3}\notag\\
&=\int\limits_M d_gx_1\,G_a(x,x_1) \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{j=0}^{p-2}\phi(x_1)^j_{\sigma_1}\frac{(-i)^{\sigma_2}}{\sigma_2!}\left\{R_{1,\sigma_2+1}(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(y),{\mathcal L}_\text{int}^{\otimes \sigma_2})\;-\right.\notag\\
&\qquad \left.- \;R_{1,\sigma_2+1}(\phi^\text{in}(y)\,|\,\phi^\text{in}(x_1),{\mathcal L}_\text{int}^{\otimes \sigma_2})\right\}\phi(x_1)^{p-2-j}_{\sigma_3}\notag\\
&=(p-1)i\int\limits_M d_gx_1\, G_a(x,x_1)\left\{G_r(x_1,y)-G_a(x_1,y)\right\}\phi(x_1)^{p-2}_{\sigma-1}\;+\notag\\
&\qquad+\;\int\limits_{M^2} d_gx_1 d_gx_2\,G_a(x,x_1) \sum\limits_{\sum\sigma_i=\sigma-1}\sum\limits_{i,j=0}^{p-2}\phi(x_1)^i_{\sigma_1}\phi(x_2)^j_{\sigma_2}\frac{(-i)^{\sigma_3}}{\sigma_3!}\;\times\notag\\
&\qquad\qquad\times\;\left\{R_{1,\sigma_3+1}(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(x_2),{\mathcal L}_\text{int}^{\otimes \sigma_3})G_r(x_2,y)\;-\right.\notag\\
&\qquad\qquad \left.- \;R_{1,\sigma_3+1}(\phi^\text{in}(x_2)\,|\,\phi^\text{in}(x_1),{\mathcal L}_\text{int}^{\otimes \sigma_3})G_a(x_2,y)\right\}\phi(x_2)^{p-2-j}_{\sigma_4}\phi(x_1)^{p-2-i}_{\sigma_5}.\notag
\end{align}
Finally, using \eqref{commprod}, the computation of $IV_\sigma$ yields
\begin{align}
IV_\sigma &= \int\limits_{M^2}d_gx_1d_gx_2\;G_a(x,x_1)\left[\phi(x_1)^{p-1}, \phi(x_2)^{p-1}\right]_{\sigma-2}G_r(x_2,y)\notag\\
&= \int\limits_{M^2}d_gx_1d_gx_2\;G_a(x,x_1)\sum\limits_{\sum\sigma_i=\sigma-2}\sum\limits_{i,j=0}^{p-2}\phi(x_1)^i_{\sigma_1}\phi(x_2)^j_{\sigma_2}\;\times\notag\\
&\qquad\times\;\left[\phi(x_1), \phi(x_2)\right]_{\sigma_3}\phi(x_2)^{p-2-j}_{\sigma_4}\phi(x_1)^{p-2-i}_{\sigma_5}G_r(x_2,y)\notag\\
&=\int\limits_{M^2}d_gx_1d_gx_2\;G_a(x,x_1)\sum\limits_{\sum\sigma_i=\sigma-2}\sum\limits_{i,j=0}^{p-2}\phi(x_1)^i_{\sigma_1}\phi(x_2)^j_{\sigma_2}\;\times\notag\\
&\qquad\times\;\frac{(-i)^{\sigma_3}}{\sigma_3!}\left\{R_{1,\sigma_3+1}(\phi^\text{in}(x_1)\,|\,\phi^\text{in}(x_2),{\mathcal L}_\text{int}^{\otimes\sigma_3})\;-\right.\notag\\
&\qquad\qquad\left.-\;R_{1,\sigma_3+1}(\phi^\text{in}(x_2)\,|\,\phi^\text{in}(x_1),{\mathcal L}_\text{int}^{\otimes\sigma_3})\right\}\phi(x_2)^{p-2-j}_{\sigma_4}\phi(x_1)^{p-2-i}_{\sigma_5}G_r(x_2,y).\notag
\end{align}
We have summarised the partial results graphically in figure \ref{partialccr}, where the encircled double arrows depict $G_r$-chains. It is straightforward to check the cancellation $I_\sigma-II_\sigma-III_\sigma+IV_\sigma=0$. This closes the proof of the "out"-field CCR.
\begin{figure}
\includegraphics[width=350pt]{ccr.eps}\caption{The partial results for $\sigma>1$}\label{partialccr}\end{figure}
\paragraph*{Non-quasifree representation}
One of the main claims of this article is that, on non-stationary spacetimes, the outgoing field is in general in the GNS-representation of a state which is {\em not} quasifree. The reason for this is the lack of both spectral conditions and energy-momentum conservation, which would assure the vanishing of higher order truncated Wightman functions of the "out"-field in the stationary case.
To see this explicitly, let us consider as an example of a non-stationary spacetime ${\mathbb R}^d$
with metric $g(\epsilon)\;\colon\!\!\!\!=(1+\epsilon h)\eta$, where $\eta$ is the
Minkowski metric and $h$ a $C^\infty$-function on $M$ of compact support.
One then has $d_gx=\sqrt{\vert\mathsf g\vert}\,dx=(1+\epsilon h)^{d/2} dx$ and such a spacetime is non-stationary since the metric depends on "time".
By means of the methods described in section \ref{calculation_of_wf} (see \cite{Hack} for a detailed calculation), one computes the truncated 4-point function of the "out"-field
to first order in $-\lambda$ in $\phi^4$ theory on this spacetime as
\begin{equation}
\label{eqa_nontriv} \langle\Omega,\phi^\text{out}(x_1)\phi^\text{out}(x_2)\phi^\text{out}(x_3)\phi^\text{out}(x_4)\Omega\rangle^T_1=12 {\rm Im}\int\limits_{{\mathbb R}^d} dy\,
\prod_{l=1}^4D_{g(\epsilon)}^-(x_l,y)\, (1+\epsilon h(y))^{\frac{d}{2}},
\end{equation}
where the $D^-$ bear the subscript $g(\epsilon)$ to emphasise that they depend on the metric via
the Klein-Gordon equation. To calculate (\ref{eqa_nontriv}) up to first order in $\epsilon$, one
needs to expand $(1+\epsilon h)^{d/2}$ and $D^-_g$ in $\epsilon$, where $D^-_g(x,y)=\langle\Omega,\phi^{\rm in}_g(y)\phi^{\rm in}_g(x)\Omega\rangle$ can be expanded
in $\epsilon$ by expanding each of the two fields $\phi^{\rm in}_g$ separately via
the free Klein-Gordon equation. Denoting the expansion of $\phi^{\rm in}_g$ up to first order in
$\epsilon$ as
$$
\phi^{\rm in}_g=\phi^{\rm in}_0+\epsilon\phi^{\rm in}_1+\mathcal{O}(\epsilon^2),$$
we obtain the expansion of $D^-_g$ to first order in $\epsilon$ as
\begin{equation*}
\begin{split}
D^-_g(x,y)
&=\langle\Omega,\phi^{\rm in}_0(y)\phi^{\rm in}_0(x)\Omega\rangle+\epsilon\left(\langle\Omega,\phi^{\rm in}_1(y)\phi^{\rm in}_0(x)\Omega\rangle+\langle\Omega,\phi^{\rm in}_0(y)\phi^{\rm in}_1(x)\Omega\rangle\right)+\mathcal{O}(\epsilon^2)\\
&=\colon\; D^-_{0,0}(x,y)+\epsilon\left(D^-_{0,1}(x,y)+D^-_{1,0}(x,y)\right)+\mathcal{O}(\epsilon^2).
\end{split}
\end{equation*}
To compute the single terms in the expansion of $\phi^{\rm in}_g$, one first
evaluates the wave operator as
\begin{equation*}
\begin{split}
\Box & =-{\vert\mathsf g\vert}^{-\frac{1}{2}}\partial_{b}g^{bc}{\vert\mathsf g\vert}^{\frac{1}{2}}\partial_{c}\\
&=-\eta^{bc}\partial_{b}\partial_{c}+\epsilon\left[h\eta^{bc}\partial_{b}\partial_{c}-\left(\frac{d}{2}-1\right)\eta^{bc}\left(\partial_{b}h\right)\partial_{c}\right]\\
&=\colon\;\Box_0+\epsilon\Box_1.
\end{split}
\end{equation*}
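Since this $\epsilon$-expansion is mechanical but error-prone, it can be cross-checked symbolically; the following fragment (our own sketch: only the two coordinates $(t,x)$ are kept explicit, with $\eta=\mathrm{diag}(1,-1)$ on them, $d$ a symbol, and the remaining coordinates contributing terms of identical structure) reproduces $\Box_0$ and $\Box_1$ as given above:
\begin{verbatim}
import sympy as sp

t, x, eps, d = sp.symbols('t x epsilon d')
h = sp.Function('h')(t, x)
f = sp.Function('f')(t, x)

# g = (1+eps*h)*eta, hence g^{bc}|g|^{1/2} = (1+eps*h)^(d/2-1) eta^{bc}
A = (1 + eps*h)**(d/2 - 1)
inner = sp.diff(A*sp.diff(f, t), t) - sp.diff(A*sp.diff(f, x), x)
box = -(1 + eps*h)**(-d/2) * inner

box0 = sp.expand(box.subs(eps, 0))                 # zeroth order in eps
box1 = sp.expand(sp.diff(box, eps).subs(eps, 0))   # first order in eps
print(box0)
print(sp.collect(box1, h))
\end{verbatim}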
Inserting this into the Klein-Gordon equation (with minimal coupling) for $\phi^{\rm in}_g$, one obtains to zeroth order in $\epsilon$
$$\left(\Box_0+m^2\right)\phi^{\rm in}_0=0,$$
{\it i.e.}, the Klein-Gordon equation on flat spacetime, and to first order in $\epsilon$
\begin{equation}
\label{eqa_in_field_first_order}
\begin{split}
&\quad\Box_1 \phi^{\rm in}_0 + \left(\Box_0+m^2\right)\phi^{\rm in}_1=0\\
\Rightarrow&\quad\left(\Box_0+m^2\right)\phi^{\rm in}_1=\left[\left(\frac{d}{2}-1\right)\eta^{bc}\left(\partial_{b}h\right)\partial_{c}+h\Box_0\right]\phi^{\rm in}_0\\
\Rightarrow&\quad \phi^{\rm in}_1=G_{r,0}\left[\left(\frac{d}{2}-1\right)\eta^{bc}\left(\partial_{b}h\right)\partial_{c}-hm^2\right]\phi^{\rm in}_0,
\end{split}
\end{equation}
with $G_{r,0}$ denoting the retarded Green's function on Minkowski spacetime.
The well-known expression for $D^-$ on flat spacetime is
\begin{equation}
D^-_{0,0}(x,y)=\int\limits_{{\mathbb R}^{d-1}} \frac{d\vec{k}}{2\omega_{\vec{k}}}\,e^{-i\vec{k}\vec{x}+i\omega_{\vec{k}}x_0}\,e^{i\vec{k}\vec{y}-i\omega_{\vec{k}}y_0},
\end{equation}
from which it is explicitly seen that the Fourier transform of $D^-_{0,0}(x,y)$ w.r.t. $x$,
respectively $x-y$, has support in the negative mass shell and its Fourier transform w.r.t.
$y$ has support in the positive mass shell, {\it i.e.}, $D^-_{0,0}(x,y)$ fulfils the spectral condition.
From (\ref{eqa_in_field_first_order}) it follows that
$D^-_{1,0}(x,y)$ ($D^-_{0,1}(x,y)$) can be obtained
by application of the operator
$G_{r,0}\left[\left(\frac{d}{2}-1\right)\eta^{ab}\left(\partial_{a}h\right)\partial_{b}-hm^2\right]$
to the first (second) argument of $D^-_{0,0}(x,y)$. Fourier transforming $D^-_{0,1}(x,y)$ on
Minkowski spacetime w.r.t. $y$, one thus gets the Fourier transform of $G_{r,0}$ multiplied by
the Fourier transform of (the derivative of) $h$ convoluted with the Fourier transform of
(the derivative of) $D^-_{0,0}$. Since the latter convolution smears up the mass shell spectrum
of (the derivative of) $D^-_{0,0}$ and $G_{r,0}$ is known to have off-shell spectrum, the Fourier
transform of $D^-_{0,1}(x,y)$ w.r.t. $y$ clearly has off-shell support. In contrast,
the Fourier transform of $D^-_{1,0}(x,y)$ w.r.t. $y$ remains on-shell like the
Fourier transform of $D^-_{0,0}(x,y)$.
With these considerations in mind, we can continue examining (\ref{eqa_nontriv}).
An expansion up to first order in $\epsilon$ yields a zeroth order term and
three different first order terms, {\it viz.},
\begin{equation}
\label{eqa_nontriv_expanded}
\begin{split}
& {\rm Im}\int\limits_{{\mathbb R}^d}dy\,
\prod_{l=1}^4D_{g(\epsilon)}^-(x_l,y)\, (1+\epsilon h(y))^{\frac{d}{2}} \\
=\;& {\rm Im}\left[\int\limits_{{\mathbb R}^d}dy\,
\prod_{l=1}^4D_{0,0}^-(x_l,y) + \epsilon\left(\frac{d}{2}\int\limits_{{\mathbb R}^d}
\prod_{l=1}^4D_{0,0}^-(x_l,y)h(y)dy\right.\right.\\
& \qquad\left.\left.+ \sum_j\int\limits_{{\mathbb R}^d}dy\,
D_{0,1}^-(x_j,y)\prod_{l\neq j}D_{0,0}^-(x_l,y) + \sum_j\int\limits_{{\mathbb R}^d}dy\,
D_{1,0}^-(x_j,y)\prod_{l\neq j}D_{0,0}^-(x_l,y)\right)\right]+\mathcal{O}(\epsilon^2).\\
\end{split}
\end{equation}
\noindent Upon Fourier transforming w.r.t. $y$, it is easily seen that the zeroth order term vanishes due to energy-momentum conservation and the spectral support properties of $D^-_{0,0}$. Similarly, owing to the spectral support properties of $D_{1,0}^-$, the last first order term in \eqref{eqa_nontriv_expanded} also vanishes due to energy-momentum conservation. However, the remaining two first order terms are in general non-vanishing: regarding $$\int\limits_{{\mathbb R}^d}dy\,
\prod_{l=1}^4D_{0,0}^-(x_l,y)h(y),$$ we can see that it does not vanish in general, despite the spectral properties of $D_{0,0}^-$, as energy-momentum conservation is violated due to $h$ not having "$\delta$-support" in momentum space. In contrast to this, $$\int\limits_{{\mathbb R}^d}dy\,
D_{1,0}^-(x_j,y)\prod_{l\neq j}D_{0,0}^-(x_l,y)$$ is non-vanishing because of the spectral properties of $D_{1,0}^-$, even if energy-momentum conservation holds. It might be possible to fine-tune the situation in such a way that the two abovementioned non-trivial contributions to $\langle\Omega,\phi^\text{out}(x_1)\phi^\text{out}(x_2)\phi^\text{out}(x_3)\phi^\text{out}(x_4)\Omega\rangle^T_1$ due to alleviated spectral properties on the one hand and abolished energy-momentum conservation on the other hand cancel each other, but in general this is certainly not the case.
To close this section, we remark that from the discussion of the above example it follows that the metric $g$, and hence the curvature of
spacetime, must show characteristic changes on a time scale $t\lessapprox1/(4m)$ to allow the violation
of energy-momentum conservation (and presumably also the deviation from the spectral condition) to be large enough to ensure a non-quasifree "out"-state, {\it e.g.}, $t\lessapprox7\times10^{-6}$s for a pion with
$m\approx140$MeV. This time scale is well below the period of nucleosynthesis at about 1 to $10^2$ seconds after the Big Bang, so these findings do not contradict well-established physical facts. It is, however, significantly longer than the time scale of, {\it e.g.}, electroweak symmetry breaking, which happened at $\approx 10^{-12}$s, so that it seems highly reasonable that full curved-spacetime QFT calculations are required to model the physics of the very early universe.
\section{Unitary transformations between CCR
representations}\label{unitrafo} In the previous section we have seen how non-quasifree states (for free fields) naturally appear in scattering theory on non-stationary curved spacetimes. If one is interested in a scattering picture in terms of particles, one thus needs a way to calculate the particle content of a non-quasifree state, {\it i.e.}, a unitary transformation relating the GNS-representations of a non-quasifree and a quasifree state, but of course the ability to calculate such a transformation is also interesting in its own right. Hence, given a quasifree representation
of the CCR with Fock-space ${\cal F}$, in this section, we
calculate the particle content of non-quasifree representations
that are unitarily equivalent to the given quasifree one. To arrive at such a result, we shall use the language of Wightman
functionals and $\star$-product calculus, {\it cf.}, appendix \ref{star.app} for
a short introduction and notational conventions.
To start, let $\varphi(x)$ be the operator-valued distribution fulfilling
the Klein-Gordon equation and the CCR that has been obtained via
the GNS-construction from some Wightman functional (not necessarily corresponding to a quasifree state)
$\underline{W}'$ with GNS-Hilbert space ${\cal H}_\text{GNS}$ and GNS-vacuum state $\Psi_0\in{\cal H}_\text{GNS}$. Furthermore, let $\xi(x) $ be the
operator-valued distribution obtained from a quasifree Wightman
functional $\underline{W}$ via the Fock construction given in
section \ref{setting} with $\Omega\in{\cal F}$ the Fock vacuum.
A relevant application is of course $\varphi=\phi^{\rm
out}$ and $\xi=\phi^\text{in}$. We assume
unitary equivalence of both CCR representations in the following
technical sense: let $U: {\cal H}_\text{GNS}\to{\cal F}$ be a unitary
transformation such that $U\varphi(f)U^*=\xi(f)$ $\forall f\in {\mathcal D}(M)$ and let $\Psi\;\colon\!\!\!\!= U\Psi_0\in{\cal F}$ such that $\Psi$ is in
a dense core of some closure of the Fock creation and annihilation
operators $a(\psi)$ and $a^\dagger(\chi)$, $\psi\in{\cal
H}^+,\chi\in{\cal H}^-$. It is furthermore assumed that, for any
vector $\Upsilon$ in this core, the vectors $a^\sharp(\psi_1)\cdots a^\sharp(\psi_n)\Upsilon $
are jointly continuous in the $\psi_l$
w.r.t. the $({\cal H}^\pm)^{\otimes n}$ and the ${\cal F}$ topologies, where $a^\sharp$ stands for either $a$ or $a^\dagger$.
To determine $U$, it is enough to calculate $\Psi$ since
$U\varphi(f_1)\cdots\varphi(f_n)\Psi_0=\xi(f_1)\cdots\xi(f_n)\Psi$, $f_l\in{\cal D}(M)$,
can be calculated using (\ref{infield.eqa}), once the $n$-particle
components of $\Psi$ are known. Being an element of ${\cal F}$, $\Psi$ can be parameterised as
$$\Psi=\sum_{n=0}^\infty\int\limits_{M^{n}} d_gx_1\cdots d_gx_n\,f_n(x_1,\ldots,x_n) \xi(x_1)\cdots\xi(x_n)\Omega,$$
where the complex functions $f_n$ are symmetric under permutation
of arguments, purely positive frequency, {\it i.e.}, ${\cal S}_-^{\otimes n}f_n=0$, and fulfil a normalization
condition, {\it viz.}, $\|\Psi\|_{\cal F}^2=\sum_{n=0}^\infty\|{\cal S}^{\otimes n}f_n\|^2_{({\cal H}^+)^{\otimes n}}=1$. Furthermore,
the $f_n$ are taken from some function space s.t.
$f_n\mapsto {\cal S}^{\otimes n}f_n\in({\cal H}^+)^{\hat \otimes n}$ is
onto, where $\hat\otimes$ denotes the symmetric tensor product. Given $\ul{W}$ and $\ul{W'}$, computing $U$ is hence equivalent to determining $f_n$, or rather ${\cal S}_+^{\otimes n}f_n$, since the mapping ${\cal S}_+$ is not one-to-one and only the solution part of $f_n$ is "visible" in $\Psi$.
Let $\ul{f}=(f_0,f_1,f_2,\ldots)$, then obviously
\begin{equation}\ul{W}'=\stackrel{\rightarrow}{D}_{\ul{f}^*}\stackrel{\leftarrow}{D}_{\ul{f}}\ul{W} =\sum_{n,j=0}^{\infty}\stackrel{\rightarrow}{D}_{f_n^*}\stackrel{\leftarrow}{D}_ {f_j}\ul{W},\label{operation}\end{equation}
where the convergence of the infinite sums on the right hand side
follows from the assumption that $\Psi$ is in a core for the {\em
closed} creation and annihilation operators.
The operators $\stackrel{\leftrightarrow}{D}_{\ul{f}}=\sum_{n=0}^\infty\stackrel{\leftrightarrow}{D}_{f_n}$
act by inserting $f_n$ into the first/last $n$ arguments of a Wightman function, {\it cf.}, appendix \ref{star.app}. Application of
the relation (\ref{star6.eqa}) derived in that appendix and $\star$-multiplication with
$\ul{W}^{\star-1}=\exp_{\star}(-\ul{W}^T)$ yields
\begin{eqnarray}
\label{Fock3.eqa}
\exp_\star(\underline{W}^{'T}-\ul{W}^T)&=&\sum_{n,j=0}^\infty \int\limits_{M^{n+j}}d_gx_1\cdots d_gx_{n+j}\,\sum_{I\in{\cal P}(\{1,\ldots,n+j\})\atop I=\{I_1,\ldots,I_k\},\, k\geq 1}\star_{\,l=1}^{\,k}\stackrel{\leftrightarrow}{D}_{I_l}\ul{W}^T\times\nonumber\\
&&\qquad\times~ f^*_n(x_1,\ldots,x_n)f_j(x_{n+1},\ldots,x_{n+j}).
\end{eqnarray}
Both $\ul{W}^{'}$ and $\ul{W}$ induce CCR representations. By
\cite[Lemma 5.2]{GT}, this is equivalent to $\text{Im}W_2^{'T}=\frac{1}{2}D$ and $W_n^{'T}(x_1,\ldots,x_n)$ being
symmetric under permutation of arguments -- and hence real by the
Hermiticity of $\ul{W}'$ -- for $n\geq3$. This automatically holds
for the quasifree state $\ul{W}$ since $W_2^{T}=D^+$ and
$W_n^T(x_1,\ldots,x_n)=0$ for $n\geq3$. Hence, the left hand side
of (\ref{Fock3.eqa}) is real and symmetric. We note that
\begin{eqnarray}
\label{Fock4.eqa}
\left.
\begin{array}{rcll}
\stackrel{\leftrightarrow}{D}_{I_l} \ul{W}^T&=&0&\mbox{ for } |I_l|>2\\
\stackrel{\leftrightarrow}{D}_{I_l} \ul{W}^T&=&(D^+(x_{j_1},x_{j_2}),0,\ldots)&\mbox{ for } I_l=\{j_1,j_2\}, j_1<j_2\\
\stackrel{\rightarrow}{D}_{I_l} \ul{W}^T&=&(0,D^+(x_{j},\,\cdot\,),0,\ldots)&\mbox{ for } I_l=\{j\}\\
\stackrel{\leftarrow}{D}_{I_l} \ul{W}^T&=&(0,D^+(\,\cdot\,,x_j),0,\ldots)&\mbox{ for } I_l=\{j\}\\
\end{array}
\right\}.
\end{eqnarray}
As a result, only partitions that consist of sets with one or two elements contribute in (\ref{Fock3.eqa}). Given such a
partition $I=\{I_1,\ldots,I_k\}$, let $S\;\colon\!\!\!\!=\cup_{l:|I_l|=1}I_l$ and let
$\hat I\in{\cal P}'(\{1,\ldots,n+j\}\setminus S)$ be the remainder,
which is a pairing partition. Employing this notation, we can compute
\begin{equation}
\label{Fock6.eqa} \sum_{I\in{\cal P}(\{1,\ldots,n+j\})\atop
I=\{I_1,\ldots,I_k\},\, k\geq
1}\star_{\,l=1}^{\,k}\stackrel{\leftrightarrow}{D}_{I_l}\ul{W}^T=\sum_{\begin{array}{c}
\scriptstyle S\subseteq\{1,\ldots,n+j\}\\\scriptstyle \hat I\in{\cal P'}(\{1,\ldots,n+j\}\setminus S)\\
\scriptstyle\hat I=\{I_1,\ldots,I_{(n+j-|S|)/2}\}\\ \scriptstyle
I_l=\{i_l,j_l\},\,
i_l<j_l\end{array}}\prod_{l=1}^{(n+j-|S|)/2}D^+(x_{i_l},x_{j_l})\,
\star_{j\in S}\stackrel{\leftrightarrow}{D}_{x_j}\ul{W}^T.
\end{equation}
Clearly, $(\star_{j\in
S}\stackrel{\leftrightarrow}{D}_{x_j}\ul{W}^T)_s=0$ if $s\not=|S|$
and for $s=|S|$, $S=\{j_1,\ldots,j_s\}$, $j_1<j_2<\ldots<j_q\leq
n<j_{q+1}<\ldots<j_s$,
\begin{equation}
\label{Fock7.eqa}
\left(\star_{j\in S}\stackrel{\leftrightarrow}{D}_{x_j}\ul{W}^T\right)_s(y_1,\ldots,y_s)=\sum_{\pi\in S_s} \prod_{l=1}^qD^+(x_{j_l},y_{\pi(l)})\prod_{l=q+1}^{s}D^-(x_{j_l},y_{\pi(l)}),
\end{equation}
where $S_n$ denotes the permutations of $\{1,\dots,n\}$. Inserting (\ref{Fock6.eqa}) and (\ref{Fock7.eqa}) into (\ref{Fock3.eqa}) yields
$$\exp_\star\left(\ul{W}^{'T}-\ul{W}^T\right)_s(y_1,\ldots,y_s)=$$
\begin{eqnarray}
&=&\sum\limits_{r\ge s\atop r-s~\text{even}}\sum\limits_{n=0}^r\int\limits_{M^{r}}d_gx_1\cdots d_gx_r \sum\limits_{\begin{array}{c}\scriptstyle \{j_1,\ldots,j_s\}\subseteq \{1,\ldots,r\}\\ \scriptstyle j_1<\cdots<j_q\leq n<\\\scriptstyle~~~ <j_{q+1}<\cdots<j_s\end{array}}\sum\limits_{\begin{array}{c}\scriptstyle \hat I\in{\cal P'}(\{1,\ldots,r\}\setminus S)\\
\scriptstyle\hat I=\{I_1,\ldots,I_{(r-s)/2}\}\\ \scriptstyle I_l=\{i_l,k_l\},\, i_l<k_l\end{array}}\sum\limits_{\pi\in S_s}\quad \times\label{Fock8.eqa}\\&&
\times \quad \prod\limits_{l=1}^{(r-s)/2}D^+(x_{i_l},x_{k_l})\, \prod\limits_{l=1}^qD^+(x_{j_l},y_{\pi(l)})\prod\limits_{l=q+1}^{s}D^-(x_{j_l},y_{\pi(l)})\quad \times\nonumber\\&&
\times \quad f_n^*(x_1,\ldots,x_n)f_{r-n}(x_{n+1},\ldots,x_r).\nonumber
\end{eqnarray}
We note that $\int_M d_gx\,D^{\pm}(x,y) f(x)=0$ if $f$ is
positive/negative frequency, {\it cf.}, (\ref{infield.eqa}). Furthermore, by our assumptions,
$f^*_n$ is purely negative frequency and $f_{r-n}$ purely positive
frequency. One can thus replace all propagators $D^\pm$
in (\ref{Fock8.eqa}) by the real symmetric function $\tilde
D=D^++D^-$ since the integral over the added propagator $D^\mp$
with $f^*_n$ or $f_{r-n}$ always vanishes.
Having done so, we can commute the sums over $n$ and over $S$ on
the right hand side of (\ref{Fock8.eqa}), such that the integral
contains the function
\begin{equation}
\label{Fock9.eqa}
\tilde z_r(x_1,\ldots,x_r)\;\colon\!\!\!\!=\sum_{n=0}^rf_n^*(x_1,\ldots,x_n)f_{r-n}(x_{n+1},\ldots,x_r).
\end{equation}
Next, we would like to symmetrise this expression, {\it viz.}, $$z_r(x_1,\ldots,x_r)\;\colon\!\!\!\!=(r!)^{-1}
\sum_{\pi\in S_r}\tilde
z_r(x_{\pi(1)},\ldots,x_{\pi(r)}),$$ a procedure which also makes $z_r$ a real
function, and to replace $\tilde z_r$ by $z_r$. To see that this is well-defined, let
$1\leq j<r$. We have to show that $\tilde z_r$ is integrated
w.r.t. a function which is symmetric in $x_j$ and $x_{j+1}$. Given
one term in the combinatorial sum, suppose that $j,j+1\in S$. Then,
symmetry follows from summation over $S_s$. Next, suppose
that either $j$ or $j+1$ is a member of a pairing and the other index is in
$S$. Then, there exists another contribution to the combinatorial
sum where $j$ and $j+1$ are exchanged showing symmetry for this
case. Finally, let $j$ and $j+1$ both be members of a pairing. If the
pairings are different, the argument just given applies. If this
is one and the same pairing, then symmetry follows from the
symmetry of $\tilde D$.
Taking into account that the sum over $S_s$ yields
a factor $s!$, the sum over $S$ a factor $\left({r\atop
s}\right)$ and the sum over pairings a factor
$2^{(s-r)/2}(r-s)!/((r-s)/2)!$, one obtains a combinatorial factor
$c_{s,r}$ by multiplication of these contributions. These considerations finally
lead to
\begin{eqnarray}
\label{Fock10.eqa}
\exp_\star\left(\ul{W}^{'T}-\ul{W}^T\right)_s(y_1,\ldots,y_s)&=&\sum_{r=s\atop r-s ~\text{even}}^\infty c_{s,r}\int\limits_{M^{r}}d_gx_1\cdots d_gx_r\,\prod_{l=1}^{(r-s)/2}\tilde D(x_{2l-1},x_{2l}) \;\times\\
&&\qquad\qquad\times\prod_{l=r-s+1}^r\tilde D(x_l,y_{l-r+s}) \,z_r(x_1,\ldots,x_r).\nonumber
\end{eqnarray}
To accomplish our task of computing $U$, we need to solve this system of equations for the solution part of
$z_r$, {\it i.e.}, for ${\cal S}^{\otimes r}z_r=\tilde D^{\otimes r}z_r$. To obtain a better understanding of the structure of
(\ref{Fock10.eqa}), we introduce some additional graphical notation: we denote the $s$-point function of the
functional on the left hand side by a white circle with $s$ legs
and the function $z_r$ by a shaded circle with $r$ amputated legs.
The integrations with the propagators $\tilde D$ then either add
free legs that carry two arrows with opposite direction or a line
of that type that goes back into the shaded circle. ${\cal S}^{\otimes r}z_r$ thus corresponds to a shaded circle with $r$
legs with double arrows of opposite direction. This way, one obtains two
decoupled systems, one for $s$ even and one for $s$ odd, {\it cf.},
figure \ref{triang} for the even system, which makes the upper
triangular structure visible. In the following, we focus on solving
the even system; the odd system can be solved in the same way. In
$\phi^p$-theories with $p$ even, the odd system is identically
zero on the left hand side and hence gives ${\cal S}^{\otimes
r}z_r=0$ for odd $r$.
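Note the structure this entails: the equation for a given $s$ contains only the functions $z_r$ with $r\geq s$ and $r-s$ even, the term $r=s$ appearing with the non-vanishing diagonal coefficient $c_{s,s}=s!$ and external propagators only. This is the upper triangularity that makes the inversion below possible.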
\begin{figure}
\centerline{\scalebox{.7}{\includegraphics{triang.eps}}}
\caption{The triangular system of equations for the functions
${\cal S}^{\otimes r}z_r$}\label{triang}
\end{figure}
We note that the empty circles are solutions of the Klein-Gordon
equation in each of their legs. By the demand of continuity of the
creation/annihilation operators in $\psi$ and $\chi$ when
repeatedly applied to $\Psi$, Riesz' lemma implies that the
empty circle with $s$ legs is in ${\cal H}^{\hat \otimes s}$. Let
$\{h_j\}_{j\in{\mathbb N}}$ be an ONS in ${\cal H}$. Taking the scalar
product with $h_j$ in the first two legs and then summing over $j$
on the right hand side induces an opposite double arrow line that
goes back into the shaded circle, since $(\tilde Df,\tilde
Dh)=\tilde D(f,h)$ for $f,h$ real. On the left hand side, we
denote this contraction operation by an arrow-less line going back
into the white circle.
\begin{figure}
\centerline{\scalebox{.7}{\includegraphics{triang2.eps}}}
\caption{Solution to the triangular system }\label{triang2}
\end{figure}
The unique solution of the even system may hence be written down in
graphical form as in figure \ref{triang2}. The solution exists by
assumption of unitary equivalence; in particular, all infinite sums involved in the inverted system
converge. This follows from $\lim_{n\to\infty}\Pi(n)\Psi=\Psi$ in
${\cal F}$, where $\Pi(n)$ projects on states with at most $n$
particles, combined with the fact that for a state with at most $n$ particles
the system of equations is finite. The constants $d_{s,r}\;\colon\!\!\!\!= (C^{-1})_{s,r}$ are
defined as the entries of the inverse of the upper triangular
matrix $C\;\colon\!\!\!\!=(c_{s,r})_{s,r\in2{\mathbb N}}$.
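For concreteness, since $C$ is upper triangular with diagonal entries $c_{s,s}=s!\neq0$, its inverse can be computed by the standard back-substitution
$$d_{s,s}=\frac{1}{s!}\,,\qquad d_{s,r}=-\frac{1}{s!}\sum_{\substack{u\in2{\mathbb N}\\ s<u\leq r}}c_{s,u}\,d_{u,r}\quad\text{for }s<r\,.$$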
Let us recall that the functions $z_r$ have been a convenient intermediate tool, and that our ultimate aim is to determine the solution part of the (purely positive frequency) functions $f_n$. Hence, it remains to reconstruct ${\cal S}^{\otimes n}f_n={\cal S}_+^{\otimes n}f_n$ from the functions ${\cal S}^{\otimes r}z_r$. To achieve this, let us first suppose that
$z_0=|f_0|^2\not=0$. As the state $\Psi$ is only determined up to
a phase, one may assume $f_0>0$. Then, by (\ref{Fock9.eqa}),
$$
{\cal S}_+^{\otimes r}z_r(x_1,\ldots,x_r)=f_0{\cal S}_+^{\otimes r}f_r(x_1,\ldots,x_r)\,,~~\mbox{ for } r\in{\mathbb N}.
$$
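Since $z_0=|f_0|^2$ and $f_0>0$, this determines $f_0=\sqrt{z_0}$, and hence the solution part of every $f_r$ follows directly as ${\cal S}_+^{\otimes r}f_r=z_0^{-1/2}\,{\cal S}_+^{\otimes r}z_r$.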
If, on the other hand, $z_0=0$, let $r_0$ be maximal such that ${\cal S}^{\otimes r}z_r=0$
for all $r<r_0$, {\it i.e.}, $r_0$ is the smallest index with ${\cal S}^{\otimes r_0}z_{r_0}\neq0$; then $r_0$ must be even. It follows that ${\cal S}^{\otimes n}f_n=0$
for $n<r_0/2$. Hence, there exist $y_1,\ldots,y_{r_0/2}\in M$ such that
${\cal S}_-^{\otimes\frac{r_0}{2}}\otimes{\cal S}_+^{\otimes
\frac{r_0}{2}}z_{r_0}(y_1,
\ldots,y_{r_0/2},y_1, \ldots,y_{r_0/2})=|{\cal S}^{\otimes
\frac{r_0}{2}} f_{r_0/2}(y_1,\ldots,y_{r_0/2})|^2>0$. We may fix
the phase such that ${\cal S}^{\otimes
\frac{r_0}{2}}f_{r_0/2}(y_1,\ldots, y_{r_0/2})>0$ and we obtain
the solution part of $f_n$, $n\geq r_0/2$, via
$$
{\cal S}_-^{\otimes\frac{r_0}{2}}\otimes{\cal S}_+^{\otimes n}z_{\frac{r_0}{2}+n}(y_1,\ldots,y_{r_0/2},x_1,\ldots,x_n)
={\cal S}_+^{\otimes \frac{r_0}{2}}
f_{r_0/2}(y_1,\ldots,y_{r_0/2}){\cal S}_+^{\otimes
n}f_n(x_1,\ldots,x_n),$$
which completes the desired computation of $U$.
To obtain a complete description of the scattering process on non-stationary spacetimes in terms of particles, we have to assume that the spacetime under consideration is asymptotically flat\footnote{Here, asymptotically flat is meant in a rather loose sense, {\it i.e.}, we assume that both in the remote future and in the remote past of $(M,g)$, there is an open, non-empty, globally hyperbolic subset of $(M,g)$ which contains a Cauchy surface of $(M,g)$ and is isometric to such a subset of Minkowski spacetime. In this setting, it is straightforward to define preferred states as the pull-backs of the ones in Minkowski space. However, even within the stricter definition of asymptotically flat spacetimes, one can obtain preferred states, as devised in \cite{Dappiaggi}.} in the remote future and past, such that unique preferred quasifree states are available both for the incoming and the outgoing field. Then, there are associated Fock spaces, say ${\cal F}_\text{in}={\cal F}$ and ${\cal F}_\text{out}$, and a combination of the scattering theory described in the previous sections and the results obtained in this section gives the $n$-particle amplitudes $f_n$ of the scattered incoming quasifree state in the particle picture of the remote future. If one wants to determine particle production from an incoming multi-particle state, one can apply suitably smeared incoming fields $\phi^\text{in}(x)$ to the incoming vacuum, then calculate the outgoing representation of the CCR, and then conclude as above.
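To sketch this last step in a formula (with the test functions $h_j$ and the incoming vacuum $\Omega_\text{in}$ being our notation, not fixed above): an incoming multi-particle configuration may be generated as
$$\Psi_\text{in}=\phi^\text{in}(h_1)\cdots\phi^\text{in}(h_k)\,\Omega_\text{in}\,,$$
whose outgoing $n$-particle amplitudes are then computed by expressing this state in the outgoing representation of the CCR and proceeding as above.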
\section{Conclusions and outlook}
\label{conc}
In this work, we have seen that non-quasifree states for free fields appear naturally in scattering theory on non-stationary curved spacetimes. This result is well in line with recent works \cite{HR, Sanders} which show that a certain class of non-quasifree states, namely, the ones for which the truncated 2-point function is a distribution with the singularity structure of the Minkowski vacuum state and the other truncated $n$-point functions are smooth, is the natural class of states in perturbative quantum field theory on curved spacetimes. In the light of this, it seems somewhat unnatural and unnecessary to restrict oneself to quasifree states, although some important technical results are only available for quasifree states, see, {\it e.g.}, \cite{Lueders, Ver}.
Therefore, and also because there are situations where one is interested in the particle interpretation of non-quasifree states, we have developed a method to calculate, provided it exists, a unitary transformation relating a non-quasifree state to a quasifree one. Heuristically, the form of the result could have been anticipated: as we assume unitary equivalence, the GNS-vacuum associated to the non-quasifree state corresponds to a state in the Fock space related to the quasifree state under consideration, and the task is to compute the $n$-particle components $f_n$ of this state. Since we assume both states to fulfil the same commutation relations, they only differ in the real and symmetric part of their 2-point function and the higher order truncated $n$-point functions, which are real and symmetric in the non-quasifree case and vanishing in the quasifree case. It is thus not surprising that our result \eqref{Fock10.eqa} relates the truncated $n$-point functions of the non-quasifree state to an expression in the symmetric part of the 2-point function of the quasifree state smeared with a real and symmetrised version of the $f_n$. The non-trivial part of our result is, however, the combinatorics involved, which we have managed to tame by encoding it conveniently into $\star$-calculus on the dual of the Borchers-Uhlmann algebra.
The method of computing a unitary transformation relating the GNS-representations of non-quasifree and quasifree states developed in this work is well-suited for general treatments of the topic, but not for explicit numerical calculations. A different method to compute such a transformation has been developed and applied in \cite{Hack}; it is based on \cite{Glauber}, works for finite-dimensional systems, {\it i.e.}, ``mode-by-mode'', and is therefore better suited for numerical computations.
\begin{acknowledgments}For the first named author it is
a pleasure to thank Horst Thaler for the ongoing and very fruitful
exchange of ideas. T.H. would like to express his gratitude towards Nicola Pinamonti for the interesting and fruitful discussions regarding graph combinatorics. We also thank Klaus Fredenhagen for a very instructive discussion on asymptotic conditions.
\end{acknowledgments}